
Back doors are bad for security architecture

Instead of inventing encryption that only government can break, we should just breed a special unicorn that magically blocks terrorist acts.
Ryan Paul

Back doors into security systems weaken security. For everyone. This remains true despite wishful thinking on the part of those who advocate back doors. The claim that back doors could be added to systems for law enforcement purposes without compromising the security of those systems was heatedly debated in the 1990s.

I had hoped that we had driven a stake through its heart back then, but it has been revived in the wake of Apple’s announcement last autumn that they have no method to unlock iOS devices without the user’s consent, and so don’t have anything that can be given to law enforcement agencies. The current version of Apple’s statement reads:

On devices running iOS 8.0 and later versions, your personal data such as photos, messages (including attachments), email, contacts, call history, iTunes content, notes, and reminders is placed under the protection of your passcode. For all devices running iOS 8.0 and later versions, Apple will not perform iOS data extractions in response to government search warrants because the files to be extracted are protected by an encryption key that is tied to the user’s passcode, which Apple does not possess.

Ever since then there has been official and unofficial hand-wringing about the threat that this poses to public safety and national security. This is often accompanied by “suggestions” of building systems that don’t compromise the security of a system, give (the right) governments the access they want, and are called something other than “back doors”.

But in addition to whatever risks government access poses, there is a subtle but crucial point that is often overlooked: The kinds of security architectures in which it is easy to insert a back door are typically less secure than the security architectures in which it is hard to insert a back door. I will come back to that in more detail below, but first let me review a few events and concepts.

Wishful thinking

Over the past half a year, we’ve been told that through some technological wizardry there must be a way to provide governments with what they want without compromising user security. Each time suggestions of that sort come up they are met with ridicule from cryptographers and information security specialists.

An early example is from a Washington Post editorial in October 2014:

A police “back door” for all smartphones is undesirable — a back door can and will be exploited by bad guys, too. However, with all their wizardry, perhaps Apple and Google could invent a kind of secure golden key they would retain and use only when a court has approved a search warrant.

The phrase “secure golden key” has become a running joke among security specialists since then.

More recently (in January of this year), British Prime Minister David Cameron called for government-readable encryption. Prime Minister Cameron declared that there should be “no means of communication” that his government “cannot read.” Yet he also stated that this would not involve a “back door.”

Without a very specific proposal in hand, it is hard to analyze the suggestions in detail: all we can do is poke fun at what we imagine they might mean. At least we now have a slightly more specific idea of what it might mean in the US from Michael S. Rogers, the head of the National Security Agency (NSA). He appears to be advocating key escrow with threshold secret sharing for the escrowed key. As described in the Washington Post on April 10:

Why not, suggested [Rogers], require technology companies to create a digital key that could open any smartphone or other locked device to obtain text messages or photos, but divide the key into pieces so that no one person or agency alone could decide to use it?

I would love to talk about how keys can be divided into pieces so that no one person can decide to use it, but I will save that for another article. (It’s really cool, and the essential mathematical concept is not actually that hard to grasp.) But that slightly more specific proposal still doesn’t address the fact that key escrow can’t really be built into securely designed systems. This should become clearer below.

Each of those proposals, in its own way, fails to recognize that entirely separate from the privacy concerns, inserting some government access mechanism into cryptographic systems requires a weakening of those systems.

What’s a back door?

A back door is simply a second way of gaining access to some resource. Imagine a bank vault with a very visible and secure vault door. Now imagine that there is a hidden back door into the vault that few people are aware of. Typically a back door is created deliberately and its existence is kept secret. It isn’t too far from the truth to consider a back door a deliberate security vulnerability.

I am using the term “back door” broadly here because from the user’s point of view, and from the point of view of implications on security architecture, the narrower definition isn’t useful. Under a narrow definition, a back door can only be added to systems that have (front) doors. Tools like 1Password and Knox for Mac don’t have any doors to begin with, as they operate solely through encryption and not authentication.

Not everything that looks like a back door is secret or malicious. For example, when my bank needs to deposit or withdraw funds from my account, it doesn’t go in through the same door that I do. The bank has legitimate access through their own doors. Indeed, one of the major reasons I use a bank is so that it can perform such transactions on my behalf. So in this case the apparent back door is essential to the purpose of the system in the first place. I will not be including such things in my discussion of “back doors.” Those are just other front doors.

Indeed, my usage is similar to what appears in Matt Blaze’s prepared testimony (PDF) before Congress for April 29, 2015.

These law enforcement access features have been variously referred to as “lawful access”, “back doors”, “front doors”, and “golden keys”, among other things. While it may be possible to draw distinctions between them, it is sufficient for the purposes of the analysis in this testimony that all these proposals share the essential property of incorporating a special access feature of some kind that is intended solely to facilitate law enforcement interception under certain circumstances.

Key escrow

It appears that Admiral Rogers is advocating a key escrow system. Under my broad definition of back door, this is one mechanism. The notion is that a copy of a cryptographic key is deposited with a safe pair of hands (an escrow service) who store that copy securely and will only release it under certain circumstances.


Additionally, he is suggesting that it not be a single entity or agency that holds the key, but that the key be “split” in such a way that multiple parties must work together to retrieve or reconstruct it. Typically this is done through an algorithm called Shamir secret sharing, which allows one to do things like give a separate secret to each of five different people such that any three of them can recover the master secret (“three of five”). I really, really want to write about how Shamir secret sharing works, but I must leave that for another day.
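In the meantime, for the impatient, here is a toy sketch of the “three of five” idea. It is purely illustrative (a real implementation would need constant-time arithmetic, authenticated shares, and far more careful randomness), and it is certainly not anyone’s production escrow system:

```python
# Toy Shamir "three of five" secret sharing over a prime field.
import random

PRIME = 2**127 - 1  # a prime comfortably larger than our toy secret

def make_shares(secret, threshold=3, num_shares=5):
    # A random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(123456789)
assert recover(shares[:3]) == 123456789                # any three shares will do
assert recover(random.sample(shares, 3)) == 123456789  # ...any three at all
```

Any two shares, on the other hand, reveal nothing at all about the secret, which is exactly the property that makes splitting an escrowed key attractive.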

Although this kind of key splitting for the escrowed key is a good thing to help protect it from theft or abuse, it does nothing to address the implications for the security design of any application that must comply with it. So let me repeat again that these sorts of proposals have implications for the security design of systems that comply.

Vital Technicalities

There are a number of technical facts that policy makers should understand:

  1. Software and hardware cannot distinguish between good guys and bad guys.
  2. Back doors pose a direct risk to all users.
  3. Designs that enable back doors (whether or not a back door is present) are weaker than systems which preclude back doors.
  4. There is no useful and coherent way to distinguish between cryptographic tools for communication and those not for communication.

I am mostly going to talk about number 3 on that list. This is my point that security designs that make it hard to insert a back door are more secure than designs in which it is easy. But let me briefly address the other ones.

Good guys and bad guys

One of the interesting phrases in the Washington Post editorial back in October was the notion that the golden key could only be used when a court has produced a warrant. This isn’t actually as ridiculous as it first seems if we consider that the relevant court might hold part of a split key. But a cryptographic system only knows whether it has been given keys that work or not; it cannot decide whether the person who is using that key is using it properly or came upon it through legitimate means.

1Password, for example, only knows if you have provided the correct Master Password. It doesn’t know if you are a good guy or a bad guy. It doesn’t know if you obtained the Master Password through torture. It doesn’t know if you are a photogenic hero who needs to decrypt the data to save the world from destruction by Dr No. These are simply not the kinds of things that software can know. As clever as we may be, we cannot build software that will “let the good guy in.” Instead we build systems that let the holder of the correct Master Password in and nobody else.

Inherent risks

The most obvious risk of a back door is that the keys to the back door will be captured by “the wrong people.” The holders of the key to the back door need to protect it well, not only from outsiders but also from misuse by their own people. This is an enormous topic that I will largely skip since it is widely discussed elsewhere. But I will point out that in the US, the court oversight of secret programs has not lived up to what lawmakers wished, and that if one government is allowed a back door, many other governments will insist on similar access.

Systems for Communication

As mentioned above, Prime Minister Cameron expressed interest in “communication” and, so, perhaps, is envisioning rules that would apply only to systems that are used for communication. Perhaps text messaging systems would be subject to his rules that they must be readable by the British government, but Full Disk Encryption (FDE) systems like BitLocker or FileVault would not be. The difficulty with taking such an approach is that even FDE systems could be used for secret communication. Patty may encrypt a disk and send the physical disk to Molly. Sure, Patty and Molly may have preferred to use tools better suited for communication, but if no such secure tools are available, they will make do with others.

Indeed this reflects the fact that cryptographers don’t typically distinguish between the case where Alice encrypts a message for Bob and the case where Alice encrypts a message for herself to decrypt at some later time. Communicating securely with a separate person is a lot like communicating securely with yourself in the future, and so tools that help with the latter can be co-opted to do the former.

Doors and architectures

I would now like to return to the central point I am trying to make. The kinds of security architectures in which it is easy to insert a back door are typically less secure than the security architectures in which it is hard to insert a back door.

This is a fundamental part of security engineering. By using strong encryption with keys that only the end user has access to, a huge number of potential attacks are suddenly off the table. As Matthew Green, a cryptographer at Johns Hopkins University, wrote in an article on Slate discussing the reaction to Apple’s statement:

Apple is not designing systems to prevent law enforcement from executing legitimate warrants. It’s building systems that prevent everyone who might want your data – including hackers, malicious insiders, and even hostile foreign governments — from accessing your phone. This is absolutely in the public interest. Moreover, in the process of doing so, Apple is setting a precedent that users, and not companies, should hold the keys to their own devices.

Apple isn’t designing iOS security with the aim of thumbing their noses at law enforcement. They are following good design principles that protect your data. Likewise, when we design our products so that only you can decrypt your data, we are doing so to protect you from those who would read your data without your consent. As described above, no software can determine the intent of the people using it.

Doors must lead somewhere

A back door can pretty much only be placed into a system at a point where that system has a secret such as an encryption key in memory. Otherwise it is a door to nowhere. The parts of a system that require the most protection are the ones that handle the secrets. A principle of security design is to make those portions of the system as small as possible.

Let’s consider software bugs. Continuing with our metaphor of doors, we can imagine a software bug not so much as another door but as a weakness that allows an attacker to break a hole in a wall. The attacker manages to go around the doors to get to the secrets.

The fewer places that secrets are held, the fewer the number of places where a dangerous vulnerability can occur. If the rooms with the secrets are small, there is less wall area to attack. So good security design means reducing the number of places and times where secrets are held. Great security design places all of those secret-holding components under the user’s control. Naturally, we strive for great design in our own products.

The technical jargon for this is “attack surface.” Good security design seeks to limit the attack surface, and therefore inherently limits the ways in which a back door could be inserted into a system. By building systems that preclude back doors in most places, we are also preventing a large class of accidental vulnerabilities.

Secrets under your control

One of the most important ways to achieve good security design is to make sure that your decrypted secrets never leave the system without your consent. In the case of 1Password, you may export your data, you may copy a password out of an item, you may use the 1Password extension to fill Login credentials into a web browser. But each of those is an action that you choose to take.

This is a slightly more general notion of what is meant by “end-to-end” encryption. Your encryption keys (the secrets that are derived from your Master Password) never leave your computers or devices and are only used when you want them to be used. Your encryption keys are created on your own devices and never leave your device unencrypted.

That sort of end-to-end encryption is essential to your security. It means that the only attacks that could ever be launched off of your system would involve guessing your Master Password. As a consequence, a back door could only be placed in the software running on a device under your control. By using end-to-end encryption we have dramatically narrowed down the attack surface. A side effect of this is that we also limit the places into which a back door could be inserted.

Where it would have to go

It appears that Admiral Rogers is advocating a key escrow system. Cryptographic tools would use strong encryption and would use strong keys, but the government would have a copy of the keys. His proposal of requiring multiple entities to unlock the escrowed key does make it harder to steal those keys from the government, but it does not stop this from being a key escrow system.

Even if we were fully confident that those keys would be stored safely and would only be used appropriately, the question of security architecture remains. Let’s look at 1Password for an example:

When you create a new vault (or even a new item) in 1Password, 1Password running on your machine will generate random cryptographic keys. We at AgileBits never have the opportunity to see those keys. Nor does anyone else. This is an example of what I meant when I said above that great security design places all of the secret holding components under the user’s control. The creation and handling of those keys happens only on your machine.

Under 1Password’s design, the only way to comply with key escrow would be to send a copy of the key to some government controlled entity when the key is created or after you have entered your Master Password (when these keys are decrypted on your machine). Roughly speaking, 1Password would have to send your Master Password (or keys derived from it) to some government entity. But because these only exist on your system (and not ours) it would have to be your system that is sending the information.

You can control what is transmitted from your computer. Sure, it may take technical skill to do so, but this is something that neither we nor a government can prevent you from doing. Indeed, in the unlikely event that we are ever required to produce a version of 1Password or Knox that would transmit your data to another system, we would display a huge notice to you about what is happening.

There might be more reliable ways in which we could (be forced to) comply with a key escrow scheme, but each of them involves weakening the overall security architecture of 1Password. It would mean that our software would only work if someone other than you had access to your keys. That is not how we build things.

This example should illustrate that the strongest security architectures cannot reliably participate in key escrow. This means that it is often a mistake to frame the discussion as a “clash between privacy and security.” We weaken many kinds of security when we weaken privacy protections.

Law enforcement is right to want a back door

The October Washington Post article that I keep referencing is absolutely correct when it says:

Law enforcement officials deserve to be heard in their recent warnings about the impact of next-generation encryption technology on smartphones, such as Apple’s new iPhone.

Those voices do need to be heard. So let’s start with them.

Law enforcement officials rightly want to be able to actually get at data that they have the legal right to acquire.

Suppose that Molly, one of my dogs, is suspected of kidnapping, torturing, and even eating rabbits. (Molly, I’m sorry if some of my social media posts have implicated you in an FBI investigation, but your behavior was suspicious.) Also suppose that the FBI has good reason to suspect that Molly may even be taking pictures of her victims. The FBI should have little difficulty obtaining a warrant to confiscate and search Molly’s iPhone. If Molly has set a decent passcode for the device and has not leaked those photos off of her phone, then the FBI will have no means whatsoever (other than compelling Molly to reveal her passcode, which is a whole different set of very confused legal issues in the US) to get the evidence they need to lock Molly up in a crate. More bunnies will suffer and die as a consequence of the security design of iOS and the iPhone.

This isn’t as funny when we switch our example away from Molly and rabbits to the sorts of things that the FBI does investigate. Giving people access to encryption that law enforcement can’t break will mean that some investigations are harder, some never get solved, and some prosecutions will fail. There will be times when some very bad dogs get away with their crimes because of this.

It is no surprise that those given the task of fighting crime do not want to encounter encryption that they can’t break. Indeed, if they didn’t seek back doors into such systems they might not be doing their jobs. But this isn’t a question for law enforcement to decide on their own. It is a question for the public and for policy makers.

You can’t always get what you want

Just because something would be useful for law enforcement doesn’t mean that they should have it. There is no doubt that law enforcement would be able to catch more criminals if they weren’t bound by various rules. If they could search any place or anybody any time they wished, they would clearly be able to solve and prevent more crimes. That is just one of many examples where we deny law enforcement tools that would obviously be useful to them.

Quite simply, non-tyrannical societies don’t give law enforcement every power that it would find useful. Instead we make choices based on a whole complex array of factors. Obviously the value of some power is one factor that plays a role in such a decision, and so it is important to hear from law enforcement about what they would find useful. But that isn’t where the conversation ends; it is where it begins.

Whenever that conversation does take place, it is essential that all the participants understand the nature of the technology: There are some things that we simply can’t do without deeply undermining the security of the systems that we all rely on to keep us safe.


How 1Password syncs changes to your Master Password

At the risk of being blackballed by the Alliance of Magicians (from even the smallest venues), we want to reveal the secret to Master Password syncing. It’s all just an illusion — a clever one — but since we don’t actually store your Master Password, we don’t sync it for you either. You do, but we’ve made it look like we do.

There are a lot of seemingly mysterious things that go on when a Master Password changes, so it is quite reasonable to have questions about security in this area. A cornerstone of Master Password security, though, is that 1Password never stores your Master Password in any form. (A not very relevant exception is for use with Touch ID.)

Suppose you change your Master Password on one of your computers. The next time you unlock 1Password on some other device, you can unlock it with your new Master Password. How can 1Password on the second machine accept the new Master Password if we are careful to never store it? This has led a lot of astute users to mistakenly imagine that their data isn’t really protected by their Master Password. But, let me assure you that it is, and all the tricks up our sleeves make things both more secure and more convenient for you.

The Basics

At its most basic, a 1Password vault (be it a local vault in 1Password for Mac/iOS, or a sync vault in the form of an Agile Keychain or an iCloud vault) contains a couple of things:

  • A universally unique identifier (UUID)
  • An AES key used to encrypt and decrypt items

Every vault has a different UUID, and a different AES key. These are both randomly generated when the vault is created. This means that your local Mac vault does not share a UUID with the Agile Keychain it’s syncing with, and any other Mac or iOS device that’s also syncing with that Agile Keychain will have its own UUID/AES key combination. In the simplest scenario of a Mac syncing with an iOS device via Agile Keychain, that’s 3 vaults, 3 AES keys.

Obviously, it’s a bad idea to store the keys unencrypted. It would be like leaving your house key on a hook next to the door, outside. We need a key to encrypt your AES key with. For this, we use the Master Password you provided when you created the vault to derive another key. The key that is derived from your Master Password is then used to encrypt your vault’s AES key. We never store the derived key anywhere.
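To make that derive-then-wrap pattern concrete, here is a minimal sketch in Python. The KDF parameters, cipher mode, salt handling, and (of course) the passwords are illustrative stand-ins, not 1Password’s actual file format, and it assumes the third-party cryptography package:

```python
# A toy version of "derive a key from the Master Password, then use it
# to encrypt the vault's randomly generated AES key".
import hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(master_password, salt, iterations=40_000):
    # Slow hash of the Master Password; the derived key is never stored.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, iterations)

# Creating a vault: a random AES key will encrypt and decrypt the items...
vault_key = AESGCM.generate_key(bit_length=256)
salt, nonce = os.urandom(16), os.urandom(12)

# ...but only an encrypted copy of that key is ever written to disk,
# wrapped with the key derived from the Master Password.
derived = derive_key("Molly's original Master Password", salt)
wrapped_vault_key = AESGCM(derived).encrypt(nonce, vault_key, None)
# Stored: salt, nonce, wrapped_vault_key.
# Never stored: the Master Password and the derived key.
```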

Let’s look at a very basic example. You have an unsynced vault on your desktop. To unlock 1Password, you provide us with a Master Password (which may or may not be correct). We use this Master Password and go through the same key derivation process that we used originally. If it’s the same Master Password, the result will be the same key.

A key will be derived even if an incorrect Master Password has been entered. Determining whether it’s the right key is a matter of attempting to decrypt the AES key. A successful decryption indicates that the correct Master Password for this vault has been entered, thus the vault will unlock. If the decryption fails, it’s the incorrect Master Password.
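Continuing the toy sketch above, the unlock attempt is nothing more than a trial decryption:

```python
# Unlocking: derive a key from whatever password was typed and try to
# decrypt the wrapped vault key. (Continues the sketch above.)
from cryptography.exceptions import InvalidTag

def try_unlock(candidate_password):
    key = derive_key(candidate_password, salt)
    try:
        # Success: the candidate was the correct Master Password.
        return AESGCM(key).decrypt(nonce, wrapped_vault_key, None)
    except InvalidTag:
        # Failure: wrong derived key, therefore wrong Master Password.
        return None

assert try_unlock("Molly's original Master Password") == vault_key
assert try_unlock("rabbit") is None
```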

Master Password Changes

Now that we know how your vault is unlocked with your Master Password, let’s discuss what happens when you change your Master Password (leaving sync out of the equation for now). When you change your Master Password locally, the vault’s UUID and AES key do not change at all. What changes is the key that is used to encrypt your AES key (the one that’s derived from your Master Password). Effectively all that changes is the encrypted form of your vault’s AES key. It’s encrypted with the new key, derived from the new Master Password.

This means that trying to use your old Master Password will generate the old derived key, which will not be able to decrypt your AES key that’s now encrypted with the new derived key. Simple enough.
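In the toy sketch, a Master Password change is just a re-wrap of the very same vault key:

```python
# Changing the Master Password: the vault key (and UUID) stay the same;
# only the wrapping changes. (Continues the sketch above.)
new_salt, new_nonce = os.urandom(16), os.urandom(12)
new_derived = derive_key("Molly's new Master Password", new_salt)

wrapped_vault_key = AESGCM(new_derived).encrypt(new_nonce, vault_key, None)
salt, nonce = new_salt, new_nonce   # replace what is stored on disk

# The old Master Password now derives a key that decrypts nothing.
assert try_unlock("Molly's original Master Password") is None
assert try_unlock("Molly's new Master Password") == vault_key
```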

Master Passwords and Syncing

It is important to reiterate that your Master Password is never stored anywhere. This is the case for your local vault as well as the Agile Keychain you’re syncing to, so we’re essentially responsible for syncing something we don’t have. This is important as it allows 1Password to continue syncing even after you’ve changed a Master Password on one device.

During initial sync setup, we ask you for the Master Password of the sync vault (typically an Agile Keychain). We use this password to unlock the sync vault, but once it’s unlocked we throw away the password you gave us. Instead of the password, we store the sync vault’s UUID and its AES key, and we encrypt those with our local AES key. We know that a vault’s UUID and AES key will never change for the lifetime of the vault. This means that as long as we know that we have the same vault (the UUIDs match), we’ll be able to use the AES key we have to decrypt its data (even if the encrypted form of that AES key was changed to be encrypted with a new key derived from a new Master Password).

This allows us to sync multiple devices, each having their own Master Password. They don’t actually need to have the same password, and as one device changes its Master Password, even though the sync vault’s Master Password will also get updated, the other devices don’t necessarily need to care.

Please note, while this suggests that you could potentially use a different Master Password on each of your devices, we do not recommend that you attempt it. The mechanism described above is merely necessary to ensure that Master Password changes can be synced.

Syncing Master Password Changes

When you change the Master Password of a vault on one device, the same process that happens locally for a Master Password change happens to the sync vault. We re-encrypt its AES key with the new derived key. This makes the sync vault’s Master Password the same as the new local Master Password. To read data within this vault you need either the new Master Password or a copy of its AES key.

Now that you’ve changed the Master Password on one device, and it has changed the Master Password of the sync vault, what you want is for all other devices associated with this sync vault to also have the new Master Password. Unfortunately, we’ve done all of the pushing we can. Despite the fact that we can detect that the Master Password in the sync vault has changed, we don’t know what it has been changed to because we don’t store your Master Password anywhere. We have no way of knowing what key to use when re-encrypting the AES keys on your other devices. So, from here on, we need your help!

The other devices still have the old Master Password. That old Master Password will still work to unlock your vault on these other devices because the AES key is still stored, encrypted with the derived key from your old Master Password. These vaults can continue to sync with other devices because they happen to have a copy of the sync vault’s AES key. They don’t need to care about the sync vault’s Master Password.

Updating the Master Password

At the start of this post, I said that the process of unlocking a vault is a matter of deriving a key based on the provided password, and attempting to decrypt the vault’s AES key with it. If the decryption of the vault’s AES key is successful, we’ve unlocked the vault. If the decryption is unsuccessful, then we’ve failed to unlock the vault. It turns out that this isn’t quite true, at least not in the case of synced vaults.

If the decryption of the vault’s AES key is successful, we’ve unlocked the vault. That part is true. But if the decryption is unsuccessful, all it means is that we’ve failed to unlock the local vault with the provided key. When a failure is detected, we look to see if we’re syncing with any vaults. Then we try the Master Password against the sync vault as well. If the Master Password that you have entered is the new Master Password of the sync vault, unlocking the sync vault will be successful. This tells us that the password you entered is the correct Master Password for the sync vault, and from that we can infer that this is the new Master Password that you would like to use on this local device.

This is all well and good, but we’ve really only managed to unlock the sync vault, not your own local vault. Remember how we stored the sync vault’s AES key, encrypted with your local AES key so that we could keep syncing even after the sync vault’s Master Password changed? It turns out doing the reverse here is possible. When we set up sync, not only did we store the sync vault’s AES key encrypted with the local AES key, but we also stored the local AES key encrypted with the sync vault’s AES key. This means that as long as we can unlock one of the two vaults, we can unlock both. We can use the unlocked sync vault to decrypt our own local AES key to unlock your own vault.
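Here is that cross-wrapping in toy form. As before, the details are illustrative rather than 1Password’s actual data format:

```python
# During initial sync setup, each vault's AES key is stored wrapped
# under the *other* vault's AES key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

local_key = AESGCM.generate_key(bit_length=256)   # local vault's AES key
sync_key = AESGCM.generate_key(bit_length=256)    # sync vault's AES key
n1, n2 = os.urandom(12), os.urandom(12)

sync_key_for_local = AESGCM(local_key).encrypt(n1, sync_key, None)
local_key_for_sync = AESGCM(sync_key).encrypt(n2, local_key, None)

# Later, the new Master Password unlocks only the sync vault. But once
# sync_key is recovered, the local vault's key falls out of the cross-wrap:
recovered_local = AESGCM(sync_key).decrypt(n2, local_key_for_sync, None)
assert recovered_local == local_key
# ...and the local vault can now be re-wrapped under the new derived key.
```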

So we’ve used the new Master Password to unlock the sync vault, then used the sync vault to unlock your local vault. All that’s left to do now is re-encrypt your local AES key with a key derived from the new Master Password. And voilà, the next time you unlock 1Password with that new Master Password the decryption will succeed.

Recap

That might sound like a lot of information. Let’s go over that again, in simple, flow-chart form:

  • Device1 and Device2 sync via Agile Keychain, all share MasterPassword1
  • Device1 updates to MasterPassword2 by re-encrypting Device1’s AES key with key derived from MasterPassword2
  • Device1 updates Agile Keychain by re-encrypting Agile Keychain’s AES key with key derived from MasterPassword2
  • Agile Keychain files sync to all devices
  • Device2 attempts unlock with MasterPassword2
  • Device2 rejects unlock with MasterPassword2
  • Device2 attempts to unlock Agile Keychain with MasterPassword2
  • Device2 successfully unlocks Agile Keychain with MasterPassword2
  • Device2 uses Agile Keychain’s AES key to decrypt a copy of Device2’s AES key that was encrypted using Agile Keychain’s AES key earlier, when sync was originally set up
  • Device2 uses decrypted Device2 AES key to unlock its local vault
  • Device2 re-encrypts its AES key with key derived from MasterPassword2
  • Device2 unlocks
  • Device2 locks
  • Device2 attempts unlock with MasterPassword2
  • Device2 unlocks

Master Password sync infographic

So, there you have it. The secret of our illusion: the Master Password update has propagated to another device without the password itself being stored anywhere.


Bcrypt is great, but is password cracking “infeasible”?

There are a lot of technical terms that mean something very specific to cryptographers but often mean something else to everyone else, including security professionals. Years ago I wrote about what it means to say that a cipher is “broken”. Today’s word is “infeasible”.

The news that sparked this lesson is the use of “computationally infeasible” in an announcement by Slack. Slack has announced that their hashed password database had been compromised, and their message was excellent: They clearly described what was available to attackers (usernames, email addresses, hashed passwords, and possibly phone numbers and contact information users may have added); they offered clear and useful instructions on what users should do (change passwords, enable two-step verification); and they described what they have done and what they will be doing. And – most relevant for the technical discussion here – they have told us how the passwords were hashed.

In this case they said:

Slack’s hashing function is bcrypt with a randomly generated salt per-password which makes it computationally infeasible that your password could be recreated from the hashed form.

It is terrific that they chose to use bcrypt for password hashing. bcrypt is among the three password hashing schemes that we recommend for sites and services that must store hashed passwords. The other two are PBKDF2 and scrypt. But Slack’s use of the term “computationally infeasible” here illustrates that one must be very careful when using cryptographic technical terms.

If you have a weak or reused password for Slack, change it immediately. Here is a guide to using 1Password for changing a password. And because the Slack app on iOS makes use of the 1Password App Extension, it is easy to use a strong and unique password for Slack.


If you would like to see how to use Slack’s two-step verification with 1Password take a look at our handy guide on doing just that.


But now back to what is feasible with password hashing.

One way hashing

When services that you log into store your password they should never store those as unencrypted “plaintext”. If they are stored as plaintext it means that anyone who can get their hands on that data file can learn everyone’s passwords. For example, Molly (one of my dogs) uses the same password on Poop Deck as she does on Barkbook. So if Patty (my other dog) learns Molly’s Poop Deck password, she can use it to break into Molly’s Barkbook account as well. This is why it is important not to reuse passwords.

Now suppose that Molly uses the password “rabbit” on Barkbook. (Have I mentioned that Molly is not the smartest dog in the pack?) Barkbook shouldn’t store just “rabbit”, but instead should store a one way hash of rabbit. A cryptographic hash function will transform something like “rabbit” into something like “bQ67vc4yR024FB0j0sAb2WKNbl8=” (base64 encoded).

One of the features of a cryptographic hash function is that it should be quick and easy to compute the hash from the original, but that it should be infeasible to perform the computation in the other direction. That is, it should be pretty much impossible to go from “bQ67vc4yR024FB0j0sAb2WKNbl8=” back to “rabbit”. And it is.
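That 28-character string is the right length for a base64-encoded 20-byte digest, so for illustration let’s assume a SHA-1 hash. The forward direction takes a couple of lines:

```python
# The easy direction: hash "rabbit" and base64-encode the digest.
# (SHA-1 is assumed here purely for illustration.)
import base64, hashlib

digest = hashlib.sha1(b"rabbit").digest()
print(base64.b64encode(digest).decode())  # a short, fixed-length fingerprint
```

There is no corresponding recipe for the reverse direction, and as far as anyone knows, no feasible computation at all.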

Guessing versus reversing

With any halfway decent cryptographic hash function it is infeasible to compute the original from its hash if the original is chosen at random! But if you can make some reasonable guesses about the original then you can use the hash to check your guesses. Because passwords created by humans are not chosen at random, it does become computationally feasible (and often quite practical) to discover the original based on the hash.

The actual formal definition of “one-way” for a cryptographic hash function, H(x), includes the requirement that x be the output of a uniformly distributed sampling of the domain of H. That is, considering all of the things that you can hash (under some set length), you need to pick something at random.  Otherwise a hash function might be invertible. Human created passwords do not meet that requirement and so the “computational infeasibility” of inverting a one way function isn’t applicable when its input is not chosen at random.
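That is why password “cracking” is really guess-checking, not reversing. A crude sketch, with a hypothetical five-word list standing in for the multi-million-entry wordlists and mangling rules that real crackers use (and an unsalted fast hash for simplicity):

```python
# Guessing, not reversing: check likely candidates against a stolen hash.
import base64, hashlib

stolen = base64.b64encode(hashlib.sha1(b"rabbit").digest())
for guess in ["123456", "password", "molly", "rabbit", "bunny"]:
    if base64.b64encode(hashlib.sha1(guess.encode()).digest()) == stolen:
        print("recovered:", guess)   # prints: recovered: rabbit
        break
```

No amount of one-wayness in the hash function helps Molly here; only a password that isn’t on anyone’s guess list would.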

So now let’s correct Slack’s statement:

Slack’s hashing function is bcrypt with a randomly generated salt per-password which makes it computationally infeasible that a randomly created password could be recreated from the hashed form.

Modified Slack statement.

This, of course, is why you should use 1Password’s Strong Password Generator for creating your passwords. When your password is chosen randomly with a large set of possibilities, then it really is computationally infeasible to discover the password from the cryptographic hash.

Slowing down guessing

I mentioned that (for now) bcrypt, scrypt, and PBKDF2 are good choices for password hashing. Once the final results are in from the Password Hashing Competition and the dust has settled, we will probably have a good successor to those three. These are built upon cryptographic hash functions, but are designed specifically for hashing input that is not selected randomly.

Because cryptographic hashing is something that we have computers do a lot of, one of the things that we want is that it be fast. We want to be able to perform lots and lots of SHA-256 hashes per second without straining a computer’s memory. But if an attacker is going to be guessing passwords to see if they produce the right hash, we want to slow down the hashing. PBKDF2, scrypt, and bcrypt are all designed to require much more computation than a regular hash function to compute a hash. This can slow down an attacker from performing millions of computations per second to just thousands. The actual speed depends on many things, including the hardware that the attacker brings to bear on the system. scrypt, additionally, places specific demands on memory.
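You can see the slowdown for yourself with Python’s built-in PBKDF2. The absolute numbers depend on your hardware, and the iteration count here is just an example; the ratio is the point:

```python
# Per-guess cost of a fast hash versus a deliberately slow one.
import hashlib, os, timeit

pw, salt = b"rabbit", os.urandom(16)
fast = timeit.timeit(lambda: hashlib.sha256(pw + salt).digest(), number=10_000) / 10_000
slow = timeit.timeit(lambda: hashlib.pbkdf2_hmac("sha256", pw, salt, 40_000), number=10) / 10
print(f"sha256: {fast:.2e}s  pbkdf2(40k): {slow:.2e}s  ratio: {slow/fast:,.0f}x")
```

Every guess the attacker makes pays that same per-guess penalty, which is exactly what the slow hash is for.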

So the use of bcrypt means that attackers will need to do more work than they otherwise would to guess passwords stolen from Slack. That is a good thing, but it is not an “infeasible” amount of work.

What’s infeasible?

I started out by saying that I was going to talk about the word “infeasible”, but so far I have just been using it a lot. This is because its definition is abstract, subtle, and hard. I am not going to give a full definition, but I am going to try to get reasonably close. The discussion that follows is inherently technical, and nobody will blame you if instead of reading further you just wish to watch us pour ice water over ourselves. (Remember, that was a thing just last year.)

Welcome back to this article. It gets progressively more arcane from this point onward.

The notion of infeasible depends on the relationship between the amount of work the defender has to do to secure the system compared to the amount of work that the attacker has to do to break it. A bank vault may take a minute to unlock if you know the combination, but it may take days to break through if you don’t. With cryptographic systems it can take just a fraction of a second to decrypt data if you have a key, but many times the age of the universe to do so if you don’t have the key.

Security parameters

What we want is the amount of work the attacker has to do to be vastly disproportionate to the work that the defender must do. It turns out that this can be stated mathematically, but first we need to introduce the notion of “security parameter” if we want our definition to stand the test of time instead of depending on the speed and power of current computers. So we will talk about how much work the defender and the attacker have to do in proportion to some security parameter.

Let’s pick, for purposes of exposition, an encryption system that operates at a security parameter of 56. The amount of computation that the defender has to do to decrypt some data with the key is proportional to 56, but the amount of work that the attacker has to do to decrypt the data without the key is proportional to 2⁵⁶. Fifty-six is much much smaller than 2 raised to the 56th power, but today even 2⁵⁶ operations is within the reach of many attackers. Thirty years ago it was within the reach of just a few.

So now let’s suppose that we want to double this security parameter to 112. How much of a work increase might this cause the defender? You might be thinking that it doubles the cost to the defender, but the system I’m thinking of actually tripled the cost to the defender. Tripling the cost for double the security parameter may not seem like a good deal, but doubling the security parameter increased the work of the attacker by another 2⁵⁶, for a total of 2¹¹². This puts it well outside the reach of even the most resourceful attacker for a long time to come.

When we doubled the security parameter in that example, the work to the defender increased linearly while the work to the attacker increased exponentially. We want the work required of the attacker to increase exponentially with the security parameter while for the defender we increase it linearly or polynomially.

Doing time, time, time in an exponential rhyme

If the security parameter is n, we will tolerate it if the amount of work the defender must do is proportional to nᵃ for some a ≥ 1. That’s what we mean when we say the work is “polynomial in n”. So if the work goes up with the square or cube of n we might grumble and seek more practical systems, but no matter how big the power that n is raised to gets, this is still a polynomial expression. An algorithm that works this way is called a “polynomial time algorithm”.

For the attacker, we want the number of computations needed to be proportional to an expression in which n is in the exponent. That is, the work for the attacker should be proportional to bⁿ for some b > 1, so that the work is exponential in n. (Those of you who know this stuff know that I’m leaving some things out and am taking some shortcuts.)

It might seem that a “big” polynomial gets us bigger numbers than a “small” exponential, but no matter how much a polynomial function starts out ahead of an exponential, the exponential will always catch up. Let’s compare the exponential y=1.1ˣ with the polynomial y=x⁶ + 2. For values of x below a few hundred, it looks like the polynomial is the runaway winner.

Plot of polynomial taking early lead over exponential

But we inevitably reach a point where the exponential function catches up. For the particular examples I’ve given, the exponential catches up with the polynomial when x is about 372.73.

Plot with exponential catching up

Finally, if we go just a bit beyond the point where the exponential overtakes the polynomial, we see that the exponential completely flattens the polynomial.

Plot on scale where exponential flattens polynomial
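If you would rather check that crossover point than take my word for it, a few lines of bisection will do; the two functions and the 372.73 figure are the ones from the plots above:

```python
# Where does 1.1**x overtake x**6 + 2? Bisect on the difference.
def diff(x):
    return 1.1**x - (x**6 + 2)

lo, hi = 300.0, 400.0          # diff(300) < 0 < diff(400)
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if diff(mid) < 0 else (lo, mid)
print(round(lo, 2))            # 372.73
```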

Some computations will take a number of steps that are polynomial in n (“polynomial time algorithms”), and others will be exponential (“exponential time algorithms”). We say that a task is infeasible if there is no polynomial time algorithm to complete it with a non-negligible chance of success. I have not defined what a non-negligible chance of success is, but as the article appears to be growing in length exponentially, I will leave that discussion for our forums.

When we have this sort of asymmetry, where the work done by the attacker grows exponentially with the security parameter, but grows at most polynomially for the defender, there will always be some security factor beyond which the work to be done by the attacker is so enormously larger than what the defender must do as to just not be an option for any attacker.

Quibbling over terminology

Now that we have a workable definition of “infeasible” and a better understanding of what cryptographic hash functions do, we can take a closer look at Slack’s statement. First let me repeat that their overall statement was excellent, and I fully sympathize with the difficulty involved in writing something about security that is correct, clear, and usable. I’ve taken some shortcuts in my exposition on any number of occasions, and I’ve made my share of errors as well. My point here is not to criticize but instead to use this as an opportunity to explain.

Given what we believe about cryptographic hash functions, it is infeasible to discover x given only the hash of x, but only if x is chosen at random. Furthermore, this is true of any (decent) cryptographic hash function and is not limited to the slow functions that are recommended for password hashing. That is, we don’t need bcrypt or PBKDF2 for that property to hold.

The limits of slow hashes

Slow hashes – specifically designed for password hashing – are built because we know that passwords are not chosen at random and so are subject to guessing attacks. But slow hashes have their limits, and with the notions that have been introduced above, we can now talk about them more clearly. Using a slow hash like PBKDF2 slows things down for both the attacker and for the defender. And the amount of slow-down is roughly the same for both the attacker and for the defender.

If we increase the security parameter (number of iterations) for PBKDF2 the computational cost rises linearly for both the attacker and for the defender. This is unlike the security parameters we use elsewhere in cryptography, where we would like a small (linear or perhaps polynomial) increase in cost to the defender to create a large (exponential) increase for the attacker.

Let’s see how that works out with a concrete, if hypothetical, example. Suppose a system is using 40,000 PBKDF2 iterations. Now suppose that you add a really randomly chosen digit to the end of your Master Password. Adding a single random digit will make an attacker do 10 times the amount of work that they would have to do to crack the original password. Adding two digits would make the attacker have to do 100 times the work of the original. Making a password just a little bit longer (with random stuff) makes the work required by the attacker increase exponentially. That is the kind of proportion of work that we like.

Now suppose 1Password uses 40,000 PBKDF2 iterations when deriving keys from your Master Password. To get the same additional work as adding a single digit to your password, you would need to increase the number of PBKDF2 iterations to 400,000. And to get the equivalent of adding two digits, you would need to increase the number of iterations to 4,000,000. Once we have a goodly amount of PBKDF2 iterations, there isn’t that much gained by increasing it by an additional ten or twenty thousand. But there is much to be gained by even a small improvement in a Master Password.
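The arithmetic is worth staring at, since it is the whole argument in miniature (the baseline number of guesses is arbitrary):

```python
# Iterations scale the attacker's work linearly; random password length
# scales it exponentially. (40,000 iterations is the hypothetical baseline.)
iterations, guesses = 40_000, 10_000

baseline = guesses * iterations
print(guesses * 400_000 / baseline)           # 10x:  ten-fold iterations...
print(guesses * 10 * iterations / baseline)   # 10x:  ...or just one random digit
print(guesses * 100 * iterations / baseline)  # 100x: two random digits
# But ten-fold iterations also slows *your* unlocks ten-fold;
# the extra digit costs you one keystroke.
```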

PBKDF2 is terrific, and it is an important part of the defense that 1Password offers if an attacker gets hold of your encrypted data. But you must still pick a good Master Password because the security parameter is linear for both the defender and the attacker. Unless there is a breakthrough from the Password Hashing Competition, a strong Master Password will always be required in order to ensure your security can withstand the test of time.


An open letter to banks

Update (2015-04-02): TD Canada Trust updated their iPhone app today re-enabling pasting in the login fields. It’s a great first step toward friendliness with security-conscious customers and password managers.

TD Canada Trust made quite a splash recently when it launched its redesigned iPhone app which disabled pasting in the password field. Users who embrace password managers for their online security were quick to point out their … well, ‘unhappiness’ with this decision. TD Canada’s original response to those users was unsettling:

Hi Steve, thx for stopping by. For ur security, your password should be committed to memory rather than using a password mgr. ^SB

The original tweet has since been deleted by @TD_Canada.

For those of us who rely on 1Password (and other password managers) on a daily basis, this advice is completely cringe-worthy … unfortunately, it’s really not all that uncommon in the banking world. Many banking and financial sites implement restrictions on password length, require certain special characters to be present, and put in place various ‘security theatre’ measures on their websites that do little for increasing user security, while ultimately making it more difficult for users to rely on password managers to fill their complex passwords in on the site. Why do they do this? Well, it’s difficult to know for sure, although our Chief Defender Against the Dark Arts does have a theory on the matter.

With the conversation about online security and banking so fresh in everyone’s minds, I thought now would be a great time to send a message out to banks and financial institutions everywhere to encourage them to take users’ security more seriously. I’m writing this not only as a member of the 1Password team who deals with security issues on a daily basis, but also as a concerned customer who just wants simple and secure access to her data.


Dear banks,

I know that you have my best interests at heart.

I know that you’ve worked hard to put ‘safeguards’ into place (such as disabling pasting into password fields, obfuscating usernames, spreading the login process across multiple pages and “please input the nth character of your password” fields) to thwart various types of attacks.

But the truth is that these ‘security measures’ are not actually helping your users.

Do you know what would really help your users? Long, random passwords.

Using long, random, and unique passwords is the best defense that we, your users, have against attackers. This advice is true for every site we have to sign in to these days … and believe me, we sign in to a lot more than just our financial sites. Keeping 100 or so strong and unique passwords memorized is not only a silly suggestion, it’s nearly impossible for all but the most savant-ish of us. Password managers help us increase our security by remembering these unique passwords for us, keeping them stored securely, and filling them in on websites so we don’t have to.

Many of the ‘security measures’ you have put into place serve only to make it much more difficult for those of us who rely on password managers. Password managers are not your enemy here. In fact, encouraging the use of trusted password managers will do more for your users’ security than any of the measures you currently have in place.

You have an awesome opportunity here. Take the time to educate your users on the value of true security. Encourage users to adopt long, random, and unique passwords that never need to be stored in their brains. Make it easy for password managers to store and fill these secure passwords for your users (in web browsers as well as in mobile apps).

Now, it just so happens that there is a very simple way that you can give your users easy access to their banking data in your mobile apps. We’ve written an App Extension API that can be added to your iOS app in 3 easy steps. The app extension will allow users to select their password manager of choice and fill their complex passwords into your form, with no typing required.

1Password has been giving people control over passwords for almost 10 years now, and it truly is a wonderful thing. Our team built 1Password around the idea that being secure should never be compromised for convenience. We’ve been advocating for stronger, safer passwords for years, and we’d be so happy if you stood with us.

For now, passwords are a necessary evil. Remembering them shouldn’t have to be.

Please help us increase awareness of online security. Your users will be ever-so-grateful that you are taking their security seriously, and you’ll be making their lives a lot simpler too.

Signed, a hopeful user.


Since TD’s original response last week, they seem to have had a change of heart. A tweet from @TD_Canada on Saturday indicates that they are in fact working on an update that will allow copy and paste within their app … and possibly considering integrating password managers.

Hi Rick, we're working on providing our customers w/ the option to use copy/paste & PW managers. No dates to share yet. ^SK

This is incredible news! Without seeing the update, it’s hard to know exactly what they have in store for users, but they have a great opportunity here to set the standard for banking apps and give other financial institutions a secure example to follow. I’m excited to see what they come out with!

If you believe as I do that banks should add 1Password (and other password manager) integration to their iOS apps, please consider sharing this open letter with your bank. #BanksNeed1Password


When is a password leak not a password leak?

I’d like to take a moment to talk a little bit about how people who study password behavior go about their job.

In the process, I would like to thank all password researchers and, in particular, Mark Burnett for both his years of excellent research and the help he has provided to other researchers. He is unequivocally one of the good guys, even if portions of the technical and popular press have entirely misunderstood the impact of his support for the research community.

Before getting into any detail, I would like to make it clear that Mark’s posting of 10 million passwords on Monday did not reveal any new information to hackers, and did not enable any new attacks. All of the information he packaged was already public, and Mark’s preparation made it even less useful to bad guys. For details, it’s best to read his own FAQ.

Of course, you, our readers, will all be using 1Password to help ensure you have unique passwords for each and every site and service.

Researching secrets

One of the biggest difficulties in studying password behavior is that people are supposed to keep their passwords secret. Because of this not-so-minor drawback, there are two ways to get real data on people’s behavior.

One way is to conduct experiments and simulations. There is some really exciting research along these lines, particularly from Lorrie Cranor’s group researching Usable Privacy and Security at Carnegie Mellon University. But there are many others contributing to that research.

One of the advantages of these experiments, which almost no other method offers, is that they help us figure out how well people can use and remember passwords. Of course, 1Password saves you from having to remember all but one (or a very few) of your passwords, but those passwords need to be strong. We rely on the research conducted by the academic community on password learnability, usability, and memorability when offering our own advice on creating better Master Passwords.

The second way to analyze people’s behavior with respect to passwords is to study the data that comes from password breaches. For example, when RockYou was hacked in 2009, the attacker published a list of 32 million user account passwords. Much of the advice you see today about most common passwords comes from the study of the RockYou data. Note that not all breaches involve revealing passwords. The recent breach of Anthem, for example, didn’t reveal customer passwords.

Pretty much everyone who studies password behavior grabbed a copy of those RockYou records. Professor Cranor, who I mentioned above, even made a dress based on the most popular passwords found in the RockYou data. Although we do not condone such breaches, we all make use of the data if it is published.

It is almost certainly true that only a small portion of such breaches are made public. Many of the criminals would like to keep both the fact of the breach and any passwords they obtain secret so that they can be exploited before people change those passwords. Sadly, the criminals have more data than we do, so they know more about actual password practices than we do.

One of the many uses of this sort of data is to figure out what the most common passwords are. Lists like the ‘top 10’ or ‘top 100’ passwords are often published in attempts to shame people into making better choices. But Mark’s earlier publication of the top 10,000 passwords has made it into 1Password itself. In addition to other tools and guidelines, we use that list in the Mac and iOS versions when calculating password strength.

For big data sets, like RockYou or Adobe in November 2013, I will usually make a point of getting a copy. That way, I can do my own research on some of these datasets, as well as read about the analyses that others do.

Tracking password dumps

Tweets from @dumpmon

There are smaller data sets published frequently, though sporadically, on sites like Pastebin. In fact, there is a handy Twitter bot, @dumpmon, that reports them.

To make things more confusing, many of the Pastebin posts make false claims about their data. They will claim that it is new data from, say, Gmail, while in fact it is old data drawn from previously published data. Quite simply, it is a substantial chore to watch for such data, evaluate it, and organize it into usable form. It takes skill, dedication, and analysis to do that.

I'm sure that I am not alone among those who study passwords in saying that I am glad Mark Burnett has been doing that work so that I don't have to. Mark has been studying these dumps for many years now. He has always shared his research results with the community, and has been very helpful when people (like me) ask him for some data.

When someone asks Mark for some of his data, he has to worry about removing credit card information that may be part of one leak, or about revealing information about the site from which the usernames and passwords were obtained. Despite the fact that the information has already been made public, he correctly does not feel comfortable re-releasing it. This is why he prepared the sanitized list that he released on Monday.

What have I learned studying these 10 million passwords?

To be honest, I haven't really dived into studying these yet. I'm lazy (let's call it "efficient") and patient, and am waiting for others to publish their results. However, if I don't see certain types of analyses that I believe would be useful, I'll roll up my sleeves and take the plunge.

But in playing with these for about 10 minutes, I (re-)learned a couple of things:

  • Modern computers are fast enough that I can actually do much preliminary poking around using AWK.
  • I was able to say “I told you so” to some friends about some clever passwords that were far more frequent than they’d imagined.
  • I confirmed (as I did with the Adobe set) that David Malone and Kevin Maher were correct when they concluded that – despite appearances – password frequency does not follow Zipf's Law. (A crude version of that check is sketched just after this list.)
  • I hadn't used Transmission/BitTorrent in ages, and no longer needed to seed the FreeBSD 8.2 ISO. (The password list was made available via torrent.)
  • Update: Someone actually used “correcthorsebatterystaple” as a password, illustrating the dangers of presenting examples when explaining password creation schemes.
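
For the curious, here is roughly what that sort of preliminary poking looks like in Python rather than AWK. The filename and the tab-separated "username password" layout are my assumptions about the dump's format; the last column compares observed frequencies against what Zipf's Law (frequency proportional to 1/rank) would predict from the top entry.

    from collections import Counter

    counts = Counter()
    with open("10-million-combos.txt", encoding="utf-8", errors="replace") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) == 2:              # expect "username<TAB>password"
                counts[parts[1]] += 1

    top = counts.most_common(10)
    top_freq = top[0][1]                     # frequency of the #1 password
    for rank, (password, freq) in enumerate(top, start=1):
        zipf = top_freq // rank              # what Zipf's Law would predict
        print(f"{rank:2d}. observed {freq:7d}  predicted {zipf:7d}  {password}")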

I do not wish to give the impression that I won’t be able to make valuable use of the data. There are a number of interesting analyses I would like to run. In particular, I would like to see if I can identify passwords created by a good password generator, but that will be a long and hard project. Broadly seeing what password creation schemes are the most popular would also be useful. I may use Dropbox’s zxcvbn password analysis engine to make a rough pass at that.

And there is no question that Mark’s collection, tidying, sanitizing, and releasing of this data will help us good guys learn more about password behavior.


TOTP for 1Password users

1Password 5.2 for iOS and 1Password 4.1.0.538 for Windows are out, and they provide support for using Time-based One Time Passwords (TOTP) in your Logins (note: in iOS, it's part of our Pro Features). Note that this is not for unlocking 1Password itself, but to aid with logging into sites for which you may be using TOTP, such as Dropbox and Tumblr.

To learn how to have 1Password help you manage your TOTP Logins, go straight to our user guide. If you would like to better understand when and why TOTP is useful for 1Password users, and what to do if you truly want two-factor security, continue reading here.

I've previously written (at excessive length, in some cases) about TOTP in general, but in each instance pointed out that it is of limited utility to 1Password users. This is because such schemes are of most use to people who have weak or reused passwords. If you are using a strong and unique password for a site, then many of the gains of two-step (or multi-step) verification are not relevant to you.

But “most” is not the same as “all”. There still are some cases where multi-step verification is useful to people using 1Password.

Sometimes you must use TOTP

Sometimes a site or service will simply require that TOTP always be used along with your regular password. Patty (one of my dogs) is working with a research group analyzing the structure of heartworm DNA. When she connects to the lab's server, she is required to use TOTP.

TOTP example in 1Password for Windows

She has set up an app on her laptop that constantly displays the current TOTP code. It sits there ticking away the whole time her laptop is running. Ideally, it should be visible only when she actually needs it, but she is understandably just trying to save time. Clearly, she could use TOTP more securely if it were available in the Login item within 1Password.

One-timeness? Yes

One-time passwords (the “OTP” in “TOTP”) are useful over insecure networks. Normally, when you submit a password to a site or service, you send the same password each time. Ideally, that connection is well encrypted so that the password cannot be captured when it is in transit. This is why it is very important to:

  • use HTTPS instead of HTTP when doing anything sensitive
  • pay attention to the lock icon in your browser’s address field (indicating HTTPS)
  • heed browser warnings about such connections

But networks are easy to compromise. Recently Molly (my other dog) was at the Barkville Airport. When she connected to the wifi, she saw two open network IDs. One was "BVT-access", and the other was "Airport Free Wifi". As it turned out, BVT-access was the legitimate one, but she connected to Airport Free Wifi, which was actually a laptop operated by Mr Talk, our neighbor's cat.

Mr Talk runs sslstrip on his rogue wifi hotspot. If Molly isn't paying close attention to the HTTPS status of her browser's connection, she can send things unencrypted over Mr Talk's network while thinking she has a secure connection. I should probably point out that Molly lacks the discipline to pay close attention to anything other than a squirrel or rabbit. This way, Mr Talk can capture Molly's passwords in transit to the servers and save them for later use.

That is one of several ways that passwords can be captured in transit. The point of one-time passwords is that they are not reusable even if they are captured in transit. In this way, TOTP provides a meaningful defense against plausible attacks even though there is nothing “second factor” about how it is being used.
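
To make that one-timeness concrete, here is a minimal sketch of how a TOTP code is computed, following RFC 6238 and using only Python's standard library (the secret shown is a made-up example, and real implementations handle clock skew and more). Because the code mixes a shared secret with the current 30-second time step, a captured code becomes useless as soon as that step passes.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, period=30, digits=6):
        # Sites usually hand out the shared secret base32-encoded.
        key = base64.b32decode(secret_b32, casefold=True)
        # The moving factor: how many periods have elapsed since the epoch.
        counter = struct.pack(">Q", int(time.time()) // period)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        # Dynamic truncation, as specified in RFC 4226.
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # a made-up example secret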

Second factor? No

We need to make the distinction between one time passwords and second factor security. One time passwords are often part of second factor security systems, but using one time passwords doesn’t automatically give you second factor security. Indeed, when you store your TOTP secret in the same place that you keep your password for a site, you do not have second factor security.

However, you still have the benefits of the one-timeness of TOTP codes.

Systems like TOTP are sometimes used as part of second (or multi) factor authentication systems. But this is far from their only usage. To be truly second factor, the TOTP secret (from which the one time password is generated) must not be stored on the same device that you use the regular password on.

Let's consider an example. Molly has a Tumblr where she posts pictures of the squirrels she is after. So far, she has been using the Authy app on her phone to manage TOTP. If she never logs in to Tumblr on that phone, then she is using her phone as a second factor. But if she is also using Tumblr from her phone and has had to use her one time password from there, then there is no second factor.

In general, there is a reason why many services that offer TOTP refer to it as “two-step verification” instead of as “second factor authentication”. The security that such sites seek to gain from this is not in the second-factorness; it is in the one-timeness. In particular, many of the sites and services that offer or require two-step verification with one time passwords are doing so because many of their users have weak or reused passwords. Although that should not apply to 1Password users, there are other benefits to one time passwords as I discussed above.

If you really want true two factor

If you would like to turn a site’s offering of TOTP into true two-factor security, you should not store your TOTP secret in 1Password (or in anything that will synchronize across systems). Furthermore, you should not use the regular password for the site on the same device that holds your TOTP secret.

Put simply: the device that holds your TOTP secret should never hold your password if your aim is genuine two factor security.

Personally, I don’t think that following that practice would be worthwhile for anything but a very small number of special circumstances, in which case, you should probably be using a specialized second factor device instead of something like a phone. But not everyone shares my opinion on this, and if you have a need for true second-factor security for some particular site or service, you should take that into account before adding a TOTP secret to 1Password.

For everyone else, if you find the one-timeness of TOTP worthwhile on its own (or are required to use it), 1Password's new support in v5.2 for iOS and v4.1.0.538 for Windows makes it easier to use than ever.


Avoiding the clipboard with 1Password and Lollipop

Copy & Paste clipboards (or “pasteboards” as they are called on Mac and iOS) can be dangerous places for secrets if you have malicious software running on your device. On most operating systems – mobile and desktop alike – most running applications can read from the system clipboard. When you copy a secret to the system clipboard, a malicious process may be able to read and steal that secret.
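
To illustrate how low the bar is, here is a minimal sketch, in Python with the standard tkinter module, of an ordinary unprivileged desktop program reading whatever text is currently on the clipboard. Nothing about it requires elevated permissions on most desktop operating systems.

    import tkinter

    root = tkinter.Tk()
    root.withdraw()  # no window needed; we only want clipboard access
    try:
        print("Clipboard says:", root.clipboard_get())
    except tkinter.TclError:
        print("Clipboard is empty or holds non-text data")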

This, by the way, is not news, but it is good that it has made the news. It helps people be aware of clipboard usage, and it gives me the opportunity to talk a bit about what we have been doing over the years about this.

We have always worked to reduce how much people need to depend on system clipboards when using 1Password. The details differ from system to system, and each operating environment gives us different ways to help reduce clipboard use. On Macs and Windows PCs, we have the 1Password Browser Extensions communicate with 1Password so that web form filling can avoid the clipboard. 1Password for Windows also uses auto-type to reduce clipboard activity. 1Password 5 on iOS offers 1Browser and integration with other apps through App Extensions.

But today I will reveal a few things that our 1Password for Android beta testers already know.

Aside: Before I get to that discussion, I should point out (as I often do) that the single best defense against a malicious program running on your machine or device is to keep your systems up to date with all software and system updates. It is also important to be careful about what you install on your system. 1Password can offer some significant defenses against malware, but you have to help by keeping your systems free of it.

1Password 4 for Android already has a simple built-in browser. This allows you to go directly from your Login item in 1Password to the web page, filling the data without the clipboard. Our iOS users are already familiar with 1Browser, and this is shaping up on Android.

Lollipop provides clipboardless sweetness

Of course, web pages aren’t the only thing that people need to fill passwords into, and sometimes people may wish to use something other than the browser built in to 1Password. In the current Beta release of 1Password for Android, we used the latest security and accessibility features in Android 5 (Lollipop) to allow 1Password to fill into other apps without making use of the clipboard.

Starting with Lollipop, we have a way to fill password data into other apps without using the clipboard. Perhaps it would be best to just quote what Nik, our Happiness Engineer, had to say in the beta newsletter just a couple of weeks ago:

Wondering why app and browser filling requires OS 5.0? Me too! So I asked our developers. It turns out that the only way for us to do this in earlier versions of Android OS was to use copy/paste accessibility APIs, meaning that any clipboard manager or malicious app could listen to clipboard events and collect login credentials as they were filled.

In Lollipop, 1Password can fill your information directly, without using the clipboard. Therefore, it isn’t possible for a third party to obtain your passwords by snooping on what 1Password’s doing.

Prior to Lollipop, it would have been possible to get this kind of app-filling, but it would have relied on the clipboard under the hood. Because using the clipboard involves known risks, we feel that we should make it clear when copy/paste is being used and minimize its use wherever possible. As a result, we decided to focus on a Lollipop-only implementation of our filling feature.

If you have an Android device with Lollipop installed and would like a sneak peek, I invite you to sign up for our Android beta.

Clipboards may always be with us

As you can see, we are working to reduce dependency on system clipboards when using 1Password. This is an on-going process. Browser integration on the desktop was something we started with back when the very first version of 1Password was released for the Mac nearly eight years ago. Later, we introduced our own browser into 1Password for iOS, and much more recently encouraged 1Password integration with other iOS 8 apps using App Extensions. Along the way, we introduced auto-type in 1Password for Windows and a web browser into 1Password for Android. As you've learned here, we have in-app filling in our Android Beta, making use of the latest features of Android 5.0, Lollipop.

But while we are progressively reducing the need for copy and paste to a system clipboard, we are a long way from eliminating that need entirely. This is why I must repeat my advice to keep your system free of malicious software.

What I would like to see is a clipboard that could only be read when the user explicitly chooses to paste. This is something that has been suggested a number of times before, but has not been implemented on the most popular operating systems. I suspect that there is a reason for that, but if you know, I eagerly await your insights in the comments.

 


Viewing Drupal from the 1Password Watchtower

When a large number of websites are discovered to have been vulnerable, as is the case with websites running recent versions of Drupal, people need clear and unambiguous advice that they can act on. And so, our clear and unambiguous advice is:

If you have a username and password on a site which has been using Drupal for its content management, you should change that password. You will need to change that password everywhere you use it, not just on the potentially affected sites.

Our Watchtower service within 1Password for Mac and Windows will recommend password changes for a number of sites that we detect as using Drupal. Here you can see what that will look like.

Drupal Watchtower example

We should also make it clear that none of our systems are affected by the Drupal vulnerability. We don’t use Drupal.

Site administrators know best

We don’t know the status of any particular site other than it appears to be running Drupal. Therefore, if our advice conflicts with advice you received from the administrators of a site, follow their recommendations.

We don’t know when a site gets fixed

Some vulnerable Drupal systems may have been fixed on October 15. Others may still not be fixed. Our tests are only capable of determining whether a website appears to be using Drupal (and even that test is imperfect).

Merely patching Drupal is not sufficient for sites that may have been compromised. That is because an attacker using the vulnerability may have left a “backdoor” in a site allowing them back in even after the original vulnerability has been fixed. This makes it yet more difficult to determine whether a site remains vulnerable.

We don’t know if a site has been compromised

Just because a site has been vulnerable doesn't mean that it has been compromised. However, it appears that automated attacks have been systematically breaking into vulnerable sites and planting "back doors" that would allow the attacker a way back in at any time in the future. So we should assume that most Drupal sites which weren't patched very quickly on October 15 have been compromised.

A password compromised anywhere must be changed everywhere

If you reuse the same password on more than one site, you will have some extra work cut out for you. Let me explain why.

Suppose that Molly (one of my dogs) has used the same password on Bark Book as she does on Sprayed By a Mink Anonymous, and let's also suppose that Bark Book gets compromised by Mr Talk (the neighbor's cat). Molly will need to change her password both on the compromised site (BarkBook.com) and on the uncompromised site (SprayedByMinkAnon.org). That is because Mr Talk can use what he has learned from Bark Book against all of the sites and services that he thinks Molly may be using. I must also report that Mr Talk, along with everyone downwind, can easily guess that Molly may well be visiting SprayedByMinkAnon.org.

Molly should take this opportunity to work towards having a unique password for each and every service. 1Password will remember those for her. The closer she gets to having a unique password for each site, the less of a headache the next big incident will be.


Shellshock is bad, unique passwords are good

A new security bug, commonly known as Shellshock (officially CVE-2014-6271), is bad. It is fair to say that a large number of servers (particularly web servers) were vulnerable to serious attack for some time. It is likely that many still are, and we are unlikely to learn about most of them.

What are we to do? Answer: use unique passwords for each site and service.

Squirrels, rabbits, and passwords

Let's consider Molly, one of my dogs. She has a one-track mind: squirrels and rabbits. She also is not very good at counting, so she doesn't understand the difference between one track and two tracks.

Molly tends to reuse the same password for lots of things. Her password for Barkbook is squirrel. It’s also the password for CatChasers and a number of other sites and services.

Suppose that Patty, my other dog, isn’t the sweet innocent little thing that she pretends to be. Suppose that she breaks into CatChasers and is able to steal user passwords from it. She learns that Molly’s password was “squirrel” on CatChasers, so she’ll check if Molly used the same password on Barkbook and other sites.


Password reuse is doubly bad

Indeed, when Molly uses the password "squirrel" on multiple sites, she is putting all of her squirrels in one basket. If her password is stolen from any one of those sites, Patty can get into all of them.

The more places that Molly uses the password "squirrel," the more likely it is that at least one of those sites will get breached, and the more damage is done when her password is discovered at any one of them.

If Molly uses “squirrel” for twenty sites, there is a very strong chance that several of them are vulnerable to this new Shellshock flaw, Heartbleed, or any of the other known and unknown vulnerabilities being exploited. When Patty does break into one of those twenty sites, she will now have control of twenty of Molly’s accounts.
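
A quick back-of-the-envelope calculation shows why. Suppose, purely for illustration, that each site independently has a 10 percent chance of being breached while Molly's password is in use (the real rate is unknowable); the chance that at least one site falls grows quickly with the number of sites sharing "squirrel".

    # p = 0.10 is an illustrative assumption, not a measured breach rate.
    p = 0.10
    for n in (1, 5, 10, 20):
        at_least_one = 1 - (1 - p) ** n
        print(f"{n:2d} sites: {at_least_one:.0%} chance at least one is breached")
    # With 20 sites, that comes to roughly an 88% chance.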

What you can do

In short, be careful. System administrators will be busy for a while. In addition to upgrading bash on systems that use it, they should be trying to track down which systems create environment variables with untrusted content and whether those systems ever invoke a shell.

But normal people (and I don't think that many will dispute that system administrators are not "normal people") are left with the knowledge that there are a lot of vulnerable systems out there. By far the single best thing we can do is to cut down on our password reuse. The easiest way to do that with 1Password is to give Security Audit a whirl.

There is so much more to say

Everyone with some sort of security point to make is using Shellshock to illustrate their favorite lesson. This is easy to do because Shellshock isn't just a bug; it is a bug that can be exploited because of a series of design decisions that were pretty much asking for trouble. Each one of those decisions (or non-decisions) is something that everyone in the business really does know better about. But somehow, the software and systems engineering community has managed to ignore its own wisdom at each step of the way.

  1. We members of this community know not to pass untrusted data to other processes, yet we've built systems that create shell environment variables (things designed to be passed all over the place) from the most untrusted sources of all (e.g., CGI, DHCP clients).
  2. Our community knows that tricking systems into executing “data” is often how attacks happen, yet bash has a feature that deliberately allows what is normally data passed around to be executed.
  3. Whether computer science students like it or not, they are taught that when data is in a particular class of languages it is impossible to validate it, yet with bash we've stuck a Type 0 language inside of variables.
  4. Scripts and programs should (generally) avoid invoking a shell as even the Linux manual page for system(3) says

    Do not use system() from a program with set-user-ID or set-group-ID privileges, because strange values for some environment variables might be used to subvert system integrity.

    Yet calling system(3) is common practice because it is easier than invoking other programs the proper way. (A sketch of the safer approach follows this list.)
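
To make guideline 4 concrete, here is a minimal sketch in Python (the hostile filename is invented for illustration) of the difference between handing a command line to a shell and invoking a program directly.

    import subprocess

    filename = "photo.jpg; rm -rf ~"  # hostile "data" posing as a filename

    # Dangerous: with shell=True a shell parses the whole string, so the
    # embedded command after the semicolon would actually execute:
    #   subprocess.run("ls -l " + filename, shell=True)

    # Safer: no shell is involved; the hostile string is passed as a single
    # argument, and "ls" merely complains that no such file exists.
    subprocess.run(["ls", "-l", filename])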

When a system falls victim to Shellshock, it is because every one of those principles and guidelines has been ignored. The first one is in the design of various network services (such as web servers). Numbers two and three are in the design of bash, and number four crops up in innumerable scripts and programs. None of them is actually about the specific bug in bash. Instead, one through three are about specific design features of various systems.

There is a great deal I would like to say about each of these, but I will leave that ranting for another time. Today, I just wish to remind everyone about the importance of using unique passwords for each and every service.

Bash update for Mac OS X

Apple has made bash updates available to those who do not wish to wait for a regular software update. OS X bash Update 1.0 may be obtained from the following webpages:

  • http://support.apple.com/kb/DL1767 – OS X Lion
  • http://support.apple.com/kb/DL1768 – OS X Mountain Lion
  • http://support.apple.com/kb/DL1769 – OS X Mavericks

To check that bash has been updated:

  • Open Terminal
  • Execute this command: bash --version

The version after applying this update will be:

  • OS X Mavericks: GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin13)
  • OS X Mountain Lion: GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin12)
  • OS X Lion: GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin11)

Watch what you type: 1Password’s defenses against keystroke loggers

I have said it before, and I'll say it again: 1Password and Knox cannot provide complete protection against a compromised operating system. There is a saying (for which I cannot find a source): "Once an attacker has broken into your computer [and obtained root privileges], it is no longer your computer." So in principle, there is nothing that 1Password can do to protect you if your computer is compromised.

In practice, however, there are steps we can and do take which dramatically reduce the chances that some malware running on your computer, particularly keystroke loggers, could capture your Master Password.

Safe at rest

Let me clarify one thing before going on. 1Password does protect you from the attacker who breaks into your computer and steals your 1Password data. The 1Password data format is designed with just such attacks in mind. This is why your data is encrypted with keys derived from your Master Password. It is also why we’ve put in measures to make it much harder for an attacker to try to guess your Master Password in the event that they do capture your data.
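
The general idea behind those measures can be sketched with Python's standard library. To be clear, the function and parameters below are illustrative, not 1Password's actual implementation: derive the encryption key by running the Master Password through a deliberately slow, salted function, so that every guess an attacker makes is expensive.

    import hashlib, os

    master_password = b"a long and memorable Master Password"
    salt = os.urandom(16)    # random salt, stored alongside the encrypted data
    iterations = 100000      # deliberately slows down every guess

    key = hashlib.pbkdf2_hmac("sha256", master_password, salt, iterations)
    print("derived key:", key.hex())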

Even if an attacker gains access to your computer and 1Password data, there is little she can do without your Master Password. In this article, I’m focusing on another kind of attack in which the attacker tries to “listen in” to you typing your Master Password. This attacker is running a program on your computer that attempts to record everything you type on the keyboard or enter through some sort of keyboard-like device.

Countering counter-counter measures

I will get to the details below, but this article aims to describe and explain a change in how 1Password for Windows secures its Secure Desktop, a counter measure against a common type of keystroke logger. This change was added recently to 1Password 1 for Windows and has been included in 1Password 4 for Windows since its launch.

Márcio Almeida de Macêdo and Bruno Gonçalves de Oliveira of Trustwave SpiderLabs discovered a way that a keystroke logger could work around our use of Secure Desktop, and reported it to us. They have now reported it publicly (the link might be having trouble, but it's listed among their Security Advisories). We have since added a mechanism that defeats that particular work-around of Secure Desktop. We very much appreciate SpiderLabs giving us the opportunity to put a fix in place before announcing their discovery to the public. Trustwave SpiderLabs might grab fewer headlines by having done the right thing, but they have done the right thing.

Secure Desktop itself is a counter measure to keystroke loggers. De Macêdo and de Oliveira’s discovery is a counter measure to our counter measure. We have now introduced a counter-counter-counter measure. All of this will be explained, but it requires a lot of background into how keystroke loggers work and various ways to defend against them.

Keystroke loggers

Keystroke loggers attempt to capture everything that is typed on a particular computer or keyboard and pass that information on to a third party.

There are one or two legitimate uses of these (such as in research on writing), but those all involve the consent of those whose keystrokes are being logged. More typically, keystroke loggers run surreptitiously, and are an attack on user privacy. I know that people don't come to this blog for relationship advice, but if you are seriously tempted to install a keystroke logger to spy on a spouse or lover – a popular use of these things – then I have my doubts about the future of your relationship. Since you didn't come here for relationship advice (and if you did you came to the wrong place), let's return to how keystroke loggers work.

Logger in the middle

There are many different ways that keystroke loggers can work, but one useful way to think about them is as something (either hardware or software) that sits between your keyboard and the program you are typing into: something which shouldn't be there.

Hardware PS/2 keylogger in action

For keyboards that are attached to a computer with a cable, the simplest keystroke loggers are little physical devices that the attacker plugs into the computer, and then plugs the keyboard cable into that.

The keystroke logger is, in this case, sitting between the keyboard and the computer. The computer thinks it is talking directly to the keyboard, and the keyboard thinks it is talking to the computer, but the keystroke logger is sitting between them.

Alternatively, software keystroke loggers sit between components deep within the operating system and silently grab data. Loggers embedded that deeply, or implemented in hardware, are not something that user software can detect or defend against.

Most keystroke logging is shallow

Most keystroke loggers take a simpler approach, rather than inserting themselves deep within the system. It is much simpler to write a program that says “hey, I am a program that needs to know everything that is coming in from the keyboard.” Operating systems provide hooks for programs to do exactly that.

You might be asking why operating systems might make writing keystroke loggers so easy. What business does any program running in the background have in seeing the input to some other program? One reason is to help my poor dog Molly, who suffers from (among other things) diabetes. This has led to sufficient necrosis in her paws so that she cannot easily type using a standard keyboard. The specialized device that she uses involves some clever software that looks at the input and uses various predictive technologies to replace the actual input with the intended text. This system intercepts (and changes) input bound for any program running on her computer; however, as far as most programs know, they are just getting input from a “keyboard”. Assistive technologies similar to the one Molly uses are a big part of making computing and communication accessible to more people.

Not only is a basic keystroke logger easy to write, it doesn't require a complete break into a system. Different processes on a computer run with different privileges. When Molly logs in to her account and runs a program on a computer, the program runs under her user ID and with her privileges. This means that she isn't able to interfere with processes that are run by Patty (the other dog). She also isn't able to interfere with the system as a whole. If Mr Talk (the neighbor's cat) tricks Molly into running a malicious program, that malware will be limited in the damage it can do.

The really deep and hard-to-avoid keystroke loggers would require full power over the system to install. But one of these simpler keystroke loggers requires only the privileges of the user whose keystrokes are to be recorded. So if Molly gets tricked into running a keystroke logger, it won’t affect Patty even if they use the same computer (as long as they are using different accounts). As you can imagine, the bulk of malicious keystroke loggers that spread through computer infection are of this shallower sort.

Counter measures

Now that we have some idea of how the typical keystroke logger works, it’s time to look at some counter-measures. The two most important counter-measures are:

  • keep your system and software up to date
  • exercise caution in what software you install and run

But let me focus on a couple of the counter-measures that 1Password takes.

Counter measures on Mac: Secure Input

On Mac OS X, there are two simple provisions that make it easy to thwart those shallow key loggers. The first of these is called "Secure Input" and was introduced with OS X 10.3 Panther in 2003. A program—1Password for example—can say, "when the user types something into this particular input field, it must be done in a way that other processes can't interfere." Secure Input needs to be used sparingly, as it blocks all sorts of legitimate activity, including assistive technologies that many people (and a few dogs) rely on. And Secure Input blocks TextExpander, which I rely on.

When 1Password declares the field in which you type your Master Password to be a Secure Input field, ordinary key loggers won't have access to it. Since last year's OS X 10.9 Mavericks, there is another defense built into the operating system: a program can only capture all of a user's keystrokes if the user has explicitly granted it that permission in System Preferences > Security & Privacy > Privacy, under Accessibility. As I described earlier, most (but not all) such software is a component of assistive technologies designed to make computers accessible to more people. That is why this system preference ultimately lives under Accessibility.

Between these two mechanisms – Secure Input, and the requirement that any application with the capacity to log keystrokes have explicit user approval to do so – OS X defends against these otherwise common sorts of keystroke loggers.

Counter measures on Windows: Secure Desktop


Windows doesn’t offer the same sorts of defenses that OS X has, but it does allow for the creation of somewhat isolated environments called “Desktops”. On Windows, one can set up different Desktops in which only your program is running (along with system processes). A program running in one Desktop will not be able to listen in on keyboard input in a separate Desktop.

You will find a button that says “Unlock with Secure Desktop” in the upper right corner of the lock screen in 1Password 4. Clicking on that launches the Secure Desktop in which you will be prompted for your Master Password. You can take a look at Unlock with Secure Desktop in action.

Countering Secure Desktop

What de Macêdo and de Oliveira have discovered is that there is a way to set up a keystroke logger that does operate in all desktops, not just the one it was started in. Quite simply, their system launches a process that is able to listen for the creation of new desktops and add a process to each desktop created.

The ease with which they were able to do this (well, everything looks easy in retrospect) reflects the fact that the SwitchDesktop function in Windows was not designed for security purposes. We, and others who use Secure Desktop as a mechanism for evading keystroke loggers, have been taking advantage of the relatively isolated environment of a separate Desktop. Once the authors of keystroke loggers take our counter measures into account, they can launch counter-counter measures like the one Trustwave describes.

Knowing your environment

We want nothing but system processes and 1Password's Master Password entry running in a Secure Desktop. We don't want other, possibly malicious, processes joining that Desktop. And so, our counter-counter-counter measure is simply to look around and see if anything unexpected is running in the Secure Desktop.

If some unexpected process is found in the Secure Desktop environment, you’ll be prompted to close the Secure Desktop.

Secure Desktop: 1Password has detected an unknown process

Lessons

1. Keep your system and software up to date

The single biggest thing you can do for your computer security is to keep your system and software up to date. The overwhelming majority of actual break-ins are through vulnerabilities that have already been fixed by the software vendors.

2. Pay attention to what software you install and where you get it from

Keystroke loggers and other malware are often installed unwittingly by the victims themselves. Try not to be one of those victims. Be particularly careful of anything that tries to frighten you into installing it. Fake security software and alerts are a common way to get people to install malicious software.

The move toward curated app stores offers additional protections, but it isn’t a complete solution. Still, using those where available will reduce your risks.

3. Use Windows Defender on Windows

I have long been skeptical of most anti-virus software, but Microsoft Security Essentials is something I can unequivocally recommend for those using Windows 7. In Windows 8, Windows Defender is automatically built in and enabled.

4. Understand what software can and can’t do for you

The core security design of 1Password is extremely strong. Quite simply: if you have a good Master Password, nobody who gets a copy of your 1Password data will be able to decrypt it. 1Password can and does offer outstanding security.

At the same time, 1Password is limited in what it can do to protect you when you are using a compromised computer. It can (and does) offer some protection against shallow (the most common) attacks. But this is a bit of an arms race. As you see, we have had to put into place a counter measure to a counter measure to our counter measure against common keystroke loggers.

This is why the first two items on this list are so important.

In conclusion

1Password takes extraordinary and effective steps to protect your data. This is built into every aspect of its design. But you have to help protect 1Password from malware running on your machine. We do what we can to make things harder for the malware writers, but we can’t do it alone. You must try to provide a safe environment for 1Password and all of your software to run in.

This shared responsibility is similar to that which we have with your Master Password. We provide excellent encryption and protections and defenses against automated password guessing. But you have to pick a good Master Password and treat it well. For those who might be wondering, displaying your password on a giant screen is not treating a password well.
