Posts

Stop me if you’ve heard this password before

It seems that “Password1” is the number 1 password on business systems. (Source: Trustwave’s 2012 Global Security Report.) Of course, if people used 1Password (the application, not as a password) they wouldn’t be stuck having to remember passwords.

The reason, according to the report, that “Password1” is so popular within businesses is that it meets the requirements on a typical corporate network: it is at least eight characters long, it contains a capital letter, and it has something that isn’t a letter.

Here are some other things we’ve heard around the net about trying to meet similar requirements. Let’s hope they aren’t true:

My password has at least eight characters in it: “Snow White and the Seven Dwarves”

Or:

I always mention London or Paris in mine. That way it will have a capital.

And to get helpful reminders if you forget your password:

My password is “incorrect”. This way, if I type it in wrong, the system tells me what it is.

We fear that not all of the instances of the following conversation are tongue in cheek:

“I use 1Password.” “So do I. I use the same one everywhere.”

Of course all of you reading this know better. You use the Strong Password Generator in 1Password to get strong and unique passwords for each site and service.

Do you know where your software comes from? Gatekeeper will help

You trust us to provide you with tools that keep some of your very valuable secrets safe. Part of that trust means that, when you install or update 1Password or Knox, you know the app you are getting comes from us. After all, if a bad guy produced a modified version of 1Password, it could do bad things. So far there have been no such modified versions “out there”, and we want to keep it that way. In addition to all of the things that we do to ensure that you get the genuine article, Apple is working to make it even easier to keep your Mac free of malicious software.

Apple has just announced that Mountain Lion (to be released in the summer of 2012) will include something called Gatekeeper. This is a core OS X feature that I and others have been anticipating for a while; surprisingly, almost all of its components are already built into the latest version of Lion. Roughly speaking, Gatekeeper will allow you to control which apps can run on your Mac depending on where the software comes from.

The question then is: how does your Mac know where your software comes from?

The Magic of Digital Signatures

I would love nothing more than to explain the mathematics behind digital signatures. But for today’s purposes, let’s just say it is magic (even when you know the math, it feels like magic). When you connect over HTTPS to a secure website, that website proves who it is because it knows a particular secret (called a “private key” or “secret key”). The corresponding “public key” is not kept secret.

The magic is that the website doesn’t have to reveal the secret to prove that it knows it.

The Evilgrade interface

Instead, your computer system can use the non-secret public key to construct a mathematical puzzle that only someone who knows the secret key can solve; anyone with the public key can check that the solution to the puzzle is correct, but they can’t figure out what the secret key is. This can prevent someone from hijacking the download process with a tool like Evilgrade.

In the same way that a secure website can prove who it is without revealing any secrets, a digital signature on a file (or a group of files) can prove who made it. If someone makes even the smallest change to the signed file, the signature won’t verify.
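If you like to see ideas in code, here is a tiny sketch of that sign-and-verify dance, written in Python with the third-party cryptography package. It only illustrates the concept; Apple’s code signing uses its own certificate chain and tooling, not anything like this snippet.

# A minimal sketch of signing and verifying with a public/private key pair.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # the secret only the signer holds
public_key = private_key.public_key()        # published for everyone to use

app_bundle = b"contents of the application bundle"
signature = private_key.sign(app_bundle)     # made with the secret key

# Anyone holding the public key can check the signature...
public_key.verify(signature, app_bundle)     # raises InvalidSignature on failure

# ...and even a one-byte change to the signed data makes verification fail.
try:
    public_key.verify(signature, app_bundle + b"!")
except InvalidSignature:
    print("tampered file detected")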

Three Kinds of Apps

Applications that you install through the Mac App Store (MAS) are all digitally signed this way. But for years, Apple has been encouraging developers to digitally sign applications even if they aren’t sold through the MAS. So on your Mac today there are probably three kinds of applications:

  1. Those that came from the MAS
  2. Signed applications that did not come through the MAS
  3. Applications that aren’t signed

Gatekeeper will allow you to decide which of these categories of applications may run on your machine.

If you are running 1Password 3.9, then that came signed through the MAS. But if you are running 1Password 3.8 or Knox 2 from our website, they are still signed by us and will fall into the second category.

Verifying a signature today

When you install an application from the Mac App Store, the installation process checks the signature. It won’t install the app if it isn’t signed or if the signature doesn’t verify (which is more likely to happen through a damaged download than through a malicious attack, but both can happen). When you update the non-MAS version of 1Password, our updater runs a code signing signature verification as one of the three checks we use to ensure that you are getting the genuine 1Password from us. For those who are curious, our other two verification mechanisms are (1) fetching from a secure web server and verifying the server signature, and (2) checking a cryptographic checksum for the download which we fetch from a separate secure server.
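For the curious, the checksum part of that last check is conceptually as simple as the sketch below. The file name and expected value here are made up for illustration; this is not our actual update code.

# Hash the downloaded file and compare it to a checksum fetched separately.
import hashlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "hypothetical value fetched from a second secure server"
if sha256_of("1Password.zip") != expected:
    raise SystemExit("Checksum mismatch: refusing to install this download")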

But suppose you wanted to check the copy of 1Password that you are currently running. All of those behind-the-scenes checks on the download and installation processes won’t help you do that. The way to check today is hard: it involves running a complicated command in a Terminal window. Here it is for the non-MAS version of 1Password:

codesign -vvv -R="identifier ws.agile and anchor trusted" \
/Applications/1Password.app

The output should be something like

/Applications/1Password.app: valid on disk
/Applications/1Password.app: satisfies its Designated Requirement
/Applications/1Password.app: explicit requirement satisfied

Clearly we don’t expect users to run these sorts of commands.

codesign in Terminal

We have been using Apple’s code signing mechanism for years because we wanted to be able to direct concerned users to this kind of command if they specifically ask. We’ve also been using it for additional security in our own updater. But another reason that we’ve been doing this for a while is because we’ve been anticipating either Gatekeeper or something similar.

Verifying a signature tomorrow

Gatekeeper will perform the codesign verification when an application is launched. This adds a welcome layer of additional security beyond verifying the download source when the application is downloaded and installed.

A mathematically valid signature is the easy part

The mathematics (the magic) makes all of the above simple. The hard part of Gatekeeper is the trustworthiness of the signatures. I can sit at my computer and create a public/private key pair that says it belongs to Alan Turing. Since Turing has been dead for more than half a century, few people would think that it actually belongs to that great mathematician and codebreaker. But what if I picked the name of a trusted person or institution that is around today?

The answer is that some trusted third party digitally signs my public key after verifying that it belongs to whom it claims to belong. I’ve discussed how this system works (and how it can break down) when it comes to web server certificates, so I won’t repeat that here; the concepts are all the same. In the case of codesigning certificates for Mac developers, Apple does that checking and signing.

We changed our name a while back, so at some point before Gatekeeper is in common use, we will have to update our codesigning certificate identifier from “ws.agile” to “com.agilebits”. But for the time being, when you see “ws.agile” as the entity behind the digital signature on 1Password and Knox, you should know that that is us.

Other than getting a new certificate with our new name, we have been ready and waiting for years to get on board with the new security provided by Gatekeeper.

[Update: As of 1Password 3.8.19 Beta 1, 1Password is now signed with our new Apple Developer ID, AgileBits Inc.]

PSA: Keep your software up to date (an ode to Apple Security Update 2012-001)

Mac Software Update

Apple released its first big OS X update of 2012 this week, and it’s pretty big. It’s easier than ever to keep your computer up-to-date these days, but it never hurts to review good habits, especially when it comes to keeping your computer and data secure.

By far, the largest number of compromises of home computer systems is through vulnerabilities that the victims could have avoided if they only kept their systems up to date. If you want to see the numbers, take a look at Microsoft Security Intelligence Report volume 11 (PDF). While that report is specific to Microsoft Windows, the lesson applies across operating systems.

This is why I am reminding all Mac users of Lion (OS X 10.7) and Snow Leopard (OS X 10.6) to update their systems by using Software Update. For Lion, the security updates come as part of the update from 10.7.2 to 10.7.3. On Snow Leopard, it is a separate security update that does not change the version number. If you are still using OS X 10.5 (Leopard), please understand that Apple is no longer providing any updates, including security updates for it.

There are a large number of security fixes in the latest (February 1, 2012) updates, Security Update 2012-001. None of the fixed security issues directly affect 1Password or Knox, but as always, it is better to keep your system secure through regular software updates.

Automatic Operating System updates

On both the Mac and Windows you can set your system to check for updates automatically. On the Mac, just go to Apple Menu > System Preferences > Software Update and use the “Scheduled Check” tab.

On Windows 7, just go to Start > Control Panel > System and Security > Windows Update and then “Change settings” in the sidebar at the left. Note that the layout is slightly different depending on the version of Windows.

Windows Update Settings window

Keeping 1Password up to date

Naturally, you should also be keeping 1Password and its components up to date. If you are using the Mac App Store version of 1Password, then the App Store application will keep track of this for you. Just keep an eye out for a red badge on the App Store icon in your Dock or open the store every now and then and check the Updates tab.

If you got 1Password from our website, just go to 1Password > Preferences > Updates and make sure that you have things set to automatically check for updates.

Keeping the 1Password extension up to date

Back in the old days (before June 2011), the 1Password browser extensions came directly with the 1Password application. If we needed to make a change to, say, the Firefox extension, we needed to release a new version of 1Password. Now, for all supported browsers on the Mac and for Safari and Chrome on Windows, we have a spiffy new browser extension. This extension is automatically updated through each browser’s extension management system, so you don’t have to lift a finger!

This allows us to update the extension much more rapidly than we update the main application. It is also why the Safari upgrade to 5.1.3 that comes with yesterday’s Lion update and the release of Firefox 10 a few days ago do not require new versions of 1Password to be released.

Each browser does things a bit differently, so I won’t review their individual update processes here. Instead, take a look at our dedicated guide with step by step instructions for installing and updating the 1Password browser extension.

Make the computer do the work

Keeping software up to date used to be a chore, but more developers and more systems are working diligently to make it easier. Things like the Mac App Store along with automatic checking for updates within operating systems and individual apps lets you pass most of the work to your computer. After all, computers should be the ones performing the tedious chores. You do still need to supervise the computer in this task to make sure it gets done, though.

It’s hardly anything new or insightful to say that keeping your system up to date is one of the best things you can do for your security, but that doesn’t make it any less true.

Staying ahead with security

We just released 1Password 3.8.11, and this seemingly minor update packs some important security changes under the hood. I’d love to share those with you all.

For a quick review, recall that keeping 1Password secure is a process, and one which requires that we at AgileBits keep our eyes on the horizon for potential threats to your security. We want 1Password to be as secure in the future as it has been up to now, so read on about some of the key changes we’re making.

Also remember that 1Password 3.8.11 for Mac from our website and 3.9.2 from the Mac App Store are both our latest versions for the Mac. Their version numbers are different for now mostly to help us keep them straight.

Increased and more flexible PBKDF2 iterations

PBKDF2 is a trick we use with 1Password to make it much harder for automated password guessing systems to crack your password if they get hold of your encrypted 1Password data file. Without PBKDF2, automated password guessing systems could guess hundreds of thousands, even millions, of passwords per second. PBKDF2 slows down the processing of your Master Password. There is no way to check a possible password without going through many cycles, called “iterations.”

This means that if you have a good master password along with our use of PBKDF2, your encrypted data is safe even if the bad guys get hold of your data file. If you want to learn more about PBKDF2, check out our previous overview on the topic: Defending Against Crackers: Peanut Butter Keeps Dogs Friendly, Too.
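For the technically inclined, here is a bare-bones illustration of PBKDF2 using Python’s standard library. Our actual key derivation involves more than this (salts, formats, and parameters specific to the 1Password data file), so treat it purely as a sketch of why iterations slow an attacker down.

import hashlib, os

password = b"correct horse battery staple"
salt = os.urandom(16)

# Every guess an attacker makes has to repeat all of these iterations.
key_1k = hashlib.pbkdf2_hmac("sha1", password, salt, 1000)
key_10k = hashlib.pbkdf2_hmac("sha1", password, salt, 10000)
print(len(key_10k), "byte derived key; ten times the iterations means ten times the work per guess")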

We were ahead of the game when we used PBKDF2 in the design of the current 1Password data file format more than three years ago. But we aren’t going to rest on our laurels. We have been phasing in an increase in the number of PBKDF2 iterations used from 1000 to 10000 or even more. Going from 1000 to 10000 iterations means that a cracker has to do ten times as much work to try a particular guess.

1Password 3.9, the Mac App Store version, can make use of a cool Lion-only feature that automatically calculates the optimal number of PBKDF2 iterations for use on your computer. When you first create a 1Password data file using 1Password 3.9, it will call the CCCalibratePBKDF function that is part of Apple’s new CommonCrypto framework. This will calculate how many PBKDF2 iterations are needed to force, say, a 500 millisecond delay on your machine. We then use this when creating the new data file. We do put an upper limit on these, because the files you create on your super powerful Mac Pro will still need to be used on other potentially less powerful devices that you sync your 1Password data file with.
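If you are wondering what such a calibration might look like, the sketch below times a probe run and scales it to a target delay. It is my own approximation of the idea, not Apple’s CCCalibratePBKDF.

import hashlib, os, time

def calibrate(target_ms=500, probe_iterations=20000):
    password, salt = b"probe", os.urandom(16)
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", password, salt, probe_iterations)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return int(probe_iterations * target_ms / elapsed_ms)

print("iterations for a ~500 ms delay on this machine:", calibrate())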

1Password 3.8 needs to run on Snow Leopard as well as on Lion, so it doesn’t use the same mechanism as the MAS version. But in 1Password 3.8.11 we have set things so that a newly created data file will now use 10000 iterations instead of 1000.

These new settings apply only to newly created 1Password data files. You will have to be patient before we can provide a rock solid way of upgrading an existing data file. We need to make absolutely sure that no matter what may go wrong during a data file upgrade, you will not lose any data, and that testing simply takes time.

Groundwork laid long ago

We have been anticipating this increase in PBKDF2 iterations for a while. One thing that we need to make sure of is that every version of 1Password you may sync your data with will be able to cope with increased iterations. Otherwise, a 1Password data file created with, say, 1Password 3.9 couldn’t be unlocked on other systems, but we’ve had the infrastructure for this change in place for more than a year.

This means that new 1Password data files will also work with current versions of 1Password on iOS, Windows, and Android. It also means that those using 1Password 3.6 on Leopard or Snow Leopard will have no problem unlocking data files created using our latest version. Users of 1Password version 2 (are there any still out there?) will still be able to work with 1Password data files that have already been created, but will encounter problems using a data file that was created with the very latest systems.

HTML export and Login Bookmarklet are on their way out

It’s time to say good-bye to a couple of features that won’t stand up to the anticipated threat environment. One feature, loved by many, is the Login Bookmarklet. This was originally designed as a way to get some 1Password functionality into browsers we didn’t support at the time. Before we had 1Password for iOS, this could be used to kinda-sorta get 1Password data into browsers that didn’t support 1Password directly.

The data in the 1Password Bookmarklet is very well encrypted, but the password for it is not secured using PBKDF2. This means that if the Bookmarklet were to be captured it would need a very strong password on it to resist attack. Because the Login Bookmarklet lives in the browser’s bookmarks, there are more opportunities for it to be captured. Given these two issues, it is time to phase the bookmarklet out. Existing bookmarklets, already in the browser, will continue to work if users decide to keep them. But from this point onward, you will not be able to create new ones.

The story is similar for 1Password’s Encrypted HTML export feature. The passwords for those HTML files are also not protected by the PBKDF2 technique. But the good news is that our much-loved 1PasswordAnywhere feature will continue to work. 1PasswordAnywhere actually uses the same data file as the 1Password application itself, so there are no worries about its data format.

The Login Bookmarklet and Encrypted HTML export features were meant as temporary measures until something better could be put in place. 1PasswordAnywhere, 1Password for non-Mac platforms, and our 1Password extension for Google Chrome are those better ways of doing things.

Password strength indications

The last change I want to talk about is a perfect example of security in one area conflicting with security in another. The fact that you can list and sort your Logins by password strength means that you can easily see which of your passwords need to be updated. This is a very good thing for your security, but as we’ve described before, the information by which you can sort data is not encrypted in the current data format.

The security concern here is that if someone captured your 1Password data file, they wouldn’t be able to break its encryption but they could learn that you had a weak password for www.example.com. If they can also guess your username, they could try weak passwords logging into www.example.com. Fortunately, the real threat is far smaller because almost every website limits login attempts. But more importantly, we have been aware of this security trade-off since we designed the current format.

What we have been waiting for to resolve this trade-off is more powerful computers and cleverer programming techniques. Now that we have both, we’re implementing a new approach: starting with 1Password version 3.8.11 on Mac and 1.0.9.BETA-238 on Windows, password strength is not stored in the data file at all, but is recalculated by 1Password as needed.
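To see why nothing needs to be stored, consider the crude sketch below. It is not the strength algorithm 1Password uses; it just illustrates that strength can be computed from the password itself whenever it is needed.

import math, string

def estimated_bits(password):
    # A rough upper bound based on length and character classes.
    pool = 0
    if any(c in string.ascii_lowercase for c in password): pool += 26
    if any(c in string.ascii_uppercase for c in password): pool += 26
    if any(c in string.digits for c in password): pool += 10
    if any(c not in string.ascii_letters + string.digits for c in password): pool += 32
    return len(password) * math.log2(pool) if pool else 0.0

print(round(estimated_bits("Password1"), 1), "bits (a rough upper bound)")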

If you sync your data across different versions of 1Password, you may see some oddities where you would expect password strength to show up, until those versions fully catch up with 3.8.11. That is, 1Password 3.9.2 from the Mac App Store (at the moment) will incorrectly treat the lack of a strength indicator as a lack of strength. So please have some patience with potentially misleading strength indications on versions that haven’t yet caught up to 1Password 3.8.11.

Repeating myself

We’ve always known that security is a dynamic process and that we would need to make these changes at some point. We remain vigilant about changes in computing and the threat environment so that you can rest assured that your data is safe.

Steamed up and ready to change passwords

Steam logo
The details are still vague, but it appears that the encrypted passwords of 35 million Steam users have been captured by bad guys. Note that there were two breaches: one was of the Steam forums, the other of their main user database. I am just discussing the latter here, as it involves many more users.

The passwords in the captured database were “hashed and salted”, which means that if you were using a strong password (say one generated by 1Password’s Strong Password Generator) you should be unaffected. Also if your password there was only used for Valve Corporation’s Steam game platform, then you don’t need to change it on other sites. Valve has not released details about exactly how the passwords are salted and hashed, so we should assume that weak passwords there are still vulnerable to crackers.

Tips for checking for duplicate and weak passwords

It’s really important to have strong and unique passwords, which is why we’ve written about them before. You can read more about finding duplicates in 1Password for Mac and finding them using 1Password for Windows.

But for the very short version: in 1Password for Windows you can sort your passwords by strength, and in 1Password on the Mac you can search for specific passwords, which can help you find duplicates.

“Hashed and salted”

Websites should store your passwords in an encrypted format, typically using a “hash” function. The crucial characteristic of a hash algorithm is that it is infeasible to calculate the original password (or other data) from the hash. For example, if we take the string “My voice is my passport, verify me” and run it through the (outdated) MD5 hashing algorithm, we get “7be5e25ce0fe807127c694c9bcb0008b”. If you have no prior reason to suspect what the password is, there is no feasible way of computing this backwards.

Now suppose that someone has used the most common password out there, “123456”. The MD5 hash of that is “e10adc3949ba59abbe56e057f20f883e”. Couldn’t the bad guys just compute the hashes of some common passwords and then look for those hashes in the database? A quick scan of the database for “e10adc3949ba59abbe56e057f20f883e” should get you all of the users who have “123456” as their password.

This is where salting comes in. Systems add a random something, called a “salt”, to the password before hashing it. So if the random salt for a particular user is “4c8x”, then what gets hashed is “4c8x123456”, and what gets stored is both the salt and the hash of the salted password. Maybe something like “4c8x+70914eddcc1e5ad56f18076f7d2433cf”. The salt isn’t secret, but because it will be different for each user, the attacker can’t simply pre-compute the hashes for common passwords. It also means that if two users have the same password, the hashes will be different.
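Here is a toy version of that scheme in Python, mirroring the example above. MD5 appears only because the article uses it for illustration; a real system should reach for a slow, salted key derivation function instead.

import hashlib, os

def store(password):
    salt = os.urandom(4).hex()                       # random, per-user, not secret
    digest = hashlib.md5((salt + password).encode()).hexdigest()
    return salt + "+" + digest                       # keep the salt next to the hash

def check(password, stored):
    salt, digest = stored.split("+")
    return hashlib.md5((salt + password).encode()).hexdigest() == digest

record = store("123456")
print(record)                    # same password, different record every time
print(check("123456", record))   # True
print(check("letmein", record))  # False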

Salting passwords is pretty much essential. Any site that isn’t salting passwords before hashing isn’t, well, worth its salt. Databases of this sort do get stolen, and the designers of these systems need to take that into account. It’s nice to know that Steam didn’t make the same mistakes as Sony.

For higher security passwords, and for things that attackers have easier access to, salting isn’t enough. For those cases (like your 1Password Master Password) a cracker-thwarting key derivation function is needed. I’ve written about our use of PBKDF2 for those who would like to understand what we do to protect your Master Password.

Facebook and CAPS-LOCK: Unexpectedly Secure

It has recently been noted over at ZDNet that if your Facebook password is PattyAndMolly, Facebook will also accept pATTYaNDmOLLY as a valid password. This may initially seem like something that weakens users’ security. However, it is actually a good thing.

Facebook designed their system this way to help people log in even if they have their Caps-Lock key on (or had it on when they created their passwords for Facebook). The Caps-Lock problem is remarkably common, and it’s not at all surprising that this is at the top of our list of things to check in our “I forgot my master password” document.

My overall point in this post (which I probably repeat too much) is that sometimes our intuitions about security run counter to what we find when we look at things more deeply. So let’s look at this case a bit more deeply and explore how it impacts security.

How does Facebook do it?

I assume that when a user enters a password that fails, the Facebook login procedure will retry with a modified version of what the user entered. That is, Facebook is really just trying again on behalf of the user, but as if the Caps-Lock key had been set differently. If you give Facebook your password as PattyAndMolly and that login fails, Facebook will immediately retry with pATTYaNDmOLLY.

Of course 1Password users will be using strong random passwords for things like Facebook instead of passwords like PattyAndMolly, but I will stick with this example to illustrate what is happening.

Let’s return to what Facebook does when a user enters a password that doesn’t work. The Facebook login system will say to itself, “Hmm, the user gave me an incorrect password. Let me take what the user gave me and try again, but this time pretending the Caps-Lock button was pressed first.”
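In code, my guess at the retry might look something like the sketch below. I have no inside knowledge of Facebook’s systems; the hashing here is just a stand-in for their real salted-hash comparison.

import hashlib

def hash_password(password, salt):
    return hashlib.sha256((salt + password).encode()).hexdigest()

def login(supplied, salt, stored_hash):
    if hash_password(supplied, salt) == stored_hash:
        return True
    # Retry as if Caps-Lock had been on: swap the case of every letter.
    return hash_password(supplied.swapcase(), salt) == stored_hash

salt = "4c8x"
stored = hash_password("PattyAndMolly", salt)   # only one hash is ever stored
print(login("PattyAndMolly", salt, stored))     # True
print(login("pATTYaNDmOLLY", salt, stored))     # True, thanks to the retry
print(login("pattyandmolly", salt, stored))     # False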

Working with this assumption about how the Facebook login system works, we need to look at how Facebook’s policy might make things easier (or not) for an attacker. There are three cases to consider.

Off-line attacks

If someone captures Facebook’s database of encrypted passwords, the attacker will be able to use his or her own system to have a go at cracking the passwords. This is called an “off-line” attack.

The database will have only one encrypted entry per user. And so a password guessing program will still need to try all of the combinations, including both PattyAndMolly and pATTYaNDmOLLY. This is because the underlying system is still only accepting one form of the password, even if the login system that a user interacts with takes a second guess.

We can see that in this case, the Caps-Lock transformation doesn’t weaken security.

On-line attacks

If an attacker must guess at various passwords over the network by connecting to Facebook’s login mechanism, then before they can try more than a handful of guesses, Facebook’s lock-out and throttling mechanism will come into play. If you enter a password incorrectly too many times, Facebook will deliberately slow down (throttle) how many login attempts you can make in a minute. It might even refuse to process any more login attempts (lock-out) and require that you go through a different login procedure.

So unless an on-line attacker is extremely lucky, throttling or lock-out will kick in long before any gain from the system trying multiple versions of the password can benefit the attacker.

On-line, but without throttling

There is a third possibility that we need to consider. Suppose that an attacker is able to get behind the throttling and lock-out system, but still tries to guess passwords using the remainder of Facebook’s system. That is they can query Facebook’s password checking system without having to worry about throttling or lock-out. Does Facebook’s transformation of failed passwords help the attacker?

The answer, again, is “no”. This will not help the attacker. This is because for every password the attacker tries, two passwords need to be checked. This doubles the checking time. It makes no difference to an attacker if it takes one second to check each of PattyAndMolly and pATTYaNDmOLLY, or it takes two seconds to check only one of those while having Facebook perform the second check for them.

Presumably Facebook uses something like PBKDF2, so trying two passwords instead of one has the effect of doubling the PBKDF2 iterations.

Again, in this third case we find that adding in the Caps-Lock transformation does no harm.

Why is it good for security?

I think that I’ve covered why Facebook performing this kind of transformation (assuming it is implemented along the lines that I imagine) does no harm. But why is this good for security?

It certainly is a convenience for users who have mishaps with the Caps-Lock key. But the actual security gain comes from reducing the number of password reset requests that Facebook needs to handle. Processing a password reset is fraught with difficulty. It often involves a secret (typically a link or a temporary code) being sent by email. The email can be intercepted, or the user’s email account may be compromised.

Password reset requests really are a common way of attacking services like Facebook. So Facebook needs to check and audit those requests carefully. The fewer unnecessary password resets there are, the easier it will be for them to spot malicious attempts.

Teaching bad habits or scaring users

There are two real downsides to this policy that I see. The helpful Caps-Lock transformation can train users to be sloppy about case. That is, if users get too accustomed to things like this, they may form the opinion that case doesn’t matter in passwords. I don’t see that as a big danger here, but there are other instances when training people to behave unsafely may be a very bad thing indeed.

Child illusion speed bump
One spectacular example of training people to behave unsafely is painting roads so that it appears that there is a person on them in a misguided effort to get drivers to slow down. Imagine what happens when drivers grow accustomed to these optical illusions and start adjusting their behavior.

In the case of Facebook’s treatment of Caps-Lock, we have much less to worry about. It is unlikely to affect the general behavior of many users. Still, we must always be mindful of the dangers of teaching people bad habits.

The other security downside to Facebook’s practice is that when users discover this, they tend to (incorrectly) perceive it as a weakness. Scared and panicked people make very poor security decisions in other places. This, by the way, is why we no longer use the same Caps-Lock mechanism for 1Password master passwords.

A few more notes

Facebook’s scheme is a bit more complicated than presented here. With login attempts from mobile devices they also will retry failed logins by changing whether the first letter of the password is capitalized. This is because many mobile devices will put in automatic capitalization, even where it gets in the way.

Also, I should note that I have no inside information about how Facebook manages these things, and I have taken some guesses about how they work. However, the kinds of designs that I’ve assumed have precedents. These aren’t wild guesses at how things work.

My main point, if you will forgive me for repeating it, is that security issues can be subtle — sometimes even counter-intuitive. We are always on the alert for ways that we can make the secure thing to do the same as the easy thing to do even if the underlying system is more complex.

Convenience is Security

We often hear people say that there is a trade-off between security and convenience. Although there is some truth to that, I want to explain why, more often than not, security actually requires convenience. I should warn you, though, that this is going to be one of my most boastful articles to date.

Users of 1Password will certainly have experienced for themselves that security and convenience go hand in hand. Our design goal has always been to make it easier for people to behave securely, rather than insecurely. We have customers who use 1Password only for the convenience, yet they enjoy real security benefits as a result.

We’ve made this point in several places (it’s something I often say in some of our forum discussions), but it always comes across better if I quote someone else instead of myself. So here is noted security researcher Matt Blaze commenting in Why (special agent) Johnny (still) Can’t Encrypt. He discusses a police radio system which had users frequently talking over unencrypted channels while they thought their communication was secured:

The unintended cleartext problems […] first and foremost remind us that cryptographic usability matters. All the security gained from using well-analyzed ciphers and protocols, or from careful code reviews and conservative implementation practices, is lost if users can’t reliably figure out how to turn the security features on and still get their work done.

This is not news to researchers in the field, or at least it shouldn’t be. It certainly isn’t news to us, because we’ve designed 1Password with this fundamental truth in mind from the beginning. If some security policy or mechanism becomes onerous or confusing to use, the people who it’s designed to protect will circumvent it. If people are forced to use a difficult and confusing system, they are likely to make serious mistakes. At best, a security product should make it easier to get your work done. At worst, it shouldn’t make things prohibitively difficult to complete your tasks.

If researchers have understood this concept for decades, why does the view still persist that there is a trade-off between security and convenience? I have a few theories.

1. Rules instead of reasons

Dilbert.com

Most people just want to be told that something is secure and how to use it. They aren’t all that interested in how the thing works. You, reading this article or diving into the deeper parts of our documentation, are an exception. I have a pathological desire to explain things, and I love a job that turns my pathology into something good. But there is nothing wrong with the majority of users who have other interests. We need to make sure that our products work for those people who don’t take a great interest in how things work.

This brings us to rules of thumb. We all follow some rules of thumb without understanding the reasons behind them. Life is too short to investigate everything. Sometimes these rules persist when the reason behind them disappears. An example of this is the habit some people still have of underlining for emphasis. No printed document should have underlining in it, but the practice is a holdover from the days of typewriters. Here’s another apocryphal example:

I was having dinner at a friend’s house and, before he put the roast in the oven, he cut about an inch off from each end. I asked him why and he said that it makes the roast taste better. I said that I’d never seen this before, and he said that he learned it from his mother but never really asked about it. So he called his mother to ask why she always cut off the ends of the roast. She said, “Well, with the tiny oven we had back then, a large roast wouldn’t fit; so I had to cut a bit off.”

Rules, without reasons, can turn out to be wasteful or sometimes actively harmful.

Now time for a true story: Back in the days when banks were issuing credit cards to trees a friend of mine explained what he did with the credit cards he didn’t plan to use. He knew that the cards had to be signed to be valid (it said so right on the card!), so he never signed the ones that he didn’t plan to use. This, of course, made him less secure, because a stolen credit card could then be signed by the thief whose signature would easily match what was on the back of the card. But if you don’t think about the reason behind the rule, then his mistake was very reasonable.

Security systems (well, the good ones anyway) are designed by people who fully understand the reasons behind the rules. The problem is that they try to design things for people like themselves—people who thoroughly understand the reasons. Thus we are left with products that only work well for people who have a deep understanding of the system and its components. The fantastic designers and developers here at AgileBits would fall into the same trap if we didn’t constantly remind ourselves that we want to bring a secure password and information management experience to everyone.

2. Your security is my security

Helping other people be secure is a good thing in and of itself. But there is also a selfish motivation: Your security is my security.

I used to hear people say, “Well I don’t do anything sensitive with my computer, so I’m not worried about its security.” I hear that statement less now that more people are doing on-line banking and shopping, but let me use this example anyway. A compromised home computer that is connected to the network, even if there is no data worth stealing on that computer, gets used for other criminal activity. The computer can be used for sending spam, attacking other systems, or hosting fake pharmacy sites.

A compromised computer joins the arsenal of the bad guys even if they don’t do damage to its legitimate owner. This means that the cost of letting your computer be compromised is not borne by you, but is distributed over the rest of the net. Because of this distorted incentive system, many people are unwilling to take on the chore of security if they don’t see the benefit.

I think that many security developers have mistakenly assumed that people are willing to pay a substantial price in convenience for security, forgetting that most home users don’t actually feel the consequences of their own insecure practices. I suspect that one reason why the convenience versus security myth has persisted is that the system developers cared so much about security that they assumed that everyone else did too. These assumptions and their resulting obsession over security meant that little or no effort went into looking at usability.

3. Complex options

If you look at the Preferences > Security window in 1Password you will see seven different options.
This is more than we would like. You will also find that although 1Password is extremely powerful, there aren’t boatloads of “Advanced Options” either. For a product that has been under such intensive development (1Password for Mac has been updated over 150 times in its life) you might expect options ranging from “use Blowfish instead of AES” to “store my keys file on an external device”, or additional functions from “manage my Certificate Signing Requests” to “manage my hardware serial numbers”.

To help explain our reluctance to add these seemingly useful features, I’ll quote from an old (2003) article by Niels Ferguson and Bruce Schneier on why IPSec—an internet security technology—never met expectations:

Our main criticism of IPsec is its complexity. IPsec contains too many options and too much flexibility; there are often several ways of doing the same or similar things. This is a typical committee effect. Committees are notorious for adding features, options, and additional flexibility to satisfy various factions within the committee. As we all know, this additional complexity and bloat is seriously detrimental to a normal (functional) standard. However, it has a devastating effect on a security standard.

We will, of course, add features and options when they make things easier and more secure for a large portion of users. But we also resist the temptation toward feature bloat, even when it is “just an advanced option for those who want it”. The thinking that “well it’s just one option that most people can ignore” is fine when it really is just one option, but it never really is just one.

We do look seriously at feature requests; I don’t mean to suggest otherwise. But our concern for usability for everyone is why we tend to be very conservative about adding more options. Never fear, though. There are some great new things in the pipeline that will make 1Password even more useful for everyone. As it has been for more than five years now, 1Password is under active development, and we have some wonderful stuff for you to look forward to.

4. Because the myth is kind of true

Of course there is some truth to the convenience versus security myth. After all, the fact that we have passwords (or other authentication means) at all for websites is an inconvenience. So it would be absurd for me to completely deny the myth.

Most of the trade-offs we face are between security in one respect and security in another. For example, we could store more of 1Password’s indexed information in an unencrypted format (which would slightly speed up some processes) if we didn’t insist on decrypting only the smallest amount of information needed at any one time.

Why Wendy can encrypt

You may have met Wendy Appleseed. She is our sample user if you import our sample data (Help > Tools > Import Sample Data File). Wendy can get the full benefits of the top notch algorithms and protocols we use because we take her user experience very seriously; we see convenience as part of security. When we are presented with something that appears to be a conflict between usability and security, we take that as a challenge. Meeting that challenge is hard work, but we love it.

AES Encryption isn't Cracked


An otherwise excellent article over at The Inquirer has a very unfortunate title: AES encryption is cracked. AES is the Advanced Encryption Standard and is at the heart of so much encryption used today by governments, militaries, banks, and all of us. It is used by 1Password and, less directly, by Knox for Mac. It is the workhorse of modern cryptography, and modern computer chips even have components built in to allow AES to be used efficiently. If AES were to be found weakened in any meaningful way, it would be very bad news for a lot of people.

Before I get into what has happened, I’d like to quote from the research paper itself: “As our attacks are of high computational complexity, they do not threaten the practical use of AES in any way.”

And quoting the Inquirer’s interview with Andrey Bogdanov, one of the researchers, we learn:

“To put this into perspective: on a trillion machines, that each could test a billion keys per second, it would take more than two billion years to recover an AES-128 key,” the Leuven University researcher added. “Because of these huge complexities, the attack has no practical implications on the security of user data.” Andrey Bogdanov told The INQUIRER that a “practical” AES crack is still far off but added that the work uncovered more about the standard than was known before.

“Indeed, we are even not close to a practical break of AES at the moment.”

What’s the news

I’ve been trying to work through the actual paper and presentation slides by Andrey Bogdanov, Dmitry Khovratovich, and Christian Rechberger, who were visiting Microsoft Research when they developed this. And although this research is far from having any practical influence on the use of AES, it is actually fairly big news.

Cryptographers use the word “broken” in a very special way. If an attack on an algorithm can be computed with fewer computations than is required to check every single possible key, then the system is “broken”. Even if the improvement in the number of computations is negligible (as in this case) and even if other resources needed to get that small advantage are outrageously huge (again as in this case) it still gets called “broken”. But in this very specialized sense of the word “broken” the new research represents the first break of the full AES. It also displays the power of a technique developed earlier by the authors.

Impracticality #1: Impossible amounts of data

The authors calculate that the best attack using their technique on AES with a 128 bit key requires storing 2^88 bits of data. That works out to about 38 trillion terabytes of data. Although estimates are hard to pin down, this is more than all the data stored on all the computers on the planet.
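You can check that arithmetic for yourself:

bits = 2 ** 88
terabytes = bits / 8 / 10 ** 12      # 8 bits per byte, 10^12 bytes per terabyte
print(f"{terabytes:.2e} TB")         # about 3.9e13, i.e. the "38 trillion terabytes" above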

Impracticality #2: All for a two-bit gain

The number of encryptions that have to be performed to recover a 128 bit AES key is 2^126.1 instead of the 2^128 encryptions it would take to try all of the possible keys. This is a very small gain, as a 126-bit key (instead of 128 bits) would still take billions of years.
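A back-of-the-envelope check, using the researchers’ own scenario of a trillion machines each testing a billion keys per second, shows why this is still hopeless:

guesses_per_second = 10 ** 12 * 10 ** 9          # a trillion machines x a billion keys/sec
seconds_per_year = 60 * 60 * 24 * 365

def years_to_search(key_bits):
    return 2 ** key_bits / guesses_per_second / seconds_per_year

print(f"{years_to_search(128):.1e} years by brute force")     # about 1.1e10
print(f"{years_to_search(126.1):.1e} years with the attack")  # still about 2.9e9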

Impracticality #3: Lots of known plaintext needed

I may be misreading the research, but I believe that to discover an AES key, an attacker needs an enormous amount of known plaintext. That is, the attacker needs to already have a huge amount of information in both decrypted and encrypted form. I don’t know exactly how “huge” this will be, but I expect it to be far larger than any data anyone would or could encrypt using 1Password. I’m speculating here until I get a better grasp of things. Indeed, the amount is almost certainly related to the amount of data needed in “Impracticality #1”.

So none of this represents any threat to the practical use of AES for any purpose it is used for. An unfeasible amount of data needs to be stored in order to gain an insignificant improvement over just trying every key. But what it does do is highlight features of AES that make it subject to this kind of attack. Whether or not attacks based on this ever become any kind of real threat, we can bet that successors of AES will incorporate mechanisms to thwart them.

Where’s the meat? It’s in the middle

From here on out, I will try to explain some of what I understand from the new attack. There is much that I don’t understand of this, but I will give a broad outline and then wave my hands a bit. This part gets very technical, and I won’t be the slightest bit hurt if you stop reading here.

You may have heard of 3DES (Triple DES) which was used in many places before AES was settled upon as a replacement. The old Data Encryption Standard (DES) uses 56 bit keys. By the time we got into the 1980s it was absolutely clear that 56 bits was no longer enough for a key size. One could imagine (as many people did) taking two DES keys and just encrypting your data twice, first with one DES key and then taking that output and encrypting that with the second DES key. This, you might think, would get you the strength of a 112 bit key. It doesn’t.

It turns out that if you have a sample plaintext and ciphertext pair, what you can do is try every one of the 2^56 possible keys on the plaintext and also try every one of the possible keys on the ciphertext. You will then find that there is an overlap of results: something that the plaintext encrypts to under one key will be the same as something the ciphertext decrypts to under another. This will give you (pretty much) the two 56 bit keys. Looking for where the output of one encryption meets the input of the other decryption is why this is called a “meet-in-the-middle” attack (not to be confused with a “man-in-the-middle” attack, which is something else entirely). Note that in doing this, we “only” had to work through 2^56 keys twice. That is the same as working through 2^57 keys once. So double encrypting with DES only improves the security by one bit. This is why, to get 112 bit strength out of DES, we need to go through it three times; even though that allows for double the number of key bits, it is actually Triple DES.

Meet-in-the-middle diagram
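To make the idea concrete, here is a toy meet-in-the-middle demo in Python. The “cipher” is a deliberately silly 16-bit-key XOR construction, nothing like DES, but it shows how building a table from one side and searching from the other recovers both keys with roughly 2 × 2^16 work instead of 2^32.

import hashlib

KEY_BITS = 16

def toy_encrypt(key, block):
    # Derive a 64-bit pad from the key and "encrypt" by XOR (toy cipher only).
    pad = int.from_bytes(hashlib.sha256(key.to_bytes(2, "big")).digest()[:8], "big")
    return block ^ pad

toy_decrypt = toy_encrypt    # XOR is its own inverse

k1, k2, plaintext = 0x3A7C, 0x11F0, 0xDEADBEEFCAFEF00D
ciphertext = toy_encrypt(k2, toy_encrypt(k1, plaintext))

# Forward table: the plaintext encrypted under every possible first key.
middle = {toy_encrypt(k, plaintext): k for k in range(2 ** KEY_BITS)}

# Work backwards from the ciphertext and look for a meeting point.
for k in range(2 ** KEY_BITS):
    if toy_decrypt(k, ciphertext) in middle:
        print("candidate key pair:", hex(middle[toy_decrypt(k, ciphertext)]), hex(k))
        break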

Now back to AES. Ciphers like AES go through multiple rounds of scrambling and manipulating the data. They also have various internal states as they process a block of data with a key. If we find an internal variable that allows us to break the encipherment into two halves then it is possible to do a meet-in-the-middle attack on that. AES, along with every modern cipher, is designed with this in mind. It is designed with enough rounds and interactions among them so that a standard meet-in-the-middle attack will not be quicker than simply trying every key.

Instead of doing the traditional meet-in-the-middle attack, the new attack constructs entities that group internal states, potential keys which complement each other in specific ways, and ciphertext into what they call “bicliques”. By using these more abstract entities instead of an intermediate variable, the attack can avoid some of the limitations on meet-in-the-middle attacks and be effective over a greater number of rounds. By carefully selecting which potential keys go into which biclique, some computation can be reduced by avoiding any duplication of effort. I still haven’t managed to understand, even in overview, how and why these bicliques do what they do, so I can’t say much more.

Thanks for joining me

If you’ve read this far (including the last section) then I thank you for joining me through my process of trying to understand this new attack on AES. Even though it has no practical implications, I find this stuff oddly fascinating. If you’ve just skipped right to the bottom (not an unreasonable thing to do at all) then let me say again everyone who has studied this, including the authors of the attack, state that this has no implications whatsoever for the practical use of AES.

Better Master Passwords: The geek edition

I’ve always wanted to write a technical followup to an earlier post, Toward Better Master Passwords, but this time going into some of the math behind it. Today’s xkcd comic does that for me:

Indeed, what took me nearly 2000 words to say in non-technical terms, Randall Munroe was able to sum up in a comic. This just shows the power of math, but that’s another issue. So for those of you who want to understand the comic and see how it relates to my earlier post, read on. But first read or re-read my earlier post on strong master passwords.

If, like most sane people, you don’t want to dive into a technical discussion, then stop here and just read the original, non-technical, post that says the same thing as the comic. It’s also where the practical advice is.

The only thing I’ll restate

There is one concept (well, actually two concepts) from the Toward Better Master Passwords post that needs to be restated. It is central to everything that follows:

The strength of a password creation system is not how many letters, digits, and symbols you end up with, but how many ways you could get a different result using the same system.

This embodies two things that we need to take into account when looking at the strength of some components of security: Kerckhoffs’s Principle and entropy.

Kerckhoffs’s Principle

Kerckhoffs’s Principle states that you should assume that your adversary knows as much about the system you use as you do. In this case it means that if you are following advice about how to generate strong memorable passwords, the people who will be trying to break that password are at least as familiar with that advice as you are.

I can’t over-emphasize the point that we need to look at the system instead of at a single output of the system. Let me illustrate this with a ridiculous example. The passwords F9GndpVkfB44VdvwfUgTxGH7A8t and rE67AjbDCUotaju9H49sMFgYszA each look like extremely strong passwords. Based on their lengths and the use of upper and lower case and digits, any password strength testing system would say that these are extremely strong passwords. But suppose that the system by which these were generated was the following: Flip a coin. If it comes up heads use F9GndpVkfB44VdvwfUgTxGH7A8t, and if it comes up tails use rE67AjbDCUotaju9H49sMFgYszA.

That system produces only two outcomes. And even though the passwords look strong, passwords generated by that system are extremely weak. Of course nobody would recommend a system that only produced two outcomes, but people do recommend systems that produce a far more limited number of outcomes than one might think by inspecting an individual result of the system. This is because humans are far more predictable than we like to believe.

Entropy

What unit do we use to measure the number of different results you can get from a system? The answer to that is “bits of entropy”. The silly system I listed above can get us two different results. We can represent two different outcomes using one binary digit (bit). Passwords from that system have just one bit of entropy.

Now suppose we had a similar system that involved rolling one die. That would lead to six possibilities. Six outcomes can be represented in three bits (with a little room to spare). The actual number of bits is closer to 2.58. (And for those who really want to know where that number came from it is the base-2 logarithm of 6.)

One feature of using bits of entropy as a measure is that each bit represents a doubling of the number of possible outcomes. Something with 10 bits of entropy represents 1024 possibilities, while 11 bits will double that to 2048 possible outcomes. There are many reasons that we use bits instead of the number of possibilities. I won’t go into the mathematical reasons, but one nice result is that it gives us manageable numbers. In cryptography we routinely deal with things that have 128 bits of entropy. 128 bits would represent 340282366920938463463374607431768211456 possible outcomes. It’s hard to think about or compare such numbers.
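If you want to play with the unit conversion yourself, it is just a base-2 logarithm:

import math

for label, outcomes in [("coin flip", 2), ("die roll", 6), ("word from a 2048-word list", 2048)]:
    print(f"{label}: {math.log2(outcomes):.2f} bits")
# coin flip: 1.00 bits, die roll: 2.58 bits, word from a 2048-word list: 11.00 bits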

Working through the comic

Now let’s look at a few things in the first pane of the comic. Let’s start with the stuff I’ve put into the pink box. The single small gray square in the bottom of the pink box shows that the choice between capitalizing or not capitalizing the word adds one bit of entropy. Of course people could add more entropy by possibly capitalizing other letters, but people don’t capitalize randomly. They do so at the beginning, at the end, or sometimes at internal word or syllable boundaries.
If people capitalized randomly that would add a lot of entropy, but capitalizing randomly would make the password impossible to remember.

Now let’s look at the stuff that I’ve put in the blue box. Here 16 bits are awarded to picking an uncommon, but non-gibberish, word. That would imply that the person picked a word in a truly random fashion from a list of 2^16 (65,536) words. I don’t believe that people would be truly random in their choice of base words, so I would assign fewer bits to this choice, but I’m not going to quibble about a few bits here and there.

The red box covers three “tricks”: adding some punctuation and a numeral to the end of the password, and not knowing which order they come in. Adding a numeral gives us roughly three bits of additional entropy, and punctuation gives us four. We get one additional bit by not knowing which comes first, the digit or the punctuation.
I didn’t put a box around the common substitutions and misspellings that change something like Troubadour into Tr0ub4dor; three additional bits for those seems about right.

When we add up all of the bits of entropy that this system uses, we get 28 bits. Of course the system can be made more complex and may go up a few bits, but almost certainly at a great cost of memorability.

For those who recall some laws of logarithms, you now can see an additional benefit for using bits as our unit instead of numbers of possible outcomes: We can add the bits contributed by each choice instead of having to multiply the number of possibilities. It is very convenient to say that such-and-such adds X bits of entropy.

Now contrast this with using a sequence of random common words. It is absolutely crucial that the words be chosen in a truly random fashion. Here 11 bits are assigned to each word. That means that the list of common words used has 2^11 elements; that is, it is a word list of 2048 words. This gives a more memorable password with 44 bits of entropy.
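Here is a sketch of generating such a passphrase the right way: truly random choices from a fixed list. The wordlist path is hypothetical; any list of 2048 distinct common words gives 11 bits per word.

import math, secrets

with open("wordlist.txt") as f:              # hypothetical file of 2048 common words
    words = [line.strip() for line in f if line.strip()]

passphrase = " ".join(secrets.choice(words) for _ in range(4))
print(passphrase)
print(f"{4 * math.log2(len(words)):.0f} bits of entropy")    # 44 bits for 2048 words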

Cracking time

Depending on what sort of access the bad guys have, they can test from 1000 passwords per second to hundreds of thousands per second. For more information on how we slow this down, see the post on PBKDF2. Only you can decide how much effort someone will put into cracking your master password if they get hold of your data.

Using Diceware alone with five words (you know what I’m talking about because you read the earlier post) you will get 64 bits of entropy. If you add your own private scheme (say it contributes 10 bits) then you will have 74 bits of entropy, which would take about 500 million years to crack at one million guesses per second. Not everyone needs that kind of strength in a master password.

Of course if you do have Three Letter Agencies willing to spend hundreds of millions of dollars specifically on getting at your secrets, then you have problems bigger than what can be managed through software alone. Indeed, let me point you to another favorite xkcd comic.

[Updated: August 11 to correct my colorblindness error and spelling of Randall Munroe's name. — jpg]

JavaScript grows up and plays in a sandbox

About 12 years ago I was fighting a losing campaign against JavaScript’s ubiquity. There was a time when JavaScript was a security nightmare, and I ranted and raved against it. Things have changed enormously since then, all for the better. A few of the slogans that I and my colleagues shouted from the rooftops in the previous millennium seem to have stuck in the public mind: “Using JavaScript is insecure” or “JavaScript can’t be used for encryption”. Those slogans are no longer true. This post talks a little bit about how things have changed.

What has happened over the past dozen years falls into two categories. The first has to do with the way browser developers look at JavaScript. All of the vulnerabilities in handling JavaScript taught browser developers to be much more systematic and careful with how they handled it. The design of JavaScript and of browsers has gone through a lot of change during this time.

Playing in the sandbox

Rhino Sandbox

The second category of change is more recent, and it really changes the game. The buzzword is “sandbox”, and it needs a lot of clarification because it is used all over the place in different ways. Roughly the idea is that something can do what it wants within a limited area and cannot really interact with anything outside of that sandbox. It is a safe place to play.

Increased browser sandboxing affects 1Password in two very big ways, and these are because of two different kinds of sandboxing relevant to us.

Sandboxing the browser

The first sandbox that matters for us is that browsers are becoming increasingly restrictive in what bits of your system they can interact with. The way that 1Password interacts with Safari 5.0 and earlier is profoundly different than the way that 1Password is allowed to interact with Safari 5.1 and above. Prior to Safari 5.1, there were “hooks” in Safari that allowed external applications to communicate with Safari. But in Chrome, from its inception, and in Safari from version 5.1 that kind of communication isn’t allowed. This is a major security enhancement; it limits the damage that a browser exploit can do. A successful browser exploit now can only interact with data and processes that are within the browsers’ sandboxes.

The upshot of this is that we have had to entirely redesign our Safari extension to fit within the tighter, better security model of Safari. It means that 1Password needs to work within the browser to do its job. That work must be done using only CSS, HTML, and JavaScript. Clearly for 1Password most of that work will be in JavaScript.

Sandboxing the extension

Browser extensions shouldn’t step on each others’ toes. We need to prevent this not only for security reasons, but for stability reasons. Extension X shouldn’t be able to see the database created by extension Y. So each Safari extension is also put in its own sandbox. This not only protects others from a misbehaving extension, but it protects the extension from outside interference from other sources, including JavaScript in the web page a browser is visiting.

JavaScript and encryption

People used to say that you can’t do encryption in JavaScript (because it doesn’t have the right data types, and because, as an interpreted language, it is far too slow). I suspect that most readers will have noticed that computers have gotten a little bit faster over the past 10 years. So while JavaScript may not be one’s first choice of language for coding encryption routines, there are now well developed, publicly available implementations of all of the algorithms and protocols that we rely on.

Users of 1PasswordAnywhere have, over the years, already experienced JavaScript opening their 1Password data.

Times, and rules of thumb, change

It seems like yesterday (though it was actually years ago) that I was telling people to distribute documents as PDFs instead of word processor documents, because PDFs couldn’t be exploited in the same way. (As an aside, I would like to mention that there is a new security update for iPhone which fixes an exploit that can live in a malicious PDF file.) It’s also true that the password advice I would have given 10 years ago is much different than what I would give now. The tools and the threats have changed so much. So it is with JavaScript.
