Blizzard and insecurity questions: My father’s middle name is vR2Ut1VNj

By now most people will have heard that email addresses, hashed passwords, and some other data has been stolen from Blizzard’s servers, and people are advised to change their passwords there. As unfortunate as this story is, it serves as yet another good reminder of why we very strongly encourage people to not reuse the same password on multiple sites and services.

One thing we don’t know yet is exactly how well hashed the passwords are. From Blizzard’s announcement, we do know that the passwords were salted and hashed, but we don’t know whether it was simple salting (and how big the salt is) or whether they used something like PBKDF2. Their announcement tells us:

We use Secure Remote Password protocol (SRP) to protect these passwords, which is designed to make it extremely difficult to extract the actual password, and also means that each password would have to be deciphered individually

That tells us that the passwords were hashed and salted (see “A salt-free diet is bad for your security” for an explanation of what that all means). So Blizzard has certainly done a far better job in protecting users than, say, LinkedIn, which did not salt at all, but we don’t know exactly how much better. Unless I have misunderstood something, I believe that their use of SRP, while cool and good for some purposes, is not relevant to this particular case.
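To make the salting difference concrete, here is a minimal sketch in Python. This is purely illustrative (it is not Blizzard’s or LinkedIn’s actual code, and a real site should use a slow scheme such as PBKDF2 rather than bare SHA-1), but it shows why unsalted hashes can be attacked all at once while salted ones must be attacked individually:

```python
import hashlib
import os

def unsalted_hash(password):
    # Unsalted: identical passwords always produce identical hashes,
    # so one precomputed table attacks every account at once.
    return hashlib.sha1(password.encode()).hexdigest()

def salted_hash(password, salt=None):
    # Salted: a per-user random salt means each stolen hash has to be
    # attacked individually.
    if salt is None:
        salt = os.urandom(16)
    return salt, hashlib.sha1(salt + password.encode()).hexdigest()

print(unsalted_hash("hunter2") == unsalted_hash("hunter2"))  # True
_, h1 = salted_hash("hunter2")
_, h2 = salted_hash("hunter2")
print(h1 == h2)  # False: the random salts differ
```

Two users with the same unsalted password are immediately visible as such in a stolen database; with per-user salts, they are not.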

It’s the security questions that worry me

The Blizzard data theft also includes “the answer to the personal security question”. This is a bigger problem because even people who are careful to not reuse the same password at multiple sites may provide the same answers to “security questions” everywhere.

We’ve all seen – and probably made use of – schemes on websites that will let you reset your password if you can answer a few security questions (I’ll drop the scare quotes from here on out, no matter how poorly I think the name fits what they do). These questions typically ask things like your mother’s maiden name or the street where you lived when you were 10 years old.

Not good secrets

In March 2010 someone going by the handle “Hacker Croll” gained control of President Obama’s and other celebrities’ Twitter accounts by “simply working out the answers to password reminder questions on targets’ e-mail accounts” according to the BBC. It was neither the first nor the last time that so-called security questions have been used to compromise accounts. Quite simply, the information in these questions and answers is not really very secret. Parents’ names, for example, are available on birth certificates (which are a matter of public record in many places), and other information can often be gleaned with a bit of research. In the case of people who’ve written autobiographies, the information can all be in one place.

They encourage reuse

The point of security questions is that they are something that the user can remember because they are true things that the user knows; it is exactly that which makes them easy to guess. If my father’s middle name is Walter, then that is what I would normally answer every place that I am asked.

Naturally, the whole problem is solved if you don’t have to remember your security questions and answers yourself—you can just let 1Password do the remembering for you. I’ll get back to that later, but first I’d like to point out a couple of other reasons (beyond guessability and reuse) why being careful with security questions is so important.

Too Much Information

You, for whatever reason, may not wish to let the world know what your father’s middle name is. Yet security questions may ask you to provide exactly the sort of information you would rather not share. Although I may not consider a particular bit of personal information to be sensitive or confidential, you may very legitimately feel otherwise.

Reasonable people can disagree about whether they feel that revealing their father’s middle name is too much information. But I think that we will all agree that “your preferred Internet password” is far too much to ask. Our friends over at 37signals reported on what they found when setting up an account on some site.

World's worst challenge question

As you can see, one of the challenge questions is “What is your preferred internet password?” When Roustem pointed this one out to me, I was truly at a loss for words, which is not something that happens very often.

Not stored well

The security questions are never stored more securely than your password for the site, and often they are handled far less securely. As noted above, the (unencrypted) answers to Blizzard security questions were stolen. It really isn’t surprising that these things aren’t encrypted or hashed well: often your security questions and answers are visible to people who work for the organization. Armed with the knowledge that a human may well see the security question and answer, some people have suggested (in the comments) clever (and often profane) texts.

Strong and unique answers with 1Password

Instead of telling the site the real name of your first pet, use 1Password’s Strong Password Generator to create a random and unique name for that pet, and store that information in the Login entry in 1Password. It may be tempting to use the same random and unique password that you use for the site itself, but there are a couple of reasons not to do that. I’ve already mentioned the first reason: these security question responses are not stored as securely as passwords; they might appear in email or be given out over the phone. The second reason is that if you change your password for a site, you are likely to forget to update the security question as well. This can leave you with a site security question that you have no record of and no way to remember. (I have learned this from experience.)
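As a sketch of what such a generated answer amounts to (the helper here is hypothetical; in practice you would just use the Strong Password Generator itself), Python’s `secrets` module can produce an answer that has no connection to your real life:

```python
import secrets
import string

def random_answer(length=16):
    # Hypothetical helper: generate a random "pet name" that no
    # amount of research into your actual life will ever uncover.
    alphabet = string.ascii_lowercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Store the question together with this answer in your Login's notes.
print("First pet:", random_answer())
```

The point is simply that the answer is drawn from a random source rather than from your biography, so it is both unguessable and naturally unique per site.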

1Password does many things automatically for you, but this isn’t one of them, so you will need to help 1Password along to get this information stored properly. There are two things that 1Password will need help with. The first is that the strong password generator only fills password fields, and so can’t automatically fill most security question fields. The second is that you will need to add the security question and answer to a note within your Login.

I’ve spoken with my friends here, and it seems like we all have our own different work flows for handling this. I will step through the way that I do this.

1. Save the Login first

JSE save Login

Before we go about getting an answer for a security question, we need to make sure that the Login for the site exists within 1Password. So I will save the new Login from the browser extension, even if I don’t yet have the security question part filled out. Remember that you can save a Login with data you have filled out on a form before you actually submit the form. Just go to the browser extension and click on the “+” button in its upper right corner. (Or on Windows, depending on your browser, use the “Save” button.)

2. Invoke the Strong Password Generator

I like to invoke the Strong Password Generator from the 1Password application itself when dealing with security questions. So I open and unlock the 1Password application and go to File > New Item > New Password in the menu bar.
This will open up the Strong Password Generator. (It is important to launch it this way, because if you use the “generate” button within an item, the Generator will replace an existing password.)

Invoke generator

In 1Password for Windows, you can launch the Strong Password Generator through the Internet Explorer browser extension. For other browsers, I recommend getting to the Strong Password Generator through the application itself using File > New Item > New Password.

Strong Password Generator on Windows

3. Generate and Copy a new password

The Strong Password Generator will save the passwords that it creates in the Generated Password Vault, so if you ever need to go hunting for this one there (you shouldn’t have to, but it’s good to know it’s in there), you should set a useful Title for your generated password. In my example below I have that as “Security question for”.

In some cases the answers to security questions are meant to be read by humans. For example, you may need to read one aloud during a telephone conversation as part of a password recovery process. In these cases, it may be useful to use the Advanced Options in the Generator to select a pronounceable password.

We will need to make sure that the generated password gets copied to the clipboard. You can do this either by clicking on the “Copy” button or by making sure that the “Copy password to clipboard on completion” check box is set. Then you can save this item to your Generated Passwords vault.

4. Paste the generated stuff into the notes field in the Login

Now we need to find and edit the Login for the site to include both the security question and your newly generated random answer.
Paste note into JSE
You can do this either within the browser extension (on the Mac or in the Safari extension on Windows) or within the Application. In the browser extension, just search for the Login, click the little arrow at the right, and then the Edit button.

In my example the Strong Password Generator gave me vayt-jebs-yaf-g, so I paste that into the notes field in the Login along with the question. After I save the Login, my notes field for the login will say “First pet: vayt-jebs-yaf-g”.

One point I should make here is that I store these in the “notes” within a Login (instead of within custom fields) because notes will be preserved if the Login is updated later in the browser. Form fields will be replaced on such an update.

5. Paste security answer into the web form

Now that we have everything squared away within 1Password, we need to make sure that the website asking the security question knows that your pet’s name is vayt-jebs-yaf-g. So you will need to paste that information into the web page as your answer to the security question.

At your fingertips

The principal purpose of security questions is password recovery: they are what you are supposed to use if you forget your password. With 1Password you shouldn’t have to worry about forgetting a password for a site because 1Password remembers it for you. However, if you do store security questions and their answers within 1Password, you must take extra care to ensure that you never lose access to your 1Password data. You are storing your passwords and your password recovery secrets in the same (highly secure) basket.

Being more helpful

We clearly need to look at how we can make 1Password more helpful in this process. We’ve been hoping that “security questions” would just go away, but it looks like this practice will be around for a while. As always, we never (well, hardly ever) talk about features until they are delivered, so no promises.

I’d love to hear what other solutions and ideas our users have for handling these. So please join the discussion of this post in our forums.

Updated: 13:30 EDT August 10 to correct instructions for using 1Password for Windows.

Password reuse strikes again, and a bit closer to home at Dropbox

1Password in Dropbox

Not so long ago, I wrote about a case where attackers were taking passwords that were leaked from one site to go after users on another. In that case, the target was Best Buy. Today’s case hits a bit closer to home for 1Password users, as Dropbox accounts are being attacked using passwords stolen from non-Dropbox sites. Just as no passwords were stolen from Best Buy, no passwords have been stolen from Dropbox. The passwords were stolen from breaches at other sites and then simply tried against Dropbox accounts.

To give you an idea of the dangers of password reuse and how this type of attack works: suppose Molly has an account on some website and that site got breached. Also imagine that the site didn’t store passwords in sufficiently secure ways, so that after the breach it was possible for attackers to figure out many users’ passwords, including Molly’s. The attackers now have Molly’s username and password for that site. You might figure that there is little harm done because there really isn’t a lot that people could do with Molly’s account there.

But now suppose that Molly uses the same username and password on Dropbox (Molly doesn’t take advice all that well). The attackers can try to log into Molly’s Dropbox account with the username and password they discovered from the breach at the other site. This, according to the recent announcement by Dropbox, has been going on with a small number of Dropbox accounts.

Users of 1Password probably don’t need to be reminded why it is so important to have unique passwords for every separate site and service. But because this case of password reuse with Dropbox is such a good reminder, it can’t hurt to talk about it again in the hopes we can all help spread the word.

Of course, even if someone were to get hold of your 1Password data through Dropbox or some other means, they would not be able to get at the usernames, passwords, and other data stored within it without knowing your 1Password Master Password. This is because we designed the 1Password data format with the knowledge that some people may have their data files stolen. Indeed, just yesterday I wrote (in gory detail) about just how well 1Password and your Master Password work together to resist even the most sophisticated password cracking tools.

Irony, spam, and discovery

Mail thinks this message is spam

One of the accounts broken into that way belonged to a Dropbox employee.
It turned out that the unnamed (but presumably very embarrassed) employee also had some files that included the usernames (which are email addresses) of some Dropbox customers. Naturally, the attackers did what attackers do when they get hold of a bunch of email addresses: they passed (presumably sold) that list to spammers.

Again, it was only the email addresses that got taken, not passwords. But then spam went out to a number of Dropbox users, some of whom had set up site-specific email addresses. For example, Patty may have created an email address that she used only for Dropbox. Nobody knew or ever saw that address except Patty and Dropbox.

But if that Dropbox-only address was among the email addresses stolen and passed on to spammers, Patty would start getting spam to it. Patty, being the clever one, would realize that somehow her Dropbox email address had been captured. She might even complain to Dropbox about this, along with a few others who have suddenly seen spam to their Dropbox-only email addresses.

And now we arrive at the beginning of the story. It was exactly complaints like this that led Dropbox to bring in an outside security investigator to look at the spam issue back on July 18.

What have we learned and what is there to do?

Molly is afraid of Thunder

First of all, we have been reminded yet again that Molly is not as clever as she thinks she is, while Patty shows an abundance of caution (sometimes too much). Now, regular readers of my posts will know that Molly and Patty are my dogs. Molly, while incredibly sweet, wins few prizes for intelligence. Patty is much brighter, but is always on high alert and will fire off a warning bark if a molecule moves across the street. There is a lesson somewhere in here about learning to be cautious about the right things. Panicked people (and dogs) tend to make poor security decisions, but that discussion is for another time. All we need to say here is that Molly, as presented above, needs to have a unique password for each site.

Patty barks at everything

We are also reminded that one site’s security is dependent in some ways on other sites. The operators of might think that they don’t need to be particularly careful with their users’ passwords, but they are wrong. As long as people (or dogs) reuse passwords, it is the responsibility of everyone who stores passwords to store them in uncrackable forms.

Furthermore, because so many websites and services are storing passwords poorly, we all need to act as if our passwords are stored poorly at every site. As we’ve been reminded by an ongoing case involving Tesco (a large grocery chain and retailer in the UK and Europe), generic statements like “passwords are stored in a secure way” are meaningless unless we are told exactly how the passwords are hashed or encrypted.

How many passwords do I need to remember?

I would love to be able to say that with 1Password you only need to remember your 1Password Master Password. It’s in the name. 1Password does the job of remembering all of the other non-memorable passwords. After all, Troy Hunt is largely right when he says that the only secure password is the one that you can’t remember. But the first thing I do when I set up a new computer is install Dropbox to sync my 1Password data, and then install 1Password. There are times when I need to know my Dropbox password in order to be able to use 1Password. So, despite the name, I actually need to remember two strong, unique, and, this time, memorable passwords.

Fortunately, you and I can use the mechanism described in Toward Better Master Passwords to be able to manage the very few passwords that we actually need to remember.

1Password is Ready for John the Ripper

John the Ripper, the pre-eminent password cracking tool, is getting ready to take on 1Password. Is 1Password ready? Yes! We have been ready for a long time, but you need to do your part by having a good Master Password.

We’ve written many times about how 1Password defends against automated password guessing programs (password crackers), and we’ve been strengthening those defenses as well. If you’ve been wondering why we’ve been devoting so much effort to this, well, this is the article for you.

We’ve always known that there is nothing we can do to prevent someone from developing an automated Master Password guessing tool that is tuned to 1Password data, and so we’ve designed our security around the assumption that such tools do exist. What we can do (and have done) is make any password guessing program work extra hard, so that it can only guess thousands of passwords per second instead of many millions per second. We have also been advising people to make sure that their 1Password Master Passwords are strong, unique, and memorable.

What’s new

Password crackers don’t break the cryptography or exploit bugs or design weaknesses. They are just programs that try millions or billions of different passwords until they either find one that works or the person running the program gives up.
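In outline, a cracker is nothing more than a guess-and-check loop. Here is a toy sketch (illustrative only; real tools like John the Ripper do the same thing vastly faster, with much smarter candidate generation):

```python
import hashlib

def crack(stolen_salt, stolen_hash, wordlist):
    # Toy cracker: hash each candidate exactly the way the site did
    # and compare against the stolen hash. No cryptography is broken;
    # the attacker simply tries passwords until one works or gives up.
    for guess in wordlist:
        if hashlib.sha1(stolen_salt + guess.encode()).hexdigest() == stolen_hash:
            return guess
    return None

# Simulate a stolen (salt, hash) pair for the password "letmein".
salt = b"\x01\x02"
target = hashlib.sha1(salt + b"letmein").hexdigest()
print(crack(salt, target, ["password", "123456", "letmein"]))  # letmein
```

A successful guess is recognized simply because it produces the same hash (or, for an encrypted keychain, a key that decrypts the data).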

John the Ripper

The news is that one of the most popular and sophisticated open source password cracking tools available, John the Ripper, is now being adapted toward cracking password managers’ Master Passwords. More recently (July 25) we have seen the development of tools specifically designed to make John the Ripper work with 1Password’s Agile Keychain format.

John the Ripper expects the data that it works with to be in particular formats. The modifications to John the Ripper for 1Password involve two components. One converts the relevant part of the Agile Keychain Format into an appropriate input file, and the second part allows John the Ripper to test against that input file in a way that allows it to recognize a successful guess.

Let me stress again that the existence of a password cracking tool does not reflect any kind of weakness in the system it is attacking. When you have encrypted data, there is nothing stopping a person or a computer from trying to guess the password.

What’s not new

Other than repeating the fact that 1Password users should have a unique, strong, and memorable Master Password, there is nothing that we need to do with 1Password in response to the new components of John the Ripper. We have been operating under the assumption that these sorts of tools already existed, even if they hadn’t been made publicly available.

PBKDF2 diagram

When we introduced the 1Password data format in 2008, we knew that we needed to design it to defend against crackers, so we used PBKDF2 in the process of getting from Master Password to encryption key. PBKDF2 means that a computer has to do many complicated and slow computations to derive an encryption key from a password. So you might have to wait half a second or so after entering your Master Password for 1Password to actually be able to unlock your data, but that is barely noticeable to someone using the system. For an automated password cracking system, however, it dramatically reduces the number of possible passwords it can guess in a day. We’ve written more about how PBKDF2 works in Peanut Butter Keeps Dogs Friendly, too. Plus, last year we increased the number of PBKDF2 iterations that many versions of 1Password use when creating a new data file.
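The effect is easy to demonstrate with the PBKDF2 implementation in Python’s standard library. This illustrates the general technique only; the salt is a made-up value and these are not the exact parameters 1Password uses:

```python
import hashlib

password = b"correct horse battery staple"
salt = b"per-user-random-salt"  # illustrative value, not a real salt

# Deriving an encryption key from a password: the iteration count is
# the knob that makes every single guess expensive for an attacker.
key_1k = hashlib.pbkdf2_hmac("sha1", password, salt, 1000, dklen=16)
key_25k = hashlib.pbkdf2_hmac("sha1", password, salt, 25000, dklen=16)

# Same password and salt, but a different iteration count yields a
# completely different key -- and costs 25x the computation per guess.
print(key_1k != key_25k)  # True
```

Each guess an attacker makes must pay the full iteration cost, which is exactly what throttles tools like John the Ripper down to thousands of guesses per second.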

We have also been encouraging people to use good Master Passwords for their 1Password data. 1Password means that your various login passwords don’t need to be anything that you need to remember (and so it is easy for those to be strong and unique), but your 1Password Master Password is something that needs to be strong, unique, easy to remember, and reasonable to type. There is a great deal of advice on the ‘net about how to pick a good password that you can remember, but much of that advice fails to take into account the flexibility of password cracking tools. So please take a look at Toward Better Master Passwords if you haven’t already looked at that.

How fast is John the Ripper, and what does it mean for your Master Password?

I’ve spent much of the weekend playing with the new tools in the developer versions of John the Ripper. I ran John the Ripper (JtR) against my 1Password data for about 20 minutes on my Early 2009 Mac Pro (Quad Core). John the Ripper cranked away and consumed more CPU power than I knew my machine had (see picture at right; the CPU cores are completely pegged). Yet working against a 1Password data file that used 1000 PBKDF2 iterations, JtR was only able to try about 4200 password guesses per second. For my calculations in the table below, I rounded that up to 5000 guesses per second.

I also tested a data file with 28,000 PBKDF2 iterations. As expected, John the Ripper slowed down about 28 times compared with the 1000-iteration case. In the table I provide estimated cracking times if the data file uses 25,000 PBKDF2 iterations, which should make JtR run about 25 times slower than when there are 1000 iterations. (Again, my timing data was a bit messier, but I am always rounding toward the worst case. That is, whenever there is some ambiguity or a range of results, I always pick the estimate that would have John the Ripper be faster.)
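The scaling is simple proportional arithmetic, using the rounded numbers above:

```python
# Guessing speed scales inversely with the PBKDF2 iteration count,
# since the attacker must pay the full iteration cost per guess.
base_guesses_per_sec = 5000  # measured (rounded up) at 1,000 iterations
iterations = 25000

scaled = base_guesses_per_sec * 1000 / iterations
print(scaled)  # 200.0 guesses per second at 25,000 iterations
```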

The author of the 1Password plug-in, Dhiru Kholia, anticipates that the module will soon be modified so that it can make use of Graphic Processor Units (GPUs). This, he estimates, will increase the guessing speed by more than 100 times. In my table below, I have used a 200 times increase in speed for GPUs. So that where I had roughly 5000 guesses per second on my Mac Pro, I assume that with GPU acceleration, there will be one million guesses per second.

Mac Pro (Early 2009)

And finally, to read the table below, you need to be reminded of how I am measuring password strength. I can only calculate the strength of a password if I know the system by which the password was created. A great deal of advice floating around about creating passwords fails to take into account that the attackers know more about how people create passwords than the people creating the passwords do, and these attackers can and do tune their cracking tools accordingly. Much of the common advice also fails to take into account that people are far more predictable than we imagine. So passwords that may look really strong are often far weaker than people imagine.

If you read the advice in Toward Better Master Passwords, you will see that the recommended system is to pick words from a list truly at random (by rolling dice) and then use that sequence of randomly chosen words as your Master Password. This sort of system, until it became the subject of an xkcd comic, was known as diceware.

The table below looks at Master Passwords 3, 4, 5, 6, and 7 diceware words long. The entropy of those passwords are listed in bits. For an explanation of what “bits of entropy” means, take a look at The Geek Edition of the article on better master passwords.

The cracking times are the average (mean) time to crack. For example, if it would take 116 years to try every possible four-word password created with the diceware scheme, then it would take 58 years on average.

JtR crack times for agilekeychain
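The numbers in the table can be reproduced with a few lines of arithmetic, using the standard 7,776-word diceware list and the assumed GPU-accelerated rate of one million guesses per second:

```python
import math

WORDLIST_SIZE = 7776         # standard diceware list: 6**5 words
GUESSES_PER_SEC = 1_000_000  # assumed GPU-accelerated rate from above

for words in range(3, 8):
    # Entropy: each word contributes log2(7776) ~ 12.9 bits.
    entropy_bits = words * math.log2(WORDLIST_SIZE)
    # Mean crack time: half the full search space at the given rate.
    mean_seconds = (WORDLIST_SIZE ** words) / 2 / GUESSES_PER_SEC
    mean_years = mean_seconds / (365.25 * 24 * 3600)
    print(f"{words} words: {entropy_bits:.1f} bits, "
          f"mean crack time ~{mean_years:.2g} years")
```

Running this reproduces, for example, the roughly 58-year mean for a four-word Master Password (about 51.7 bits of entropy) at one million guesses per second.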


From the table you should surmise that three-word passwords of this sort aren’t long enough to withstand a plausible attack. You should also be able to see that anything over five words long is overkill. Put another way: given a Master Password with more than about 55 bits of real entropy (not the inflated numbers you get from most websites that pretend to calculate password strength), you should be fine against any plausible attack for a long time to come.

PBKDF2 and its limits

It really is because of PBKDF2 that tools like John the Ripper will only be able to find weak Master Passwords. Its role is vital. But it is important to notice that once we have a sufficient number of PBKDF2 iterations, increasing them doesn’t add that much additional security. Going from 1000 iterations to 25,000 iterations is the equivalent of adding less than 5 bits of entropy to a password, which is about the same as adding one truly random lowercase letter. Furthermore, the returns continue to diminish: going from, say, 25,000 PBKDF2 iterations to 50,000 would only add the equivalent of one bit of entropy.
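The conversion from iteration count to equivalent bits of entropy is just a logarithm (this assumes, as PBKDF2 guarantees, that the attacker’s cost scales linearly with the iteration count):

```python
import math

def extra_bits(old_iters, new_iters):
    # Raising the PBKDF2 iteration count from old to new slows a
    # cracker by a factor of new/old, which is equivalent to adding
    # log2(new/old) bits of entropy to the password.
    return math.log2(new_iters / old_iters)

print(extra_bits(1000, 25000))   # ~4.64 bits: less than 5
print(extra_bits(25000, 50000))  # 1.0 bit: a doubling adds one bit
```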

In short, once PBKDF2 is in place with a reasonable number of iterations, you get far far more security for the effort by making your Master Password stronger.

Why doesn’t 1Password limit the number of password attempts?

Many websites and ATMs will lock someone out if they enter their password incorrectly too many times. Websites may also use a kind of back-off: after you’ve entered your password incorrectly three times in a row, the site may make you wait ten minutes before it lets you try again. These are very good security measures for those sorts of services. So the question is: why don’t we do the same with unlocking 1Password?

1Password stores your encrypted data on your computer. You may use Dropbox for syncing, but when 1Password does its thing, it is always unlocking data that is on your computer. This means that if an attacker gets hold of your 1Password data (say your computer is stolen), they have the data right there. They do not need to use the 1Password software; they can go directly to the data with their own tools. Because an attacker doesn’t have to go through our software or systems we control, there is no place to apply throttling or back-off techniques. We have to build the security into the data format itself. Our use of PBKDF2 is an example of that.

More abstractly and to introduce a bit of technical jargon, your Master Password is an encryption password, not an authentication password. It is used as (or to derive) a key that decrypts something. It is not used as a mechanism to prove who you are so that you could be let into some service. The distinction is subtle, and particularly prone to confusion because, for the most part, a user shouldn’t have to know or care about it. But there are a couple of places where the distinction matters. It is part of the explanation for why 1Password doesn’t do back-off or throttling.

There is one other place that the distinction between encryption password and authentication password matters for users of 1Password. I will be writing about it more later, but in summary: it means that once you have a good 1Password Master Password, you should keep it for life. You gain no security by changing an encryption password frequently (indeed, it can hurt). So you should only change your 1Password Master Password if there is something wrong with your current one. I’ll talk more about changing passwords (when you should and when you shouldn’t) in a future article.

Should we keep our data format secret?

It’s natural to ask whether we could have made things harder for the developers of password cracking tools if we weren’t as open as we are about the design of our data format. We certainly could have made things a bit more annoying for them if we attempted to conceal details of our data format, but we couldn’t have made things harder for them in a way that would have mattered for security.

On the whole, people should be distrustful of systems that claim to gain security by having secrets known only to the developer. That approach is often called security through obscurity. It not only rarely works, but it often implies a weakness in the design. For example, if there were something that I knew about the design of 1Password that would enable me to unlock your data, then it would necessarily be the case that there is a weakness in the system. Such secrets can often be discovered by careful analysis or through the secret actually getting out from those who know it. Despite what I may have said on April Fool’s Day, proprietary encryption systems are a warning sign, not a virtue.

Join the discussion

I have set up a specific discussion thread on our forums for further discussion. Please join us there.

Friends don’t let friends reuse passwords

We’ve written about password reuse before, and we’ll be writing about it again. Password reuse—using the same password for multiple sites or services—is both rampant and dangerous.

There is real evidence that people are getting robbed because they are reusing their passwords. Thieves systematically exploit reused passwords to pay for retail items or hijack accounts for other purposes. And yet, we are reminded again this week by the recent leak of almost half a million Yahoo passwords that a majority of people just can’t stop reusing passwords.

I’ve seen password reuse and the damage done

Suppose you used the same password on Sony’s PlayStation Network as you use when shopping with Best Buy. Now, suppose that your PSN username and password were among the 77 million leaked in April 2011. An attacker could, in principle, use that information to take a good guess at your password for, say, Best Buy.

Well, this isn’t just something that can happen “in principle”. From an underappreciated report by John Fontana over at ZDNet:

After months of Best Buy customers reporting compromised accounts, the company has finally confirmed hackers are attacking its online retail site using credentials stolen from other sites.

It’s a worst-case scenario, where credentials stolen from one site are used to access other sites, most notably retail or banking sites where hackers can extract some value.

I have no doubt that things like this have been going on for a while, but it is always hard to confirm that this is what has happened when someone “hacks” a user’s account at a retail site. So let there be no doubt that password reuse puts people into real danger. It happens.

Some habits are hard to kick

Sites and services that store user passwords unencrypted or poorly hashed are a serious danger to their customers. But when they get breached and their password database made public, it is a boon to people who study password security. One of those people is Troy Hunt, who has taken this as an opportunity to look at password reuse between PSN and Yahoo.

Chart of PSN-Yahoo password reuse

Of the 302 usernames in common between the two breaches, 59% of them had the same password on each site. Note also that the PSN breach was more than a year ago and was very widely reported. Put simply, we can estimate that about 60 percent of people ignore advice to change their passwords elsewhere.

Helping you to help yourself

I’ve argued many times before that when we see people systematically making poor security choices, we can’t just blame the user. We—the security business—have to look at how we can make it easier for people to behave securely. 1Password is our attempt at fulfilling that goal. We work to make it easy for people to have strong and unique passwords for each site.

First of all, 1Password provides you with a Strong Password Generator, in both the app itself and the 1Password browser extension. This generator makes it drop-dead-simple to create website passwords that are strong and unique. Plus, you don’t need to remember them because 1Password remembers them for you.
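For a sense of what a generator like this does under the hood, here is a minimal sketch in Python using the standard library’s `secrets` module. The character set and length are my own illustrative choices, not 1Password’s actual implementation:

```python
import secrets
import string

def generate_password(length=20):
    """Generate a random password from letters, digits, and a few symbols,
    using a cryptographically secure source of randomness."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Every call produces an independent password; none needs to be memorized.
print(generate_password())
```

The important design point is that the randomness comes from a cryptographically secure source, so the resulting passwords are impractical to guess, and there is no reason to ever use the same one twice.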

Although 1Password wasn’t born yesterday, many of us have Logins and passwords for sites that were created before we had the Strong Password Generator, which means we may still be using some passwords on multiple sites. Here are some tips on how to use 1Password to help you find duplicate passwords and clean those up.

Websites have responsibilities, too

Anybody who stores your passwords has responsibilities, too. We know that sites get compromised and their user databases stolen. What we don’t really know is how frequently this happens, because only a fraction of those breaches are made public. Not only do websites and services need to take steps to prevent the theft of their users’ personal data, they need to store the passwords in a form that is useless to intruders.

It appears that the Yahoo passwords were stored with no encryption or hashing at all. I was astounded when this was discovered with Sony’s PSN last year, and I am astounded today that Yahoo would make the same mistake. I have been berating sites for not hashing their passwords well; I hadn’t expected to encounter more sites that didn’t hash at all. Because of their poor practice, everyone whose Yahoo password was leaked is vulnerable to having their accounts hijacked on every other site where they use the same password.

“Check out my debit card!” Or: why people make bad security choices

Yes, the stories are true, and no, this isn’t The Onion. People are, once again, displaying their affinity for tweeting photos of things that should never be tweeted.

Let’s set the scene and put you in the shoes of a number of today’s (possibly young, possibly naïve) Twitter users: you get your first debit card, you’re excited, you want to tell your friends. But who calls or even texts in private anymore? So you take a quick picture of your newfound glory and power, then tweet about it to not just your friends, but the world—credit card number, your name, and the card’s expiration date; the works.

Don’t believe me? A new Twitter bot, @NeedADebitCard, has been retweeting these posts, and plenty of them.

The card that you see at right was posted publicly to Twitter. Note that I’m the one who actually took the precaution of blurring the number, name, and other details. In the original tweet and the many others like it, all of those are fully visible.

Considering that credit card details (along with verification codes and billing addresses) can be bought on the black market for about one US dollar apiece if you buy in bulk, posting these really isn’t going to be tragic. Most people’s credit card details have been stolen a dozen times over (mostly through breaches of traditional merchants). But tweeting these certainly reflects some bad habits. For the record, the AgileBits store does not keep a copy of your full credit card number when you purchase through us. And when you purchase through iTunes or the Mac App Store, we get no information about you or your purchase at all.

Fortunately, you, my informed 1Password-using reader, can keep your credit and debit card details safely stored in the 1Password Wallet instead of posting them to Twitter or Instagram.

I laughed, but I shouldn’t have

I must confess that I laughed when Stu, my colleague here at AgileBits, pointed these out. If we want to stop people from making poor security decisions (and we certainly do), then we need to understand why people make them in the first place. Of course, some people don’t fully understand the public nature of Twitter, and that certainly plays a role in many cases. But there are other aspects of human nature at play.

There are times, and I think this is one of them, when we can’t blame the users. Instead, the blame falls with the designers of the systems that people must use. But before I talk about credit and debit cards, let me talk about Social Security numbers.

Social Security numbers and identity

For those of you outside of the United States who aren’t familiar with US Social Security numbers, they very roughly correspond to national insurance numbers or national identity numbers. They are officially used as taxpayer identification numbers, among other things.

When the system was first set up, Social Security numbers were not at all secret. They were used both by banks and by employers so that income could be reported to the tax authorities. Indeed, like many people, I included my Social Security number on a résumé back in the 1980s as a courtesy to prospective employers. These numbers were used for identification in the same way that a name is, except that there are plenty of people who share my name, while I should be the only one with my Social Security number. Knowing my Social Security number was never intended to prove my identity. In other words: knowledge of the number was not supposed to be used for authentication.

Let’s consider this distinction between “identification” and “authentication”. Identification is how you figure out who you are talking about. Authentication is how you test whether someone really is that person. For a typical website, your username (or email address) serves to identify you, while your password is used to authenticate that identity. You may or may not wish to make your username public, but you certainly don’t want to make your password public.

Through a convoluted history that I won’t go into (in short, blame the banks), Social Security numbers switched away from being used merely for identification; they came to be used for authentication as well. They got used in a way that they were never designed for. This led to various problems and a number of new laws, and it further muddled the distinction between identification and authentication.

Credit cards are more confusing

When credit cards and debit cards were originally designed, the numbers were used pretty much only for identification, though there were exceptions. People would authenticate themselves by having physical possession of the card and by producing a signature with a pen on a piece of paper. Certainly there was a lot of scope for fraud, which meant some care was needed in not making the identity numbers all too public, but still the card numbers were not designed for authentication.

But then we started using credit cards remotely, first by making payments over the telephone and later over computer networks. Neither physical possession of the card nor a signature on paper could be used for authentication any more. So the card numbers themselves, along with the three-digit security codes (CVV), became the means of authentication. As with Social Security numbers, we ended up using a piece of information for authentication that was never designed to be used that way.

Here we go again. This time with checks

Over the past few years I’ve found that when I telephone my bank, after they have identified me, they request my checking account number. It appears that knowledge of my checking account number is now being used as part of their authentication system. Again, account numbers were never meant for this purpose. Indeed, my checking account number is printed on every check I write. These are not secrets.

The system that we all use has been giving people mixed messages about what is and isn’t secret. Or it’s been trying to use non-secret information in ways that are only appropriate for secrets. On one hand we are told to keep our credit card numbers secret, while on the other hand we are conditioned to toss over those numbers to any hotel, clerk, or waiter.

Helping people get things right

This misuse of Social Security and credit card numbers evolved through changing scenarios and needs, making do with what was available at the time. It’s no surprise that the system is far from what we would expect if we had designed it intelligently from scratch, so of course our existing, incoherent system is going to cause confusion.

When we do have the opportunity to design something from the ground up, we have the responsibility to ensure that it is easy for people to behave securely. We may not always be able to fully succeed in that attempt, but it must be a guiding design principle for anyone who wants security systems to work for people.

Finally, when we observe people systematically behaving insecurely, we have to ask not “how can people be so stupid” but instead “how is the system leading them to behave insecurely.” Maintaining a clear distinction between non-secret identification information and secret authentication information would be one place to start.

Join the discussion in our forums

I’ve created a forum topic to discuss this article. Please join us there.

Flames and collisions

Having a Microsoft code signing certificate is the Holy Grail of malware writers. This has now happened.
– Mikko Hypponen

Unless you are a system administrator for a government institution in or around the Middle East you do not need to worry about Flame infecting your computer. Flame (also known as “Flamer” and “skywiper”) itself is not a security concern except to a very narrow, targeted group. Quite simply you don’t need to worry about being infected by Flame, and antivirus vendors who suggest otherwise may be engaging in fear mongering.

With so few people in danger of Flame, why am I writing about it? Good question. I’m writing about it because one of the methods used in Flame has the potential of undermining a crucial part of computer security. The authors of Flame have the ability to subvert the Windows Update process. Whatever Flame itself does or doesn’t do, the fact that its authors acquired the capability to distribute fake updates to Microsoft Windows is cause for serious concern.

Software updates and chains of trust

I have previously written about how an important part of computer security is ensuring that your software updates come from the right place. You don’t want someone who pretends to be AgileBits giving you malicious updates to 1Password. And you don’t want someone who pretends to be Microsoft giving you malicious Windows Updates. The methods used for digitally signing downloads and updates involve some mathematical magic and a Chain of Trust. In the summer of 2011, we saw, in the example of DigiNotar, what can happen when someone finds a way to insert themselves into the chain of trust.

These two articles, “Who do you trust to tell you who to trust?” and “A peek over the Gatekeeper” explain the security infrastructure I’ll be writing about here. You will see terms like Certificate Authority or Man in the Middle attack in this post, but they are more fully explained and illustrated in those other posts.

Putting trust to the flame

The authors of Flame acquired the ability to digitally sign their own updates to Microsoft Windows as if they were (almost) Microsoft. They were able to create digital signatures in a way that successfully fools the Windows Update process. Microsoft made a series of mostly innocent blunders in creating some intermediate Certification Authorities that customers use to create individual licenses for a particular product. In combination, those blunders in something that was never supposed to be part of the critical system turned out to have major consequences.

The details are subtle, and much of what we know comes from announcements by Microsoft about their recent urgent security updates. With the sketchiness of the details and the fluidity of the situation, it is safe to assume that anything I write about the mechanism used to create the bogus signatures will be out of date by the time I actually post this article. A great source for information about this is Matthew Green’s post on his Cryptographic Engineering blog. The technical details of Flame (PDF) itself are coming from research at the Budapest University of Technology and Economics.

Microsoft has issued a security update, which you get via – you guessed it – Windows Update. It is not clear (to me, at this point) whether these updates fix the long term problem or just fix the Flame-specific signatures. In a notice on June 4 they state that the June 3 update was the first of a series of actions in a phased mitigation strategy. So there will be more to come.

But the most interesting thing (to me) in their notice is this:

The Flame malware used a cryptographic collision attack in combination with the terminal server licensing service certificates to sign code as if it came from Microsoft. [Emphasis added]

And this gives me the opportunity to discuss an important concept in cryptography that I haven’t talked about before.

Hashes and Collisions

In the article about Gatekeeper I talked a bit about some of the mathematical magic behind digital signatures, but I left many pieces out. The digital signature of a file is not actually a signature of the file directly. For a number of technical reasons it is too unwieldy to directly construct a signature of a large chunk of data; the file itself would need to be treated as a single enormous number that is then used as an exponent of some other large number. If the file were just 10 kilobytes, its number would be on the order of 2^81920 (about 25,000 digits long). We deal with some really big numbers in cryptography, but we would like to deal only with numbers around the size of 2^256 (about 77 digits).
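To make the hash-then-sign idea concrete, here is a toy sketch using textbook RSA with absurdly small, insecure parameters of my own choosing. Real signature schemes use enormous numbers and careful padding, but the shape is the same: reduce the data to a small number by hashing, then exponentiate that number with the private key.

```python
import hashlib

# Toy textbook-RSA parameters. These tiny numbers are utterly insecure;
# they exist only to illustrate "sign the hash, not the file".
p, q = 61, 53
n = p * q        # modulus: 3233
e = 17           # public exponent
d = 2753         # private exponent: e * d = 1 (mod lcm(p-1, q-1))

def sign(data: bytes) -> int:
    """Reduce the data to a small number by hashing, then sign that number."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(h, d, n)                  # signature = h^d mod n

def verify(data: bytes, signature: int) -> bool:
    """Recompute the hash and compare it with the 'unlocked' signature."""
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(signature, e, n) == h     # does sig^e mod n equal h?

document = b"Ten kilobytes of data, reduced to one small number."
signature = sign(document)
print(verify(document, signature))   # True
```

Note that the signature is computed over the hash, never over the file itself; that is exactly why the collision resistance of the hash function matters so much.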

To create a digital signature of a file, we first reduce the file (no matter how big it is) to a number in the range we can deal with. We create a cryptographic hash of the file (sometimes called a message digest) and then digitally sign that hash. If a hash function (something that takes all the data in a file and produces a smaller number) is to be useful for digital signatures, it must have a number of security properties. The security property we are interested in today is called (strong) collision resistance.

Collision resistance
It must be infeasible to find or create two distinct files that have the same hash.

Any time you have two distinct files with the same hash, you have a collision. In principle collisions are inevitable because there are more possible files than there are hashes; after all, the point of a hash is to turn a really big chunk of data (a file) into a smaller (though still big) number. The security of hash functions relies instead on how difficult it is to find or create collisions.
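You can watch collisions become inevitable by shrinking the hash. This sketch (my own toy, not a real attack) truncates SHA-256 to 16 bits; with only 65,536 possible outputs, a collision turns up after a few hundred attempts on average, and is guaranteed once we try more inputs than there are outputs:

```python
import hashlib

def tiny_hash(data: bytes) -> bytes:
    """A deliberately weak 16-bit 'hash': the first two bytes of SHA-256.
    With only 65,536 possible outputs, collisions must exist and are easy
    to stumble into (the 'birthday' effect finds one quickly)."""
    return hashlib.sha256(data).digest()[:2]

seen = {}
for i in range(70_000):              # more inputs than outputs: pigeonhole
    msg = f"message-{i}".encode()
    h = tiny_hash(msg)
    if h in seen:
        print(f"Collision! {seen[h]!r} and {msg!r} both hash to {h.hex()}")
        break
    seen[h] = msg
```

A real hash like SHA-256 has 2^256 possible outputs, which is why the same brute-force search is hopeless against it; MD5’s failure was that clever mathematics made collisions findable without brute force.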

In a recent article, I talked about another security requirement of a cryptographic hash. It must be infeasible to calculate anything about the original file from the hash. That is called pre-image resistance, but today’s topic is collision resistance.

Danger: Collision ahead!

Now let’s look at why collision resistance is important for digital signatures. Suppose that Patty, one of my dogs, creates two files. One of them says, “Molly is the cleverest and prettiest dog ever, and Molly gets to rule.” The other file says, “Patty will get all of Molly’s dog treats.” Patty, of course, is the clever one: she has created these two files in such a way that their hashes collide. Patty asks Molly to digitally sign the “Molly rules” file, and Molly is happy to comply. Molly, using her secret key, creates a signature for the “Molly rules” file. The digital signature is actually a signature of the hash of the file, and so that signature will also work as a signature for the “Patty gets all the treats” file.

Patty will then bring me the “Patty gets all of Molly’s treats” file along with Molly’s signature. I calculate the hash of the file and verify that the hash really is signed using Molly’s secret key. Nobody but Molly knows that secret key, and so I figure that Molly must have signed that file. I give all of Molly’s treats to Patty. Molly may like to collide with Patty on walks, but this is one collision that Molly is not at all happy about.

It’s more than just dog treats that are at stake here. As I described in the Gatekeeper article, it’s not just ordinary files that can be signed (or, more correctly, have their hashes signed); certificates and Certificate Authorities are also just data that can be signed. The signature on a certificate helps our computer determine whether it can trust that certificate. If Molly controls a Certification Authority (CA), Patty can ask Molly’s CA to sign an innocuous certificate that Molly is happy to sign. But if Patty has another, trickier certificate that produces the same hash as the one that Molly signed, Patty can simply attach Molly’s signature to the bogus certificate.

The solution is to make sure that the hash function used for signing certificates (or files related to dog treats) is collision resistant. In particular, this means that the MD5 hash algorithm should not be used. Unfortunately, some of Microsoft’s Certification Authorities used MD5, which is badly broken: it is very possible to generate collisions.

The downfall of MD5

Cryptographers don’t talk about “impossible” or “possible”; they talk about “feasible” and “infeasible”. An infeasible task usually means something like “if you used all the computing power on Earth now and for the next few decades, you would still have only a negligible chance of successfully completing the task.” With MD5 we’ve witnessed the progression from “secure” to “weaknesses that are infeasible to exploit” and eventually to “practical exploits” in the space of about 10 years.

MD5 (Message Digest 5) was shown in 1995 to have theoretical weaknesses. From there the history is remarkably interesting (if you are into that sort of thing). There was one group of people saying, “don’t use MD5 in any new software or systems; use SHA-1 instead.” There was another group saying that the weaknesses in MD5 posed no actual threat to how it was actually used. By 2005 it became absolutely clear that the weaknesses in MD5 had been expanded and could be leveraged into real attacks, and in 2008 there was a spectacular demonstration that used MD5 collisions to create a rogue Certification Authority. An important side note is that while the advice to use SHA-1 instead of MD5 was very good advice in 1995, today’s advice is to use SHA-2 instead of SHA-1.

As we all watched MD5 go from secure to completely broken with respect to collision resistance in the space of ten years, we also saw that its use continued. I think that many of us failed to understand just how easily collisions could be exploited. It was tough to steer a course between people panicking on one side who were saying that anything involving MD5 was tainted and those on the other side who always seemed to think, “sure MD5 has its weaknesses, but those weaknesses don’t affect my application.”

Microsoft using MD5 in 2009

People continued to use MD5 for a number of purposes that they didn’t see as critical. The intermediate certification authority that Microsoft set up for signing Terminal Services licenses was, presumably, thought to be not critical enough to worry about. The worst (they thought) that could happen is that customers could create a few extra licenses for themselves if they went through the effort of creating collisions. So even as late as 2009, Microsoft created CAs that used MD5 as their hashing algorithm. They turned out to be wrong about “the worst that could happen”. Other blunders allowed a weak CA created in 2007 to sign certificates for other CAs.

Using the same trick that Patty pulled on Molly, an attacker could get a signature for an acceptable CA to work on a malicious one. The malicious one could then be used to sign false updates to Windows. Thus, the attacker would have the ability to sign things that would look like legitimate updates to Windows.

Microsoft has recently (June 6) provided more details, which confirm that what I’ve described above did play a role in the creation of the signatures for the bogus updates.


Follow early warnings

When we, the technology community, were advised in 1995 to stop relying on MD5 we should have been quicker to make the transition to SHA-1, even though at the time it wasn’t clear how those weaknesses could be exploited. Likewise, we should follow the same warnings about SHA-1 today. We shouldn’t panic every time a theoretical weakness is found, but we should remember that these can often be leveraged in ways we can’t predict.

Hash algorithms are hard. MD5, as you’ve seen, deteriorated rapidly. SHA-1 has been shown to be less than ideal (though not yet exploitable in practice), and even its replacement, SHA-2, isn’t all we would hope it to be. Cryptographers and NIST are working on finding a system to become SHA-3. If you have a high tolerance for geeks singing out of key, you may wish to sing along to the SHA-3 song.
The refrain is

SHA-2 will soon retire
Because NIST is learning and SHA-1 is burning
SHA-2 will soon retire
No we didn’t light it but tried to fight it.

The trust infrastructure is fragile

There are a large number of Root Certification Authorities and an even larger number of intermediate CAs. As we’ve seen here, the chain of trust can be compromised through some poorly designed CAs. As we saw last summer in the case of DigiNotar, a Root CA can have its computer systems compromised, allowing an attacker to gain access to the secret key needed to sign certificates. A third possibility is that the operators of a CA may simply go rogue.

Charles Dudley Warner once noted that “everybody complains about the weather, but nobody does anything about it.” The same can be said about the trust infrastructure that so much of our security relies on. Fortunately, there are a few talented people working on serious proposals that, while they won’t completely fix the system, will mitigate some of the problems. I won’t review them here.

Are governments friends or foes of computer security?

Flame is almost certainly the creation of a “Western intelligence agency”. A few weeks ago, I would have guessed that it was an Israeli creation, but recent news about the creation of Stuxnet leads me to suspect that Flame, like Stuxnet, was primarily created by the United States. Parts of the US government have done and continue to do a great deal of important work in promoting good security. But we now have an example where the government has undermined a crucial part of computer security.

So far the victims of state-sponsored attacks on the certificate trust infrastructure have been in Iran. Last summer the government of Iran used certificates from a compromised Root Certification Authority, DigiNotar, to intercept “secure” connections between individuals in Iran and Gmail. Now it looks like the US (or possibly Israeli) government has been using poorly constructed Microsoft intermediate CAs to target specific entities in Iran.

The good news, I suppose, is that these governments still had to exploit vulnerabilities instead of, say, compelling Microsoft to sign bogus updates. Of course, we cannot rule out that government agencies asked Microsoft to include those vulnerabilities. I suspect that Microsoft’s errors were innocent because they are the kinds of errors that people and organizations do make, but I can’t entirely rule out the more paranoid interpretation.

Security has a lot of moving parts

Each of the mistakes that Microsoft made was probably harmless on its own. No doubt they were made by different people who weren’t aware of the decisions that others had made. In combination, the mistakes add up to several ways of generating “good” signatures for bad Microsoft Updates, something with potentially catastrophic consequences.

A few closing remarks

Truth and speculation

I have speculated about how Microsoft made the errors that they did, and I have speculated about what the creators of Flame have done with them. What we do know is that bogus certificates for signing Windows Updates were created, and we know what Microsoft has said about it. We know that CAs using MD5 in their digital signatures are vulnerable in the way discussed, and we know that the bogus certificates were signed by those weak CAs.

As it turns out, some of my initial speculation has been confirmed since I first drafted this. Microsoft has published an outstanding and detailed report of their analysis.

We also have a rapidly changing situation. It shouldn’t be too surprising if much of what I say about the specifics of the case today is out of date by tomorrow. Still, it gave me the chance to talk about hash functions and attacks based on collisions, along with the problems of using MD5 for digital signatures. Furthermore, Patty and Molly are always seeking attention, so they appreciate that I’ve mentioned them here.

Continuing the discussion

If you would like to comment or continue this discussion, please do so in the Flames and Collisions forum topic I’ve created for the purpose.


This article was out of date before it was finished, and I expected that there would be more news to come. And so, to avoid a continuing series of updates, I will be adding updates to the forum thread set up for the purpose.

However, the news that came out right after this article was posted is so jaw dropping that I will repeat it here.

The cryptanalytic technique used to create the MD5 collision is new. It isn’t radically different from previously known techniques, but it would have taken a great deal of expertise to develop.

People have always speculated about how far ahead in cryptanalysis agencies like the NSA or GCHQ are compared to what is known by the academic community. The assumption is that the gap has been narrowing over the decades as there is more open work in cryptography done outside of intelligence agencies. We don’t often get data to help pin anything down with our speculation, but this definitely is interesting.

The Ars Technica article linked to above has a terrific diagram outlining the nature of the general technique used to create MD5 collisions.


A salt-free diet is bad for your security

I am not giving anyone health advice. Instead, I’m going to use the example of the recent LinkedIn breach to talk about hashes and salt. Not the food, but the cryptology. Before you dive into this article, you should certainly review the practical advice that Kelly has posted first. Also Kelly’s article has more information about the specific incident.

I’m writing for people who saw security people talking about “salt” and want to know more.
You may have seen things like this, that appeared in an article at The Verge.

It’s worth noting that the passwords are stored as unsalted SHA-1 hashes. SHA-1 is a secure algorithm, but is not foolproof. LinkedIn could have made the passwords more secure by ‘salting’ the hashes.

If you would like to know what that means, read on.

What we know

We know that a data set of about 6 million password hashes has been released. We also know that this does include LinkedIn passwords (more on how we know that later). The data made public does not include the usernames (email addresses), but it is almost certain that the people who got this data from LinkedIn have that. By the end of this article, you will understand why people who broke in would release only the hashes.

Hashing passwords

Websites and most login systems hash passwords so that they only have to store the hash and not the password itself. I have been writing about the importance of hash functions for a forthcoming article, but here I will try to keep things simple. A hash function such as SHA-1 converts a password like Password123 to “b2e98ad6f6eb8508dd6a14cfa704bad7f05f6fb1”. A good hash function makes it infeasible to calculate the password if you only know the hash, but it must be easy to go from the password to the hash.

A hash function always produces the same hash from the same input. It will always convert Password123 to that same string. If both Alice and Bob use Password123, their passwords will be hashed to the same thing.
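This determinism is easy to see with a couple of lines of Python (a sketch using the standard hashlib library):

```python
import hashlib

def sha1_hex(password: str) -> str:
    """SHA-1 hash of a password, as a 40-character hex string."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

# Hashing is deterministic: the same password always yields the same hash,
# so Alice and Bob end up with identical entries in the password database.
alice_hash = sha1_hex("Password123")
bob_hash = sha1_hex("Password123")
print(alice_hash == bob_hash)   # True
```

That identical-entries property is exactly what precomputed lookup tables exploit, as we’ll see next.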

Let’s put rainbows on the table

Even if Bob and Alice use the same password (and so have the same hash of their passwords) they don’t have to worry about each other. Who they need to worry about is Charlie the Cracker. Charlie may have spent months running software that picks passwords and generates the hashes of those passwords. He will store those millions of passwords and their hashes in a database called a “Rainbow Table.”

A small portion of Charlie’s table might look like this:

Password      SHA-1 Hash
123456        7c4a8d09ca3762af61e59520943dc26494f8941b
abc123        6367c48dd193d56ea7b0baad25b19455e529f5ee
Password123   b2e98ad6f6eb8508dd6a14cfa704bad7f05f6fb1

(Rainbow tables are structured to allow for more efficient storing and lookup for large databases. They don’t actually look anything like my example.)

When the hashed passwords for millions of accounts are leaked, Charlie can simply look up the hashes from the site and see which ones match what he has in his tables. This way he can instantly discover the passwords for any hash that is in his database.
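A simple Python dictionary can stand in for Charlie’s table (real rainbow tables use a time/memory trade-off, but the lookup idea is the same; the wordlist here is my own tiny example):

```python
import hashlib

def sha1_hex(password: str) -> str:
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

# Charlie precomputes hashes for a wordlist of likely passwords.
wordlist = ["123456", "abc123", "Password123", "letmein", "qwerty"]
table = {sha1_hex(pw): pw for pw in wordlist}

# A batch of unsalted hashes leaks from some website...
leaked_hashes = [sha1_hex("qwerty"), sha1_hex("Password123")]

# ...and any hash already in the table is reversed instantly, no cracking needed.
for h in leaked_hashes:
    if h in table:
        print(f"{h} -> {table[h]}")
```

The lookup is instantaneous because all the expensive hashing was done ahead of time; that is the whole appeal of precomputed tables.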

Of course, if you have been using 1Password’s Strong Password Generator to create your passwords for each site, then it is very unlikely that the hash of your password would be in Charlie’s table. But we know that not everyone does that.

Charlie also has a lot of friends (well, maybe not a lot, but he does have some) who have built up their own tables using different ways of generating passwords. This is why Charlie, who has both the usernames and the hashed passwords, may wish to circulate just the hashes. He is asking his friends to help look up some of these hashes in their own tables.

I also promised to tell you how we know that the leaked hashes are indeed of LinkedIn passwords. If someone has a strong, unique password for their LinkedIn account, it is very unlikely that it was ever used elsewhere. So if the hash of that strong and unique password turns up on the leaked list, we know it came from LinkedIn. This is presumably what Robert Graham did when he determined that this is the LinkedIn list.

Salting the hash

More than 30 years ago, people recognized the problem of precomputed lookup tables, and so they developed a solution. The solution was to add some salt to the password.

Here is how salting can work. Before the system creates a hash of the password, it adds some random stuff (called “salt”) to the beginning of the password. Let’s say it adds four characters. So when Alice uses Password123, the system might add “MqZz” as the salt, making the password MqZzPassword123. We then calculate the hash of that more unique password. When storing the hash, we also store the salt with it. The salt is not secret; it just has to be random. Using this method, the salted hash that would be stored for Alice’s password would be “MqZz1b504173d594fd43c0b2e70022886501f30aee16”.

Bob’s password will get a different random salt, say “fgNZ”, which will make the salted hash of his password “fgNZ2ec6fa506fa9048d231b765559e2f3c79bdee5a1”. This is completely different from Alice’s, and – more importantly – it is completely different from anything Charlie has in his rainbow tables. Charlie can’t just build a table with passwords like Password123; instead he would have to build a table that contained Password123 prefixed by each of the millions of possible four-character salts.
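Here is a sketch of that scheme in Python. The four-character salt and the prepend-the-salt construction follow the description above; the details (alphabet, storage format) are my own illustrative choices, not any site’s actual implementation:

```python
import hashlib
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def store_password(password: str) -> str:
    """Hash a password with a fresh random four-character salt, storing
    the salt in the clear in front of the hash (as described above)."""
    salt = "".join(secrets.choice(ALPHABET) for _ in range(4))
    digest = hashlib.sha1((salt + password).encode("utf-8")).hexdigest()
    return salt + digest

def check_password(password: str, stored: str) -> bool:
    """Re-derive the hash using the stored salt and compare."""
    salt, digest = stored[:4], stored[4:]
    return hashlib.sha1((salt + password).encode("utf-8")).hexdigest() == digest

# Alice and Bob pick the same password, but their salts differ, so their
# stored values differ -- and neither appears in an unsalted rainbow table.
alice = store_password("Password123")
bob = store_password("Password123")
print(alice != bob)                           # almost certainly True
print(check_password("Password123", alice))   # True
```

Notice that the salt sits in the clear next to the hash: its job isn’t secrecy, it’s making every stored hash unique so that precomputed tables become useless.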

Beyond salt

Salting is an old technology, yet a surprising number of web services don’t seem to be using it. This is probably because many of the toolkits for building websites didn’t include salting by default. The situation should improve as newer toolkits encourage more secure design, and also as these issues make the news.

But just as we are getting people up to speed with a 30-year-old technology, salting may no longer be enough. And mere salting certainly isn’t good enough for the passwords that require the most security. To defend against determined and resourceful password crackers, you should use both strong passwords and a password-based key derivation function like PBKDF2, which 1Password does use when encrypting your Master Password.
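Python’s standard library includes PBKDF2, so the idea is easy to sketch. The salt and iteration count below are illustrative choices for this example, not 1Password’s actual parameters:

```python
import hashlib
import os

# A sketch of PBKDF2 key derivation. The parameters here are illustrative.
password = b"correct horse battery staple"
salt = os.urandom(16)      # random per-user salt, stored alongside the result
iterations = 100_000       # the whole point: make every single guess expensive

# Derive a 32-byte key; an attacker must repeat all 100,000 rounds per guess.
key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
print(key.hex())
```

The iteration count is what separates PBKDF2 from plain salted hashing: a fast hash like SHA-1 lets Charlie test billions of guesses per second, while a high iteration count slows each guess by a factor of (here) one hundred thousand.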

On password breaches and security processes

Today it was reported that LinkedIn had a password breach. This is the most frustrating sort of security problem, because even the longest, most complex password you can generate doesn’t help if someone else gets ahold of it. As more and more services are offered online, and then connected to other logins you already have, a breach is no longer just a minor password change. If something happens to, for example, your Facebook password, you now have to worry that whoever got access to Facebook also has access to all those other things. Do you even remember what ALL those other things are?

As part of the “security is a process, not a product” philosophy we use here at AgileBits, there are a variety of things to consider when you want to secure your online activity. One such thing is what I mentioned above: those other apps and services you authorized to use your login on Facebook or Twitter, or in today’s current example, LinkedIn. It’s a good idea to review those permissions periodically, and one handy way to do that is with a site called MyPermissions.

You can think of MyPermissions as a dashboard for reviewing and controlling your preferences across many of these sites at once. It’s like cleaning out your closet and getting rid of all those things you have no more use for. Hopefully you won’t have too many “I authorized THAT!?” moments, but if you do, revoking access is usually pretty easy.

On the upside, one of the nice things about 1Password is the extra layer of security it provides with the Strong Password Generator. As long as “Password1” remains one of the most popular passwords in use, you, dear 1Password and Strong Password Generator user, are already way ahead of most people when it comes to securing your data.

If you still have friends, family, or coworkers who just don’t get why strong passwords are more important than ever, here’s an analogy that might help: Using a really common, easy password (password, Password1, 123456, etc.) is the equivalent of leaving the windows down and the keys in the car. Using 1Password is locking the car, rolling up the windows, and having an excellent alarm system. Why would a thief bother with your car when there’s one right next to it just begging to be stolen? Particularly when we are talking about logins that store credit card data (Amazon 1-click, anyone?), a nefarious person will be happier with the hundreds of numbers they can snag in less time than it would take to crack your password.

I know it’s frustrating to have to keep track of all of this, but it’s really no different than real life. I always think of the Public Service Announcement on television where a guy walks up to people in a coffee shop and starts trying to convince them to let him use their bank account for a wire transfer. Everyone turns him down and the ad says “If you wouldn’t fall for it in real life, why fall for it online?” It seems like more work because most of us haven’t been doing this our whole lives, so it’s not second nature like “don’t wave around the money you just got from the ATM” and other life tips we now consider common sense.

Having said all that, here is some useful information, along with steps to find your weakest links and strengthen them.

You can start by opening 1Password for Mac and selecting View > Layout > Traditional. Then go to View > Columns > Password Strength and make sure it’s enabled. Now you can see, and sort by, the strength of all your passwords. If you want to collect the weakest passwords, create a Smart Folder to show them. Go to File > New Smart Folder, and a search dialog will pop up at the top of your window. Make sure you set the search criteria in the bar at the top to search “Everywhere” and “Everything,” and below that select the following:

  • All of the following are true:
  • Password Strength
  • is less than or equal to
  • A number of your choosing. Start with 40; if that looks overwhelming, switch to 20, then come back to 40 once those are done. This is your first pass at updating passwords.
  • Click Save, and retitle your New Saved Search (mine is just called Weakest).
  • When you have time to devote to updating these weaker passwords, use the Password Generator within 1Password to replace them. As you increase their strength, they will drop out of your Smart Folder.
  • Once the folder is empty, you’ve updated all of the weakest ones. Hooray! Right-click (or Ctrl-click) the empty Smart Folder, choose Edit Smart Folder…, and bump up that number. If a few more show up, update those too.

Of course, I don’t expect you to drop everything and blow your weekend on the exhilarating task of updating your passwords. But when you have a little time to spare, knock off a few here and there. The next thing you know, your weak passwords are history.
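If you’re curious what a password strength number even means, here is a deliberately naive sketch in Python. To be clear, this is not 1Password’s actual strength formula (which I’m not reproducing here); it just estimates bits of entropy from length and character variety, and, like all such naive formulas, it overrates dictionary words such as “password”:

```python
import math
import string

def naive_strength_bits(password):
    """Rough strength estimate in bits: length * log2(character-set size).
    This is an illustrative formula, not 1Password's actual algorithm."""
    charset = 0
    if any(c in string.ascii_lowercase for c in password):
        charset += 26
    if any(c in string.ascii_uppercase for c in password):
        charset += 26
    if any(c in string.digits for c in password):
        charset += 10
    if any(c in string.punctuation for c in password):
        charset += len(string.punctuation)
    return len(password) * math.log2(charset) if charset else 0.0

# Longer passwords drawn from a bigger character set score higher.
for pw in ["password", "Password1", "gT9#kL2!xQ7z"]:
    print(pw, round(naive_strength_bits(pw), 1))
```

Whatever the exact formula, the lesson is the same one the Smart Folder teaches: length and variety both matter, and a generated password beats anything you would invent yourself.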

Flashback to Leopard

It seems that my ability to predict the future with respect to Mac malware is, indeed, on par with DigiTimes’ ability to predict anything. Just recently I wrote, “on the Mac, Leopard and Tiger are no longer being updated”. To prove me wrong (yeah, I’m sure that’s why they did it), Apple has just released a couple of security updates for Mac OS X 10.5 (Leopard): “Flashback Removal Security Update for Leopard (Intel)” and “Leopard Security Update 2012-003.” Both are available through Software Update.

The Flashback removal tool is now available for Leopard users on Intel Macs. The Leopard Security Update doesn’t actually fix any weaknesses in the operating system; its only job is to disable old versions of Adobe Flash Player and encourage people to upgrade to more recent versions.

Beyond the ordinary

I am hesitant, without doing more research, to call this kind of security update “unprecedented” for Apple, but I am more than willing to call it “extraordinary”. Providing a security update for systems that have long fallen off the “supported” list is highly unusual. I can only speculate that Apple has examined which systems have been most affected by Flashback and is taking extraordinary steps to help clean that up where it is most needed.

Leopard users are still not covered

I have been pleading with people to keep their systems up to date. If you must run OS X Leopard, then you should take extra care to keep your web browsers and (if you use it at all) Adobe’s Flash Player up to date. Adobe’s Flash about page will tell you what version you are currently running.

Failure to keep systems up to date with the latest versions of software is an enormous security risk. Good updating habits not only help keep your system free of malware, but these habits can help reduce the amount of malware in the environment.

Apple’s two extraordinary updates do only two things. They provide the Flashback removal tool for Leopard users and they prevent Leopard users from using outdated versions of Adobe Flash. They do nothing whatsoever to bring the enormous security enhancements and fixes that have been brought to Snow Leopard and Lion.

You scream, I scream, we all scream for Apple security updates!

I’ve been talking a lot lately about the importance of keeping systems up to date and the role this plays in keeping malware at bay. I even suggested that Mac users are particularly good at keeping their systems up to date. So if you’re on OS X 10.6 Snow Leopard or 10.7 Lion, please help prove me right by running Software Update now.

Apple has released a big update from OS X 10.7.3 to 10.7.4, which includes many important security fixes, among them a fix for the FileVault issue we talked about a few days ago. The security update is also available for those running 10.6.8 (Snow Leopard). Among the many important fixes are patches for Safari, WebKit (used by Safari and much, much more), Bluetooth, and QuickTime.

Unsupported systems are unsupported

If, for some reason, you are using an older, unsupported version of Mac OS X such as Leopard (OS X 10.5) or Tiger (10.4), your system is unprotected. As I explained last week, the large majority of security flaws that get exploited by malware could have been avoided if only people kept their systems up to date.

On a similar note, Mozilla, the makers of Firefox, stopped support for Firefox 3.6 in April. Running the updater from within Firefox 3.6 should bring you to the current version, Firefox 12. Our modern 1Password extension for Firefox uses the same powerful and flexible design that we have for Safari and Chrome; and it makes future browser upgrades a breeze.

Home Folders, FileVault, and passwords

One of the things that the OS X update fixes is the aforementioned FileVault problem. If your system was set up in such a way that your login password was needed for your Home Folder to get loaded by the system, your login password may have been written to system logs in plain text. The most typical way for this to happen is if you had configured FileVault to encrypt your Home Folder before OS X 10.7 (Lion) and then upgraded your way to 10.7.3.

The same problem may also occur if your Home Folder is mounted from a network server. This is because, even under these circumstances, the actual bug was not in FileVault itself, but in the system that handles using login passwords for mounting Home Folders. Anyone with administrator powers on affected Macs could simply read everyone’s login password.

You might think that if someone has administrative powers, it doesn’t matter whether they also have your login password for your Mac. As usual, things are not that simple. It does matter if others get ahold of your login password, even if they already have administrative power on the Mac you use. First, many people reuse passwords like this (so your login password could compromise your other accounts), but your login password is also used to encrypt your OS X login keychain. This includes things like passwords that iCloud, iChat, Safari, and many other apps may store there. An attacker with administrative powers but without your login password cannot get at that information. But they can if they have your login password.

There are three things that affected users need to do:

  1. Run software update to prevent any further logging of passwords
  2. Change your login password through System Preferences > Users & Groups
  3. Remove old system logs that contain the old password by following the instructions in Apple’s support document on removing sensitive information from system logs.

It’s great that Apple was able to fix this quickly. The error really was an embarrassing blunder. But while this particular fix may be getting the headlines, there are many other important security fixes. Don’t for a moment think that you can skip it just because you aren’t affected by that specific bug.