Why is this information sensitive? The deeper Equifax problem

As the world now knows, Equifax, the credit rating company and master of our fates, suffered a data breach in May and June 2017 which revealed to criminals the details of 143 million people. (I would have liked to say “143 million customers”, but that is very far from the case. We have no control at all over Equifax and other credit rating companies collecting information about us. We are neither their customers nor their users.)

The revealed data includes:

  • Social Security numbers
  • Dates of birth
  • Addresses
  • Driver’s license numbers (unspecified number of these)
  • Credit card numbers (209,000 of these)

There are many important things to ask about this incident, but what I am focusing on today is why non-secret information has become sensitive. None of those numbers were designed to be used as secrets (including social security numbers and credit card numbers), yet we live in a world in which we have to keep them secret. What is going on here?

Identity crisis

Names only provide a first pass at identifying individuals in some list or database. There are a lot of Jeffrey Goldbergs out there. (For example, I am not the journalist and now editor-in-chief at the Atlantic. But there are lots of others that I also am not.) Also people change their names. Some people change their name when they get married. (My wife, Lívia Markóczy, decided to keep her name because we figure it is easier to spell than “Goldberg”.) Others change their names for other reasons.

We have three “Jeffreys” at AgileBits, but fortunately we have distinct family names. Though sometimes I think that everyone who joins the company should just go by “Jeffrey” to avoid confusion.

Anyway, names alone are not enough to figure out who we are talking about once we get beyond a small group of people. So we use other things. Social security numbers worked well in the US for some time. They didn’t change over your lifetime (except in rare circumstances) and nearly everyone had one. Dates of birth also don’t change. So a combination of a name, a date of birth, and a social security number was a good way to create an identifier for nearly every individual in the US, with the understanding that a name might change.
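
The combination described above can be sketched as a composite lookup key. Every value below is made up (the SSNs are the SSA’s well-known invalid example numbers), and the record contents are purely illustrative:

```python
# A (name, date of birth, SSN) triple as a record key: the combination
# picks out one record even when names collide. All values are invented.
records = {
    ("Jeffrey Goldberg", "1961-01-01", "078-05-1120"): {"allergies": ["penicillin"]},
    ("Jeffrey Goldberg", "1965-09-22", "219-09-9999"): {"allergies": []},
}

# The name alone is ambiguous; the full triple identifies one person.
me = records[("Jeffrey Goldberg", "1961-01-01", "078-05-1120")]
print(me["allergies"])  # → ['penicillin']
```

Note that a name change only touches one component of the key, so a record can be re-keyed without losing track of who it belongs to.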

Sometimes it is not a person that we need to uniquely and reliably identify. Sometimes it is something like a bank account or charge account. Cheques (remember writing those?) have the account number printed on them. They uniquely identify the particular account within a bank, and a routing number (in the US) identifies the bank. The routing number is also printed on each cheque.

Things like social security numbers and driver’s license numbers are designed as “identifiers” of people. They are ways to know which Jeffrey Goldberg is which. Occasionally getting email meant for the journalist is no big problem, but if he gets himself on the no-fly list, I want to be sure that I don’t get caught up in that net. Likewise, I don’t want my doctor or pharmacist mixing me up with some other Jeffrey Goldberg who isn’t allergic to the same stuff that I am. Nor does some other Jeffrey Goldberg want the record of speeding tickets I seem to acquire.

Things like bank or charge account numbers are used to uniquely and reliably identify the particular account. While I wouldn’t mind if my credit card charges were charged against someone else’s account, they would certainly mind, and so would the relevant bank. (I’m going to just start using the word “bank” broadly to include credit card issuers, automobile loan issuers, and the like.)

A username on some system is also an identifier. It identifies to the service which particular user or account is being talked about. I am jpgoldberg on our discussion forums. That username is how the system knows what permissions I have and how to verify my password.

Identifiers are bad secrets

Something that is designed and used as an identifier is hard to keep secret. A service can hash a password, but it needs to know which account is being talked about before it can look up any information. In many database systems, identifiers are used as record locators. These need to be efficiently searchable for lookup.

Identifiers also need to be communicated before secret stuff can happen. Bank account numbers are printed on cheques for a reason. Now really clever cryptographic protocols – like the one behind Zcash – can allow for transactions which don’t reveal the account identifier of the parties, but for almost everything else, account identifiers are not secret.

Identifiers are hard to change. If you depend on the secrecy of some identifier for your security, then you are stuck with a problem when those secrets do get compromised. It is a pain to get a new credit card number, and it is far worse trying to get a new social security number. Getting a new date of birth might also be a teeny tiny problem.
The point here is that, given what identifiers are designed to do, they aren’t designed to be kept secret.

Authentication

Authentication is the process of proving some identity. And this almost always involves proving that you have access to a secret that only you should have access to. When I use 1Password to fill in my username (jpgoldberg) and password to our discussion forums, I am proving to the system that I have access to the secret (the password) associated with that particular account.

The password is designed to be kept secret. The server running the discussion forum doesn’t need to search to find the password (unlike searching to do a lookup from my username), so it can get away with storing a salted hash of the password. Also, I can change the password without losing all of the stuff that lives under my account. (Changing my username would require more work.) Plus, my username is used to identify me to other people using the system, and so is made very public. My password, on the other hand, is not.
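
The asymmetry described above – a searchable username versus a password stored only as a salted hash – can be sketched like this. The username, password, and iteration count are all illustrative, and a real service would tune its slow KDF (PBKDF2, bcrypt, scrypt, or Argon2) to its own hardware:

```python
import hashlib
import hmac
import os

# Sketch only: username is a public, searchable record locator;
# the password itself is never stored, only a salted slow hash of it.
users = {}

def register(username: str, password: str) -> None:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    users[username] = (salt, digest)

def verify(username: str, password: str) -> bool:
    salt, stored = users[username]  # fast lookup by the public identifier
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)  # constant-time compare

register("jpgoldberg", "correct horse battery staple")
print(verify("jpgoldberg", "correct horse battery staple"))  # → True
print(verify("jpgoldberg", "wrong guess"))                   # → False
```

Notice that changing the password just means replacing one `(salt, digest)` pair; changing the username would mean re-keying the record, which is exactly the extra work mentioned above.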

What banks did wrong

The mess we are in today is because financial institutions have been using knowledge of identifiers as authentication secrets. The fact that someone can defraud a credit card issuer by knowing my credit card number (an account number) and my name and address (matters of public record) is all because at one point, credit card issuers decided that knowledge of the credit card number (a non-secret account number) was a good way to authenticate.

I have not researched the history in detail, but I believe that this started with credit card numbers when telephone shopping first became a thing (early 1970s, I believe). Prior to then, credit cards were always used when the account holder was physically present and could show the merchant an ID with a signature. The credit card number was used solely as designed up until that point: as a record locator.

The same thing is true of social security numbers: there was nothing secret about them until banks started to use knowledge of them as authentication proofs when they introduced telephone banking.

And on it goes

Because high-value systems use knowledge of identifiers as authentication proofs, we are in deep doo-doo. And it will take a long time to dig ourselves out. But we continue to dig ourselves deeper.

It is fine to be asked for non-secret identifying information to help someone or something figure out who they are talking about. I like it when my doctor asks for my date of birth to make sure that they are looking at and updating the right records. But when they won’t reveal certain information to me unless I give them my date of birth, then we have a problem. That is when they start using knowledge of an identifier as an authentication secret.

Over the past decade or so, various institutions have been told that they can’t hold on to social security numbers, and so can’t use them for identifiers. That is a pity, because those are the best identifiers we have in the US. But what is worse is that knowledge of the new identifiers is being used for authentication.

Right now, Baskin-Robbins knows my date of birth (so they can offer me some free ice-cream on my birthday). In ten years, will I have to keep my birth date a closely guarded secret so that I don’t become a victim of some financial or medical records crime? If we keep on making this mistake – using identifiers as authentication secrets – that is where we are headed.

Incentives matter more than technology

I do not want to dismiss the technological hurdles in fixing this problem, but I believe that there is a bigger (and harder) problem that will need to be fixed first: the incentives are in the wrong place.

When Fraudster Freddy gets a loan from Bank Bertha using the identity of Victim Victor, Bertha is (correctly) responsible for the direct financial loss. The problem is that there are costs beyond the immediate fraudulent loan that are borne by Victor. But Victor has no capacity or opportunity to prevent himself from being a victim. In economics jargon, Victor suffers a negative externality.

Bertha factors in the risk of the direct cost to her of issuing a loan to a fraudster. She looks at that risk when deciding how thoroughly to check that Freddy is who he says he is. Bertha could insist that new customers submit notarized documents, but if she insists on that and her competitors don’t, then she would lose business to those competitors.

But Bertha does not factor in the indirect costs to Victor. She has no dealings with Victor. Victor isn’t a potential customer. So if Victor has costly damage to his credit and reputation that requires a lot of effort to sort out, that is not Bertha’s problem (and it certainly isn’t Freddy’s problem.)

Only when Freddy and Bertha (the parties to the original deal) have to pay the cost of the damage done to Victor (economics jargon: “internalizing the externalities”) will Bertha have the incentive to improve authentication. I don’t have an answer to how we get there from here, but that is the direction we need to head. In the meantime, if you find yourself a victim (whether you’re a Victor, a Jeffrey, or something else entirely), Kate published a post earlier this week with tips to protect yourself until we (hopefully) do get all of this figured out one day.

Net neutrality: Keeping the Internet safe and accessible for all

Lo, everyone! Back on October 29, 1969, that two-letter greeting was the first message sent over ARPANET, the predecessor of today’s Internet. Today, on July 12, 2017, people from around the globe are coming together for a day of action to fight for net neutrality. The principle of net neutrality states that all Internet traffic should be treated equally, but those who control the transmission of that data have been fighting for the right to place their preferred data in the fast lane and leave data they don’t like in a traffic jam. We here at AgileBits care quite a lot about data, and while we’re glad your sensitive data is safely locked away, we think the data we want to share on the Internet should remain accessible to everyone. Read more

Be your own key master with 1Password

Encryption is great. By magic (well, by math) it converts data from a useful form to complete gibberish that can only be turned back into useful data with a secret number called a key.

I happen to think that the term “key” that we use for encryption and decryption keys is a poor metaphor, as it suggests unlocking a door or a box. Cryptographic keys are more like special magic wands that are essential to the process of transforming data from its useful (decrypted) form to gibberish and back again. Read more

Introducing Travel Mode: Protect your data when crossing borders

We often get inspired to create new features based on feedback from our customers. Earlier this month, our friends at Basecamp made their Employee Handbook public. We were impressed to see they had a whole section about using 1Password, which included instructions for keeping work information off their devices when travelling internationally.

We knew right away that we wanted to make it easier for everyone to follow this great advice. So we hunkered down and built Travel Mode.

Travel Mode is a new feature we’re making available to everyone with a 1Password membership. It protects your 1Password data from unwarranted searches when you travel. When you turn on Travel Mode, every vault will be removed from your devices except for the ones marked “safe for travel.” All it takes is a single click to travel with confidence.

It’s important for me that my personal data be as secure and private as possible. I have data on my devices that’s ultimately a lot more sensitive than my personal data though. As one of the developers here at AgileBits I’m trusted with access to certain keys and services that we simply can’t take any risks with.

How it works

Let’s say I had an upcoming trip for a technology conference in San Jose. I hear the apples are especially delicious over there this time of year. :) Before Travel Mode, I would have had to sign out of all my 1Password accounts on all my devices. If I needed certain passwords with me, I had to create a temporary travel account. It was a lot of work and not worth it for most people.

Now all I have to do is make sure any of the items I need for travel are in a single vault. I then sign in to my account on 1Password.com, mark that vault as “safe for travel,” and turn on Travel Mode in my profile. I unlock 1Password on my devices so the vaults are removed, and I’m now ready for my trip. Off I go from sunny Winnipeg to hopefully-sunnier San Jose, ready to cross the border knowing that my iPhone and my Mac no longer contain the vast majority of my sensitive information.

After I arrive at my destination, I can sign in again and turn off Travel Mode. The vaults immediately show up on my devices, and I’m back in business.

Not just a magic trick

Your vaults aren’t just hidden; they’re completely removed from your devices as long as Travel Mode is on. That includes every item and all your encryption keys. There are no traces left for anyone to find. So even if you’re asked to unlock 1Password by someone at the border, there’s no way for them to tell that Travel Mode is even enabled.

In 1Password Teams, Travel Mode is even cooler. If you’re a team administrator, you have total control over which secrets your employees can travel with. You can turn Travel Mode on and off for your team members, so you can ensure that company information stays safe at all times.

Travel Mode is going to change how you use 1Password. It’s already changed the way we use it. When we gave a sneak peek to our friends at Basecamp, here’s what their founder, David Heinemeier Hansson, had to say:

International travel while maintaining your privacy (and dignity!) has become increasingly tough. We need better tools to help protect ourselves against unwarranted searches and the leakage of business and personal secrets. 1Password is taking a great step in that direction with their new Travel Mode. Bravo.

Travel Mode is available today, included in every 1Password membership. Give it a shot, and let us know how you travel with 1Password.

Learn how to use Travel Mode on our support site.

1Password is #LayerUp-ed with modern authentication

We should remember on this World Password Day that passwords have been around for thousands of years. They can’t be all bad if we’ve kept them around so long, but some things need to change.

This year we are partnering with Intel to talk about layers of security, and in this article I’m going to talk about how 1Password adds invisible layers of security to what might look like an old-fashioned sign-in page with a username and password.

Passwords are not a bad way of proving who you are. But the way that they’re used in practice exposes people to a number of unnecessary risks. It is long past time to move beyond traditional password authentication, and with 1Password accounts, we’re doing so.

Breaking tradition

When we designed the sign-in process for your 1Password account, we wanted something that looks reasonably familiar to people but secures against the risks that a traditional password login does not. Indeed, when you sign in to a 1Password account, although you enter your Master Password, you do not send us any secrets at all.

The buzzword explanation is that our end-to-end encryption uses Secure Remote Password (SRP) as a Password-based Authenticated Key Exchange (PAKE) along with Two-Secret Key Derivation (2SKD) to get all of the security properties we want without having to place too much additional burden on the users.

If you’re interested in knowing exactly what all that means, read on. Otherwise, stop here and enjoy a technical discussion of how migratory swallows can transport coconuts.

Halt! Who goes there?

Authenticating (proving who you are) with passwords hasn’t really changed much over the centuries. And I’m going to start my discussion of the security properties it does and doesn’t offer with a very traditional example.

Queen Penelope approaches a castle gate where a guard (we’ll call him Vincent) is on duty.

Many web services use this traditional method, and many are plagued with the same security problems (which I will get to shortly). As a society we need to be doing much better than this, and we are proud to say that 1Password is doing much better than this!

1Password’s major line of security is its end-to-end encryption: your data is encrypted with keys that never leave your devices. So even if someone does get past the castle gate there is not much they can do with what they find inside. But today we are talking about the security of signing in to the 1Password service.

The problems found in the castle gates method

A method of authentication that has been in place for thousands of years may have a lot going for it. It simply shouldn’t be dropped like coconuts dropped by migrating swallows. But it does bear looking into. And when we do look into it, we see lots of problems.

  • Reuse: If Penelope uses the same password for multiple castles, then the discovery of one of those passwords in one place can lead to a breach of all of the castles that she uses that password for.
  • Guessable: The sorts of passwords that Penelope is going to use in these sorts of circumstances are probably guessable with a reasonable amount of effort. This guessing can be made easier if an attacker gets hold of the data that Vincent stores to verify passwords (even if it isn’t the passwords themselves).
  • Replay: If someone overhears Penelope saying her password to Vincent, then they can use what they overheard to pretend to be Penelope at some later time.
  • Revealing password to those in the castle: Even if Penelope and Vincent could avoid being overheard, Penelope is giving away a secret to someone who may or may not be the real Vincent. Even if it is the right Vincent, she has to trust him to not do anything he shouldn’t with her password (like pretending to be her at some other castle).
  • Not mutual: Penelope proves to Vincent that she is who she says she is, but Vincent does not prove to Penelope that she is at the castle that she thinks she is at. Sure, it might be tricky for one castle to pretend to be a different one, but internet services are another matter.
  • Further communication is still unsafe: If Penelope is brought within the castle walls after authentication, all is good. But if (keeping the analogy closer to an Internet service) she remains outside the castle but carries on conversations with people within the castle, their conversation will still be overheard and perhaps tampered with by someone else outside the castle walls.

Building better castles and gates

Traditional authentication, whether at castle gates or on websites, really only has the client or user proving their identity to the server. But we should be asking for much more security in a modern authentication process. Not only should it have the client prove its identity to the server, but also:

  1. The server should prove its identity to the client (mutual authentication).
  2. An eavesdropper on the conversation should not be able to learn any secrets of either the client or the server.
  3. An eavesdropper should not be able to replay the conversation to get in at some later time.
  4. Client secrets should not be revealed to the server.
  5. The process should set up a secure session beyond just the authentication process.
  6. User secrets should be unguessable.
  7. User secrets should be unique.

I know that a lot of the things on that list overlap, and solving one often involves solving another. But there are some technical reasons for their separation. Also it makes for a more impressive list when they are broken apart this way.

What’s inside the castle

There may be ways for Oscar (Penelope’s opponent) to get into the castle that do not involve providing proofs of identity to Vincent. Perhaps there are other doors. Perhaps it is possible to tunnel in or fly over the walls. Perhaps it is possible to hide inside a large wooden rabbit that is brought into the castle. Perhaps someone already inside and trusted is an enemy spy. Requiring that Vincent check multiple factors will not help defend against any of those.

It is 1Password’s end-to-end encryption that defends against those sorts of attacks. Although end-to-end encryption is probably our most important layer of security, it is not the one that I am talking about today. I bring it up not only because it is our most important line of defense, but because we need to make sure that nothing in our authentication system could weaken that encryption. We don’t want an attack on the authentication system to give the attacker any information that would be helpful in decrypting Penelope’s data. (Don’t worry if that didn’t make sense at this point; it should by the end.)

With end-to-end encryption the stuff in the castle that is valuable to Penelope is useless to Oscar because only Penelope has the ability to make use of it. Penelope has secrets that never leave her side that can transform her pile of useless stuff kept inside the castle into things that are very valuable. Authentication is about proving who you are and being given access to something. Encryption (and decryption) is about transforming stuff from a useful form into a useless form (and back again).

End-to-end encryption means that even if Oscar gets inside, he will not be able to read the data that he finds. Penelope’s data – even within the castle – can only be decrypted using secrets that only Penelope has and which are never revealed to anyone else.

Once more into the breach

There is one extra thing we need to consider to keep all of this secure. And that is whether an attacker, Oscar, who gets into the castle can use the same information that Vincent has in a way that helps him (Oscar) decrypt Penelope’s data.

When Vincent checks to see whether Penelope has provided the correct password, he needs to check against some data stored some place (perhaps in his head). If computer systems worked like those castles then there might be the same password for everyone and Vincent would know it, too. Fortunately, most web services do much better than that. Each person has their own password and the service should not be storing those passwords unencrypted.

Services do not want to store the passwords that are needed to get into the service. Vincent shouldn’t store that Penelope’s password is xyzzy. Having a piece of paper around with all of that data would be dangerous. So Vincent doesn’t actually have those passwords himself; instead he has password verifiers. (Vincent verifies passwords in this story, and Penelope proves her identity. Oscar is their opponent.)

Making a hash of it (is a good thing)

I have to step back from the castle metaphor at this point, as Vincent needs to do some math that wasn’t around at the time. The verifiers that Vincent stores are typically (though not necessarily) cryptographic hashes of the password. The beauty of a cryptographic hash is that it is quick and easy to compute that the hash of “xyzzy” is “mzgC7LrRFCZ90NGkMeV7C8qVkwo=”, but it is pretty much impossible to compute things in the other direction. So in a typical web login system, when Penelope tells Vincent her password, Vincent computes the hash of what she says and then compares that to what he has stored. This way, if Oscar steals the list of these hashes from the castle, the passwords that people use to authenticate remain safe.
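
The forward direction really is that quick. In the sketch below I’m assuming SHA-1 with Base64 encoding purely because the quoted string has SHA-1’s 20-byte length; a real verifier would use a slow, salted hash rather than a bare digest:

```python
import base64
import hashlib

# Computing the hash of "xyzzy" is fast and easy ...
digest = hashlib.sha1(b"xyzzy").digest()
verifier = base64.b64encode(digest).decode()
print(verifier)
# ... but recovering "xyzzy" from the 28-character verifier string is
# computationally infeasible; the function only runs in one direction.
```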

But we still have a problem. Even though Oscar can’t compute “xyzzy” from “mzgC7LrRFCZ90NGkMeV7C8qVkwo=” he can use that hash to verify guesses. Oscar can guess “OnlyUlysses4Me” and see what hash it produces. If it doesn’t produce the hash he is checking against, then he tries another guess. If Oscar has the right sort of equipment and a copy of the verifier/hash stolen from the castle, he can make a large number of guesses very quickly. If Oscar knows that Penelope enjoys adventures, he will eventually guess “xyzzy”, and will see that he gets the right verifier.
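
Oscar’s offline attack fits in a few lines. The tiny candidate list below stands in for the billions of guesses per second that real cracking rigs manage (SHA-1 and Base64 are assumed here, as above):

```python
import base64
import hashlib

def to_verifier(password: str) -> str:
    return base64.b64encode(hashlib.sha1(password.encode()).digest()).decode()

stolen = to_verifier("xyzzy")  # what Oscar lifted from the castle

# No interaction with Vincent needed: hash each guess and compare.
cracked = None
for guess in ["OnlyUlysses4Me", "plugh", "xyzzy"]:
    if to_verifier(guess) == stolen:
        cracked = guess
        break

print(cracked)  # → xyzzy
```

This is why the slowness of a proper password-hashing function matters: it throttles exactly this guess-and-check loop.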

One password, two secrets

Now those of you still following along might wonder why Oscar would bother trying to discover Penelope’s password to get into the castle if he already managed to get into the castle to steal Vincent’s list. There are many reasons why figuring out Penelope’s password would be useful to Oscar, but the one that we are concerned about here is that it might be useful for decrypting Penelope’s data. Even though Penelope’s data is encrypted using a secret that only Penelope knows, we want Penelope to only have to know one password to get to her data (be able to retrieve her stuff from the castle) and decrypt it once she has retrieved it. This one password is her Master Password.

Her password, xyzzy, is being used for two purposes. One is for authentication (getting into the castle) and the other is for her end-to-end encryption to decrypt her data. And this is why – even though 1Password’s security is primarily based on end-to-end encryption – we need our authentication system to not give Oscar anything useful for attacking Penelope’s data. We do not want anything we store to aid Oscar in guessing Penelope’s Master Password.

We want it so that even if Oscar gets into the castle and gets hold of Vincent’s list of verifiers, he is no closer to getting at Penelope’s secrets. Although SRP gives us many of the security properties we want, it still leaves Vincent holding a verifier. That verifier – if we didn’t take countermeasures – could be used by Oscar for checking password guesses even though it isn’t a traditional password hash.

Two secrets, one password

We want the verifiers that we store to be uncrackable. Our solution is Two-Secret Key Derivation (2SKD). Penelope has a secret other than her Master Password. It is her Secret Key. When she first sets up her account the software running on her machine will create an unguessable Secret Key. This Secret Key gets blended in with Penelope’s Master Password all on her own devices.
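
The blending can be sketched roughly as follows. This is my own simplified construction with made-up parameters, not 1Password’s actual 2SKD derivation: a slow KDF stretches the Master Password, then the high-entropy Secret Key is mixed in, all on Penelope’s device.

```python
import hashlib
import hmac
import secrets

def derive_account_key(master_password: str, secret_key: bytes, salt: bytes) -> bytes:
    # Stretch the (possibly guessable) Master Password with a slow KDF ...
    stretched = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 100_000)
    # ... then mix in the unguessable Secret Key (HKDF-style extraction).
    return hmac.new(secret_key, stretched, hashlib.sha256).digest()

secret_key = secrets.token_bytes(32)  # generated once, locally, unguessable
salt = secrets.token_bytes(16)
key = derive_account_key("correct horse battery staple", secret_key, salt)

# Knowing the Master Password alone is not enough to reproduce the key:
other = derive_account_key("correct horse battery staple", secrets.token_bytes(32), salt)
print(key != other)  # → True
```

The payoff: any verifier derived from this combined key inherits the Secret Key’s full entropy, so there is nothing crackable for Oscar to guess against.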

When Penelope proves her identity to our servers, she is not just proving that she knows her Master Password, but she is also proving that she (well, her software) has her Secret Key. In fact, she is proving that she has a combination of these. That combination is unguessable and so cannot be used to get at the secrets needed to decrypt her data.

2SKD means that Vincent only stores uncrackable verifiers. So there is little that Oscar could do if he stole the list of verifiers. SRP means that neither Vincent nor anyone listening in during authentication can learn Penelope’s secrets. And with all of Penelope’s authentication secrets nice and secure, they can also be used for the end-to-end encryption of her data.

A very brief word about SRP

SRP and other similar systems involve proving that you know a secret without revealing that secret. As much as I would love to go more into how that happens, this article has become too long as it is. So let me just say that the client and the server present each other with mathematical puzzles that can only be solved with knowledge of a secret, but solutions to those puzzles do not reveal anything about the secret. And because each puzzle is new each time, someone who records a session cannot replay it later.
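
For readers who want a small taste of the arithmetic, here is a toy run of the SRP-style exchange. The group and hash below are deliberately tiny and insecure, chosen only so the algebra is visible; real SRP (as in RFC 5054) uses a large safe prime and carefully padded hashing:

```python
import hashlib
import secrets

# Toy parameters for illustration ONLY -- far too small for real use.
N = 2**127 - 1  # a Mersenne prime
g = 3

def H(*args) -> int:
    data = "|".join(str(a) for a in args).encode()
    return int(hashlib.sha256(data).hexdigest(), 16) % N

# Registration: Penelope derives a verifier; Vincent stores only (salt, v).
salt = secrets.randbelow(N)
x = H(salt, "xyzzy")   # in real SRP, x comes from a slow KDF of the password
v = pow(g, x, N)       # the verifier held by Vincent

# Login: both sides exchange public values; the password is never sent.
a = secrets.randbelow(N); A = pow(g, a, N)                       # client
b = secrets.randbelow(N); B = (H(N, g) * v + pow(g, b, N)) % N   # server
u = H(A, B)
k = H(N, g)

# Client combines its password-derived x with the server's B ...
S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
# ... while the server combines the stored verifier with the client's A.
S_server = pow(A * pow(v, u, N) % N, b, N)

print(S_client == S_server)  # → True: shared key, no secret transmitted
```

Because the ephemeral values `a` and `b` are fresh each time, a recorded session is useless for replay, which is exactly the property described above.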

I’ve tried to give an overview of how the interaction of three things in 1Password’s design – end-to-end encryption, 2SKD, and SRP – protects your personal information from whole categories of attacks. These design elements are mostly invisible to people using 1Password. You do not need to know about these things to use 1Password well, but we like to let you know they are there. This is all part of how we strive to make dealing with passwords both easier and more secure. Although not all of the techniques described here are appropriate for all sites and services, we do hope that the ideas discussed here can help improve authentication not just for 1Password but for other systems as well.

More detail about all of this and more is in our 1Password Security white paper.

And so a Happy World Password Day to one and all.

More than just a penny for your thoughts — $100,000 top bounty

We believe that we’ve designed and built an extremely secure password management system. We wouldn’t be offering it to people otherwise.  But we know that we – like everyone else – may have blind spots. That is why we very much encourage outside researchers to hunt for security bugs. Today we are upping that encouragement by raising the top reward in our bug bounty program.

We have always encouraged security experts to investigate 1Password, and in 2015 we added monetary rewards through Bugcrowd. This has been a terrific learning experience both for us and for the researchers. We’ve learned of a few bugs, and they’ve learned that 1Password is not built like the web services they are used to attacking. [Advice to researchers: Read the brief carefully and follow the instructions for where we give you some internal documentation and various hints.]

Since we started with our bounty program, Bugcrowd researchers have found 17 bugs, mostly minor issues during our beta and testing period. But there have been a few higher payout rewards that pushed up the average to $400 per bug. So our average payout should cover a researcher’s Burp Suite Pro license for a year.

So far none of the bugs represented a threat to the secrecy of user data, but even small bugs must be found and squashed. Indeed, attacks on the most secure systems nowadays tend to involve chaining together a series of seemingly harmless bugs.

Capture the top flag to get $100,000


Our 1Password bug bounty program offers tiered rewards for bug identification, starting at $100. Our top prize goes to anyone who can obtain and decrypt some bad poetry (in particular, a horrible haiku) stored in a 1Password vault that researchers should not have access to. We are raising the reward for that from $25,000 to $100,000. (All rewards are listed in US dollars, as those are easier to transfer than hundreds or thousands of Canadian dollars worth of maple syrup.) This, it turns out, makes it the highest bounty available on Bugcrowd.

We are raising this top bounty because we want people to really go for it. It will take hard work to even get close, but that work can pay off even without reaching the very top prize: in addition to the top challenge, there are other challenges along the way. But nobody is going to get close without making a careful study of our design.

Go for it

Here’s how to sign up:

  • Go to bugcrowd.com and set up an account.
  • Read the documentation on the 1Password Bugcrowd profile.
  • The AgileBits Bugcrowd brief instructs researchers where to find additional documentation on APIs, hints about the location of some of the flags, and other resources for taking on this challenge. Be sure to study that material.
  • Go hunting!

If you have any questions or comments, we’d love to hear from you. Feel free to respond on this page, or send us an email at security@agilebits.com.

Three layers of encryption keeps you safe when SSL/TLS fails

No 1Password data is put at risk by the reported Cloudflare bug. 1Password does not depend on the secrecy of SSL/TLS for your security. The security of your 1Password data remains safe and solid.

In the coming days we will provide a more detailed description of the Cloudflare security bug and how it (doesn’t) affect 1Password. For now, we want to assure and remind everyone that we designed 1Password with the expectation that SSL/TLS can fail. Indeed, it was precisely for incidents like this that we chose this design.

No secrets are transmitted between 1Password clients and 1Password.com when you sign in and use the service. Our sign-in uses SRP, which means that server and client prove their identity to each other without transmitting any secrets. This means that users of 1Password do not need to change their Master Passwords.
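To give a feel for how this works, here is a minimal SRP-6a sketch in Python. The group parameters, the username and password, and the password-hashing step are all simplified placeholders for illustration; real deployments use large standardized groups such as those in RFC 5054, and our actual implementation differs in its details.

```python
import hashlib
import secrets

# Toy group parameters for brevity; real SRP uses standardized groups.
N = 2**255 - 19          # a prime modulus
g = 2                    # generator

def H(*args) -> int:
    """Hash any mix of values to an integer (simplified from the spec)."""
    data = "|".join(str(a) for a in args).encode()
    return int(hashlib.sha256(data).hexdigest(), 16)

# Registration: the client derives a verifier v from the password.
# Only salt and v are ever sent to the server -- never the password.
salt = secrets.randbits(64)
x = H(salt, "alice", "correct horse battery staple")
v = pow(g, x, N)

# Sign-in: each side picks an ephemeral secret and exchanges only publics.
a = secrets.randbits(128); A = pow(g, a, N)          # client
b = secrets.randbits(128)                            # server
k = H(N, g)
B = (k * v + pow(g, b, N)) % N                       # server
u = H(A, B)

# Both sides arrive at the same session key, yet neither the password,
# the verifier, nor the key itself ever crosses the wire.
S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
S_server = pow(A * pow(v, u, N) % N, b, N)
session_key = H(S_client)
```

Because `S_client` and `S_server` are algebraically equal (`g` raised to `b·(a + u·x)` on both sides), each party proves knowledge of its secret simply by being able to compute the shared key.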

Your actual data is encrypted with three layers (including SSL/TLS), and the other two layers remain secure even if the secrecy of an SSL/TLS channel is compromised.

The three layers are:

  1. SSL/TLS. This is what puts the “S” in HTTPS, and it is the layer whose data may have been exposed by the Cloudflare bug during the vulnerable period.
  2. Our own transport layer authenticated encryption using a session key that is generated using SRP during sign in. The secret session keys are never transmitted.
  3. The core encryption of your data. Except when you are viewing your data on your system, it is encrypted with keys that are derived from your Master Password and your secret Account Key. This is the most important layer, as it would protect you even if our servers were to be breached. (Our servers were not breached.)
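The layering can be illustrated with a toy sketch. This is not the real cipher suite we use; the XOR-keystream-plus-HMAC construction below merely stands in for authenticated encryption, and the hard-coded keys are invented for the example.

```python
import hashlib
import hmac

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes from a key (toy stream cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt and append an authentication tag."""
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))
    return ct + hmac.new(key, ct, hashlib.sha256).digest()

def open_(key: bytes, sealed: bytes) -> bytes:
    """Verify the tag, then decrypt."""
    ct, tag = sealed[:-32], sealed[-32:]
    assert hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest())
    return bytes(c ^ k for c, k in zip(ct, keystream(key, len(ct))))

master_key  = b"K3-derived-from-Master-Password!"   # layer 3
session_key = b"K2-SRP-session-key--------------"   # layer 2
tls_key     = b"K1-TLS-channel-key--------------"   # layer 1

secret = b"hunter2"
wire = seal(tls_key, seal(session_key, seal(master_key, secret)))

# If the outermost (TLS) layer is stripped, the attacker still holds
# only ciphertext: layers 2 and 3 stand on their own.
exposed = open_(tls_key, wire)
assert exposed != secret
assert open_(master_key, open_(session_key, exposed)) == secret
```

The point of the sketch is the last three lines: peeling off the transport layer alone reveals nothing but another layer of ciphertext.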

Using Intel’s SGX to keep secrets even safer

When you unlock 1Password there are lots of secrets it needs to manage. There are the secrets that you see and manage, such as your passwords and secure notes and all of the other things you entrust to 1Password. But there are lots of secrets that 1Password has to juggle that you never see. These include the various encryption keys that 1Password uses to encrypt your data. These are 77-digit (256-bit) completely random numbers.
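To see where “77-digit” comes from: log10(2^256) ≈ 77.06, so a randomly chosen 256-bit number usually has 77 decimal digits (occasionally 78). In Python:

```python
import secrets

# A completely random 256-bit number, like the keys described above.
key = secrets.randbits(256)

assert key.bit_length() <= 256
# The largest 256-bit number, 2**256 - 1, has 78 decimal digits;
# most randomly chosen 256-bit numbers have 77.
assert len(str(2**256 - 1)) == 78
```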

You might reasonably think that your data is encrypted directly by your Master Password (and your secret Account Key), but there are a number of technical reasons why that wouldn’t be a good idea. Instead, your Master Password is used to derive a key encryption key which is used to encrypt a master key. The details differ for our different data formats, but here is a little ditty from our description of the OPVault data format to be sung to the tune of Dry Bones.

Each item key’s encrypted with the master key
And the master key’s encrypted with the derived key
And the derived key comes from the MP
Oh hear the word of the XOR
Them keys, them keys, them random keys (3x)
Oh hear the word of the XOR
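Sketched with the standard library’s PBKDF2, the chain in the ditty looks roughly like this. The parameters and the toy XOR “encryption” are illustrative only; the real OPVault format uses authenticated AES encryption and its own parameter choices.

```python
import hashlib
import secrets

master_password = b"correct horse battery staple"   # example password
salt = secrets.token_bytes(16)

# "the derived key comes from the MP"
derived_key = hashlib.pbkdf2_hmac("sha512", master_password, salt, 100_000)

# "the master key's encrypted with the derived key" -- shown here with a
# toy XOR (hear the word of the XOR!); the real format uses AES.
master_key = secrets.token_bytes(32)
encrypted_master_key = bytes(m ^ d for m, d in zip(master_key, derived_key))

# Unlocking reverses the chain: re-derive the key, then decrypt.
rederived = hashlib.pbkdf2_hmac("sha512", master_password, salt, 100_000)
unlocked = bytes(c ^ d for c, d in zip(encrypted_master_key, rederived))
assert unlocked == master_key
```

Note that the master key itself is random; only the key that wraps it comes from your Master Password, which is why the password can be changed without re-encrypting all of your data.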

And that is a simplification! But it is the appropriate simplification for what I want to talk about today: some of our intrepid 1Password for Windows beta testers can start using a version of 1Password 6 for Windows that adds extra protection to the “master key” described in that song. We have been working with Intel over the past few months to bring the protection of Intel’s Software Guard Extensions (SGX) to 1Password.

Soon (sometime this month), 1Password for Windows customers running on systems that support Intel’s SGX will have another layer of protection around some of their secrets.

SGX support in 1Password isn’t ready for everybody just yet, as there are a number of system requirements, but we are very happy to talk about what we have done so far and where we are headed. I would also like to say that we would not be where we are today without the support of many people at Intel. It has been great working with them, and I very much look forward to continuing this collaboration.

What does Intel’s SGX do?

Intel, as most of you know, makes the chips that power most of the desktop and laptop computers we all use. Their most recent CPUs include the ability for software running on Windows and Linux to create and use secure enclaves that are safe from attacks coming from the operating system itself. It is a security layer in the chip that cryptographically protects regions of operating system memory.

SGX does a lot of other things, too; but the feature I’m focusing on now is the privacy it offers for regions of system memory and computation.

Ordinary memory protection

A program running on a computer needs to use the system’s memory. It needs this both for the actual program and for the data that the program is working on. It is a Bad Thing™ if one program can mess with another program’s memory. And it is a security problem if one program can read the memory of another program. We don’t want some other program running on your computer to peek at what is in 1Password’s memory when 1Password is unlocked. After all, those are your secrets.

It is the operating system’s (OS’s) job to make sure that one process can’t access the memory of another. Back in the old days (when I had to walk two miles through the snow to school, uphill, both ways) some operating systems did not do a good job of enforcing memory protection. Programs could easily cause other programs or the whole system to crash, and malware was very easy to create. Modern operating systems are much better about this. They do a good job of making sure that only the authorized process can read and manipulate certain things in memory. But if the operating system itself gets compromised, or if some other mechanism allows for the reading of all memory, then secrets in one program’s part of memory may still be readable by outsiders.

Extraordinary memory protection

One way to protect a region of memory from the operating system itself is to encrypt that region’s contents using a key that even the operating system can’t get to. That is a tricky thing to do as there are few places to keep the key that encrypts this memory region if we really want to keep it out of the hands of the operating system.

So what we are looking for is the ability to encrypt and decrypt regions of memory quickly, but using a key that the operating system can’t get to. Where should that key live? We can’t just keep it in the innards of a program that the operating system is running, as the operating system must be able to see those innards to run the program. We can’t keep the key in the encrypted memory region itself, because that is like locking your keys in your car: nobody, not even the rightful owner, could make use of what is in there. So we need some safe place to create and keep the keys for these encrypted regions of memory.

Intel’s solution is to create and keep those keys in the hardware of the CPU. A region of memory encrypted with such a key is called an enclave. The SGX development and runtime tools for Windows allow us to build 1Password so that when we create certain keys and call certain cryptographic operations, those keys are stored and used within an SGX enclave.

An enclave of one’s own

When 1Password uses certain tools provided by Intel, the SGX module in the hardware will create an enclave just for the 1Password process. It does a lot of work for us behind the scenes. It requests memory from the operating system, but the hardware on Intel’s chip will be encrypting and validating all of the data in that region of memory.

When 1Password needs to perform an operation that relies on functions or data in the enclave, we make the request to Intel’s crypto provider, which talks directly to the SGX portions of the chip, which then perform the operation in the encrypted SGX enclave.

Not even 1Password has full access to its enclave; instead, 1Password can only ask the enclave to perform the tasks that it was programmed to do. 1Password can say, “hey enclave, here is some data I would like you to decrypt with that key you have stored,” or “hold onto this key; I may ask you to do things with it later.”
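Conceptually, the interface looks something like this toy sketch. This is plain Python, not the Intel SGX SDK; the class, its handle scheme, and its XOR “cipher” are all invented for illustration of the “keys never leave” idea.

```python
import hashlib

class ToyEnclave:
    """Holds keys internally; callers get opaque handles, never key bytes."""

    def __init__(self):
        self._keys = {}

    def store_key(self, key: bytes) -> int:
        # "hold onto this key, I may ask you to do things with it later"
        handle = len(self._keys)
        self._keys[handle] = key
        return handle

    def encrypt(self, handle: int, plaintext: bytes) -> bytes:
        return self._xor(handle, plaintext)

    def decrypt(self, handle: int, ciphertext: bytes) -> bytes:
        # "here is some data I would like you to decrypt with that key"
        return self._xor(handle, ciphertext)

    def _xor(self, handle: int, data: bytes) -> bytes:
        key = self._keys[handle]                 # key never crosses this wall
        stream = hashlib.sha256(key).digest()    # toy keystream (<= 32 bytes)
        return bytes(d ^ s for d, s in zip(data, stream))

enclave = ToyEnclave()
h = enclave.store_key(b"a-32-byte-master-key-goes-here!!")
ct = enclave.encrypt(h, b"secret")
assert enclave.decrypt(h, ct) == b"secret"
```

The caller only ever works with the handle `h`; in the real SGX case the wall around `self._keys` is enforced by the hardware rather than by Python scoping.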

What’s in our enclave? Them keys, of course!

When you enter your Master Password in 1Password for Windows, 1Password processes that password with PBKDF2 to derive the master key to your primary profile in the local data store. (The local data store and the profiles within it are well hidden from the user, but this is where the keys to other things are kept. The point is that your master key is a really important key.)

When you do this on a Windows system that supports SGX, the same thing happens, except that the computation of the master key is done within the enclave. The master key that is derived through that process is also retained within the enclave. When 1Password needs to decrypt something with that key, it can simply ask the enclave to perform that decryption. The key never needs to leave the enclave.

Answers to anticipated questions

What does (and doesn’t) this protect us from?

I must start out by saying what I have often said in the past. It is impossible for 1Password (or any program) to protect you if the system you are running it on is compromised. You need to keep your devices free of malware. But using SGX makes a certain kind of local attack harder for an attacker, particularly as we expand our use of it.

The most notable attacks that SGX can start to help defend against are attacks that exploit Direct Memory Access (DMA). Computers with certain sorts of external ports can sometimes be tricked into allowing a peripheral device to read large portions of system memory.

As we expand and fine tune our use of SGX we will be in a better position to be more precise about what attacks it does and doesn’t defend against, but the ability to make use of these enclaves has so much potential that we are delighted to have made our first steps in using the protections that SGX can offer.

What will be in our enclave in the future?

As we progress with this, we will place more keys, and more operations involving those keys, into the SGX secure enclave. What you see today is just the beginning. When the master key is used to decrypt some other key, that other key should only live within the enclave. Likewise, the secret part of your personal key set should also live within the enclave only. I can’t promise when these additions will come. We still need to get the right cryptographic operations functioning within the enclave and reorganize a lot of code to make all of that Good Stuff™ happen, but we are very happy to have taken the first steps with the master key.

We do not like promising features until they are delivered. So please don’t take this as a promise. It is, however, a plan.

Sealed enclaves?

Among the features of SGX that I have not mentioned so far is the ability to seal an enclave. This would allow the enclave to not just keep secrets safe while the system is running, but to allow it to persist from session to session. Our hope is that we can pre-compute secrets and keep them in a sealed enclave. This should (if all goes to plan) allow 1Password to start up much more quickly as most of the keys that it needs to compute when you first unlock it can already be in an enclave ready to go.

A sealed enclave would also be an ideal place to store your secret 1Password.com Account Key, as a way of protecting that from someone who gains access to your computer.

Is security platform-specific?

1Password can only make use of SGX on some Windows PCs with Intel Skylake CPUs that have been configured to make use of SGX. Thus SGX support in 1Password is not going to be available to every 1Password user. So it is natural to ask whether 1Password’s security depends on the platform you use.

Well, there is the trivial answer of “yes”. If you use 1Password on a device that hasn’t been updated and is filled with dubious software downloaded from who knows where, then using 1Password will not be as secure as when it is running on a device which is better maintained. That goes without saying, but that never stops me from saying it. Really, the easiest and number one thing you can do for your security is to keep your systems and software up to date.

The nontrivial answer is that 1Password’s security model remains the same across all of the platforms on which we offer it. But it would be foolish not to take advantage of a security feature available on one platform merely because such features aren’t available on others. So we are happy to begin to offer this additional layer of security to those of our customers who have computers that can make use of it.

Upward and downward!

I’d like to conclude by saying how much fun it has been breaking through (or going around) layers. People like me have been trained to think of software applications and hardware as separated by the operating system. There are very good reasons for that separation (indeed, that separation does a great deal for application security), but now we see that some creative, thoughtful, and well-managed exceptions to that separation can have security benefits of their own. We are proud to be a part of this.

Our Security, Our Rights

Every day it feels like our rights to privacy and security are under attack, and indeed, if you’re keeping up with the news, this is a lot more than just a feeling.

Governments and law enforcement agencies around the world are pushing hard for new powers to keep tabs on their citizens. They argue they require the ability to track your activities and access your private information in order to protect you. And they’re willing to weaken encryption for everyone to do so.

We’ve already seen this happen in the UK with their newly passed laws that grant the government unprecedented surveillance powers. As James Vincent eloquently put it, the new laws establish a dangerous new norm where surveillance is seen as the baseline for a peaceful society.

Laws like these in the UK are likely to spread to other countries if citizens don’t take a stand. Indeed these laws could end up appearing tame by future standards if we’re not vigilant.

As tempting as it is to give the government more powers to nab the bad people before crimes have even been committed, history has proven time and again that these broader powers are most often used against law-abiding citizens rather than criminals themselves.

It’s possible that laws like these will find their way into Canada as well, so I’m asking for your help to send a clear message to our ministers before the ball starts rolling in that direction.

Since September, Public Safety Canada has been holding a Consultation on National Security to prompt discussion and debate on future policy changes. Feedback is accepted from all Canadians as well as international readers, so everyone is welcome to contribute.

The set of questions and discussion points is quite broad but the one that’s most important to 1Password users is Investigative Capabilities in a Digital World, particularly this question:

How can law enforcement and national security agencies reduce the effectiveness of encryption for individuals and organizations involved in crime or threats to the security of Canada, yet not limit the beneficial uses of encryption by those not involved in illegal activities?

Or, said another way: how can the government institute a backdoor into encryption software that only it can exploit? It sounds simple, but in fact it’s simply not possible. As we discussed previously on this blog, back doors are bad for security architecture, and “When back doors go bad: mind your Ps and Qs” covers an example of a backdoor that went awry, along with the math that made it possible.

Please complete the survey and let the Canadian government know you’re not willing to weaken your security or give up your privacy. The opportunity to provide feedback ends on Thursday, December 15th.

I know it’s tempting to give up some freedoms to let someone else protect you, but whenever I feel that way I remind myself of what Benjamin Franklin said:

Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.

Please forgive a Canadian for quoting one of America’s founding fathers, but Ben summed things up so well that I couldn’t resist.

Thanks for caring about privacy and security as much as we do ❤️

Send in the crowds (to hunt for bugs)

We unequivocally encourage security researchers to poke around 1Password. It is an extremely important part of the process that helps us deliver and maintain a more secure product for everyone. Finding and reporting potential security vulnerabilities is what we should all expect from bug hunters around the world; in turn, hunters and users alike should expect us to address those vulnerabilities promptly.

We have always welcomed security reports that arrive at security@agilebits.com, and over most of the past year we offered a more formal, invitation-only bug bounty program through Bugcrowd. We are pleased to now take that program public: https://bugcrowd.com/agilebits.


Before I get into what the program offers, I’d like to remind you that there is always room to improve the security of any complicated system, 1Password included. As clever as we may think we are, there will be security issues that we miss and different perspectives help reveal them. Software updates that address security issues are part of a healthy product. This, by the way, is why it is important to always keep your systems and software up to date. Even in the complete (and unlikely) absence of software bugs, threats change over time, and defenses should try to stay ahead of the game.

Some words about Bounty

A bug bounty program offers payouts for different sorts of bugs. The first bug bounty that I recall seeing was Donald Knuth’s for the TeX typesetting system, though I have since learned that he does this for most of his books and programs. It started out at $2.56 (256 US cents) for the first year, and doubled each year after that, reaching a final cap of $327.68.

A bounty check from Donald Knuth made out to Richard Kinch

Of course, given Donald Knuth’s well-deserved fame and reputation, few people cashed the checks they received. Instead, they framed them.

Anyway, enough about me revealing my age. Let’s talk about today’s bug bounty program. There is a community of people who earn a portion of their income from bounties. (Whether or not it is enough for them to sail off to Tahiti or Pitcairn is not something I know.) Over the years they have developed skills and tools and scripts for examining systems. We want them to apply those skills and efforts testing the security of 1Password. Opening up this bug bounty program brings those people and their skills into the process of making 1Password more secure.

Our bounty

Unlike Donald Knuth’s bug bounty, we are only offering payouts for security issues. Of course, all bug reports are welcome; we just aren’t promising bounties for them. And because we are promising to pay for bugs, we’ve had to establish a bunch of rules about what counts. These rules help us draw researchers’ attention to the 1Password.com service, and they help us exclude payouts for things that are already known and documented. We don’t want those rules to discourage anyone from bug hunting; they are there to focus attention on what should be most fruitful for everyone.


Your homework

We think that finding bugs in 1Password will be challenging — 1Password.com is not your typical web service. Our authentication system, for example, is highly unusual and specifically designed so we are never in a position to learn any of our customers’ secrets. Because we use end-to-end encryption, getting hold of user secrets may require breaking not just authentication but also cryptography. Of course, we’re inviting researchers to try out attacks that we haven’t considered to prove us wrong. I expect that successful bug hunters will need to do their homework, all the same.

Now, all that bragging about how challenging I think it will be to find serious issues with 1Password isn’t an attempt to stop people from trying. Get out there and try! You can earn a bounty for it, and a thank-you as well. We’re excited to hear a resounding “challenge accepted!” from the research community.

How we help researchers

If there are security bugs, we want to know about them so we can fix them. (I know I keep repeating that point, but not everyone reading this is familiar with why we might invite people to look for security bugs.) We want to help researchers find bugs, because they’re helping us, and everyone who uses 1Password.

To help researchers understand and navigate 1Password (and reduce the amount of time they may need to reverse engineer protocols) we have set up a special 1Password Team that contains a bunch of goodies: internal documentation on our APIs, some specific challenges, and UUIDs and locations of items involved in some of the challenges. So researchers, please come and leave your mark on our Graffiti Wall. (No, not in this web page or the image below, the wall inside the aforementioned team account.)

Secure Note: "The Researchers vault grants read-only access to researchers. If you figure out how to get around read-only access, please put your name in here ..."

With a natural degree of trepidation, I look forward to what might appear there.

The kindness of strangers

A bug bounty program brings in a new group of researchers. And that’s why we’re launching it. We encourage independent research as well. We’re just as open to reports of security issues outside of the bug bounty program as we have always been.

So without further ado, let’s send in the crowds!