Introducing Travel Mode: Protect your data when crossing borders

We often get inspired to create new features based on feedback from our customers. Earlier this month, our friends at Basecamp made their Employee Handbook public. We were impressed to see they had a whole section about using 1Password, which included instructions for keeping work information off their devices when travelling internationally.

We knew right away that we wanted to make it easier for everyone to follow this great advice. So we hunkered down and built Travel Mode.

Travel Mode is a new feature we’re making available to everyone with a 1Password membership. It protects your 1Password data from unwarranted searches when you travel. When you turn on Travel Mode, every vault will be removed from your devices except for the ones marked “safe for travel.” All it takes is a single click to travel with confidence.

It’s important to me that my personal data be as secure and private as possible. Some of the data on my devices, though, is ultimately a lot more sensitive than my personal data. As one of the developers here at AgileBits, I’m trusted with access to certain keys and services that we simply can’t take any risks with.

How it works

Let’s say I have an upcoming trip to a technology conference in San Jose. I hear the apples are especially delicious over there this time of year. :) Before Travel Mode, I would have had to sign out of all my 1Password accounts on all my devices. If I needed certain passwords with me, I would have had to create a temporary travel account. It was a lot of work, and not worth it for most people.

Now all I have to do is make sure any of the items I need for travel are in a single vault. I then sign in to my account on 1Password.com, mark that vault as “safe for travel,” and turn on Travel Mode in my profile. I unlock 1Password on my devices so the vaults are removed, and I’m now ready for my trip. Off I go from sunny Winnipeg to hopefully-sunnier San Jose, ready to cross the border knowing that my iPhone and my Mac no longer contain the vast majority of my sensitive information.

After I arrive at my destination, I can sign in again and turn off Travel Mode. The vaults immediately show up on my devices, and I’m back in business.

Not just a magic trick

Your vaults aren’t just hidden; they’re completely removed from your devices as long as Travel Mode is on. That includes every item and all your encryption keys. There are no traces left for anyone to find. So even if you’re asked to unlock 1Password by someone at the border, there’s no way for them to tell that Travel Mode is even enabled.

In 1Password Teams, Travel Mode is even cooler. If you’re a team administrator, you have total control over which secrets your employees can travel with. You can turn Travel Mode on and off for your team members, so you can ensure that company information stays safe at all times.

Travel Mode is going to change how you use 1Password. It’s already changed the way we use it. When we gave a sneak peek to our friends at Basecamp, here’s what their founder, David Heinemeier Hansson, had to say:

International travel while maintaining your privacy (and dignity!) has become increasingly tough. We need better tools to help protect ourselves against unwarranted searches and the leakage of business and personal secrets. 1Password is taking a great step in that direction with their new Travel Mode. Bravo.

Travel Mode is available today, included in every 1Password membership. Give it a shot, and let us know how you travel with 1Password.

Learn how to use Travel Mode on our support site.

1Password is #LayerUp-ed with modern authentication

We should remember on this World Password Day that passwords have been around for thousands of years. They can’t be all bad if we’ve kept them around so long, but some things need to change.

This year we are partnering with Intel to talk about layers of security, and in this article I’m going to explain how 1Password adds invisible layers of security to what might look like an old-fashioned sign-in page with a username and password.

Passwords are not a bad way of proving who you are. But the way that they’re used in practice exposes people to a number of unnecessary risks. It is long past time to move beyond traditional password authentication, and with 1Password accounts, we’re doing so.

Breaking tradition

When we designed the sign-in process for your 1Password account, we wanted something that looks reasonably familiar to people but protects against the risks that a traditional password login does not. Indeed, when you sign in to a 1Password account, although you enter your Master Password, you do not send us any secrets at all.

The buzzword explanation is that our end-to-end encryption uses Secure Remote Password (SRP) as a Password-based Authenticated Key Exchange (PAKE) along with Two-Secret Key Derivation (2SKD) to get all of the security properties we want without having to place too much additional burden on the users.

If you’re interested in knowing what exactly all that means, read on. Otherwise, stop here and enjoy a technical discussion of how migratory swallows can transport coconuts.

Halt! Who goes there?

Authenticating (proving who you are) with passwords hasn’t really changed much over the centuries. And I’m going to start my discussion of the security properties it does and doesn’t offer with a very traditional example.

Queen Penelope approaches a castle gate where a guard (we’ll call him Vincent) is on duty.

Many web services use this traditional method, and many are plagued with the same security problems (which I will get to shortly). As a society we need to be doing much better than this, and we are proud to say that 1Password is doing much better than this!

1Password’s major line of security is its end-to-end encryption: your data is encrypted with keys that never leave your devices. So even if someone does get past the castle gate there is not much they can do with what they find inside. But today we are talking about the security of signing in to the 1Password service.

The problems found in the castle gates method

A method of authentication that has been in place for thousands of years may have a lot going for it. It simply shouldn’t be dropped like coconuts dropped by migrating swallows. But it does bear looking into. And when we do look into it, we see lots of problems.

  • Reuse: If Penelope uses the same password for multiple castles, then the discovery of one of those passwords in one place can lead to a breach of all of the castles that she uses that password for.
  • Guessable: The sorts of passwords that Penelope is going to use in these sorts of circumstances are probably guessable with a reasonable amount of effort. This guessing can be made easier if an attacker gets hold of the data that Vincent stores to verify passwords (even if it isn’t the passwords themselves).
  • Replay: If someone overhears Penelope saying her password to Vincent, then they can use what they overheard to pretend to be Penelope at some later time.
  • Revealing password to those in the castle: Even if Penelope and Vincent could avoid being overheard, Penelope is giving away a secret to someone who may or may not be the real Vincent. Even if it is the right Vincent, she has to trust him to not do anything he shouldn’t with her password (like pretending to be her at some other castle).
  • Not mutual: Penelope proves to Vincent that she is who she says she is, but Vincent does not prove to Penelope that she is at the castle that she thinks she is at. Sure, it might be tricky for one castle to pretend to be a different one, but internet services are another matter.
  • Further communication is still unsafe: If Penelope is brought within the castle walls after authentication, all is good. But if (keeping the analogy closer to an Internet service) she remains outside the castle but carries on conversations with people within the castle, their conversation can still be overheard and perhaps tampered with by someone else outside the castle walls.

Building better castles and gates

Traditional authentication, whether it be castle gates or websites, only really has the client or user proving their identity to the server. But we should be asking for much more security in a modern authentication process. Not only should it have the client prove its identity to the server, but

  1. The server should prove its identity to the client (mutual authentication).
  2. An eavesdropper on the conversation should not be able to learn any secrets of either the client or the server.
  3. An eavesdropper should not be able to replay the conversation to get in at some later time.
  4. Client secrets should not be revealed to the server.
  5. The process should set up a secure session beyond just the authentication process.
  6. User secrets should be unguessable.
  7. User secrets should be unique.

I know that a lot of the things on that list overlap, and solving one often involves solving another. But there are some technical reasons for their separation. Also it makes for a more impressive list when they are broken apart this way.

What’s inside the castle

There may be ways for Oscar (Penelope’s opponent) to get into the castle that do not involve providing proofs of identity to Vincent. Perhaps there are other doors. Perhaps it is possible to tunnel in or fly over the walls. Perhaps it is possible to hide inside a large wooden rabbit that is brought into the castle. Perhaps someone already inside and trusted is an enemy spy. Requiring that Vincent check multiple factors will not help defend against any of those.

It is 1Password’s end-to-end encryption that defends against those sorts of attacks. Although end-to-end encryption is probably our most important layer of security, it is not the one that I am talking about today. I bring it up not only because end-to-end encryption is our most important line of defense, but also because we need to make sure that nothing in our authentication system could weaken that encryption. We don’t want an attack on the authentication system to give the attacker any information that would be helpful in decrypting Penelope’s data. (Don’t worry if that didn’t make sense at this point; it should by the end.)

With end-to-end encryption, the stuff in the castle that is valuable to Penelope is useless to Oscar because only Penelope has the ability to make use of it. Penelope has secrets that never leave her side that can transform her pile of useless stuff kept inside the castle into things that are very valuable. Authentication is about proving who you are and being given access to something. Encryption (and decryption) is about transforming stuff from a useful form into a useless form (and back again).

End-to-end encryption means that even if Oscar gets inside, he will not be able to read the data that he finds. Penelope’s data – even within the castle – can only be decrypted using secrets that only Penelope has and which are never revealed to anyone else.

Once more into the breach

There is one extra thing we need to consider to keep all of this secure. And that is whether an attacker, Oscar, who gets into the castle can use the same information that Vincent has in a way that helps him (Oscar) decrypt Penelope’s data.

When Vincent checks to see whether Penelope has provided the correct password, he needs to check against some data stored some place (perhaps in his head). If computer systems worked like those castles then there might be the same password for everyone and Vincent would know it, too. Fortunately, most web services do much better than that. Each person has their own password and the service should not be storing those passwords unencrypted.

Services do not want to store the passwords that are needed to get into the service. Vincent shouldn’t store that Penelope’s password is xyzzy. Having a piece of paper around with all of that data would be dangerous. So Vincent doesn’t actually have those passwords himself; instead he has password verifiers. (Vincent verifies passwords in this story, and Penelope proves her identity. Oscar is their opponent.)

Making a hash of it (is a good thing)

I have to step back from the castle metaphor at this point, as Vincent needs to do some math that wasn’t around at the time. The verifiers that Vincent stores are typically (though not necessarily) cryptographic hashes of the password. The beauty of a cryptographic hash is that it is quick and easy to compute that the hash of “xyzzy” is “mzgC7LrRFCZ90NGkMeV7C8qVkwo=”, but it is pretty much impossible to compute things in the other direction. So in a typical web login system, when Penelope tells Vincent her password, Vincent computes the hash of what she says and then compares that to what he has stored. This way, if Oscar steals the list of these hashes from the castle, the passwords that people use to authenticate remain safe.
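If you’d like to see this in miniature, here is a short sketch in Python. It assumes an unsalted, base64-encoded SHA-1 hash, which is the shape of the example string above; real services should also salt and stretch, but the one-way property is the point here.

```python
# A toy verifier check. Assumes unsalted, base64-encoded SHA-1 purely
# for illustration; real systems add salts and slow hashing on top.
import base64
import hashlib

def verifier(password: str) -> str:
    return base64.b64encode(hashlib.sha1(password.encode()).digest()).decode()

stored = verifier("xyzzy")   # what Vincent keeps; never "xyzzy" itself

def check(attempt: str) -> bool:
    return verifier(attempt) == stored   # easy forward, infeasible backward

print(check("xyzzy"))   # True
print(check("grail"))   # False
```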

But we still have a problem. Even though Oscar can’t compute “xyzzy” from “mzgC7LrRFCZ90NGkMeV7C8qVkwo=” he can use that hash to verify guesses. Oscar can guess “OnlyUlysses4Me” and see what hash it produces. If it doesn’t produce the hash he is checking against, then he tries another guess. If Oscar has the right sort of equipment and a copy of the verifier/hash stolen from the castle, he can make a large number of guesses very quickly. If Oscar knows that Penelope enjoys adventures, he will eventually guess “xyzzy”, and will see that he gets the right verifier.
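And here is the same math turned against Penelope: a sketch of that offline guessing attack, with a three-word guess list standing in for the billions of guesses per second that dedicated cracking rigs can manage.

```python
# Offline guessing against a stolen verifier: no further contact with
# the castle is needed. The guess list is a tiny stand-in for real
# cracking wordlists.
import base64
import hashlib

def verifier(password: str) -> str:
    return base64.b64encode(hashlib.sha1(password.encode()).digest()).decode()

stolen = verifier("xyzzy")   # lifted from Vincent's list

for guess in ("OnlyUlysses4Me", "grail", "xyzzy"):
    if verifier(guess) == stolen:
        print(f"Penelope's password is {guess!r}")
        break
```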

One password, two secrets

Now those of you still following along might wonder why Oscar would bother trying to discover Penelope’s password to get into the castle if he already managed to get into the castle to steal Vincent’s list. There are many reasons why figuring out Penelope’s password would be useful to Oscar, but the one that we are concerned about here is that it might be useful for decrypting Penelope’s data. Even though Penelope’s data is encrypted using a secret that only Penelope knows, we want Penelope to only have to know one password to get to her data (be able to retrieve her stuff from the castle) and decrypt it once she has retrieved it. This one password is her Master Password.

Her password, xyzzy, is being used for two purposes. One is for authentication (getting into the castle) and the other is for her end-to-end encryption to decrypt her data. And this is why – even though 1Password’s security is primarily based on end-to-end encryption – we need our authentication system to not give Oscar anything useful for attacking Penelope’s data. We do not want anything we store to aid Oscar in guessing Penelope’s Master Password.

We want it so that even if Oscar gets into the castle and gets hold of Vincent’s list of verifiers he is no closer to getting at Penelope’s secrets. Although SRP gives us many of the security properties we want, it still leaves Vincent holding a verifier. That verifier – if we didn’t take countermeasures – could be used by Oscar for checking password guesses even though it isn’t a traditional password hash.

Two secrets, one password

We want the verifiers that we store to be uncrackable. Our solution is Two-Secret Key Derivation (2SKD). Penelope has a secret other than her Master Password. It is her Secret Key. When she first sets up her account the software running on her machine will create an unguessable Secret Key. This Secret Key gets blended in with Penelope’s Master Password all on her own devices.
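Here is a loose sketch of the idea in Python. The construction and the parameter choices are illustrative, not the exact scheme from our white paper; what matters is that the result depends on both the human-chosen Master Password and the machine-generated Secret Key.

```python
# A loose sketch of two-secret key derivation. Illustrative only; see
# the 1Password security white paper for the real construction.
import hashlib
import hmac
import secrets

def derive_key(master_password: bytes, secret_key: bytes, salt: bytes) -> bytes:
    # Slow, salted stretching of the human-chosen Master Password...
    pw_part = hashlib.pbkdf2_hmac("sha256", master_password, salt, 100_000)
    # ...blended with key material from the high-entropy Secret Key.
    sk_part = hmac.new(secret_key, salt, hashlib.sha256).digest()
    return bytes(p ^ s for p, s in zip(pw_part, sk_part))

secret_key = secrets.token_bytes(32)   # generated once, on Penelope's device
salt = secrets.token_bytes(16)
key = derive_key(b"xyzzy", secret_key, salt)   # guessing "xyzzy" alone is useless
```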

When Penelope proves her identity to our servers, she is not just proving that she knows her Master Password, but she is also proving that she (well, her software) has her Secret Key. In fact, she is proving that she has a combination of these. That combination is unguessable and so cannot be used to get at the secrets needed to decrypt her data.

2SKD means that Vincent only stores uncrackable verifiers. So there is little that Oscar could do if he stole the list of verifiers. SRP means that neither Vincent nor anyone listening in during authentication can learn Penelope’s secrets. And with all of Penelope’s authentication secrets nice and secure, they can also be used for the end-to-end encryption of her data.

A very brief word about SRP

SRP and other similar systems involve proving that you know a secret without revealing that secret. As much as I would love to go more into how that happens, this article has become too long as it is. So let me just say that the client and the server present each other with mathematical puzzles that can only be solved with knowledge of a secret, but solutions to those puzzles do not reveal anything about the secret. And because each puzzle is new each time, someone who records a session cannot replay it later.
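For the curious, here is a toy version of the SRP-6a math in Python, using the 1024-bit group from RFC 5054. It is a pedagogical sketch, not our implementation: real code must validate the exchanged values, exchange proof-of-session messages, and stretch the password with a slow KDF (and, in our case, 2SKD) before computing x.

```python
# A toy SRP-6a exchange (RFC 5054 1024-bit group). Pedagogical only.
import hashlib
import secrets

N = int(
    "EEAF0AB9ADB38DD69C33F80AFA8FC5E86072618775FF3C0B9EA2314C"
    "9C256576D674DF7496EA81D3383B4813D692C6E0E0D5D8E250B98BE4"
    "8E495C1D6089DAD15DC7D7B46154D6B6CE8EF4AD69B15D4982559B29"
    "7BCF1885C529F566660E57EC68EDBC3C05726CC02FD4CBF4976EAA9A"
    "FD5138FE8376435B9FC61D2FC0EB06E3", 16)
g = 2

def H(*args) -> int:
    h = hashlib.sha256()
    for x in args:
        if isinstance(x, int):
            x = x.to_bytes((x.bit_length() + 7) // 8 or 1, "big")
        h.update(x)
    return int.from_bytes(h.digest(), "big")

k = H(N, g)

# Enrollment: Penelope sends a verifier v, never the password itself.
salt = secrets.token_bytes(16)
x = H(salt, b"penelope:xyzzy")
v = pow(g, x, N)                                   # Vincent stores (salt, v)

# Sign-in: fresh "puzzles" A and B are exchanged each time.
a = secrets.randbelow(N); A = pow(g, a, N)                  # client ephemeral
b = secrets.randbelow(N); B = (k * v + pow(g, b, N)) % N    # server ephemeral
u = H(A, B)

client_S = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
server_S = pow(A * pow(v, u, N) % N, b, N)
assert client_S == server_S                        # same secret, never sent
session_key = hashlib.sha256(client_S.to_bytes(128, "big")).digest()
```

Because A and B are freshly random each time, a recording of one session is useless for replaying later, which is the property described above.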

I’ve tried to give an overview of how the interaction of three things in 1Password’s design – end-to-end encryption, 2SKD, and SRP – protects your personal information from whole categories of attacks. These design elements are mostly invisible to people using 1Password. You do not need to know about these things to use 1Password well, but we like to let you know they are there. This is all part of how we at AgileBits strive to make dealing with passwords both easier and more secure. Although not all of the techniques described here are appropriate for all sites and services, we do hope that the ideas discussed here can help improve authentication not just for 1Password but for other systems as well.

More detail about all of this and more is in our 1Password Security white paper.

And so a Happy World Password Day to one and all.

More than just a penny for your thoughts — $100,000 top bounty

We believe that we’ve designed and built an extremely secure password management system. We wouldn’t be offering it to people otherwise.  But we know that we – like everyone else – may have blind spots. That is why we very much encourage outside researchers to hunt for security bugs. Today we are upping that encouragement by raising the top reward in our bug bounty program.

We have always encouraged security experts to investigate 1Password, and in 2015 we added monetary rewards through Bugcrowd. This has been a terrific learning experience both for us and for the researchers. We’ve learned of a few bugs, and they’ve learned that 1Password is not built like the web services they are used to attacking. [Advice to researchers: Read the brief carefully and follow the instructions for where we give you some internal documentation and various hints.]

Since we started our bounty program, Bugcrowd researchers have found 17 bugs, mostly minor issues during our beta and testing period. But there have been a few higher payouts that pushed the average up to $400 per bug. So our average payout should cover a researcher’s Burp Suite Pro license for a year.

So far none of the bugs represented a threat to the secrecy of user data, but even small bugs must be found and squashed. Indeed, attacks on the most secure systems nowadays tend to involve chaining together a series of seemingly harmless bugs.

Capture the top flag to get $100,000


Our 1Password bug bounty program offers tiered rewards for bug identification, starting at $100. Our top prize goes to anyone who can obtain and decrypt some bad poetry (in particular, a horrible haiku) stored in a 1Password vault that researchers should not have access to. We are raising the reward for that from $25,000 to $100,000. (All rewards are listed in US dollars, as those are easier to transfer than hundreds or thousands of Canadian dollars worth of maple syrup.) This, it turns out, makes it the highest bounty available on Bugcrowd.

We are raising this top bounty because we want people to really go for it. It will take hard work to even get close, but that work can pay off even without reaching the very top prize: in addition to the top challenge, there are other challenges along the way. But nobody is going to get close unless they make a careful study of our design.

Go for it

Here’s how to sign up:

  • Go to bugcrowd.com and set up an account.
  • Read the documentation on the 1Password Bugcrowd profile.
  • The AgileBits Bugcrowd brief instructs researchers where to find additional documentation on APIs, hints about the location of some of the flags, and other resources for taking on this challenge. Be sure to study that material.
  • Go hunting!

If you have any questions or comments – we’d love to hear from you. Feel free to respond on this page, or ping us an email at security@agilebits.com.

Three layers of encryption keep you safe when SSL/TLS fails

No 1Password data is put at risk by the reported Cloudflare bug. 1Password does not depend on the secrecy of SSL/TLS for your security. The security of your 1Password data remains safe and solid.

We will provide a more detailed description in the coming days of the Cloudflare security bug and how it (doesn’t) affect 1Password. At the moment, we want to assure and remind everyone that we designed 1Password with the expectation that SSL/TLS can fail. Indeed, it is for incidents like this that we deliberately designed it this way.

No secrets are transmitted between 1Password clients and 1Password.com when you sign in and use the service. Our sign-in uses SRP, which means that server and client prove their identity to each other without transmitting any secrets. This means that users of 1Password do not need to change their Master Passwords.

Your actual data is encrypted with three layers (including SSL/TLS), and the other two layers remain secure even if the secrecy of an SSL/TLS channel is compromised.

The three layers are

  1. SSL/TLS. This is what puts the “S” in HTTPS. And this is the layer through which data may have been exposed due to the Cloudflare bug during the vulnerable period.
  2. Our own transport layer authenticated encryption using a session key that is generated using SRP during sign in. The secret session keys are never transmitted.
  3. The core encryption of your data. Except for when you are viewing your data on your system, it is encrypted with keys that are derived from your Master Password and your secret Account Key. This is the most important layer, as it would protect you even if our servers were to be breached. (Our servers were not breached.)
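To make the layering concrete, here is a sketch using the PyCA cryptography package. The key names and the use of AES-GCM for both inner layers are illustrative assumptions, not our wire format; the point is that peeling off one layer reveals only more ciphertext.

```python
# Two of the three layers, sketched with AES-GCM; TLS (layer 1) is
# assumed to wrap the whole exchange. Names are illustrative.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

data_key = os.urandom(32)      # stands in for a key derived from your secrets
session_key = os.urandom(32)   # stands in for the SRP-derived session key

# Layer 3: the item itself is encrypted with the user's data key.
inner_nonce = os.urandom(12)
inner = AESGCM(data_key).encrypt(inner_nonce, b"hunter2", None)

# Layer 2: the already-encrypted item is wrapped again for transport.
outer_nonce = os.urandom(12)
outer = AESGCM(session_key).encrypt(outer_nonce, inner, None)

# Someone who breaks TLS sees only `outer`; someone who also has the
# session key recovers only `inner`, which is still ciphertext.
```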

Using Intel’s SGX to keep secrets even safer

When you unlock 1Password there are lots of secrets it needs to manage. There are the secrets that you see and manage such as your passwords and secure notes and all of the other things you trust to 1Password. But there are lots of secrets that 1Password has to juggle that you never see. These include the various encryption keys that 1Password uses to encrypt your data. These are 77-digit (256-bit) completely random numbers.

You might reasonably think that your data is encrypted directly by your Master Password (and your secret Account Key), but there are a number of technical reasons why that wouldn’t be a good idea. Instead, your Master Password is used to derive a key encryption key which is used to encrypt a master key. The details differ for our different data formats, but here is a little ditty from our description of the OPVault data format to be sung to the tune of Dry Bones.

Each item key’s encrypted with the master key
And the master key’s encrypted with the derived key
And the derived key comes from the MP
Oh hear the word of the XOR
Them keys, them keys, them random keys (3x)
Oh hear the word of the XOR
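In code, the hierarchy the song describes looks roughly like the sketch below. The algorithms and parameters here are illustrative; the real data formats differ in detail.

```python
# A sketch of the key hierarchy: MP -> derived key -> master key ->
# item keys -> items. Algorithm choices are illustrative.
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap(kek: bytes, key: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(kek).encrypt(nonce, key, None)

salt = os.urandom(16)
derived_key = hashlib.pbkdf2_hmac("sha256", b"Master Password", salt, 100_000)

master_key = os.urandom(32)   # a 256-bit completely random number
item_key = os.urandom(32)     # ditto, one per item

wrapped_master_key = wrap(derived_key, master_key)  # "...with the derived key"
wrapped_item_key = wrap(master_key, item_key)       # "...with the master key"
```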

And that is a simplification! But it is the appropriate simplification for what I want to talk about today: some of our intrepid 1Password for Windows beta testers can start using a version of 1Password 6 for Windows that adds extra protection to the “master key” described in the song. We have been working with Intel over the past few months to bring the protection of Intel’s Software Guard Extensions (SGX) to 1Password.

Soon (some time this month) 1Password for Windows customers running on systems that support Intel’s SGX will have another layer of protection around some of their secrets.

SGX support in 1Password isn’t ready for everybody just yet as there are a number of system requirements, but we are very happy to talk about what we have done so far and where we are headed. I would also like to say that we would not be where we are today without the support of many people at Intel. It has been great working with them, and I very much look forward to continuing this collaboration.

What does Intel’s SGX do?

Intel, as most of you know, makes the chips that power most of the desktop and laptop computers we all use. Their most recent CPUs include the ability for software running on Windows and Linux to create and use secure enclaves that are safe from attacks coming from the operating system itself. It is a security layer in the chip that cryptographically protects regions of operating system memory.

SGX does a lot of other things, too; but the feature I’m focusing on now is the privacy it offers for regions of system memory and computation.

Ordinary memory protection

A program running on a computer needs to use the system’s memory. It needs this both for the actual program and for the data that the program is working on. It is a Bad Thing™ if one program can mess with another program’s memory. And it is a security problem if one program can read the memory of another program. We don’t want some other program running on your computer to peer at what is in 1Password’s memory when 1Password is unlocked. After all, those are your secrets.

It is the operating system’s (OS’s) job to make sure that one process can’t access the memory of another. Back in the old days (when I had to walk two miles through the snow to school, uphill, both ways) some operating systems did not do a good job of enforcing memory protection. Programs could easily cause other programs or the whole system to crash, and malware was very easy to create. Modern operating systems are much better about this. They do a good job of making sure that only the authorized process can read and manipulate certain things in memory. But if the operating system itself gets compromised, or if some other mechanism allows for the reading of all memory, then secrets in one program’s part of memory may still be readable by outsiders.

Extraordinary memory protection

One way to protect a region of memory from the operating system itself is to encrypt that region’s contents using a key that even the operating system can’t get to. That is a tricky thing to do as there are few places to keep the key that encrypts this memory region if we really want to keep it out of the hands of the operating system.

So what we are looking for is the ability to encrypt and decrypt regions of memory quickly, but using a key that the operating system can’t get to. Where should that key live? We can’t just keep it in the innards of a program that the operating system is running, as the operating system must be able to see those innards to run the program. We can’t keep the key in the encrypted memory region itself because that is like locking your keys in your car: nobody, not even the rightful owner, could make use of what is in there. So we need some safe place to create and keep the keys for these encrypted regions of memory.

Intel’s solution is to create and keep those keys in the hardware of the CPU. A region of memory encrypted with such a key is called an enclave. The SGX development and runtime tools for Windows allow us to build 1Password so that when we create certain keys and call certain cryptographic operations, those will be stored and performed within an SGX enclave.

An enclave of one’s own

When 1Password uses certain tools provided by Intel, the SGX module in the hardware will create an enclave just for the 1Password process. It does a lot of work for us behind the scenes. It requests memory from the operating system, but the hardware on Intel’s chip will be encrypting and validating all of the data in that region of memory.

When 1Password needs to perform an operation that relies on functions or data in the enclave, we make the request to Intel’s crypto provider, which ends up talking directly to the SGX portions of the chip, which then perform the operation in the encrypted SGX enclave.

Not even 1Password has full access to its enclave; instead, 1Password has the ability to ask the enclave to perform only those tasks that it was programmed to do. 1Password can say, “hey enclave, here is some data I would like you to decrypt with that key you have stored,” or “hold onto this key, I may ask you to do things with it later.”

What’s in our enclave? Them keys, of course!

When you enter your Master Password in 1Password for Windows, 1Password processes that password with PBKDF2 to derive the master key to your primary profile in the local data store. (Your local data store and the profiles within it are things that are well hidden from the user, but this is where the keys to other things are stored. What is important about this is that your master key is a really important key.)

When you do this on a Windows system that supports SGX, the same thing happens, except that the computation of the master key is done within the enclave. The master key that is derived through that process is also retained within the enclave. When 1Password needs to decrypt something with that key, it can just ask the enclave to perform that decryption. The key does not need to leave the enclave.
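If it helps to see the shape of this, here is a conceptual model in Python. Real enclaves are native code built with Intel’s SDK and enforced by hardware, so this is only a picture of the interface: the key is derived behind a narrow boundary and is used, never returned.

```python
# A conceptual model of the enclave boundary (illustration only; real
# SGX enclaves are native code with hardware-enforced isolation).
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class EnclaveModel:
    def __init__(self) -> None:
        self._master_key = None   # lives "inside"; no getter is offered

    def derive_master_key(self, master_password: bytes, salt: bytes) -> None:
        # The PBKDF2 computation happens inside the boundary.
        self._master_key = hashlib.pbkdf2_hmac(
            "sha256", master_password, salt, 100_000)

    def decrypt(self, nonce: bytes, ciphertext: bytes) -> bytes:
        # Callers ask the enclave to *use* the key, not to reveal it.
        return AESGCM(self._master_key).decrypt(nonce, ciphertext, None)
```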

Answers to anticipated questions

What does (and doesn’t) this protect us from?

I must start out by saying what I have often said in the past. It is impossible for 1Password (or any program) to protect you if the system you are running it on is compromised. You need to keep your devices free of malware. But using SGX makes a certain kind of local attack harder for an attacker, particularly as we expand our use of it.

The most notable attacks that SGX can start to help defend against are attacks that exploit Direct Memory Access. Computers with certain sorts of external ports can sometimes be tricked into allowing a peripheral device to read large portions of system memory.

As we expand and fine tune our use of SGX we will be in a better position to be more precise about what attacks it does and doesn’t defend against, but the ability to make use of these enclaves has so much potential that we are delighted to have made our first steps in using the protections that SGX can offer.

What will be in our enclave in the future?

As we progress with this, we will place more keys and more operations involving those keys into the SGX secure enclave. What you see today is just the beginning. When the master key is used to decrypt some other key, that other key should only live within the enclave. Likewise, the secret part of your personal key set should also have a life within the enclave only. I can’t promise when these additions will come. We still need to get the right cryptographic operations functioning within the enclave and reorganize a lot of code to make all of that Good Stuff™ happen, but we are very happy to have taken the first steps with the master key.

We do not like promising features until they are delivered. So please don’t take this as a promise. It is, however, a plan.

Sealed enclaves?

Among the features of SGX that I have not mentioned so far is the ability to seal an enclave. This would allow the enclave to not just keep secrets safe while the system is running, but to allow it to persist from session to session. Our hope is that we can pre-compute secrets and keep them in a sealed enclave. This should (if all goes to plan) allow 1Password to start up much more quickly as most of the keys that it needs to compute when you first unlock it can already be in an enclave ready to go.

A sealed enclave would also be an ideal place to store your secret 1Password.com Account Key, as a way of protecting that from someone who gains access to your computer.

Is security platform-specific?

1Password can only make use of SGX on some Windows PCs with Intel Skylake CPUs that have been configured to make use of SGX. Thus SGX support in 1Password is not going to be available to every 1Password user. So it is natural to ask whether 1Password’s security depends on the platform you use.

Well, there is the trivial answer of “yes”. If you use 1Password on a device that hasn’t been updated and is filled with dubious software downloaded from who knows where, then using 1Password will not be as secure as when it is running on a device which is better maintained. That goes without saying, but that never stops me from saying it. Really, the easiest and number one thing you can do for your security is to keep your systems and software up to date.

The nontrivial answer is that 1Password’s security model remains the same across all of the platforms on which we offer it. But it would be foolish not to take advantage of a security feature available on one platform merely because such features aren’t available on others. So we are happy to begin to offer this additional layer of security for those of our customers who have computers that can make use of it.

Upward and downward!

I’d like to conclude by saying how much fun it has been breaking through (or going around) layers. People like me have been trained to think of software applications and hardware as being separated by the operating system. There are very good reasons for that separation — indeed, that separation does a great deal for application security — but now we see that some creative, thoughtful, and well-managed exceptions to that separation can have security benefits of their own. We are proud to be a part of this.

Our Security, Our Rights

Every day it feels like our rights to privacy and security are under attack, and indeed, if you’re keeping up with the news, this is a lot more than just a feeling.

Governments and law enforcement agencies around the world are pushing hard for new powers to keep tabs on their citizens. They argue they require the ability to track your activities and access your private information in order to protect you. And they’re willing to weaken encryption for everyone to do so.

We’ve already seen this happen in the UK with their newly passed laws that grant the government unprecedented surveillance powers, and as James Vincent so eloquently states there, the new laws establish a dangerous new norm where surveillance is seen as the baseline for a peaceful society.

Laws like these in the UK are likely to spread to other countries if citizens don’t take a stand. Indeed these laws could end up appearing tame by future standards if we’re not vigilant.

As tempting as it is to give the government more powers to nab the bad people before crimes have even been committed, history has proven time and again that these broader powers are most often used against law-abiding citizens rather than criminals themselves.

It’s possible laws like these will find their way into Canada as well, so I’m asking for your help to send a clear message to our ministers before the ball starts rolling in that direction.

Since September, Public Safety Canada has been holding a Consultation on National Security to prompt discussion and debate on future policy changes. Feedback is accepted from all Canadians as well as international readers, so everyone is welcome to contribute.

The set of questions and discussion points is quite broad but the one that’s most important to 1Password users is Investigative Capabilities in a Digital World, particularly this question:

How can law enforcement and national security agencies reduce the effectiveness of encryption for individuals and organizations involved in crime or threats to the security of Canada, yet not limit the beneficial uses of encryption by those not involved in illegal activities?

Or, said another way: how can the government institute a back door into encryption software that only it can exploit? It sounds simple, but in fact it’s simply not possible. As we discussed previously on this blog, back doors are bad for security architecture, and “When back doors go bad: mind your Ps and Qs” covers an example of a back door that went awry, along with the math that made it possible.

Please complete the survey and let the Canadian government know you’re not willing to weaken your security or give up your privacy. The opportunity to provide feedback ends on Thursday, December 15th.

I know it’s tempting to give up some freedoms to let someone else protect you, but whenever I feel that way I remind myself of what Benjamin Franklin would say:

Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.

Please forgive a Canadian for quoting one of America’s founding fathers, but Ben summed things up so well that I couldn’t resist.

Thanks for caring about privacy and security as much as we do ❤️

Send in the crowds (to hunt for bugs)

We unequivocally encourage security researchers to poke around 1Password. It is an extremely important part of the process that helps us deliver and maintain a more secure product for everyone. Finding and reporting potential security vulnerabilities is what we should all expect from bug hunters around the world; in turn, the hunters and you should expect us to address those vulnerabilities promptly.

We have always welcomed security reports that arrive at security@agilebits.com, and over most of the past year we offered a more formal, invitation-only bug bounty program through Bugcrowd. We are pleased to now take that program public: https://bugcrowd.com/agilebits.


Before I get into what the program offers, I’d like to remind you that there is always room to improve the security of any complicated system, 1Password included. As clever as we may think we are, there will be security issues that we miss and different perspectives help reveal them. Software updates that address security issues are part of a healthy product. This, by the way, is why it is important to always keep your systems and software up to date. Even in the complete (and unlikely) absence of software bugs, threats change over time, and defenses should try to stay ahead of the game.

Some words about Bounty

A bug bounty program offers payouts for different sorts of bugs. The first bug bounty that I recall seeing was Donald Knuth’s for the TeX typesetting system, though I have since learned that he does this for most of his books and programs. It started out with $2.56 (256 US cents) for the first year, and doubled each year after that, reaching a final limit of $327.68.


A bounty check from Donald Knuth made out to Richard Kinch

Of course given Donald Knuth’s well-deserved fame and reputation, few people cashed the checks they received. Instead, they framed them.

Anyway, enough about me revealing my age. Let’s talk about today’s bug bounty program. There is a community of people who earn a portion of their income from bounties. (Whether or not it is enough for them to sail off to Tahiti or Pitcairn is not something I know.) Over the years they have developed skills and tools and scripts for examining systems. We want them to apply those skills and efforts testing the security of 1Password. Opening up this bug bounty program brings those people and their skills into the process of making 1Password more secure.

Our bounty

Unlike the example of Donald Knuth’s bug bounty, we are only offering payouts for security issues. Of course, all bug reports are welcome; we just aren’t promising bounties for them. And because we are promising to pay for bugs, we’ve had to establish a bunch of rules about what counts. These rules help us draw the attention of researchers to the 1Password.com service, and they help us exclude payouts for things that are already known and documented. We don’t want those rules to discourage anyone from bug hunting; they are there to help focus attention on what should be most fruitful for everyone.


Your homework

We think that finding bugs in 1Password will be challenging — 1Password.com is not your typical web service. Our authentication system, for example, is highly unusual and specifically designed so we are never in a position to learn any of our customers’ secrets. Because we use end-to-end encryption, getting hold of user secrets may require breaking not just authentication but also cryptography. Of course, we’re inviting researchers to try out attacks that we haven’t considered to prove us wrong. I expect that successful bug hunters will need to do their homework, all the same.

Now, all that bragging about how challenging I think it’ll be to find serious issues with 1Password isn’t an attempt to stop people from trying – get out there and try! You can get a bounty for it, and a thank-you as well. We’re excited to hear a resounding “challenge accepted!” from the research community.

How we help researchers

If there are security bugs, we want to know about them so we can fix them. (I know I keep repeating that point, but not everyone reading this is familiar with why we might invite people to look for security bugs.) We want to help researchers find bugs, because they’re helping us, and everyone who uses 1Password.

To help researchers understand and navigate 1Password (and reduce the amount of time they may need to reverse engineer protocols) we have set up a special 1Password Team that contains a bunch of goodies: internal documentation on our APIs, some specific challenges, and UUIDs and locations of items involved in some of the challenges. So researchers, please come and leave your mark on our Graffiti Wall. (No, not in this web page or the image below, the wall inside the aforementioned team account.)

Secure Note: "The Researchers vault grants read-only access to researchers. If you figure out how to get around read-only access, please put your name in here ..."

With a natural degree of trepidation, I look forward to what might appear there.

The kindness of strangers

A bug bounty program brings in a new group of researchers. And that’s why we’re launching it. We encourage independent research as well. We’re just as open to reports of security issues outside of the bug bounty program as we have always been.

So without further ado, let’s send in the crowds!

How 1Password calculates password strength

Password strength is a big deal. It is in fact one of the biggest deals. That’s why the Security Audit feature in 1Password pinpoints your weak passwords, so that you can go through and change them at your earliest convenience. But how does the strength meter actually calculate the strength of your password? What makes a password strong or weak? A recent conversation with a user inspired me to write down my thoughts on the subject. If you are going to trust 1Password to generate strong passwords for you, you should know how the strength meter works.

About Those Meters…

For a password strength meter to actually be accurate, it needs to know the system that was used to generate the password. When you generate a password using 1Password, we know that it has been generated in a truly random fashion, and we can accurately calculate its strength. However, when 1Password evaluates the strength of a password that you have typed in manually (including a password generated in a truly random fashion on another device), the strength meter cannot know whether it is looking at a password that was created through a truly random process or one that was made up by a human.

Password strength: perfectly good

If our password strength meter sees something like “my dog has a bunch of fleas” or something like “gnat vicuna craving inclose”, it can’t tell that the first was probably made up by a human and that the second may have been generated by something smarter, like our password generator.

Password strength: not so good

Because it doesn’t know how the password was generated, it errs on the side of caution. The strength meter will mark “gnat vicuna craving inclose” (a perfectly good password) the same as it will mark “my dog has a bunch of fleas” (not a good password at all). Both have the same number of characters (27) and both contain only lowercase letters. It’s up to you to know where the password originated. Did it come from our random password generator, or is it something a person made up?

Randomness and Selection Bias

When we speak of “randomness”, we are referring to mechanisms which have been tested and determined to be truly random and not dependent on events which may be repeatable or subject to outside observation. The toss of a fair coin or die is a source of “random input”. The radioactive decay of a substance can be used as “random input”. Our own limited vocabularies and choices of words are not “random input”.

When creating unique, strong, random passwords, what is required is a Cryptographically Secure Pseudorandom Number Generator (CSPRNG) to ensure that no one value or sequence of values will be preferred over all other values. The values from the CSPRNG may then be used to select from some alphabet or word list to create unique, strong, random passwords having the appropriate construction and length.

Selection Bias refers to preferences for specific values over others, whether by using an unfair coin, a loaded die, or a random number generator which does not produce a uniform and unbiased set of values. Inappropriate math performed on valid CSPRNG-produced numbers may also lead to biases for certain values in favor of others. A common error is the use of modulo (remaindering) arithmetic, which results in smaller values being used preferentially over larger values — when using modulo-10000 on an unsigned 16-bit CSPRNG-generated number, each value between 0 and 5535 can be produced in 7 ways, while each value between 5536 and 9999 can be produced in only 6 ways.
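The modulo example is easy to see for yourself. This sketch counts how often the low bucket comes up and shows the standard fix, rejection sampling, which throws away the raw values that would introduce the bias.

```python
# Demonstrating modulo bias: 65536 = 6 * 10000 + 5536, so values below
# 5536 can be produced 7 ways and the rest only 6. Rejection sampling
# discards raw values of 60000 and above to restore uniformity.
import secrets

def biased() -> int:
    return secrets.randbits(16) % 10000

def unbiased() -> int:
    while True:
        r = secrets.randbits(16)
        if r < 60000:   # largest multiple of 10000 that fits in 16 bits
            return r % 10000

low = sum(1 for _ in range(1_000_000) if biased() < 5536)
# Expect ~591,000 (5536 * 7 / 65536), not the ~553,600 uniformity would give.
print(low)
```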

Password strength: happy synonyms

For human-generated passwords, common causes of selection bias include the use of a small and limited vocabulary (list all of the synonyms for “happy” you don’t use on a regular basis) and reuse of words (“cool”, “okay”) and avoidance of others (“groovy”, “hip”).

Pre-generated word lists avoid this type of selection bias by randomly selecting words which are common enough that the user should be able to spell the word from memory without being biased by personal preferences or regional differences in word choices.

Password Construction and Strength

The format of a password — the rules which are used to select characters or words — influences the strength of a password, but does not limit its possible strength, except to the extent that length limitations may be imposed on the result.

As an extreme example, consider a password that consists only of the letters “H” and “T”, and that you generate by repeatedly flipping a fair coin. If you make this password long enough it can have any strength desired — you only need to keep flipping a fair coin. But “long enough” in this case is actually unreasonably long. If you want a password that is as strong as a 10-character, mixed-case letter and digit password generated by our Strong Password Generator, your “H” and “T” password would have to be 60 characters long!
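The arithmetic behind that comparison, if you want to check it: each fair flip contributes one bit, while each character drawn uniformly from the 62 mixed-case letters and digits contributes log2(62) bits.

```python
# Why a 10-character generated password needs ~60 coin flips to match.
import math

bits_per_char = math.log2(62)         # ~5.954 bits per alphanumeric character
target = 10 * bits_per_char           # ~59.5 bits for a 10-character password
flips = math.ceil(target)             # 60 flips of "H" or "T"
print(bits_per_char, target, flips)
```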

Memorizable Passwords

Shorter passwords from truly random sources can be stronger than longer passwords from biased sources even if they draw from the same character sets. For example, an 8-character, mixed-case letter and digit password produced by our generator is going to be a much better password than the longer (10-character) “Abcde12345” password that a human might come up with. There is reason to believe that the more “strength requirements” (use a digit, use mixed case, add a special character, etc.) we impose on people, the worse the passwords that they create may get. Part of this has to do with alphabet reduction: users may choose to limit mixed alphanumeric passwords to more alpha and less numeric, or more lowercase and less uppercase.


For example, the rule “at least 8 characters, 1 uppercase letter and 1 digit” will produce approximately 80 billion (8 × 10¹⁰) passwords (6 lowercase, 1 uppercase, 1 digit: (26 * 26 * 26 * 26 * 26 * 26) * 26 * 10, or about 36 bits) if the uppercase letter appears first, followed immediately by a digit. But again, that assumes that the password was created by something that had a good random number generator and knew how to use it, and that the password isn’t simply a capitalized 8-character word with a single vowel replaced by a digit, such as “B3musing” (possibly as few as 14 bits). If the uppercase character and digit are allowed to be in any of the 8 possible positions, that increases the number to approximately 4,500 billion possible passwords (or about 42 bits).

This is a classic example of alphabet reduction where the complete rule should have been expressed as “at least 8 characters, 1 uppercase, 1 lowercase, 1 digit, and the remaining 5 chosen completely at random from the set of uppercase and lowercase letters, and digits”. When this revised rule is used, and a CSPRNG is used to select the characters, the number of possible passwords increases to ((62 * 62 * 62 * 62 * 62) * 26 * 26 * 10 * (8 * 7 * 6)), a total of about 2 million billion possible passwords (or about 50 bits). Each additional alphanumeric character, chosen completely at random by a CSPRNG, adds about 5.9 bits of additional strength.
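You can reproduce all three counts in a few lines; this sketch just restates the arithmetic above.

```python
# The three password-count calculations from the text.
import math

naive = 26**6 * 26 * 10        # uppercase first, digit second
anywhere = 8 * 7 * naive       # uppercase and digit in any two positions
complete = (8 * 7 * 6) * 26 * 26 * 10 * 62**5   # remaining 5 fully random

for n in (naive, anywhere, complete):
    print(f"{n:.2e} passwords, about {math.log2(n):.1f} bits")
# ~8.0e10 (36.2 bits), ~4.5e12 (42.0 bits), ~2.1e15 (50.9 bits)
```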

Truly random length-limited passwords are hard for human beings to generate and memorize because people tend to choose less randomness in favor of greater memorizability. Using 1Password to generate and store passwords ensures that strong, unique, random passwords can be used without worrying about forgetting them.

Memorizable Passphrases

Choosing multiple words from a suitably large dictionary may result in stronger passwords even if all of the words appear in dictionaries, are spelled with lowercase letters, and no punctuation is used. Assuming a dictionary of 20,000 common words (about 14.3 bits per word), chosen entirely at random and all lowercase, the number of possible 4-word passwords increases to 160 million billion (about 57 bits).
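A generator in this style is only a few lines. In the sketch below, wordlist.txt is a placeholder for any list of roughly 20,000 common words, one per line; the essential ingredient is that the words are chosen by a CSPRNG, not by you.

```python
# A sketch of a four-word passphrase generator. "wordlist.txt" is a
# placeholder path; any large list of common words will do.
import math
import secrets

with open("wordlist.txt") as f:
    words = [line.strip() for line in f if line.strip()]

passphrase = " ".join(secrets.choice(words) for _ in range(4))
bits = 4 * math.log2(len(words))   # about 57 bits for 20,000 words
print(passphrase, round(bits, 1))
```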

Studies of our ability to easily remember information have shown that we have limits to our ability to memorize seemingly random information, unless we have a useful mnemonic device or the information is grouped in a particular manner. This is why telephone numbers and postal codes tend to be grouped as they are, and why mnemonic devices are popular, such as “My Very Educated Mother Just Served Us Nine Pizzas”.

XKCD

XKCD comic 936 is a perfect example of how easy it may be to memorize the random four-word password “correct horse battery staple”. As our Chief Defender Against the Dark Arts Jeffrey Goldberg will tell you, you may even add your own rules to that password to make it easier to memorize — “Correct! Horse: Battery staple.” Now you have a nice story to help you memorize a strong, random, unique password.

What It Means to You

Our strength meter (along with every other strength meter ever designed) has to guess how the password it is evaluating was created, unless you are actively generating the password in 1Password at that very moment. This means that you may see a big mismatch between “actual” and reported strength for our generated passwords.

The good news is that our password generator does a really good job of generating truly random passwords, so when you generate a secure, random, and unique password with 1Password, you know that it was generated just for you, right there on your device and it is as strong as can be for the length and rules you requested. So, let’s hear it for “paddle shrill sonorant palazzi ravioli” and “8dUaYolTJu82DDG9” — so happy to meet you, secure, random, and unique passwords that you are!

Additional Reading

For a more comprehensive discussion of generated password strength, please read the Geek Edition of our guide to creating better master passwords.

You’ll also find an article in our knowledge base that discusses password strength meters, chicken entrails, and assorted feats of strength.

More Watchtower, still no watching

There are some great new features in the 1Password for iOS 6.2 update that hit the App Store last week. One of them is that we’ve added Watchtower (a feature that has been available on Mac and Windows for some time now) to 1Password for iOS.

Watchtower warns you if a site or service has been compromised in a way that would make it a good idea for you to change your password for that site. Watchtower in 1Password compares the most recent time a password change was recommended for a site with the time that your password for that item was last modified. If, like Molly (one of my dogs), you haven’t updated your Adobe password since the 2013 breach, you might see something like this:

Watchtower warning in 1Password on iPhone

Molly hasn’t changed her Adobe password since the breach a couple of years back

Preserving your privacy

I want to talk about a far less visible feature of Watchtower: We’ve added Watchtower support in a way that still preserves your privacy. We don’t want to know what sites and services you have in your 1Password vaults, so when 1Password checks to see if one of your Logins is listed in Watchtower, it does not make a query to our servers asking about it.

Enable Watchtower in iOS

Turning on Watchtower in iOS. “Your website information is never transmitted to the 1Password Watchtower service.”

Querying Watchtower without querying you

Our Watchtower people continually watch reports of site breaches and regularly update our database of such sites. This is how 1Password knows that a password change is recommended for some site.

The “obvious” way for 1Password on your computer (and now your iOS device) to alert you would be to go through your 1Password items and ask our database on some server about the status of those items. The problem with this “obvious” way of doing things is that any server your copy of 1Password queries would then be able to learn your IP address and what sites you have in your 1Password data.

If 1Password on some device were to ask our server, “Do you have Watchtower information about ISecretlyHateStarWars.org?” then our server will know that someone at your Internet address may have a very nasty secret. You certainly wouldn’t like us to know such things about you, and we don’t want to know such things either.

The road less travelled

So we don’t do things the obvious way. Instead, we send the same stripped-down version of our Watchtower database to everyone who turns on the feature. You have a local copy of the Watchtower data on your device, and 1Password just checks against that local copy. All we can know (if we chose to log such information) is which IP addresses have enabled Watchtower. We are never in a position to know what sites you have in your 1Password data.
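In pseudocode-ish Python, the pattern looks like this. The URL and the data layout are hypothetical stand-ins, not a real Watchtower endpoint; the point is that the lookup happens against a local copy that everyone downloads in full.

```python
# The privacy-preserving pattern: fetch the whole database, match locally.
# URL and schema are hypothetical, for illustration only.
import json
import urllib.request

with urllib.request.urlopen("https://example.com/watchtower.json") as resp:
    breaches = json.load(resp)    # e.g. {"adobe.com": "2013-10-04", ...}

def needs_password_change(domain: str, password_modified: str) -> bool:
    breach_date = breaches.get(domain)   # local lookup; no per-site query
    return breach_date is not None and password_modified < breach_date

print(needs_password_change("adobe.com", "2013-01-15"))   # True for Molly
```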

Baked-in privacy

It took a bit of extra work to design Watchtower in a way that preserves your privacy, but we think it is worth it.

Your privacy must be protected by more than mere policy (a set of rules we make on how we behave with respect to data about you); instead, we aim to bake privacy protection into the very structure of what we build. We design 1Password in a way that would make it hard for us to violate your privacy.

You can read more about this approach to privacy in our support article, Private by Design.

When back doors go bad: Mind your Ps and Qs

This is going to be a long and technical article, but the point can be stated more simply:

The kinds of security architectures in which it is easy to insert a back door are typically less secure than those in which it is hard to insert one. The back doors recently disclosed in Juniper Networks’ ScreenOS firewalls exemplify this point dramatically. It is also possible that the back door technology developed by the NSA is being used by some entity other than the NSA.

Perhaps the Economist put it better in When back doors backfire:

The problem with back doors is that, though they make life easier for spooks, they also make the internet less secure for everyone else. Recent revelations involving Juniper, an American maker of networking hardware and software, vividly demonstrate how. Juniper disclosed in December that a back door, dating to 2012, let anyone with knowledge of it read traffic encrypted by its “virtual private network” software, which is used by companies and government agencies worldwide to connect different offices via the public internet. It is unclear who is responsible, but the flaw may have arisen when one intelligence agency installed a back door which was then secretly modified by another.

Two kinds of back doors

There were two back doors in Juniper’s ScreenOS that were fixed by Juniper: a mundane one and an interesting one. The mundane one was a back door into the authentication system, allowing someone with knowledge of it to simply log in and administer the devices. Back doors in authentication systems are easy to create, and they are all too common. This, by the way, is why we try to build 1Password’s security on encryption rather than authentication. As I’ve argued before: architectures that allow for back doors are bad for security.

The more interesting – and, for my soapbox, more instructive – back door is cryptographic. It also presents the greater threat. It allows the holder of the back door secret to decrypt certain kinds of traffic that were supposed to be encrypted by the ScreenOS VPN. The attacker only has to listen to network traffic that, as far as everyone else is concerned, is securely encrypted; with knowledge of the back door secret, she can decrypt it.

Another difference between the two is that the mundane one is a simple password (disguised as debugging code) that is now known to the world, so any unpatched system can now be logged into by anyone. That secret is out. The cryptographic back door, on the other hand, is a very different beast. We know it’s there, and we know how it works, but we do not have the key. The secret needed to make use of the back door is still a secret.

I don’t want to suggest that the back door in the ScreenOS authentication system isn’t important. It is very worrisome on its own, given where ScreenOS is deployed. But we all know that it is easy to add a back door to something that is merely protected by authentication (no matter how many authentication factors one has in place).

Politically, the questions about how that authentication back door got put into place in such an important product, and who may have been using it, are huge. But it offers nothing new from a technological point of view.

The cryptographic back door

Oh. I want to work for the NSA! Evil mathematicians just like Professor Moriarty!

—My daughter, upon learning about the Dual EC backdoor

The cryptographic back door depends entirely on how something called Q was created. Q is not a number (nor is it a free man); instead it is a point on a special sort of graph. If Q was created randomly, then there is no back door key and all is well and good. But if Q was created by adding P (another point on the graph) to itself some large number of times, then the number of times P was added to itself becomes the back door secret.

You are going to see the equation

Q = dP

many times. Neither Q nor P is secret. But d is secret, and it is the back door key.

No doubt you are thinking something like, “Well, if P and Q aren’t secret, can’t we all figure out what d is by dividing Q by P?” Shouldn’t

d = Q/P?

Well, d kinda sorta does equal Q/P, but there just isn’t enough computing power around to perform the relevant calculation. I’ll save that bit of fun math for (much) further below.

Necessary historical background

I’m one of those people who, when asked a simple question, go into more historical background than many people appreciate. But here it really is necessary. Really.

ScreenOS (and others) used a cryptographic algorithm that was deliberately designed to have a back door that only the United States National Security Agency (NSA) was supposed to know about. The cryptographic algorithm is Dual Elliptic Curve Deterministic Random Bit Generator (Dual_EC_DRBG). I will refer to that simply as “Dual EC” from here on out.

Dual EC was one of four DRBGs specified in NIST SP 800-90A, and even leaving the back door aside, it is technically inferior to the other three listed in that document. The standard specified a P and a Q but did not explain how they were chosen. In 2007, Dan Shumow and Niels Ferguson of Microsoft showed that if Q had been created with something like Q = dP, then whoever held d could predict the output of Dual EC, and so decrypt traffic whose keys were generated with it.

In 2014, a group of cryptographers published a great explanation of how knowledge of d can be exploited, and even demonstrated a proof of concept (with their own Q).

Back in 2007, nobody (well, nobody who was going to say anything) knew whether or not Q had been created so as to have the back door. In September 2013, it became clear from some of the documents revealed by Edward Snowden that Dual EC had been designed with the back door. The Wikipedia page on Dual EC has a useful timeline.

How many bad Qs?

For ’tis the sport to have the engineer
Hoist with his own petard
Hamlet Act 3, Scene 4

So far, the story I’ve told would merely suggest that Juniper has some explaining to do about why it used Dual EC in the way that it did, and why it chose a DRBG that was inferior to the alternatives and suspected of being backdoored. Juniper Networks is (sadly) not the only company in that position.

But I’ve only told part of the story. ScreenOS did not use the NIST Q. It used two custom Qs over time, and where they came from and who holds the back door keys associated with them is something we simply don’t know. The back door key associated with the Q from the NIST standard is clearly held by the NSA. But what about the two Qs used (at different times) in ScreenOS?

Juniper explained that they didn’t use the NIST Q in their Dual EC implementation but instead used a Q that was created randomly. However, they did not provide any proof that it was created randomly. (There are ways to verifiably generate such values at random.) So we don’t know whether that “official Juniper Q” was created in a way that gives its creator a back door key. Nor do we know who might hold that key.

But there is a third Q that was, according to Juniper, inserted by parties unknown in 2012. That is, someone simply changed the Q that is used by Dual EC in ScreenOS. We can only assume that this third Q is indeed backdoored, as it was added to the code surreptitiously. We have no idea who did this.

So even if we take Juniper at their word about their choice of Dual EC and other related design choices, the fact that they chose to use a system designed so that it could have a back door means that someone – we don’t know who – has had a back door key for decrypting VPN traffic between NetScreen devices for three years.

I have said it before and I will say it again, systems that are designed to have back doors (even if the back doors aren’t used) are inherently weaker than systems that are designed to make it difficult to have a back door.

Thar be math

The rest of this article tries to explain why it is easy to calculate Q from d and P but hard to calculate d from P and Q. Unless you have a great deal of patience and would like to learn more about the math, you can stop reading here.

Here is a picture of Patty and Morgan enjoying the pleasures of warm blankets on the sofa. Morgan also has no patience for math.

Patty and Morgan relaxing on the couch. Not doing math.

Hard math, simple math

The actual math isn’t all that complicated, but it does rely on a number of unfamiliar notions. Some of these notions are hard to grasp not because they are complicated, but because they are simple. They are highly abstract. Indeed, you might even have to unlearn some things you know about addition.

I said earlier that even though P and Q are public, we can’t figure out what d is in Q = dP, but someone with d and P can easily construct Q. I’m not going to fully explain all of this, but I will try to help people get a better sense of how something like this might be true. Along the way, I’m going to tell a few lies and omit lots of stuff. I need to lie to stand a reasonable chance of this article being useful to people who don’t already know the math.

One thing I should tell you right away is that P and Q are not numbers, while d is a number. I will get around to explaining what P and Q actually are below, but first we need to understand what it means to add two things together.

I am not going to try to explain how knowledge of d provides a back door. That is something that is difficult enough to explain in the normal case and is even harder given how Dual EC is actually used within ScreenOS. (See Ralf-Philipp Weinmann’s “Some Analysis of the Backdoored Backdoor” for some of those details.)

Instead, I’m focusing on a narrow point, allowing me to go into more depth about something that is central to modern cryptography.

Simple addition

Most of us have a pretty good sense of how addition works. We all pretty much know what the “+” in “3 + 5 = 8” means. We may not be able to define it precisely, but it is a basic concept that we understand. Well, mathematicians like to define things precisely, and more importantly they like to distill things down to their simplest essence. Let’s see what the essential properties of addition are.

What are we adding?

To define addition we need to talk about what things are going to be added to what things. The expression “3 + blue” doesn’t mean anything because “blue” and “3” are not in the same set of things we might add. We might want to talk about adding colors, and so could reasonably say “red + blue = purple”.

Blue + red = purple: addition can apply to more than just numbers.

Sometimes we add numbers; sometimes we add colors; but we need to have a set of things (numbers perhaps, or maybe colors) that we will be adding.

So let’s call the set of things that we will be adding E for an ensemble of things. When we are adding numbers, “blue” won’t be in E, but when we are adding colors, “blue” may very well be in E. When we are adding colors, 3 won’t be in E but it will be when we are adding whole numbers.

For the addition that most people are used to, there are infinitely many things in E, but that doesn’t have to be the case. When we add hours to work out what time it will be 15 hours after 9 o’clock, we use only 12 (or 24) numbers, of course. Remember, we are trying to distill addition down to its essence, so we want it to work both when E has a finite number of things in it and when it doesn’t.

Addition offers no escape

If we are adding numbers, we don’t want 3 + 5 to equal “blue”. We want 3 + 5 to result in another number. If we are adding colors, we want the result of addition to be some color.

So we will say that

If a and b are both in E then a + b is also in E.

The fancy name for this is “closure”, or “E is closed under addition”.

You can switch things around

We like addition to work out so that “a + b” will yield the same result as “b + a”, so we say that

If a and b are both in E then a + b = b + a

We say that addition is “commutative”.

Any groupings you want

When you mix blue and red and white, it doesn’t matter if you mix the blue and the red first or the red and the white first. Likewise if you add 3 + 2 + 10, it shouldn’t matter whether you add the 3 and the 2 first or the 2 and the 10 first.

If a, b and c are all in E then (a + b) + c = a + (b + c)

And so we say that addition is “associative”.

We need a way to do nothing

I have no problem grasping the importance of doing nothing, but not everyone seems to agree with me (Hi boss!). Anyway, we need a way to do nothing with addition.

To put that more mathy, we want E to contain something that behaves like zero. If you add zero to anything, you end up with what you started with.

If a is in E and 0 is the zero element then a + 0 = 0 + a = a

For the color adding analogy, we might add the special color “transparent” as our zero element so that blue + transparent = blue.

The fancy way to talk about zero is to call it “the additive identity”.

Adding it all up

Mathematicians are happy to call anything that meets those four conditions “addition”.
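If you like seeing such definitions put to work, here is a quick sanity check in Python – a toy illustration I’m adding here, not anything from 1Password – that clock arithmetic satisfies all four properties, even though its E holds only twelve things:

```python
# Clock arithmetic: E = {0, 1, ..., 11}, addition wraps around at 12.
from itertools import product

E = range(12)

def add(a, b):
    return (a + b) % 12        # 15 hours after 9 o'clock: add(9, 15) == 0

assert all(add(a, b) in E for a, b in product(E, E))           # closure
assert all(add(a, b) == add(b, a) for a, b in product(E, E))   # commutative
assert all(add(add(a, b), c) == add(a, add(b, c))
           for a, b, c in product(E, E, E))                    # associative
assert all(add(a, 0) == a == add(0, a) for a in E)             # zero element
print("Clock addition passes all four tests")
```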

Getting to the points

So we can add numbers and we can add colors. But let’s talk about points on a peculiar geometric shape called an elliptic curve. Here is a portion of an elliptic curve, and it has two points on it, P and Q. (These aren’t the same as P and Q used in Dual EC.)

An elliptic curve with points P and Q.

There are some interesting properties of these odd sorts of shapes. One of them is that if a (non-vertical) straight line crosses the curve at all, then it will cross it in exactly three places.

For example, if we draw a line that goes through both P and Q then that line will go through the curve at exactly one other point.

The first step in adding points P and Q is drawing the line connecting them and seeing where else that line crosses the curve.

From that third point where the line crosses the curve we can find the point on the opposite side of the curve by drawing a vertical line up or down (as needed) from that point of intersection:

Draw a vertical line from where the line between P and Q crosses the curve to get point P + Q.

Now as strange as it might seem, if we define addition of points on an elliptic curve this way, it will meet all the properties of addition that we want. (I’m leaving out how the zero element is defined, and have constructed my examples so that I can get away with that.)

Repeated addition

Now let us return to the problem we started out with. If Q = dP, how come it is easy to calculate Q if we know d and P, but not to find d from Q and P? P and Q are points on the curve, while d is just an ordinary (though very large) number.

Well, we’ve defined what addition means for points, but what does it mean to say, for example, “4P”? “4P” is just shorthand for writing “P + P + P + P”. (Note that we can treat “4P” kind of like multiplication, but only because 4 is an ordinary number and we can just say that it is repeated addition of P. We haven’t defined any way to multiply two points together.)

Let’s work through a small example and try to find where 4P is. The first thing we need to do is add P to itself to get 2P.

We’ve already stepped through how to add two different points, but how do we add P to itself? We take as our line “going through P and P” the tangent line at point P. This is the line that touches the curve at P but does not cross it there:

The tangent line at P crosses the curve in exactly one other place.

That line crosses the curve at some other point. And just as we did when we added P and Q earlier, we draw a vertical line through that point of intersection to get our sum:

When adding a point to itself, we take the tangent at that point to construct the P,P line.

Now we have point P, which we started with, and we also have point P + P, or 2P. So if we want to find 3P we can just add P and 2P together.

Add 2P to P to find point 3P on the curve.

This gives us point 3P and we can add 3P and P together to get to our goal of 4P:

To get point 4P, we add 3P and P.

A shortcut

It took us three additions to calculate 4P from P. But we could have saved ourselves one of those. We never really needed to calculate 3P. Instead once we had the point 2P we could have just added 2P to itself to get 4P. Because point addition is associative, we can think of 4P as (P + P) + (P + P), which is 2P + 2P.

4P = 2P + 2P on elliptic curve

Adding 2P to itself is another way to get 4P

With this shortcut, we have been able to calculate 4P from P with only two additions: The first addition was adding P + P to get 2P; the second addition was 2P + 2P to get 4P.

The shortcut offers huge savings

When calculating 4P our shortcut saved us only one addition, but the savings get bigger the bigger d is. Suppose we wanted to calculate 25P. The slow way would require 24 additions, but using this shortcut, it would only take us six additions.

  1. 2P = P + P
  2. 4P = 2P + 2P
  3. 8P = 4P + 4P
  4. 16P = 8P + 8P
  5. 24P = 16P + 8P
  6. 25P = 24P + P

So doing only six of these additions instead of 24 the slow way is a big savings. When the numbers get really big, the savings get much bigger. Suppose we wanted to calculate the result of adding P to itself one billion times. 1000000000P would take almost a billion additions the slow way, but using the shortcut we can do it with only about 45 additions.

The number of point additions it takes to calculate dP using this shortcut is roughly proportional to the number of bits in d.
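That shortcut is known as double-and-add, and it is short enough to sketch in Python. This version is my own illustration, not production code: the add argument can be any addition with the properties above (ec_add from the earlier sketch would do), and here I use ordinary integer addition as a stand-in so you can watch the addition counter. It spends a couple more additions than the hand-built chain above, but the totals stay tiny:

```python
def scalar_mult(d, P, add, zero):
    """Compute dP = P + P + ... + P using roughly 2 * (bits in d) additions."""
    additions = 0
    result, power = zero, P            # power runs through P, 2P, 4P, 8P, ...
    while d:
        if d & 1:                      # this bit of d is set, so
            result = add(result, power)    # fold this power-of-two multiple in
            additions += 1
        power = add(power, power)      # one doubling per bit of d
        additions += 1
        d >>= 1
    return result, additions

int_add = lambda a, b: a + b           # stand-in: ordinary integer addition
print(scalar_mult(25, 1, int_add, 0))        # (25, 8) -- a handful of additions
print(scalar_mult(10**9, 1, int_add, 0))     # (1000000000, 43) -- not a billion
```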

No shortcut to d

So we’ve got a really efficient way to compute Q from d and P, but there is no efficient shortcut for finding d from P and Q. Indeed, this asymmetry between calculating Q and finding d is one of the things that makes elliptic curve cryptography work.

When numbers get big

When we start talking about the huge numbers that cryptographers actually use for these sorts of secrets, we get truly massive savings. Those savings are so big that using the shortcut means we can calculate the result in a fraction of a second, while doing it the slow way would take all of the computers on earth many times the age of the universe.

The number of point additions needed to calculate Q = dP is roughly proportional to the number of bits in d, while the number of additions needed to find d from P and Q the slow way is roughly d itself. As d gets big (say, a 256-bit number), the difference between d and the number of bits in d is enormous. For a 256-bit d, calculating Q takes fewer than 512 point additions, but calculating d the slow way takes a number of additions so large that I don’t know how to write it out: it would have some 77 digits. To get a sense of just how big numbers like that are, take a look at “Guess why we’re moving to 256-bit keys”.
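You can check that arithmetic yourself; a couple of lines of Python (my own back-of-the-envelope, added here for illustration) reproduce both numbers:

```python
d_bits = 256
shortcut_additions = 2 * d_bits        # double-and-add: at most ~2 per bit
slow_additions = 2 ** (d_bits - 1)     # roughly d itself for a 256-bit d

print(shortcut_additions)              # 512
print(len(str(slow_additions)))        # 77 -- the digit count mentioned above
```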

And if I may introduce more terminology, that asymmetry is an instance of what is called the “Generalized Discrete Logarithm Problem” (GDLP).

What is the GDLP good for?

To give you a hint of how this sort of thing can be used for keeping secrets, suppose that two of my dogs, Molly and Patty, wish to set up secret communication without Mr. Talk (the neighbor’s cat) listening in. The dogs will need to establish between them a secret key that only they know, but how can they do this if Mr. Talk is listening in on their communication?

First, Patty and Molly will agree on an elliptic curve and some other non-secret parameters. Among those parameters there will be a starting point, which I will call G. Patty picks a secret number that only she knows; we will call that little p. Patty then calculates the point pG. Let’s call the point that Patty calculates big P.

Patty can send P to Molly, but neither Molly nor Mr. Talk (who is eavesdropping) will be able to calculate little p, even though they both know G and P. Big P is Patty’s public key, while little p is Patty’s private key.

Molly can do the same. She calculates M = mG, where little m is Molly’s secret, and sends big M to Patty.

Now here is where it gets fun. Patty can calculate point K_p by multiplying M by little p. That is

K_p = pM

Molly can also calculate K_m by multiplying Patty’s public key, P, by Molly’s own secret, m. K_m = mP.

The Ks that both calculate will be the same. This is because point addition is commutative and associative. Remember that M = mG. So

K_p = pM = pmG.

We can look at what Molly calculates from Patty’s public key the same way: P = pG. When we look at what both Molly and Patty calculate with their own secrets and the other’s public point, we get

K_m = mP = mpG = pmG = pM = K_p

Let’s look at each end of that long chain of “=”s, but drop out all of the middle parts.

K_m = K_p

Both Molly and Patty calculate the same secret K even though neither of them has revealed any secret in their communication. And Mr. Talk, who has listened to everything Molly and Patty sent to each other, cannot calculate K because there is no feasible way for him to learn either Patty’s secret p or Molly’s secret m.

This is magic with mathematics.
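Here is the whole exchange as a runnable Python sketch, on the same toy curve as the earlier sketches (repeated so this example stands on its own). Everything here is invented for illustration: on a real curve the modulus and the secret numbers are enormous, and tiny toy secrets like these could be brute-forced by any sufficiently motivated cat.

```python
# Patty and Molly's key agreement on the toy curve y^2 = x^3 + 2x + 2 (mod 17).
P_MOD, A = 17, 2
O = None                           # zero element (point at infinity)

def ec_add(p1, p2):
    if p1 is O:
        return p2
    if p2 is O:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O
    if p1 == p2:
        slope = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)
    else:
        slope = (y2 - y1) * pow(x2 - x1, -1, P_MOD)
    x3 = (slope * slope - x1 - x2) % P_MOD
    return (x3, (slope * (x1 - x3) - y1) % P_MOD)

def scalar_mult(d, point):
    result, power = O, point
    while d:
        if d & 1:
            result = ec_add(result, power)
        power = ec_add(power, power)
        d >>= 1
    return result

G = (5, 1)                         # agreed, non-secret starting point
p, m = 5, 7                        # Patty's and Molly's secret numbers

P = scalar_mult(p, G)              # Patty's public point, sent in the clear
M = scalar_mult(m, G)              # Molly's public point, sent in the clear

K_p = scalar_mult(p, M)            # Patty computes pM = pmG
K_m = scalar_mult(m, P)            # Molly computes mP = mpG

assert K_p == K_m                  # the same K; Mr. Talk saw only G, P, and M
print(K_p)                         # (10, 11) on this toy curve
```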

Lies and omissions

There are a lot of things that I left out of the explanation I gave here. And in a few cases I even told a couple of lies to make the explanation simpler. I’m only going to mention some of them here.

  • The shortcut that I described for computing Q from d and P is not the most efficient shortcut, but it is an efficient shortcut that is easiest to explain.
  • There are shortcuts for calculating d, but there are thought to be no efficient non-quantum ones.
  • I have not explained what I mean by “efficient”. (It is a technical term.)
  • I haven’t told you the formulas for doing addition on elliptic curves in the text; I only did it with pictures (though the toy Python sketch above shows one way to write the formulas down).

    So my daughter texted me a picture of a science fair project she saw, and I texted back. It looks like someone at my daughter’s high school did a science fair project on finding good algorithms for point doubling, thus forcing me to reveal just how much of a pushy and intrusive parent I am.

  • To make addition on elliptic curves work as an instance of the GDLP, we need E to have only a finite number of elements. That involves, among other things, doing all of the normal arithmetic in the formulas modulo some large prime number.
  • I didn’t talk about the additive identity (zero element) for addition on elliptic curves.
  • I never said what “=” means when I defined addition.
  • I kinda sorta misused the terms “public key” and “private key” when applying those terms to ECDHE.
  • I didn’t say anything at all about the existence of additive inverses.
  • The addition of colors analogy stops working if we were to consider additive inverses.
  • I implied that all and only the properties I listed for addition are what is needed for something to be called “addition”. In fact, the properties I listed plus the existence of additive inverses are what is needed for the specific type of addition we use in this cryptography. Addition of numbers requires something more, because numbers – unlike points on an elliptic curve or colors – line up in an order.

The after math

I’ve been able to use an example of a rare cryptographic back door as a chance to talk about the math of elliptic curve cryptography. But if you are asking what this has to do with AgileBits and our products (after all, we don’t use elliptic curves in our products), then we must return to the original point: if you build a system into which it is easy to add a back door, you shouldn’t be too surprised when a back door is added. So let’s not build things that way.

I will close with another quote from that Economist article:

Calls for the mandatory inclusion of back doors should therefore be resisted. Their potential use by criminals weakens overall internet security, on which billions of people rely for banking and payments. Their existence also undermines confidence in technology companies and makes it hard for Western governments to criticise authoritarian regimes for interfering with the internet. And their imposition would be futile in any case: high-powered encryption software, with no back doors, is available free online to anyone who wants it.