Finding Pwned Passwords with 1Password

Yesterday, Troy Hunt launched version 2 of Pwned Passwords, a service that allows you to check whether your passwords have been leaked on the Internet. His database now contains more than 500 million passwords collected from various breaches. Checking your own passwords against this list is immensely valuable.

We loved Troy’s new service so much that we couldn’t help but create a proof of concept that integrates it with 1Password. Here’s how it looks:

What’s even more fun than watching this video is giving it a try yourself. 🙂

Checking your passwords

This proof of concept was so awesome that we wanted to share it with you right away. It’s available today to everyone with a 1Password membership. To check your passwords:

  1. Sign in to your account on 1Password.com.
  2. Click Open Vault to view the items in a vault, then click an item to see its details.
  3. Enter the magic keyboard sequence Shift-Control-Option-C (or Shift+Ctrl+Alt+C on Windows) to unlock the proof of concept.
  4. Click the Check Password button that appears next to your password.

Check if your password has been pwned

Clicking the Check Password button will call out to Troy’s service and let you know if your password exists in his database. If your password is found, it doesn’t necessarily mean that your account was breached. Someone else could have been using the same password. Either way, we recommend you change your password.

In future releases we’ll be adding this to Watchtower within the 1Password apps, so you can see your pwned passwords right in the 1Password app you use every day.

As cool as this new feature is, we would never add it to 1Password unless it was private and secure.

Keep your passwords private and secure

Personally, I’ve always been afraid of using a service that requires me to send my password to be checked. Once my password has been sent, it’s known, and I can’t use it anymore. It’s the same reason why “correct horse battery staple” was a strong password until this comic came out. 🙂

Thankfully, Troy Hunt and his friends from Cloudflare found a brilliant way to check if my password is leaked without ever needing to send my password to their service. Their server never receives enough information to reconstruct my password.

I’m really happy they managed to find a way to make this possible because it allowed us to integrate this feature with 1Password.

Hopefully you’re as intrigued by how this works as I am. It’s what got me the most excited when I saw Troy’s announcement!

How it works

Before I dive into the explanation, I want to reiterate that Troy’s new service allows us to check your passwords while keeping them safe and secure. They’re never sent to us or his service.

First, 1Password hashes your password using SHA-1. But sending that full SHA-1 hash to the server would provide too much information and could allow someone to reconstruct your original password. Instead, Troy’s new service only requires the first five characters of the 40-character hash.

To complete the process, the server sends back a list of leaked password hashes that start with those same five characters. 1Password then compares this list locally to see if it contains the full hash of your password. If there is a match, the password has been leaked and should be changed.
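The exchange above can be sketched in a few lines of Python. This is a purely local illustration: the function names are mine, and the real service is queried over HTTPS (the response format is simplified here to a plain list of suffixes).

```python
import hashlib

def hash_prefix_and_suffix(password: str) -> tuple:
    """Split the SHA-1 hash of a password into the 5-character prefix
    that is sent to the service and the 35-character suffix that never
    leaves the client."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_pwned(password: str, suffixes_from_server: list) -> bool:
    """Check locally whether our suffix appears in the list of suffixes
    the server returned for our 5-character prefix."""
    _, suffix = hash_prefix_and_suffix(password)
    return suffix in suffixes_from_server
```

For example, the SHA-1 hash of “password” is 5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8, so the client sends only 5BAA6 and searches the returned list for the remaining 35 characters.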

Troy has a detailed write-up of how this works under the hood in his Pwned Passwords V2 announcement post. Check out the “Cloudflare, Privacy and k-Anonymity” section if you find this as fascinating as I do.

Take some time to play with our proof of concept. Generate some new passwords to replace your pwned ones, and let me know what you think in the comments. 😎

A thank you to Troy Hunt

Troy Hunt is a respected member of the security community. He’s best known for his Have I Been Pwned? service.

Troy invests a lot of his personal time collecting data from every website breach he can find, adding every leaked password to his database. The Internet is a safer place thanks to Troy Hunt.

Edited: I’m thrilled to see Troy likes what we’ve done with this. 🙂

Developers: How we use SRP, and you can too

Donkey: Oh, you both have layers. Oh. You know, not everybody likes onions. Cake! Everybody loves cake! Cakes have layers!
Shrek: I don’t care what everyone likes! Ogres are not like cakes.
Donkey: You know what else everybody likes? Parfaits! Have you ever met a person, you say, “Let’s get some parfait,” they say, “Hell no, I don’t like no parfait”? Parfaits are delicious!

1Password uses a multi-layered approach to protect your data in your account, and Secure Remote Password (SRP) is one of those very important layers. Today we’re announcing that our Go implementation of SRP is available as an open source project. But first, I’d like to show you the benefits SRP brings as an ingredient in the 1Password security parfait.

Parfaits: delicious and secure

The first layer of security in 1Password, your Master Password, protects your data end to end – at rest and in transit – but we wanted to go further. The second layer is something we call Two-Secret Key Derivation. It combines your Secret Key with your Master Password to greatly improve the strength of the encryption. Thanks to your Secret Key, even if someone got your data from our servers, it would be infeasible to guess your Master Password.

That still wasn’t enough for us, though. When we first started planning how we were going to securely authenticate between 1Password clients and server, we had a wish list. We wanted to ensure that:

  • your Master Password is never transmitted or stored on the server.
  • eavesdroppers can’t learn anything useful.
  • the identity of user and server are mutually authenticated.
  • the authentication is encryption-based.

There was actually one other requirement that wasn’t exactly part of the list but applied to every item in it: we didn’t want to roll our own solution. We know better than to roll our own crypto, and we wanted a proven solution that has stood the test of time.

We wanted this layer to be just right.

SRP: a hell of a layer

It took us a while to find what we needed for this layer. (Apparently the marketing department of augmented password-authenticated key agreement protocols is underfunded.) But we eventually found SRP, which ticked all our boxes. SRP is a handshake protocol that makes multiple requests and responses between the client and the server. Now, that may not sound very interesting – and I’m not one to show excitement easily – but SRP is a hell of a layer. With SRP we can:

  • authenticate without ever sending a password over the network.
  • authenticate without the risk of anyone learning any of your secrets – even if they intercept your communication.
  • authenticate both the identity of the client and the server to guarantee that a client isn’t communicating with an impostor server.
  • authenticate with more than just a binary “yes” or “no”. You actually end up with an encryption key.

All this makes SRP a great fit for 1Password, and it keeps your data safe in transit. As an added bonus, because SRP is encryption-based, we end up with a session encryption key we can use for transport security (the fourth layer) instead of relying on just Transport Layer Security (TLS).

So that’s four layers of protection for your 1Password account: Master Password, Secret Key, SRP, and TLS. Now I’d love to show you how SRP works step by step in 1Password.

How 1Password uses SRP

Before any authentication can be done, the account needs to be enrolled. 1Password does this when you create an account. To use SRP, you’ll need a couple of things:

  • a key derivation function (KDF) that will transform a password (and, in our case, also your Secret Key) into a very large number. We’ve chosen PBKDF2.
  • an SRP group consisting of two numbers: one very large prime and one generator. There are seven different groups; we’ve chosen the 4096-bit group.

Enrollment

To enroll, the client sends some important information to the server, and the server saves it:

  1. The client generates a random salt and Secret Key.
  2. The client asks the user for a Master Password.
  3. The client passes those three values to the KDF to derive x.
  4. The client uses x and the SRP group to calculate what’s called a verifier.
  5. The client sends the verifier, salt, and SRP group to the server.
  6. The server saves the verifier and never transmits it back to the client.

Now the account is ready for all future authentication sessions.
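The enrollment steps above can be sketched like this. Everything here is a toy: the group is far too small for real use (1Password uses a 4096-bit group), the PBKDF2 parameters are made up, and concatenating the Master Password and Secret Key is a simplification of the real two-secret KDF.

```python
import hashlib
import os

# Toy SRP group, for illustration only. A real deployment uses a large
# standardized group; these numbers provide no security whatsoever.
N = 2**61 - 1  # a prime (the Mersenne prime M61)
g = 2

def derive_x(master_password: str, secret_key: str, salt: bytes) -> int:
    """Derive the secret x from the salt, Master Password, and Secret Key.
    PBKDF2 stands in for the full KDF; combining the two secrets by
    concatenation is a simplification."""
    material = hashlib.pbkdf2_hmac(
        "sha256", (master_password + secret_key).encode(), salt, 100_000)
    return int.from_bytes(material, "big")

def enroll(master_password: str, secret_key: str):
    """Produce the salt and verifier that the client sends to the server."""
    salt = os.urandom(16)
    x = derive_x(master_password, secret_key, salt)
    verifier = pow(g, x, N)  # v = g^x mod N; x itself is never sent
    return salt, verifier
```

The server stores the salt and verifier; note that the verifier reveals neither x nor the secrets it was derived from.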

Authentication

To authenticate, the client and server exchange non-secret information. Then the client combines that with a secret that only it knows and the server combines it with a secret only it knows:

  1. The client requests the salt and the SRP group from the server.
  2. The client asks the user for the Master Password and Secret Key.
  3. The client passes the salt, Master Password, and Secret Key to the KDF to derive x.
  4. The client:
    1. Generates a random secret number a.
    2. Uses the SRP group to calculate a non-secret counterpart A.
    3. Sends A to the server.
  5. The server:
    1. Generates a random secret b.
    2. Uses the SRP group to calculate a non-secret counterpart B.
    3. Sends B to the client.

So A and B are exchanged, but a and b remain secrets. Through the power of math, the client (with x, a, B) and the server (with the verifier, A, b) can both arrive at the same very large number using different calculations. This is the number that 1Password uses as a session encryption key.
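Under the standard SRP-6a formulas, the agreement sketched above looks like the following. Again, everything is a toy: the group is tiny and x is a stand-in for the KDF output. The point is only that two different calculations, using different secrets, land on the same number.

```python
import hashlib
import secrets

# Toy SRP group, for illustration only (1Password uses a 4096-bit group).
N = 2**61 - 1  # a prime (the Mersenne prime M61)
g = 2

def H(*values: int) -> int:
    """Hash a sequence of integers down to a number mod N."""
    data = b"".join(v.to_bytes(32, "big") for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

k = H(N, g)        # SRP-6a multiplier parameter

x = H(12345)       # stand-in for KDF(salt, Master Password, Secret Key)
v = pow(g, x, N)   # the verifier, held only by the server

# Exchange of non-secret values: a and b stay private, A and B are sent.
a = secrets.randbelow(N)
A = pow(g, a, N)                 # client -> server
b = secrets.randbelow(N)
B = (k * v + pow(g, b, N)) % N   # server -> client

u = H(A, B)  # both sides compute this from the public values

# Each side reaches the same number from different inputs:
S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)  # uses x, a, B
S_server = pow((A * pow(v, u, N)) % N, b, N)              # uses v, A, b
```

Both expressions reduce to g^(b·(a + u·x)) mod N, which is why they agree; an eavesdropper who sees only A and B cannot compute that value.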

Verification

The last step is verification. After all, no amount of fancy math will help if the numbers don’t match up between client and server.

To verify, the client and server exchange encrypted messages:

  1. The client encrypts a message with the session encryption key and sends it to the server.
  2. The server decrypts the message and verifies it.
  3. The server encrypts its own message with the same session encryption key and sends it to the client.
  4. The client decrypts the message and verifies it.

If the client and server both used the correct inputs then they’ll both have the same session encryption key, which allows them to decrypt and verify the message. If they don’t use the correct inputs, everything fails.

The verification process proves to the server that the client has x, which can only be derived using the correct Master Password and Secret Key. It also proves to the client that the server has the verifier, which ensures that the client is communicating with the 1Password server, not an impostor.

Now that it has been verified, the session encryption key can be used to encrypt every message between the client and server going forward.
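The verification round trip can be sketched as a key-confirmation check. The real exchange encrypts a message with the session key; an HMAC tag, used here as a stand-in, fails in the same way when the two sides hold different keys.

```python
import hashlib
import hmac

def make_proof(session_key: bytes, label: bytes) -> bytes:
    """Produce a verification message bound to the session key."""
    return hmac.new(session_key, label, hashlib.sha256).digest()

def verify_proof(session_key: bytes, label: bytes, proof: bytes) -> bool:
    """Check a received proof; this succeeds only if both sides
    derived the same session key."""
    expected = make_proof(session_key, label)
    return hmac.compare_digest(expected, proof)
```

If either side used the wrong Master Password, Secret Key, or verifier, the derived keys differ and verification fails.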

As you can see, it’s critical to remember your Master Password and keep your Secret Key safe if you ever want to authenticate your 1Password account. They’re also very important layers, after all. 🙂 And, like any good parfait, everything comes together to create something better than the individual layers alone.

Implement SRP in your own app

We love SRP so much that we want to see it used in more of the apps we use. That’s why we want to help you get started with SRP in your own project. Our Go implementation of SRP is available as an open source project:

This package provides functions for both clients and servers, and you only need to BYOKDF (Bring Your Own Key Derivation Function, like any good party). It’s the same code we’re using on our server and in the 1Password command-line tool, so we welcome security researchers to take a look and report any issues through the 1Password Bugcrowd bug bounty program.

SRP is one of the less appreciated parts of 1Password, and I hope I’ve explained it well enough for you to implement it in your own project. You can read more about how we use SRP in the 1Password Security Design White Paper. Leave a comment if you have any questions, or file an issue if you see something that can be improved.

Until next time, enjoy that parfait!

Same as it ever was: There’s no reason to melt down

The Intel CPU flaw that is being referred to as “Meltdown” is a big deal. It allows a whole (new) category of malware to do things that it otherwise shouldn’t be able to do. This is not a good thing, and it remains a threat until operating systems are updated to no longer rely on certain security features of the CPUs.

But just because it is an extraordinary bug doesn’t mean that it requires an extraordinary response from most people. (Operating system designers are not “most people.”) The same practices that you should already be doing are enough.

What you can do is what you may already be doing

Stay updated, be careful where you get your software

Malware that exploits Meltdown may be particularly powerful, but it is still just malware. And so the practices that we’ve always recommended are the practices that will protect you now.

  1. Keep your system and software up to date.
  2. Be careful about where you get your software.

Regarding point 1, it appears that the latest version of macOS High Sierra already has defenses to guard against Meltdown. If you are using macOS, be sure that you are up to date. It also appears that Microsoft is in the process of releasing a security update for Windows.

For the second point, I recommend downloading software from app stores, such as the Mac App Store and the Microsoft Store. They can’t guarantee that no malware slips through, but they provide the easiest and most effective filter available.

Whatever you do, don’t respond to “scareware”. Scareware is typically sold through something that pops up fake alerts about your system being infected or compromised. These scary (and fraudulent) alerts then try to entice you into installing and running tools that will “clean” or “repair” your system. Unfortunately those tools do the exact opposite of what they claim to do.

Panicked people make poor security choices. And so this is why I am worried that fear about this issue might lead people to become more susceptible to scareware. Take a deep breath, don’t panic, and be calmly suspicious of scary alerts.

What we can do is what we have already been doing

1Password is designed so that even if an attacker can read every bit of data on our systems they cannot learn your secrets. We simply don’t have the capacity to decrypt your data, and that holds for anyone who compromises our systems. This has been essential to 1Password’s design from the very beginning, and it is why we don’t have to panic either.

Furthermore, it appears that AWS (our hosting provider) has already begun patching the servers. Keeping up with updates is one of the things we hire them to do.

Same as it ever was

I don’t want to downplay the extraordinariness of this bug. It is fascinating in many ways, and it does have broad impacts. But unless your job is to design and maintain operating systems, you should just follow normal practices of keeping your system up to date and not installing dodgy software.

There is a great deal of speculation, and news is coming thick and fast, so it may well be that some of the details of what I have said here will need correction. But the core message should remain the same: keep your systems and software up to date, and don’t install software from untrusted sources.

1Password keeps you safe by keeping you in the loop

This is a story with many beginnings and many threads coming together. The very short read of it is that 1Password’s browser extension was designed from the outset to keep you safe from some recently discovered browser-based attacks on some password managers.

Researchers at Princeton University’s Web Transparency and Accountability Project were investigating tracking scripts on web pages, and discovered that several of them attack browser-based password managers and extract the email addresses, usernames and sites stored in the browser’s password manager. As I said, 1Password is designed in such a way as to not be vulnerable to the kinds of attacks those scripts used. The scripts that attempt this are from Adthink (audience insights) and OnAudience (behavioralengine).

Whether or not they make malicious use of the passwords they extract, they are certainly learning which sites you have records for in those password managers. I would like to add that we’ve designed 1Password so that we cannot know which sites and services you have logins for.

There is a huge amount to say about the contemptible behavior of these trackers, and I’m hopeful that others will say so clearly. Here, I want to talk more about what all of this illustrates about 1Password’s design and our approach to security.

Saying “no” to automatic autofill

A commonly requested feature is an option that would have 1Password automatically fill in web forms as soon as you navigate to those pages in your browser. 1Password, instead, always requires that you take some action. Perhaps it is just hitting ⌘-\ or Ctrl-\, using our Go and Fill mechanism, or even setting up a 1Click Bookmark. But whichever of the several mechanisms 1Password makes available you use, you have to tell it that you want it to fill material on the page.

Plenty of you have written in over the years, saying that you would like 1Password to fill in web forms as soon as you get to a page, with no human intervention. We’ve even been told that it is a very popular feature of some other password managers.

It’s not a lot of fun saying “no” to feature requests. But that is what we have done for as long as I can remember. And for the rest of this article, I’m going to draw from something I wrote in our forums back in 2014.

Because of security concerns we are disinclined at this time to offer, even as an option, the feature you (and so many others) are asking for. […] but I do want to give you an overview of our reasoning for what might seem like an odd choice.

Automatically filling a web form with no user intervention other than visiting the page can, if combined with something that works around the anti-phishing mechanism [of 1Password], lead to an attack where lots of your usernames and passwords are submitted to a malicious site in a way that is silent and invisible to you.

The longer answer

I will use the terminology adopted by David Silver and co-authors in Password Managers: Attacks and Defenses, presented at the USENIX Security conference in 2014. In the terminology of that paper, the requested feature is “automatic auto-fill”, as opposed to the “manual auto-fill” that 1Password performs. That is, 1Password requires some user intervention before it will fill a form (such as typing Ctrl-\), instead of simply filling when you visit a page.

Although I am citing material from 2014, this kind of attack had been discussed since at least 2006, with one early discussion noting that

It’s really not phishing, as it doesn’t actually require the user to believe anything, as the social engineering portion of the attack is not there. As such you can steal user information through any page, as long as the automatic form submission requires no user input to fill the form.

This isn’t new.

Why am I now going to talk about phishing?

One of the great security benefits of 1Password is that it helps you avoid phishing attacks. When you ask 1Password to fill information into a page, it will not fill into pages that don’t match the URL of the item.

1Password has a number of mechanisms to prevent filling into the wrong page. That is, if you go to a form at paypal.evil.com, 1Password will not fill in a password saved for paypal.com, because the domains don’t match. Tricking a person into giving something like their PayPal password to a site that merely masquerades as PayPal is called “phishing”. The idea is that it should be harder to trick a password manager than a person. And it usually is. This is one of the many ways in which 1Password keeps you safe.
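A naive version of that domain check might look like the sketch below. This is only an illustration: 1Password’s real matching rules are considerably more involved, and a production implementation would need something like the Public Suffix List to handle registrable domains such as co.uk.

```python
from urllib.parse import urlsplit

def domains_match(saved_url: str, page_url: str) -> bool:
    """Allow filling only when the page's host is the saved host or a
    subdomain of it. Deliberately simplistic."""
    saved = urlsplit(saved_url).hostname
    page = urlsplit(page_url).hostname
    if saved is None or page is None:
        return False
    return page == saved or page.endswith("." + saved)
```

With this rule, www.paypal.com matches a login saved for paypal.com, while paypal.evil.com does not.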

For the kinds of attacks we’ve been talking about, the malicious web page content needs to trick or bypass the password manager’s anti-phishing mechanism. If a malicious script on MyKittyPictures.example.com is going to try to grab PayPal credentials, then it is going to have to fool the password manager into thinking that it is filling in a place that matches paypal.com.

We work very hard to make 1Password do the right thing in such cases. 1Password’s anti-phishing mechanisms work very well at preventing it from filling into the “wrong” web forms. But because of the nature of the HTML, iFrames, protocols, javascript, iFrames, conventions, page designs, and iFrames, the defenses that we (and everyone) have to use are messy and involve a series of rules and exceptions and exceptions to those exceptions. (Did I mention that iFrames are a trouble spot?) It is exactly the kind of thing that we know can go wrong.

So the question we’ve had to ask ourselves is whether the anti-phishing mechanisms are strong enough that we never, ever have to worry about 1Password filling data into the wrong place. We needed to decide whether the tools available for that defense are strong enough to let us build a mechanism that meets our standards. Unfortunately, they aren’t, and so we insist on another line of defense.

Invisible forms

The fields in which usernames, passwords, credit card numbers, and so on get filled won’t always be visible to you. Any page could have a form on it that you don’t see. If the designer of the form is attempting to trick a form filling mechanism, there is no way that 1Password could actually check to see if the fields really are visible.

So if the anti-phishing mechanism can be tricked, then when you visit a malicious web page (including those that have malicious tracking scripts on them) you could have your private information silently and invisibly stolen if automatic auto-fill were in place.

Sweep attacks

The malicious form could be designed to reload itself, spoofing a different site each time. That is, a single malicious injection point could trick your automatically auto-filling password manager into giving up your passwords for many different sites. David Silver referred to these as “sweep attacks”, and that appears to be what these advert trackers are doing.

At this point, I have not fully studied their scripts to know the precise mechanisms they used, but it certainly is some form of sweep attack.

Doing good and doing no harm

Here is where I go off on a bit of a philosophical abstraction. As I’ve said, I don’t believe that a password manager can offer 100% absolute protection against phishing. But suppose there is one attack out of a million in which it fails to protect against phishing. If you use 1Password, you are much safer against the other 999,999 attempts and you are no worse off than you would be without it. Even in that one in a million case, using 1Password doesn’t add to your risk.

But now contrast that with a situation with a password manager that does allow automatic autofill. A password manager that can be subject to a sweep attack enables a kind of attack that wouldn’t be possible without the use of a password manager.


If you are using a password manager that allows for automatic autofill, turn that feature off. If you are using a password manager that doesn’t allow you to turn that feature off, switch password managers. And when you consider making such a switch, please remember that we’ve never allowed automatic autofill at any time in our more than 10 year history. We believe that you have to be in the loop when it comes to giving your secrets to anyone else. That design philosophy helps keep you safe and in control.

It ain’t over till it’s over

I’m sure that there will be more news to come over the next few days or weeks about the extent of these malicious trackers and precisely which password managers were affected. So please follow along in the comments for more information.

Why is this information sensitive? The deeper Equifax problem

As the world now knows, Equifax, the credit rating company and master of our fates, suffered a data breach in May and June 2017, which revealed to criminals the details of 143 million people. (I would have liked to say “143 million customers”, but that is very far from the case. We have no control at all over Equifax and other credit rating companies collecting information about us. We are neither their customers nor their users.)

The revealed data includes:

  • Social Security numbers
  • Dates of birth
  • Addresses
  • Driver’s license numbers (unspecified number of these)
  • Credit card numbers (209,000 of these)

There are many important things to ask about this incident, but what I am focusing on today is why non-secret information has become sensitive. None of those numbers were designed to be used as secrets (including social security numbers and credit card numbers), yet we live in a world in which we have to keep them secret. What is going on here?

Identity crisis

Names only provide a first pass at identifying individuals in some list or database. There are a lot of Jeffrey Goldbergs out there. (For example, I am not the journalist and now editor-in-chief at the Atlantic. But there are lots of others that I also am not.) Also people change their names. Some people change their name when they get married. (My wife, Lívia Markóczy, decided to keep her name because we figure it is easier to spell than “Goldberg”.) Others change their names for other reasons.

We have three “Jeffreys” at AgileBits, but fortunately we have distinct family names. Though sometimes I think that everyone who joins the company should just go by “Jeffrey” to avoid confusion.

Anyway, names alone are not enough to figure out who we are talking about once we get beyond a small group of people. So we use other things. Social security numbers worked well in the US for some time. They didn’t change over your lifetime (except in rare circumstances) and nearly everyone had one. Dates of birth also don’t change. So a combination of a name, a date of birth, and a social security number was a good way to create an identifier for nearly every individual in the US, with the understanding that a name might change.

Sometimes it is not a person that we need to uniquely and reliably identify. Sometimes it is something like a bank account or charge account. Cheques (remember writing those?) have the account number printed on them. They uniquely identify the particular account within a bank, and a routing number (in the US) identifies the bank. The routing number is also printed on each cheque.

Things like social security numbers and driver’s license numbers are designed as “identifiers” of people. They are ways to know which Jeffrey Goldberg is which. Occasionally getting email meant for the journalist is no big problem, but if he gets himself on the no-fly list, I want to be sure that I don’t get caught up in that net. Likewise, I don’t want my doctor or pharmacist mixing me up with some other Jeffrey Goldberg who isn’t allergic to the same stuff that I am. Nor does some other Jeffrey Goldberg want the record of speeding tickets I seem to acquire.

Things like bank or charge account numbers are used to uniquely and reliably identify the particular account. While I wouldn’t mind if my credit card charges were charged against someone else’s account, they would certainly mind, and so would the relevant bank. (I’m going to just start using the word “bank” broadly to include credit card issuers, automobile loan issuers, and the like.)

A username on some system is also an identifier. It identifies to the service which particular user or account is being talked about. I am jpgoldberg on our discussion forums. That username is how the system knows what permissions I have and how to verify my password.

Identifiers are bad secrets

Something that is designed and used as an identifier is hard to keep secret. A service can hash a password, but it needs to know which account is being talked about before it can look up any information. In many database systems, identifiers are used as record locators. These need to be efficiently searchable for lookup.

Identifiers also need to be communicated before secret stuff can happen. Bank account numbers are printed on cheques for a reason. Now, really clever cryptographic protocols – like the one behind Zerocash – can allow for transactions that don’t reveal the account identifiers of the parties, but for almost everything else, account identifiers are not secret.

Identifiers are hard to change. If you depend on the secrecy of some identifier for your security, then you are stuck with a problem when those secrets do get compromised. It is a pain to get a new credit card number, and it is far worse trying to get a new social security number. Getting a new date of birth might also be a teeny tiny problem.

The point here is that, given what identifiers are designed to do, they aren’t designed to be kept secret.

Authenticators

Authentication is the process of proving some identity. And this almost always involves proving that you have access to a secret that only you should have access to. When I use 1Password to fill in my username (jpgoldberg) and password to our discussion forums, I am proving to the system that I have access to the secret (the password) associated with that particular account.

The password is designed to be kept secret. The server running the discussion forum doesn’t need to search to find the password (unlike searching to do a lookup from my username), so it can get away with storing a salted hash of the password. Also, I can change the password without losing all of the stuff that lives under my account. (Changing my username would require more work.) Plus, my username is used to identify me to other people using the system, and so is made very public. My password, on the other hand, is not.
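The identifier/authenticator split shows up directly in how a careful service stores these values: usernames sit in a searchable column, while passwords are reduced to a salted, slow hash. A minimal sketch (the parameters and function names here are illustrative, not any particular service’s):

```python
import hashlib
import hmac
import os

def store_password(password: str):
    """Return what the server keeps: a random salt and a slow, salted
    hash. The plaintext password is never stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the login attempt and compare in
    constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

Because only the salted hash is stored, a password can be changed freely, while the username remains a stable, public record locator.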

What banks did wrong

The mess we are in today is because financial institutions have been using knowledge of identifiers as authentication secrets. The fact that someone can defraud a credit card issuer by knowing my credit card number (an account number) and my name and address (matters of public record) is all because at one point, credit card issuers decided that knowledge of the credit card number (a non-secret account number) was a good way to authenticate.

I have not researched the history in detail, but I believe that this started with credit card numbers when telephone shopping first became a thing (early 1970s, I believe). Prior to then, credit cards were always used when the account holder was physically present and could show the merchant an ID with a signature. The credit card number was used solely as designed up until that point: as a record locator.

The same thing is true of social security numbers. Social security numbers were not secret until banks started to use knowledge of them as authentication proofs when they introduced telephone banking. Before then, there was nothing secret about them.

And on it goes

Because high-value systems use knowledge of identifiers as authentication proofs, we are in deep doo-doo. And it will take a long time to dig ourselves out. But we continue to dig ourselves deeper.

It is fine to be asked for non-secret identifying information to help someone or something figure out who they are talking about. I like it when my doctor asks for my date of birth to make sure that they are looking at and updating the right records. But when they won’t reveal certain information to me unless I give them my date of birth, then we have a problem. That is when they start using knowledge of an identifier as an authentication secret.

Over the past decade or so, various institutions have been told that they can’t hold on to social security numbers, and so can’t use them for identifiers. That is a pity, because those are the best identifiers we have in the US. But what is worse is that knowledge of the new identifiers is being used for authentication.

Right now, Baskin-Robbins knows my date of birth (so they can offer me some free ice-cream on my birthday). In ten years, will I have to keep my birth date a closely guarded secret so that I don’t become a victim of some financial or medical records crime? If we keep on making this mistake – using identifiers as authentication secrets – that is where we are headed.

Incentives matter more than technology

I do not want to dismiss the technological hurdles in fixing this problem, but I believe that there is a bigger (and harder) problem that will need to be fixed first: the incentives are in the wrong place.

When Fraudster Freddy gets a loan from Bank Bertha using the identity of Victim Victor, Bertha is (correctly) responsible for the direct financial loss. The problem is that there are costs beyond the immediate fraudulent loan that are borne by Victor. But Victor has no capacity or opportunity to prevent himself from being a victim. In economics jargon, Victor suffers a negative externality.

Bertha factors in the risk of the direct cost to her of issuing a loan to a fraudster. She looks at that risk when deciding how thoroughly to check that Freddy is who he says he is. Bertha could insist that new customers submit notarized documents, but if she insists on that and her competitors don’t, then she would lose business to those competitors.

But Bertha does not factor in the indirect costs to Victor. She has no dealings with Victor. Victor isn’t a potential customer. So if Victor has costly damage to his credit and reputation that requires a lot of effort to sort out, that is not Bertha’s problem (and it certainly isn’t Freddy’s problem.)

Only when Freddy and Bertha (the parties to the original deal) have to pay the cost of the damage done to Victor (Economics jargon: “internalizing the externalities”) will Bertha have the incentives to improve authentication. I don’t have an answer to how we get there from here, but that is the direction we need to head. In the meantime, if you find yourself a victim (whether you’re a Victor, a Jeffrey, or something else entirely), Kate published a post earlier this week with tips to protect yourself until we (hopefully) do get all of this figured out one day.

Net neutrality: Keeping the Internet safe and accessible for all

Lo, everyone! Back on October 29, 1969, that two-letter greeting was the first message sent over ARPANET, the predecessor to the modern Internet. Today, on July 12, 2017, people from around the globe are coming together for a day of action to fight for net neutrality. The principle of net neutrality states that all Internet traffic should be treated equally, but those who control the transmission of that data have been fighting for the right to place their preferred data in the fast lane and leave data they don’t like in a traffic jam. We here at AgileBits care quite a lot about data, and while we’re glad your sensitive data is safely locked away, we think the data we want to share on the Internet should remain accessible to everyone. Read more

Be your own key master with 1Password

Encryption is great. By magic (well, by math) it converts data from a useful form to complete gibberish that can only be turned back into useful data with a secret number called a key.

I happen to think that the term “key” that we use for encryption and decryption keys is a poor metaphor, as it suggests unlocking a door or a box. Cryptographic keys are more like special magic wands that are essential to the process of transforming data from its useful (decrypted) form to gibberish and back again. Read more

Introducing Travel Mode: Protect your data when crossing borders

We often get inspired to create new features based on feedback from our customers. Earlier this month, our friends at Basecamp made their Employee Handbook public. We were impressed to see they had a whole section about using 1Password, which included instructions for keeping work information off their devices when travelling internationally.

We knew right away that we wanted to make it easier for everyone to follow this great advice. So we hunkered down and built Travel Mode.

Travel Mode is a new feature we’re making available to everyone with a 1Password membership. It protects your 1Password data from unwarranted searches when you travel. When you turn on Travel Mode, every vault will be removed from your devices except for the ones marked “safe for travel.” All it takes is a single click to travel with confidence.

It’s important for me that my personal data be as secure and private as possible. I have data on my devices that’s ultimately a lot more sensitive than my personal data though. As one of the developers here at AgileBits I’m trusted with access to certain keys and services that we simply can’t take any risks with.

How it works

Let’s say I have an upcoming trip to a technology conference in San Jose. I hear the apples are especially delicious over there this time of year. :) Before Travel Mode, I would have had to sign out of all my 1Password accounts on all my devices. If I needed certain passwords with me, I had to create a temporary travel account. It was a lot of work and not worth it for most people.

Now all I have to do is make sure any of the items I need for travel are in a single vault. I then sign in to my account on 1Password.com, mark that vault as “safe for travel,” and turn on Travel Mode in my profile. I unlock 1Password on my devices so the vaults are removed, and I’m now ready for my trip. Off I go from sunny Winnipeg to hopefully-sunnier San Jose, ready to cross the border knowing that my iPhone and my Mac no longer contain the vast majority of my sensitive information.

After I arrive at my destination, I can sign in again and turn off Travel Mode. The vaults immediately show up on my devices, and I’m back in business.

Not just a magic trick

Your vaults aren’t just hidden; they’re completely removed from your devices as long as Travel Mode is on. That includes every item and all your encryption keys. There are no traces left for anyone to find. So even if you’re asked to unlock 1Password by someone at the border, there’s no way for them to tell that Travel Mode is even enabled.

In 1Password Teams, Travel Mode is even cooler. If you’re a team administrator, you have total control over which secrets your employees can travel with. You can turn Travel Mode on and off for your team members, so you can ensure that company information stays safe at all times.

Travel Mode is going to change how you use 1Password. It’s already changed the way we use it. When we gave a sneak peek to our friends at Basecamp, here’s what their founder, David Heinemeier Hansson, had to say:

International travel while maintaining your privacy (and dignity!) has become increasingly tough. We need better tools to help protect ourselves against unwarranted searches and the leakage of business and personal secrets. 1Password is taking a great step in that direction with their new Travel Mode. Bravo.

Travel Mode is available today, included in every 1Password membership. Give it a shot, and let us know how you travel with 1Password.

Learn how to use Travel Mode on our support site.

1Password is #LayerUp-ed with modern authentication

We should remember on this World Password Day that passwords have been around for thousands of years. They can’t be all bad if we’ve kept them around so long, but some things need to change.

This year we are partnering with Intel to talk about layers of security, and in this article I’m going to talk about how 1Password adds invisible layers of security to what might look like an old-fashioned sign-in page with a username and password.

Passwords are not a bad way of proving who you are. But the way that they’re used in practice exposes people to a number of unnecessary risks. It is long past time to move beyond traditional password authentication, and with 1Password accounts, we’re doing so.

Breaking tradition

When we designed the sign-in process for your 1Password account, we wanted something that looks reasonably familiar to people but secures against the risks that a traditional password login does not. Indeed, when you sign in to a 1Password account, although you enter your Master Password, you do not send us any secrets at all.

The buzzword explanation is that our end-to-end encryption uses Secure Remote Password (SRP) as a Password-based Authenticated Key Exchange (PAKE) along with Two-Secret Key Derivation (2SKD) to get all of the security properties we want without having to place too much additional burden on the users.

If you’re interested in knowing what exactly all that means, read on. Otherwise stop here and enjoy a technical discussion of how migratory swallows can transport coconuts:

Halt! Who goes there?

Authenticating (proving who you are) with passwords hasn’t really changed much over the centuries. And I’m going to start my discussion of the security properties it does and doesn’t offer with a very traditional example.

Queen Penelope approaches a castle gate where a guard (we’ll call him Vincent) is on duty.

Many web services use this traditional method, and many are plagued with the same security problems (which I will get to shortly). As a society we need to be doing much better than this, and we are proud to say that 1Password is doing much better than this!

1Password’s major line of security is its end-to-end encryption: your data is encrypted with keys that never leave your devices. So even if someone does get past the castle gate there is not much they can do with what they find inside. But today we are talking about the security of signing in to the 1Password service.

The problems found in the castle gates method

A method of authentication that has been in place for thousands of years may have a lot going for it. It simply shouldn’t be dropped like coconuts dropped by migrating swallows. But it does bear looking into. And when we do look into it, we see lots of problems.

  • Reuse: If Penelope uses the same password for multiple castles, then the discovery of one of those passwords in one place can lead to a breach of all of the castles that she uses that password for.
  • Guessable: The sorts of passwords that Penelope is going to use in these sorts of circumstances are probably guessable with a reasonable amount of effort. This guessing can be made easier if an attacker gets hold of the data that Vincent stores to verify passwords (even if it isn’t the passwords themselves).
  • Replay: If someone overhears Penelope saying her password to Vincent, then they can use what they overheard to pretend to be Penelope at some later time.
  • Revealing password to those in the castle: Even if Penelope and Vincent could avoid being overheard, Penelope is giving away a secret to someone who may or may not be the real Vincent. Even if it is the right Vincent, she has to trust him to not do anything he shouldn’t with her password (like pretending to be her at some other castle).
  • Not mutual: Penelope proves to Vincent that she is who she says she is, but Vincent does not prove to Penelope that she is at the castle that she thinks she is at. Sure, it might be tricky for one castle to pretend to be a different one, but internet services are another matter.
  • Further communication is still unsafe: If Penelope is brought within the castle walls after authentication, all is good. But if (keeping the analogy closer to an Internet service) she remains outside the castle but carries on conversations with people within the castle, their conversation will still be overheard and perhaps tampered with by someone else outside the castle walls.

Building better castles and gates

Traditional authentication, whether at castle gates or on websites, really only has the client or user proving their identity to the server. But we should be asking for much more security in a modern authentication process. Not only should it have the client prove its identity to the server, but

  1. The server should prove its identity to the client (mutual authentication).
  2. An eavesdropper on the conversation should not be able to learn any secrets of either the client or the server.
  3. An eavesdropper should not be able to replay the conversation to get in at some later time.
  4. Client secrets should not be revealed to the server.
  5. The process should set up a secure session beyond just the authentication process.
  6. User secrets should be unguessable.
  7. User secrets should be unique.

I know that a lot of the things on that list overlap, and solving one often involves solving another. But there are some technical reasons for their separation. Also it makes for a more impressive list when they are broken apart this way.

What’s inside the castle

There may be ways for Oscar (Penelope’s opponent) to get into the castle that do not involve providing proofs of identity to Vincent. Perhaps there are other doors. Perhaps it is possible to tunnel in or fly over the walls. Perhaps it is possible to hide inside a large wooden rabbit that is brought into the castle. Perhaps someone already inside and trusted is an enemy spy. Requiring that Vincent check multiple factors will not help defend against any of those.

It is 1Password’s end-to-end encryption that defends against those sorts of attacks. Although end-to-end encryption is probably our most important layer of security, it is not the one that I am talking about today. I bring it up because not only is end-to-end encryption our most important line of defense, but because we need to make sure that nothing in our authentication system could weaken that encryption. We don’t want an attack on the authentication system to give the attacker any information that would be helpful in decrypting Penelope’s data. (Don’t worry if that didn’t make sense at this point; it should by the end.)

With end-to-end encryption the stuff in the castle that is valuable to Penelope is useless to Oscar because only Penelope has the ability to make use of it. Penelope has secrets that never leave her side that can transform her pile of useless stuff kept inside the castle into things that are very valuable. Authentication is about proving who you are and being given access to something. Encryption (and decryption) is about transforming stuff from a useful form into a useless form (and back again).

End-to-end encryption means that even if Oscar gets inside, he will not be able to read the data that he finds. Penelope’s data – even within the castle – can only be decrypted using secrets that only Penelope has and which are never revealed to anyone else.

Once more into the breach

There is one extra thing we need to consider to keep all of this secure. And that is whether an attacker, Oscar, who gets into the castle can use the same information that Vincent has in a way that helps him (Oscar) decrypt Penelope’s data.

When Vincent checks to see whether Penelope has provided the correct password, he needs to check against some data stored some place (perhaps in his head). If computer systems worked like those castles then there might be the same password for everyone and Vincent would know it, too. Fortunately, most web services do much better than that. Each person has their own password and the service should not be storing those passwords unencrypted.

Services do not want to store the passwords that are needed to get into the service. Vincent shouldn’t store that Penelope’s password is xyzzy. Having a piece of paper around with all of that data would be dangerous. So Vincent doesn’t actually have those passwords himself; instead he has password verifiers. (Vincent verifies passwords in this story, and Penelope proves her identity. Oscar is their opponent.)

Making a hash of it (is a good thing)

I have to step back from the castle metaphor at this point, as Vincent needs to do some math that wasn’t around at the time. The verifiers that Vincent stores are typically (though not necessarily) cryptographic hashes of the password. The beauty of a cryptographic hash is that it is quick and easy to compute that the hash of “xyzzy” is “mzgC7LrRFCZ90NGkMeV7C8qVkwo=”, but it is pretty much impossible to compute things in the other direction. So in a typical web login system, when Penelope tells Vincent her password, Vincent computes the hash of what she says and then compares that to what he has stored. This way, if Oscar steals the list of these hashes from the castle, the passwords that people use to authenticate remain safe.
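The check described above can be sketched in a few lines of Python. This is an illustration only: unsalted SHA-1 with Base64 encoding mirrors the example hash in the story, but a real service should use a salted, deliberately slow hash such as bcrypt or PBKDF2.

```python
import base64
import hashlib

def make_verifier(password):
    """Hash the password and Base64-encode it, like the verifier Vincent stores."""
    digest = hashlib.sha1(password.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

def check_password(claimed, stored_verifier):
    """Vincent hashes what Penelope says and compares it to his stored verifier."""
    return make_verifier(claimed) == stored_verifier
```

Note that the stored verifier never contains the password itself; Vincent only ever compares hashes.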

But we still have a problem. Even though Oscar can’t compute “xyzzy” from “mzgC7LrRFCZ90NGkMeV7C8qVkwo=” he can use that hash to verify guesses. Oscar can guess “OnlyUlysses4Me” and see what hash it produces. If it doesn’t produce the hash he is checking against, then he tries another guess. If Oscar has the right sort of equipment and a copy of the verifier/hash stolen from the castle, he can make a large number of guesses very quickly. If Oscar knows that Penelope enjoys adventures, he will eventually guess “xyzzy”, and will see that he gets the right verifier.
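Oscar’s offline guessing attack is just as easy to sketch (again using unsalted SHA-1 purely for illustration; the guess list here is made up):

```python
import base64
import hashlib

def make_verifier(password):
    """Unsalted SHA-1 verifier, matching the illustration above."""
    digest = hashlib.sha1(password.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

def crack(stolen_verifier, guesses):
    """Try each guess until one hashes to the stolen verifier."""
    for guess in guesses:
        if make_verifier(guess) == stolen_verifier:
            return guess
    return None
```

With the right hardware, a real attacker can run a loop like this billions of times per second against a fast hash – which is exactly why slow, salted hashes matter.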

One password, two secrets

Now those of you still following along might wonder why Oscar would bother trying to discover Penelope’s password to get into the castle if he already managed to get into the castle to steal Vincent’s list. There are many reasons why figuring out Penelope’s password would be useful to Oscar, but the one that we are concerned about here is that it might be useful for decrypting Penelope’s data. Even though Penelope’s data is encrypted using a secret that only Penelope knows, we want Penelope to only have to know one password to get to her data (be able to retrieve her stuff from the castle) and decrypt it once she has retrieved it. This one password is her Master Password.

Her password, xyzzy, is being used for two purposes. One is for authentication (getting into the castle) and the other is for her end-to-end encryption to decrypt her data. And this is why – even though 1Password’s security is primarily based on end-to-end encryption – we need our authentication system to not give Oscar anything useful for attacking Penelope’s data. We do not want anything we store to aid Oscar in guessing Penelope’s Master Password.

We want it so that even if Oscar gets into the castle and gets hold of Vincent’s list of verifiers he is no closer to getting at Penelope’s secrets. Although SRP gives us many of the security properties we want, it still leaves Vincent holding a verifier. That verifier – if we didn’t take countermeasures – could be used by Oscar for checking password guesses even though it isn’t a traditional password hash.

Two secrets, one password

We want the verifiers that we store to be uncrackable. Our solution is Two-Secret Key Derivation (2SKD). Penelope has a secret other than her Master Password: her Secret Key. When she first sets up her account, the software running on her machine will create an unguessable Secret Key. This Secret Key gets blended in with Penelope’s Master Password all on her own devices.

When Penelope proves her identity to our servers, she is not just proving that she knows her Master Password, but she is also proving that she (well, her software) has her Secret Key. In fact, she is proving that she has a combination of these. That combination is unguessable and so cannot be used to get at the secrets needed to decrypt her data.
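The blending idea can be sketched like this. To be clear, this is a sketch of the concept, not 1Password’s actual 2SKD construction: the PBKDF2 parameters, the SHA-256 stretch of the Secret Key, and the XOR combining step are all simplifications chosen for illustration.

```python
import hashlib

def derive_account_key(master_password, secret_key, salt):
    """Blend a (guessable) Master Password with an (unguessable) Secret Key."""
    # Slow, salted hash of the low-entropy Master Password.
    pw_key = hashlib.pbkdf2_hmac(
        "sha256", master_password.encode("utf-8"), salt, 100_000
    )
    # The Secret Key is already high-entropy, so a plain hash suffices here.
    sk_key = hashlib.sha256(secret_key.encode("utf-8")).digest()
    # Combine the two: without BOTH secrets, the result is unguessable,
    # so any verifier derived from it cannot be cracked by password guessing.
    return bytes(a ^ b for a, b in zip(pw_key, sk_key))
```

The point of the combination is that a stolen verifier is useless for checking Master Password guesses: every guess would also require the Secret Key, which never leaves Penelope’s devices.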

2SKD means that Vincent only stores uncrackable verifiers. So there is little that Oscar could do if he stole the list of verifiers. SRP means that neither Vincent nor anyone listening in during authentication can learn Penelope’s secrets. And with all of Penelope’s authentication secrets nice and secure, they can also be used for the end-to-end encryption of her data.

A very brief word about SRP

SRP and other similar systems involve proving that you know a secret without revealing that secret. As much as I would love to go more into how that happens, this article has become too long as it is. So let me just say that the client and the server present each other with mathematical puzzles that can only be solved with knowledge of a secret, but solutions to those puzzles do not reveal anything about the secret. And because each puzzle is new each time, someone who records a session cannot replay it later.
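The flavor of that puzzle exchange can be shown with a toy SRP-6a-style key agreement. Everything here is simplified for illustration – the group parameters are not a vetted SRP group, and there is no key confirmation step – so treat it as a sketch of the math, never as something to deploy.

```python
import hashlib
import secrets

# Toy group parameters (NOT a standard SRP group -- illustration only).
N = 2**255 - 19
g = 2

def H(*parts):
    """Hash byte strings down to an integer."""
    return int.from_bytes(hashlib.sha256(b"|".join(parts)).digest(), "big")

def to_bytes(n):
    return n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")

# Registration: the server stores only a verifier v, never the password.
salt = secrets.token_bytes(16)
password = b"xyzzy"
x = H(salt, password)          # client-side secret derived from the password
v = pow(g, x, N)               # the verifier the server keeps

# One sign-in: each side sends only a public puzzle value (A and B).
k = H(to_bytes(N), to_bytes(g))
a = secrets.randbelow(N)       # client's ephemeral secret
b = secrets.randbelow(N)       # server's ephemeral secret
A = pow(g, a, N)
B = (k * v + pow(g, b, N)) % N
u = H(to_bytes(A), to_bytes(B))

# The client derives the session key from its password; the server from
# its verifier. Algebraically both arrive at g^(b*(a + u*x)) mod N.
client_S = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
server_S = pow((A * pow(v, u, N)) % N, b, N)
assert client_S == server_S    # same key, yet no secret crossed the wire
```

An eavesdropper sees only A and B, which reveal nothing about the password, and because a and b are fresh each time, a recorded session cannot be replayed later.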

I’ve tried to give an overview of how the interaction of three things in 1Password’s design – end-to-end encryption, 2SKD, and SRP – protects your personal information from whole categories of attacks. These design elements are mostly invisible to people using 1Password. You do not need to know about them to use 1Password well, but we like to let you know they are there. This is all part of how we strive to make dealing with passwords both easier and more secure. Although not all of the techniques described here are appropriate for all sites and services, we hope that the ideas discussed here can help improve authentication not just for 1Password but for other systems as well.

More detail about all of this and more is in our 1Password Security white paper.

And so a Happy World Password Day to one and all.

More than just a penny for your thoughts — $100,000 top bounty

We believe that we’ve designed and built an extremely secure password management system. We wouldn’t be offering it to people otherwise.  But we know that we – like everyone else – may have blind spots. That is why we very much encourage outside researchers to hunt for security bugs. Today we are upping that encouragement by raising the top reward in our bug bounty program.

We have always encouraged security experts to investigate 1Password, and in 2015 we added monetary rewards through Bugcrowd. This has been a terrific learning experience both for us and for the researchers. We’ve learned of a few bugs, and they’ve learned that 1Password is not built like the web services they are used to attacking. [Advice to researchers: Read the brief carefully and follow the instructions for where we give you some internal documentation and various hints.]

Since the start of our bounty program, Bugcrowd researchers have found 17 bugs, mostly minor issues during our beta and testing period. But there have been a few higher payout rewards that pushed up the average to $400 per bug. So our average payout should cover a researcher’s Burp Suite Pro license for a year.

So far none of the bugs represented a threat to the secrecy of user data, but even small bugs must be found and squashed. Indeed, attacks on the most secure systems nowadays tend to involve chaining together a series of seemingly harmless bugs.

Capture the top flag to get $100,000


Our 1Password bug bounty program offers tiered rewards for bug identification, starting at $100. Our top prize goes to anyone who can obtain and decrypt some bad poetry (in particular, a horrible haiku) stored in a 1Password vault that researchers should not have access to. We are raising the reward for that from $25,000 to $100,000. (All rewards are listed in US dollars, as those are easier to transfer than hundreds or thousands of Canadian dollars worth of maple syrup.) This, it turns out, makes it the highest bounty available on Bugcrowd.

We are raising this top bounty because we want people to really go for it. It will take hard work to even get close, but that work can pay off even without reaching the very top prize: in addition to the top challenge, there are other challenges along the way. But nobody is going to get close unless they make a careful study of our design.

Go for it

Here’s how to sign up:

  • Go to bugcrowd.com and set up an account.
  • Read the documentation on the 1Password Bugcrowd profile.
  • The AgileBits Bugcrowd brief instructs researchers where to find additional documentation on APIs, hints about the location of some of the flags, and other resources for taking on this challenge. Be sure to study that material.
  • Go hunting!

If you have any questions or comments – we’d love to hear from you. Feel free to respond on this page, or ping us an email at security@agilebits.com.