More than just a penny for your thoughts — $100,000 top bounty

We believe that we’ve designed and built an extremely secure password management system. We wouldn’t be offering it to people otherwise. But we know that we – like everyone else – may have blind spots. That is why we very much encourage outside researchers to hunt for security bugs. Today we are upping that encouragement by raising the top reward in our bug bounty program.

We have always encouraged security experts to investigate 1Password, and in 2015 we added monetary rewards through Bugcrowd. This has been a terrific learning experience for both us and the researchers. We’ve learned of a few bugs, and they’ve learned that 1Password is not built like the web services they are used to attacking. [Advice to researchers: read the brief carefully and follow the instructions; they point you to some internal documentation and various hints.]

Since we started our bounty program, Bugcrowd researchers have found 17 bugs, mostly minor issues during our beta and testing period. But there have been a few higher payouts that pushed the average up to $400 per bug. So our average payout should cover a researcher’s Burp Suite Pro license for a year.

So far none of the bugs represented a threat to the secrecy of user data, but even small bugs must be found and squashed. Indeed, attacks on the most secure systems nowadays tend to involve chaining together a series of seemingly harmless bugs.

Capture the top flag to get $100,000


Our 1Password bug bounty program offers tiered rewards for bug identification, starting at $100. Our top prize goes to anyone who can obtain and decrypt some bad poetry (in particular, a horrible haiku) stored in a 1Password vault that researchers should not have access to. We are raising the reward for that from $25,000 to $100,000. (All rewards are listed in US dollars, as those are easier to transfer than hundreds or thousands of Canadian dollars’ worth of maple syrup.) This, it turns out, makes it the highest bounty available on Bugcrowd.

We are raising this top bounty because we want people to really go for it. It will take hard work to even get close, but that work can pay off even without reaching the very top prize: in addition to the top challenge, there are other challenges along the way. But nobody is going to get close without making a careful study of our design.

Go for it

Here’s how to sign up:

  • Go to bugcrowd.com and set up an account.
  • Read the documentation on the 1Password Bugcrowd profile.
  • The AgileBits Bugcrowd brief instructs researchers where to find additional documentation on APIs, hints about the location of some of the flags, and other resources for taking on this challenge. Be sure to study that material.
  • Go hunting!

If you have any questions or comments, we’d love to hear from you. Feel free to respond on this page, or send us an email at security@agilebits.com.

Three layers of encryption keep you safe when SSL/TLS fails

No 1Password data is put at risk by the recently reported Cloudflare bug. 1Password does not depend on the secrecy of SSL/TLS for your security, so your 1Password data remains safe and solid.

In the coming days we will provide a more detailed description of the Cloudflare security bug and how it (doesn’t) affect 1Password. At the moment, we want to assure and remind everyone that we designed 1Password with the expectation that SSL/TLS can fail. Indeed, it is for incidents exactly like this that we deliberately chose this design.

No secrets are transmitted between 1Password clients and 1Password.com when you sign in and use the service. Our sign-in uses SRP (Secure Remote Password), which means that the server and client prove their identities to each other without transmitting any secrets. This means that users of 1Password do not need to change their Master Passwords.

Your actual data is encrypted with three layers (including SSL/TLS), and the other two layers remain secure even if the secrecy of an SSL/TLS channel is compromised.

The three layers are:

  1. SSL/TLS. This is what puts the “S” in HTTPS, and it is the layer through which data may have been exposed by the Cloudflare bug during the vulnerable period.
  2. Our own transport-layer authenticated encryption, using a session key that is generated via SRP during sign-in. The secret session keys are never transmitted.
  3. The core encryption of your data. Except when you are viewing your data on your system, it is encrypted with keys that are derived from your Master Password and your secret Account Key. This is the most important layer, as it would protect you even if our servers were to be breached. (Our servers were not breached.)
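As a rough sketch of how that layering nests (in Python, using the third-party pyca/cryptography package; the keys and construction here are illustrative stand-ins, not our actual protocol):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

vault_key   = AESGCM.generate_key(256)  # layer 3: from Master Password + Account Key
session_key = AESGCM.generate_key(256)  # layer 2: agreed via SRP, never transmitted
tls_key     = AESGCM.generate_key(256)  # layer 1: what SSL/TLS provides

item = b"a horrible haiku"
on_the_wire = seal(tls_key, seal(session_key, seal(vault_key, item)))
# Stripping the outermost (TLS) layer still leaves two layers of
# authenticated ciphertext between an eavesdropper and the data.
```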

Using Intel’s SGX to keep secrets even safer

When you unlock 1Password there are lots of secrets it needs to manage. There are the secrets that you see and manage such as your passwords and secure notes and all of the other things you trust to 1Password. But there are lots of secrets that 1Password has to juggle that you never see. These include the various encryption keys that 1Password uses to encrypt your data. These are 77-digit (256-bit) completely random numbers.

You might reasonably think that your data is encrypted directly by your Master Password (and your secret Account Key), but there are a number of technical reasons why that wouldn’t be a good idea. Instead, your Master Password is used to derive a key encryption key which is used to encrypt a master key. The details differ for our different data formats, but here is a little ditty from our description of the OPVault data format to be sung to the tune of Dry Bones.

Each item key’s encrypted with the master key
And the master key’s encrypted with the derived key
And the derived key comes from the MP
Oh hear the word of the XOR
Them keys, them keys, them random keys (3x)
Oh hear the word of the XOR

And that is a simplification! But it is the appropriate simplification for what I want to talk about today: some of our intrepid 1Password for Windows beta testers can now use a version of 1Password 6 for Windows that adds extra protection to the “master key” described in that song. We have been working with Intel over the past few months to bring the protection of Intel’s Software Guard Extensions (SGX) to 1Password.
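To make the ditty concrete, here is a minimal sketch of that key hierarchy in Python (illustrative only; the real OPVault construction differs in its primitives, parameters, and details):

```python
import hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap(wrapping_key: bytes, key: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(wrapping_key).encrypt(nonce, key, None)

# "the derived key comes from the MP"
salt = os.urandom(16)
derived_key = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple",
                                  salt, 100_000)

# "the master key's encrypted with the derived key"
master_key = os.urandom(32)  # one of those 77-digit random numbers
encrypted_master_key = wrap(derived_key, master_key)

# "each item key's encrypted with the master key"
item_key = os.urandom(32)
encrypted_item_key = wrap(master_key, item_key)
```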

Soon (some time this month) 1Password for Windows customers running on systems that support Intel’s SGX will have another layer of protection around some of their secrets.

SGX support in 1Password isn’t ready for everybody just yet, as there are a number of system requirements, but we are very happy to talk about what we have done so far and where we are headed. I would also like to say that we would not be where we are today without the support of many people at Intel. It has been great working with them, and I very much look forward to continuing this collaboration.

What does Intel’s SGX do?

Intel, as most of you know, makes the chips that power most of the desktop and laptop computers we all use. Their most recent CPUs include the ability for software running on Windows and Linux to create and use secure enclaves that are safe from attacks coming from the operating system itself. It is a security layer in the chip that cryptographically protects regions of operating system memory.

SGX does a lot of other things, too; but the feature I’m focusing on now is the privacy it offers for regions of system memory and computation.

Ordinary memory protection

A program running on a computer needs to use the system’s memory, both for the actual program and for the data that the program is working on. It is a Bad Thing™ if one program can mess with another program’s memory, and it is a security problem if one program can read the memory of another program. We don’t want some other program running on your computer to peer at what is in 1Password’s memory when 1Password is unlocked. After all, those are your secrets.

It is the operating system’s (OS’s) job to make sure that one process can’t access the memory of another. Back in the old days (when I had to walk two miles through the snow to school, uphill, both ways) some operating systems did not do a good job of enforcing memory protection. Programs could easily cause other programs or the whole system to crash, and malware was very easy to create. Modern operating systems are much better about this. They do a good job of making sure that only the authorized process can read and manipulate certain things in memory. But if the operating system itself gets compromised, or if some other mechanism allows for the reading of all memory, then secrets in one program’s part of memory may still be readable by outsiders.

Extraordinary memory protection

One way to protect a region of memory from the operating system itself is to encrypt that region’s contents using a key that even the operating system can’t get to. That is a tricky thing to do as there are few places to keep the key that encrypts this memory region if we really want to keep it out of the hands of the operating system.

So what we are looking for is the ability to encrypt and decrypt regions of memory quickly, but using a key that the operating system can’t get to. Where should that key live? We can’t just keep it in the innards of a program that the operating system is running, as the operating system must be able to see those innards to run the program. We can’t keep the key in the encrypted memory region itself, because that is like locking your keys in your car: nobody, not even the rightful owner, could make use of what is in there. So we need some safe place to create and keep the keys for these encrypted regions of memory.

Intel’s solution is to create and keep those keys in the hardware of the CPU. A region of memory encrypted with such a key is called an enclave. The SGX development and runtime tools for Windows allow us to build 1Password so that when we create certain keys and call certain cryptographic operations, those keys are stored and those operations performed within an SGX enclave.

An enclave of one’s own

When 1Password uses certain tools provided by Intel, the SGX module in the hardware will create an enclave just for the 1Password process. It does a lot of work for us behind the scenes. It requests memory from the operating system, but the hardware on Intel’s chip will be encrypting and validating all of the data in that region of memory.

When 1Password needs to perform an operation that relies on functions or data in the enclave, we make the request to Intel’s crypto provider, which talks directly to the SGX portion of the chip, which then performs the operation inside the encrypted SGX enclave.

Not even 1Password has full access to its enclave; instead, 1Password has the ability to ask the enclave to perform only those tasks that it was programmed to do. 1Password can say, “Hey enclave, here is some data I would like you to decrypt with that key you have stored,” or “Hold onto this key, I may ask you to do things with it later.”
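To make that narrow interface concrete, here is a purely illustrative sketch in Python (real enclave code is native and built with Intel’s SGX SDK; the class and method names here are invented for illustration). The point is that the enclave exposes operations, never the keys themselves:

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class EnclaveSketch:
    """Stand-in for an SGX enclave: it holds keys and performs operations,
    but has no operation that returns a stored key to the caller."""

    def __init__(self):
        self._keys = {}  # in real SGX, this lives in encrypted enclave memory

    def store_key(self, name: str, key: bytes) -> None:
        # "Hold onto this key, I may ask you to do things with it later."
        self._keys[name] = key

    def decrypt(self, name: str, nonce: bytes, ciphertext: bytes) -> bytes:
        # "Here is some data I would like you to decrypt with that key."
        return AESGCM(self._keys[name]).decrypt(nonce, ciphertext, None)
```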

What’s in our enclave? Them keys, of course!

When you enter your Master Password in 1Password for Windows, 1Password processes that password with PBKDF2 to derive the master key to your primary profile in the local data store. (Your local data store and the profiles within it are well hidden from the user, but this is where the keys to other things are stored. What is important here is that your master key is a really important key.)

When you do this on a Windows system that supports SGX, the same thing happens, except that the computation of the master key is done within the enclave. The master key that is derived through that process is also retained within the enclave. When 1Password needs to decrypt something with that key, it can just ask the enclave to perform that decryption. The key never needs to leave the enclave.

Answers to anticipated questions

What does (and doesn’t) this protect us from?

I must start out by saying what I have often said in the past. It is impossible for 1Password (or any program) to protect you if the system you are running it on is compromised. You need to keep your devices free of malware. But using SGX makes a certain kind of local attack harder for an attacker, particularly as we expand our use of it.

The most notable attacks that SGX can start to help defend against are attacks that exploit Direct Memory Access (DMA). Computers with certain sorts of external ports can sometimes be tricked into allowing a peripheral device to read large portions of system memory.

As we expand and fine tune our use of SGX we will be in a better position to be more precise about what attacks it does and doesn’t defend against, but the ability to make use of these enclaves has so much potential that we are delighted to have made our first steps in using the protections that SGX can offer.

What will be in our enclave in the future?

As we progress with this, we will place more keys, and more operations involving those keys, into the SGX secure enclave. What you see today is just the beginning. When the master key is used to decrypt some other key, that other key should live only within the enclave. Likewise, the secret part of your personal key set should also have a life within the enclave only. I can’t promise when these additions will come. We still need to get the right cryptographic operations functioning within the enclave and reorganize a lot of code to make all of that Good Stuff™ happen, but we are very happy to have taken the first steps with the master key.

We do not like promising features until they are delivered. So please don’t take this as a promise. It is, however, a plan.

Sealed enclaves?

Among the features of SGX that I have not mentioned so far is the ability to seal an enclave. This would allow the enclave to not just keep secrets safe while the system is running, but to allow it to persist from session to session. Our hope is that we can pre-compute secrets and keep them in a sealed enclave. This should (if all goes to plan) allow 1Password to start up much more quickly as most of the keys that it needs to compute when you first unlock it can already be in an enclave ready to go.

A sealed enclave would also be an ideal place to store your secret 1Password.com Account Key, as a way of protecting that from someone who gains access to your computer.

Is security platform-specific?

1Password can only make use of SGX on Windows PCs running Intel Skylake CPUs that have been configured to make use of SGX. Thus SGX support in 1Password is not going to be available to every 1Password user. So it is natural to ask whether 1Password’s security depends on the platform you use.

Well, there is the trivial answer of “yes”. If you use 1Password on a device that hasn’t been updated and is filled with dubious software downloaded from who knows where, then using 1Password will not be as secure as it is on a device that is better maintained. That goes without saying, but that never stops me from saying it. Really, the easiest and most important thing you can do for your security is to keep your systems and software up to date.

The nontrivial answer is that 1Password’s security model remains the same across all of the platforms on which we offer it. But it would be foolish not to take advantage of a security feature available on one platform merely because such features aren’t available on others. So we are happy to begin offering this additional layer of security to those of our customers who have computers that can make use of it.

Upward and downward!

I’d like to conclude by saying how much fun it has been breaking through (or going around) layers. People like me have been trained to think of software applications and hardware as separated by the operating system. There are very good reasons for that separation — indeed, that separation does a great deal for application security — but now we see that some creative, thoughtful, and well-managed exceptions to that separation can have security benefits of their own. We are proud to be a part of this.


Our Security, Our Rights

Every day it feels like our rights to privacy and security are under attack, and indeed, if you’re keeping up with the news, this is a lot more than just a feeling.

Governments and law enforcement agencies around the world are pushing hard for new powers to keep tabs on their citizens. They argue they require the ability to track your activities and access your private information in order to protect you. And they’re willing to weaken encryption for everyone to do so.

We’ve already seen this happen in the UK with their newly passed laws that grant the government unprecedented surveillance powers, and as James Vincent so eloquently states, the new laws establish a dangerous new norm where surveillance is seen as the baseline for a peaceful society.

Laws like these in the UK are likely to spread to other countries if citizens don’t take a stand. Indeed these laws could end up appearing tame by future standards if we’re not vigilant.

As tempting as it is to give the government more powers to nab the bad people before crimes have even been committed, history has proven time and again that these broader powers are most often used against law-abiding citizens rather than criminals themselves.

It’s possible laws like these will find their way into Canada as well, so I’m asking for your help to send a clear message to our ministers before the ball starts rolling in that direction.

Since September Public Safety Canada has been holding a Consultation on National Security to prompt discussion and debate on future policy changes. Feedback is accepted from all Canadians as well as international readers, so everyone is welcome to contribute.

The set of questions and discussion points is quite broad but the one that’s most important to 1Password users is Investigative Capabilities in a Digital World, particularly this question:

How can law enforcement and national security agencies reduce the effectiveness of encryption for individuals and organizations involved in crime or threats to the security of Canada, yet not limit the beneficial uses of encryption by those not involved in illegal activities?

Or, said another way: how can the government institute a back door into encryption software that only they can exploit? It sounds simple, but in fact it’s simply not possible. As we discussed previously on this blog, back doors are bad for security architecture, and “When back doors go bad: mind your Ps and Qs” covers an example of a back door that went awry, along with the math that made it possible.

Please complete the survey and let the Canadian government know you’re not willing to weaken your security or give up your privacy. The opportunity to provide feedback ends on Thursday, December 15th.

I know it’s tempting to give up some freedoms to let someone else protect you, but whenever I feel that way I remind myself of what Benjamin Franklin said:

Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.

Please forgive a Canadian for quoting one of America’s founding fathers, but Ben summed things up so well that I couldn’t resist. 🙂

Thanks for caring about privacy and security as much as we do ❤️


Send in the crowds (to hunt for bugs)

We unequivocally encourage security researchers to poke around 1Password. It is an extremely important part of the process that helps us deliver and maintain a more secure product for everyone. Finding and reporting potential security vulnerabilities is what we should all expect from bug hunters around the world; in turn, the hunters (and you) should expect us to address those vulnerabilities promptly.

We have always welcomed security reports that arrive at security@agilebits.com, and over most of the past year we offered a more formal, invitation-only bug bounty program through Bugcrowd. We are pleased to now take that program public: https://bugcrowd.com/agilebits.


Before I get into what the program offers, I’d like to remind you that there is always room to improve the security of any complicated system, 1Password included. As clever as we may think we are, there will be security issues that we miss and different perspectives help reveal them. Software updates that address security issues are part of a healthy product. This, by the way, is why it is important to always keep your systems and software up to date. Even in the complete (and unlikely) absence of software bugs, threats change over time, and defenses should try to stay ahead of the game.

Some words about Bounty

A bug bounty program offers payouts for different sorts of bugs. The first bug bounty that I recall seeing was Donald Knuth’s for the TeX typesetting system, though I have since learned that he does this for most of his books and programs. It started out at $2.56 (256 US cents) for the first year, and doubled each year after that, reaching a final limit of $327.68.


A bounty check from Donald Knuth made out to Richard Kinch

Of course given Donald Knuth’s well-deserved fame and reputation, few people cashed the checks they received. Instead, they framed them.

Anyway, enough about me revealing my age. Let’s talk about today’s bug bounty program. There is a community of people who earn a portion of their income from bounties. (Whether or not it is enough for them to sail off to Tahiti or Pitcairn is not something I know.) Over the years they have developed skills and tools and scripts for examining systems. We want them to apply those skills and tools to testing the security of 1Password. Opening up this bug bounty program brings those people and their skills into the process of making 1Password more secure.

Our bounty

Unlike the example of Donald Knuth’s bug bounty, we are only offering payouts for security issues. Of course all bug reports are welcome; we just aren’t promising bounties for them. And because we are promising to pay for bugs, we’ve had to establish a bunch of rules about what counts. These rules help us draw the attention of researchers to the 1Password.com service, and they help us exclude payouts for things that are already known and documented. We don’t want those rules to discourage anyone from bug hunting; they are there to focus attention on what should be most fruitful for everyone.


Your homework

We think that finding bugs in 1Password will be challenging — 1Password.com is not your typical web service. Our authentication system, for example, is highly unusual and specifically designed so we are never in a position to learn any of our customers’ secrets. Because we use end-to-end encryption, getting hold of user secrets may require breaking not just authentication but also cryptography. Of course, we’re inviting researchers to try out attacks that we haven’t considered to prove us wrong. I expect that successful bug hunters will need to do their homework, all the same.

Now, all that bragging about how challenging I think it’ll be to find serious issues with 1Password isn’t an attempt to stop people from trying — get out there and try! You can get a bounty for it, and a thank-you as well. We’re excited to hear a resounding “challenge accepted!” from the research community.

How we help researchers

If there are security bugs, we want to know about them so we can fix them. (I know I keep repeating that point, but not everyone reading this is familiar with why we might invite people to look for security bugs.) We want to help researchers find bugs, because they’re helping us, and everyone who uses 1Password.

To help researchers understand and navigate 1Password (and reduce the amount of time they may need to reverse engineer protocols) we have set up a special 1Password Team that contains a bunch of goodies: internal documentation on our APIs, some specific challenges, and UUIDs and locations of items involved in some of the challenges. So researchers, please come and leave your mark on our Graffiti Wall. (No, not in this web page or the image below, the wall inside the aforementioned team account.)

Secure Note: "The Researchers vault grants read-only access to researchers. If you figure out how to get around read-only access, please put your name in here ..."

With a natural degree of trepidation, I look forward to what might appear there.

The kindness of strangers

A bug bounty program brings in a new group of researchers. And that’s why we’re launching it. We encourage independent research as well. We’re just as open to reports of security issues outside of the bug bounty program as we have always been.

So without further ado, let’s send in the crowds!


How 1Password calculates password strength

Password strength is a big deal. It is in fact one of the biggest deals. That’s why the Security Audit feature in 1Password pinpoints your weak passwords, so that you can go through and change them at your earliest convenience. But how does the strength meter actually calculate the strength of your password? What makes a password strong or weak? A recent conversation with a user inspired me to write down my thoughts on the subject. If you are going to trust 1Password to generate strong passwords for you, you should know how the strength meter works.

About Those Meters…

For a password strength meter to actually be accurate, it needs to know the system that was used to generate the password. When you generate a password using 1Password, we know that this newly generated password has been generated in a truly random fashion and can accurately calculate the password’s strength. However, when 1Password is evaluating the strength of a password that you have typed in manually, including a password which was generated in a truly random fashion on another device, the strength meter cannot know whether it is looking at a password that was created through a truly random process or created by a human.

Password strength: perfectly good

If our password strength meter sees something like “my dog has a bunch of fleas” or something like “gnat vicuna craving inclose”, it can’t tell that the first was probably made up by a human and that the second may have been generated by something smarter, like our password generator.

Password strength: not so good

Because it doesn’t know how the password was generated, it errs on the side of caution. The strength meter will mark “gnat vicuna craving inclose” (a perfectly good password) the same as it will mark “my dog has a bunch of fleas” (not a good password at all). Both have the same number of characters (27) and both contain only lowercase letters and spaces. It’s up to you to know where the password originated. Did it come from our random password generator, or is it something a person made up?

Randomness and Selection Bias

When we speak of “randomness”, we are referring to mechanisms which have been tested and determined to be truly random and not dependent on events which may be repeatable or subject to outside observation. The toss of a fair coin or die is a source of “random input”. The radioactive decay of a substance can be used as “random input”. Our own limited vocabularies and choices of words are not “random input”.

When creating unique, strong, random passwords, what is required is a Cryptographically Secure Pseudorandom Number Generator (CSPRNG) to ensure that no one value or sequence of values will be preferred over all other values. The values from the CSPRNG may then be used to select from some alphabet or word list to create unique, strong, random passwords having the appropriate construction and length.

Selection Bias refers to preferences for specific values over others, whether caused by an unfair coin, a loaded die, or a random number generator that does not produce a uniform and unbiased set of values. Inappropriate math performed on valid CSPRNG-produced numbers may also lead to some values being favored over others. A common error is the use of modulo (remaindering) arithmetic, which results in smaller values being used preferentially over larger values — when using modulo-10000 on an unsigned 16-bit CSPRNG-generated number, each value between 0 and 5535 can be produced in seven ways, while each value between 5536 and 9999 can be produced in only six.
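Here is that bias, and the standard fix (rejection sampling), in Python. Python’s own `secrets.randbelow` already does the unbiased version for you:

```python
import secrets

def biased_draw(upper=10_000):
    # WRONG: folds 65,536 equally likely raw values onto `upper` buckets
    # unevenly; 0..5535 each have seven preimages, 5536..9999 only six.
    raw = int.from_bytes(secrets.token_bytes(2), "big")  # 0..65535
    return raw % upper

def unbiased_draw(upper=10_000):
    # Rejection sampling: throw away raw values beyond the largest
    # multiple of `upper`, so every bucket is equally likely.
    limit = (65536 // upper) * upper  # 60,000 when upper is 10,000
    while True:
        raw = int.from_bytes(secrets.token_bytes(2), "big")
        if raw < limit:
            return raw % upper
```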

Password strength: happy synonyms

For human-generated passwords, common causes of selection bias include the use of a small and limited vocabulary (list all of the synonyms for “happy” you don’t use on a regular basis) and reuse of words (“cool”, “okay”) and avoidance of others (“groovy”, “hip”).

Pre-generated word lists avoid this type of selection bias by randomly selecting words which are common enough that the user should be able to spell the word from memory without being biased by personal preferences or regional differences in word choices.

Password Construction and Strength

The format of a password — the rules which are used to select characters or words — influences the strength of a password, but does not limit its possible strength, except to the extent that length limitations may be imposed on the result.

As an extreme example, consider a password that consists only of the letters “H” and “T”, and that you generate by repeatedly flipping a fair coin. If you make this password long enough it can have any strength desired — you only need to keep flipping a fair coin. But “long enough” in this case is actually unreasonably long. If you want a password that is as strong as a 10-character, mixed-case letter and digit password generated by our Strong Password Generator, your “H” and “T” password would have to be 60 characters long!
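Here is the arithmetic behind that claim: each fair coin flip contributes exactly one bit, while each character drawn uniformly from the 62 mixed-case-and-digit symbols contributes about 5.95 bits.

```python
import math

bits_per_flip = 1.0                       # "H" or "T" from a fair coin
bits_per_char = math.log2(62)             # A-Z, a-z, 0-9, chosen uniformly

target = 10 * bits_per_char               # ~59.5 bits for the 10-char password
print(math.ceil(target / bits_per_flip))  # 60 coin flips to match it
```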

Memorizable Passwords

Shorter passwords from truly random sources can be stronger than longer passwords from biased sources even if they draw from the same character sets. For example, an 8-character, mixed-case letter and digit password produced by our generator is going to be a much better password than the longer (10-character) “Abcde12345” password that a human might come up with. There is reason to believe that the more “strength requirements” (use a digit, use mixed case, add a special character, etc.) we impose on people, the worse the passwords that they create may get. Part of this has to do with alphabet reduction: users may choose to limit mixed alphanumeric passwords to more alpha and less numeric, or more lowercase and less uppercase.

There is reason to believe that the more “strength requirements” we impose on people, the worse the passwords that they create may get.

For example, the rule “at least 8 characters, 1 uppercase letter and 1 digit” will produce approximately 80 billion possible passwords (6 lowercase, 1 uppercase, 1 digit: (26 * 26 * 26 * 26 * 26 * 26) * 26 * 10, or about 36 bits) if the uppercase letter appears first, followed immediately by a digit. But again, that assumes that the password was created by something that had a good random number generator and knew how to use it, and that the password isn’t simply a capitalized 8-character word with a single vowel replaced by a digit, such as “B3musing” (possibly as few as 14 bits). If the uppercase character and digit are allowed to be in any of the 8 possible positions, that increases the number to approximately 4,500 billion possible passwords (or about 42 bits).

This is a classic example of alphabet reduction, where the complete rule should have been expressed as “at least 8 characters, 1 uppercase, 1 lowercase, 1 digit, and the remaining 5 chosen completely at random from the set of uppercase and lowercase letters, and digits”. When this revised rule is used, and a CSPRNG is used to select the characters, the number of possible passwords increases to ((62 * 62 * 62 * 62 * 62) * 26 * 26 * 10 * (8 * 7 * 6)), a total of about 2 million billion possible passwords (or about 51 bits). Each additional alphanumeric character, chosen completely at random by a CSPRNG, adds about 5.9 bits of additional strength.
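For readers who want to check the arithmetic, here is the same counting done in Python:

```python
import math

# uppercase first, digit second, six random lowercase letters:
fixed_positions = 26**6 * 26 * 10                  # ~80 billion
# uppercase and digit allowed in any of the 8 positions:
any_positions = 26**6 * 26 * 10 * (8 * 7)          # ~4,500 billion
# 1 upper, 1 lower, 1 digit placed among 8 slots, rest from all 62:
revised_rule = 62**5 * 26 * 26 * 10 * (8 * 7 * 6)  # ~2 million billion

for n in (fixed_positions, any_positions, revised_rule):
    print(f"{n:.3g} passwords, about {math.log2(n):.0f} bits")

print(math.log2(62))  # ~5.9 bits per extra random alphanumeric character
```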

Truly random length-limited passwords are hard for human beings to generate and memorize because people tend to choose less randomness in favor of greater memorizability. Using 1Password to generate and store passwords ensures that strong, unique, random passwords can be used without worrying about forgetting them.

Memorizable Passphrases

Choosing multiple words from a suitably large dictionary of words may result in stronger passwords even if all of the words appear in dictionaries, are spelled with lowercase letters, and no punctuation is used. Assuming a dictionary of 20,000 common words (about 14.3 bits per word), chosen entirely at random, all of which are lowercase, the number of possible 4-word passwords increases to 160 million billion (about 57 bits).
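And the passphrase arithmetic:

```python
import math

dictionary = 20_000                # common, easily spelled words
print(math.log2(dictionary))       # ~14.3 bits per word
print(math.log2(dictionary ** 4))  # ~57 bits for four random words
```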

Studies of our ability to easily remember information have shown that we have limits to our ability to memorize seemingly random information, unless we have a useful mnemonic device or the information is grouped in a particular manner. This is why telephone numbers and postal codes tend to be grouped as they are, and why mnemonic devices are popular, such as “My Very Educated Mother Just Served Us Nine Pizzas”.


XKCD comic 936 is a perfect example of how easy it may be to memorize the random four-word password “correct horse battery staple”. As our Chief Defender Against the Dark Arts Jeffrey Goldberg will tell you, you may even add your own rules to that password to make it easier to memorize — “Correct! Horse: Battery staple.” Now you have a nice story to help you memorize a strong, random, unique password.

What It Means to You

Our strength meter (along with every other strength meter ever designed) has to guess how the password it is evaluating was created, unless you are actively generating the password in 1Password at that very moment. This means that you may see a big mismatch between “actual” and reported strength for our generated passwords.

The good news is that our password generator does a really good job of generating truly random passwords, so when you generate a secure, random, and unique password with 1Password, you know that it was generated just for you, right there on your device and it is as strong as can be for the length and rules you requested. So, let’s hear it for “paddle shrill sonorant palazzi ravioli” and “8dUaYolTJu82DDG9” — so happy to meet you, secure, random, and unique passwords that you are!

Additional Reading

For a more comprehensive discussion of generated password strength, please read the Geek Edition of our guide to creating better master passwords.

You’ll also find an article in our knowledge base that discusses password strength meters, chicken entrails, and assorted feats of strength.


More Watchtower, still no watching

There are some great new features in the 1Password for iOS 6.2 update that hit the App Store last week. One of them is that we’ve added Watchtower (a feature that has been available on Mac and Windows for some time now) to 1Password for iOS.

Watchtower warns you if a site or service has been compromised in a way that makes it a good idea for you to change your password for that site. Watchtower in 1Password looks at the most recent time a password change was recommended for a site and compares it with the time your password for that site was last modified. If, like Molly (one of my dogs), you haven’t updated your Adobe password since the 2013 breach, you might see something like this:

Watchtower warning in 1Password on iPhone

Molly hasn’t changed her Adobe password since the breach a couple of years back

Preserving your privacy

I want to talk about a far less visible feature of Watchtower: We’ve added Watchtower support in a way that still preserves your privacy. We don’t want to know what sites and services you have in your 1Password vaults, so when 1Password checks to see if one of your Logins is listed in Watchtower, it does not make a query to our servers asking about it.

Enable Watchtower in iOS

Turning on Watchtower in iOS. “Your website information is never transmitted to the 1Password Watchtower service.”

Querying Watchtower without querying you

Our Watchtower team continually watches reports of site breaches and regularly updates our database of affected sites. This is how 1Password knows that a password change is recommended for some site.

The “obvious” way for 1Password on your computer (and now your iOS device) to alert you would be to go through your 1Password items and ask a database on one of our servers about the status of each of those items. The problem with this “obvious” way of doing things is that any server your copy of 1Password queries would then be able to learn your IP address and what sites you have in your 1Password data.

If 1Password on some device were to ask our server, “Do you have Watchtower information about ISecretlyHateStarWars.org?” then our server would know that someone at your Internet address may have a very nasty secret. You certainly wouldn’t like us to know such things about you, and we don’t want to know such things either.

The road less travelled

So we don’t do things the obvious way. Instead, we send the same stripped-down version of our Watchtower database to everyone who turns on the feature. You have a local copy of the Watchtower data on your device, and 1Password just checks against that local copy. All we could know (if we chose to log such information) is which IP addresses have enabled Watchtower. We are never in a position to know what sites you have in your 1Password data.
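Here is a sketch of the difference in Python; the data layout is hypothetical and far simpler than our real feed, but it shows where the lookup happens:

```python
# The same stripped-down database is sent to every user who enables
# Watchtower, so the lookup below never leaves your device.
WATCHTOWER_DB = {
    "adobe.com": "2013-10-04",  # date a password change was recommended
}

def needs_password_change(domain: str, password_last_changed: str) -> bool:
    recommended = WATCHTOWER_DB.get(domain)
    # Nothing about your items is sent anywhere; the dates are compared locally.
    return recommended is not None and password_last_changed < recommended

print(needs_password_change("adobe.com", "2012-06-01"))  # True: update it!
```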

Baked-in privacy

It took a bit of extra work to design Watchtower in a way that preserves your privacy, but we think it is worth it.

Your privacy must be protected by more than mere policy (a set of rules we make on how we behave with respect to data about you); instead, we aim to bake privacy protection into the very structure of what we build. We design 1Password in a way that would make it hard for us to violate your privacy.

You can read more about this approach to privacy in our support article, Private by Design.


When back doors go bad: Mind your Ps and Qs

This is going to be a long and technical article, but the point can be stated more simply:

The kinds of security architectures in which it is easy to insert a back door are typically less secure than the security architectures in which it is hard to insert a back door. The back doors that were recently disclosed in Juniper Networks’ ScreenOS firewalls exemplify this point dramatically. It is also possible that the back door technology developed by the NSA is being used by some entity other than the NSA.

Perhaps the Economist put it better in When back doors backfire:

The problem with back doors is that, though they make life easier for spooks, they also make the internet less secure for everyone else. Recent revelations involving Juniper, an American maker of networking hardware and software, vividly demonstrate how. Juniper disclosed in December that a back door, dating to 2012, let anyone with knowledge of it read traffic encrypted by its “virtual private network” software, which is used by companies and government agencies worldwide to connect different offices via the public internet. It is unclear who is responsible, but the flaw may have arisen when one intelligence agency installed a back door which was then secretly modified by another.

Two kinds of back doors

There were two back doors in Juniper’s ScreenOS that were fixed by Juniper, the mundane one and the interesting one. The mundane one was a back door into the authentication system, allowing someone with knowledge of it to simply log in and administer the devices. Back doors in authentication systems are easy to create, and are all too common. This, by the way, is why we try to build 1Password’s security on encryption rather than authentication. As I’ve argued before: Architectures that allow for back doors are bad for security.

The more interesting – and for my soapbox, more instructive – back door is cryptographic. It also presents a greater threat. It allows the holder of the back door secret to decrypt certain kinds of traffic that were supposed to be encrypted by the ScreenOS (VPN) server. The attacker only has to listen to network traffic that is encrypted as far as everyone else is concerned, but she will be able to decrypt that traffic with the knowledge of the back door secret.

Another difference between the two is that the mundane one is a simple password (disguised as debugging code) that is now known to the world and so any unpatched systems can now be logged into by anyone. That secret is now out. The cryptographic back door, on the other hand, is a very different beast. We know it’s there, and we know how it works, but we do not have the key. The secret needed to make use of the back door is still a secret.

I don’t want to suggest that the back door in the ScreenOS authentication system isn’t important. It is very worrisome on its own, given where ScreenOS is deployed. But we all know that it is easy to add a back door to something that is merely protected by authentication (no matter how many authentication factors one has in place).

Politically, the questions about how that authentication back door got put into place in such an important product, and who may have been using it, are huge. But it offers nothing new from a technological point of view.

The cryptographic back door

Oh. I want to work for the NSA! Evil mathematicians just like Professor Moriarty!

—My daughter, upon learning about the Dual EC backdoor

The cryptographic back door all depends on how something called Q was created. Q is not a number (nor is it a free man); instead, it is a point on a special sort of graph. If Q was created randomly, then there is no back door key and all is well and good. But if Q was created by adding P (another point on the graph) to itself some large number of times, then knowing how many times P was added to itself becomes the back door secret.

You are going to see the equation

Q = dP

many times. Neither Q nor P are secret. But d is secret and it is the back door key.

No doubt you are thinking something like, “well if P and Q aren’t secret, can’t we all figure out what d is by dividing Q by P?” Shouldn’t

d = Q/P

Well, d kinda sorta does equal Q/P, but there just isn’t enough computing power around to perform the relevant calculation. I’ll save that bit of fun math for (much) further below.

Necessary historical background

I’m one of those people who, when asked a simple question, go into more historical background than many people appreciate. But here it really is necessary. Really.

ScreenOS (and others) used a cryptographic algorithm that was deliberately designed to have a back door that only the United States National Security Agency (NSA) was supposed to know about. The cryptographic algorithm is Dual Elliptic Curve Deterministic Random Bit Generator (Dual_EC_DRBG). I will refer to that simply as “Dual EC” from here on out.

Dual EC was one of four DRBGs specified in NIST SP 800–90A, and leaving the back door aside, it is still technically inferior to the other three listed in that document. The standard specified a P and Q but did not explain how they were chosen. In 2007, Dan Shumow and Niels Ferguson of Microsoft showed that if Q was created with something like Q = dP, then whoever held d could predict the generator’s output and thus decrypt traffic protected by keys produced using Dual EC.

In 2014, a group of cryptographers presented a great explanation of how knowledge of d can be exploited, and even presented a proof of concept (with their own Q) of doing this.

Back in 2007 nobody (well, nobody who was going to say anything) knew whether or not Q was created so as to have the back door. In September 2013 it became clear from some of the documents revealed by Edward Snowden that Dual EC had been designed with the back door. The Wikipedia page on Dual EC has a useful timeline.

How many bad Qs?

For ’tis the sport to have the engineer
Hoist with his own petard
Hamlet Act 3, Scene 4

So far the story I’ve told would merely suggest that Juniper has some explaining to do about why they used a DRBG in the way that they did, and why they chose one that was inferior to the alternatives and suspected of being backdoored. Juniper Networks are (sadly) not the only ones in that position.

But I’ve only told part of the story. ScreenOS did not use the NIST Q. It used two custom Qs over time, and where they came from and who knows the back door keys associated with those Qs is something we simply don’t know. The back door key associated with the Q from the NIST standard is clearly held by the NSA. But what about the two used (at different times) in ScreenOS?

Juniper explained that they didn’t use the NIST Q in their Dual EC implementation but used a Q that was created randomly. However, they did not provide any proof that it was created randomly. (There are ways to demonstrably create such things randomly.) So we don’t know if that “official Juniper Q” was created in a way to give its creator a back door key. Nor do we know who might hold that key.

But there is a third Q that was, according to Juniper, inserted by parties unknown in 2012. That is, someone simply changed the Q that is used in Dual EC in ScreenOS. We can only assume that this third Q is indeed backdoored, as it was added to the code surreptitiously. We have no idea who did this.

So even if we take Juniper at their word about their choice of Dual EC and other related design choices, the fact that they chose to use a system that was designed so that it could have a back door means that someone – we don’t know who – has had a back door key for decrypting VPN traffic between NetScreen devices for three years.

I have said it before and I will say it again, systems that are designed to have back doors (even if the back doors aren’t used) are inherently weaker than systems that are designed to make it difficult to have a back door.

Thar be math

The rest of this article tries to explain why it is easy to calculate Q from d and P but hard to calculate d from P and Q. Unless you have a great deal of patience and would like to learn more about the math, you can stop reading here.

Here is a picture of Patty and Morgan enjoying the pleasures of warm blankets on the sofa. Morgan also has no patience for math.

Dogs on couch, not doing math.

Patty and Morgan relaxing on couch. Not doing math.

Hard math, simple math

The actual math isn’t all that complicated, but it does rely on a number of unfamiliar notions. Some of these notions are hard to grasp not because they are complicated, but because they are simple. They are highly abstract. Indeed, you might even have to unlearn some things you know about addition.

I said earlier that even though P and Q are public, we can’t figure out what d is in Q = dP, but someone with d and P can easily construct Q. I’m not going to fully explain all of this, but I’m going to make an attempt to try to help people get a better sense of how something like this might be true. Along the way, I’m going to tell a few lies and omit lots of stuff. I need to lie to stand a reasonable chance of this article being useful to people who don’t already know the math.

One thing I should tell you right away is that P and Q are not numbers, while d is a number. I will get around to explaining what P and Q actually are below, but first we need to understand what it means to add two things together.

I am not going to try to explain how knowledge of d provides a back door. That is something that is difficult enough to explain in the normal case and is even harder given how Dual EC is actually used within ScreenOS. (See Ralf-Philipp Weinmann’s “Some Analysis of the Backdoored Backdoor” for some of those details.)

Instead, I’m focusing on a narrow point, allowing me to go into more depth about something that is central to modern cryptography.

Simple addition

Most of us have a pretty good sense of how addition works. We all pretty much know what the “+” in “3 + 5 = 8” means. We may not be able to define it precisely, but it is a basic concept that we understand. Well, mathematicians like to define things precisely, and more importantly, they like to distill things down to their simplest essence. Let’s see what the essential properties of addition are.

What are we adding?

To define addition we need to talk about what things are going to be added to what things. The expression “3 + blue” doesn’t mean anything because “blue” and “3” are not in the same set of things we might add. We might want to talk about adding colors, and so could reasonably say “red + blue = purple”.

Blue + Red = Purple

Addition can apply to more than just numbers.

Sometimes we add numbers; sometimes we add colors; but we need to have a set of things (numbers perhaps, or maybe colors) that we will be adding.

So let’s call the set of things that we will be adding E for an ensemble of things. When we are adding numbers, “blue” won’t be in E, but when we are adding colors, “blue” may very well be in E. When we are adding colors, 3 won’t be in E but it will be when we are adding whole numbers.

For the addition that most people are used to, there are an infinite number of things in E, but that doesn’t have to be the case. When we add hours to know what time it will be 15 hours after 9 o’clock, we use only 12 (or 24) numbers, of course. Remember, we are trying to distill addition down to its essence, so we want it to work both when E has a finite number of things in it and when it doesn’t.

Addition offers no escape

If we are adding numbers, we don’t want 3 + 5 to equal “blue”. We want 3 + 5 to result in another number. If we are adding colors, we want the result of addition to be some color.

So we will say that

If a and b are both in E then a + b is also in E.

The fancy name for this is “closure”, or “E is closed under addition”.

You can switch things around

We like addition to work out so that “a + b” will yield the same result as “b + a”, so we say that

If a and b are both in E then a + b = b + a

We say that addition is “commutative”.

Any groupings you want

When you mix blue and red and white, it doesn’t matter if you mix the blue and the red first or the red and the white first. Likewise if you add 3 + 2 + 10, it shouldn’t matter whether you add the 3 and the 2 first or the 2 and the 10 first.

If a, b and c are all in E then (a + b) + c = a + (b + c)

And so we say that addition is “associative”.

We need a way to do nothing

I have no problem grasping the importance of doing nothing, but not everyone seems to agree with me (Hi boss!). Anyway, we need a way to do nothing with addition.

To put that in more mathy terms, we want E to contain something that behaves like zero. If you add zero to anything, you end up with what you started with.

If a is in E and 0 is the zero element then a + 0 = 0 + a = a

For the color adding analogy, we might add the special color “transparent” as our zero element so that blue + transparent = blue.

The fancy way to talk about zero is to call it “the additive identity”.

Adding it all up

Mathematicians are happy to call anything that meets those four conditions “addition”.

Getting to the points

So we can add numbers and we can add colors. But let’s talk about points on a peculiar geometric shape called an elliptic curve. Here is a portion of an elliptic curve, and it has two points on it, P and Q. (These aren’t the same as P and Q used in Dual EC.)

Elliptic curve

Elliptic curve with points P and Q

There are some interesting properties of these odd sorts of shapes. One of them is that if a (non-vertical) straight line crosses the curve at all, then it will cross it in exactly three places.

For example, if we draw a line that goes through both P and Q then that line will go through the curve at exactly one other point.

Line P,Q on elliptic curve

The first step in adding points P and Q is drawing the line connecting them and seeing where else that line crosses the curve

From that third point where the line crosses the curve we can find the point on the opposite side of the curve by drawing a vertical line up or down (as needed) from that point of intersection:

Addition of P, Q on elliptic curve

Draw a vertical line from where the line between P and Q crosses curve to get point P + Q

Now as strange as it might seem, if we define addition of points on an elliptic curve this way, it will meet all the properties of addition that we want. (I’m leaving out how the zero element is defined, and have constructed my examples so that I can get away with that.)

Repeated addition

Now let us return to the problem we started out with. If Q = dP, how come it is easy to calculate Q when we know d and P, but hard to find d from Q and P? P and Q are points on the curve, while d is just an ordinary (though very large) number.

Well, we’ve defined what addition means for points, but what does it mean to say, for example, “4P”? “4P” is just shorthand for writing “P + P + P + P”. (Note that we can treat “4P” kind of like multiplication, but only because 4 is an ordinary number and we can just say that it is repeated addition of P. We haven’t defined any way to multiply two points together.)

Let’s work through a small example and try to find where 4P is. The first thing we need to do is add P to itself to get 2P.

We’ve already stepped through how to add two different points, but how do we add P to itself? We take as our line going through P and P to be the tangent line at point P. This is the line that touches point P but does not cross the curve at P:

Tangent at P on elliptic curve

The tangent line at P crosses the curve in exactly one other place.

That line crosses the curve at some other point. And just as we did when we added P and Q earlier, we draw a vertical line through that point of intersection to get our sum:

Adding P+P on elliptic curve

When adding a point to itself, we take the tangent at the point to construct the P,P line

Now we have point P, which we started with, and we also have point P + P, or 2P. So if we want to find 3P we can just add P and 2P together.

P + 2P = 3P on elliptic curve

Add 2P to P to find point 3P on the curve.

This gives us point 3P and we can add 3P and P together to get to our goal of 4P:

P + 3P = 4P on elliptic curve

To get point 4P, we add 3P and P.

A shortcut

It took us three additions to calculate 4P from P. But we could have saved ourselves one of those. We never really needed to calculate 3P. Instead once we had the point 2P we could have just added 2P to itself to get 4P. Because point addition is associative, we can think of 4P as (P + P) + (P + P), which is 2P + 2P.

4P = 2P + 2P on elliptic curve

Adding 2P to itself is another way to get 4P

With this shortcut, we have been able to calculate 4P from P with only two additions: The first addition was adding P + P to get 2P; the second addition was 2P + 2P to get 4P.

The shortcut offers huge savings

When calculating 4P our shortcut saved us only one addition, but the savings get bigger the bigger d is. Suppose we wanted to calculate 25P. The slow way would require 24 additions, but using this shortcut, it would only take us six additions.

  1. 2P = P + P
  2. 4P = 2P + 2P
  3. 8P = 4P + 4P
  4. 16P = 8P + 8P
  5. 24P = 16P + 8P
  6. 25P = 24P + P

So having to only do six of these additions versus 24 by the slow way is a big savings. When the numbers get really big, the savings get much bigger. Suppose we wanted to calculate the result of adding P to itself one billion times. 1000000000P would take almost a billion additions the slow way, but using the shortcut we can do it with only about 45 additions.

The number of point additions it takes to calculate dP using this shortcut is roughly proportional to the number of bits in d.
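For the curious, here is the shortcut in runnable form, on a toy curve (y^2 = x^3 + 2x + 2 over the integers mod 17, a standard textbook example). Real cryptography uses the same idea over numbers hundreds of bits long; the toy is just small enough to play with:

```python
# Toy elliptic curve: y^2 = x^3 + 2x + 2 over the integers mod 17.
P_MOD, A = 17, 2
O = None  # the "zero element" (the point at infinity)

def add(p, q):
    if p is O: return q
    if q is O: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O  # the points sit on a vertical line and cancel out
    if p == q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)  # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD)         # chord slope
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def mul_slow(d, p):
    """dP by adding P to itself d - 1 times: about d additions."""
    r = p
    for _ in range(d - 1):
        r = add(r, p)
    return r

def mul_fast(d, p):
    """dP by the doubling shortcut: about 2 * log2(d) additions."""
    r = O
    while d:
        if d & 1:
            r = add(r, p)  # include this power of two in the sum
        p = add(p, p)      # double: P, 2P, 4P, 8P, ...
        d >>= 1
    return r

G = (5, 1)  # a point on the toy curve
assert mul_slow(25, G) == mul_fast(25, G)  # same point, far fewer additions
```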

No shortcut to d

So we’ve got a really efficient way to compute Q from d and P, but there is no efficient shortcut for finding d from P and Q. Indeed, this asymmetry between calculating Q and finding d is one of the things that makes elliptic curve cryptography work.

When numbers get big

When we start talking about the huge numbers that cryptographers actually use for these sorts of secrets, we get truly massive savings. Those savings are so big that using the shortcut means that we can calculate the result in a fraction of a second, while doing it the slow way would take all of the computers on earth many times the age of the universe to compute.

The number of point additions needed to calculate Q = dP is roughly proportional to the number of bits in d, while the number of additions needed to find d from P and Q the slow way is roughly d itself. As d gets big (say a 128-bit number), the difference between d and the number of bits in d is enormous. For a 128-bit d, calculating Q takes fewer than 256 point additions, but finding d the slow way would take roughly 2^128 additions, a number 39 digits long. To get a sense of just how big numbers like that are, take a look at “Guess why we’re moving to 256-bit keys”.
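
A quick sanity check on those sizes in Python:

    >>> 2 * 128             # upper bound on point additions to compute Q = dP
    256
    >>> len(str(2 ** 128))  # digits in the rough count of additions to find d
    39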

And if I may introduce more terminology, that asymmetry is an instance of what is called the “Generalized Discrete Logarithm Problem” (GDLP).

What is the GDLP good for?

To give you a hint of how this sort of thing can be used for keeping secrets, suppose that two of my dogs, Molly and Patty, wish to set up secret communication without Mr. Talk (the neighbor’s cat) listening in. The dogs will need to establish between them a secret key that only they know, but how can they do this if Mr. Talk can hear everything they send to each other?

First Patty and Molly will agree on an elliptic curve and some other non-secret parameters. Among those parameters there will be a starting point, which I will call G. Patty picks a secret number that only she knows; we will call that little p. Patty will calculate the point pG. Let’s call the point that Patty calculates big P.

Patty can send P to Molly, but neither Molly nor Mr. Talk (who is eavesdropping) will be able to calculate little p, even though they both know G and P. Big P is Patty’s public key, while little p is Patty’s private key.

Molly can do the same. She calculates M = mG, where little m is Molly’s secret, and sends big M to Patty.

Now here is where it gets fun. Patty can calculate point K_p by multiplying M by little p. That is

K_p = pM

Molly can also calculate K_m by multiplying Patty’s public key, P, by Molly’s own secret, m. K_m = mP.

The Ks that both calculate will be the same. This is because point addition is commutative and associative. Remember that M = mG. So

K_p = pM = pmG.

We can look at what Molly calculates from Patty’s public key the same way: P = pG. When we look at what both Molly and Patty calculate with their own secrets and the other’s public point, we get

K_m = mP = mpG = pmG = pM = K_p

Let’s look at each end of that long chain of “=”s, but drop out all of the middle parts.

K_m = K_p

Both Molly and Patty calculate the same secret K even though neither of them has revealed any secret in their communication. And Mr. Talk, who has listened to everything Molly and Patty sent to each other, cannot calculate K, because there is no feasible way for him to learn either Patty’s secret p or Molly’s secret m.

This is magic with mathematics.
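
If you would like to watch Patty and Molly do this end to end, here is a toy version in Python, using the tiny textbook curve y^2 = x^3 + 2x + 2 over the integers mod 17 (small enough to check by hand, and therefore hopelessly insecure). The secret numbers and all the names are illustrative only.

    FIELD = 17                       # the (toy) prime modulus
    A = 2                            # the a in y^2 = x^3 + a*x + b

    def inv(x):
        return pow(x, FIELD - 2, FIELD)   # modular inverse (Fermat's little theorem)

    def add(P, Q):
        """Chord-and-tangent addition, with the arithmetic done mod FIELD."""
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % FIELD == 0:
            return None              # vertical line: the "zero" point
        if P == Q:
            s = (3 * x1 * x1 + A) * inv(2 * y1) % FIELD
        else:
            s = (y2 - y1) * inv(x2 - x1) % FIELD
        x3 = (s * s - x1 - x2) % FIELD
        return (x3, (s * (x1 - x3) - y1) % FIELD)

    def mult(d, P):                  # the double-and-add shortcut again
        result, addend = None, P
        while d:
            if d & 1:
                result = add(result, addend)
            d >>= 1
            if d:
                addend = add(addend, addend)
        return result

    G = (5, 1)                       # the agreed, non-secret starting point

    p = 3                            # Patty's secret
    m = 7                            # Molly's secret
    P_pub = mult(p, G)               # Patty sends this to Molly
    M_pub = mult(m, G)               # Molly sends this to Patty

    assert mult(p, M_pub) == mult(m, P_pub)   # pmG == mpG: the shared secret K

Mr. Talk sees G, P_pub, and M_pub go by on the wire, but on a real curve (with a prime hundreds of bits long instead of 17) recovering p or m from them is exactly the GDLP above.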

Lies and omissions

There are a lot of things that I left out of the explanation I gave here. And in a few cases I even told a couple of lies to make the explanation simpler. I’m only going to mention some of them here.

  • The shortcut that I described for computing Q from d and P is not the most efficient shortcut, but it is the efficient shortcut that is easiest to explain.
  • There are shortcuts for calculating d, but none of them is thought to be efficient (on non-quantum computers, anyway).
  • I have not explained what I mean by “efficient”. (It is a technical term.)
  • Apart from the brief sketches above, I haven’t derived the formulas for addition on elliptic curves; I mostly did it with pictures.
    Text exchange about EC science fair project

    So my daughter texted me a picture of a science fair project she saw. I texted back.

    It looks like someone at my daughter’s high school did a science fair project on finding good algorithms for point doubling. Thus forcing me to reveal just how much of a pushy and intrusive parent I am.

  • To make addition on elliptic curves work as an instance of the GDLP, we need the curve to have only a finite number of elements. That involves, among other things, doing all of the normal arithmetic in the formulas modulo some large prime number, as the toy example above does.
  • I didn’t talk about the additive identity (zero element) for addition on elliptic curves.
  • I never said what “=” means when I defined addition.
  • I kinda sorta misused the terms “public key” and “private key” when applying those terms to ECDHE.
  • I didn’t say anything at all about the existence of additive inverses.
  • The addition-of-colors analogy would stop working if we considered additive inverses.
  • I implied that all and only the properties I listed for addition are what is needed for something to be called “addition”. The properties that I listed plus the existence of additive inverses are what is needed for the specific type of addition we need for this cryptography. Addition of numbers requires something more because numbers – unlike points on an elliptic curve or colors – line up in an order.

The after math

I’ve been able to use an example of a rare cryptographic back door as a chance to talk about the math of elliptic curve cryptography. But if you are asking what this has to do with AgileBits and our products (after all, we don’t use elliptic curves in our products), then we must return to the original point: If you build a system into which it is easy to add a back door, you shouldn’t be too surprised when a back door is added. So let’s not build things that way.

I will close with another quote from that Economist article:

Calls for the mandatory inclusion of back doors should therefore be resisted. Their potential use by criminals weakens overall internet security, on which billions of people rely for banking and payments. Their existence also undermines confidence in technology companies and makes it hard for Western governments to criticise authoritarian regimes for interfering with the internet. And their imposition would be futile in any case: high-powered encryption software, with no back doors, is available free online to anyone who wants it.

Podcast header

Jessy on authentication and passwords in premiere of On the Wire podcast

Jessy Irwin On the Wire

On the Wire is a new podcast about the security threats we face and how we can protect ourselves against them. It’s hosted by our friend Dennis Fisher, who was previously behind the podcast at Threatpost.

The new podcast covers such topics as privacy, fraud, and social engineering. It aims to do so in a simple way that’s easy for everyone to understand. A lot of this stuff goes right over my head, which is why I’ve been relying on 1Password to secure my digital life for so long! I’m looking forward to listening to a security-centric podcast that doesn’t make my eyes glaze over.

The first episode of On the Wire is extra special because it features Jessy, our Security Evangelist! Listen to Jessy talk about the state of authentication, passwords, and security education.

To listen to the episode, you can stream it on the On the Wire website.

Security header

1Password and your browsing habits: What we don’t know can’t hurt you

1Password blueprint

There are some things that we would love to know about people who use 1Password. Some of that information would be useful in improving 1Password; some might just be interesting statistics about our users. Here are a few things we might want to know:

  • Which sites appear in your 1Password data
  • When, how often, and from which IP address you use 1Password to log in to particular websites
  • Which new Logins you save
  • How often and where you fill credit card data

Knowing such things about our customers would help us focus our development efforts on the things that people want to use most. But here is the point of this article: We do not have that information and we have built 1Password so that it would be hard to even collect that information. Our principle of Private By Design means that we don’t know many things. This is for your benefit.

We have no such data

Despite our curiosity and the usefulness of such data, we have designed 1Password so that we can never see that information. We’ve written before about how our security architecture protects your privacy (see Private By Design and the opening sections of our 1Password for Teams white paper [PDF]), but I will highlight some of its points below.

The importance of knowing nothing

One of our design principles is based on the fact that we cannot lose, use, or abuse data that we never have. We believe that you should be in control of your data and that your use of your data is your business. To the extent possible, we have built 1Password in such a way that not only do we not retain data about your use of 1Password, but we make it hard to even obtain such data.  We have also chosen not to include any in-app analytics tools within 1Password.

Some of this is basic security design. Our design principle isn’t radical in theory, but it can be difficult to implement. For example, our underlying data synchronization system would be much simpler if we allowed ourselves to know which sites you are logging in to when you log in to them. But because we do not want to ever know that information, we have had to put in more intricate machinery.

I should also acknowledge that some of our design principle is motivated by cowardice. We do not want our servers and systems to be heavily attacked, so we have designed our systems such that we have little worth stealing. Our cowardice here works to protect your privacy and your security. Cowardice can be a virtue.

Example: We can’t watch from the Watchtower

1Password Watchtower

A relatively simple example of our privacy mechanism is how Watchtower works in 1Password for Mac and Windows. 1Password does not send a query to our server asking, “Is site X in the Watchtower database? What does it report?” If we had built it that way, our server logs could reveal exactly which sites are in your 1Password data. Instead, 1Password fetches all of the information Watchtower needs to your computer. Every instance of 1Password fetches the same data file, in a way that does not depend on which Logins you have.
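
In code, the privacy-preserving pattern looks roughly like the sketch below. The URL and file format are hypothetical stand-ins that I made up for illustration, not 1Password’s actual feed or API.

    import json
    import urllib.request

    # Hypothetical feed: every copy of the app fetches this same file.
    WATCHTOWER_FEED = "https://example.com/watchtower/all-sites.json"

    def fetch_feed():
        """One download, identical for every user, so the server learns
        nothing about which Logins any particular user has."""
        with urllib.request.urlopen(WATCHTOWER_FEED) as resp:
            return json.load(resp)

    def vulnerable_logins(my_sites, feed):
        """The per-site lookup happens entirely on the user's machine."""
        affected = {entry["domain"] for entry in feed}
        return [site for site in my_sites if site in affected]

The contrast is with a design that calls something like lookup?domain=example.com for each Login, which would hand the server a running list of your sites.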

Security designs matter

I would like to step back and look at a picture that is perhaps even bigger than the privacy matters discussed here. Please indulge me in my musings.

We are proud of the overall security design of 1Password, and we certainly like to talk about it. Yet, very understandably, most people are not going to look at the subtleties of the design and its implications. As a consequence, some of the things that we think are the biggest security benefits of 1Password are invisible to users, and so we occasionally hit you with articles like this.

Sometimes our security design makes certain “features” irrelevant and inapplicable. See Authentication v Encryption for a discussion of one such feature. Sometimes, as in the example of Watchtower described above, it means that we have to work harder to put a feature in place than we would have if we’d used a different security design. But even when we have to work harder, we believe that our security design is the better choice. To maintain a privacy-preserving security architecture we are happy to do the extra work.