1Password inter-process communication: a discussion

Recently, security researcher Luyi Xing of Indiana University Bloomington and his co-authors released the details of their research revealing security vulnerabilities in Apple’s Mac OS X and iOS that allow “a malicious app to gain unauthorised access to other apps’ sensitive data such as passwords and tokens for iCloud, Mail app and all web passwords stored by Google Chrome.” It has since been described in the technology press, including an article in The Register with a somewhat hyperbolic title. I should point out that even in the worst case, the attack described does not get at data you have stored in 1Password.

The fact of the matter is that specialized malware can capture some of the information sent by the 1Password browser extension and 1Password mini on the Mac under certain circumstances.  But roughly speaking, such malware can do no more (and actually considerably less) than what a malicious browser extension could do in your browser.

For 1Password, the difficulty lies in fully authenticating the communication between the 1Password browser extension and 1Password mini; this problem, however, is not unique to 1Password. Securing inter-process communication is a system-wide problem on the operating system. A recent paper, “Unauthorized Cross-App Resource Access on MAC OS X and iOS” (PDF), by Luyi Xing (Li) and his colleagues shows just how difficult securing such communication can be. Since November 2014, we’ve been engaged in discussion with Li about what, if anything, we can do about such attacks. He and his team have been excellent at providing us with details and information upfront.

As always, we are limited in what we can do in the face of malware running on the local machine. It may be useful to quote at length from an earlier article of ours:

I have said it before, and I’ll say it again: 1Password […] cannot provide complete protection against a compromised operating system. There is a saying […] “Once an attacker has broken into your computer […], it is no longer your computer.” So in principle, there is nothing that 1Password can do to protect you if your computer is compromised.

In practice, however, there are steps we can and do take which dramatically reduce the chances that some malware running on your computer [could obtain your 1Password data].

That was written specifically about keystroke loggers, and there are some things that set this new attack apart. Like simple keystroke loggers, it doesn’t require “admin” or “root” access; unlike them, however, the researchers were able to sneak a proof-of-concept app past Apple’s reviewers.

The threat

The threat is that a malicious Mac app can pretend to be 1Password mini as far as the 1Password browser extension is concerned, if it gets the timing right. When it succeeds, the malicious app can collect Login details sent from the 1Password browser extension to the fake 1Password mini. The researchers have demonstrated that such an app can be installed and put itself in a position to capture passwords sent from the browser to 1Password.

Note that their attack does not gain full access to your 1Password data but only to those passwords being sent from the browser to 1Password mini. In this sense, it is getting the same sort of information that a malicious browser extension might get if you weren’t using 1Password.


1Password provides its own security

What I mean by this is that for the bulk of what we do, we don’t generally rely upon security mechanisms like sandboxing or the iOS Keychain. So it doesn’t matter whether those sorts of security measures provided by the operating system fail.

The careful reader will note, however, that I used phrases like “for the bulk of what we do” and “don’t generally rely upon” in the previous paragraph. There are some features and aspects for which some of 1Password’s security makes use of those mechanisms, and so vulnerabilities in those mechanisms can allow for harm to us and our customers.

1Password mini listens to the extension

Application sandboxing is a good thing for security. But it limits how the 1Password browser extension can actually exchange data with 1Password itself. Indeed, the extension (correctly) has no direct access to your data. Keeping your data out of the browser (a relatively hostile environment) is one of our security design choices. But this does mean that the 1Password browser extension needs to find a way to talk to something that does actually manage your data. 1Password mini (originally the 1Password Helper) was invented for this purpose.

One of the few ways that a browser extension can communicate locally is through a websocket. Browser extensions are free to talk to the Internet as a whole, but we certainly don’t want our browser extension doing that; we only want it talking to 1Password locally. So we restrict the browser extension to only talking to 1Password mini via a local websocket.
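As a rough sketch of that design choice (illustrative Python, not our actual implementation, and the default port number is made up for the example), a local-only listener simply binds to the loopback address, so no remote host can ever reach it. A websocket server sits on top of exactly such a socket:

```python
import socket

def make_local_listener(port: int = 6263) -> socket.socket:
    """Open a listening socket reachable only from this machine.

    Binding to 127.0.0.1 rather than 0.0.0.0 means connections can
    only come from processes on the local host; a websocket server
    for a browser extension would listen the same way.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))  # loopback only, never 0.0.0.0
    srv.listen(1)
    return srv
```

Note that binding to loopback keeps remote attackers out, but it does nothing to tell two local processes apart, which is exactly the mutual authentication problem discussed below.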

Mutual authentication

Obviously we would want 1Password mini and the browser extension to only talk to bona fide versions of each other, so this becomes a problem of mutual authentication. There should be some way for 1Password mini to prove to the extension that it is the real one, and there should be a way for the browser extension to prove to 1Password mini that it is a real 1Password browser extension.

The difficulty that we face is that we have no completely reliable mechanism for that mutual authentication. Instead, we employ a number of separate mechanisms of authentication, but each has its own limitations. We have no way to guarantee that when the browser extension reaches out to 1Password mini it is really talking to the genuine one.

There are a number of checks that we can (and do) perform to see if everyone is talking to who they think they are talking to, but those checks are not perfect. As a result, malware running on your Mac under your username can sometimes defeat those checks. In this case, it can pretend to be 1Password mini when talking to the browser extension and thus capture any information sent from the 1Password browser extension that is intended for the mini.

What can be done

Neither we nor Luyi Xing and his team have been able to figure out a completely reliable way to solve this problem. We thank them for their help and suggestions during these discussions. But, although there is no perfect solution, there are things that can be done to make such attacks more difficult.

What you can do

1. Check “Always Keep 1Password Mini Running” in Preferences > General

In the specific attack that Luyi Xing demonstrates, the malware needs to be launched before the genuine 1Password mini is launched. By setting 1Password mini to always run, you reduce the opportunity for that particular attack.




2. Keep using the 1Password browser extension

Although the attack described targets the communication between 1Password mini and the browser extension through specialized malware, using the 1Password browser extension still protects you from a more typical malware attack: pasteboard/clipboard sniffers. Likewise, the 1Password extension helps fend off phishing attacks, because it will refuse to fill into pages that don’t match the domain of your saved Logins.

Quite simply, the 1Password extension not only makes life easier for you, but it is an important safety feature on its own.

3. Pay attention to what you install

As always, be careful about what software you run and install on your system. On your Mac, open System Preferences > Security & Privacy > General. You’ll see an “Allow apps downloaded from:” setting there. We strongly recommend confirming that this setting is configured so that only apps from trusted sources can be opened. You can read more about the setting and its options on Apple’s support site.

Now Xing and his team point out that this isn’t a guaranteed way to prevent malware being installed. They were able to get a malicious app approved by the Mac App Store review process. However, I think it is reasonable to assume that now that Apple reviewers know what to look for, it will be much harder for that specific kind of malware to get through.

What we can do

There are additional (defeasible) mechanisms that we can add to our attempts at mutual authentication between the extension and 1Password mini. I will briefly mention a few that we’ve considered over the years.

Encryption with an obfuscated key

One option is to have a shared obfuscated key in both 1Password mini and the extension. (Remember that the browser extension never sees your Master Password so any secret it stores for authentication cannot be protected by your Master Password.)

Obfuscation only makes things harder for attackers until someone breaks the obfuscation, and every system designer should assume that obfuscation will be broken. See our discussion of Kerckhoffs’ Principle in our article, “You have secrets; we don’t,” for some background on why we tend to be reluctant to use obfuscation. Of course, it may be warranted in the absence of a more effective alternative, so this remains under consideration.
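To make the trade-off concrete, here is a toy sketch of what shared-key authentication could look like: a simple HMAC challenge-response. This is illustrative Python, not 1Password’s actual code, and the hard-coded key stands in for whatever an obfuscated build would hide.

```python
import hashlib
import hmac
import os

# Stand-in for a key that would ship, obfuscated, inside both the
# extension and 1Password mini. Anyone who de-obfuscates it can
# impersonate either side, which is the weakness discussed above.
SHARED_KEY = b"example-embedded-key-not-a-real-secret"

def challenge() -> bytes:
    """A fresh random nonce, sent to the party being authenticated."""
    return os.urandom(16)

def respond(key: bytes, nonce: bytes) -> bytes:
    """Prove knowledge of the key without sending the key itself."""
    return hmac.new(key, nonce, hashlib.sha256).digest()

def verify(key: bytes, nonce: bytes, response: bytes) -> bool:
    """Constant-time check of the challenge response."""
    return hmac.compare_digest(respond(key, nonce), response)
```

The protocol itself is sound; the flaw is that both endpoints must store the key somewhere malware on the same machine can, with enough effort, dig it out.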

In anticipation of a likely suggestion, I should point out that even the magic of public key encryption wouldn’t save us from having to rely on obfuscation here; but I will save that discussion for our forums.

Using the OS X keychain

Another option would be to store authentication secrets in the OS X keychain, so that both our browser extension and 1Password mini would have access to it. This could be made to work for authenticating 1Password mini to the extension for those browsers that allow easy use of the OS X keychain.

This might solve half the problem for some browsers, but to date we’ve been focusing on solutions that work across all of the browsers we support.

An extreme solution

In the extreme case, we could have some explicit pairing (sort of like Bluetooth) between 1Password mini and the extension.  That is, the browser extension may display some number that you have to type into 1Password mini (or the other way around).  With this user intervention we can provide solid mutual authentication, but that user action would need to be done every time either the browser or 1Password mini is launched.
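A toy sketch of such a pairing scheme (illustrative Python only; the code format, iteration count, and key derivation here are assumptions for the example, not a design we ship): both sides feed the displayed code into the same key derivation, and only a party that actually saw the code ends up with the session key.

```python
import hashlib
import hmac
import os
import secrets

def display_pairing_code() -> str:
    """A short code shown in one window for the user to type into the other."""
    return f"{secrets.randbelow(1_000_000):06d}"

def derive_session_key(code: str, salt: bytes) -> bytes:
    """Both endpoints run this; matching codes yield matching keys."""
    return hashlib.pbkdf2_hmac("sha256", code.encode(), salt, 100_000)

def authenticate(key: bytes, message: bytes) -> bytes:
    """Tag each message so the peer can confirm it came from the paired side."""
    return hmac.new(key, message, hashlib.sha256).digest()
```

The user action is what rules the malware out: a fake 1Password mini never sees the code the genuine windows display. The cost, as noted above, is repeating the ritual at every launch.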

Quite frankly, there is no really good solution for this. To date, our approach has been to put in those authentication checks that we have and keep an eye out for any hints of malware that exploits the known limitations of what we do.

Is 1Password for iOS affected?

The research paper isn’t limited to discussing inter-process communication (IPC) that is done through websockets, but covers a wide range of mechanisms used on Apple systems. This includes some mechanisms that we may use for some features in 1Password for iOS.

Shared data security

1Password for iOS shares some of its data with the 1Password app extension. As most of that data is encrypted with your Master Password, it is not a substantial problem if that data becomes available to attackers. The exception, of course, is the Touch ID secret.

As yet, we have not had a chance to test whether there is any exposure there, but watch this space for updates.


We truly are grateful for the active security community, including Luyi Xing and his team, who take the time to test existing security measures and challenge us to do better. Our analysis of the researchers’ findings will continue and we will post an update if further action is necessary.


9th Anniversary Sale-abration!

Update: The 9th Anniversary Sale ended on June 20th, 2015.

Let me take you back in time to the days when AgileBits was known as Agile Web Solutions. About 10 years ago, Dave Teare and Roustem Karimov decided to spend a month writing a quick little password management app to help them share data more efficiently while they worked to build their Palm app empire. Response to the little tool was great, and soon the pair had made the app available for download in their store.

Our intrepid co-founders quickly realized that they had a Thing on their hands. A really Real Thing! At the same time, Dave & Roustem had begun a little love affair with the Mac, and decided it was the perfect platform for their ambitious application. And so, nine years ago, on June 18, 2006, 1Password 1.0 for Mac was born. (For a real blast from the past, check out the original 1Passwd website!)


Palm may be a thing of the past, but Dave and Roustem’s “hobby” has grown into a powerful and secure data management tool for most major desktop and mobile operating systems, including Mac OS X, Windows, iOS, and Android. What began as two coders and one support person has grown into a team of almost 60.

We’re so proud of how much 1Password has evolved over the past nine years, largely due to feedback and support from millions of customers just like you. To celebrate, we’re offering a 30% discount on 1Password for Mac and Windows. You can get both 1Password for Mac and Windows in the AgileBits Store, and 1Password for Mac is also available from the Mac App Store.

This is a limited-time offer, so take advantage of the celebration and get your 1Password license today!


Fingerprint unlock coming to 1Password for Android [Update: Sneak peek!]

A strong Master Password is critical to keeping your 1Password vault secure. It’s also not the easiest thing to type out on a mobile device. What if you had another way to unlock your vault, in addition to your Master Password? One that is both convenient and secure?

For some time now, we’ve been wanting to give you the ability to unlock 1Password for Android using your fingerprint. The challenge has been that there was no standard way for us to implement it that would work across a variety of devices made by different manufacturers. And so we waited, and you waited.

Now, our wait is over.

The Android M Developer Preview was just announced at Google I/O, Google’s annual developer conference going on right now in San Francisco. For us, one of the most exciting new features is the standardized fingerprint support that is coming to the Android platform. This means that we have some awesome news for you:

We will be adding support for fingerprint unlock to 1Password for Android when Android M launches later this year!


We don’t usually talk about upcoming features, but we were just too excited about this one to keep it a secret.

As we get closer to the launch of Android M, we will need your help to beta test fingerprint unlock. If you’d like to be among the first to try it out, we invite you to join our beta team. We will share more information in time; for now, we hope you are as excited about this new development as we are!

Update: We had the privilege of being demoed at Google I/O today! If you happened to be in the audience, we’d love to hear what you thought of the demo. And if you weren’t, we’re happy to be able to show you a very, very quick sneak peek. Here’s what it might be like to access your 1Password vault using fingerprint unlock.


Turbo boost 1Password for Windows with new 4.5 version

Ctrl+\ has become muscle memory for millions of 1Password users all around the world. It’s hard to beat the speed of a customizable keyboard shortcut. Unless, of course, we focus on what happens after you invoke the 1Password extension in your web browser.

The technology behind the extension is what fills your 1Password information in web forms. It’s an incredibly complex system that we lovingly call The Brain, and it has received a serious upgrade in 1Password 4.5 for Windows. What this means for you is that filling web forms is now faster and more accurate than ever before.

An upgraded Brain is only one of the time-saving, experience-enhancing improvements in 1Password 4.5, which is a free update and available to download right now from our website.

Time-based, One-Time Passwords (TOTPs)

These single-use passwords are becoming more commonplace as a supplementary security measure to protect online accounts. If you’re not familiar with them, our blog post will help you learn how to use them in 1Password. Not only is it possible to add a time-based, one-time password to your Login items in 1Password 4.5, but it’s a cinch to do it.

Personalize Secure Notes with custom fields

Custom fields are great. They let you modify an item’s details view to hold exactly the information you want, formatted in a way that makes sense to you. In version 4.5, we’ve introduced custom fields to the Secure Notes item type.


1Password speaks your language

We have begun localizing 1Password for Windows and are kicking things off with nine languages. Thanks to our wonderful translators, they are:

  • Czech
  • Dutch
  • English
  • French
  • German
  • Italian
  • Polish
  • Spanish
  • Swedish

If you’d like to help translate 1Password into your language, you can create a free Crowdin account and join us at

Report website issues with Synapse

The 1Password extension is continuously being improved. It has to be, because there are umpteen billion websites out there, many with their own quirks and many others constantly changing. Now, you can help us ensure maximum compatibility by reporting any website issues you encounter.

In the extension menu, select the option to report an issue with the current website.

In the old days, you’d report a website and we’d ask you all sorts of questions, trying to learn any detail that might help us reproduce and diagnose the problem. No more! There are no lengthy questions to answer and you don’t have to know every minute detail about your web browser or the website. Our new website reporter makes it super easy: simply select the option in the extension menu and all the relevant information is already filled out for you.

Accessibility, Wi-Fi Sync, and more

If you use the NVDA screen reader, you should notice a marked improvement in this release. We are committed to making 1Password fully accessible to you, and there’s always room for improvement. We’d love your help in determining what most needs our attention. Please let us know how we’re doing!

Last on the list of highlights, but certainly not least, is Wi-Fi Sync. This is a wonderful way for you to sync 1Password for Windows with 1Password for iOS when you’re on the same wireless network, if you prefer not to use cloud-based services. We are constantly working to improve performance and reliability, and Wi-Fi Sync has received a nice coat of polish in this update.

1Password 4.5 for Windows is available now as a free update for existing owners (Help > Check for New Version), or you can grab a new copy from our downloads page. Thank you for choosing 1Password!


The sixth annual AGConf includes record number of smiles … and selfies

One of the awesome things about AgileBits is that we’re a mostly remote company. We hail from six different countries around the world, and two different continents. Before we opened our Toronto office two years ago, in fact, there was no headquarters (except, perhaps Dave and Roustem’s basements). This means that a large portion of the company gets to work from wherever their hearts desire: home office, coffee shop, city park, you name it. Which is completely awesome … except that we never really get to hang out as a team. That’s why the entire AgileBits gang tries to meet up every year or so in a sunny, palm-tree-filled location to talk 1Password, and to get to know each other a little better.


AGConf[5] followed in the proud tradition of AGConf[4] and saw the team weigh anchor on our now favourite cruise ship, the Liberty of the Seas.

Cruising and collaborating

Of course, one of the best things about having the whole team in one place is that it provides an all-too-rare opportunity for in-person collaboration. This opportunity was not wasted as we hauled out laptops in hotel lobbies, cruise ship lounges, and even poolside patios to share ideas and get support questions answered. It was so neat to be able to sit next to a friend who is normally a thousand miles away and solve a tricky question face-to-face instead of conversing solely via emoji and gif (and the occasional word) in Slack.

You’ve already seen the fruits of some of this collaboration in recent updates to 1Password, and our shiny new Knowledgebase, but we have even more great things that we’re excited to show you … soon!

All aboard!

Of course, even a work-cation can’t be all work and no play … and there was certainly a ton of fun available on board the ship.  From an all-you-can-eat frozen yogurt bar and all-day buffets to fancy ‘family’ dinners in a swanky dining room, we never went hungry. There were hot tubs to keep us relaxed, lounges for us to take over to play innumerable games of the Resistance, and a handy karaoke bar for when we just needed to get our groove on. (Note to the daring: never challenge Khad and his wife to a sing-off … you will not win.)


On this voyage, the Liberty of the Seas stopped in Belize and Cozumel, and we took full advantage of the opportunity to explore these tropical locations.

What a trip!

As always, our annual meet-up has left us all feeling refreshed, inspired, and ready to take 1Password to the next level.


Back doors are bad for security architecture

Instead of inventing encryption that only government can break, we should just breed a special unicorn that magically blocks terrorist acts.
Ryan Paul

Back doors into security systems weaken security. For everyone. This remains true despite wishful thinking on the part of those who may advocate back doors. The claim that back doors could be added to systems for law enforcement purposes without compromising the security of those systems was something that was heatedly discussed in the 1990s.

I had hoped that we had driven a stake through its heart back then, but it has been revived in the wake of Apple’s announcement last autumn that they have no method to unlock iOS devices without the user’s consent, and so don’t have anything that can be given to law enforcement agencies. The current version of Apple’s statement reads:

On devices running iOS 8.0 and later versions, your personal data such as photos, messages (including attachments), email, contacts, call history, iTunes content, notes, and reminders is placed under the protection of your passcode. For all devices running iOS 8.0 and later versions, Apple will not perform iOS data extractions in response to government search warrants because the files to be extracted are protected by an encryption key that is tied to the user’s passcode, which Apple does not possess.

Ever since then there has been official and unofficial hand wringing about the threat that this poses to public safety and national security. This is often accompanied by “suggestions” of building systems that don’t compromise the security of a system, give (the right) governments the access they want, and are called something other than “back doors”.

But in addition to whatever risks government access poses, there is a subtle but crucial point that is often overlooked: The kinds of security architectures in which it is easy to insert a back door are typically less secure than the security architectures in which it is hard to insert a back door. I will come back to that in more detail below, but first let me review a few events and concepts.

Wishful thinking

Over the past half a year, we’ve been told that through some technological wizardry there must be a way to provide governments with what they want without compromising user security. Each time suggestions of that sort come up they are met with ridicule from cryptographers and information security specialists.

An early example is from a Washington Post editorial in October 2014:

A police “back door” for all smartphones is undesirable — a back door can and will be exploited by bad guys, too. However, with all their wizardry, perhaps Apple and Google could invent a kind of secure golden key they would retain and use only when a court has approved a search warrant.

The phrase “secure golden key” has become a running joke among security specialists since then.

More recently (in January of [2015]) British Prime Minister David Cameron called for government readable encryption. Prime Minister Cameron declared that there should be “no means of communication” that his government “cannot read.” Yet he also stated that this would not involve a “back door.”

Without a very specific proposal in hand, it is hard to analyze the suggestions in detail: all we can do is poke fun at what we imagine they might mean. At least we now have a slightly more specific idea of what it might mean in the US from Michael S. Rogers, the head of the National Security Agency (NSA). He appears to be advocating key escrow with threshold secret sharing for the escrowed key. As described in the Washington Post on April 10 [2015]:

Why not, suggested [Rogers], require technology companies to create a digital key that could open any smartphone or other locked device to obtain text messages or photos, but divide the key into pieces so that no one person or agency alone could decide to use it?

I would love to talk about how keys can be divided into pieces so that no one person can decide to use it, but I will save that for another article. (It’s really cool, and the essential mathematical concept is not actually that hard to grasp.)  But that slightly more specific proposal still doesn’t address the fact that key escrow can’t really be built into securely designed systems. This should become more clear below.

Each of those proposals, in its own way, fails to recognize that, entirely separate from the privacy concerns, inserting a government access mechanism into a cryptographic system requires weakening that system.

What’s a back door?

A back door is simply a second way of gaining access to some resource. Imagine a bank vault with a very visible and secure vault door. Now imagine that there is a hidden back door into the vault that few people are aware of. Typically a back door is created deliberately and its existence is kept secret. It isn’t too far from the truth to consider a back door a deliberate security vulnerability.

I am using the term “back door” broadly here because from the user’s point of view, and from the point of view of its implications for security architecture, the narrower definition isn’t useful. Under a narrow definition, a back door can only be added to systems that have (front) doors. Tools like 1Password and Knox for Mac don’t have any doors to begin with, as they operate solely through encryption and not authentication.

Not everything that looks like a back door is secret or malicious. For example, when my bank needs to deposit or withdraw funds from my account, it doesn’t go in through the same door that I do. The bank has legitimate access through their own doors. Indeed, one of the major reasons I use a bank is so that it can perform such transactions on my behalf. So in this case the apparent back door is essential to the purpose of the system in the first place. I will not be including such things in my discussion of “back doors.” Those are just other front doors.

Indeed, my usage is similar to what appears in Matt Blaze’s prepared testimony (PDF) before Congress on April 29, 2015.

These law enforcement access features have been variously referred to as “lawful access”, “back doors”, “front doors”, and “golden keys”, among other things. While it may be possible to draw distinctions between them, it is sufficient for the purposes of the analysis in this testimony that all these proposals share the essential property of incorporating a special access feature of some kind that is intended solely to facilitate law enforcement interception under certain circumstances.

Key escrow

It appears that Admiral Rogers is advocating a key escrow system. Under my broad definition of back door, this is one mechanism. The notion is that a copy of a cryptographic key is deposited with a safe pair of hands (an escrow service) who store that copy securely and will only release it under certain circumstances.


Additionally, he is suggesting that it not be a single entity or agency that holds the key, but that the key be “split” in such a way that multiple parties must work together to retrieve or reconstruct it. Typically this is done through an algorithm called Shamir secret sharing, which allows one to do things like give a separate secret to five different people such that any three of them can recover the master secret (“three of five”). I really, really want to write about how Shamir secret sharing works, but I must leave that for another day.
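For the curious, the core idea does fit in a few lines. In this toy Python sketch (illustrative only, nowhere near production quality), the secret becomes the constant term of a random polynomial over a prime field, each share is a point on that polynomial, and any three points recover the constant term by Lagrange interpolation at zero:

```python
import random

P = 2**127 - 1  # a prime large enough to hold a 16-byte secret

def split(secret: int, k: int, n: int) -> list:
    """Produce n shares of `secret`; any k of them suffice to recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):  # Horner's rule
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares: list) -> int:
    """Lagrange interpolation at x = 0 yields the constant term: the secret."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total
```

With fewer than k shares, every candidate secret remains equally likely, which is what makes the scheme so attractive for protecting an escrowed key from any single party.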

Although this kind of key splitting for the escrowed key is a good thing to help protect it from theft or abuse, it does nothing to address its implications for the security design of some application which must comply with it. So let me repeat again that these sorts of proposals have implications for the security design of systems that comply.

Vital Technicalities

There are a number of technical facts that policy makers should understand:

  1. Software and hardware cannot distinguish between good guys and bad guys.
  2. Back doors pose a direct risk to all users.
  3. Designs that enable back doors (whether or not a back door is present) are weaker than systems which preclude back doors.
  4. There is no useful and coherent way to distinguish between cryptographic tools for communication and those not for communication.

I am mostly going to talk about number 3 on that list. This is my point that security designs that make it hard to insert a back door are more secure than designs in which it is easy. But let me briefly address the other ones.

Good guys and bad guys

One of the interesting phrases in the Washington Post editorial back in October was the notion that the golden key could only be used when a court has approved a search warrant. This isn’t actually as ridiculous as it first seems if we consider that the relevant court might hold part of a split key. But a cryptographic system only knows whether it has been given keys that work or not; it cannot decide whether the person using that key is using it properly or came upon it through legitimate means.

1Password, for example, only knows if you have provided the correct Master Password. It doesn’t know if you are a good guy or a bad guy. It doesn’t know if you obtained the Master Password through torture. It doesn’t know if you are a photogenic hero who needs to decrypt the data to save the world from destruction by Dr No. These are simply not the kinds of things that software can know. As clever as we may be, we cannot build software that will “let the good guy in.” Instead we build systems that let the holder of the correct Master Password in and nobody else.
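In other words, the only test available to software is mechanical. This minimal sketch (illustrative Python; 1Password’s real unlock works by decrypting your data, not by comparing against a stored verifier) shows the shape of that test: derive a key from whatever password was typed and check whether it is the right one. Nothing in it can ask who is typing.

```python
import hashlib
import hmac
import os

def derive_key(master_password: str, salt: bytes) -> bytes:
    """Slow key derivation from the Master Password."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 100_000)

def unlock(master_password: str, salt: bytes, verifier: bytes) -> bool:
    """The only question software can answer: does this password
    produce the right key? Not: is this the rightful user?"""
    return hmac.compare_digest(derive_key(master_password, salt), verifier)
```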

Inherent risks

The most obvious risk of a back door is that the keys to the back door will be captured by “the wrong people.” The holders of the key to the back door need to protect it well, not only from outsiders but from misuse from themselves. This is an enormous topic that I will largely skip since it is widely discussed elsewhere. But I will point out that in the US, the court oversight of secret programs has not lived up to what law makers wished, and that if one government is allowed a back door, many other governments will insist on similar access.

Systems for Communication

As mentioned above, Prime Minister Cameron expressed interest in “communication” and, so, perhaps, is envisioning rules that would apply only to systems that are used for communication. Perhaps text messaging systems would be subject to his rules that they must be readable by the British government, but Full Disk Encryption (FDE) systems like Bitlocker or FileVault would not be. The difficulty with taking such an approach is that even FDE systems could be used for secret communication. Patty may encrypt a disk and send the physical disk to Molly. Sure, Patty and Molly may have preferred to use tools better suited for communication, but if no such secure tools are available, they will make do with others.

Indeed this reflects the fact that cryptographers don’t typically distinguish between the case where Alice encrypts a message for Bob and the case where Alice encrypts a message for herself to decrypt at some later time. Communicating securely with a separate person is a lot like communicating securely with yourself in the future, and so tools that help with the latter can be co-opted to do the former.

Doors and architectures

I would now like to return to the central point I am trying to make. The kinds of security architectures in which it is easy to insert a back door are typically less secure than the security architectures in which it is hard to insert a back door.

This is a fundamental part of security engineering. By using strong encryption with keys that only the end user has access to, a huge number of potential attacks are suddenly off the table. As Matthew Green, a cryptographer at Johns Hopkins University, wrote in an article on Slate discussing the reaction to Apple’s statement:

Apple is not designing systems to prevent law enforcement from executing legitimate warrants. It’s building systems that prevent everyone who might want your data – including hackers, malicious insiders, and even hostile foreign governments — from accessing your phone. This is absolutely in the public interest. Moreover, in the process of doing so, Apple is setting a precedent that users, and not companies, should hold the keys to their own devices.

Apple isn’t designing iOS security with the aim of thumbing their noses at law enforcement. They are following good design principles that protect your data. Likewise, when we design our products so that only you can decrypt your data, we are doing so to protect you from those who would read your data without your consent. As described above, no software can determine the intent of the people using it.

Doors must lead somewhere

A back door can pretty much only be placed into a system at a point where that system has a secret, such as an encryption key, in memory. Otherwise it is a door to nowhere. The parts of a system that require the most protection are the ones that handle the secrets. A principle of security design is to keep those portions of the system as small as possible.

Let’s consider software bugs. Continuing with our metaphor of doors, we can imagine a software bug as not so much another door but as a weakness that allows an attacker to break a hole in a wall. The attacker manages to go around the doors to get to the secrets.

The fewer places that secrets are held, the fewer the number of places where a dangerous vulnerability can occur. If the rooms with the secrets are small, there is less wall area to attack. So good security design means reducing the number of places and times where secrets are held. Great security design places all of those secret-holding components under the user’s control. Naturally, we strive for great design in our own products.

The technical jargon for this is “attack surface.” Good security design seeks to limit the attack surface, and therefore inherently limits the ways in which a back door could be inserted into a system. By building systems that preclude back doors in most places, we are also preventing a large class of accidental vulnerabilities.

Secrets under your control

One of the most important ways to achieve good security design is to make sure that your decrypted secrets never leave the system without your consent. In the case of 1Password, you may export your data, you may copy a password out of an item, you may use the 1Password extension to fill Login credentials into a web browser. But each of those is an action that you choose to take.

This is a slightly more general notion of what is meant by “end-to-end” encryption. Your encryption keys (the secrets that are derived from your Master Password) never leave your computers or devices and are only used when you want them to be used. Your encryption keys are created on your own devices and never leave your device unencrypted.

That sort of end-to-end encryption is essential to your security. It means that the only attacks that could ever be launched off of your system would involve guessing your Master Password. As a consequence, a back door could only be placed in the software running on a device under your control. By using end-to-end encryption we have dramatically narrowed down the attack surface. A side effect of this is that we also limit the places into which a back door could be inserted.

Where it would have to go

It appears that Admiral Rogers is advocating a key escrow system. Cryptographic tools would use strong encryption and would use strong keys, but the government would have a copy of the keys. His proposal of requiring multiple entities to unlock the escrowed key does make it harder to steal those keys from the government, but it does not stop this from being a key escrow system.

Even if we were fully confident that those keys would be stored safely and would only be used appropriately, the question of security architecture remains. Let’s look at 1Password for an example:

When you create a new vault (or even a new item) in 1Password, 1Password running on your machine will generate random cryptographic keys. We at AgileBits never have the opportunity to see those keys. Nor does anyone else. This is an example of what I meant when I said above that great security design places all of the secret-holding components under the user’s control. The creation and handling of those keys happens only on your machine.

Under 1Password’s design, the only way to comply with key escrow would be to send a copy of the key to some government controlled entity when the key is created or after you have entered your Master Password (when these keys are decrypted on your machine). Roughly speaking, 1Password would have to send your Master Password (or keys derived from it) to some government entity. But because these only exist on your system (and not ours) it would have to be your system that is sending the information.

You can control what is transmitted from your computer. Sure, it may take technical skill to do so, but this is something that neither we nor a government can prevent you from doing. Indeed, in the unlikely event that we are ever required to produce a version of 1Password or Knox that would transmit your data to another system, we would display a huge notice to you about what is happening.

There might be more reliable ways in which we could (be forced to) comply with a key escrow scheme, but each of them involves weakening the overall security architecture of 1Password. It would mean that our software would only work if someone other than you had access to your keys. That is not how we build things.

This example should illustrate that the strongest security architectures cannot reliably participate in key escrow. This means that it is often a mistake to frame the discussion as a “clash between privacy and security.” We weaken many kinds of security when we weaken privacy protections.

Law enforcement is right to want a back door

The October [2014] Washington Post article that I keep referencing is absolutely correct when it says,

Law enforcement officials deserve to be heard in their recent warnings about the impact of next-generation encryption technology on smartphones, such as Apple’s new iPhone.

Those voices do need to be heard. So let’s start with them.

Law enforcement officials rightly want to be able to actually get at data that they have the legal right to acquire.

Suppose that Molly, one of my dogs, is suspected of kidnapping, torturing, and even eating rabbits. (Molly, I’m sorry if some of my social media posts have implicated you in an FBI investigation, but your behavior was suspicious.) Also suppose that the FBI has good reason to suspect that Molly may even be taking pictures of her victims. The FBI should have little difficulty obtaining a warrant to confiscate and search Molly’s iPhone. If Molly has set a decent passcode for the device and has not leaked those photos off of her phone, then the FBI will have no means whatsoever (other than compelling Molly to reveal her passcode, which is a whole different set of very confused legal issues in the US) to get the evidence they need to lock Molly up in a crate. More bunnies will suffer and die as a consequence of the security design of iOS and the iPhone.

This isn’t as funny when we switch our example away from Molly and rabbits to the sorts of things that the FBI does investigate. Giving people access to encryption that law enforcement can’t break will mean that some investigations are harder, some never get solved, and some prosecutions will fail. There will be times when some very bad dogs get away with their crimes because of this.

It is no surprise that those given the task of fighting crime do not want to encounter encryption that they can’t break. Indeed, if they didn’t seek back doors into such systems they might not be doing their jobs. But this isn’t a question for law enforcement to decide on their own. It is a question for the public and for policy makers.

You can’t always get what you want

Just because something would be useful for law enforcement doesn’t mean that they should have it. There is no doubt that law enforcement would be able to catch more criminals if they weren’t bound by various rules. If they could search any place or anybody any time they wished (instead of being bound by various rules about when they can), they would clearly be able to solve and prevent more crimes. That is just one of many examples of where we deny to law enforcement tools that would obviously be useful to them.

Quite simply, non-tyrannical societies don’t give law enforcement every power that law enforcement would find useful. Instead we make choices based on a whole complex array of factors. Obviously the value of some power is one factor that plays a role in such a decision, and so it is important to hear from law enforcement about what they would find useful. But that isn’t where the conversation ends; it is where it begins.

Whenever that conversation does take place, it is essential that all the participants understand the nature of the technology: there are some things that we simply can’t do without deeply undermining the security of the systems that we all rely on to keep us safe.


1Password 4.2 for Android: It’s Out of this World

It’s not often in the life of an application that one gets the opportunity to draw inspiration from one of the greatest and most hilarious sci-fi stories of all time. Today, we are incredibly honoured and excited to present 1Password 4.2 for Android. With a custom keyboard; automatic filling in web browsers and third-party apps; and built-in support for viewing time-based one-time passwords (TOTPs), our newest version of 1Password for Android promises to be approximately 420% more useful than a towel. Read more


Introducing the new Strong Character Generator!

Creating secure passwords for you is one of the most important things that we do here at AgileBits. We’re constantly looking for ways to improve 1Password to make it more secure and more convenient.

Today, we believe we’ve come up with an innovative solution that will allow users to create secure passwords … even in the face of unreasonably low character limits.

Security, in eight characters.

Too many websites today still restrict passwords to 8 or 10 characters, when we all know that 8-10 alphanumeric characters just isn’t going to cut it with today’s password cracking technology.

The next update to 1Password will include a brand new addition to our Strong Password Generator that introduces a whole new method of creating secure passwords: a random character generator.

Passwords: Now with random-er characters!

Our new character generator gives you security with only a few characters.

Because attackers do not have facial recognition built into their algorithms, these passwords will thwart even the craftiest crackers.

We’re introducing this feature first on 1Password for Mac, with a limited character set, and we’d love your help in making this tool even more secure-ier! Check out our list of available characters and let us know which ones you’d like to see added next. We’re taking votes and will add characters based on popularity.

Never have seven characters looked more secure.

As awesome as an ‘all-wil-wheaton’ password looks, it’s not quite as secure as a randomly generated character set, so please use sparingly.

Who’s missing here? Let us know which characters you’d love to see in the next update!

Bcrypt is great, but is password cracking “infeasible”?

There are a lot of technical terms that mean something very specific to cryptographers but often mean something else to everyone else, including security professionals. Years ago I wrote about what it means to say that a cipher is “broken”. Today’s word is “infeasible”.

The news that sparked this lesson is the use of “computationally infeasible” in an announcement by Slack. Slack has announced that their hashed password database had been compromised, and their message was excellent: they clearly described what was available to attackers (usernames, email addresses, hashed passwords, and possibly phone numbers and contact information users may have added); they offered clear and useful instructions on what users should do (change passwords, enable two-step verification); and they described what they have done and what they will be doing. And – most relevant for the technical discussion here – they have told us how the passwords were hashed.

In this case they said:

Slack’s hashing function is bcrypt with a randomly generated salt per-password which makes it computationally infeasible that your password could be recreated from the hashed form.

It is terrific that they chose to use bcrypt for password hashing. bcrypt is among the three password hashing schemes that we recommend for sites and services that must store hashed passwords; the other two are PBKDF2 and scrypt. But Slack’s use of the term “computationally infeasible” here illustrates that one must be very careful when using cryptographic technical terms.

If you have a weak or reused password for Slack, change it immediately. Here is a guide to using 1Password for changing a password. And because the Slack app on iOS makes use of the 1Password App Extension, it is easy to use a strong and unique password for Slack.

If you would like to see how to use Slack’s two-step verification with 1Password take a look at our handy guide on doing just that.


But now back to what is feasible with password hashing.

One-way hashing

When a service that you log into stores your password, it should never store it as unencrypted “plaintext”. If passwords are stored as plaintext, anyone who can get their hands on that data file can learn everyone’s passwords. For example, Molly (one of my dogs) uses the same password on Poop Deck as she does on Barkbook. So if Patty (my other dog) learns Molly’s Poop Deck password, she can use it to break into Molly’s Barkbook account as well. This is why it is important not to reuse passwords.

Now suppose that Molly uses the password “rabbit” on Barkbook. (Have I mentioned that Molly is not the smartest dog in the pack?) Barkbook shouldn’t store just “rabbit”, but instead should store a one way hash of rabbit. A cryptographic hash function will transform something like “rabbit” into something like “bQ67vc4yR024FB0j0sAb2WKNbl8=” (base64 encoded).

One of the features of a cryptographic hash function is that it should be quick and easy to compute the hash from the original, but infeasible to perform the computation in the other direction. That is, it should be pretty much impossible to go from “bQ67vc4yR024FB0j0sAb2WKNbl8=” back to “rabbit”. And it is.
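To make the idea concrete, here is a minimal Python sketch of one-way hashing. I’m using SHA-1 with base64 encoding purely for illustration (the string above has the shape of a base64-encoded 20-byte digest, which is what SHA-1 produces); as discussed below, real password storage should use a slow hash instead.

```python
import base64
import hashlib

def hash_value(original: str) -> str:
    """One-way hash: quick to compute forward, infeasible to invert
    when the input is chosen at random."""
    digest = hashlib.sha1(original.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

# Hashing is deterministic and fast in the forward direction...
print(hash_value("rabbit"))
# ...but there is no function that goes from the hash back to "rabbit".
```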

Guessing versus reversing

With any halfway decent cryptographic hash function, it is infeasible to compute the original from its hash if the original is chosen at random! But if you can make some reasonable guesses about the original, then you can use the hash to check your guesses. Because passwords created by humans are not chosen at random, it becomes computationally feasible (and often quite practical) to discover the original based on the hash.
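Here is a hedged Python sketch of such a guessing attack; the hash_value helper and the guess list are invented for illustration. The point is that the attacker never inverts the hash at all:

```python
import base64
import hashlib

def hash_value(original: str) -> str:
    """One-way hash (SHA-1 + base64, for illustration only)."""
    digest = hashlib.sha1(original.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

# What Barkbook has on file for Molly:
stored_hash = hash_value("rabbit")

# Patty never reverses the hash; she hashes likely guesses and compares.
guesses = ["password", "123456", "kibble", "rabbit", "squirrel"]
recovered = next((g for g in guesses if hash_value(g) == stored_hash), None)
print(recovered)  # the guess that matched: "rabbit"
```

No amount of one-wayness helps here: the hash is only being computed in the easy direction.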

The actual formal definition of “one-way” for a cryptographic hash function, H(x), includes the requirement that x be the output of a uniformly distributed sampling of the domain of H. That is, considering all of the things that you can hash (under some set length), you need to pick something at random. Otherwise a hash function might be invertible. Human-created passwords do not meet that requirement, and so the “computational infeasibility” of inverting a one-way function isn’t applicable when its input is not chosen at random.

So now let’s correct Slack’s statement:

Slack’s hashing function is bcrypt with a randomly generated salt per-password which makes it computationally infeasible that a randomly created password could be recreated from the hashed form.

Modified Slack statement.

This, of course, is why you should use 1Password’s Strong Password Generator for creating your passwords. When your password is chosen randomly from a large set of possibilities, it really is computationally infeasible to discover the password from the cryptographic hash.

Slowing down guessing

I mentioned that (for now) bcrypt, scrypt, and PBKDF2 are good choices for password hashing. Once the final results are in from the Password Hashing Competition and the dust has settled, we will probably have a good successor to those three. These are built upon cryptographic hash functions, but are designed specifically for hashing input that is not selected randomly.

Because cryptographic hashing is something that we have computers do a lot of, one of the things that we want is that it be fast. We want to be able to perform lots and lots of SHA-256 hashes per second without straining a computer’s memory. But if an attacker is going to be guessing passwords to see if they produce the right hash, we want to slow down the hashing. PBKDF2, scrypt, and bcrypt are all designed to require much more computation than a regular hash function to compute a hash. This can slow down an attacker from performing millions of computations per second to just thousands. The actual speed depends on many things, including the hardware that the attacker brings to bear on the system. scrypt, additionally, places specific demands on memory.
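Python’s standard library happens to expose PBKDF2 directly, which makes the slow-down easy to demonstrate. The 100,000-iteration count and 16-byte salt below are illustrative choices, not recommendations:

```python
import hashlib
import os

salt = os.urandom(16)  # random per-password salt, as in Slack's description

# Each verification (and each attacker guess) now costs roughly
# 100,000 HMAC-SHA256 computations instead of a single hash.
key = hashlib.pbkdf2_hmac("sha256", b"rabbit", salt, 100_000)

# The same password and salt always derive the same key...
assert key == hashlib.pbkdf2_hmac("sha256", b"rabbit", salt, 100_000)
# ...while per-password salts defeat precomputed lookup tables.
assert key != hashlib.pbkdf2_hmac("sha256", b"rabbit", os.urandom(16), 100_000)
```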

So the use of bcrypt means that attackers will need to do more work than they otherwise would to guess passwords stolen from Slack. That is a good thing, but it is not an “infeasible” amount of work.

What’s infeasible?

I started out by saying that I was going to talk about the word “infeasible”, but so far I have just been using it a lot. This is because its definition is abstract, subtle, and hard. I am not going to give a full definition, but I am going to try to get reasonably close. The discussion that follows is inherently technical, and nobody will blame you if instead of reading further you just wish to watch us pour ice water over ourselves. (Remember, that was a thing just last year.)

Welcome back to this article. It gets progressively more arcane from this point onward.

The notion of infeasible depends on the relationship between the amount of work the defender has to do to secure the system compared to the amount of work that the attacker has to do to break it. A bank vault may take a minute to unlock if you know the combination, but it may take days to break through if you don’t. With cryptographic systems it can take just a fraction of a second to decrypt data if you have a key, but many times the age of the universe to do so if you don’t have the key.

Security parameters

What we want is the amount of work the attacker has to do to be vastly disproportionate to the work that the defender must do. It turns out that this can be stated mathematically, but first we need to introduce the notion of “security parameter” if we want our definition to stand the test of time instead of depending on the speed and power of current computers. So we will talk about how much work the defender and the attacker have to do in proportion to some security parameter.

Let’s pick, for purposes of exposition, an encryption system that operates at a security parameter of 56. The amount of computation that the defender has to do to decrypt some data with the key is proportional to 56, but the amount of work that the attacker has to do to decrypt the data without the key is proportional to 2⁵⁶. Fifty-six is much, much smaller than 2 raised to the 56th power, but today even 2⁵⁶ operations is within the reach of many attackers. Thirty years ago it was within the reach of just a few.

So now let’s suppose that we want to double this security parameter to 112. How much of a work increase might this cause the defender? You might be thinking that it doubles the cost to the defender, but the system I’m thinking of actually tripled the cost to the defender. Tripling the cost for double the security parameter may not seem like a good deal, but doubling the security parameter multiplied the work of the attacker by another factor of 2⁵⁶, for a total of 2¹¹². This puts it well outside the reach of even the most resourceful attacker for a long time to come.
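The arithmetic behind those claims is easy to check, since Python integers have arbitrary precision:

```python
# Attacker's work at each security parameter (proportional to 2**n).
work_56 = 2**56
work_112 = 2**112

# Doubling the parameter multiplies the attacker's work by another 2**56...
assert work_112 == work_56 * 2**56

# ...a factor of about 72 quadrillion, while the defender's cost
# (in the system described above) merely tripled.
print(work_112 // work_56)
```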

When we doubled the security parameter in that example, the work to the defender increased linearly while the work to the attacker increased exponentially. We want the work required of the attacker to increase exponentially with the security parameter while for the defender we increase it linearly or polynomially.

Doing time, time, time in an exponential rhyme

If the security parameter is n, we will tolerate it if the amount of work the defender must do is proportional to nᵃ for some constant a ≥ 1. That’s what we mean when we say the work is “polynomial in n”. So if the work goes up with the square or cube of n we might grumble and seek more practical systems, but no matter how big the power that n is raised to gets, this is still a polynomial expression. An algorithm that works this way is called a “polynomial time algorithm”.

For the attacker we want the number of computations needed to be proportional to an expression in which n is in the exponent: proportional to bⁿ for some b > 1, so that the work is exponential in n. (Those of you who know this stuff know that I’m leaving some things out and am taking some shortcuts.)

It might seem that a “big” polynomial gets us bigger numbers than a “small” exponential, but no matter how much a polynomial function starts out ahead of an exponential, the exponential will always catch up. Let’s compare the exponential y=1.1ˣ with the polynomial y=x⁶ + 2. For values of x below a few hundred, it looks like the polynomial is the runaway winner.

Plot of polynomial taking early lead over exponential

But we inevitably reach a point where the exponential function catches up. For the particular example I’ve given, the exponential catches up with the polynomial when x is about 372.73.
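You can find that crossover point with a few lines of Python; the 372.73 figure above corresponds to the first integer, 373, at which the exponential has pulled ahead:

```python
def crossover(base: float = 1.1, power: int = 6, offset: int = 2) -> int:
    """Smallest positive integer x where base**x exceeds x**power + offset."""
    x = 1
    while base**x <= x**power + offset:
        x += 1
    return x

print(crossover())  # → 373, just past the crossing point
```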

Plot with exponential catching up

Finally, if we go just a bit beyond the point where the exponential overtakes the polynomial, we see that the exponential completely flattens the polynomial.

Plot on scale where exponential flattens polynomial

Some computations will take a number of steps that are polynomial in n (“polynomial time algorithms”), and others will be exponential (“exponential time algorithms”). We say that a task is infeasible if there is no polynomial time algorithm to complete it with a non-negligible chance of success. I have not defined what a non-negligible chance of success is, but as the article appears to be growing in length exponentially, I will leave that discussion for our forums.

When we have this sort of asymmetry, where the work done by the attacker grows exponentially with the security parameter but at most polynomially for the defender, there will always be some security parameter beyond which the work to be done by the attacker is so enormously larger than what the defender must do as to just not be an option for any attacker.

Quibbling over terminology

Now that we have a workable definition of “infeasible” and a better understanding of what cryptographic hash functions do, we can take a closer look at Slack’s statement. First let me repeat that their overall statement was excellent, and I fully sympathize with the difficulty involved in writing something about security that is correct, clear, and usable. I’ve taken some shortcuts in my exposition on any number of occasions, and I’ve made my share of errors as well. My point here is not to criticize but instead to use this as an opportunity to explain.

Given what we believe about cryptographic hash functions, it is infeasible to discover x if you are only given the hash of x, but only if x is chosen at random. Furthermore, this is true of any (decent) cryptographic hash function and is not limited to the slow functions that are recommended for password hashing. That is, we don’t need bcrypt or PBKDF2 for that property to hold.

The limits of slow hashes

Slow hashes – specifically designed for password hashing – are built because we know that passwords are not chosen at random and so are subject to guessing attacks. But slow hashes have their limits, and with the notions introduced above, we can now talk about them more clearly. Using a slow hash like PBKDF2 slows things down for both the attacker and the defender, and the amount of slow-down is roughly the same for each.

If we increase the security parameter (number of iterations) for PBKDF2 the computational cost rises linearly for both the attacker and for the defender. This is unlike the security parameters we use elsewhere in cryptography, where we would like a small (linear or perhaps polynomial) increase in cost to the defender to create a large (exponential) increase for the attacker.

Let’s see how that works out with a concrete, if hypothetical, example. Suppose a system is using 40,000 PBKDF2 iterations. Now suppose that you add a truly randomly chosen digit to the end of your Master Password. Adding a single random digit will make an attacker do 10 times the amount of work that they would have to do to crack the original password. Adding two digits would make the attacker do 100 times the work of the original. Making a password just a little bit longer (with random stuff) makes the work required by the attacker increase exponentially. That is the kind of proportion of work that we like.

Now suppose 1Password uses 40,000 PBKDF2 iterations in deriving encryption keys from your Master Password. To get the same additional work as adding a single digit to your password, you would need to increase the number of PBKDF2 iterations to 400,000. And to get the equivalent of adding two digits, you would need to increase the number of iterations to 4,000,000. Once we have a goodly number of PBKDF2 iterations, there isn’t that much gained by increasing it by an additional ten or twenty thousand. But there is much to be gained by even a small improvement in a Master Password.
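We can put rough numbers on that comparison. The guess count below is a made-up figure for illustration; only the proportions matter:

```python
iterations = 40_000   # hypothetical PBKDF2 iteration count
guesses = 10**8       # made-up size of the attacker's search space

base_work = guesses * iterations

# One extra random digit multiplies the attacker's search space by 10...
work_extra_digit = (guesses * 10) * iterations
# ...which the defender can only match with a 10x jump in iterations,
# a cost the defender then pays on every single unlock.
work_more_iterations = guesses * (iterations * 10)

assert work_extra_digit == base_work * 10
assert work_more_iterations == base_work * 10
```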

PBKDF2 is terrific, and it is an important part of the defense that 1Password offers if an attacker gets a hold of your encrypted data. But you must still pick a good Master Password because the security parameter is linear for both the defender and the attacker. Unless there is a breakthrough in the slow hashing competition, a strong Master Password will always be required in order to ensure your security can withstand the test of time.

1Password 5.3 for iOS: The Extended Brainiac Edition is out!

This major, free update to 1Password for iOS is so awesome, we thought about pulling a Harry Potter and releasing it in two parts. But when Apple told us Daniel Radcliffe wasn’t available, and they didn’t even have his number in the first place, we just had to give it all to you at once.

A 400 percent better App Extension

You know how our App Extension can fill Logins into Safari, our own 1Browser, and hundreds of other apps with a single tap? Now it can also:

  • fill Identities
  • fill Credit Cards
  • create new Logins when you’re signing up for new services
  • show all Logins if none are found for the current app (App Extension only)

It’s all in the name of saving you even more time when logging in and, now, when filling long forms and shopping carts.

A brand new Brain

We affectionately call 1Password’s under-the-hood tools and form-filling logic the “Brain,” and we gave it a huge upgrade in 5.3. It’s much smarter about matching websites and subdomains and fills forms even faster.

We need to talk

There is so much great stuff going on with 1Password that we added a new Message Center to keep you in the know. It brings you 1Password news and tips right in our in-app Settings. Don’t worry, Push Notifications need not apply.

So, so much more

We added Large Type so you can view usernames and passwords in Jumbo Size, and we fixed a couple of Zoom Mode bugs and a crash for iPhone 6 Plus users. Truly, there is a mountain of improvements you can check out in the full release notes.

Our free 1Password 5.3 for iOS update is now live in the App Store, so take it for a spin and let us know what you think on Twitter, Facebook, and in our newly redesigned forums!