
How 1Password for Teams protects your secrets

Since this is my first AgileBits byline, allow me to introduce myself. Last month, I joined the awesome security team here at AgileBits. I’m super excited to work with Jeffrey Goldberg, our Chief Defender Against the Dark Arts, and Jessy Irwin, our resident Security Evangelist. I aim to review product security and keep bad things from happening to good people. In addition, I write readable things: I’ve got a number of blog posts on deck that I look forward to sharing with you fine folks.

With pleasantries exchanged, let’s talk about 1Password for Teams, and about how your privacy and the security of your data are of the utmost importance to us. We are able to offer the great new features of 1Password for Teams by providing it as a service. If you are using 1Password but don’t have a 1Password for Teams account, your existing vaults remain unchanged, whether you sync them using Wi-Fi, Dropbox, or iCloud. While we have made some significant changes to how your data is stored in 1Password for Teams, our commitment to security and privacy has not changed.

How 1Password for Teams keeps your data safe

When we set out to build 1Password for Teams, our first concern was that our cryptography and security be absolutely top notch. I mention them both because they work hand in hand to keep your data secure. We opted for security that is enforced by cryptography instead of software or personnel policy.

Cryptography is what makes your data completely worthless to hackers. It is our cryptography that ensures that even if someone were to hack into our servers they would be able to access nothing more than a bunch of random numbers.

Security is what ensures that there are no back doors or vulnerabilities in the code, and that the policies we set are actually enforced by the operating system. Specifically, it is the assurance that there are no workarounds or hidden ways into our servers.

Private by Design

We take the “privacy by design” approach because we believe that we can best protect your secrets by not knowing them. It is impossible to lose, use or abuse data one doesn’t possess. Therefore, we designed systems that reduce the amount of sensitive user data we can access or acquire.

Triple-Layer Cake

1Password for Teams stores your encrypted data on our servers, but neither your Master Password nor your Account Key is ever sent to our servers over any network. This means that we do not actually have the ability to decrypt your data. That is because decrypting your data requires all three of the following:

  • Your Master Password
  • Your Account Key
  • The Secure Remote Password (SRP)

If you use 1Password, you are already very familiar with the Master Password and its role in protecting your data. Let’s talk about the other two pieces of the puzzle: the Account Key and the Secure Remote Password.

The purpose of the Account Key is to protect your data from being decrypted by someone who might access or compromise our servers. It ensures that a password-guessing attack against your data is useless: even if an attacker were to correctly guess your Master Password, the vault would not unlock without the Account Key.
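To make that concrete, here is a rough sketch in Python of how two secrets can be mixed into a single encryption key. To be clear, this is an illustration of the idea only, not our actual derivation; the precise scheme we use is specified in the white paper linked at the end of this post.

    # Illustration only: 1Password for Teams' actual key derivation is
    # specified in our white paper and differs in its details.
    import hashlib
    import os

    master_password = b"correct horse battery staple"   # chosen by a human
    account_key = os.urandom(32)   # high-entropy secret created on your device
    salt = os.urandom(16)

    # Slow hashing stretches the human-chosen Master Password...
    stretched = hashlib.pbkdf2_hmac("sha256", master_password, salt, 100_000)

    # ...and mixing in the Account Key means that guessing the Master
    # Password alone gets an attacker nothing.
    encryption_key = hashlib.sha256(stretched + account_key).digest()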

The Secure Remote Password (SRP) is a way for both the client and the server to authenticate each other without either revealing any secrets. The SRP encrypts all traffic over the network and verifies the authenticity of the remote server before sending your information over TLS/SSL.
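If you are curious what “authenticate without revealing any secrets” looks like, here is a toy version of the core SRP math in Python. This is a simplified classroom sketch, not the exact protocol we use, and the group parameters are deliberately toy-sized; the white paper has the real details.

    # Toy SRP-6a sketch. Toy group parameters for readability; real
    # deployments use standardized large safe-prime groups (RFC 5054).
    import hashlib
    import secrets

    N = 2**127 - 1   # toy prime modulus; far too small for real use
    g = 3

    def H(*parts):
        h = hashlib.sha256()
        for p in parts:
            h.update(str(p).encode())
        return int.from_bytes(h.digest(), "big")

    k = H(N, g)

    # Enrollment: the server stores a salt and a verifier -- never the password.
    password = "correct horse battery staple"
    salt = secrets.randbits(64)
    x = H(salt, password)
    v = pow(g, x, N)

    # Sign-in: each side sends only a public value...
    a = secrets.randbelow(N); A = pow(g, a, N)                 # client -> server
    b = secrets.randbelow(N); B = (k * v + pow(g, b, N)) % N   # server -> client
    u = H(A, B)

    # ...yet both derive the same session key. The password never crosses the wire.
    S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
    S_server = pow(A * pow(v, u, N) % N, b, N)
    assert S_client == S_server
    session_key = H(S_client)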

In Math We Trust

These three pieces of information work symbiotically to protect your data. The Account Key strengthens your Master Password exponentially. And since it never gets sent over the network, it can’t be reset, intercepted, or evaded. In fact, I would be happy to print out a 2D barcode of all of the information in my 1Password for Teams personal vault and tape it to my front door. And if you knew me, you would know that this is a very big deal.

Still have questions? You can read all of the details of how we secure your data and why we made the decisions we did by reading our White Paper (PDF). Please also leave us a comment below or join the conversation in our discussion forums. We love hearing from you!


When a Leak Isn’t a Leak

Over the weekend, Dale Myers wrote a blog post that examined our .agilekeychain format. The post featured a good discussion and analysis of our older data format, but it raised some questions among 1Password users and the wider technology community.

Dale states that he plans to continue using 1Password and has no concerns over the safety of his passwords themselves; his main concern is how the AgileKeychain handles item URLs. While we documented this design decision widely and shared it publicly, Dale was surprised to find that we didn’t encrypt URLs within the keychain. We want to reassure users who rely on AgileKeychain that their password data is safe and secure, and to take the time to walk through our data formats to explain the issue completely.

AgileKeychain & OPVault Data Formats

Back in 2008, we introduced the AgileKeychain as a way to help our users better synchronize data across platforms and devices. At that time, the devices 1Password ran on had significantly less processing power to draw on for tasks like decryption, and doing something as simple as a login search would cause massive performance issues and battery drain for our users. Given the constraints that we faced at the time, we decided not to encrypt item URLs and titles (which resembled the same sorts of information that could be found in browser bookmarks).

In December 2012, we introduced a new format that encrypted much more of the metadata. OPVault, our newer and stronger data format, provided authenticated encryption as well as many other improvements for 1Password users.
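For readers who like to see the shape of such things, here is a sketch of the Encrypt-then-MAC construction that authenticated formats of this kind are built on. It is a simplified stand-in, not OPVault’s exact layout (that is documented separately), and it uses the third-party cryptography package.

    # Sketch of Encrypt-then-MAC: encrypt first, then authenticate the whole
    # ciphertext, so tampering is detected before anything is decrypted.
    # Requires the "cryptography" package (pip install cryptography).
    import hashlib
    import hmac
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives.padding import PKCS7

    enc_key, mac_key = os.urandom(32), os.urandom(32)

    def seal(plaintext: bytes) -> bytes:
        iv = os.urandom(16)
        padder = PKCS7(128).padder()
        padded = padder.update(plaintext) + padder.finalize()
        enc = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
        ct = iv + enc.update(padded) + enc.finalize()
        # The MAC covers the IV and ciphertext, so any modification is caught.
        return ct + hmac.new(mac_key, ct, hashlib.sha256).digest()

    def open_sealed(blob: bytes) -> bytes:
        ct, tag = blob[:-32], blob[-32:]
        if not hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest()):
            raise ValueError("authentication failed: data was modified")
        dec = Cipher(algorithms.AES(enc_key), modes.CBC(ct[:16])).decryptor()
        padded = dec.update(ct[16:]) + dec.finalize()
        unpadder = PKCS7(128).unpadder()
        return unpadder.update(padded) + unpadder.finalize()

    assert open_sealed(seal(b"hunter2")) == b"hunter2"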

This format worked well in situations where we didn’t need to worry about backwards compatibility, including iCloud and local storage on iOS and Mac. For Windows, Android, and Dropbox syncing, however, we needed to decide if we should migrate to the new format or provide compatibility with older versions of 1Password.

We decided to take a conservative approach and not automatically migrate everyone over to OPVault, because many users depend upon older versions of 1Password that cannot read the new format and would no longer be able to access their data. We knew we could trust the security of the AgileKeychain to protect confidential user data, so we didn’t want to rush into something that would disrupt people’s workflows.

Switching to OPVault

Despite the security of AgileKeychain remaining intact, Dale reminded us that it’s time to move on. The OPVault format is really great in so many ways, and we should start sharing it with as many users as possible.

We’ve already started making changes to use OPVault as the default format. In fact, the latest beta of 1Password for Windows does this already. Similar changes are coming to Mac and iOS soon, and we’re planning on using the new format in Android in the future. Once all of these things are complete, we will add an automatic migration for all 1Password users. For users who would like to switch to OPVault sooner than this, here’s how you can get started immediately:

To avoid losing access to your data, be sure to back up your 1Password data beforehand, and only follow these instructions if you are NOT using any legacy versions of 1Password. If you have any questions or concerns, or would like to migrate but aren’t sure if your version of 1Password is affected, our knowledgebase, forums and support team are here to help.


Quick Tip: iOS 9 Spotlight search and 1Password

Some of the geekiest arguments I’ve ever heard have been over the way people organize apps on their iPhones and iPads. I keep my most heavily used apps on my main screen, then shove almost everything else into folders on my other screens.

The reason I can do this is because of the wonders of Spotlight search. It’s easy for me to search for and launch the app I want to use, so I don’t have to spend my mental energy trying to remember where I’ve put things.

Apple opened up Spotlight to third-party developers like us in iOS 9. My searches are now supercharged! I’ve gotta say, I love being able to find my 1Password items right from my iPhone’s home screen. I enabled Spotlight search in 1Password by going to Settings > General > Enable Spotlight Search. Now I can just pull down, type in part of the item’s title, then tap on its name in the search results. 1Password opens right to that item.

iOS 9 Spotlight search

You might have questions about the new Spotlight search and how it works with 1Password, so I put together some answers for you. If your question isn’t addressed, please let me know; I’ll be sure to update it in response to your feedback.

I’m also curious: what are your favorite iOS 9 features? Let me know in the comments!


Everything you need to know about 1Password and XcodeGhost

Over the past few days, security researchers from Palo Alto Networks discovered that 39 apps infected with malware found their way into the Apple App Store in China. Since the news broke, the malicious apps have been pulled from the App Store, and we’ve had a few questions about what this might mean for 1Password and password managers in general. To put your mind (and your passwords!) at ease, we’re answering some of the most common questions and concerns that iOS users have had about malware, compromised apps, and the security of 1Password.

So wait… what happened? How did this get in the App Store?

It’s kind of a long story, but we’ll make it short. In software development, there are many, many tools that can be used to build an app, and iOS developers rely on a compiler called Xcode as part of that process. A compromised version of that compiler, dubbed XcodeGhost, made its way onto the web in China and was downloaded by developers from untrusted sources. Every app built with the malicious compiler was modified to carry malicious code, which then snuck into the App Store. Though Apple works to review and screen apps for malware before they reach the App Store, in this case Apple confirmed that the attackers were able to make it through the review process without raising any red flags.

What does this malware do?

In general, most malware is designed to capture personal information and/or user credentials and send them back home to the attacker who compromised your device. While XcodeGhost does not directly affect the 1Password application, it can indirectly affect 1Password users through the device’s clipboard. In a post outlining the malware’s capabilities, senior malware researcher Claud Xiao noted that this particular strain could:

  • Prompt a fake alert dialog to phish user credentials
  • Hijack opening specific URLs based on their scheme, which could allow for exploitation of vulnerabilities in the iOS system or other iOS apps
  • Read and write data in the user’s clipboard, which could be used to read the user’s password if that password is copied from a password management tool.

Additionally, according to one developer’s report, XcodeGhost has already been used to launch phishing attacks that prompt a dialog asking victims to input their iCloud passwords.

Should I be worried? Does this affect me?

There are a few very specific factors that determine whether your device is at risk, but overall, this vulnerability is a rare occurrence for the App Store.

  • At present, this issue mostly affects devices using the Chinese App Store, though researchers have found compromised apps in the Canadian App Store as well.
  • The malware is only in applications built using a compromised code compiler. A list of affected apps can be found on the Palo Alto Networks blog, but security researchers believe that as many as 344 apps may be vulnerable to the attack.

Will 1Password protect my data if an app on my iPhone or iPad has been infected by XcodeGhost?

We have designed 1Password with your privacy in mind at all times. We use strong, reliable encryption and take many, many measures to make our application breach-resistant. Combined, the many layers of security we’ve implemented work together to secure your passwords and protect your most sensitive data, but if your device has been compromised, there’s almost nothing that 1Password can do to defend it. As previously stated in a post on malware by Jeffrey Goldberg, our Chief Defender Against the Dark Arts:

I have said it before, and I’ll say it again: 1Password […] cannot provide complete protection against a compromised operating system. There is a saying […] “Once an attacker has broken into your computer […], it is no longer your computer.” So in principle, there is nothing that 1Password can do to protect you if your computer is compromised.

Eek! My phone is infected with this— what should I do?!

First (and most importantly): don’t panic! There are a few simple things you can do to return things to normal. If you’re positive that you’re using an app that was affected, here’s what you can do immediately to protect your data:

  1. Delete the compromised app(s) from your phone. If you are uncertain about whether an app has been compromised, it’s okay to delete it out of an abundance of caution.
  2. Change any passwords that you think may have been compromised through your device’s clipboard. Any passwords that you may have accessed through the 1Password extension are safe from this strain of malware, and do not need to be changed.
  3. Avoid redownloading or reinstalling any of the compromised apps until they have been updated. When an update has been released, be sure to download it from a trusted source once the developer has officially confirmed that a new, secure version is ready for you to use. If you’re uncertain of this, you can visit the developer’s site or check with their support team for help.

The XcodeGhost vulnerability doesn’t directly affect 1Password: we have not used the malicious version of Xcode, and the malware it injects into applications was not designed to directly compromise or target our application. Though the malware in compromised apps on any platform has the potential to put any user’s credentials at risk, especially when it can access a device’s clipboard, all technology users benefit from the work security researchers do to find vulnerabilities like this.

If you’ve made it this far down the post and still have questions or concerns, please leave a comment here or start a conversation with us in our discussion forums. You can also reach out to us on Facebook and Twitter.


Jessysaurus Rex joins the AgileBits team!

An adventure 65 million years in the making

A couple of weeks ago, we introduced you to the wonder women of AgileBits, who make this company and 1Password what they are today. We’re happy to announce that a new member has joined that illustrious team. If you follow the world of online security, you may already be familiar with her (or at the very least with one of her security sign bunnies hopping around Twitter!).

JessysaurusRex - Jessy Irwin

Her name is Jessy Irwin, and she is an influential voice in the world of information security. She also happens to love dinosaurs. A published writer and presenter, Jessy champions online privacy and security and spends much of her time educating people about the need for strong, unique passwords; secure software development; and operational security (opsec). She works to raise security awareness among students and educators, and helps the average Internet citizen learn what they can do to keep themselves, their data, and their online identities secure. She’s an obvious choice and a natural fit for our team, and we’re so glad that she’s here. @1Password and Jessy have been each other’s Twitter boo for a long time, a courtship that culminated in a grand proposal. (Spoiler alert: She said yes!)

Thanks for the Storify and kind words, Matthew!

This week, Jessy was a guest on Threatpost’s Digital Underground podcast. She and host Dennis Fisher had a great discussion about passwords, student privacy, how Jessy got her start in the world of information security, and her new role at AgileBits. You can subscribe to the Threatpost podcast on iTunes or listen to Jessy’s episode on the Threatpost website.

If you’re interested in learning more about online security, I highly recommend following @1Password and Jessy on Twitter. Jessy frequently shares her thoughts on the latest tech developments (such as Wednesday’s Apple event) and how they might impact your security, as well as great articles and blog posts written by some of the smartest hackers and security researchers in the world. I enjoy following her on Twitter and having her do the work of curating all those interesting articles for me.


Improved locking in 1Password 5.5 for iOS

Security and convenience

One of the coolest features in 1Password for iOS is the extension. For nearly a year, it’s been really easy to log in to participating apps without having to copy and paste usernames and passwords. Shopping in Safari is also a breeze, now that you can add items to your cart, then fill in your credit card and address with just a couple of taps. The icing on this cake is that you can log in to 1Password using Touch ID instead of tapping out a PIN or your entire Master Password over and over again.

Integral to the extension is the 1Password Lock Service, which determines how often you’re prompted to unlock the app and whether you’re prompted to use quick unlock (Touch ID or PIN Code) or your full Master Password. Thanks to the feedback you’ve provided, the Lock Service has gone through a couple of transformations since iOS 8 was released last fall. The latest update to 1Password is no exception and includes some major improvements that we’re sure you’ll love!

Touch ID: The star of the show

When Apple announced Touch ID on the iPhone 5s in 2013, we knew it would be the perfect way to unlock 1Password for iOS quickly and securely. It took a year before we were able to integrate it, but it was definitely worth the wait!

1Password for iOS Touch ID lock screen

In previous versions of 1Password, cancelling the Touch ID prompt cleared your Master Password from the iOS Keychain, which meant that you would have to enter your Master Password before you could use Touch ID again. This was inconvenient, especially when your goal was just to dismiss the Touch ID prompt without unlocking 1Password.

In version 5.1, we decided to force quit the main app and dismiss the extension when the Touch ID prompt was canceled. It seemed like a good idea, but it was confusing because it looked like the app was crashing. So we went back to the drawing board.

In 1Password 5.5, canceling Touch ID will cause 1Password to display the Master Password prompt, but your password won’t be cleared from the iOS Keychain. This means that you will be able to use Touch ID the next time you open 1Password without typing your Master Password; all you need to do is to tap the fingerprint icon to bring up the prompt.

1Password 5.5 for iOS Master Password lock screen with Touch ID icon

Lock Service: Centralized and better than ever

In 1Password 5.5 for iOS, we have created a “central” Lock Service that is shared between 1Password and its extension. The extension will now use the settings you have specified in the main app. Additionally, when you unlock the 1Password extension, you will also unlock the main app (and vice versa). Those of you who use 1Password on Mac will probably notice that this is similar to the way 1Password and 1Password mini lock and unlock in unison.

As long as you have Lock on Exit disabled, you will no longer be prompted to unlock 1Password moments after you unlock the extension in Safari. Depending upon your Auto-Lock settings, it may be as long as an hour before you’re prompted to unlock 1Password again.

1Password 5 for iOS security settings

iOS Keychain + 1Password Extension = ❤️

In previous versions of 1Password, the extension never saved the Master Password to the iOS keychain. This meant that if your Master Password were cleared from the iOS keychain (like when you restart your iPhone or iPad), you would have to launch the main 1Password app and enter your Master Password before you’d be able to use quick unlock. Entering your Master Password in the extension would allow you to access your vault, but you’d have to keep reentering your Master Password until you finally unlocked the main 1Password app.

Now it doesn’t matter if your Master Password is cleared from the iOS keychain! If you have quick unlock enabled, you’ll just need to enter your Master Password in either the extension or main app—once. After that, you’ll be able to use quick unlock until the next time your Master Password is wiped from the keychain.

It’s taken some time and experimentation to get the main 1Password app and the extension working together just so, but we think our latest changes offer a balance of security and convenience. We hope you’re as happy with this update as we are! We’d love to hear your thoughts in the comments and in our discussion forums.

Unspeakable Passwords

Unspeakable Passwords: Jeff Goldberg talks to Passwords15

Passwords! They safeguard our most important information, but they’re such a pain, aren’t they? Every site imposes a different set of restrictions on password creation, and passwords seem to get stolen from one place or another every other day, but they are absolutely necessary.

Given that passwords aren’t going anywhere anytime soon, it stands to reason that something must be done to thwart the increasing numbers of Not Nice People who would do us harm. I don’t know about you, but thinking about this stuff makes my brain hurt, which is why I’ve been using 1Password for almost 10 years.

Fortunately, there are some very, very smart folks who don their proverbial white hats and convene twice a year at an event called Passwords, where they figure out how to keep us safe. Launched in 2010, this conference focuses on the analysis of authentication solutions, in an effort to better understand and meet the challenges of digital authentication.

This year, our very own Jeff Goldberg was in attendance at #passwords15. If you don’t know who he is, allow me to introduce you. He’s our Chief Defender Against the Dark Arts, and a self-proclaimed explanation junkie. Seriously, he explains everything. At length. And very, very well. This makes him an excellent candidate to give a talk at such a conference, which is fantastic, because that’s exactly what he did.

In his presentation, Jeff talks about pronounceable passwords and entering them on various devices; Diceware; the successor to PBKDF2; and more. Check it out! You’ll learn, you’ll laugh, it’ll be great.

Presentation slides
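If you’re curious, Diceware itself is simple enough to sketch in a few lines of Python. To be clear, this is just a toy illustration: the wordlist filename below is a placeholder for a local copy of the real 7,776-word list (one word per five-dice roll), available from Arnold Reinhold’s Diceware page.

    # Toy Diceware sketch: five fair dice pick one of 6^5 = 7776 words.
    # "diceware-wordlist.txt" is a placeholder filename for a local copy of
    # the list, formatted as one "11111 word" pair per line.
    import secrets

    def roll():
        return "".join(str(secrets.randbelow(6) + 1) for _ in range(5))  # e.g. "43621"

    wordlist = {}
    with open("diceware-wordlist.txt") as f:
        for line in f:
            number, word = line.split()
            wordlist[number] = word

    passphrase = " ".join(wordlist[roll()] for _ in range(5))
    print(passphrase)   # e.g. 'cleft cam synod lacy yr'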


Do a little dance, make a Master Password

When you start using 1Password, creating a strong Master Password is the first and most important thing you’ll do. We all know that the Master Password is the sentry that protects your data, so choosing a super-secure password is the key to starting your journey towards better security. After all, this will be the ‘one password’ that you have to remember from now on, so you want to make it a good one! Our Chief Defender Against the Dark Arts has written an awesome blog post to help you through this important step, but there’s a lot of information in there and it can be a little bit overwhelming.

One of our goals with 1Password is to make security convenient. We’ve thought long and hard about how to make the process of choosing this important password simpler and more friendly for our new users.

Could we make the password creation process fun, and maybe even danceable? Why not?

We called on our friend, Jonathan Mann, to help us teach everyone how to create a strong Master Password. It turns out, his method involves a lot less reading.

Jonathan in the Toronto Office

We’ve been humming this song for weeks now, and I’m so glad we can finally share it with you! I’m pretty sure my favourite scene is the 35 bats, but I’d love to know which one makes you smile the most.



1Password inter-process communication: a discussion

Recently, security researcher Luyi Xing of Indiana University Bloomington and his co-authors released the details of their research revealing security vulnerabilities in Apple’s Mac OS X and iOS that allow “a malicious app to gain unauthorised access to other apps’ sensitive data such as passwords and tokens for iCloud, Mail app and all web passwords stored by Google Chrome.” It has since been described in the technology press, including an article in the Register with a somewhat hyperbolic title. I should point out that even in the worst case, the attack described does not get at data you have stored in 1Password.

The fact of the matter is that specialized malware can, under certain circumstances, capture some of the information sent from the 1Password browser extension to 1Password mini on the Mac. But roughly speaking, such malware can do no more (and actually considerably less) than what a malicious browser extension could do in your browser.

For 1Password, the difficulty is in fully authenticating the communication between the 1Password browser extension and 1Password mini; this problem, however, is not unique to 1Password. Securing inter-process communication is a system-wide difficulty. A recent paper, “Unauthorized Cross-App Resource Access on MAC OS X and iOS” (PDF), by Luyi Xing (Li) and his colleagues shows just how difficult securing such communication can be. Since November 2014, we’ve been engaged in discussion with Li about what, if anything, we can do about such attacks. He and his team have been excellent at providing us with details and information upfront.

As always, we are limited in what we can do in the face of malware running on the local machine. It may be useful to quote at length from the introduction of Jeff’s earlier article on malware:

I have said it before, and I’ll say it again: 1Password […] cannot provide complete protection against a compromised operating system. There is a saying […] “Once an attacker has broken into your computer […], it is no longer your computer.” So in principle, there is nothing that 1Password can do to protect you if your computer is compromised.

In practice, however, there are steps we can and do take which dramatically reduce the chances that some malware running on your computer [could obtain your 1Password data].

That was written more specifically about keystroke loggers, and there are some things that set the new attack apart. Like a simple keystroke logger, it doesn’t require “admin” or “root” access, but in this case the researchers were also able to sneak a proof of concept past Apple’s reviewers.

The threat

The threat is that a malicious Mac app can pretend to be 1Password mini as far as the 1Password browser extension is concerned if it gets the timing right. In these cases, the malicious app can collect Login details sent from the 1Password browser extension to the fake 1Password mini. The researchers have demonstrated that it is possible to install a malicious app that might be able to put itself in a position to capture passwords sent from the browser to 1Password.

Note that their attack does not gain full access to your 1Password data but only to those passwords being sent from the browser to 1Password mini. In this sense, it is getting the same sort of information that a malicious browser extension might get if you weren’t using 1Password.


1Password provides its own security. What I mean by this is that for the bulk of what we do, we don’t generally rely upon security mechanisms like sandboxing or iOS Keychain. So it doesn’t matter whether those sorts of security measures provided by the operating system fail.

The careful reader will note, however, that I used phrases like “for the bulk of what we do” and “don’t generally rely upon” in the previous paragraph. There are some features and aspects for which some of 1Password’s security makes use of those mechanisms, and so vulnerabilities in those mechanisms can allow for harm to us and our customers.

1Password mini listens to the extension

Application sandboxing is a good thing for security. But it limits how the 1Password browser extension can actually exchange data with 1Password itself. Indeed, the extension (correctly) has no direct access to your data. Keeping your data out of the browser (a relatively hostile environment) is one of our security design choices. But this does mean that the 1Password browser extension needs to find a way to talk to something that does actually manage your data. 1Password mini (originally the 1Password Helper) was invented for this purpose.

One of the few ways that a browser extension can communicate locally is through a websocket. Browser extensions are free to talk to the Internet as a whole, but we certainly don’t want our browser extension doing that; we only want it talking to 1Password locally. So we restrict the browser extension to only talking to 1Password mini via a local websocket.
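To illustrate just the “local only” part, here is a minimal sketch (this is not 1Password’s actual code, and the port number is illustrative): a listener bound to the loopback address is unreachable from other machines, though, as we will see, any process on the same Mac can still connect to it.

    # Sketch of a loopback-only listener. Binding to 127.0.0.1 means no
    # remote machine can ever connect; this says nothing, however, about
    # WHICH local process is on the other end. That is the hard part.
    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 6263))   # loopback only; illustrative port
    server.listen()
    conn, (host, port) = server.accept()   # blocks until a local client connects
    assert host == "127.0.0.1"             # every connection is local by construction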

Mutual authentication

Obviously we would want 1Password mini and the browser extension to only talk to bona fide versions of each other, so this becomes a problem of mutual authentication. There should be some way for 1Password mini to prove to the extension that it is the real one, and there should be a way for the browser extension to prove to 1Password mini that it is a real 1Password browser extension.

The difficulty that we face is that we have no completely reliable mechanism for that mutual authentication. Instead, we employ a number of separate mechanisms of authentication, but each has its own limitations. We have no way to guarantee that when the browser extension reaches out to 1Password mini it is really talking to the genuine one.

There are a number of checks that we can (and do) perform to see if everyone is talking to who they think they are talking to, but those checks are not perfect. As a result, malware running on your Mac under your username can sometimes defeat those checks. In this case, it can pretend to be 1Password mini when talking to the browser extension and thus capture any information sent from the 1Password browser extension that is intended for the mini.

What can be done

Neither we nor Luyi Xing and his team have been able to figure out a completely reliable way to solve this problem. We thank them for their help and suggestions during these discussions. But, although there is no perfect solution, there are things that can be done to make such attacks more difficult.

What you can do

1. Check “Always Keep 1Password Mini Running” in Preferences > General

In the specific attack that Luyi Xing demonstrates, the malware needs to be launched before the genuine 1Password mini is launched. By setting 1Password mini to always run, you reduce the opportunity for that particular attack.

The “Always Keep 1Password Mini Running” setting



2. Keep using the 1Password browser extension

Although what is described is an attack against the communication between 1Password mini and the browser extension through specialized malware, using the 1Password browser extension protects you from a more typical malware attack of pasteboard/clipboard sniffers. Likewise, the 1Password extension helps fend off phishing attacks because it will refuse to fill into pages that don’t match the domain for your saved Logins.

Quite simply, the 1Password extension not only makes life easier for you, but it is an important safety feature on its own.

3. Pay attention to what you install

As always, be careful about what software you run and install on your system. On your Mac, open System Preferences > Security & Privacy > General. You’ll see an Allow apps downloaded from: setting there. We strongly recommend confirming that this setting is configured so that only apps from trusted sources can be opened. You can read more about the setting and its options on Apple’s support site.

Now, Xing and his team point out that this isn’t a guaranteed way to prevent malware from being installed. They were able to get a malicious app approved by the Mac App Store review process. However, I think it is reasonable to assume that now that Apple reviewers know what to look for, it will be much harder for that specific kind of malware to get through.

What we can do

There are additional (defeasible) mechanisms that we can add to our attempts at mutual authentication between the extension and 1Password mini. I will briefly mention a few that we’ve considered over the years.

Encryption with an obfuscated key

One option is to have a shared obfuscated key in both 1Password mini and the extension. (Remember that the browser extension never sees your Master Password so any secret it stores for authentication cannot be protected by your Master Password.)

Obfuscation only makes things harder for attackers until someone breaks the obfuscation, and every system designer should assume that obfuscation will be broken. See our discussion of Kerckhoffs’ Principle in our article, “You have secrets; we don’t,” for some background on why we tend to be reluctant to use obfuscation. Of course, it may be warranted in the absence of a more effective alternative, so this remains under consideration.
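To make the option concrete, such a scheme would boil down to something like the following challenge-response sketch in Python (which, to be clear, we have not shipped). Its weakness is structural: anyone who extracts the key from one copy of the software can impersonate either side.

    # Sketch of shared-key challenge-response. The hard-coded key stands in
    # for a hypothetical secret that would be obfuscated inside both binaries.
    import hashlib
    import hmac
    import os

    SHARED_KEY = bytes.fromhex("2b" * 32)   # hypothetical embedded secret

    def prove(challenge):
        return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

    # The extension challenges whatever claims to be 1Password mini:
    challenge = os.urandom(32)
    response = prove(challenge)   # computed by the (alleged) mini
    assert hmac.compare_digest(response, prove(challenge))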

In anticipation of a likely suggestion, I should point out that even the magic of public key encryption wouldn’t save us from having to rely on obfuscation here; but I will save that discussion for our forums.

Using the OS X keychain

Another option would be to store authentication secrets in the OS X keychain, so that both our browser extension and 1Password mini would have access to it. This could be made to work for authenticating 1Password mini to the extension for those browsers that allow easy use of the OS X keychain.

This might solve half the problem for some browsers, but to date we’ve been focusing on solutions that work across all of the browsers we support.

An extreme solution

In the extreme case, we could have some explicit pairing (sort of like Bluetooth) between 1Password mini and the extension. That is, the browser extension may display some number that you have to type into 1Password mini (or the other way around). With this user intervention we can provide solid mutual authentication, but that user action would need to be done every time either the browser or 1Password mini is launched.

Quite frankly, there is no really good solution for this. To date, our approach has been to put in those authentication checks that we have and keep an eye out for any hints of malware that exploits the known limitations of what we do.

Is 1Password for iOS affected?

The research paper isn’t limited to discussing inter-process communication (IPC) that is done through websockets, but covers a wide range of mechanisms used on Apple systems. This includes some mechanisms that we may use for some features in 1Password for iOS.

Shared data security

1Password for iOS shares some of its data with the 1Password app extension. As most of that data is encrypted with your Master Password, it is not a substantial problem if that data becomes available to attackers. The exception, of course, is the Touch ID secret.

As yet, we have not had a chance to test whether there is any exposure there, but watch this space for updates.


We truly are grateful for the active security community, including Luyi Xing and his team, who take the time to test existing security measures and challenge us to do better. Our analysis of the researchers’ findings will continue and we will post an update if further action is necessary.


Back doors are bad for security architecture

Instead of inventing encryption that only government can break, we should just breed a special unicorn that magically blocks terrorist acts.
Ryan Paul

Back doors into security systems weaken security. For everyone. This remains true despite wishful thinking on the part of those who may advocate back doors. The claim that back doors could be added to systems for law enforcement purposes without compromising the security of those systems was something that was heatedly discussed in the 1990s.

I had hoped that we had driven a stake through its heart back then, but it has been revived in the wake of Apple’s announcement last autumn that they have no method to unlock iOS devices without the user’s consent, and so don’t have anything that can be given to law enforcement agencies. The current version of Apple’s statement reads:

On devices running iOS 8.0 and later versions, your personal data such as photos, messages (including attachments), email, contacts, call history, iTunes content, notes, and reminders is placed under the protection of your passcode. For all devices running iOS 8.0 and later versions, Apple will not perform iOS data extractions in response to government search warrants because the files to be extracted are protected by an encryption key that is tied to the user’s passcode, which Apple does not possess.

Ever since then there has been official and unofficial hand-wringing about the threat that this poses to public safety and national security. This is often accompanied by “suggestions” for building systems that don’t compromise the security of a system, give (the right) governments the access they want, and are called something other than “back doors”.

But in addition to whatever risks government access poses, there is a subtle but crucial point that is often overlooked: The kinds of security architectures in which it is easy to insert a back door are typically less secure than the security architectures in which it is hard to insert a back door. I will come back to that in more detail below, but first let me review a few events and concepts.

Wishful thinking

Over the past half a year, we’ve been told that through some technological wizardry there must be a way to provide governments with what they want without compromising user security. Each time suggestions of that sort come up they are met with ridicule from cryptographers and information security specialists.

An early example is from a Washington Post editorial in October 2014:

A police “back door” for all smartphones is undesirable — a back door can and will be exploited by bad guys, too. However, with all their wizardry, perhaps Apple and Google could invent a kind of secure golden key they would retain and use only when a court has approved a search warrant.

The phrase “secure golden key” has become a running joke among security specialists since then.

More recently (in January of this year), British Prime Minister David Cameron called for government-readable encryption. Prime Minister Cameron declared that there should be “no means of communication” that his government “cannot read.” Yet he also stated that this would not involve a “back door.”

Without a very specific proposal in hand, it is hard to analyze the suggestions in detail: all we can do is poke fun at what we imagine they might mean. At least we now have a slightly more specific idea of what it might mean in the US from Michael S. Rogers, the head of the National Security Agency (NSA). He appears to be advocating key escrow with threshold secret sharing for the escrowed key. As described in the Washington Post on April 10:

Why not, suggested [Rogers], require technology companies to create a digital key that could open any smartphone or other locked device to obtain text messages or photos, but divide the key into pieces so that no one person or agency alone could decide to use it?

I would love to talk about how a key can be divided into pieces so that no one person can decide to use it, but I will save that for another article. (It’s really cool, and the essential mathematical concept is not actually that hard to grasp.) But that slightly more specific proposal still doesn’t address the fact that key escrow can’t really be built into securely designed systems. This should become more clear below.

Each of those proposals, in its own way, fails to recognize that, entirely separate from the privacy concerns, inserting some government access mechanism into cryptographic systems requires a weakening of those systems.

What’s a back door?

A back door is simply a second way of gaining access to some resource. Imagine a bank vault with a very visible and secure vault door. Now imagine that there is a hidden back door into the vault that few people are aware of. Typically a back door is created deliberately and its existence is kept secret. It isn’t too far from the truth to consider a back door a deliberate security vulnerability.

I am using the term “back door” broadly here because from the user’s point of view, and from the point of view of implications on security architecture, the narrower definition isn’t useful. Under a narrow definition, a back door can only be added to systems that have (front) doors. Tools like 1Password and Knox for Mac don’t have any doors to begin with, as they operate solely through encryption and not authentication.

Not everything that looks like a back door is secret or malicious. For example, when my bank needs to deposit or withdraw funds from my account, it doesn’t go in through the same door that I do. The bank has legitimate access through their own doors. Indeed, one of the major reasons I use a bank is so that it can perform such transactions on my behalf. So in this case the apparent back door is essential to the purpose of the system in the first place. I will not be including such things in my discussion of “back doors.” Those are just other front doors.

Indeed, my usage is similar to what appears in Matt Blaze’s prepared testimony (PDF) before Congress on April 29, 2015.

These law enforcement access features have been variously referred to as “lawful access”, “back doors”, “front doors”, and “golden keys”, among other things. While it may be possible to draw distinctions between them, it is sufficient for the purposes of the analysis in this testimony that all these proposals share the essential property of incorporating a special access feature of some kind that is intended solely to facilitate law enforcement interception under certain circumstances.

Key escrow

It appears that Admiral Rogers is advocating a key escrow system. Under my broad definition of back door, this is one mechanism. The notion is that a copy of a cryptographic key is deposited with a safe pair of hands (an escrow service) who store that copy securely and will only release it under certain circumstances.

Keymaster from Ghostbusters

Sometimes it’s hard to find the right Keymaster

Additionally, he is suggesting that it not be a single entity or agency that holds the key, but that the key be “split” in such a way that it may require multiple parties to work together to retrieve or reconstruct it. Typically this is done through an algorithm called Shamir secret sharing, which allows one to do things like give a separate secret to five different people so that any three of them can recover the master secret (“three of five”). I really, really want to write about how Shamir secret sharing works, but I must leave that for another day.
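I can’t resist a tiny taste, though. Below is a toy “three of five” sketch in Python: the secret becomes the constant term of a random degree-2 polynomial over a prime field, each person gets one point on the curve, and any three points pin the polynomial (and therefore the secret) down, while any two reveal nothing.

    # Toy Shamir secret sharing ("three of five"). The field size and the
    # integer secret are illustrative; real implementations are more careful.
    import secrets

    P = 2**127 - 1                        # prime modulus for the field
    secret = 123456789

    # A random degree-2 polynomial f with f(0) = secret...
    coeffs = [secret, secrets.randbelow(P), secrets.randbelow(P)]

    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

    # ...evaluated at five points yields five shares.
    shares = [(x, f(x)) for x in range(1, 6)]

    def reconstruct(three_shares):
        # Lagrange interpolation at x = 0 recovers f(0), i.e. the secret.
        total = 0
        for xj, yj in three_shares:
            num, den = 1, 1
            for xm, _ in three_shares:
                if xm != xj:
                    num = num * -xm % P
                    den = den * (xj - xm) % P
            total = (total + yj * num * pow(den, -1, P)) % P
        return total

    assert reconstruct(shares[:3]) == secret
    assert reconstruct([shares[0], shares[2], shares[4]]) == secret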

Although this kind of key splitting helps protect the escrowed key from theft or abuse, it does nothing to address the implications for the security design of any application that must comply. So let me repeat: these sorts of proposals have consequences for the security design of the systems that comply with them.

Vital Technicalities

There are a number of technical facts that policy makers should understand:

  1. Software and hardware cannot distinguish between good guys and bad guys.
  2. Back doors pose a direct risk to all users.
  3. Designs that enable back doors (whether or not a back door is present) are weaker than designs that preclude back doors.
  4. There is no useful and coherent way to distinguish between cryptographic tools for communication and those not for communication.

I am mostly going to talk about number 3 on that list. This is my point that security designs that make it hard to insert a back door are more secure than designs in which it is easy. But let me briefly address the other ones.

Good guys and bad guys

One of the interesting phrases in the Washington Post editorial back in October was the notion that the golden key could only be used when a court has approved a warrant. This isn’t actually as ridiculous as it first seems if we consider that the relevant court might hold part of a split key. But a cryptographic system only knows whether it has been given keys that work or not; it cannot decide whether the person who is using that key is using it properly or came upon it through legitimate means.

1Password, for example, only knows if you have provided the correct Master Password. It doesn’t know if you are a good guy or a bad guy. It doesn’t know if you obtained the Master Password through torture. It doesn’t know if you are a photogenic hero who needs to decrypt the data to save the world from destruction by Dr No. These are simply not the kinds of things that software can know. As clever as we may be, we cannot build software that will “let the good guy in.” Instead we build systems that let the holder of the correct Master Password in and nobody else.

Inherent risks

The most obvious risk of a back door is that the keys to the back door will be captured by “the wrong people.” The holders of the key to the back door need to protect it well, not only from outsiders but from misuse from themselves. This is an enormous topic that I will largely skip since it is widely discussed elsewhere. But I will point out that in the US, the court oversight of secret programs has not lived up to what law makers wished, and that if one government is allowed a back door, many other governments will insist on similar access.

Systems for Communication

As mentioned above, Prime Minister Cameron expressed interest in “communication” and, so, perhaps, is envisioning rules that would apply only to systems that are used for communication. Perhaps text messaging systems would be subject to his rules that they must be readable by the British government, but Full Disk Encryption (FDE) systems like BitLocker or FileVault would not be. The difficulty with taking such an approach is that even FDE systems could be used for secret communication. Patty may encrypt a disk and send the physical disk to Molly. Sure, Patty and Molly may have preferred to use tools better suited for communication, but if no such secure tools are available, they will make do with others.

Indeed this reflects the fact that cryptographers don’t typically distinguish between the case where Alice encrypts a message for Bob and the case where Alice encrypts a message for herself to decrypt at some later time. Communicating securely with a separate person is a lot like communicating securely with yourself in the future, and so tools that help with the latter can be co-opted to do the former.

Doors and architectures

I would now like to return to the central point I am trying to make. The kinds of security architectures in which it is easy to insert a back door are typically less secure than the security architectures in which it is hard to insert a back door.

This is a fundamental part of security engineering. By using strong encryption with keys that only the end user has access to, a huge number of potential attacks are suddenly off the table. As Matthew Green, a cryptographer at Johns Hopkins University, wrote in an article on Slate discussing the reaction to Apple’s statement:

Apple is not designing systems to prevent law enforcement from executing legitimate warrants. It’s building systems that prevent everyone who might want your data – including hackers, malicious insiders, and even hostile foreign governments – from accessing your phone. This is absolutely in the public interest. Moreover, in the process of doing so, Apple is setting a precedent that users, and not companies, should hold the keys to their own devices.

Apple isn’t designing iOS security with the aim of thumbing their noses at law enforcement. They are following good design principles that protect your data. Likewise, when we design our products so that only you can decrypt your data, we are doing so to protect you from those who would read your data without your consent. As described above, no software can determine the intent of the people using it.

Doors must lead somewhere

A back door can pretty much only be placed into a system at a point where that system has a secret such as an encryption key in memory. Otherwise it is a door to nowhere. The parts of a system that require the most protection are the ones that handle the secrets. A principle of security design is to reduce those portions of the system to the smallest possible.

Let’s consider software bugs. Continuing with our metaphor of doors, we can imagine a software bug as not so much another door but as a weakness that allows an attacker to break a hole in a wall. The attacker manages to go around the doors to get to the secrets.

The fewer places that secrets are held, the fewer the number of places where a dangerous vulnerability can occur. If the rooms with the secrets are small, there is less wall area to attack. So good security design means reducing the number of places and times where secrets are held. Great security design places all of those secret-holding components under the user’s control. Naturally, we strive for great design in our own products.

In technical jargon, this is a matter of “attack surfaces.” Good security design seeks to limit the attack surface, and therefore inherently limits the ways in which a back door could be inserted into a system. By building systems that preclude back doors in most places, we are also preventing a large class of accidental vulnerabilities.

Secrets under your control

One of the most important ways to achieve good security design is to make sure that your decrypted secrets never leave the system without your consent. In the case of 1Password, you may export your data, you may copy a password out of an item, you may use the 1Password extension to fill Login credentials into a web browser. But each of those is an action that you choose to take.

This is a slightly more general notion of what is meant by “end-to-end” encryption. Your encryption keys (the secrets that are derived from your Master Password) are created on your own devices, never leave your devices unencrypted, and are only used when you want them to be used.

That sort of end-to-end encryption is essential to your security. It means that the only attacks that could ever be launched from outside your system would involve guessing your Master Password. As a consequence, a back door could only be placed in the software running on a device under your control. By using end-to-end encryption we have dramatically narrowed down the attack surface. A side effect of this is that we also limit the places into which a back door could be inserted.

Where it would have to go

As noted above, Admiral Rogers appears to be advocating a key escrow system: cryptographic tools would use strong encryption and strong keys, but the government would have a copy of the keys. His proposal of requiring multiple entities to unlock the escrowed key does make it harder to steal those keys from the government, but it does not stop this from being a key escrow system.

Even if we were fully confident that those keys would be stored safely and would only be used appropriately, the question of security architecture remains. Let’s look at 1Password for an example:

When you create a new vault (or even a new item) in 1Password, 1Password running on your machine will generate random cryptographic keys. We at AgileBits never have the opportunity to see those keys. Nor does anyone else. This is an example of what I meant when I said above that great security design places all of the secret holding components under the user’s control. The creation and handling of those keys happens only on your machine.

Under 1Password’s design, the only way to comply with key escrow would be to send a copy of the key to some government controlled entity when the key is created or after you have entered your Master Password (when these keys are decrypted on your machine). Roughly speaking, 1Password would have to send your Master Password (or keys derived from it) to some government entity. But because these only exist on your system (and not ours) it would have to be your system that is sending the information.

You can control what is transmitted from your computer. Sure, it may take technical skill to do so, but this is something that neither we nor a government can prevent you from doing. Indeed, in the unlikely event that we are ever required to produce a version of 1Password or Knox that would transmit your data to another system, we would display a huge notice to you about what is happening.

There might be more reliable ways in which we could (be forced to) comply with a key escrow scheme, but each of them involves weakening the overall security architecture of 1Password. It would mean that our software would only work if someone other than you had access to your keys. That is not how we build things.

This example should illustrate that the strongest security architectures cannot reliably participate in key escrow. This means that it is often a mistake to frame the discussion as a “clash between privacy and security.” We weaken many kinds of security when we weaken privacy protections.

Law enforcement is right to want a back door

The October Washington Post article that I keep referencing is absolutely correct when it says:

Law enforcement officials deserve to be heard in their recent warnings about the impact of next-generation encryption technology on smartphones, such as Apple’s new iPhone.

Those voices do need to be heard. So let’s start with them.

Law enforcement officials rightly want to be able to actually get at data that they have the legal right to acquire.

Suppose that Molly, one of my dogs, is suspected of kidnapping, torturing, and even eating rabbits. (Molly, I’m sorry if some of my social media posts have implicated you in an FBI investigation, but your behavior was suspicious.) Also suppose that the FBI has good reason to suspect that Molly may even be taking pictures of her victims. The FBI should have little difficulty obtaining a warrant to confiscate and search Molly’s iPhone. If Molly has set a decent passcode for the device and has not leaked those photos off of her phone, then the FBI will have no means whatsoever (other than compelling Molly to reveal her passcode, which is a whole different set of very confused legal issues in the US) to get the evidence they need to lock Molly up in a crate. More bunnies will suffer and die as a consequence of the security design of iOS and the iPhone.

This isn’t as funny when we switch our example away from Molly and rabbits to the sorts of things that the FBI does investigate. Giving people access to encryption that law enforcement can’t break will mean that some investigations are harder, some never get solved, and some prosecutions will fail. There will be times when some very bad dogs get away with their crimes because of this.

It is no surprise that those given the task of fighting crime do not want to encounter encryption that they can’t break. Indeed, if they didn’t seek back doors into such systems they might not be doing their jobs. But this isn’t a question for law enforcement to decide on their own. It is a question for the public and for policy makers.

You can’t always get what you want

Just because something would be useful for law enforcement doesn’t mean that they should have it. There is no doubt that law enforcement would be able to catch more criminals if they weren’t bound by various rules. If they could search any place or anybody any time they wished (instead of being bound by various rules about when they can), they would clearly be able to solve and prevent more crimes. That is just one of many examples of where we deny law enforcement tools that would obviously be useful to them.

Quite simply, non-tyrannical societies don’t give law enforcement every power that law enforcement would find useful. Instead we make choices based on a whole complex array of factors. Obviously the value of some power is one factor that plays a role in such a decision, and so it is important to hear from law enforcement about what they would find useful. But that isn’t where the conversation ends; it is where it begins.

Whenever that conversation does take place, it is essential that all the participants understand the nature of the technology: there are some things that we simply can’t do without deeply undermining the security of the systems that we all rely on to keep us safe.