For the third time this year, there is yet another flaw in an underlying security technology used across the net: the recently fixed OpenSSL bugs announced on June 5. For our customers, we are happy to report that 1Password is not affected by bugs in SSL implementations, nor do these bugs require that most people change passwords.
1Password is not affected, your data remains secure, and you do not need to change any passwords. The bug that everyone is talking about, lovingly referred to as “ChangeCipherSpec (CCS)” (also known as “CVE-2014-0224” or the “SSL/TLS MITM vulnerability”), is not in the same category as the recent, catastrophic Heartbleed. It does not require a response from most people in the way that Heartbleed did.
Why no password changes?
As bad as the CCS bug is, here is what makes it different from Heartbleed from a user’s perspective.
- The attacker must be in a “privileged network position”
Not just anyone can launch a CCS-based attack. The attacker must operate some part of the network between you and the site you are using. In this respect, the attack is similar to the GotoFail bug in Apple’s Secure Transport back in February. In contrast, Heartbleed could easily be launched by anyone anywhere on the net.
- Both the client and the server must be vulnerable for the attack to work
This means that if you are not using a vulnerable SSL client (web browser, email program, etc.), then you remain safe from this attack even if the server is vulnerable. Few desktop browsers use the OpenSSL libraries to manage their SSL connections. Chrome on Android and Konqueror on KDE (Linux) are the two most popular ones I can think of that do. Chrome on desktops does not use OpenSSL. In contrast, Heartbleed only required the server to be vulnerable.
- Many systems were fixed before news of the bugs was made fully public
It is very tricky to fix a bug in open source software without making knowledge of the bug public at the same time. The OpenSSL team and the discoverers of Heartbleed attempted, but failed, to get most systems fixed before going public. With these bugs, they did a better job, so the window of vulnerability was much shorter.
Each of the first two reasons is, on its own, sufficient for me to conclude that the large majority of people do not need to worry about changing passwords. The combination of all three makes me extremely comfortable with this advice.
If you are concerned about governments or network operators having exploited this bug, and if you used clients that relied on OpenSSL for their SSL operations (such as Chrome on Android or Konqueror and other KDE tools on Linux), you may wish to change those passwords. But most people don’t need to take any action. It remains important that you do change passwords for systems that had been vulnerable to the Heartbleed bug reported in April. With Heartbleed, there really is a wolf we are crying about.
These new OpenSSL bugs do mean that system administrators need to update their systems quickly, but they do not require rekeying server certificates. These bugs are substantial, but the response is the usual “upgrade affected systems promptly”.
Everything that follows goes into technical details explaining what the recent bugs are and what they may mean in general. They have no specific impact on 1Password, but I know that some of you are curious, and I do indeed suffer from a pathological compulsion to explain things.
SSL (Secure Sockets Layer) and TLS (Transport Layer Security) are the mechanisms that put the “S” in “HTTPS”. 1Password’s security does not rely on SSL and therefore is not affected by these sorts of bugs. There might be just a few rare situations around the edges where there can be some indirect impacts for 1Password and Knox users. I will get to those further below, but I’d like to describe the general case first.
Contributors to OpenSSL, a widely used software library for implementing SSL, recently discovered a number of substantial security bugs. These are not, individually or collectively, as bad as Heartbleed, which was described back in April as being an 11 on a scale of 1 to 10. The latest group of bugs is serious, but these bugs do not have the consequences that Heartbleed had, nor do they require the same sorts of action from most people.
SSL, OpenSSL, Libraries, and Functions
I have to back up a bit and make some important distinctions. These may sound pedantic, but it will help people understand reports and warnings better, both now and in the future.
SSL (TLS) is the name of the protocol that is used to make secure connections. That is, it is a system of rules and conventions that various computer programs agree to follow when they talk to each other. SSL (and its successor, TLS) are not programs, but instead are a set of standards that are used by programs. Just as HTML (the markup language used for web pages) is neither a web browser nor a web page editor, SSL/TLS is not client or server software.
OpenSSL is one of several sets of software libraries that programs and systems can use to talk SSL/TLS. It is not the only SSL library to have had serious bugs discovered in it in recent months. GnuTLS fixed some serious bugs in March. In February, the “gotofail” bug in Apple’s Secure Transport was discovered (as an aside, Apple’s new programming language, Swift, has safety features that would have prevented gotofail along with many of the sorts of bugs we find in a lot of software). Most web servers use OpenSSL, and many other programs do, too. But remember it is not the only library.
Perhaps a useful analogy here for these software libraries is to think of a literal library filled with books. Each book (function) contains instructions for the computer to do something. When you “call” one of these books, the computer does what the book says. Some of the books tell the computer how to encrypt and decrypt things. For example, they tell the computer how to do AES encryption or SHA-256 hashing. Other books tell the computer how to talk SSL/TLS, for example how to negotiate keys and ciphers or how to handle DTLS datagrams. As it turns out, the cryptography books and the SSL/TLS books in the OpenSSL library tend to be written by different people and edited differently.
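To make “calling a book” concrete, here is a minimal Python sketch. Python’s standard hashlib is standing in for OpenSSL’s cryptographic functions; the message is just an illustration.

```python
import hashlib

# "Call the SHA-256 book": hand the library some data,
# and it follows the book's instructions to produce a digest.
message = b"Molly's secret message"
digest = hashlib.sha256(message).hexdigest()

# The same input always yields the same 64-hex-character digest.
print(digest)
```

The program calling the library needs no idea how SHA-256 works internally; it only needs to know which “book” to call, which is exactly why a bug inside one book (or in the SSL/TLS books) can silently affect every program that calls it.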
None of the recent bugs suggest any problems with those core cryptographic functions (books). Instead, all of the substantial bugs have been in the functions used for SSL/TLS communication. Many years ago, 1Password did use the cryptographic functions from the OpenSSL libraries before switching fully to CommonCrypto libraries on the Mac and iOS. As 1Password does not talk SSL/TLS (except for the updater, which does not use OpenSSL on Mac and iOS), problems in the SSL functions provided by OpenSSL would not have mattered to 1Password. Even if 1Password relied on OpenSSL for its cryptographic operations (which it doesn’t), bugs in the SSL/TLS portions of OpenSSL would not impact 1Password’s security.
I will return further below to the fact that it is the SSL/TLS components of those libraries that have been the locus of so many serious software bugs.
What do the latest bugs do?
There were seven security bugs in OpenSSL announced last Thursday along with fixes (one had been fixed earlier). I will only talk about two of them: The most dangerous bug is DTLS invalid fragment vulnerability (CVE-2014-0195), and the most interesting one is CCS or SSL/TLS MITM vulnerability (CVE-2014-0224). The one that is getting the most attention is not, in my opinion, the most dangerous one.
CVE-2014-0195 and “Remote code execution”
The gory details of CVE-2014-0195 are described by Brian Gorenc on the HP Security Research Blog: OpenSSL DTLS Fragment Out-of-Bounds Write: Breaking up is hard to do, but I will try to provide a somewhat gentler explanation here.
The official description of CVE-2014-0195 says:
A buffer overrun attack can be triggered by sending invalid DTLS fragments to an OpenSSL DTLS client or server. This is potentially exploitable to run arbitrary code on a vulnerable client or server.
What this says is that a computer running certain software which talks over the network in particular ways can be tricked into running a program designed by an attacker. The bit that says “potentially exploitable to run arbitrary code” translates to “can be tricked into running a small program designed by the attacker”. The bit that says “by sending invalid DTLS fragments to an OpenSSL DTLS client or server” translates to “can be done remotely”.
Not every computer with the vulnerable OpenSSL libraries is vulnerable to that attack. Something on the computer has to actually be talking DTLS (which is a lot like TLS, but is used for faster, lighter weight, though less reliable communication). The computer needs to be talking DTLS using a vulnerable version of OpenSSL. Despite that constraint, I consider this to be the most serious of the security bugs. This “remote code execution” is a major step in breaking into a computer remotely. The program that is using the broken DTLS system can be tricked into running a small piece of computer code designed by the attacker.
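To make the flaw class concrete, here is a deliberately simplified Python sketch. It is not OpenSSL’s code and not the real DTLS wire format; it only illustrates the essential fix, which is validating an attacker-supplied length field before copying data.

```python
BUFFER_SIZE = 64  # toy fixed-size reassembly buffer

def accept_fragment(fragment: bytes) -> bytearray:
    """Toy parser: byte 0 is the (attacker-controlled) declared
    payload length; the remaining bytes are the payload.

    The bounds check below is the essential defense against a
    CVE-2014-0195-style bug: never trust a length field taken
    from the network."""
    declared_len = fragment[0]
    payload = fragment[1:]
    if declared_len > BUFFER_SIZE or declared_len > len(payload):
        raise ValueError("declared length exceeds buffer or actual data")
    buffer = bytearray(BUFFER_SIZE)
    buffer[:declared_len] = payload[:declared_len]
    return buffer

# A well-formed fragment is accepted...
ok = accept_fragment(bytes([5]) + b"hello")

# ...but a fragment whose length field lies is rejected. In C code
# that skipped the check, the copy would run past the end of the
# buffer, overwriting adjacent memory.
try:
    accept_fragment(bytes([200]) + b"short")
except ValueError as e:
    print("rejected:", e)
```

Python would grow or bounds-check the buffer on its own, which is part of why this class of bug is characteristic of C: the language will happily write wherever the arithmetic points.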
How much further this gets them depends on a number of things. Whether the attacker “merely” gains control of the browser or succeeds in gaining control of the entire computer or device, this is a serious (and unfortunately common) sort of bug.
CCS bug: CVE-2014-0224 and “Man in the Middle”
The far more interesting bug, and the one which has received the lion’s share of press attention, is CVE-2014-0224, the “SSL/TLS MITM vulnerability”. But it is not, in my opinion, as dangerous as one that allows for remote code execution. For those who are familiar with these sorts of things in general, there is an excellent technical analysis by ImperialViolet. I will try to provide just a cursory discussion here because this has been discussed in the technical press already.
Remember that the SSL/TLS protocol is the thing that puts the “S” into “HTTPS”. There are roughly two goals of the protocol. One is to encrypt the data that goes back and forth so that someone listening to that traffic can’t make use of it. The other is to prove that you are talking to who you think you are talking to. It turns out that you can’t have the former (secrecy) without also having the latter (authenticity).
Suppose that Molly (one of my dogs) wants to talk privately with Patty (the other dog) without Mr Talk (neighbor’s cat) listening in. Patty and Molly send encrypted messages back and forth to each other. When they start talking to each other, they go through a process of agreeing on a cryptographic system to use.
Now suppose that Patty and Molly are sending their messages through the postal service and that Mr Talk works at the post office. That puts Mr Talk in a “privileged network position”. Mr Talk isn’t just able to read the messages that go back and forth, but he is able to change them. When Molly sends a message to Patty, Mr Talk intercepts it, makes some changes (or not) and sends it on to Patty. When Patty replies, Mr Talk intercepts that, makes some changes (or not) and sends it on to Molly.
Patty thinks she is talking directly to Molly, but actually she is talking indirectly to Molly via Mr Talk. Molly thinks she is talking directly to Patty, but actually is talking indirectly to Patty via Mr Talk. Mr Talk is the Cat In The Middle. For reasons I can’t begin to understand, Cat In The Middle attacks are abbreviated as “MITM attacks”.
The TLS protocol is designed to prevent MITM attacks through some very wonderful mathematics and some not so wonderful trust infrastructure. To get this all working, the initial setup negotiation between Patty and Molly needs to be done very carefully. In particular, Molly and Patty should work out what encryption key to use in a way that can’t be subverted by a Cat In The Middle.
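A toy example of why the negotiation must be authenticated: with plain, unauthenticated Diffie–Hellman key agreement (tiny illustrative numbers here, nothing like real TLS parameters), Mr Talk can simply negotiate a separate key with each side.

```python
# Toy (insecure, tiny-number) Diffie-Hellman, to show how a Cat In The
# Middle subverts key agreement when nothing is authenticated.
P, G = 2087, 5  # public prime and generator; real TLS uses huge values

def dh_public(secret: int) -> int:
    return pow(G, secret, P)

def dh_shared(their_public: int, my_secret: int) -> int:
    return pow(their_public, my_secret, P)

molly_secret, patty_secret, mrtalk_secret = 77, 91, 123

# Mr Talk intercepts each dog's public value in the mail and
# substitutes his own.
molly_sees = dh_public(mrtalk_secret)   # instead of Patty's value
patty_sees = dh_public(mrtalk_secret)   # instead of Molly's value

key_molly = dh_shared(molly_sees, molly_secret)  # Molly <-> Mr Talk
key_patty = dh_shared(patty_sees, patty_secret)  # Patty <-> Mr Talk

# Mr Talk can compute both session keys, so he can decrypt,
# read, alter, and re-encrypt traffic in each direction.
assert key_molly == dh_shared(dh_public(molly_secret), mrtalk_secret)
assert key_patty == dh_shared(dh_public(patty_secret), mrtalk_secret)
print("Mr Talk holds both session keys")
```

This is why TLS pairs the key agreement mathematics with certificates: the authentication step is what denies Mr Talk the chance to substitute his own values.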
Part of the negotiation of what encryption key to use involves figuring out what encryption algorithm they should use. The SSL/TLS protocol allows for Patty or Molly to request a Change of Cipher Specification (CCS) early on, before the secret key for the session is negotiated. But those requests should only be acted upon after the key is negotiated, and should not allow that session key to change (or they should force a complete renegotiation).
Because these CCS requests are announced before a key has been set, Mr Talk, in his privileged network position, can send a counterfeit CCS request to both Molly and Patty. If that request is formulated in a particular way, it ends up instructing both Molly and Patty to use no encryption key at all.
What should happen is that after Molly and Patty have securely negotiated a key, they should not “trust” CCS requests that came in earlier. They should renegotiate keys. The bug is that if Molly and Patty are both using the broken implementation, they will process the CCS in a way that will reset the session key to “no key”.
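A minimal sketch of the state-machine fix, in Python. This is a simplification of my own, not OpenSSL’s actual handshake code: the point is only that a CCS arriving before key negotiation must be treated as a protocol violation, never as a request to fall back to “no key”.

```python
class ToyTlsEndpoint:
    """Deliberately simplified handshake state, illustrating the
    essential CVE-2014-0224 fix: refuse ChangeCipherSpec until a
    master key has actually been negotiated."""

    def __init__(self):
        self.master_key = None
        self.ciphering = False

    def negotiate_key(self, key: bytes):
        self.master_key = key

    def receive_change_cipher_spec(self):
        # Fixed behaviour: an early CCS is an error, not an
        # instruction to reset the session key to "no key".
        if self.master_key is None:
            raise ConnectionError("CCS received before key exchange")
        self.ciphering = True

molly = ToyTlsEndpoint()

# Mr Talk injects an early, counterfeit CCS; a fixed endpoint rejects it.
try:
    molly.receive_change_cipher_spec()
except ConnectionError as e:
    print("rejected:", e)

# The legitimate sequence still works.
molly.negotiate_key(b"properly-negotiated-master-key")
molly.receive_change_cipher_spec()
assert molly.ciphering
```

The buggy implementations effectively skipped that `if` guard, which is why both ends had to be buggy for the attack to succeed: a single fixed endpoint aborts the handshake.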
As I continue to repeat, for Mr Talk to get away with this, both Molly and Patty need to be using the buggy implementation of the protocol. If just one of them is, then Mr Talk’s attempts will fail.
For Mr Talk to successfully launch his CCS attack,
- Mr Talk would need to be in a privileged network position,
- And both Molly and Patty would need to be using the buggy system
- And the attack would have to take place before the systems were fixed
Each of the first two conditions substantially reduces the chance of attack; in combination they reduce it enormously. Yes, this vulnerability is a big deal, but it is very unlikely that there have been widespread attacks based on it.
So what are those rare cases where AgileBits customers could have been affected?
SSL is used in four different places specific to 1Password customers. For completeness’ sake, I’ll try to list those limited circumstances under which there could be some exploit (as unlikely as they are).
Our AgileBits web store uses OpenSSL, so it was vulnerable for a time. Again, you would have needed to use a vulnerable browser (and there aren’t many) during the time of server vulnerability, and the attacker would have had to have been in a privileged network position to be able to launch an attack. Quite frankly, there are much easier ways to steal credit card details than launch a Cat In The Middle (MITM) attack.
If you used 1PasswordAnywhere from a vulnerable browser (many KDE browsers on Linux or Chrome on Android) and an attacker in a privileged network position ran a MITM attack then they could supply you with a malicious copy of the 1PasswordAnywhere 1Password.html file. Because of the short window of vulnerability an attacker would have needed to be prepared ahead of time for such an opportunity. Besides, a governmental attacker wouldn’t need to exploit such a vulnerability to manipulate your 1Password.html file. (If you are afraid of an active attack by an entity that can subvert Dropbox, do not use 1PasswordAnywhere.)
Our discussion forums recently got SSL support, and so this is another place where OpenSSL bugs could affect us. Again, you would have needed to use a vulnerable browser (and there aren’t many) during the time of server vulnerability and the attacker would have had to have been in a privileged network position. I like to think that most of our forum users are using unique passwords for our discussion forums. A MITM attack is a complicated and risky attack to launch. I don’t think that forum passwords would be worth it.
Our software downloads use TLS as one of several mechanisms to ensure that the copy of 1Password (or Knox for Mac) that you run on your computer is the one that we have written instead of a malicious version. The Updater for Mac and for the browser extensions does not use OpenSSL (and remember, both ends need to be vulnerable) so there is no danger there. The updater for 1Password for Windows does use OpenSSL and so an attacker in a privileged network position could insert a malicious copy of 1Password into the download process. However, 1Password is also cryptographically signed with a digital signature, and so neither 1Password nor Windows would accept the malicious download.
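The general idea of verifying a download before installing it can be sketched in a few lines. To be clear, this is not AgileBits’ actual updater code: real code signing (what 1Password and Windows use) relies on asymmetric signatures, which are stronger than the pinned hash assumed here, but the check sits at the same point in the process.

```python
import hashlib

def verify_download(data: bytes, expected_sha256: str) -> bool:
    """Reject a download whose digest does not match a value
    obtained over an independent, trusted channel.

    Real auto-updaters and OS code signing use asymmetric digital
    signatures rather than a pinned hash, but the principle is the
    same: verify before anything from the network is installed."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

genuine = b"pretend this is the real installer"
expected = hashlib.sha256(genuine).hexdigest()

assert verify_download(genuine, expected)
# A Cat-In-The-Middle swap changes the digest, so the check fails.
assert not verify_download(b"malicious replacement", expected)
print("tampered download rejected")
```

This is why the digital signature on 1Password matters: even if an attacker subverted the TLS connection carrying the download, the substituted file would fail verification before it ever ran.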
Why so many bugs, and why in the SSL portion?
I will speculate a bit here.
I think that bugs have been found recently because people have started looking for them more carefully. The release of the BULLRUN and related documents in September 2013 revealed that the NSA and GCHQ are exploiting implementation bugs to break into systems and into supposedly secure communication systems. This has motivated security professionals to start reviewing things more carefully. Reading between the lines of what was leaked we are left with the impression that the NSA isn’t breaking actual encryption, but they are instead exploiting bugs in the system as a whole. So the bug hunt is on, and we are starting to see the fruits of it.
SSL/TLS is overly complicated
Another reason for these bugs is that SSL/TLS is a mess. The protocol is far more complicated than it should be: It is very easy to implement badly. SSL/TLS has long been less glamorous than the actual cryptography, and so in the past when people looked for bugs, they mostly focused on the cryptography instead of on the implementation of the protocol itself. Until recently it has been subject to less scrutiny.
OpenSSL itself is a mess
OpenSSL has had a long and venerable history. I remember (not) installing its predecessor, SSLeay on a web server at a UK university in the mid 1990s. (I say “not” because as a US citizen, me “giving” that software to a non-citizen counted as a munitions export and could subject me to five years in federal prison. So it was always one of my colleagues who did that install.)
Supporting older platforms can present a danger to all
The danger of supporting stupid platforms: sometimes you run the stupid code on the good platforms too. - @tedunangst
Over the years, OpenSSL has grown creaky, particularly the SSL part of it. (The crypto libraries remain solid.) In order to stay runnable on much older systems, it doesn’t use newer software libraries that would prevent or catch the kinds of bugs we have seen with Heartbleed and the DTLS remote code execution bug.
As Ted Unangst described it, “OpenSSL has exploit mitigation countermeasures to make sure it’s exploitable.” That is, OpenSSL specifically makes it hard to use tools that would find or prevent many of these bugs. This was further expanded on in a rant by Theo de Raadt:
What Ted is saying may sound like a joke…

So years ago we added […] measures to libc malloc and mmap, so that a variety of bugs can be exposed. […] But around that time OpenSSL adds a wrapper around malloc & free so that the library will cache memory on [its] own, and not free it to the protective malloc.

You can find the comment in their sources …

/* On some platforms, malloc() performance is bad enough that you can't just …

OH, because SOME platforms have slow performance, it means even if you build protective technology into malloc() and free(), it will be ineffective. On ALL PLATFORMS, because that option is the default, and Ted’s tests show you can’t turn [that default] off because they haven’t tested without it in ages.
What Ted and Theo are saying is that OpenSSL’s commitment to keep OpenSSL runnable when built with Visual C++ 5.0 (released in 1997) has meant making it impossible to use later safety checks that would have prevented a large number of bugs. This, by the way, is one of several reasons why 1Password for Windows will not support Windows XP. Trying to maintain compatibility with older systems means foregoing safety and security features for everyone, not just the users of those older systems.
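The freelist behaviour that Ted and Theo describe can be modeled in a few lines. This toy Python class is only an analogy for OpenSSL’s C-level buffer cache: “freed” buffers are recycled privately, contents and all, so a protective system allocator underneath never gets the chance to scrub them or catch misuse.

```python
class CachingFreelist:
    """Toy model of OpenSSL's old internal freelist: buffers handed
    to free() are kept in a private cache and recycled on the next
    allocation, contents intact. The real (protective) allocator
    never sees the free, so it cannot zero the memory or detect a
    use-after-free."""

    def __init__(self):
        self._cache = []

    def alloc(self, size: int) -> bytearray:
        if self._cache:
            return self._cache.pop()      # recycled, NOT zeroed
        return bytearray(size)

    def free(self, buf: bytearray):
        self._cache.append(buf)           # cached instead of released

pool = CachingFreelist()
buf = pool.alloc(16)
buf[:6] = b"secret"
pool.free(buf)

# A later, unrelated allocation receives the old contents intact.
reused = pool.alloc(16)
print(bytes(reused[:6]))  # b'secret'
```

In C, this pattern is exactly what let stale key material and plaintext linger in memory where bugs like Heartbleed could reach it, while defeating the allocator hardening OpenBSD had built.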
The good news
The good news is that these choices and their consequences are undergoing renewed scrutiny. Yes, the bugs are bad, and the situation is difficult, but what we are experiencing now are the fruits of a redoubled look at the security of the tools that so many rely on.
Having just returned from Apple’s World Wide Developer Conference in San Francisco, I really would like to write about that. And so I will work into this article the fact that people are moving to “safer” programming languages for software development. Major cryptographic and security libraries will still be written in C for its portability and familiarity, but I am delighted to see Apple’s introduction of Swift. Swift looks like a terrific language for a number of reasons, but one of them is that it tries to make it hard for programmers to make the kinds of mistakes that lead to the sorts of bugs that are all too common in much of C programming. As I told people at WWDC:
Everything that I loved about C when I first started coding in the 1980s is something that I have come to hate about C in recent years, as I became more concerned about security and preventing crashes. Swift removes those features that I loved/hated about C.
I do think that we are moving into an era of safer programming tools, practices, and habits. It won’t happen overnight. But a year after the initial Snowden releases, we are building more secure systems, and they will continue, though sometimes painfully, to become even more secure.