As we announced previously, we’ve improved our SSL setup by deploying forward secrecy and updating our list of supported ciphers. Deploying forward secrecy and an up-to-date cipher list involves a number of considerations that make doing it properly non-trivial.
This is why we thought it would be worth expanding on the discussions we’ve had, the choices we’ve made, and the feedback we’ve received.
A lot of the internet’s traffic is still secured by TLS 1.0. This version has been attacked numerous times and doesn’t support the newer algorithms you’d want to deploy.
We were glad that we were already on an OpenSSL version recent enough to support TLS 1.1 and 1.2 as well. If you’re looking at improving your SSL setup, making sure you can support TLS 1.2 is the first step you should take, because it makes the other improvements possible. TLS 1.2 is supported in OpenSSL 1.0.1 and newer.
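A quick way to check whether your local OpenSSL build is new enough is to ask it directly (a sketch; the exact help output varies between versions):

```shell
# Print the local OpenSSL version; TLS 1.2 needs 1.0.1 or newer
openssl version

# If s_client knows the -tls1_2 flag, the library can speak TLS 1.2
openssl s_client -help 2>&1 | grep tls1_2
```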
Attacks against RC4 will only get better over time, and the vast majority of browsers have implemented client-side protections against BEAST. This is why we have decided to move RC4 to the bottom of our cipher prioritization, keeping it only for backwards compatibility.
The only cipher that is relatively broadly supported and that hasn’t been compromised by attacks is AES GCM. This mode of AES doesn’t suffer from keystream bias like RC4 or attacks on CBC that resulted in BEAST and Lucky 13.
Currently AES GCM is supported in Chrome, and it’s also in the works for other browsers like Firefox. We’ve given priority to these ciphers and, given our usage patterns, we now see a large majority of our connections being secured by this cipher.
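As an illustration, a cipher preference string along these lines (a sketch, not our exact production configuration) tells OpenSSL to prefer ECDHE with AES-GCM and to keep RC4 only as a last resort; `openssl ciphers -v` shows what such a string expands to on your system:

```shell
# Expand a cipher preference string: ECDHE key exchange with AES-GCM first,
# RC4 demoted to the end for backwards compatibility only
openssl ciphers -v 'ECDHE+AESGCM:ECDHE+AES:AES128-SHA:RC4-SHA:!aNULL:!MD5'
```

Note that recent OpenSSL builds may have RC4 removed entirely, in which case the `RC4-SHA` entry simply matches nothing.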
So the recommendations on which ciphers to use are fairly straightforward. But choosing the right ciphers is only one step toward ensuring forward secrecy. There are pitfalls that can leave your customers with no additional security at all.
In order to explain these potential problems, we first need to introduce the concept of session resumption. Session resumption is a mechanism that significantly shortens the handshake when a new connection is opened. If a client connects again to the same server, we can perform an abbreviated handshake and greatly reduce the time it takes to set up a secure connection.
There are two mechanisms for implementing session resumption: the first uses session IDs, the second uses session tickets.
Using session IDs means that the server keeps track of state; if a client reconnects with a session ID the server has handed out, the server can reuse the state it kept for that session. Let’s see how that looks when we connect to a server supporting session IDs.
openssl s_client -tls1_2 -connect github.com:443 < /dev/null
...
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
...
What you can see here is that the server hands out a Session-ID that the client can then use when it reconnects. The downside is, of course, that the server needs to keep track of this state.
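You can check that resumption actually works against your own endpoint with s_client’s `-reconnect` option, which performs the initial handshake and then reconnects five more times, attempting to reuse the session each time (github.com here is just the example host from above):

```shell
# After the first full handshake, s_client reconnects five times;
# resumed connections show up as "Reused" in the output
openssl s_client -reconnect -connect github.com:443 < /dev/null 2>/dev/null | grep -c 'Reused'
```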
This state tracking also means that if your site has multiple front ends for SSL termination, you might not get the benefits you expect. If a client ends up on a different front end the second time, that front end doesn’t know about the session ID and will have to set up a completely new connection.
SSL session tickets are described in RFC 5077 and provide a mechanism that avoids keeping this state on the server.
With this mechanism, the server encrypts the session state and hands it to the client, so the server doesn’t have to keep all this state in memory. It does mean, however, that the key used to encrypt session tickets needs to be kept server side. This is how it looks when we connect to a server supporting session tickets.
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    ...
    TLS session ticket:
    0000 - XX XX XX XX XX XX XX XX-XX XX XX XX XX XX XX XX   ................
    0010 - XX XX XX XX XX XX XX XX-XX XX XX XX XX XX XX XX   ................
    0020 - XX XX XX XX XX XX XX XX-XX XX XX XX XX XX XX XX   ................
    0030 - XX XX XX XX XX XX XX XX-XX XX XX XX XX XX XX XX   ................
    0040 - XX XX XX XX XX XX XX XX-XX XX XX XX XX XX XX XX   ................
    0050 - XX XX XX XX XX XX XX XX-XX XX XX XX XX XX XX XX   ................
    0060 - XX XX XX XX XX XX XX XX-XX XX XX XX XX XX XX XX   ................
    0070 - XX XX XX XX XX XX XX XX-XX XX XX XX XX XX XX XX   ................
    0080 - XX XX XX XX XX XX XX XX-XX XX XX XX XX XX XX XX   ................
    0090 - XX XX XX XX XX XX XX XX-XX XX XX XX XX XX XX XX   ................
With a shared session ticket key, it is possible to resume these tickets across multiple front ends. This way you get the performance benefits of session resumption even across different servers. If you don’t share the ticket key, it has the same performance characteristics as using session IDs.
Not carefully considering the session resumption mechanism can cost you the benefits of forward secrecy: resumption state that is kept around too long can be used to decrypt prior sessions, even when forward-secret ciphers are deployed.
So, we had to decide whether developing a secure means of sharing ticket keys (à la Twitter) was necessary to maintain acceptable performance given our current traffic patterns. We found that clients usually end up on the same load balancer when they make a new connection shortly after a previous one. As a result, we decided that we can rely on session IDs as our resumption mechanism and still maintain a sufficient level of performance for clients.
This is also where we got tripped up. We currently use HAProxy for SSL termination, and it ends up using the default OpenSSL settings if you don’t specify any additional options. This means that both session IDs and session tickets are enabled by default.
The problem here lies with session tickets being enabled. Even though we didn’t set up sharing of the key across servers, HAProxy still uses an in-memory key to encrypt session tickets. This encryption key is initialized when the process starts up and stays the same for the process lifetime.
This means that if an HAProxy process runs for a long time, an attacker who obtains the session ticket key can decrypt traffic from any prior session whose ticket was encrypted with that key. This, of course, doesn’t provide the forward secrecy properties we were aiming for.
Session IDs don’t have this problem: on our platform they have a lifetime of 5 minutes, making the window for this attack only 5 minutes wide instead of the entire process lifetime.
Given that session tickets don’t provide any additional value for us at this point, we decided to disable them and only rely on session IDs. This way we get the benefits of forward secrecy while also maintaining an acceptable level of performance for clients.
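For reference, newer HAProxy releases let you express this directly in the configuration. A sketch, assuming the `no-tls-tickets` bind option and `tune.ssl.lifetime` tuning keyword are available in your HAProxy version (the certificate path and frontend name are hypothetical):

```
global
    # Session cache entries expire after 300 seconds (HAProxy's default),
    # which bounds the decryption window if the cache leaks
    tune.ssl.lifetime 300

frontend https-in
    # no-tls-tickets disables RFC 5077 session tickets on this listener,
    # leaving session IDs as the only resumption mechanism
    bind :443 ssl crt /etc/haproxy/example.pem no-tls-tickets
```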
We would like to thank Jeff Hodges for reaching out to us and pointing us at what we’d missed in our initial setup.