Planet Firefox

September 30, 2016

Tim Taubert (ttaubert)

TLS Version Intolerance

A few weeks ago I listened to Hanno Böck talk about TLS version intolerance at the Berlin AppSec & Crypto Meetup. He explained how with TLS 1.3 just around the corner there again are growing concerns about faulty TLS stacks found in HTTP servers, load balancers, routers, firewalls, and similar software and devices.

I decided to dig a little deeper and will use this post to explain version intolerance, how version fallbacks work and why they’re insecure, as well as describe the downgrade protection mechanisms available in TLS 1.2 and 1.3. It will end with a look at version negotiation in TLS 1.3 and a proposal that aims to prevent similar problems in the future.

What is version intolerance?

Every time a new TLS version is specified, browsers usually are the fastest to implement and update their deployments. Most major browser vendors have a few people involved in the standardization process to guide the standard and give early feedback about implementation issues.

As soon as the spec is finished, and often far before that feat is done, clients will have been equipped with support for the new TLS protocol version and happily announce this to any server they connect to:

Client: Hi! The highest TLS version I support is 1.2.
Server: Hi! I too support TLS 1.2 so let’s use that to communicate.
[TLS 1.2 connection will be established.]

In this case the highest TLS version supported by the client is 1.2, and so the server picks it because it supports that as well. Let’s see what happens if the client supports 1.2 but the server does not:

Client: Hi! The highest TLS version I support is 1.2.
Server: Hi! I only support TLS 1.1 so let’s use that to communicate.
[TLS 1.1 connection will be established.]

This too is how it should work if a client tries to connect with a protocol version unknown to the server. Should the client insist on any specific version and not agree with the one picked by the server it will have to terminate the connection.

Unfortunately, there are a few servers and more devices out there that implement TLS version negotiation incorrectly. The conversation might go like this:

Client: Hi! The highest TLS version I support is 1.2.
Server: ALERT! I don’t know that version. Handshake failure.
[Connection will be terminated.]


Client: Hi! The highest TLS version I support is 1.2.
Server: TCP FIN! I don’t know that version.
[Connection will be terminated.]

Or even worse:

Client: Hi! The highest TLS version I support is 1.2.
Server: (I don’t know this version so let’s just not respond.)
[Connection will hang.]

Similar failures happen with the infamous F5 load balancer that can’t handle ClientHello messages with a length between 256 and 512 bytes, and with other devices that abort the connection when receiving a large ClientHello split into multiple TLS records. TLS 1.3 might actually cause more problems of this kind as its ClientHello grows with more extensions and client key shares.

What are version fallbacks?

As browsers usually want to ship new TLS versions as soon as possible, more than a decade ago vendors saw a need to prevent connection failures due to version intolerance. The easy solution was to decrease the advertised version number by one with every failed attempt:

Client: Hi! The highest TLS version I support is 1.2.
Server: ALERT! Handshake failure. (Or FIN. Or hang.)
[TLS version fallback to 1.1.]
Client: Hi! The highest TLS version I support is 1.1.
Server: Hi! I support TLS 1.1 so let’s use that to communicate.
[TLS 1.1 connection will be established.]

A client supporting everything from TLS 1.0 to TLS 1.2 would start trying to establish a 1.2 connection, then a 1.1 connection, and if even that failed a 1.0 connection.
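
Expressed as code, the fallback dance amounts to something like the following rough sketch (hypothetical helper names, not actual browser code):

class HandshakeFailure(Exception):
    """The server answered with a fatal alert."""

def tls_handshake(host, max_version):
    """Stand-in for a real handshake attempt; assume it raises on failure."""
    raise NotImplementedError

def connect_with_fallback(host, versions=("1.2", "1.1", "1.0")):
    for version in versions:
        try:
            return tls_handshake(host, max_version=version)
        except (HandshakeFailure, OSError, TimeoutError):
            # An alert, a TCP FIN/RST, or a hang all look the same here,
            # so the client simply retries with a lower version.
            continue
    raise ConnectionError("all TLS versions failed")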

Why are these insecure?

What makes these fallbacks insecure is that the connection can be downgraded by a MITM, by sending alerts or TCP packets to the client, or blocking packets from the server. To the client this is indistinguishable from a network error.

The POODLE attack is one example where an attacker abuses the version fallback to force an SSL 3.0 connection. In response to this browser vendors disabled version fallbacks to SSL 3.0, and then SSL 3.0 entirely, to prevent even up-to-date clients from being exploited. Insecure version fallbacks in browsers pretty much break the actual version negotiation mechanism.

Version fallbacks have been disabled since Firefox 37 and Chrome 50. Browser telemetry data showed they were no longer necessary: after years, TLS 1.2 and correct version negotiation were finally deployed widely enough.

The TLS_FALLBACK_SCSV cipher suite

You might wonder if there’s a secure way to do version fallbacks, and other people did so too. Adam Langley and Bodo Möller proposed a special cipher suite in RFC 7507 that would help a client detect whether the downgrade was initiated by a MITM.

Whenever the client includes TLS_FALLBACK_SCSV {0x56, 0x00} in the list of cipher suites it signals to the server that this is a repeated connection attempt, but this time with a version lower than the highest it supports, because previous attempts failed. If the server supports a higher version than advertised by the client, it MUST abort the connection.
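
On the server side the check boils down to something like this (a minimal sketch of the RFC 7507 rule, not a real TLS stack; versions are assumed to be comparable values such as (3, 3) for TLS 1.2):

TLS_FALLBACK_SCSV = 0x5600  # {0x56, 0x00}

def check_fallback_scsv(client_cipher_suites, client_version, server_max_version):
    # The client signals a fallback retry, yet we support a higher version than
    # it now advertises; the earlier failure was not ours, so abort.
    if TLS_FALLBACK_SCSV in client_cipher_suites and server_max_version > client_version:
        raise ValueError("send inappropriate_fallback alert and abort the handshake")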

The drawback, however, is that a client implementing fallback with this Signaling Cipher Suite Value still doesn’t know the highest protocol version supported by the server, nor whether the server implements the TLS_FALLBACK_SCSV check at all. Common web servers will likely be updated faster than other software, but router or load balancer manufacturers might not deem it important enough to implement and ship updates.

Signatures in TLS 1.2

It’s been long known to be problematic that signatures in TLS 1.2 don’t cover the list of cipher suites and other messages sent before server authentication. They sign the ephemeral DH parameters sent by the server and include the *Hello.random values as nonces to prevent replay attacks:

h = Hash(ClientHello.random + ServerHello.random + ServerParams)

Signing at least the list of cipher suites would have helped prevent downgrade attacks like FREAK and Logjam. TLS 1.3 will sign all messages before server authentication, even though it makes Transcript Collision Attacks somewhat easier to mount. With SHA-1 not allowed for signatures that will hopefully not become a problem anytime soon.

Downgrade Sentinels in TLS 1.3

With neither the client version nor its cipher suites (for the SCSV) included in the hash signed by the server in TLS 1.2, how do you secure TLS 1.3 against downgrades like FREAK and Logjam? Stuff a special value into ServerHello.random.

The TLS WG decided to put static values (sometimes called downgrade sentinels) into the server’s nonce sent with the ServerHello message. TLS 1.3 servers responding to a ClientHello indicating a maximum supported version of TLS 1.2 MUST set the last eight bytes of the nonce to:

0x44 0x4F 0x57 0x4E 0x47 0x52 0x44 0x01

If the client advertises a maximum supported version of TLS 1.1 or below the server SHOULD set the last eight bytes of the nonce to:

0x44 0x4F 0x57 0x4E 0x47 0x52 0x44 0x00

A client that did not intentionally connect with a downgraded version MUST check whether the server nonce ends with either of the two sentinels and, if it does, abort the connection. The TLS 1.3 spec here effectively introduces an update to TLS 1.2 that requires servers and clients to update their implementations.
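
A client-side check could look roughly like this (a sketch only, not NSS code; version numbers are assumed to be comparable tuples like (3, 3) for TLS 1.2):

DOWNGRADE_TLS12 = bytes.fromhex("444f574e47524401")  # "DOWNGRD" + 0x01
DOWNGRADE_TLS11 = bytes.fromhex("444f574e47524400")  # "DOWNGRD" + 0x00

def check_downgrade_sentinel(server_random, negotiated_version, client_max_version):
    # A client that ends up below its own maximum version inspects the nonce;
    # a sentinel means the server actually supports more than was negotiated.
    if negotiated_version < client_max_version and \
            server_random[-8:] in (DOWNGRADE_TLS12, DOWNGRADE_TLS11):
        raise ValueError("possible downgrade attack, abort the handshake")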

Unfortunately, this downgrade protection relies on a ServerKeyExchange message being sent and is thus of limited value. Static RSA key exchanges are still valid in TLS 1.2, and unless the server admin disables all non-forward-secure cipher suites the protection can be bypassed.

The comeback of insecure fallbacks?

Current measurements show that enabling TLS 1.3 by default would break a significant fraction of TLS handshakes due to version intolerance. According to Ivan Ristić, as of July 2016, 3.2% of servers from the SSL Pulse data set reject TLS 1.3 handshakes.

This is a very high number and would affect way too many people. Alas, with TLS 1.3 we have only limited downgrade protection for forward-secure cipher suites. And that is assuming that most servers either support TLS 1.3 or update their 1.2 implementations. TLS_FALLBACK_SCSV, if supported by the server, will help as long as there are no attacks tampering with the list of cipher suites.

The TLS working group has been thinking about how to handle intolerance without bringing back version fallbacks, and there might be light at the end of the tunnel.

Version negotiation with extensions

The next version of the proposed TLS 1.3 spec, draft 16, will introduce a new version negotiation mechanism based on extensions. The current ClientHello.version field will be frozen to TLS 1.2, i.e. {3, 3}, and renamed to legacy_version. Any number greater than that MUST be ignored by servers.

To negotiate a TLS 1.3 connection the protocol now requires the client to send a supported_versions extension. This is a list of versions the client supports, in preference order, with the most preferred version first. Clients MUST send this extension, as servers are required to negotiate TLS 1.2 or below if it’s not present. Any version number unknown to the server MUST be ignored.
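
On the server side, the new negotiation logic is roughly the following (a simplified sketch with a hypothetical ClientHello structure; (3, 4) stands in for TLS 1.3):

from collections import namedtuple

# Hypothetical, heavily simplified ClientHello used for illustration only.
ClientHello = namedtuple("ClientHello", ["legacy_version", "extensions"])

TLS_1_2 = (3, 3)

def negotiate_version(hello, server_supported):
    versions = hello.extensions.get("supported_versions")
    if versions is None:
        # No extension: negotiate TLS 1.2 or below via legacy_version.
        return min(hello.legacy_version, TLS_1_2)
    for version in versions:              # client preference order, most preferred first
        if version in server_supported:   # unknown values are simply ignored
            return version
    raise ValueError("protocol_version alert: no version in common")

# A TLS 1.3 client talking to a server that supports 1.2 and 1.3.
hello = ClientHello(legacy_version=(3, 3),
                    extensions={"supported_versions": [(3, 4), (3, 3)]})
assert negotiate_version(hello, {(3, 3), (3, 4)}) == (3, 4)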

This still leaves potential problems with big ClientHello messages or choking on unknown extensions unaddressed, but according to David Benjamin the main problem is ClientHello.version. We will hopefully be able to ship browsers that have TLS 1.3 enabled by default, without bringing back insecure version fallbacks.

However, it’s not unlikely that implementers will screw up even the new version negotiation mechanism and we’ll face similar problems a few years down the road.

GREASE-ing the future

David Benjamin, following Adam Langley’s advice to have one joint and keep it well oiled, proposed GREASE (Generate Random Extensions And Sustain Extensibility), a mechanism to prevent extensibility failures in the TLS ecosystem.

The heart of the mechanism is to have clients inject “unknown values” into places where capabilities are advertised by the client, and the best match selected by the server. Servers MUST ignore unknown values to allow introducing new capabilities to the ecosystem without breaking interoperability.

These values will be advertised pseudo-randomly to break misbehaving servers early in the implementation process. Proposed injection points are cipher suites, supported groups, extensions, and ALPN identifiers. Should the server select a GREASE value and echo it in the ServerHello message, the client MUST abort the connection.
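
The reserved values follow a simple pattern (0x0A0A, 0x1A1A, ..., 0xFAFA), so client-side GREASE for cipher suites could look roughly like this (a sketch, not the proposal text):

import random

GREASE_VALUES = [(v << 8) | v for v in range(0x0A, 0x100, 0x10)]  # 0x0A0A ... 0xFAFA

def grease_cipher_suites(real_suites):
    # Advertise one pseudo-randomly chosen reserved value among the real suites.
    return [random.choice(GREASE_VALUES)] + list(real_suites)

def check_selected_suite(selected):
    # A well-behaved server must never select a reserved value.
    if selected in GREASE_VALUES:
        raise ValueError("server negotiated a GREASE value, abort the connection")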

September 30, 2016 02:00 PM

August 15, 2016

Dave Townsend (Mossop)

A new owner for the add-ons manager

I’ve been acting as the owner for the add-ons manager for the past little while and while I have always cared a lot about the add-ons space it is time to formally pass the torch. So I was pleased that Rob Helmer was willing to take it over from me.

Rob has been doing some exceptional work on making system add-ons (used as part of the go faster project) more robust and easier for Mozilla to use. He’s also been thinking a lot about improvements we can make to the add-ons manager code to make it more friendly to approach.

As my last act I’m updating the suggested reviewers in bugzilla to be him, Andrew Swan (who in his own right has been doing exceptional work on the add-ons manager) and me as a last resort. Please congratulate them and direct any questions you may have about the add-ons manager towards Rob.

August 15, 2016 11:29 PM

August 09, 2016

Tim Taubert (ttaubert)

Continuous Integration for NSS

The following image shows our TreeHerder dashboard after pushing a changeset to the NSS repository. It is the result of only a few weeks of work (on our side):

Based on my experience from building a Taskcluster CI for NSS over the last weeks, I want to share a rough outline of the process of setting this up for basically any Mozilla project, using NSS as an example.

What is the goal?

The development of NSS has for a long time been heavily supported by a fleet of buildbots. You can see them in action by looking at our waterfall diagram showing the build and test statuses of the latest pushes to the NSS repository.

Unfortunately, this setup is rather complex and the bots are slow. Build and test tasks are run sequentially and so on some machines it takes 10-15 hours before you will be notified about potential breakage.

The first thing that needs to be done is to replicate the current setup as well as possible and then split monolithic test runs into many small tasks that can be run in parallel. Builds will be prepared by build tasks; test tasks will later download those pieces (called artifacts) to run tests.

A good turnaround time is essential, ideally one should know whether a push broke the tree after not more than 15-30 minutes. We want a TreeHerder dashboard that gives a good overview of all current build and test tasks, as well as an IRC and email notification system so we don’t have to watch the tree all day.

Docker for Linux tasks

To build and test on Linux, Taskcluster uses Docker. The build instructions for the image containing all NSS dependencies, as well as the scripts to build and run tests, can be found in the automation/taskcluster/docker directory.

For a start, the fastest way to get something up and running (or building) is to use ADD in the Dockerfile to bake your scripts into the image. That way you can just pass them as the command in the task definition later.

# Add build and test scripts.
ADD bin /home/worker/bin
RUN chmod +x /home/worker/bin/*

Once you have NSS and its tests building and running in a local Docker container, the next step is to run a Taskcluster task in the cloud. You can use the Task Creator to spawn a one-off task, experiment with your Docker image, and with the task definition. Taskcluster will automatically pull your image from Docker Hub:


  "created": " ... ",
  "deadline": " ... ",
  "payload": {
    "image": "ttaubert/nss-ci:0.0.21",
    "command": [
    "maxRunTime": 3600


Docker and task definitions are well-documented, so this step shouldn’t be too difficult and you should be able to confirm everything runs fine. Now instead of kicking off tasks manually the next logical step is to spawn tasks automatically when changesets are pushed to the repository.

Using taskcluster-github

Triggering tasks on repository pushes should remind you of Travis CI, CircleCI, or AppVeyor, if you worked with any of those before. Taskcluster offers a similar tool called taskcluster-github that uses a configuration file in the root of your repository for task definitions.

If your canonical repository is a Mercurial one then it’s very helpful that you don’t have to mess with it while getting the configuration right; you can instead simply experiment with a fork on GitHub. The documentation is rather self-explanatory, and the task definition is similar to the one used by the Task Creator.

Once the WebHook is set up and receives pings, a push to your fork will make “Lisa Lionheart”, the Taskcluster bot, comment on your push and leave either an error message or a link to the task graph. If on the first try you see failures about missing scopes you are lacking permissions and should talk to the nice folks over in #taskcluster.

Move scripts into the repository

Once you have a GitHub fork spawning build and test tasks when pushing you should move all the scripts you wrote so far into the repository. The only script left on the Docker image would be a script that checks out the hg/git repository and then uses the scripts in the tree to build and run tests.

This step will pay off very early in the process: rebuilding and pushing the Docker image to Docker Hub is something that you really don’t want to do too often. All NSS scripts for Linux live in the automation/taskcluster/scripts directory.

#!/usr/bin/env bash

set -v -e -x

if [ $(id -u) = 0 ]; then
    # Drop privileges by re-running this script.
    exec su worker $0 $@
fi

# Do things here ...

Use the above snippet as a template for your scripts. It will set a few flags that help with debugging later, drop root privileges, and rerun it as the unprivileged worker user. If you need to do things as root before building or running tests, just put them before the exec su ... call.

Split build and test runs

Taskcluster encourages many small tasks. It’s easy to split the big monolithic test run I mentioned at the beginning into multiple tasks, one for each test suite. However, you wouldn’t want to build NSS before every test run again, so we should build it only once and then reuse the binary. Taskcluster lets a task leave artifacts behind that can then be downloaded by subtasks.

# Build.
cd nss && make nss_build_all

# Package.
mkdir artifacts
tar cvfjh artifacts/dist.tar.bz2 dist

The above snippet builds NSS and creates an archive containing all the binaries and libraries. You need to let Taskcluster know that there’s a directory with artifacts so that it picks those up and makes them available to the public.


  "created": " ... ",
  "deadline": " ... ",
  "payload": {
    "image": "ttaubert/nss-ci:0.0.21",
    "artifacts": {
      "public": {
        "type": "directory",
        "path": "/home/worker/artifacts",
        "expires": " ... "
    "command": [
    "maxRunTime": 3600


The test task then uses the $TC_PARENT_TASK_ID environment variable to determine the correct download URL, unpacks the build and starts running tests. Making artifacts automatically available to subtasks, without having to pass the parent task ID and build a URL, will hopefully be added to Taskcluster in the future.

# Fetch build artifact.
curl --retry 3 -Lo dist.tar.bz2 https://queue.taskcluster.net/v1/task/$TC_PARENT_TASK_ID/artifacts/public/dist.tar.bz2
tar xvjf dist.tar.bz2

# Run tests.
cd nss/tests && ./all.sh

Writing decision tasks

Specifying task dependencies in your .taskcluster.yml file is unfortunately not possible at the moment. Even though the set of builds and tasks you want may be static you can’t create the necessary links without knowing the random task IDs assigned to them.

Your only option is to create a so-called decision task. A decision task is the only task defined in your .taskcluster.yml file and started after you push a new changeset. It will leave an artifact in the form of a JSON file that Taskcluster picks up and uses to extend the task graph, i.e. schedule further tasks with appropriate dependencies. You can use whatever tool or language you like to generate these JSON files, e.g. Python, Ruby, Node, …

    image: "ttaubert/nss-ci:0.0.21"

    maxRunTime: 1800

        type: "directory"
        path: "/home/worker/artifacts"
        expires: "7 days"

      - /home/worker/artifacts/graph.json

All task graph definitions including the Node.JS build script for NSS can be found in the automation/taskcluster/graph directory. Depending on the needs of your project you might want to use a completely different structure. All that matters is that in the end you produce a valid JSON file. Slightly more intelligent decision tasks can be used to implement features like try syntax.
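
To make that concrete, a decision task could emit its graph along these lines (a rough Python sketch; the real NSS script is written in Node.js, and the actual graph schema, task payloads and script names differ):

import json

def task(name, command, requires=None):
    # Heavily simplified task description; the real schema needs IDs, metadata,
    # scopes, TreeHerder annotations and more.
    t = {
        "name": name,
        "payload": {
            "image": "ttaubert/nss-ci:0.0.21",
            "command": command,
            "maxRunTime": 3600,
        },
    }
    if requires:
        t["requires"] = requires  # only run after these tasks succeeded
    return t

graph = {"tasks": [
    task("Linux build", ["bin/build.sh"]),
    task("Cipher tests", ["bin/run_tests.sh"], requires=["Linux build"]),
]}

with open("/home/worker/artifacts/graph.json", "w") as f:
    json.dump(graph, f, indent=2)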

mozilla-taskcluster for Mercurial projects

If you have all of the above working with GitHub but your main repository is hosted on hg.mozilla.org, you will want to have Mercurial spawn decision tasks when pushing.

The Taskcluster team is working on making .taskcluster.yml files work for Mozilla-hosted Mercurial repositories too, but while that work isn’t finished yet you have to add your project to mozilla-taskcluster. mozilla-taskcluster will listen for pushes and then kick off tasks just like the WebHook.

TreeHerder Configuration

A CI is no CI without a proper dashboard. That’s the role of TreeHerder at Mozilla. Add your project to the end of the repository.json file and create a new pull request. It will usually take a day or two after merging until your change is deployed and your project shows up in the dashboard.

TreeHerder gets the per-task configuration from the task definition. You can configure the symbol, the platform and collection (i.e. row), and other parameters. Here’s the configuration data for the green B at the start of the fifth row of the image at the top of this post:


  "created": " ... ",
  "deadline": " ... ",
  "payload": {
  "extra": {
    "treeherder": {
      "jobKind": "build",
      "symbol": "B",
      "build": {
        "platform": "linux64"
      "machine": {
        "platform": "linux64"
      "collection": {
        "debug": true


IRC and email notifications

Taskcluster is a very modular system and offers many APIs. It’s built mostly with Node, and thus there are many Node libraries available to interact with its many parts. The communication between those parts is realized by Pulse, a managed RabbitMQ cluster.

The last missing piece was an IRC and email notification system: a bot that reports failures on IRC and sends emails to all parties involved. It was a piece of cake to write nss-tc, which uses the Taskcluster Node.js libraries and Mercurial JSON APIs to connect to the task queue and listen for task definitions and failures.
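
The core of such a bot is just a queue bound to the task-failed exchange on Pulse. A rough Python equivalent using pika might look like this (the real nss-tc bot is written in Node.js; the account, queue name and credentials below are placeholders):

import json
import pika

# Pulse requires an account; user and password here are placeholders.
params = pika.URLParameters("amqps://user:password@pulse.mozilla.org:5671")
channel = pika.BlockingConnection(params).channel()

queue = "queue/user/nss-task-failures"
channel.queue_declare(queue=queue, auto_delete=True)
channel.queue_bind(queue=queue,
                   exchange="exchange/taskcluster-queue/v1/task-failed",
                   routing_key="#")

def on_message(ch, method, properties, body):
    status = json.loads(body)["status"]
    print("Task failed:", status["taskId"])  # notify IRC or send email here instead

channel.basic_consume(queue=queue, on_message_callback=on_message, auto_ack=True)
channel.start_consuming()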

A rough overview

I could have probably written a detailed post for each of the steps outlined here but I think it’s much more helpful to start with an overview of what’s needed to get the CI for a project up and running. Each step and each part of the system should hopefully be clearer now, even if you haven’t had much interaction with Taskcluster and TreeHerder so far.

Thanks to the Taskcluster team, especially John Ford, Greg Arndt, and Pete Moore! They helped us pull this off in a matter of weeks and besides Linux builds and tests we already have Windows tasks, static analysis, ASan+LSan, and are in the process of setting up workers for ARM builds and tests.

August 09, 2016 02:00 PM

August 03, 2016

Matthew Noorenberghe (MattN)

Filling saved HTTP logins over HTTPS for the same domain

Thanks to initiatives like Let’s Encrypt (co-founded by Mozilla), adoption of HTTPS has been increasing, making the web more secure. Firefox has adapted its Login Manager to make this transition smooth for users.

Firefox’s Login Manager has been using strict origin matching when looking for saved logins for a website. Hence, when a login is saved on the HTTP version of a webpage, it is not available to the HTTPS version. We’ve changed this in Firefox 49 so that logins saved on the HTTP version of a page are also available to the upgraded and more secure HTTPS version.

Without this change, a website that transitioned to HTTPS for its login page would not have the existing saved HTTP logins readily available to Firefox users. The insecure and secure versions of the site may appear the same to users, and hence they expect the login to still be available. Websites also want their users to be able to continue to easily log in after transitioning to HTTPS, instead of experiencing a surge in account lockout support requests. Some websites have even delayed the switch to HTTPS because of this issue.

In order to continue to help users log in, starting in Firefox 49 saved HTTP logins will be available upon visiting the HTTPS version of a site. The saved HTTP login entry won’t be modified when it’s used on HTTPS. If a password is changed on the HTTPS version of the site, the saved login will be upgraded and only used on the HTTPS origin in the future. This is to prevent a new password created over HTTPS from leaking to HTTP. HTTPS logins will continue to only be filled for the same origin (over HTTPS) and won’t be downgraded (to HTTP) for security reasons.

These changes should help ease the transition to HTTPS. Cheers to a more secure web!
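
For the curious, the fill rule described above amounts to roughly the following (a simplified sketch, not Firefox’s actual Login Manager code; real origins also involve ports):

def logins_to_fill(form_origin, saved_logins):
    http_twin = None
    if form_origin.startswith("https://"):
        http_twin = "http://" + form_origin[len("https://"):]
    matches = []
    for login in saved_logins:
        if login["origin"] == form_origin:
            matches.append(login)      # exact origin match, as before
        elif http_twin and login["origin"] == http_twin:
            matches.append(login)      # HTTP login offered on the HTTPS upgrade
        # The reverse never happens: HTTPS logins are not downgraded to HTTP.
    return matches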

August 03, 2016 07:32 PM

July 26, 2016

Tim Taubert (ttaubert)

The Evolution of Signatures in TLS

This post will take a look at the evolution of signature algorithms and schemes in the TLS protocol since version 1.0. I at first started taking notes for myself but then decided to polish and publish them, hoping that others will benefit as well.

(Let’s ignore client authentication for simplicity.)

Signature algorithms in TLS 1.0 and TLS 1.1

In TLS 1.0 as well as TLS 1.1 there are only two supported signature schemes: RSA with MD5/SHA-1 and DSA with SHA-1. The RSA here stands for the PKCS#1 v1.5 signature scheme, naturally.

select (SignatureAlgorithm) {
    case rsa:
        digitally-signed struct {
            opaque md5_hash[16];
            opaque sha_hash[20];
        };
    case dsa:
        digitally-signed struct {
            opaque sha_hash[20];
        };
} Signature;

An RSA signature signs the concatenation of the MD5 and SHA-1 digest, the DSA signature only the SHA-1 digest. Hashes will be computed as follows:

h = Hash(ClientHello.random + ServerHello.random + ServerParams)

The ServerParams are the actual data to be signed, the *Hello.random values are prepended to prevent replay attacks. This is the reason TLS 1.3 puts a downgrade sentinel at the end of ServerHello.random for clients to check.
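
In code, the TLS 1.0/1.1 computation is simply the following (a small hashlib illustration, not NSS code):

import hashlib

def rsa_signed_hashes(client_random, server_random, server_params):
    # RSA signatures cover MD5(data) || SHA-1(data), where data is just the
    # two nonces and the server's key exchange parameters.
    data = client_random + server_random + server_params
    return hashlib.md5(data).digest() + hashlib.sha1(data).digest()

def dsa_signed_hash(client_random, server_random, server_params):
    # DSA signatures cover only the SHA-1 digest of the same data.
    data = client_random + server_random + server_params
    return hashlib.sha1(data).digest()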

The ServerKeyExchange message containing the signature is sent only when static RSA/DH key exchange is not used; that means we have a DHE_* cipher suite, an RSA_EXPORT_* suite downgraded due to export restrictions, or a DH_anon_* suite where neither party authenticates.

Signature algorithms in TLS 1.2

TLS 1.2 brought bigger changes to signature algorithms by introducing the signature_algorithms extension. This is a ClientHello extension allowing clients to signal supported and preferred signature algorithms and hash functions.

enum {
    none(0), md5(1), sha1(2), sha224(3), sha256(4), sha384(5), sha512(6)
} HashAlgorithm;

enum {
    anonymous(0), rsa(1), dsa(2), ecdsa(3)
} SignatureAlgorithm;

struct {
    HashAlgorithm hash;
    SignatureAlgorithm signature;
} SignatureAndHashAlgorithm;

If a client does not include the signature_algorithms extension then it is assumed to support RSA, DSA, or ECDSA (depending on the negotiated cipher suite) with SHA-1 as the hash function.

Besides adding all SHA-2 family hash functions, TLS 1.2 also introduced ECDSA as a new signature algorithm. Note that the extension does not allow restricting the curve used for a given scheme; P-521 with SHA-1 is therefore perfectly legal.

A new requirement for RSA signatures is that the hash has to be wrapped in a DER-encoded DigestInfo sequence before passing it to the RSA sign function.

DigestInfo ::= SEQUENCE {
    digestAlgorithm DigestAlgorithm,
    digest OCTET STRING
}
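
With SHA-256, for example, the value handed to the RSA signing primitive is built roughly like this (a sketch using the well-known DER prefix for the SHA-256 AlgorithmIdentifier; the PKCS#1 v1.5 padding applied afterwards is omitted):

import hashlib

# DER-encoded DigestInfo prefix for SHA-256, as listed in RFC 3447.
SHA256_PREFIX = bytes.fromhex("3031300d060960864801650304020105000420")

def digest_info_sha256(data):
    # The DigestInfo that gets padded and signed by the RSA primitive.
    return SHA256_PREFIX + hashlib.sha256(data).digest()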

This unfortunately led to attacks like Bleichenbacher’06 and BERserk because it turns out handling ASN.1 correctly is hard. As in TLS 1.1, a ServerKeyExchange message is sent only when static RSA/DH key exchange is not used. The hash computation did not change either:

h = Hash(ClientHello.random + ServerHello.random + ServerParams)

Signature schemes in TLS 1.3

The signature_algorithms extension introduced by TLS 1.2 was revamped in TLS 1.3 and MUST now be sent if the client offers at least one non-PSK cipher suite. The format is backwards compatible and keeps some old code points.

enum {
    /* RSASSA-PKCS1-v1_5 algorithms */
    rsa_pkcs1_sha1 (0x0201),
    rsa_pkcs1_sha256 (0x0401),
    rsa_pkcs1_sha384 (0x0501),
    rsa_pkcs1_sha512 (0x0601),

    /* ECDSA algorithms */
    ecdsa_secp256r1_sha256 (0x0403),
    ecdsa_secp384r1_sha384 (0x0503),
    ecdsa_secp521r1_sha512 (0x0603),

    /* RSASSA-PSS algorithms */
    rsa_pss_sha256 (0x0700),
    rsa_pss_sha384 (0x0701),
    rsa_pss_sha512 (0x0702),

    /* EdDSA algorithms */
    ed25519 (0x0703),
    ed448 (0x0704),

    /* Reserved Code Points */
    private_use (0xFE00..0xFFFF)
} SignatureScheme;

Instead of SignatureAndHashAlgorithm, a code point is now called a SignatureScheme and tied to a hash function (if applicable) by the specification. TLS 1.2 algorithm/hash combinations not listed here are deprecated and MUST NOT be offered or negotiated.

New code points for RSA-PSS schemes, as well as Ed25519 and Ed448-Goldilocks were added. ECDSA schemes are now tied to the curve given by the code point name, to be enforced by implementations. SHA-1 signature schemes SHOULD NOT be offered, if needed for backwards compatibility then only as the lowest priority after all other schemes.

The current draft-13 lists RSASSA-PSS as the only valid signature algorithm allowed to sign handshake messages with an RSA key. The rsa_pkcs1_* values solely refer to signatures which appear in certificates and are not defined for use in signed handshake messages.

To prevent various downgrade attacks like FREAK and Logjam the computation of the hashes to be signed has changed significantly and covers the complete handshake, up until CertificateVerify:

h = Hash(Handshake Context + Certificate) + Hash(Resumption Context)

This includes amongst other data the client and server random, key shares, the cipher suite, the certificate, and resumption information to prevent replay and downgrade attacks. With static key exchange algorithms gone the CertificateVerify message is now the one carrying the signature.

July 26, 2016 02:00 PM

July 08, 2016

Brian Bondy (bbondy)

Shutting down Code Firefox

On October 07, 2016 I'll be shutting down Code Firefox.

With over 100k unique visits and over 30k full video views, it helped hundreds of Mozillians and Mozilla employees get ramped up for hacking on Firefox. It was from the start a personal side project and wasn't funded or sponsored in any way.

It isn’t being shut down because of a conflict of interest with Brave, but instead because over time the content naturally becomes more obsolete as better development methods surface.

I'm happy with what it is and the purpose it had, but shutting it down is the responsible thing to do.

July 08, 2016 12:00 AM

May 13, 2016

Tim Taubert (ttaubert)

Six Months as a Security Engineer

It’s been a little more than six months since I officially switched to the Security Engineering team here at Mozilla to work on NSS and related code. I thought this might be a good time to share what I’ve been up to in a short status update:

Removed SSLv2 code from NSS

NSS contained quite a lot of SSLv2-specific code that was waiting to be removed. It was not compiled by default so there was no way to enable it in Firefox even if you wanted to. The removal was rather straightforward as the protocol changed significantly with v3 and most of the code was well separated. Good riddance.

Added ChaCha20/Poly1305 cipher suites to Firefox

Adam Langley submitted a patch to bring ChaCha20/Poly1305 cipher suites to NSS two years ago already, but at that time we likely didn’t have enough resources to polish and land it. I picked up where he left off and updated it to conform to the slightly updated specification. Firefox 47 will ship with two new ECDHE/ChaCha20 cipher suites enabled.

RSA-PSS for TLS v1.3 and the WebCrypto API

Ryan Sleevi, also a while ago, implemented RSA-PSS in freebl, the lower cryptographic layer of NSS. I hooked it up to some more APIs so Firefox can support RSA-PSS signatures in its WebCrypto API implementation. In NSS itself we need it to support new handshake signatures in our experimental TLS v1.3 code.

Improve continuous integration for NSS

Kai Engert from RedHat is currently doing a hell of a job maintaining quite a few buildbots that run all of our NSS tests whenever someone pushes a new changeset. Unfortunately the current setup doesn’t scale too well and the machines are old and slow.

Similar to e.g. Travis CI, Mozilla maintains its own continuous integration and release infrastructure, called TaskCluster. Using TaskCluster we now have an experimental Docker image that builds NSS/NSPR and runs all of our 17 (so far) test suites. The turnaround time is already very promising. This is an ongoing effort, there are lots of things left to do.

Joined the WebCrypto working group

I’ve been working on the Firefox WebCrypto API implementation for a while, long before I switched to the Security Engineering team, and so it made sense to join the working group to help finalize the specification. I’m unfortunately still struggling to carve out more time for involvement with the WG than just attending meetings and representing Mozilla.

Added HKDF to the WebCrypto API

The main reason the WebCrypto API in Firefox did not support HKDF until recently is that no one found the time to implement it. I finally did find some time and brought it to Firefox 46. It is fully compatible with Chrome’s implementation (RFC 5869); the WebCrypto specification still needs to be updated to reflect those changes.

Added SHA-2 for PBKDF2 in the WebCrypto API

Since we shipped the first early version of the WebCrypto API, SHA-1 was the only available PRF to be used with PBKDF2. We now support PBKDF2 with SHA-2 PRFs as well.

Improved the Firefox WebCrypto API threading model

Our initial implementation of the WebCrypto API would naively spawn a new thread every time a crypto.subtle.* method was called. We now use a thread pool per process that is able to handle all incoming API calls much faster.

Added WebCrypto API to Workers and ServiceWorkers

After working on this on and off for more than six months, so even before I officially joined the security engineering team, I managed to finally get it landed, with a lot of help from Boris Zbarsky who had to adapt our WebIDL code generation quite a bit. The WebCrypto API can now finally be used from (Service)Workers.

What’s next?

In the near future I’ll be working further on improving our continuous integration infrastructure for NSS, and clean up the library and its tests. I will hopefully find the time to write more about it as we progress.

May 13, 2016 04:00 PM

April 29, 2016

Tim Taubert (ttaubert)

A Fast, Constant-time AEAD for TLS

The only TLS v1.2+ cipher suites with a dedicated AEAD scheme are the ones using AES-GCM, a block cipher mode that turns AES into an authenticated cipher. From a cryptographic point of view these are preferable to non-AEAD-based cipher suites (e.g. the ones with AES-CBC) because getting authenticated encryption right is hard without using dedicated ciphers.

On CPUs without the AES-NI instruction set, however, constant-time AES-GCM is slow and also hard to write and maintain. The majority of mobile phones, as well as cheaper devices like tablets and notebooks, thus cannot support efficient and safe AES-GCM cipher suite implementations.

Even if we ignored all those aforementioned pitfalls we still wouldn’t want to rely on AES-GCM cipher suites as the only good ones available. We need more diversity. Having widespread support for cipher suites using a second AEAD is necessary to defend against weaknesses in AES or AES-GCM that may be discovered in the future.

ChaCha20 and Poly1305, a stream cipher and a message authentication code, were designed with fast and constant-time implementations in mind. A combination of those two algorithms yields a safe and efficient AEAD construction, called ChaCha20/Poly1305, which allows TLS with a negligible performance impact even on low-end devices.

Firefox 47 will ship with two new ECDHE/ChaCha20 cipher suites as specified in the latest draft. We are looking forward to seeing the adoption of these increase and will, as a next step, work on prioritizing them over AES-GCM suites on devices that don’t support AES-NI.

April 29, 2016 01:00 PM

February 12, 2016

Matthew Noorenberghe (MattN)

Firefox screenshots can now be easily captured and compared in automation

Back in 2013, during Australis[1] development, I created a tool called mozscreenshots. The purpose of this tool was to help detect UI regressions and make it easier to review how the UI looks in various configurations on each of our supported platforms. The main hindrance to its regular usage was that it required developers and designers to install it on each machine, tying it up while the images were captured.

The great news is that mozscreenshots is now running in automation on Nightlies and on-demand for try pushes, making it much easier to capture images. Comparing the captured images to a reference/base version can now be done via a web interface which means the images don’t need to be downloaded for review. mozscreenshots has already detected two issues during its first week in automation, and I hope that this tool will improve the quality of desktop Firefox by surfacing UI issues sooner.

You can read about it in the documentation on MDN, but here’s a quick start guide:

Change detection via Nightly captures

In order to detect UI regressions, screenshots of some sets run on every Nightly. These are compared to the previous day’s Nightly, using the compare_screenshots tool (which uses ImageMagick’s `compare`). For now I’m manually kicking this off each day but soon I hope to automate this and have it send an email to interested parties for investigation. Currently, only one run of "TabsInTitlebar,Tabs,WindowSize,Toolbars,LightweightThemes" occurs on Nightlies. I plan to add runs with the DevEdition theme, DevTools, and Preferences shortly, as the code for those are already written.

Capturing on Try

You can request screenshots be captured on a Try push for UI review or comparison to a known-good base by requesting the "mochitest-browser-screenshots" test job. You can specify what you would like captured by setting the MOZSCREENSHOTS_SETS environment variable with a comma-separated list of configurations like so:

try: -b o -p linux,linux64,macosx64,win32,win64 -t none -u mochitest-browser-screenshots[Ubuntu,10.6,10.10,Windows XP,Windows 7,Windows 8] --setenv MOZSCREENSHOTS_SETS=TabsInTitlebar,WindowSize,LightweightThemes

Note that the job is currently Tier 3 and "excluded" on TreeHerder, so you will need to toggle both of those filters to see the jobs there with the symbol "M[Tier-3] (ss)". Unlike Nightlies, Try pushes won’t capture any images by default if MOZSCREENSHOTS_SETS isn’t specified. This avoids capturing images when developers request all mochitest runs but don’t really want the screenshots.

The capturing is implemented as a mochitest-bc test in the "screenshots" subsuite, meaning configurations can use things like BrowserTestUtils and such. See fetch_screenshots if you would like to download the captured images.

Comparing screenshots

The simplest way to compare images is via the web UI. Simply provide the project and revision of a base push with images, like the Nightly rev. from about:buildconfig, and your new revision, like a try push with some patches to review. In the background, the images are fetched from automation via fetch_screenshots and then compared using compare_screenshots, with the output displayed on the page. The first comparison for a pair of revisions can take several minutes, as around one thousand (5 platforms x 2 revisions x 100 screenshots) images need to be downloaded and compared for the default set of screenshots. The results are cached, so subsequent comparisons for the same revision are much faster.
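
Under the hood each per-image check amounts to something like the following (a rough sketch invoking ImageMagick’s compare, not the actual compare_screenshots code):

import subprocess

def images_differ(image_a, image_b, diff_path):
    # ImageMagick's `compare` with the AE (absolute error) metric prints the
    # number of differing pixels on stderr and writes a visual diff image.
    result = subprocess.run(
        ["compare", "-metric", "AE", image_a, image_b, diff_path],
        capture_output=True, text=True)
    return float(result.stderr.split()[0]) > 0  # error handling omitted
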
Example image generated when a change is detected
There are a lot of opportunities for this tool (e.g. pulse integration, notifications, simplified bug filing based on differences, etc.), and I hope to continue to improve the workflow. Please file bugs on the capturing infrastructure blocking the "mozscreenshots" meta bug (1169179), and file bugs related to the web UI, comparison and fetching against the mozscreenshots project itself. See also the list of issues and ideas which haven’t been filed yet.

Thank you to everyone who has helped with getting mozscreenshots to this point, specifically Felipe Gomes, Armen Zambrano Gasparnian, Joel Maher, Brian Grinstead, Kit Cambridge, and Justin Dolske.

[1] Australis was the code name of the Firefox UI redesign that launched April 29, 2014.

February 12, 2016 03:55 AM

January 27, 2016

Dave Townsend (Mossop)

Improving the performance of the add-ons manager with asynchronous file I/O

The add-ons manager has a dirty secret. It uses an awful lot of synchronous file I/O. This is the kind of I/O that blocks the main thread and can cause Firefox to be janky. I’m told that that is a technical term. Asynchronous file I/O is much nicer, it means you can let the rest of the app continue to function while you wait for the I/O operation to complete. I rewrote much of the current code from scratch for Firefox 4.0 and even back then we were trying to switch to asynchronous file I/O wherever possible. But still I used mostly synchronous file I/O.

Here is the problem. For many moons we have allowed other applications to install add-ons into Firefox by dropping them into the filesystem or registry somewhere. We also have to do things like updating and installing non-restartless add-ons during startup when their files aren’t in use. And we have to know the full set of non-restartless add-ons that we are going to activate quite early in startup so the startup function for the add-ons manager has to do all those installs and a scan of the extension folders before returning back to the code starting up the browser, and that means being synchronous.

The other problem is that the things for which we could conceivably use async I/O, like installs and updates of restartless add-ons during runtime, need the same code for loading and parsing manifests, extracting zip files and so on that has to be synchronous during startup. So we could either write a second, asynchronous version to get nice performance at runtime, or use the synchronous version everywhere so we only have one version to test and maintain. Keeping things synchronous was where things fell in the end.

That’s always bugged me though. Runtime is the most important time to use asynchronous I/O. We shouldn’t be janking the browser when installing a large add-on particularly on mobile and so we have taken some steps since Firefox 4 to make parts of the code asynchronous. But there is still a bunch there.

The plan

The thing is that there isn’t actually a reason we can’t use the asynchronous I/O functions. All we have to do is make sure that startup is blocked until everything we need is complete. It’s pretty easy to spin an event loop from JavaScript to wait for asynchronous operations to complete so why not do that in the startup function and then start making things asynchronous?

Performance is pretty important for the add-ons manager startup code; the longer we spend in startup the more it hurts us. Would this switch slow things down? I assumed that there would be some losses due to other things happening during an event loop tick that otherwise wouldn’t have but that the file I/O operations should take around the same time. And here is the clever bit. Because it is asynchronous I could fire off operations to run in parallel. Why check the modification time of every file in a directory one file at a time when you can just request the times for every file and wait until they all complete?

There are really a tonne of things that could affect whether this would be faster or slower and no amount of theorising was convincing me either way and last night this had finally been bugging me for long enough that I grabbed a bottle of wine, fired up the music and threw together a prototype.

The results

It took me a few hours to switch most of the main methods to use Task.jsm, switch much of the likely hot code to use OS.File and to run in parallel where possible and generally cover all the main parts that run on every startup and when add-ons have changed.

The challenge was testing. Default talos runs don’t include any add-ons (or maybe one or two) and I needed a few different profiles to see how things behaved in different situations. It was possible that startups with no add-ons would be affected quite differently to startups with many add-ons. So I had to figure out how to add extensions to the default talos profiles for my try runs and fired off try runs for the cases where there were no add-ons, 200 unpacked add-ons with a bunch of files and 200 packed add-ons. I then ran all those a second time with deleting extensions.json between each run to force the database to be loaded and rebuilt. So six different talos runs for the code without my changes and then another six with my changes and I triggered ten runs per test and went to bed.

The first thing I did this morning was check the performance results. The first result was with 200 packed add-ons in the profile, which should be a good check of the file scanning. How did it do? Amazing! Incredible! A greater than 50% performance improvement across the board! That’s astonishing! No really that’s pretty astonishing. It would have to mean the add-ons manager takes up at least 50% of the browser startup time and I’m pretty sure it doesn’t. Oh right I’m accidentally comparing to the test run with 200 packed add-ons and a database reset with my async code. Well I’d expect that to be slower.

Ok, let’s get it right. How did it really do? Abysmally! Like incredibly badly. Across the board in every test run startup is significantly slower with the asynchronous I/O than without. With no add-ons in the profile the new code incurs a 20% performance hit. In the case with 200 unpacked add-ons? An almost 1000% hit!

What happened?

Ok so that wasn’t the best result but at least it will stop bugging me now. I figure there are two things going on here. The first is that OS.File might look like you can fire off I/O operations in parallel but in fact you can’t. Every call you make goes into a queue and the background worker thread doesn’t start on one operation until the previous has completed. So while the I/O operations themselves might take about the same time you have the added overhead of passing messages between the background thread and promises. I probably should have checked that before I started! Oh, and promises. Task.jsm and OS.File make heavy use of promises and I have to say I’m sold on using them for async code. But. Every time you wait for a promise you have to wait at least one tick of the event loop longer than you would with a simple callback. That’s great if you want responsive UI but during startup every event loop tick costs time since other code might be running that you don’t care about.

I still wonder if we could get more threads for OS.File whether it would speed things up but that’s beyond where I want to play with things for now so I guess this is where this fun experiment ends. Although now I have a bunch of code converted I wonder if I can create some replacements for OS.File and Task.jsm that behave synchronously during startup and asynchronously at runtime, then we get the best of both worlds … where did that bottle of wine go?

January 27, 2016 08:35 PM

January 21, 2016

Dave Townsend (Mossop)

An editable box model view in the devtools

This week the whole devtools group has been sequestered in Mozilla’s Portland office having one of our regular meet-ups. As always it’s been a fantastically productive week with lots of demos to show for it. I’ll be writing a longer write-up later but I wanted to post about what I played with over the week.

My wife does the odd bit of web development on the side. For a long time she was a loyal Firebug user and a while ago I asked her what she thought of Firefox’s built in devtools. She quickly pointed out a couple of features that Firebug had that Firefox did not. As I was leaving for this week I mentioned I’d be with the devtools group and she asked whether her features had been fixed yet. It turns out that colour swatches had been but the box model still wasn’t editable. So I figured I could earn myself some brownie points by hacking on that this week.

The goal here is to be able to inspect an element on the page, pull up the box model and be able to quickly play with the margins, borders and padding to tweak the positioning until it looks right. Then armed with the right values you can go update your stylesheets. It saves a lot of trial and error with positioning.

It turned out to be relatively simple to implement a pretty full version. The feature allows you to click one of the box model values and type whatever value you like, in any CSS unit you prefer. If the size had been set in the stylesheet in some specific unit then that is what appears in the input box for you to change. Better yet as you type numbers the element updates in the page on the fly and you can use the arrow keys to increase/decrease the value until you’re happy. It’s a really natural way to play with the element’s position.

The changes made appear on the element so you can find them in the rule view pretty easily. This patch is based on an updated version of the box model view which is why it looks so different to existing Firefox, all my work does is make the numbers editable.

I actually completed this so quickly that I decided to take this one step further. One thing missing from the box model display is information about border colours. So I added some colour swatches for each border and made them editable with the regular devtools colour picker.

Both of these patches are pretty much complete but they’ll have to wait for the new box model highlighter to be complete before they can be reviewed and land.

January 21, 2016 05:30 PM

January 20, 2016

Brian Bondy (bbondy)

Brave Software

Since June of last year, I’ve been co-founding a new startup called Brave Software with Brendan Eich. With our amazing team, we're developing something pretty epic.

We're building the next-generation of browsers for smartphones and laptops as part of our new ad-tech platform. Our terms of use give our users control over their personal data by blocking ad trackers and third party cookies. We re-integrate fewer and better ads directly into programmatic ad positions, paying revenue shares to users and publishers to support both of these essential parties in the web ecosystem.

Coming built in, we have new faster engines for tracking protection, ad block, HTTPS Everywhere, safe ads with rev-share, and more. We're seeing massive web page load time speedups.

We're starting to bring people in for early developer build access on all platforms.

I’m happy to share that the browsers we’re developing are fully open source. We welcome contributors, and would love your help.

Some of the repositories include:

January 20, 2016 12:00 AM

January 15, 2016

Tim Taubert (ttaubert)

Build Your Own Signal Desktop

The Signal Private Messenger is great. Use it. It’s probably the best secure messenger on the market. When recently a desktop app was announced people were eager to join the beta and even happier when an invite finally showed up in their inbox. So was I, it’s a great app and works surprisingly well for an early version.

The only problem is that it’s a Chrome App. Apart from excluding folks with other browsers it’s also a shitty user experience. If you too want your messaging app not tied to a browser then let’s just build our own standalone variant of Signal Desktop.

NW.js beta with Chrome App support

Signal Desktop is a Chrome App, so the easiest way to turn it into a standalone app is to use NW.js. Conveniently, their next release v0.13 will ship with Chrome App support and is available for download as a beta version.

First, make sure you have git and npm installed. Then open a terminal and prepare a temporary build directory to which we can download a few things and where we can build the app:

$ mkdir signal-build
$ cd signal-build

[OS X] Packaging Signal and NW.js

Download the latest beta of NW.js and unzip it. We’ll extract the application and use it as a template for our Signal clone. The NW.js project does unfortunately not seem to provide a secure source (or at least hashes) for their downloads.

$ wget https://dl.nwjs.io/v0.14.4/nwjs-sdk-v0.14.4-osx-x64.zip
$ unzip nwjs-sdk-v0.14.4-osx-x64.zip
$ cp -r nwjs-sdk-v0.14.4-osx-x64/nwjs.app SignalDesktop.app  # any app name works

Next, clone the Signal repository and use NPM to install the necessary modules. Run the grunt automation tool to build the application.

$ git clone https://github.com/WhisperSystems/Signal-Desktop.git
$ cd Signal-Desktop/
$ npm install
$ node_modules/grunt-cli/bin/grunt

Finally, simply copy the dist folder containing all the juicy Signal files into the application template we created a few moments ago.

$ cp -r dist ../SignalDesktop.app/Contents/Resources/app.nw  # NW.js expects app.nw inside the bundle
$ open ..

The last command opens a Finder window. Move the new app to your Applications folder and launch it as usual. You should now see a welcome page!

[Linux] Packaging Signal and NW.js

The build instructions for Linux aren’t too different but I’ll write them down, if just for convenience. Start by cloning the Signal Desktop repository and build.

$ git clone https://github.com/WhisperSystems/Signal-Desktop.git
$ cd Signal-Desktop/
$ npm install
$ node_modules/grunt-cli/bin/grunt

The dist folder contains the app, ready to be launched. zip it and place the resulting package somewhere handy.

$ cd dist
$ zip -r ../../package.nw *

Back to the top. Download the NW.js binary, extract it, and change into the newly created directory. Move the package.nw file we created earlier next to the nw binary and we’re done. The nwjs-sdk-v0.14.4-linux-x64 folder now contains the standalone Signal app.

$ cd ../..
$ wget https://dl.nwjs.io/v0.14.4/nwjs-sdk-v0.14.4-linux-x64.tar.gz
$ tar xfz nwjs-sdk-v0.14.4-linux-x64.tar.gz
$ cd nwjs-sdk-v0.14.4-linux-x64
$ mv ../package.nw .

Finally, launch NW.js. You should see a welcome page!

$ ./nw

If you see something, file something

Our standalone Signal clone mostly works, but it’s far from perfect. We’re pulling from master and that might bring breaking changes that weren’t sufficiently tested.

We don’t have the right icons. The app crashes when you click a media message. It opens a blank popup when you click a link. It’s quite big because NW.js also has bugs and so we have to use the SDK build for now. In the future it would be great to have automatic updates, and maybe even signed builds.

Remember, Signal Desktop is beta, and completely untested with NW.js. If you want to help, file bugs, but only after checking that they affect the Chrome App too. If you want to fix a bug that only occurs in the standalone version it’s probably best to file a pull request and cross your fingers.

Is this secure?

Great question! I don’t know. I would love to get some more insights from people that know more about the NW.js security model and whether it comes with all the protections Chromium can offer. Another interesting question is whether bundling Signal Desktop with NW.js is in any way worse (from a security perspective) than installing it as a Chrome extension. If you happen to have an opinion about that, I would love to hear it.

Another important thing to keep in mind is that when building Signal on your own you will possibly miss automatic and signed security updates from the Chrome Web Store. Keep an eye on the repository and rebuild your app from time to time to not fall behind too much.

January 15, 2016 02:00 PM

December 18, 2015

Dave Townsend (Mossop)

Linting for Mozilla JavaScript code

One of the projects I’ve been really excited about recently is getting ESLint working for a lot of our JavaScript code. If you haven’t come across ESLint or linters in general before they are automated tools that scan your code and warn you about syntax errors. They can usually also be set up with a set of rules to enforce code styles and warn about potential bad practices. The devtools and Hello folks have been using eslint for a while already and Gijs asked why we weren’t doing this more generally. This struck a chord with me and a few others and so we’ve been spending some time over the past few weeks getting our in-tree support for ESLint to work more generally and fixing issues with browser and toolkit JavaScript code in particular to make them lintable.

One of the hardest challenges with this is that we have a lot of non-standard JavaScript in our tree. This includes things like preprocessing as well as JS features that either have since been standardized with a different syntax (generator functions for example) or have been dropped from the standardization path (array comprehensions). This is bad for developers as editors can’t make sense of our code and provide good syntax highlighting and refactoring support when it doesn’t match standard JavaScript. There are also plans to remove the non-standard JS features so we have to remove that stuff anyway.

So a lot of the work done so far has been removing all this non-standard stuff so that ESLint can pass with only a very small set of style rules defined. Soon we’ll start increasing the rules we check in browser and toolkit.

How do I lint?

From the command line this is simple. Make sure to run ./mach eslint --setup to install eslint and some related packages, then just run ./mach eslint <directory or file> to lint a specific area. You can also lint the entire tree. For now you may need to periodically run setup again as we add new dependencies; at some point we may make mach automatically detect that you need to re-run it.

You can also add ESLint support to many code editors and as of today you can add ESLint support into hg!

Why lint?

Aside from just ensuring we have standard JavaScript in the tree linting offers a whole bunch of benefits.

Where are we?

There’s a full list of everything that has been done so far in the dependencies of the ESLint metabug but some highlights:

What’s next?

I’m grateful to all those that have helped get things moving here but there is still more work to do. If you’re interested there’s really two ways you can help. We need to lint more files and we need to turn on more lint rules.

The .eslintignore file shows what files are currently ignored in the lint checks. Removing files and directories from that involves fixing JavaScript standards issues generally, but every extra file we lint is a win for all the above reasons so it is valuable work. It’s also mostly straightforward once you get the hang of it; there are just a lot of files.

We also need to turn on more rules. We’ve got a rough list of the rules we want to turn on in browser and toolkit but as you might guess they aren’t on because they fail right now. Fixing up our JS to work with them is simple work but much appreciated. In some cases ESLint can also do the work for you!
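
For a sense of what turning on rules looks like, here is an illustrative .eslintrc fragment; the rule selection below is made up for the example and is not Mozilla’s actual in-tree configuration. A value of 0 disables a rule, 1 reports a warning, and 2 reports an error.

{
  "rules": {
    "semi": 2,
    "no-unused-vars": 1,
    "no-undef": 2,
    "consistent-return": 2
  }
}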

December 18, 2015 07:27 PM

December 02, 2015

Christian Legnitto (LegNeato)

Star rating is the worst metric I have ever seen

Note: The below is a slightly modified version of a rant I posted internally at Facebook when I was shipping their mobile apps. Even though the post is years old, I think the issues with star rating still apply in general. These days I mainly rant on Twitter.

Not only is star rating the worst metric I have ever seen at an engineering company, I think it is actively encouraging us to make wrong and irrational decisions.

My criticisms, in no particular order:

1. We can game it easily.

On iOS we prompt[1] people to rate our app and get at least a half-star bump. Is that a valid thing to do or are we juicing the stats? We don’t really know. On Android we don’t prompt…should we artificially add in a half star there to make up for the lack of prompt and approximate the “real” rating?[2]

We’re adding in-app rating dialogs to both platforms, which can juice the stats even more.[3] If we are able to add a simple feature–which I think we should add for what it’s worth–and wildly swing a core metric without actually changing the app itself, I would argue the core metric is not reflective of the state of the app.

2. We don’t understand it.

The star rating is up on Android…we don’t really know why. The star rating is down on iOS and we think we might know why, but we still have big countdown buckets like “performance”. For a concrete example, in the Facebook for Android release before Home we shipped the crashiest release ever…and the star rating was up! We think it was because we added a much-requested feature and people didn’t care about the crashes but we have no way to be sure.

When users give star ratings they are not required to enter text reviews, leaving us blind and with no actionable information for those ratings. So even when we cluster on text reviews (using awesome systems and legit legwork by the data folks) we are working with even fewer data points to try to understand what is happening.[4]

Finally, we have fixed countdown bugs on both platforms in the last quarter…we haven’t seen a step function up or down on either star rating….the trends are pretty constant. This implies that we don’t really know what levers to pull and what they get us.

3. Vocal minorities skewing risk vs reward reasoning.

The absolute number of star ratings is pretty low, so vocal minorities can swing it wildly–representative sample this is not. For example, on the latest iOS app we think 37% of 1-star reviews can be attributed to a crash on start. Based on what we know, the upper bound of affected users is likely ~1MM, which at 130MM MAU[5] is 0.7%. The fix touches a critical component of the app and mucks around with threading (via blocks), and the code in master is completely different. So 0.7% of users make up 37% of our 1-star reviews because of one bug (we think) and we are pushing out a hotfix touching the startup path because of the “37%” when we should really be focusing on the “0.7%”. I think that is the right decision if we put a lot of weight on star rating but it isn’t the right decision generally. Note that we did not push out a hotfix for the profile picture uploading failure issue in the same release because the 0.5% of DAU affected wasn’t seen as worth the risk and churn.

4. It’s fluid-ish.

A user can give us a star rating and then go back and change it. Often they do, but frequently they don’t (we think). This means our overall star ratings likely have an inertia coefficient and may not reflect the current state of the app. We have no visibility into how much this affects ratings and in what ways. If we fix the iOS crash mentioned above, what percent of users will go back and change their star rating from 1 to something else? As far as I know this inertia coefficient isn’t included in any analysis and isn’t really accounted for in our reasoning and goals.[6]

5. One star != bad experience.

Note: I added #5 today, it wasn’t in the original post.

Digging into our star rating, some curious behavior emerged:

Of course, there is both the standard OMG CHANGE reaction (“Why am I being forced to install Messenger?”) and user support issues (“I am blocked from sending friend requests, please help me!”) that show up frequently in 1 star reviews too. While both of those are important to capture and measure, they don’t really reflect on the quality of the app or a particular release.

The emperor has no clothes.

Everyone working on mobile knows about these issues and has been going along with star rating due to the idea that a flawed metric is better than no metric. I don’t think even using star rating as a knowingly flawed metric is useful from what I’ve seen over the last quarter. I think we should keep an eye on it as a vanity metric. I think we should work to capture that feedback in-app so we can be in control and get actionable data. I think we should be aware of it as an input to our reasoning about hotfixes but make it clear the star rating itself has no value and shouldn’t be optimized for in a specific release cycle.

  1. Via Appirater at the time.
  2. The idea being that ratings and reviews naturally skew negative due to selection bias.
  3. Many apps actually do this. They prompt to rate the app. If it is rated low, they ask for in-app feedback. If it is rated high, they ask to rate in the app store and redirect.
  4. This is slightly better on Android now that ratings show the device model and OS version…at least there is some actionable information.
  5. These numbers are so old!
  6. Note: this was written before the iOS rating was split into “current version” and “all versions”. This is also affected a bit by Facebook shipping so fast now (every 2 weeks!) so the current version star rating is always resetting on iOS. On Android I believe there is still one rating for all app versions and this is a larger issue.

December 02, 2015 06:14 PM

November 16, 2015

Tim Taubert (ttaubert)

More Privacy, Less Latency

Please note that this post is about draft-11 of the TLS v1.3 standard.

TLS must be fast. Adoption will greatly benefit from speeding up the initial handshake that authenticates and secures the connection. You want to get the protocol out of the way and start delivering data to visitors as soon as possible. This is crucial if we want the web to succeed at deprecating non-secure HTTP.

Let’s start by looking at full handshakes as standardized in TLS v1.2, and then continue to abbreviated handshakes that decrease connection times for resumed sessions. Once we understand the current protocol we can proceed to proposals made in the latest TLS v1.3 draft to achieve full 1-RTT and even 0-RTT handshakes.

It helps if you already have a rough idea of how TLS and Diffie-Hellman work as I can’t go into every detail. The focus of this post is on comparing current and future handshakes and I might omit a few technicalities to get basic ideas across more easily.

Full TLS 1.2 Handshake (static RSA)

Static RSA is a straightforward key exchange method, available since SSLv2. After sharing basic protocol information via the ClientHello and ServerHello messages the server sends its certificate to the client. ServerHelloDone signals that for now there will be no further messages until the client responds.

The client then encrypts the so-called premaster secret with the server’s public key found in the certificate and wraps it in a ClientKeyExchange message. ChangeCipherSpec signals that from now on messages will be encrypted. Finished, the first message to be encrypted and the client’s last message of the handshake, contains a MAC of all handshake messages exchanged thus far to prove that both parties saw the same messages, without interference from a MITM.

The server decrypts the premaster secret found in the ClientKeyExchange message using its certificate’s private key, and derives the master secret and communication keys. It then too signals a switch to encrypted communication and completes the handshake. It takes two round-trips to establish a connection.
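
To make “derives the master secret” a little more concrete, here is a sketch of the TLS v1.2 PRF from RFC 5246, written for Node.js; the function and variable names are illustrative, and premaster, clientRandom, and serverRandom are assumed to be Buffers.

const crypto = require("crypto");

function hmacSha256(key, data) {
  return crypto.createHmac("sha256", key).update(data).digest();
}

// P_SHA256(secret, seed): expand the secret until `length` bytes are produced.
function pHash(secret, seed, length) {
  let a = seed;                 // A(0) = seed
  let out = [];
  let produced = 0;
  while (produced < length) {
    a = hmacSha256(secret, a);  // A(i) = HMAC(secret, A(i-1))
    let chunk = hmacSha256(secret, Buffer.concat([a, seed]));
    out.push(chunk);
    produced += chunk.length;
  }
  return Buffer.concat(out).slice(0, length);
}

// master_secret = PRF(premaster, "master secret", client_random + server_random), 48 bytes
function masterSecret(premaster, clientRandom, serverRandom) {
  let seed = Buffer.concat([Buffer.from("master secret"), clientRandom, serverRandom]);
  return pHash(premaster, seed, 48);
}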

Authentication: With static RSA key exchanges, the connection is authenticated by encrypting the premaster secret with the server certificate’s public key. Only the server in possession of the private key can decrypt, correctly derive the master secret, and send an encrypted Finished message with the right MAC.

The simplicity of static RSA has a serious drawback: it does not offer forward secrecy. If a passive adversary records all traffic to a server then every recorded TLS session can be broken later by obtaining the certificate’s private key.

This key exchange method will be removed in TLS v1.3.

Full TLS 1.2 Handshake (ephemeral DH)

A full handshake using (Elliptic Curve) Diffie-Hellman to exchange ephemeral keys is very similar to the flow of static RSA. The main difference is that after sending the certificate the server will also send a ServerKeyExchange message. This message contains either the parameters of a DH group or of an elliptic curve, paired with an ephemeral public key computed by the server.

The client too computes an ephemeral public key compatible with the given parameters and sends it to the server. Knowing their private keys and the other party’s public key both sides should now share the same premaster secret and can derive a shared master secret.
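
As a rough sketch of that exchange, here is how both sides can arrive at the same premaster secret using Node.js’ crypto module, with the P-256 curve as an example group; in real TLS the parameters and public keys of course travel inside the handshake messages described above.

const crypto = require("crypto");

const server = crypto.createECDH("prime256v1");
const client = crypto.createECDH("prime256v1");

const serverPublicKey = server.generateKeys(); // sent in ServerKeyExchange
const clientPublicKey = client.generateKeys(); // sent in ClientKeyExchange

// Both sides compute the same premaster secret without ever sending it.
const premasterAtClient = client.computeSecret(serverPublicKey);
const premasterAtServer = server.computeSecret(clientPublicKey);
console.log(premasterAtClient.equals(premasterAtServer)); // true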

Authentication: With (EC)DH key exchanges it’s still the certificate that must be signed by a CA listed in the client’s trust store. To authenticate the connection the server will sign the parameters contained in ServerKeyExchange with the certificate’s private key. The client verifies the signature with the certificate’s public key and only then proceeds with the handshake.

Abbreviated Handshakes in TLS 1.2

Since SSLv2 clients have been able to use session identifiers as a way to resume previously established TLS/SSL sessions. Session resumption is important because a full handshake can take time: it has a high latency as it needs two round-trips and might involve expensive computation to exchange keys, or sign and verify certificates.

Session IDs, assigned by the server, are unique identifiers under which both parties store the master secret and other details of the connection they established. The client may include this ID in the ClientHello message of the next handshake to short-circuit the negotiation and reuse previous connection parameters.

If the server is willing and able to resume the session it responds with a ServerHello message including the Session ID given by the client. This handshake is effectively 1-RTT as the client can send application data immediately after the Finished message.

Sites with lots of visitors will have to manage and secure big session caches, or risk pushing out saved sessions too quickly. A setup involving multiple load-balanced servers will need to securely synchronize session caches across machines. The forward secrecy of a connection is bounded by how long session information is retained on servers.

Session tickets, created by the server and stored by the client, are blobs containing all necessary information about a connection, encrypted by a key known only to the server. If the client presents this ticket with the ClientHello message, and proves that it knows the master secret stored in the ticket, the session will be resumed.

A server willing and able to decrypt the given ticket responds with a ServerHello message including an empty SessionTicket extension, otherwise the extension would be omitted completely. As with session IDs, the client will start sending application data immediately after the Finished message to achieve 1-RTT.

To not affect the forward secrecy provided by (EC)DHE suites session ticket keys should be rotated periodically, otherwise stealing the ticket key would allow recovering recorded sessions later. In a setup with multiple load-balanced servers the main challenge here is to securely generate, rotate, and synchronize keys across machines.

Authentication: Both session resumption mechanisms retain the client’s and server’s authentication states as established in the session’s initial handshake. Neither the server nor the client have to send and verify certificates a second time, and thus can reduce connection times significantly, especially when dealing with RSA certificates.

Full Handshakes in TLS 1.3

The first good news about handshakes in TLS v1.3 is that static RSA key exchanges are no longer supported. Great! That means we can start with full handshakes using forward-secure Diffie-Hellman.

Another important change is the removal of the ChangeCipherSpec protocol (yes, it’s actually a protocol, not a message). With TLS v1.3 every message sent after ServerHello is encrypted with the so-called ephemeral secret to lock out passive adversaries very early in the game. EncryptedExtensions carries Hello extension data that must be encrypted because it’s not needed to set up secure communication.

The probably most important change with regard to 1-RTT is the removal of the ServerKeyExchange and ClientKeyExchange messages. The DH parameters and public keys are now sent in special KeyShare extensions, a new type of extension to be included in the ServerHello and ClientHello messages. Moving this data into Hello extensions keeps the handshake compatible with TLS v1.2 as it doesn’t change the order of messages.

The client sends a list of KeyShareEntry values, each consisting of a named (EC)DH group and an ephemeral public key. If the server accepts, it must respond with one of the proposed groups and its own public key. If the server does not support any of the given key shares it will request retrying the handshake or abort the connection with a fatal handshake_failure alert.
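
A minimal sketch of that server-side selection might look like the following; the group names and data structures are made up for illustration and this is of course not how a real TLS stack is organized.

const SUPPORTED_GROUPS = ["x25519", "secp256r1"];

function pickKeyShare(clientKeyShares) {
  // clientKeyShares: [{group: "x25519", publicKey: ...}, ...] from the ClientHello.
  for (const group of SUPPORTED_GROUPS) {
    const share = clientKeyShares.find(entry => === group);
    if (share) {
      return {group, clientPublicKey: share.publicKey};
    }
  }
  // No overlap: ask the client to retry with a supported group,
  // or abort with a fatal handshake_failure alert.
  return null;
}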

Authentication: The Diffie-Hellman parameters themselves aren’t signed anymore; authentication will be a tad more explicit in TLS v1.3. The server sends a CertificateVerify message that contains a hash of all handshake messages exchanged so far, signed with the certificate’s private key. The client then simply verifies the signature with the certificate’s public key.

Session Resumption in TLS 1.3 (PSK)

Session resumption via identifiers and tickets is obsolete in TLS v1.3. Both methods are replaced by a pre-shared key (PSK) mode. A PSK is established on a previous connection after the handshake is completed, and can then be presented by the client on the next visit.

The client sends one or more PSK identities as opaque blobs of data. They can be database lookup keys (similar to Session IDs), or self-encrypted and self-authenticated values (similar to Session Tickets). If the server accepts one of the given PSK identities it replies with the one it selected. The KeyShare extension is sent to allow servers to ignore PSKs and fall back to a full handshake.

Forward secrecy can be maintained by limiting the lifetime of PSK identities sensibly. Clients and servers may also choose an (EC)DHE cipher suite for PSK handshakes to provide forward secrecy for every connection, not just the whole session.

Authentication: As in TLS v1.2, the client’s and server’s authentication states are retained and both parties don’t need to exchange and verify certificates again. A regular PSK handshake initiating a new session, instead of resuming, omits certificates completely.

Session resumption still allows significantly faster handshakes when using RSA certificates and can prevent user-facing client authentication dialogs on subsequent connections. However, the fact that it requires a single round-trip just like a full handshake might make it less appealing, especially if you have an ECDSA or EdDSA certificate and do not require client authentication.

Zero-RTT Handshakes in TLS 1.3

The latest draft of the specification contains a proposal to let clients encrypt application data and include it in their first flights. On a previous connection, after the handshake completes, the server would send a ServerConfiguration message that the client can use for 0-RTT handshakes on subsequent connections. The configuration includes a configuration identifier, the server’s semi-static (EC)DH parameters, an expiration date, and other details.

With the very first TLS record the client sends its ClientHello and, changing the order of messages, directly appends application data (e.g. GET / HTTP/1.1). Everything after the ClientHello will be encrypted with the static secret, derived from the client’s ephemeral KeyShareEntry and the semi-static DH parameters given in the server’s configuration. The end_of_early_data alert indicates the end of the flight.

The server, if able and willing to decrypt, responds with its default set of messages and immediately appends the contents of the requested resource. That’s the same round-trip time as for an unencrypted HTTP request. All communication following the ServerHello will again be encrypted with the ephemeral secret, derived from the client’s and server’s ephemeral key shares. After exchanging Finished messages the server will be re-authenticated, and traffic encrypted with keys derived from the master secret.

Security of 0-RTT Handshakes

At first glance, 0-RTT mode seems similar to session resumption or PSK, and you might wonder why one wouldn’t merge these mechanisms. The differences however are subtle but important, and the security properties of 0-RTT handshakes are weaker than those for other kinds of TLS data:

1. To protect against replay attacks the server must incorporate a server random into the master secret. That is unfortunately not possible before the first round-trip and so the poor server can’t easily tell whether it’s a valid request or an attacker replaying a recorded conversation. Replay protection will be in place again after the ServerHello message is sent.

2. The semi-static DH share given in the server configuration, used to derive the static secret and encrypt first flight data, defies forward secrecy. We need at least one round-trip to establish the ephemeral secret. As configurations are shared between clients, and recovering the server’s DH share becomes more attractive, expiration dates should be limited sensibly. The maximum allowed validity is 7 days.

3. If the server’s DH share is compromised a MITM can tamper with the 0-RTT data sent by the client, without being detected. This does not extend to the full session as the client can retrospectively authenticate the server via the remaining handshake messages.

Defending against Replay Attacks

Thwarting replay attacks without input from the server is fundamentally very expensive. It’s important to understand that this is a generic problem, not an issue with TLS in particular, so alas one can’t just borrow another protocol’s 0-RTT model and put that into TLS.

It is possible to have servers keep a list of every ClientRandom they have received in a given time window. Upon receiving a ClientHello the server checks its list and rejects replays if necessary. This list must be globally and temporally consistent as there are possible attack vectors due to TLS’ reliable delivery guarantee if an attacker can force a server to lose its state, as well as with multiple servers in loosely-synchronized data centers.
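
For a single server with good state management, a sketch of that bookkeeping could look like this; the window length and names are illustrative, and a real deployment would additionally need the global consistency discussed above.

const WINDOW_MS = 10 * 1000;
const seenClientRandoms = new Map(); // hex-encoded ClientRandom -> timestamp

function acceptEarlyData(clientRandomHex, now = Date.now()) {
  // Forget entries that have fallen out of the time window.
  for (const [random, timestamp] of seenClientRandoms) {
    if (now - timestamp > WINDOW_MS) {
      seenClientRandoms.delete(random);
    }
  }

  if (seenClientRandoms.has(clientRandomHex)) {
    return false; // likely a replay, reject the 0-RTT data
  }

  seenClientRandoms.set(clientRandomHex, now);
  return true;
}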

Maintaining a consistent global state is possible, but only in some limited circumstances, namely for very sophisticated operators or situations where there is a single server with good state management. We will need something better.

Removing Anti-Replay Guarantee

A possible solution might be a TLS stack API to let applications designate certain data as replay-safe, for example GET / HTTP/1.1 assuming that GET requests against a given resource are idempotent.

let c = new TLSConnection(...);
c.setReplayable0RTTData("GET / ...");

Applications can, before opening the connection, specify replayable 0-RTT data to send on the first flight. If the server ignores the given 0-RTT data, the TLS stack automatically replays it after the first round-trip.

Removing Reliable Delivery Guarantee

Another way of achieving the same outcome would be a TLS stack API that again lets applications designate certain data as replay-safe, but does not automatically replay if the server ignores it. The application can decide to do this manually if necessary.

let c = new TLSConnection(...);
c.setUnreliable0RTTData("GET / ...");

if (c.delivered0RTTData()) {
  // Things are cool.
} else {
  // Try to figure out whether to replay or not.
}

Both of these APIs are early proposals and the final version of the specification might look very different from what we can see above. Though, as 0-RTT handshakes are a charter goal, the working group will very likely find a way to make them work.

Summing up

TLS v1.3 will bring major improvements to handshakes; exactly how will be finalized in the coming months. They will be more private by default as all information not needed to set up a secure channel will be encrypted as early as possible. Clients will need only a single round-trip to establish secure and authenticated connections to servers they never spoke to before.

Static RSA mode will no longer be available, forward secrecy will be the default. The two session resumption standards, session identifiers and session tickets, are merged into a single PSK mode which will allow streamlining implementations.

The proposed 0-RTT mode is promising, not only for custom application communication based on TLS but also for browsers, where a GET / HTTP/1.1 request to your favorite news page could deliver content blazingly fast as if no TLS was involved. The security aspects of zero round-trip handshakes will become clearer as the draft progresses.

November 16, 2015 05:00 PM

October 19, 2015

Christian Legnitto (LegNeato)

I’m talking at Mozilla tomorrow!

Stay focused & keep shipping
I’m excited to give a brown bag talk at Mozilla Mountain View tomorrow about how Facebook builds and ships software (with a Release Engineering / tooling bent).

Swing by if you are in the area or catch it on Air Mozilla! More details @

October 19, 2015 08:14 PM

October 05, 2015

Dave Townsend (Mossop)

Delivering Firefox features faster

Over time Mozilla has been trying to reduce the amount of time between developing a feature and getting it into a user’s hands. Some time ago we would do around one feature release of Firefox every year, more recently we’ve moved to doing one feature release every six weeks. But it still takes at least 12 weeks for a feature to get to users. In some cases we can speed that up by landing new things directly on the beta/aurora branches but the more we do this the harder it is for release managers to track the risk of shipping a given release.

The Go Faster project is investigating ways that we can speed up getting changes to users. System add-ons are one piece of this that will let us deliver updates to core Firefox features more often than the regular six week releases. Instead of being embedded in the rest of the code certain features will be developed as standalone system add-ons.

Building features as add-ons gives us more flexibility in how we deliver them to users. System add-ons will ship in two different ways. First, every Firefox release will include a default set of system add-ons. These are the latest versions of the features at the time the Firefox build was produced. Later, at runtime, Firefox will contact Mozilla’s update servers to ask for the current list of system add-ons. If there are new or updated versions listed, Firefox will download and install them, giving users access to the newest features without needing to update the entire application.

Building a feature as an add-on gives developers a lot of benefits too. Developers will be able to work on and test new features without doing custom Firefox builds. Users can even try out new features by just installing the add-ons. Once the feature is ready to ship it ships as an add-on, with no code changes necessary for integration into Firefox. This is something we’ve attempted to do before with things like Test Pilot and pdf.js, but system add-ons make this process much smoother and reduce the differences between how the feature runs as an add-on and how it runs when shipped in the application.

The basic support for system add-ons is already included in current nightly builds and Firefox 44 should be the first release that we could use to deliver features like this if we choose. If you’re interested in the details you can read the client implementation plan or follow along the tracking bug for the client side of the feature.

October 05, 2015 06:20 PM

August 12, 2015

Matthew Noorenberghe (MattN)

Firefox Password Manager Update: 2015-Q2

Continuing from the Q1 update, here's a summary of the password manager progress made in the second quarter of 2015 (in no particular order):

August 12, 2015 05:13 AM

July 26, 2015

Matthew Noorenberghe (MattN)

Firefox Password Manager Update: 2015-Q1

Logins are a part of our daily lives on the web and one of the active Firefox projects this year is to improve Firefox's password manager, which has the simple high-level goal of helping users log in. Here's a summary of the progress made in the first quarter of 2015 (in no particular order):

Expect to see many more improvements in upcoming months as we continue to make major improvements to the password manager. If you'd like to contribute to this project, check out the password manager wiki page for mailing list, IRC, bug list and other information.

July 26, 2015 04:18 AM

May 19, 2015

Gavin Sharp (gavin)

leaving mozilla

I started contributing to Mozilla nearly 11 years ago, in 2004, and joined as a full-time employee over 8 years ago. Suffice it to say, Mozilla has been a big part of my life in a lot of ways. I’ve dedicated essentially my entire career so far to Mozilla, and it introduced me to my wife, just to name a couple.

Mozilla’s in a great place now – still a tough challenge ahead, but plenty of great people willing and able to tackle it. Firefox has new leadership, the team is being re-organized and groups merged together in what I think are the right ways, and I’m optimistic about their prospects. But it’s time for me to move on and find The Next Thing – that I have no idea what it is now is both exciting and a little bit terrifying, but I feel good about it.

It will probably take me a while to figure out what exactly a post-Mozilla Gavin-life looks like, given the various ways Mozilla’s been injected into my life. I won’t disappear entirely, certainly, and I know the many relationships I’ve built through Mozilla will continue to be a huge part of my life.

I’m looking forward to what’s next!

May 19, 2015 10:07 PM

May 18, 2015

Tim Taubert (ttaubert)

A Firefox OS Password Storage

My esteemed colleague Frederik Braun recently took on rewriting the module responsible for storing and checking passcodes that unlock Firefox OS phones. While we are still working on actually landing it in Gaia, I wanted to seize the chance to talk about this great use case of the WebCrypto API in the wild and highlight a few important points when using password-based key derivation (PBKDF2) to store passwords.

The Passcode Module

Let us take a closer look at not the verbatim implementation but at a slightly simplified version. The API offers only the two operations such a module needs to support: setting a new passcode and verifying that a given passcode matches the stored one.

let Passcode = {
  store(code) {
    // ...
  },

  verify(code) {
    // ...
  }
};

When setting up the phone for the first time - or when changing the passcode later - we call Passcode.store() to write a new code to disk. Passcode.verify() will help us determine whether we should unlock the phone. Both methods return a Promise as all operations exposed by the WebCrypto API are asynchronous.

Passcode.store("1234").then(() => {
  return Passcode.verify("1234");
}).then(valid => {
  console.log(valid);
});

// Output: true

Make the passcode look “random”

The module should absolutely not store passcodes in the clear. We will use PBKDF2 as a pseudorandom function (PRF) to retrieve a result that looks random. An attacker with read access to the part of the disk storing the user’s passcode should not be able to recover the original input, assuming limited computational resources.

The function deriveBits() is a PRF that takes a passcode and returns a Promise resolving to a random looking sequence of bytes. To be a little more specific, it uses PBKDF2 to derive pseudorandom bits.

function deriveBits(code) {
  // Convert string to a TypedArray.
  let bytes = new TextEncoder("utf-8").encode(code);

  // Create the base key to derive from.
  let importedKey = crypto.subtle.importKey(
    "raw", bytes, "PBKDF2", false, ["deriveBits"]);

  return importedKey.then(key => {
    // Salt should be at least 64 bits.
    let salt = crypto.getRandomValues(new Uint8Array(8));

    // All required PBKDF2 parameters.
    let params = {name: "PBKDF2", hash: "SHA-1", salt, iterations: 5000};

    // Derive 160 bits using PBKDF2.
    return crypto.subtle.deriveBits(params, key, 160);
  });
}

Choosing PBKDF2 parameters

As you can see above PBKDF2 takes a whole bunch of parameters. Choosing good values is crucial for the security of our passcode module so it is best to take a detailed look at every single one of them.

Select a cryptographic hash function

PBKDF2 is a big PRF that iterates a small PRF. The small PRF, iterated multiple times (more on why this is done later), is fixed to be an HMAC construction; you are however allowed to specify the cryptographic hash function used inside HMAC itself. To understand why you need to select a hash function it helps to take a look at HMAC’s definition, here with SHA-1 at its core:

HMAC-SHA-1(k, m) = SHA-1((k ⊕ opad) + SHA-1((k ⊕ ipad) + m))

The outer and inner padding opad and ipad are static values that can be ignored for our purpose; the important takeaway is that the given hash function will be called twice, combining the message m and the key k. Whereas HMAC is usually used for authentication, PBKDF2 makes use of its PRF properties, meaning that its output is computationally indistinguishable from random.
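
Although the passcode module never needs to call HMAC directly, the same HMAC-SHA-1 computation can be expressed with the WebCrypto API; this is just an illustrative sketch and the key and message below are made up.

function hmacSha1(keyBytes, messageBytes) {
  return crypto.subtle.importKey(
    "raw", keyBytes, {name: "HMAC", hash: "SHA-1"}, false, ["sign"]
  ).then(key => crypto.subtle.sign("HMAC", key, messageBytes));
}

hmacSha1(new TextEncoder("utf-8").encode("key"),
         new TextEncoder("utf-8").encode("message"))
  .then(mac => console.log(new Uint8Array(mac)));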

deriveBits() as defined above uses SHA-1 as well, and although it is considered broken as a collision-resistant hash function it is still a safe building block in the HMAC-SHA-1 construction. HMAC only relies on a hash function’s PRF properties, and while finding SHA-1 collisions is considered feasible it is still believed to be a secure PRF.

That said, it would not hurt to switch to a secure cryptographic hash function like SHA-256. Chrome supports other hash functions for PBKDF2 today, Firefox unfortunately has to wait for an NSS fix before those can be unlocked for the WebCrypto API.

Pass a random salt

The salt is a random component that PBKDF2 feeds into the HMAC function along with the passcode. This prevents an attacker from simply computing the hashes of, for example, all 8-character combinations of alphanumerics (~5.4 petabytes of storage for SHA-1) and using a huge lookup table to quickly reverse a given password hash. Specify 8 random bytes as the salt and the poor attacker will suddenly have to compute (and store!) 2^64 of those lookup tables and face 8 additional random characters in the input. Even without the salt the effort to create even one lookup table would be hard to justify, because chances are high you cannot reuse it to attack another target; they might be using a different hash function or combine two or more of them.

The same goes for Rainbow Tables. A random salt included with the password would have to be incorporated when precomputing the hash chains and the attacker is back to square one where she has to compute a Rainbow Table for every possible salt value. That certainly works ad-hoc for a single salt value but preparing and storing 2^64 of those tables is impossible.

The salt is public and will be stored in the clear along with the derived bits. We need the exact same salt to arrive at the exact same derived bits later again. We thus have to modify deriveBits() to accept the salt as an argument so that we can either generate a random one or read it from disk.

function deriveBits(code, salt) {
  // Convert string to a TypedArray.
  let bytes = new TextEncoder("utf-8").encode(code);

  // Create the base key to derive from.
  let importedKey = crypto.subtle.importKey(
    "raw", bytes, "PBKDF2", false, ["deriveBits"]);

  return importedKey.then(key => {
    // All required PBKDF2 parameters.
    let params = {name: "PBKDF2", hash: "SHA-1", salt, iterations: 5000};

    // Derive 160 bits using PBKDF2.
    return crypto.subtle.deriveBits(params, key, 160);
  });
}

Keep in mind though that Rainbow Tables today are mainly a thing of the past, from a time when password hashes were smaller and shittier. Salts are the bare minimum a good password storage scheme needs, but they merely protect against a threat that is largely irrelevant today.

Specify a number of iterations

As computers became faster and Rainbow Table attacks infeasible due to the prevalent use of salts everywhere, people started attacking password hashes with dictionaries, simply by taking the public salt value and passing that combined with their educated guess to the hash function until a match was found. Modern password schemes thus employ a “work factor” to make hashing millions of password guesses unbearably slow.

By specifying a sufficiently high number of iterations we can slow down PBKDF2’s inner computation so that an attacker will have to face a massive performance decrease and be able to only try a few thousand passwords per second instead of millions.

For a single-user disk or file encryption it might be acceptable if computing the password hash takes a few seconds; for a lock screen 300-500ms might be the upper limit to not interfere with user experience. Take a look at this great StackExchange post for more advice on what might be the right number of iterations for your application and environment.
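
A rough way to calibrate the work factor on the target hardware is to simply measure a derivation, as in the sketch below; it uses the parameterized deriveBits(code, salt, hash, iterations) variant the post arrives at further down, and the passcode and numbers are purely illustrative.

function measureDerive(iterations) {
  let salt = crypto.getRandomValues(new Uint8Array(8));
  let start =;
  return deriveBits("1234", salt, "SHA-1", iterations)
    .then(() => - start);
}

measureDerive(5000).then(ms => {
  console.log("5000 iterations took " + ms + " ms");
});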

A much more secure version of a lock screen would allow to not only use four digits but any number of characters. An additional delay of a few seconds after a small number of wrong guesses might increase security even more, assuming the attacker cannot access the PRF output stored on disk.

Determine the number of bits to derive

PBKDF2 can output an almost arbitrary amount of pseudo-random data. A single execution yields the number of bits that is equal to the chosen hash function’s output size. If the desired number of bits exceeds the hash function’s output size PBKDF2 will be repeatedly executed until enough bits have been derived.

function getHashOutputLength(hash) {
  switch (hash) {
    case "SHA-1":   return 160;
    case "SHA-256": return 256;
    case "SHA-384": return 384;
    case "SHA-512": return 512;

  throw new Error("Unsupported hash function");

Choose 160 bits for SHA-1, 256 bits for SHA-256, and so on. Slowing down the key derivation even further by requiring more than one round of PBKDF2 will not increase the security of the password storage.

Do not hard-code parameters

Hard-coding PBKDF2 parameters - the name of the hash function to use in the HMAC construction, and the number of HMAC iterations - is tempting at first. We however need to be flexible if for example it turns out that SHA-1 can no longer be considered a secure PRF, or you need to increase the number of iterations to keep up with faster hardware.

To ensure future code can verify old passwords we store the parameters that were passed to PBKDF2 at the time, including the salt. When verifying the passcode we will read the hash function name, the number of iterations, and the salt from disk and pass those to deriveBits() along with the passcode itself. The number of bits to derive will be the hash function’s output size.

function deriveBits(code, salt, hash, iterations) {
  // Convert string to a TypedArray.
  let bytes = new TextEncoder("utf-8").encode(code);

  // Create the base key to derive from.
  let importedKey = crypto.subtle.importKey(
    "raw", bytes, "PBKDF2", false, ["deriveBits"]);

  return importedKey.then(key => {
    // Output length in bits for the given hash function.
    let hlen = getHashOutputLength(hash);

    // All required PBKDF2 parameters.
    let params = {name: "PBKDF2", hash, salt, iterations};

    // Derive |hlen| bits using PBKDF2.
    return crypto.subtle.deriveBits(params, key, hlen);
  });
}

Storing a new passcode

Now that we are done implementing deriveBits(), the heart of the Passcode module, completing the API is basically a walk in the park. For the sake of simplicity we will use localforage as the storage backend. It provides a simple, asynchronous, and Promise-based key-value store.

// <script src="localforage.min.js"/>

const HASH = "SHA-1";
const ITERATIONS = 4096;

Passcode.store = function (code) {
  // Generate a new random salt for every new passcode.
  let salt = crypto.getRandomValues(new Uint8Array(8));

  return deriveBits(code, salt, HASH, ITERATIONS).then(bits => {
    return Promise.all([
      localforage.setItem("digest", bits),
      localforage.setItem("salt", salt),
      localforage.setItem("hash", HASH),
      localforage.setItem("iterations", ITERATIONS)

We generate a new random salt for every new passcode. The derived bits are stored along with the salt, the hash function name, and the number of iterations. HASH and ITERATIONS are constants that provide default values for our PBKDF2 parameters and can be updated whenever desired. The Promise returned by Passcode.store() will resolve when all values have been successfully stored in the backend.

Verifying a given passcode

To verify a passcode all values and parameters stored by Passcode.store() will have to be read from disk and passed to deriveBits(). Comparing the derived bits with the value stored on disk tells whether the passcode is valid.

Passcode.verify = function (code) {
  let loadValues = Promise.all([
    localforage.getItem("digest"),
    localforage.getItem("salt"),
    localforage.getItem("hash"),
    localforage.getItem("iterations")
  ]);

  return loadValues.then(([digest, salt, hash, iterations]) => {
    return deriveBits(code, salt, hash, iterations).then(bits => {
      return compare(bits, digest);
    });
  });
};

Should compare() be a constant-time operation?

compare() does not have to be constant-time. Even if the attacker learns the first byte of the final digest stored on disk she cannot easily produce inputs to guess the second byte - the opposite would imply knowing the pre-images of all those two-byte values. She cannot do better than submitting simple guesses that become harder the more bytes are known. For a successful attack all bytes have to be recovered, which in turn means a valid pre-image for the full final digest needs to be found.

If it makes you feel any better, you can of course implement compare() as a constant-time operation. This might be tricky though given that all modern JavaScript engines optimize code heavily.
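
For completeness, here is a minimal sketch of what a constant-time compare() over the derived bits could look like; it assumes both arguments are ArrayBuffers or typed arrays, like the output of deriveBits() and the digest read back from localforage. As noted above, JIT optimizations mean this is best-effort rather than a guarantee.

function compare(bitsA, bitsB) {
  let a = new Uint8Array(bitsA);
  let b = new Uint8Array(bitsB);

  if (a.length !== b.length) {
    return false;
  }

  // Always walk the full length and accumulate differences instead of
  // returning early on the first mismatch.
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a[i] ^ b[i];
  }

  return diff === 0;
}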

What about bcrypt or scrypt?

Both bcrypt and scrypt are probably better alternatives to PBKDF2. Bcrypt automatically embeds the salt and cost factor into its output, and most APIs are clever enough to parse and use those parameters when verifying a given password.

Scrypt implementations can usually securely generate a random salt, so that is one less thing for you to care about. The most important aspect of scrypt though is that it allows consuming a lot of memory when computing the password hash, which makes cracking passwords using ASICs or FPGAs close to impossible.

The Web Cryptography API does unfortunately support neither of the two algorithms and currently there are no proposals to add those. In the case of scrypt it might also be somewhat controversial to allow a website to consume arbitrary amounts of memory.

May 18, 2015 01:00 PM

March 23, 2015

Dave Townsend (Mossop)

Making communicating with chrome from in-content pages easy

As Firefox increasingly switches to support running in multiple processes we’ve been finding common problems. Where we can, we are designing nice APIs to make solving them easy. One problem is that we often want to run in-content pages like about:newtab and about:home in the child process without privileges, making it safer and less likely to bring down Firefox in the event of a crash. These pages still need to get information from and pass information to the main process though, so we have had to come up with ways to handle that. Often we use custom code in a frame script acting as a middle-man, using things like DOM events to listen for requests from the in-content page and then messaging to the main process.

We recently added a new API to make this problem easier to solve. Instead of needing code in a frame script the RemotePageManager module allows special pages direct access to a message manager to communicate with the main process. This can be useful for any page running in the content area, regardless of whether it needs to be run at low privileges or in the content process since it takes care of listening for documents and hooking up the message listeners for you.

There is a low-level API available but the higher-level API is probably more useful in most cases. If your code wants to interact with a page like about:myaddon just do this from the main process:

let manager = new RemotePages("about:myaddon");

The manager object is now something resembling a regular process message manager. It has sendAsyncMessage and addMessageListener methods but unlike the regular e10s message managers it only communicates with about:myaddon pages. Unlike the regular message managers there is no option to send synchronous messages or pass cross-process wrapped objects.

When about:myaddon is loaded it has sendAsyncMessage and addMessageListener functions defined in its global scope for regular JavaScript to call. Anything that can be structured-cloned can be passed between the processes.
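
As a rough sketch of the round trip, something like the following should work; the message names and payload here are made up, and replying via message.target is assumed to behave like it does with the regular message managers.

// In the main process:
let manager = new RemotePages("about:myaddon");
manager.addMessageListener("MyAddon:RequestList", message => {
  // message.data holds whatever the page sent.
  message.target.sendAsyncMessage("MyAddon:List", {items: ["a", "b", "c"]});
});

// Inside the about:myaddon page:
addMessageListener("MyAddon:List", message => {
  console.log("received", message.data.items);
});
sendAsyncMessage("MyAddon:RequestList", {since:});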

The module documentation has more in-depth examples showing message passing between the page and the main process.

The RemotePageManager module is available in nightlies now and you can see it in action with the simple change I landed to switch about:plugins to run in the content process. For the moment the APIs only support exact URL matching but it would be possible to add support for regular expressions in the future if that turns out to be useful.

March 23, 2015 03:34 PM

February 17, 2015

Johnathan Nightingale (johnath)

Home for a Rest

Earlier today, I sent this note to the global mozilla employees list. It was not an easy send button to push.


One of the many, many things Mozilla has taught me over the years is not to bury the lede, so here goes:

March 31 will be my last day at Mozilla.

2014 was an incredible year, and it ended so much better than it started. I’m really proud of what we all accomplished, and I’m so hopeful for Mozilla’s future. But it took a lot out of me. I need to take a break. And as the dust settled on 2014 I realized, for the first time in a while, that I could take one.

You can live the Mozilla mission, feel it in your bones, and still worry about the future; I’ve had those moments over the last 8 years. Maybe you have, too. But Mozilla today is stronger than I’ve seen it in a long time. Our new strategy in search gives us a solid foundation and room to breathe, to experiment, and to make things better for our users and the web. We’re executing better than we ever have, and we’re seeing the shift in our internal numbers, while we wait for the rest of the world to catch up. January’s desktop download numbers are the best they’ve been in years. Accounts are being counted in tens of millions. We’re approaching 100MM downloads on Android. Dev Edition is blowing away targets faster than we can set them; Firefox on iOS doesn’t even exist yet, and already you can debug it with our devtools. Firefox today has a fierce momentum.

None of which will stop the trolls, of course. When this news gets out, I imagine someone will say something stupid. That it’s a Sign Of Doom. Predictable, and dead wrong; it misunderstands us completely. When things looked really rough, at the beginning of 2014, say, and people wanted to write about rats and sinking ships, that’s when I, and all of you, stayed.

You stayed or, in Chris’ case, you came back. And I’ve gotta say, having Chris in the seat is one of the things that gives me the most confidence. I didn’t know what Mozilla would feel like with Chris at the helm, but my CEO in 2014 was a person who pushed me and my team to do better, smarter work, to measure our results, and to keep the human beings who use our stuff at the center of every conversation. In fact, the whole senior team blows me away with their talent and their dedication.

You all do. And it makes me feel like a chump to be packing up in the midst of it all; but it’s time. And no, I haven’t been poached by facebook. I don’t actually know what my next thing will be. I want to take some time to catch up on what’s happened in the world around me. I want to take some time with my kid before she finishes her too-fast sprint to adulthood. I want to plant deeper roots in Toronto tech, which is incredibly exciting right now and may be a place where I can help. And I want a nap.

You are the very best I’ve met. It’s been a privilege to call myself your colleague, and to hand out a business card with the Firefox logo. I’m so immensely grateful for my time at Mozilla, and got so much more done here than I could have hoped. I’m talking with Chris and others about how I can meaningfully stay involved after March as an advisor, alumnus, and cheerleader. Once a Mozillian, always.



February 17, 2015 08:59 PM

February 13, 2015

Johnathan Nightingale (johnath)


Saturday morning light. #lily

Hi sweets,

Today you turn 5, which makes this the tenth one of these letters I’ve written (A letter, 6 months, 1 year old, 18 months, 2 years old, Two and a Half, 3 years old, Three and a Half, Four, Four and a Half). Every time I write a new one, I read all the other ones and then I fall down the rabbit hole of looking at old pictures of you, and I land in a big pile of feels. That happened again this morning, and I’m sitting in the Mozilla office sniffling like a goober.

You’re the coolest little kid, Lil. You’re big into Monopoly Junior right now, and you cackle a little when something good happens, but when you’re winning by too much, you give everyone else some of your money, “so that we all have a nice time.” You’ve got friends on the street and go over to hang out with them, which is totally normal and great, but also a hard thing to get used to. You’re developing all these mannerisms that are so big kid that it hurts, even though it’s awesome. There’s a lot of that in parenthood – heart-swelling-to-painful awesomeness. When you need my help to undo a button at the back of your dress, you gather up all your hair in your hands and tip your neck forward and I’d swear you were 17.

You’re working through having two homes, just like I did when I was your age. Which is expected and normal and still heart-achy. So far you’re just curious and trying to puzzle it out. For our part, as we muddle through it, we talk to other “bonus moms” who have been at it longer than we have, and try to pay it forward by talking to friends who are becoming step parents for the first time, to share what little we’ve learned. Take it from me, kid, there’s no quick road to the other side of this one – you’ll spend a lot of time thinking about it, well into adulthood. But I’ll always be here, if you want to talk about it.

You refuse to go to sleep. You sleep like a cat, you’re out for 9-10 hours a night at least, but getting you to close your eyes is a daily, ridiculous struggle. A few months ago I taught you the word, “stalling”, but naming it hasn’t eliminated it. You want another story. You want to pee. You want to cuddle. Your newest trick is to ask wide open questions about the world. Smart, thoughtful questions and they are my achilles heel, because of course I will talk to you about stars and planets and plants and animals and dinosaurs. Last week’s 9pm gambit was, “Daddy, how does light work?”

I pretend to get annoyed, and tell you to go to sleep, but I’m not really annoyed at all. I love our conversations. They’re the best part of my day. I love watching you experience the world, and watching you grow. It hurts sometimes, but in the best way.

I love you, Lil. Happy birthday.

February 13, 2015 02:26 PM

November 28, 2014

Tim Taubert (ttaubert)

Implementing the MD2 Hash Function in Rust

static SBOX: [u8, ..256] = [
  0x29, 0x2e, 0x43, 0xc9, 0xa2, 0xd8, 0x7c, 0x01, 0x3d, 0x36, 0x54,
  0xa1, 0xec, 0xf0, 0x06, 0x13, 0x62, 0xa7, 0x05, 0xf3, 0xc0, 0xc7,
  0x73, 0x8c, 0x98, 0x93, 0x2b, 0xd9, 0xbc, 0x4c, 0x82, 0xca, 0x1e,
  0x9b, 0x57, 0x3c, 0xfd, 0xd4, ...
pub fn md2(msg: &[u8]) -> [u8, ..16] {
  // Pad the message to be a multiple of 16 bytes long.
  let msg = md2_pad(msg);

  // Compress all message blocks.
  let state = msg.chunks(16).fold([0u8, ..16], |s, m| md2_compress(&s, m));

  // Compute the checksum.
  let checksum = md2_checksum(msg.as_slice());

  // Compress checksum and return.
  md2_compress(&state, &checksum)
}
pub fn md2_pad(msg: &[u8]) -> Vec<u8> {
  let mut msg = msg.to_vec();
  let pad = 16 - msg.len() % 16;
  msg.grow_fn(pad, |_| pad as u8);
  msg
}
pub fn md2_checksum(msg: &[u8]) -> [u8, ..16] {
  // Message must be padded.
  assert!(msg.len() % 16 == 0);

  let mut checksum = [0u8, ..16];
  let mut last = 0u8;

  for chunk in msg.chunks(16) {
    for (mbyte, cbyte) in chunk.iter().zip(checksum.iter_mut()) {
      *cbyte ^= SBOX[(*mbyte ^ last) as uint];
      last = *cbyte;
    }
  }

  checksum
}

mod test {
  use md2;

  fn hex(buf: &[u8]) -> String {
    buf.iter().fold(String::new(), |a, &b| format!("{}{:02x}", a, b))
  }

  fn cmp(d: &str, s: &str) {
    assert_eq!(d.to_string(), hex(&md2(s.as_bytes())));
  }

  #[test]
  fn test_md2() {
    cmp("8350e5a3e24c153df2275c9f80692773", "");
    cmp("32ec01ec4a6dac72c0ab96fb34c0b5d1", "a");
    cmp("da853b0d3f88d99b30283a69e6ded6bb", "abc");
    cmp("ab4f496bfb2a530b219ff33031fe06b0", "message digest");
    cmp("4e8ddff3650292ab5a4108c3aa47940b", "abcdefghijklmnopqrstuvwxyz");

November 28, 2014 02:48 PM

November 27, 2014

Brian Bondy (bbondy)

Developing and releasing the Khan Academy Firefox OS app

I'm happy to announce that the Khan Academy Firefox OS app is now available in the Firefox Marketplace!

Khan Academy’s mission is to provide a free world-class education for anyone anywhere. The goal of the Firefox OS app is to help with the “anyone anywhere” part of the KA mission.


There's something exciting about being able to hold a world class education in your pocket for the cheap price of a Firefox OS phone. Firefox OS devices are mostly deployed in countries where the cost of an iPhone or Android based smart phone is out of reach for most people.

The app enables developing countries, lower income families, and anyone else to take advantage of the Khan Academy content. A persistent internet connection is not required.

What's that.... you say you want another use case? Well OK, here goes: for a parent wanting each of their kids to have access to Khan Academy at the same time, the device costs could be very expensive. Not anymore.


App features

Development statistics

Technologies used

Technologies used to develop the app include:


The app is fully localized for English, Portuguese, French, and Spanish, and will use those locales automatically depending on the system locale. The content (videos, articles, subtitles) that the app hosts will also automatically change.

I was lucky enough to have several amazing and kind translators for the app volunteer their time.

The translations are hosted and managed on Transifex.

Want to contribute?

The Khan Academy Firefox OS app source is hosted in one of my github repositories and periodically mirrored on the Khan Academy github page.

If you'd like to contribute, there are a lot of future tasks posted as issues on github.

Current minimum system requirements

Low memory devices

By default, apps on the Firefox marketplace are only served to devices with at least 500MB of RAM. To get them on 256MB devices, you need to do a low memory review.

One of the major enhancements I'd like to add next is an option to use the YouTube player instead of HTML5 video. This may use less memory and may be a way onto 256MB devices.

How about exercises?

They're coming in a future release.

Getting preinstalled on devices

It's possible to request to get pre-installed on devices and I'll be looking into that in the near future after getting some more initial feedback.

Projects like Matchstick also seem like a great opportunity for this app.

November 27, 2014 12:00 AM

November 26, 2014

Brian Bondy (bbondy)

SQL on Khan Academy enabled by SQLite, sqljs, asm.js and Emscripten

Originally the computer programming section at Khan Academy only focused on learning JavaScript by using ProcessingJS. This remains our biggest environment and there are lots of plans for further growth, but we recently generalized and abstracted the whole framework to allow for new environments.

The first environment we added was HTML/CSS, which was announced here. You can try it out here. We have also already created a lot of content for learning how to make webpages.

SQL on Khan Academy

We recently also experimented with the ability to teach SQL on Khan Academy. This wasn't a near term priority for us, so we used our hack week as an opportunity to bring an SQL environment to Khan Academy.

You can try out the SQL environment here.


To implement the environment, one would think of WebSQL, but there are a couple of major browser vendors (Mozilla and Microsoft) who do not plan to implement it, and the W3C stopped working on the specification at the end of 2010.

Our implementation of SQL is based on SQLite, which is compiled down to asm.js by Emscripten and packaged into sqljs.

All of these technologies I just mentioned, other than SQLite which is sponsored by Mozilla, are Mozilla based projects. In particular, largely thanks to Alon Zakai.

The environment

The environment looks like this: the entire code for creating, inserting, updating, and querying a database occurs in a single editor. Behind the scenes, we re-create the entire state of the database and its result sets on each code edit. Things run smoothly enough in the browser that you don't notice it.

Unlike many online SQL tutorials, this environment is entirely client side. It has no limitations on what you can do, and if we wanted, we could even let you export the SQL databases you create.
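
For a flavor of what running SQLite entirely in the browser looks like, here is a minimal sketch using the present-day sql.js API (which may differ from the build described in this post); it assumes the sql.js script has been loaded and exposes initSqlJs, and the table and data are made up.

initSqlJs().then(SQL => {
  const db = new SQL.Database();

  // Re-create the whole database state from the editor contents...
  db.run("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT);");
  db.run("INSERT INTO books (title) VALUES ('Intro to SQL'), ('Databases');");

  // ...then show the result sets for the queries.
  const results = db.exec("SELECT COUNT(*) AS count FROM books;");
  console.log(results[0].columns, results[0].values); // ["count"] [[2]]
});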

One of the other main highlights is that you can modify the inserts in the editor, and see the results in real time without having to run the code. This can lead to some cool insights on how changing data affects aggregate queries.

Hour of Code

Unlike the HTML/CSS work, we don’t have a huge number of tutorials created, but we do have some videos, coding talk throughs, challenges and a project setup in a single tutorial which we’ll be using for one of our hour of code offerings: Hour of Databases.

November 26, 2014 12:00 AM

November 25, 2014

Brian Bondy (bbondy)

Automated end to end testing at Khan Academy using Gecko

Developers at Khan Academy are responsible for shipping new stuff they create to as it's ready. As a whole, the site is deployed several times per day. Testing deploys of can take up a lot of time.

We have tons of JavaScript and Python unit tests, but they do not catch various errors that can only happen on the live site, such as Content Security Policy (CSP) errors.

We recently deployed a new testing environment for end to end testing which will result in safer deploys. End to end testing is not meant to replace manual testing at deploy time completely, but over time, it will reduce the amount of time taken for manual testing.

Which types of errors do the tests catch?

The end to end tests catch things like missing resources on pages, JavaScript errors, and CSP errors. They do not replace unit tests, and unit tests should be favoured when it's possible.

Which frameworks are we using?

We chose to implement the end to end testing with CasperJS powered by the SlimerJS engine. Actually we even have one more abstraction on top of that so that tests are very simple and clean to write.

SlimerJS is similar to and mostly compatible with the better-known PhantomJS, but SlimerJS is based on Firefox's Gecko rendering engine instead of WebKit. At the time of this writing it is based on Gecko 33. CasperJS is a set of higher level APIs that can be configured to use either PhantomJS or SlimerJS.

The current version of PhantomJS is based on WebKit and is too far behind to be useful for end to end tests of our site yet. There's a newer version of PhantomJS coming, but it's not ready yet. We also considered using Selenium to automate browsers for the testing, but it didn't meet our objectives for various reasons.
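
To give an idea of how this stack is invoked, CasperJS can be pointed at the SlimerJS engine straight from the command line (the test path below is made up):

casperjs --engine=slimerjs test e2e/page_load_tests.js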

What do the tests do?

They test the actual live site. They can load a list of pages, run scripts on the pages, and detect errors. The scripts emulate a user of the site who fills out forms, logs in, clicks things, waits for things, etc.

We also have scripts for creating and saving programs in our CS learning environment, doing challenges, and we'll even have some for playing videos.

Example script

Here's an example end-to-end test script that logs in, and tests a couple pages. It will return an error if there are any JavaScript errors, CSP errors, network errors, or missing resources:

EndToEnd.test("Basic logged in page load tests", function(casper, test) {
    [
        [ "Home page", "/"],
        [ "Mission dashboard", "/mission/cc-sixth-grade-math"]
    ].map(function(testPage) {
        thenEcho(casper, "Loading page: " + testPage[0]);
        KAPageNav.thenOpen(casper, testPage[1]);
    });
});

When are tests run?

Developers are currently prompted to run the tests when they do a deploy, but we'll be moving this to run automatically from Jenkins during the deploy process. Tests are run both on the staged website version before it is set as the default, and after it is set as the default version.

The output of tests looks like this:

November 25, 2014 12:00 AM

November 19, 2014

David Dahl (ddahl)

Encryptr: ‘zero knowledge’ essential information storage

Encryptr is one of the first “in production” applications built on top of Crypton. Encryptr can store short pieces of text like passwords, credit card numbers and other random pieces of information privately, in the cloud. Since it uses Crypton, all data that is saved to the server is encrypted first, making even a server compromise an exercise in futility for the attacker.

A key feature is that you can run Encryptr on your phone as well as your desktop and all data is available in each place immediately. Have a look:

The author of Encryptr, my colleague Tommy @therealdevgeeks, has recently blogged about building Encryptr. I hope you give it a try and send him feedback through the Github project page.

November 19, 2014 07:30 PM

November 17, 2014

Tim Taubert (ttaubert)

Botching Forward Secrecy

Probably the oldest complaint about TLS is that its handshake is slow and that, together with the transport encryption, it has a lot of CPU overhead. This certainly is not true anymore if configured correctly.

One of the most important features to improve user experience for visitors accessing your site via TLS is session resumption. Session resumption is the general idea of avoiding a full TLS handshake by storing the secret information of previous sessions and reusing those when connecting to a host the next time. This drastically reduces latency and CPU usage.

Enabling session resumption in web servers and proxies can however easily compromise forward secrecy. To find out why having a de-facto standard TLS library (i.e. OpenSSL) can be a bad thing, and how to avoid botching PFS, let us take a closer look at forward secrecy and the current state of server-side implementations of session resumption features.

What is (Perfect) Forward Secrecy?

(Perfect) Forward Secrecy is an important part of modern TLS setups. The core of it is to use ephemeral (short-lived) keys for key exchange so that an attacker gaining access to a server cannot use any of the keys found there to decrypt past TLS sessions they may have recorded previously.

We must not use a server’s RSA key pair, whose public key is contained in the certificate, for key exchanges if we want PFS. This key pair is long-lived and will most likely outlive certificate expiration dates, as you would just use the same key pair to generate a new certificate after the current one expires. In case the server is compromised it would be far too easy to determine the location of the private key on disk or in memory and use it to decrypt recorded TLS sessions from the past.

Using Diffie-Hellman key exchanges where key generation is a lot cheaper we can use a key pair exactly once and discard it afterwards. An attacker with access to the server can still compromise the authentication part as shown above and MITM everything from here on using the certificate’s private key, but past TLS sessions stay protected.

How can Session Resumption botch PFS?

TLS provides two session resumption features: Session IDs and Session Tickets. To better understand how those can be attacked it is worth looking at them in more detail.

Session IDs

In a full handshake the server sends a Session ID as part of the “hello” message. On a subsequent connection the client can use this session ID and pass it to the server when connecting. Because both server and client have saved the last session’s “secret state” under the session ID they can simply resume the TLS session where they left off.

To support session resumption via session IDs the server must maintain a cache that maps past session IDs to those sessions’ secret states. The cache itself is the main weak spot: stealing the cache contents allows an attacker to decrypt all sessions whose session IDs are contained in it.

The forward secrecy of a connection is thus bounded by how long the session information is retained on the server. Ideally, your server would use a medium-sized cache that is purged daily. Purging your cache might however not help if the cache itself lives on persistent storage, as it might be feasible to restore deleted data from it. An in-memory store should be more resistant to these kinds of attacks if it turns over about once a day and ensures old data is overwritten properly.

Session Tickets

The second mechanism to resume a TLS session are Session Tickets. This extension transmits the server’s secret state to the client, encrypted with a key only known to the server. That ticket key is protecting the TLS connection now and in the future and is the weak spot an attacker will target.

The client will store its secret information for a TLS session along with the ticket received from the server. By transmitting that ticket back to the server at the beginning of the next TLS connection both parties can resume their previous session, given that the server can still access the secret key that was used to encrypt.

We ideally want the same secrecy bounds for Session Tickets as for Session IDs. To achieve this we need to ensure that the key used to encrypt tickets is rotated about daily. Just like the session cache, it should not live on persistent storage, so as to leave no trace.

Apache configuration

Now that we have determined how we ideally want session resumption features to be configured, we should take a look at a few popular web servers and load balancers to see whether that is supported, starting with Apache.

Configuring the Session Cache

The Apache HTTP Server offers the SSLSessionCache directive to configure the cache that contains the session IDs of previous TLS sessions along with their secret state. You should use shmcb as the storage type, that is a high-performance cyclic buffer inside a shared memory segment in RAM. It will be shared between all threads or processes and allow session resumption no matter which of those handles the visitor’s request.

SSLSessionCache shmcb:/path/to/ssl_gcache_data(512000)

The example shown above establishes an in-memory cache via the path /path/to/ssl_gcache_data with a size of 512,000 bytes. Depending on the amount of daily visitors the cache size might be too small (i.e. have a high turnover rate) or too big (i.e. have a low turnover rate).

We ideally want a cache that turns over daily, and there is no really good way to determine the right session cache size. What we really need is a way to tell Apache the maximum time an entry is allowed to stay in the cache before it gets overwritten. This must happen regardless of whether the cyclic buffer has actually cycled around yet, and it must be a periodic background job to ensure the cache is purged even when there have not been any requests in a while.

You might wonder whether the SSLSessionCacheTimeout directive can be of any help here - unfortunately no. The timeout is only checked when a session ID is given at the start of a TLS connection. It does not cause entries to be purged from the session cache.

Configuring Session Tickets

While Apache offers the SSLSessionTicketKeyFile directive to specify a key file that should contain 48 random bytes, it is recommended to not specify one at all. Apache will simply generate a random key on startup and use that to encrypt session tickets for as long as it is running.

The good thing about this is that the session ticket key will not touch persistent storage, the bad thing is that it will never be rotated. Generated once on startup it is only discarded when Apache restarts. For most of the servers out there that means they use the same key for months, if not years.

To provide forward secrecy we need to rotate the session ticket key about daily, and current Apache versions provide no way of doing that. The only way to achieve it might be to use a cron job that gracefully restarts Apache daily to ensure a new key is generated. That does not sound like a real solution though, and nothing ensures the old key is properly overwritten.

Changing the key file while Apache is running does not do it either; you would still need to gracefully restart the service to apply the new key. And do not forget that if you use a key file it should be stored on a temporary file system like tmpfs.
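
For completeness, here is a rough sketch of the cron-based workaround, assuming SSLSessionTicketKeyFile points at a key file on tmpfs (the paths and schedule are made up, and nothing here guarantees the old key is securely wiped from memory):

# /etc/crontab - write 48 fresh random bytes to the ticket key file on tmpfs
# and gracefully restart Apache once a day so the new key is picked up.
0 3 * * * root openssl rand -out /run/apache2/ticket.key 48 && apachectl -k graceful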

Disabling Session Tickets

Although disabling session tickets will undoubtedly have a negative performance impact, for the time being you will need to do that in order to provide forward secrecy:

SSLOpenSSLConfCmd Options -SessionTicket

Ivan Ristic adds that to disable session tickets for Apache using SSLOpenSSLConfCmd, you have to be running OpenSSL 1.0.2 which has not been released yet. If you want to disable session tickets with earlier OpenSSL versions, Ivan has a few patches for the Apache 2.2.x and Apache 2.4.x branches.

To securely support session resumption via tickets Apache should provide a configuration directive to specify the maximum lifetime for session ticket keys, at least if auto-generated on startup. That would allow us to simply generate a new random key and override the old one daily.

Nginx configuration

Another very popular web server is Nginx. Let us see how that compares to Apache when it comes to setting up session resumption.

Configuring the Session Cache

Nginx offers the ssl_session_cache directive to configure the TLS session cache. The type of the cache should be shared to share it between multiple workers:

ssl_session_cache shared:SSL:10m;

The above line establishes an in-memory cache with a size of 10 MB. We again have no real idea whether 10 MB is the right size for the cache to turn over daily. Just like Apache, Nginx should provide a configuration directive to allow cache entries to be purged automatically after a certain time. Any entries not purged properly could simply be read from memory by an attacker with full access to the server.

You guessed right, the ssl_session_timeout directive again only applies when trying to resume a session at the beginning of a connection. Stale entries will not be removed automatically after they time out.

Configuring Session Tickets

Nginx allows specifying a session ticket key file using the ssl_session_ticket_key directive, and again you are probably better off not specifying one and having the service generate a random key on startup. The session ticket key will never be rotated and might be used to encrypt session tickets for months, if not years.

Nginx, too, provides no way to automatically rotate keys. Reloading its configuration daily using a cron job might work but does not come close to a real solution either.

Disabling Session Tickets

The best you can do to provide forward secrecy to visitors is thus again to switch off session ticket support until a proper solution is available:

ssl_session_tickets off;

HAproxy configuration

HAproxy, a popular load balancer, suffers from basically the same problems as Apache and Nginx. All of them rely on OpenSSL’s TLS implementation.

Configuring the Session Cache

The size of the session cache can be set using the tune.ssl.cachesize directive that accepts a number of “blocks”. The HAproxy documentation tries to be helpful and explain how many blocks would be needed per stored session but we again cannot ensure an at least daily turnover. We would need a directive to automatically purge entries just as for Apache and Nginx.

And yes, the tune.ssl.lifetime directive does not affect how long entries are persisted in the cache.

Configuring Session Tickets

HAproxy does not allow configuring session ticket parameters. It implicitly supports this feature because OpenSSL enables it by default. HAproxy will thus always generate a session ticket key on startup and use it to encrypt tickets for the whole lifetime of the process.

A graceful daily restart of HAproxy might be the only way to trigger key rotation. This is a pure assumption though, please do your own testing before using that in production.

Disabling Session Tickets

You can disable session ticket support in HAproxy using the no-tls-tickets directive:

ssl-default-bind-options no-sslv3 no-tls-tickets

A previous version of the post said it would be impossible to deactivate session tickets. Thanks to the HAproxy team for correcting me!

Session Resumption with multiple servers

If you have multiple web servers that act as front-ends for a fleet of back-end servers, you will unfortunately not get away with simply not specifying a session ticket key file, or with a dirty hack that reloads the service configuration at midnight.

Sharing a session cache between multiple machines using memcached is possible, but with session tickets you “only” have to share one or more session ticket keys, not the whole cache. Clients will take care of storing and discarding tickets for you.

Twitter wrote a great post about how they manage multiple web front-ends and distribute session ticket keys securely to each of their machines. I suggest reading that if you are planning to have a similar setup and support session tickets to improve response times.

Keep in mind though that Twitter had to write their own web server to handle forward secrecy in combination with session tickets properly and this might not be something you want to do yourselves.

It would be great if either OpenSSL or all of the popular web servers and load balancers would start working towards helping to provide forward secrecy by default and server admins could get rid of custom front-ends or dirty hacks to rotate keys.

November 17, 2014 05:00 PM

November 13, 2014

David Dahl (ddahl)

Crypton: The Privacy Framework

Crypton Logo

Since I left Mozilla last year, I have been working on Crypton, an HTML5 application framework that places privacy above all else. Last month, my team was invited to the ‘Hack In The Box’ security conference in Malaysia to lead a Lab Session on Crypton.

We were required to write a whitepaper for HiTB to publish, which was a great exercise, as my team has been meaning to write a paper for some time. It was a long trip, but worth it. We led a group of engineers through most of the Crypton API in about 2 hours.

I live-coded the ‘skeleton’ of a messaging application in 74 lines of JavaScript. The coolest thing about this session was using Firefox’s Scratchpad for all of the live-coding. It worked so well, we plan on doing more sessions like this.

Crypton is intended for use inside of mobile and desktop applications (FirefoxOS, too). Our initial target for development is via Cordova and node-webkit. The API hides all of the complexity of cryptography from the developer. Developers use APIs that look like any other hosted API, for instance, account creation looks something like this:

var myAccount;
crypton.generateAccount('alice', 'password', function callback(error, successResult) {
  if (error) { console.error(error); return; }
  myAccount = successResult;
});

Beneath this elegant, everyday-looking API call, a set of encryption keys is generated for encryption, signing and HMAC, as well as a key stretched from the password that wraps all of the other keys. This keyring is then stored on the server, making multiple-device operation easy.

As we move forward with the Crypton framework, we are building a “private backend service” which will make Crypton trivially easy to use and require no system administration. More on this in a future post.

November 13, 2014 05:17 PM

November 02, 2014

Tim Taubert (ttaubert)

Generating .onion Names for Tor Hidden Services

You have probably read that Facebook unveiled its hidden service that lets users access their website more safely via Tor. While there are lots of opinions about whether this is good or bad I think that the Tor project described best why that is not as crazy as it seems.

The most interesting part to me however is that Facebook brute-forced a custom hidden service address as it never occurred to me that this is something you might want to do. Again ignoring the pros and cons of doing that, investigating the how seems like a fun exercise to get more familiar with the WebCrypto API if that is still unknown territory to you.

How are .onion names created?

Names for Tor hidden services are meant to be self-authenticating. When creating a hidden service Tor generates a new 1024 bit RSA key pair and then computes the SHA-1 digest of the public key. The .onion name will be the Base32-encoded first half of that digest.

By using a hash of the public key as the URL to contact a hidden service you can easily authenticate it and bypass the existing CA structure. This 80 bit URL is sufficient to prevent collisions; even with a birthday attack (and thus an effective strength of 40 bits) you can only find a random collision, but not the key pair matching a specific .onion name.

Creating custom .onion names

So how did Facebook manage to come up with a public key resulting in facebookcorewwwi.onion? The answer is that they were incredibly lucky.

You can brute-force .onion names matching a specific pattern using tools like Shallot or Scallion. Those will generate key pairs until they find one resulting in a matching URL. That is usably fast for 1-5 characters. Finding a 6-character pattern takes on average 30 minutes and for just 7 characters you might need to let it run for a full day.

Coming up with an .onion name starting with an 8-character pattern like facebook would thus take even longer or need a lot more resources. As a Facebook engineer confirmed they indeed got extremely lucky: they generated a few keys matching the pattern, picked the best and then just needed to come up with an explanation for the corewwwi part to let users memorize it better.

Without taking a closer look at “Shallot” or “Scallion” let us go with a naive approach. We do not need to create another tool to find .onion names in the browser (the existing ones work great) but it is a good opportunity to again show what you can do with the WebCrypto API in the browser.

Generating a random .onion name

To generate a random name for a Tor hidden service we first need to generate a new 1024 bit RSA key just as Tor would do:

function generateRSAKey() {
  var alg = {
    // This could be any supported RSA* algorithm.
    name: "RSASSA-PKCS1-v1_5",
    // We won't actually use the hash function.
    hash: {name: "SHA-1"},
    // Tor hidden services use 1024 bit keys.
    modulusLength: 1024,
    // We will use a fixed public exponent for now.
    publicExponent: new Uint8Array([0x03])
  };

  return crypto.subtle.generateKey(alg, true, ["sign", "verify"]);
}

generateKey() returns a Promise that resolves to the new key pair. The second argument specifies that we want the key to be exportable, as we need to export it in order to check for pattern matches. We will not actually use the key to sign or verify data, but we need to specify valid usages for the public and private keys.

To check whether a generated public key matches a specific pattern we of course have to compute the hash for the .onion URL:

function computeOnionHash(publicKey) {
  // Export the DER encoding of the SubjectPublicKeyInfo structure.
  var promise = crypto.subtle.exportKey("spki", publicKey);

  promise = promise.then(function (spki) {
    // Compute the SHA-1 digest of the SPKI.
    // Skip 22 bytes (the SPKI header) that are ignored by Tor.
    return crypto.subtle.digest({name: "SHA-1"}, spki.slice(22));
  });

  return promise.then(function (digest) {
    // Base32-encode the first half of the digest.
    return base32(digest.slice(0, 10));
  });
}

We first use exportKey() to get an SPKI representation of the public key, use digest() to compute the SHA-1 digest of that, and finally pass it to base32() to Base32-encode the first half of that digest.

Note: base32() is an RFC 3548 compliant Base32 implementation. chrisumbel/thirty-two is a good one that unfortunately does not support ArrayBuffers, I will use a slightly adapted version of it in the example code.

Finding a specific .onion name

The only thing missing now is a function that checks for pattern matches and loops until we found one:

function findOnionName(pattern) {
  var key;

  // Start by generating a random key pair.
  var promise = generateRSAKey().then(function (pair) {
    key = pair.privateKey;

    // Generate the .onion hash of the public key.
    return computeOnionHash(pair.publicKey);
  });

  return promise.then(function (hash) {
    // Try again if the pattern doesn't match.
    if (!pattern.test(hash)) {
      return findOnionName(pattern);
    }

    // Key matches! Export and format it.
    return formatKey(key).then(function (formatted) {
      return {key: formatted, hash: hash};
    });
  });
}

We simply use generateRSAKey() and computeOnionHash() as defined before. In case of a pattern match we export the PKCS8 private key information, encode it as Base64 and format it nicely:

function formatKey(key) {
  // Export the DER-encoded ASN.1 private key information.
  var promise = crypto.subtle.exportKey("pkcs8", key);

  return promise.then(function (pkcs8) {
    var encoded = base64(pkcs8);

    // Wrap lines after 64 characters.
    var formatted = encoded.match(/.{1,64}/g).join("\n");

    // Wrap the formatted key in a header and footer.
    return "-----BEGIN PRIVATE KEY-----\n" + formatted +
           "\n-----END PRIVATE KEY-----";

Note: base64() refers to an existing Base64 implementation that can deal with ArrayBuffers. niklasvh/base64-arraybuffer is a good one that I will use in the example code.

What is logged to the console can be directly used to replace any random key that Tor has assigned before. Here is how you would use the code we just wrote:

findOnionName(/ab/).then(function (result) {
  console.log(result.hash + ".onion", result.key);
}, function (err) {
  console.log("An error occurred, please reload the page.");

The Promise returned by findOnionName() will not resolve until a match was found. When generating lots of keys Firefox currently sometimes fails with a “transient error” that needs to be investigated. If you want a loop that runs despite that error you could simply restart the search in the error handler.

The code

Include it in a minimal web site and have the Web Console open. It will run in Firefox 33+ and Chrome 37+ with the WebCrypto API explicitly enabled (if necessary).

The pitfalls

As said before, the approach shown above is quite naive and thus very slow. The easiest optimization to implement might be to spawn multiple web workers and let them search in parallel.
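
A rough sketch of that idea follows; the worker file name and message format are made up, and WebCrypto availability inside workers may vary between browsers. Each worker would run the findOnionName() loop from above and post back the first match:

var workers = [];
var numWorkers = navigator.hardwareConcurrency || 4;

for (var i = 0; i < numWorkers; i++) {
  var worker = new Worker("onion-search-worker.js");

  worker.onmessage = function (event) {
    // First match wins - terminate the remaining workers.
    workers.forEach(function (w) { w.terminate(); });
    console.log(event.data.hash + ".onion", event.data.key);
  };

  // Pass the pattern as a string so the worker can build its own RegExp.
  worker.postMessage("ab");
  workers.push(worker);
}

Inside onion-search-worker.js the message handler would simply call findOnionName(new RegExp(pattern)) and postMessage() the result.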

We could also speed up finding keys by not regenerating the whole RSA key every loop iteration but instead increasing the public exponent by 2 (starting from 3) until we find a match and then check whether that produces a valid key pair. If it does not we can just continue.

Lastly, the current implementation does not perform any safety checks that Tor might run on the generated key. All of these points would be great reasons for a follow-up post.

Important: You should use the keys generated with this code to run a hidden service only if you trust the host that serves it. Getting your keys off of someone else’s web server is a terrible idea. Do not be that guy or gal.

November 02, 2014 03:00 PM

October 30, 2014

Tim Taubert (ttaubert)

HTTP Public-Key-Pinning Explained

In my last post “Deploying TLS the hard way” I explained how TLS and its extensions (as well as a few HTTP extensions) work and what to watch out for when enabling TLS for your server. One of the HTTP extensions mentioned is HTTP Public-Key-Pinning (HPKP). As a short reminder, the header looks like this:

Public-Key-Pins:
  pin-sha256="<base64 SPKI hash of a key in your certificate chain>";
  pin-sha256="<base64 SPKI hash of a backup key>";
  max-age=15768000; includeSubDomains

You can see that it specifies two pin-sha256 values, that is the pins of two public keys. One is the pin of any public key in your current certificate chain and the other is the pin of any public key not in your current certificate chain. The latter is a backup in case your certificate expires or has to be revoked.

It is definitely not obvious which public keys you should pin and what a good backup pin would be. Let us answer those questions by starting with a more detailed overview of how public key pinning and TLS certificates work.

How are RSA keys represented?

Let us go back to the beginning and start by taking a closer look at RSA keys:

$ openssl genrsa 2048

The above command generates a 2048 bit RSA key and prints it to the console. Although it says -----BEGIN RSA PRIVATE KEY----- it does not only return the private key but an ASN.1 structure that also contains the public key - we thus actually generated an RSA key pair.

A common misconception when learning about keys and certificates is that the RSA key itself for a given certificate expires. RSA keys however never expire - after all they are just numbers. Only the certificate containing the public key can expire and only the certificate can be revoked. Keys “expire” or are “revoked” as soon as there are no more valid certificates using the public key, and you threw away the keys and stopped using them altogether.

What does the certificate contain?

By submitting the Certificate Signing Request (CSR) containing your public key to a Certificate Authority it will issue a valid certificate. That will again contain the public key of the RSA key pair we generated above and an expiration date. Both the public key and the expiration date will be signed by the CA so that modifications of any of the two would render the certificate invalid immediately.

For simplicity I left out a few other fields that X.509 certificates contain to properly authenticate TLS connections, for example your server’s hostname and other details.

How does public key pinning work?

The whole purpose of public key pinning is to detect when the public key of a certificate for a specific host has changed. That may happen when an attacker compromises a CA such that they are able to issue valid certificates for any domain. A foreign CA might also just be the attacker, think of state-owned CAs that you do not want to be able to MITM your site. Any attacker intercepting a connection from a visitor to your server with a forged certificate can only be prevented by detecting that the public key has changed.

After establishing a TLS session with the server, the browser will look up any stored pins for the given hostname and check whether any of those stored pins match any of the SPKI fingerprints (the output of applying SHA-256 to the public key information) in the certificate chain. The connection must be terminated immediately if pin validation fails.
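
To make the term concrete, here is an illustration with the WebCrypto API of what such a fingerprint is (this is not how browsers implement pin validation): export the SubjectPublicKeyInfo of a public CryptoKey, hash it with SHA-256 and Base64-encode the digest, reusing a base64() helper like the one mentioned in the .onion post above.

function computeSPKIFingerprint(publicKey) {
  // Export the DER-encoded SubjectPublicKeyInfo structure.
  return crypto.subtle.exportKey("spki", publicKey).then(function (spki) {
    // Hash it with SHA-256 ...
    return crypto.subtle.digest({name: "SHA-256"}, spki);
  }).then(function (digest) {
    // ... and Base64-encode the digest to get the pin value.
    return base64(digest);
  });
}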

A valid certificate that passed all basic checks will be accepted if the browser could not find any pins stored for the current hostname. This might happen if the site does not support public key pinning and does not send any HPKP headers at all, or if this is the first visit and the browser has not seen the HPKP header in a previous visit.

What if you need to replace your certificate?

If your certificate expires or an attacker stole the private key you will have to replace (and possibly revoke) the leaf certificate. This might invalidate your pin, the constraints for obtaining a new valid certificate are the same as for an attacker that tries to impersonate you and intercept TLS sessions.

Pin validation requires checking the SPKI fingerprints of all certificates in the chain and will succeed if any of the public keys matches any of the pins. When for example StartSSL signed your certificate you have another intermediate Class 1 or 2 certificate and their root certificate in the chain. The browser trusts only the root certificate but the intermediate ones are signed by the root certificate. The intermediate certificate in turn signs the certificate deployed on your server and that is called a chain of trust.

If you pinned your leaf certificate then the only way to recover is your backup pin - whatever this points to must be included in your new certificate chain if you want to allow users that stored your pin from previous connections back on your server.

An easier solution would be available if you provided the SPKI fingerprint of StartSSL’s Class 1 intermediate certificate. To construct a new valid certificate chain you simply have to ask StartSSL to re-issue a new certificate for a new or your current key. This comes at the price of a slightly bigger attack surface as someone that stole the private key of the CA’s intermediate certificate would be able to impersonate your site and pass key pinning checks.

Another possibility is pinning StartSSL’s root certificate. Any certificate issued by StartSSL would let you construct a new valid certificate chain. Again, this slightly increases the attack vector as any compromised intermediate or root certificate would allow to impersonate your site and pass pinning checks.

What key should I pin?

Given all of the above scenarios you might ask which key would be the best to pin, and the answer is: it depends. You can pin one or all of the public keys in your certificate chain and that will work. The specification requires you to have at least two pins, so you must include the SPKI hash of another CA’s root certificate, another CA’s intermediate certificate (a different tier of your current CA would also work), or another leaf certificate. The only requirement is that this pin is not equal to the hash of any of the certificates in the current chain. The poor browser cannot tell whether you gave it a valid and useful backup pin so it will happily accept random values too.

Pinning to a small set of CAs that you are comfortable with helps you reduce the risk to yourself. Pinning just your leaf certificates is only advised if you are really certain that this is for you. It is a little like driving without a seatbelt and might work most of the time. If something goes wrong it usually goes really wrong and you want to avoid that.

Pinning only your own leaf certs also bears the risk of creating a backup key that adheres to ancient standards and could not be used anymore when you have to replace your current certificate. Assume it was three years ago, and your backup was a 1024-bit RSA key pair. You pin for a year, and your certificate expires. You go to a CA and say “Hey, re-issue my cert for Key A”, and they say “No, your key is too small/weak”. You then say “Ah, but what about my backup key?” - and that also gets rejected because it is too short. In effect, because you only pinned to keys under your control you are now bricked.

October 30, 2014 01:00 PM

October 29, 2014

Tim Taubert (ttaubert)

Talk: Keeping Secrets With JavaScript - an Introduction to the WebCrypto API

With the web slowly maturing as a platform the demand for cryptography in the browser has risen, especially in a post-Snowden era. Many of us have heard about the upcoming Web Cryptography API but at the time of writing there seem to be no good introductions available. We will take a look at the proposed W3C spec and its current state of implementation.




October 29, 2014 04:00 PM

October 27, 2014

Tim Taubert (ttaubert)

Deploying TLS the Hard Way

  1. How does TLS work?
  2. The certificate
  3. (Perfect) Forward Secrecy
  4. Choosing the right cipher suites
  5. HTTP Strict Transport Security
  6. HSTS Preload List
  7. OCSP Stapling
  8. HTTP Public Key Pinning
  9. Known attacks

Last weekend I finally deployed TLS for my site and decided to write up what I learned along the way, hoping that it would be useful for anyone doing the same. Instead of only giving you a few buzz words I want to provide background information on how TLS and certain HTTP extensions work and why you should use them or configure TLS in a certain way.

One thing that bugged me was that most posts only describe what to do but not necessarily why to do it. I hope you appreciate me going into a little more detail to end up with the bigger picture of what TLS currently is, so that you will be able to make informed decisions when deploying yourselves.

To follow this post you will need some basic cryptography knowledge. Whenever you do not know or understand a concept you should probably just head over to Wikipedia and take a few minutes or just do it later and maybe re-read the whole thing.

Disclaimer: I am not a security expert or cryptographer but did my best to research this post thoroughly. Please let me know of any mistakes I might have made and I will correct them as soon as possible.

But didn’t Andy say this is all shit?

I read Andy Wingo’s blog post too and I really liked it. Everything he says in there is true. But what is also true is that TLS with the few add-ons is all we have nowadays and we better make the folks working for the NSA earn their money instead of not trying to encrypt traffic at all.

After you finished reading this page, maybe go back to Andy’s post and read it again. You might have a better understanding of what he is ranting about than you had before if the details of TLS are still dark matter to you.

So how does TLS work?

Every TLS connection starts with both parties sharing their supported TLS versions and cipher suites. As the next step the server sends its X.509 certificate to the browser.

Checking the server’s certificate

The following certificate checks need to be performed:

  1. the certificate is signed by a CA that the browser trusts,
  2. it has not expired,
  3. it was issued for the hostname we are trying to connect to,
  4. it has not been revoked.

All of these are very obvious, crucial checks. To query a certificate’s revocation status the browser will use the Online Certificate Status Protocol (OCSP) which I will describe in more detail in a later section.

After the certificate checks are done and the browser ensured it is talking to the right host both sides need to agree on secret keys they will use to communicate with each other.

Key Exchange using RSA

A simple key exchange would be to let the client generate a master secret and encrypt that with the server’s public RSA key given by the certificate. Both client and server would then use that master secret to derive symmetric encryption keys that will be used throughout this TLS session. An attacker could however simply record the handshake and session for later, when breaking the key has become feasible or the machine turns out to be susceptible to a vulnerability. They may then use the server’s private key to recover the whole conversation.

Key Exchange using (EC)DHE

When using (Elliptic Curve) Diffie-Hellman as the key exchange mechanism both sides have to collaborate to generate a master secret. They generate DH key pairs (which is a lot cheaper than generating RSA keys) and send their public key to the other party. With the private key and the other party’s public key the shared master secret can be calculated and then again be used to derive session keys. We can provide Forward Secrecy when using ephemeral DH key pairs. See the section below on how to enable it.
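
To get a feel for such an ephemeral agreement, here is a rough WebCrypto sketch of the underlying idea (TLS does this internally; the code is only an illustration): two throwaway ECDH key pairs are generated and both sides derive the same shared secret from their own private key and the peer’s public key.

function ephemeralAgreement() {
  var alg = {name: "ECDH", namedCurve: "P-256"};

  // Generate one throwaway key pair per party.
  return Promise.all([
    crypto.subtle.generateKey(alg, false, ["deriveBits"]),
    crypto.subtle.generateKey(alg, false, ["deriveBits"])
  ]).then(function (pairs) {
    var client = pairs[0], server = pairs[1];

    // Each side combines its own private key with the other's public key.
    return Promise.all([
      crypto.subtle.deriveBits({name: "ECDH", public: server.publicKey},
                               client.privateKey, 256),
      crypto.subtle.deriveBits({name: "ECDH", public: client.publicKey},
                               server.privateKey, 256)
    ]);
  }).then(function (secrets) {
    // Both ArrayBuffers now contain the same 32 bytes of shared secret.
    return secrets;
  });
}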

We could in theory also provide forward secrecy with an RSA key exchange if the server generated an ephemeral RSA key pair, shared its public key and then waited for the master secret to be sent by the client. As hinted above, RSA key generation is very expensive and does not scale in practice. That is why RSA key exchanges are not a practical option for providing forward secrecy.

After both sides have agreed on session keys the TLS handshake is done and they can finally start to communicate using symmetric encryption algorithms like AES that are much faster than asymmetric algorithms.

The certificate

Now that we understand authenticity is an integral part of TLS we know that in order to serve a site via TLS we first need a certificate. The TLS protocol can encrypt traffic between two parties just fine but the certificate provides the necessary authentication towards visitors.

Without a certificate a visitor could securely talk to either us, the NSA, or a different attacker but they probably want to talk to us. The certificate ensures by cryptographic means that they established a connection to our server.

Selecting a Certificate Authority (CA)

If you want a cheap certificate, have no specific needs, and need only a single subdomain (e.g. www), then StartSSL is an easy option. Do of course feel free to take a look at different authorities - their services and prices will vary heavily.

In the chain of trust the CA plays an important role: by verifying that you are the rightful owner of your domain and signing your certificate it will let browsers trust your certificate. The browsers do not want to do all this verification themselves so they defer it to the CAs.

For your certificate you will need an RSA key pair, a public and private key. The public key will be included in your certificate and thus also signed by the CA.

Generating an RSA key and a certificate signing request

The example below shows how you can use OpenSSL on the command line to generate a key for your domain. Simply replace example.com with the domain of your website. example.com.key will be your new RSA key and example.com.csr will be the Certificate Signing Request that your CA needs to generate your certificate.

openssl req -new -newkey rsa:4096 -nodes -sha256 \
  -keyout example.com.key -out example.com.csr

We will use a SHA-256 based signature for integrity as Firefox and Chrome will phase out support for SHA-1 based certificates soon. The RSA keys used to authenticate your website will use a 4096 bit modulus. If you need to handle a lot of traffic or your server has a weak CPU you might want to use 2048 bit. Never go below that as keys smaller than 2048 bit are considered insecure.

Get a signed certificate

Sign up with the CA you chose, and depending on how they handle this process you will probably have to first verify that you are the rightful owner of the domain that you claim to possess. StartSSL will do that by sending a token to an administrative email address at your domain and then asking you to confirm the receipt of that token.

Now that you have signed up and are the verified owner of your domain, you simply submit the example.com.csr file to request the generation of a certificate. The CA will sign your public key and the other information contained in the CSR with their private key, and you can finally download your certificate to example.com.crt.

Upload the .crt and .key files to your web server. Be aware that any intermediate certificate in the CA’s chain must be included in the .crt file as well - you can just cat them together. StartSSL’s free tier has an intermediate Class 1 certificate - make sure to use the SHA-256 version of it. All files should be owned by root and must not be readable by anyone else. Configure your web server to use those files and you should have TLS running out of the box.

(Perfect) Forward Secrecy

To properly deploy TLS you will want to provide (Perfect) Forward Secrecy. Without forward secrecy TLS still seems to secure your communication today; it might however not do so once your private key is compromised in the future.

If a powerful adversary (think NSA) records all communication between a visitor and your server, they can decrypt all this traffic years later by stealing your private key or going the “legal” way to obtain it. This can be prevented by using short-lived (ephemeral) keys for key exchanges that the server will throw away after a short period.

Diffie-Hellman key exchanges

Using RSA with your certificate’s private and public keys for key exchanges is off the table as generating a 2048+ bit prime is very expensive. We thus need to switch to ephemeral (Elliptic Curve) Diffie-Hellman cipher suites. For DH you can generate a 2048 bit parameter once; choosing a private key afterwards is cheap.

openssl dhparam -out dhparam.pem 2048

Simply upload dhparam.pem to your server and instruct the web server to use it for Diffie-Hellman key exchanges. When using ECDH the predefined elliptic curve represents this parameter and no further action is needed.

ssl_dhparam /path/to/ssl/dhparam.pem;

Apache unfortunately does not support custom DH parameters; the parameter is always set to 1024 bit and is not user-configurable. This will hopefully be fixed in future versions.

Session IDs

One of the most important mechanisms to improve TLS performance is Session Resumption. In a full handshake the server sends a Session ID as part of the “hello” message. On a subsequent connection the client can use this session ID and pass it to the server when connecting. Because both the server and the client have saved the last session’s “secret state” under the session ID they can simply resume the TLS session where they left off.

Now you might notice that this could violate forward secrecy as a compromised server might reveal the secret state for all session IDs if the cache is just large enough. The forward secrecy of a connection is thus bounded by how long the session information is retained on the server. Ideally, your server would use a medium-sized in-memory cache that is purged daily.

Apache lets you configure that using the SSLSessionCache directive and you should use the high-performance cyclic buffer shmcb. Nginx has the ssl_session_cache directive and you should use a shared cache that is shared between workers. The right size of those caches would depend on the amount of traffic your server handles. You want browsers to resume TLS sessions but also get rid of old ones about daily.
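
Concretely, the relevant lines look roughly like this (the sizes are only illustrative and should be tuned to your traffic):

# Apache - a shared, in-memory, cyclic buffer.
SSLSessionCache shmcb:/path/to/ssl_scache(512000)

# Nginx - a cache shared between all worker processes.
ssl_session_cache shared:SSL:10m;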

Session Tickets

The second mechanism to resume a TLS session are Session Tickets. This extension transmits the server’s secret state to the client, encrypted with a key only known to the server. That ticket key is protecting the TLS connection now and in the future.

This might as well violate forward secrecy if the key used to encrypt session tickets is compromised. The ticket (just as the session cache) contains all of the server’s secret state and would allow an attacker to reveal the whole conversation.

Nginx and Apache by default generate a session ticket key at startup and do unfortunately provide no way to rotate it. If your server is running for months without a restart then you will use that same session ticket key for months and breaking into your server could reveal every recorded TLS conversation since the web server was started.

Neither Nginx nor Apache has a sane way to work around this. Nginx might be able to rotate the key by reloading the server config, which is rather easy to implement with a cron job. Make sure to test that this actually works before relying on it though.

Thus if you really want to provide forward secrecy you should disable session tickets using ssl_session_tickets off for Nginx and SSLOpenSSLConfCmd Options -SessionTicket for Apache.

Choosing the right cipher suites

Mozilla’s guide on server side TLS provides a great list of modern cipher suites that needs to be put in your web server’s configuration. The combinations below are unfortunately supported only by modern browsers; for broader client support you might want to consider using the “intermediate” list.

ECDHE-RSA-AES128-GCM-SHA256:   \
ECDHE-ECDSA-AES128-GCM-SHA256: \
DHE-RSA-AES128-GCM-SHA256:     \
DHE-DSS-AES128-GCM-SHA256:     \
!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK

All these cipher suites start with (EC)DHE which means they only support ephemeral Diffie-Hellman key exchanges for forward secrecy. The last line discards non-authenticated key exchanges, null-encryption (cleartext), legacy weak ciphers marked exportable by US law, weak ciphers (3)DES and RC4, weak MD5 signatures, and pre-shared keys.

Note: To ensure that the order of cipher suites is respected you need to set ssl_prefer_server_ciphers on for Nginx or SSLHonorCipherOrder on for Apache.
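
Wired into the configuration, that looks roughly as follows (the cipher string is the abbreviated list from above):

# Nginx
ssl_prefer_server_ciphers on;
ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK";

# Apache
SSLHonorCipherOrder on
SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK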

HTTP Strict Transport Security (HSTS)

Now that your server is configured to accept TLS connections you still want to support HTTP connections on port 80 to redirect old links and folks typing in the URL bar to your shiny new HTTPS site.

At this point however a Man-In-The-Middle (or Woman-In-The-Middle) attack can easily intercept and modify traffic to deliver a forged HTTP version of your site to a visitor. The poor visitor might never know because they did not realize you offer TLS connections now.

To ensure your users are secured when visiting your site the next time, you want to send an HSTS header to enforce strict transport security. After receiving this header the browser will not try to establish an HTTP connection next time but will directly connect to your website via TLS.

Strict-Transport-Security:
  max-age=15768000; includeSubDomains; preload

Sending this header over an HTTPS connection (it will be ignored via HTTP) lets the browser remember that this domain wants strict transport security for the next six months (~15768000 seconds). The includeSubDomains token enforces TLS connections for every subdomain of your domain and the non-standard preload token will be required for the next section.

HSTS Preload List

If after deploying TLS the very first connection of a visitor is genuine we are fine. Your server will send the HSTS header over TLS and the visitor’s browser remembers to use TLS in the future. The very first connection and every connection after the HSTS header expires however are still vulnerable to a MITM attack.

To prevent this Firefox and Chrome share a HSTS Preload List that basically includes HSTS headers for all sites that would send that header when visiting anyway. So before connecting to a host Firefox and Chrome check whether that domain is in the list and if so would not even try using an insecure HTTP connection.

Including your page in that list is easy, just submit your domain using the HSTS Preload List submission form. Your HSTS header must be set up correctly and contain the includeSubDomains and preload tokens to be accepted.

OCSP Stapling

OCSP - using an external server provided by the CA to check whether the certificate given by the server was revoked - might sound like a great idea at first. On second thought it actually sounds rather terrible. First, the CA providing the OCSP server suddenly has to be able to handle a lot of requests: every client opening a connection to your server will want to know whether your certificate was revoked before talking to you.

Second, the browser contacting a CA and passing the certificate is an easy way to monitor a user’s browsing behavior. If all CAs worked together they probably could come up with a nice data set of TLS sites that people visit, when and in what order (not that I know of any plans they actually wanted to do that).

Let the server do the work for your visitors

OCSP Stapling is a TLS extension that enables the server to query its certificate’s revocation status at regular intervals in the background and send an OCSP response with the TLS handshake. The stapled response itself cannot be faked as it needs to be signed with the CA’s private key. Enabling OCSP stapling thus improves performance and privacy for your visitors immediately.

You need to create a certificate file that contains your CA’s root certificate prepended by any intermediate certificates that might be in your CA’s chain. StartSSL has an intermediate certificate for Class 1 (the free tier) - make sure to use the one having the SHA-256 signature. Pass the file to Nginx using the ssl_trusted_certificate directive and to Apache using the SSLCACertificateFile directive.
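
A minimal sketch of the corresponding configuration might look like this (paths are placeholders):

# Nginx - enable stapling and point it at the CA chain described above.
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /path/to/ca-chain.pem;

# Apache - stapling additionally needs a small response cache.
SSLUseStapling on
SSLStaplingCache shmcb:/var/run/ocsp(128000)
SSLCACertificateFile /path/to/ca-chain.pem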

OCSP Must Staple

OCSP however is unfortunately not a silver bullet. If a browser does not know in advance it will receive a stapled response then the attacker might as well redirect HTTPS traffic to their server and block any traffic to the OCSP server (in which case browsers soft-fail). Adam Langley explains all possible attack vectors in great detail.

One solution might be the proposed OCSP Must Staple Extension. This would add another field to the certificate issued by the CA that says a server must provide a stapled OCSP response. The problem here is that the proposal expired and in practice it would take years for CAs to support that.

Another solution would be to implement a header similar to HSTS, that lets the browser remember to require a stapled OCSP response when connecting next time. This however has the same problems on first connection just like HSTS, and we might have to maintain a “OCSP-Must-Staple Preload List”. As of today there is unfortunately no immediate solution in sight.

HTTP Public Key Pinning (HPKP)

Even with all those security checks when receiving the server’s certificate we would still be completely out of luck in case your CA’s private key is compromised or your CA simply fucks up. We can prevent these kinds of attacks with an HTTP extension called Public Key Pinning.

Key pinning is a trust-on-first-use (TOFU) mechanism. The first time a browser connects to a host it lacks the information necessary to perform “pin validation”, so it will not be able to detect and thwart a MITM attack. This feature only allows detection of such attacks after the first connection.

Generating an HPKP header

Creating an HPKP header is easy, all you need to do is to compute the base64-encoded “SPKI fingerprint” of your server’s certificate. An SPKI fingerprint is the output of applying SHA-256 to the public key information contained in your certificate.

openssl req -inform pem -pubkey -noout < example.com.csr |
  openssl pkey -pubin -outform der |
  openssl dgst -sha256 -binary |
  base64

The result of running the above command can be directly used as the pin-sha256 values for the Public-Key-Pins header as shown below:

Public-Key-Pins:
  pin-sha256="<result of the command above for your current key>";
  pin-sha256="<result of the command above for your backup key>";
  max-age=15768000; includeSubDomains

Upon receiving this header the browser knows that it has to store the pins given by the header and discard any certificates whose SPKI fingerprints do not match for the next six months (max-age=15768000). We specified the includeSubDomains token so the browser will verify pins when connecting to any subdomain.

Include the pin of a backup key

It is considered good practice to include at least a second pin, the SPKI fingerprint of a backup RSA key that you can generate exactly as the original one:

openssl req -new -newkey rsa:4096 -nodes -sha256 \
  -keyout example.com.backup.key -out example.com.backup.csr

In case your private key is compromised you might need to revoke your current certificate and request the CA to issue a new one. The old pin however would still be stored in browsers for six months which means they would not be able to connect to your site. By sending two pin-sha256 values the browser will later accept a TLS connection when any of the stored fingerprints match the given certificate.

Known attacks

In the past years (and especially the last year) a few attacks on SSL/TLS were published. Some of those attacks can be worked around on the protocol or crypto library level so that you basically do not have to worry as long as your web server is up to date and the visitor is using a modern browser. A few attacks however need to be thwarted by configuring your server properly.

BEAST (Browser Exploit Against SSL/TLS)

BEAST is an attack that only affects TLSv1.0. Exploiting this vulnerability is possible but rather difficult. You can either disable TLSv1.0 completely - which is certainly the preferred solution although you might neglect folks with old browsers on old operating systems - or you can just not worry. All major browsers have implemented workarounds so that it should not be an issue anymore in practice.

BREACH (Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext)

BREACH is a security exploit against HTTPS when using HTTP compression. BREACH is based on CRIME but unlike CRIME - which can be successfully defended by turning off TLS compression (which is the default for Nginx and Apache nowadays) - BREACH can only be prevented by turning off HTTP compression. Another method to mitigate this would be to use cross-site request forgery (CSRF) protection or disable HTTP compression selectively based on headers sent by the application.

POODLE (Padding Oracle On Downgraded Legacy Encryption)

POODLE is yet another padding oracle attack on TLS. Luckily it only affects the predecessor of TLS which is SSLv3. The only solution when deploying a new server is to just disable SSLv3 completely. Fortunately, we already excluded SSLv3 in our list of preferred ciphers previously. Firefox 34 will ship with SSLv3 disabled by default, Chrome and others will hopefully follow soon.

Further reading

Thanks for reading and I am really glad you made it that far! I hope this post did not discourage you from deploying TLS - after all, getting your setup right is the most important thing. And it certainly is better to know what you are getting yourselves into than to leave your visitors unprotected.

If you want to read even more about setting up TLS, the Mozilla Wiki page on Server-Side TLS has more information and proposed web server configurations.

Thanks a lot to Frederik Braun for taking the time to proof-read this post and helping to clarify a few things!

October 27, 2014 06:00 PM

October 22, 2014

Gavin Sharp (gavin)


I’m happy to announce that Georg Fritzsche is now officially a Firefox reviewer.

Georg has been contributing to Firefox for a while now - his contributions started with some great work on Firefox’s Telemetry system, as well as investigating stability issues in plugins and Firefox itself. He played a crucial role in building the telemetry experiments system, and more recently has become familiar with a few key parts of the Firefox front-end code, including our click-to-play plugin UI and the upcoming “FHR self support” feature. He’s a thorough reviewer with lots of experience with Mozilla code, so don’t hesitate to ask Georg for review if you’re patching Firefox!

Thanks Georg!

October 22, 2014 10:14 PM

August 01, 2014

Margaret Leibovic (margaret)

Firefox for Android: Search Experiments

Search is a large part of mobile browser usage, so we (the Firefox for Android team) decided to experiment with ways to improve our search experience for users. As an initial goal, we decided to look into how we can make search faster. To explore this space, we’re about to enable two new features in Nightly: a search activity and a home screen widget.

Android allows apps to register to handle an “assist” intent, which is triggered by the swipe-up gesture on Nexus devices. We decided to hook into this intent to launch a quick, lightweight search experience for Firefox for Android users.


Right now we’re using Yahoo! to power search suggestions and results, but we have patches in the works to let users choose their own search engine. Tapping on results will launch users back into their normal Firefox for Android experience.

We also created a simple home screen widget to help users quickly launch this search activity even if they’re not using a Nexus device. As a bonus, this widget also lets users quickly open a new tab in Firefox for Android.


We are still in the early phases of design and development, so be prepared to see changes as we iterate to improve this search experience. We have a few telemetry probes in place to let us gather data on how people are using these new search features, but we’d also love to hear your feedback!

You can find links to relevant bugs on our project wiki page. As always, discussion about Firefox for Android development happens on the mobile-firefox-dev mailing list and in #mobile on IRC. And we’re always looking for new contributors if you’d like to get involved!

Special shout-out to our awesome intern Eric for leading the initial search activity development, as well as Wes for implementing the home screen widget.

August 01, 2014 09:10 PM

July 24, 2014

Marco Bonardo (mak)

Unified Complete coming to Firefox 34

The awesomebar in Firefox Desktop has so far been driven by two autocomplete searches implemented by the Places component:

  1. history: managing switch-to-tab, adaptive and browsing history, bookmarks, keywords and tags
  2. urlinline: managing autoFill results

Moving on, we plan to improve the awesomebar contents, making them even more awesome and personal, but the current architecture complicates things.

Some of the possible improvements suggested include:

When working on these changes we don't want to spend time fighting with outdated architecture choices:

Due to these reasons, we decided to merge the existing components into a single new component called UnifiedComplete (toolkit/components/places/UnifiedComplete.js) that will take care of both autoFill and popup results. While the component has been rewritten from scratch, we were able to re-use most of the old logic that was well tested and appreciated. We were also able to retain all of the unit tests, which have also been rewritten to use a single harness (you can find them in toolkit/components/places/tests/unifiedcomplete/).

So, the actual question is: which differences should I expect from this change?

The component is currently disabled, but I will shortly push a patch to flip the pref that enables it. The preference that controls whether to use the new or old components is browser.urlbar.unifiedcomplete; you can already set it to true in your current Nightly build to enable it.
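If you prefer flipping it from code instead of about:config, a single line in the Browser Console does the trick. This is just a convenience sketch using the standard preferences service:

  // enable the new UnifiedComplete component (set to false to go back)
  Services.prefs.setBoolPref("browser.urlbar.unifiedcomplete", true);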

This also means the old components will shortly be deprecated and won't be maintained anymore. That won't happen until we are completely satisfied with the new component, but you should start looking at the new one if you use autocomplete in your project. Regardless, we'll add console warnings at least 2 versions before complete removal.

If you notice anything wrong with the new awesomebar behavior please file a bug in Toolkit/Places and make it block Bug UnifiedComplete so we are notified of it and can improve the handling before we reach the first Release.

July 24, 2014 11:23 AM

May 28, 2014

Marco Bonardo (mak)

Bookmarks backups respin

Part of the performance improvements we planned for Places, the history, bookmarking and tagging subsystem of Firefox, involved changes to the way we generate bookmarks backups.

As you may know, Firefox stores a backup of your bookmarks into the profile folder almost every day, as a JSON file. The process, so far, had various issues:

  1. it was completely synchronous, and I/O on the main thread is evil
  2. it was also doing a lot more I/O than needed
  3. it was using expensive live-updating Places results instead of a static snapshot
  4. backups were taking up quite some space in the profile folder
  5. we were creating useless duplicate backups
  6. we were slowing down Firefox shutdown
  7. the code was old and ugly

The first step was to reorganize the code and APIs; Raymond Lee and Andres Hernandez took care of most of this part. Most of the existing code was converted to use the new async tools (like Task, Promises and Sqlite.jsm) and the old synchronous APIs were deprecated with loud console warnings.
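To give an idea of what the converted code looks like, here is a minimal sketch in the new style. It is illustrative only, not the actual backup code, and the query is a placeholder:

  Components.utils.import("resource://gre/modules/Task.jsm");
  Components.utils.import("resource://gre/modules/Sqlite.jsm");

  let backupTask = Task.spawn(function* () {
    // open the Places database asynchronously, off the main thread
    let db = yield Sqlite.openConnection({ path: "places.sqlite" });
    try {
      // placeholder query: read the bookmarks in one async round trip
      let rows = yield db.execute("SELECT id, parent, title FROM moz_bookmarks");
      return rows.length;
    } finally {
      yield db.close();
    }
  });

The old synchronous entry points were kept around but now emit those loud deprecation warnings.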

The second step was dedicated to some user-facing improvements. We must ensure the user can revert to the best possible status, but it was basically impossible to distinguish backups created before a corruption from the ones created after it, so we added the size and bookmarks count to each backup. They are now visible in the Library / Restore menu.

Once we had a complete async API exposed and all of the internal consumers were converted to it, we could start rewriting the internals. We replaced the expensive Places result with a direct async SQL query reading the whole bookmarks tree at once. Raymond started working on this, I completed and landed it and, a few days ago, Asaf Romano further improved this new API. It is much faster than before (initial measurements have shown a 10x improvement) and it's also off the main-thread.

Along the way I also removed the backup-on-shutdown code in favor of an idle-only behavior. Before this change we were trying to back up during a 15-minute idle period, and if we could not find a long enough interval we would force a backup on shutdown. This means that, in some cases, we were delaying the browser shutdown by seconds. Currently we look for an 8-minute idle period, and if after 3 days we still have not found a long enough interval, we cut the required idle time down to 4 minutes (note: we started from a larger value and just recently tweaked it based on telemetry data).
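The scheduling logic boils down to something like the following sketch. It is simplified, the helper name is made up, and the real implementation in Places is more involved:

  // idle service: notifies the observer after N seconds of user inactivity
  let idleService = Cc["@mozilla.org/widget/idleservice;1"]
                      .getService(Ci.nsIIdleService);

  const LONG_IDLE  = 8 * 60; // seconds - the preferred idle window
  const SHORT_IDLE = 4 * 60; // seconds - fallback when no backup happened for ~3 days

  function scheduleBackup(backupIsOverdue) {
    let seconds = backupIsOverdue ? SHORT_IDLE : LONG_IDLE;
    idleService.addIdleObserver({
      observe: function (subject, topic, data) {
        if (topic == "idle") {
          maybeCreateBackup(); // hypothetical helper doing the actual work
        }
      }
    }, seconds);
  }

Since nothing is forced at shutdown anymore, quitting Firefox no longer waits on a backup.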

At that point we had an async backups system, a little bit more user-friendly and doing less I/O. But we still had some issues to resolve.

First, we added an md5 hash to each backup, representing its contents, so we can avoid replacing a still-valid backup. This provides more meaningful backups to the user and reduces I/O considerably.
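Conceptually the duplicate check is tiny. Here is a sketch, where md5OfString is a hypothetical stand-in for the real hashing code:

  function shouldCreateNewBackup(backupJson, mostRecentBackup) {
    // mostRecentBackup.hash was recorded when that backup file was written
    let newHash = md5OfString(backupJson); // hypothetical helper
    return !mostRecentBackup || mostRecentBackup.hash != newHash;
  }

If nothing changed since the last backup, the existing file is simply kept, and that is where most of the I/O savings come from.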

Then the only remaining piece of work was to reduce the footprint of backups in the profile folder, both for space and I/O reasons. Luckily we have an awesome community! Althaf Hameez, a community member, volunteered to help us complete this work. Bug 818587 landed recently, adding lz4-like compression to the backups: automatic backups are compressed and have the .jsonlz4 extension, while manual backups are still plain text, to allow sharing them easily with third-party software or previous versions.

Finally, I want to note that we are now re-using most of the changed code for bookmarks.html files as well. While these are no longer our main export format, we still support them for default bookmarks import and for exchanging bookmarks with third-party services or other browsers. So we obtained the same nice perf improvements for them.

Apart from some minor regressions, which are currently being worked on by Althaf himself, who kindly agreed to help us further, we are at the end of this long trip. If you want to read some more gory details about the path that brought us here, you can dig into the dependency tree of bug 818399. If you find any bugs related to bookmark backups, please file a bug in Toolkit / Places.

May 28, 2014 12:11 PM

May 09, 2014

Stephen Horlander (shorlander)

(Re)Designing Firefox

Update — 2014/05/28

I added a slideshow of some of the design iterations we went through from Firefox 4 to Firefox 29.

Warning: It’s image heavy, might take a while to load on a slow connection.

So, last Monday we launched a thing. You may have noticed it. We called it Australis. Now it’s just called Firefox.

We spent a lot of time on it.

Dedicating yourself to a project can become an intense experience. You think about it all the time: in the morning, at dinner, when you are trying to watch a movie, in the shower, in your sleep, when you should be sleeping…

But then you set it free, because it’s finally mature enough for that. It’s exciting. But it’s also really scary. You are never sure if you made the right decisions along the way. I like to take some time in the post release glow for some reflection.


Left: Firefox 4, Right: Firefox 29

So how did we get from there to here?

We launched Firefox 4 in March 2011. It was a big change from Firefox 3. It introduced the Firefox button, revamped the add-ons manager, removed the status bar, combined the stop, go and reload buttons and included a comprehensive visual update—all while still having time to prototype and discard some other features along the way.

And yet it wasn’t perfect. It had a lot of the rough edges that projects accumulate in the process of going from being designed and built to being shipped.

Firefox 4 was our last monolithic release before we moved to a rapid release cycle. Six week cycles seemed like the perfect timeframe to iteratively smooth out the rough edges. So I created a project to do just that.


The project that I had created for iterative refinement, however, quickly transformed into a significant overhaul.

People in the frame: Sinchan Banerjee, Alex Faaborg, Brian Dils, Trond, Stephen Horlander (my legs), Jennifer Boriss, Frank Yan, Alex Limi
Taking the picture: Madhava Enros

At the beginning of June the UX team met up for its first post Firefox 4 team offsite. On the agenda was figuring out “What’s next?”. The entire team gathered in a room to pitch ideas and talk about problems unresolved—or that had been introduced—during the development of Firefox 4.

One theme that had been floating around for a while rose to the surface—Firefox is about customization; it should feel like it’s yours.

What would this mean for the interface we had just shipped? A lot of ideas were tossed around. Eventually one guiding principle stuck—make the best core experience we can and allow users to add and change the things that matter the most to them.

Building a fun, easy-to-use Customization Mode—along with a more flexible Firefox Menu—would become the foundation of the new Firefox.

Visual Profile

So Curve, Such Aerodynamic

The offsite also sparked a set of other ideas that would make up what became known as Australis. Primarily: unifying the disparate bookmarking elements in the main window, refining the visual design, consolidating related or redundant features and streamlining the tabstrip.

While the redesigned customization mode would be core to the experience, the redesigned tabstrip would change the entire profile of Firefox.

“Make it look faster. No, no! It needs more air vents!”
Firefox 4 Aerodynamic Sketch — 2010

We had explored the idea of adding visual cues to Firefox to make it feel faster and smoother before. Yet some of the ideas were a little over the top.

Curvy Tab Sketch — June, 2011

This sketch from the design session—inspired by a previous mockup from Trond—had a curvy tab shape that immediately resonated with everyone.

It also had one important additional design tweak—only render the tab shape for the active tab. Highlighting the active tab reduces visual noise and makes it easier to keep your place in the tabstrip.

The early curve shape tried on a few looks. At first it was too angular, then it was too curvy, then it was too short, then it was too tall and then (finally) it was just right.

Design ideas for background tabs on Windows — October 2011

It turns out that designing background tabs without a tab shape is a lot easier if you have a stable background to work with. Windows 7 has translucent windows of variable tints and Windows 8 has flat windows of variable color. This meant we needed to create our own stable background.

We went through several variations to ensure that the background tabs would be legible. First we tried creating a unified background block, but it seemed too heavy. We even thought about keeping background tab shapes and highlighting the active tab in some other way.

Eventually we decided on a background “fog” that would sit behind the tabs and in front of the window. Think of it as an interface sandwich—glass back, curvy-tab front with a delicious foggy center.

We also made sure that adding curves didn’t increase the width of the tabs and eat into precious tabstrip real estate. And we removed the blank favicon placeholder for sites without favicons, a small tweak that frees up some extra room for the title.

Menu Panel and Customization

Redesigning the Firefox Menu

One size really doesn’t fit all with something as complex and personal as your web browser. Add-ons have always made Firefox a thoroughly customizable browser; however, arranging things has never been a great experience out of the box.

But before we could tackle the customization behavior, we had to rethink the Firefox Menu.

The main toolbar is the primary surface for frequently used actions, while the menu is the secondary surface for stuff you only use some of the time. We wanted users to be able to move things between these surfaces so they could tailor Firefox to their needs.

For the updated layout we started with the idea of a visual icon grid. This grid nicely mapped to the icons already found on the toolbar and would ideally make moving things back and forth feel cohesive.

Larger 3×3 Grid with Labels—by Alex Faaborg

The first grid we designed was wider, with smaller icons and no labels. This quickly changed to a three column grid with larger labeled icons. The updated layout served the same goal but was more clear and also less cramped.

Not everything we currently had in the Firefox Menu would translate directly to the new layout. We had to add special conjoined widgets—also known as “Wide Widgets”—for Cut/Copy/Paste and the Page Zoom controls. We also added a footer for persistent items that can’t be customized away. We did this to prevent users from getting into a broken layout with no escape hatch.

We had some serious debates about whether to use an icon grid or a traditional menu list. The visual grid has some drawbacks: it isn’t as easy to scan and doesn’t scale as well with a lot of items. But the icon grid won this round because it was more visual, more in line with what we wanted out of drag-and-drop customization, and had the side benefit of being touch friendly.

Some items from the Firefox Menu had submenus that couldn’t be easily reduced to a single action: Developer Tools, History, Bookmarks, Help and Character Encoding. Nested submenus hanging off of a panel felt pretty awkward. We had several ideas on how to handle this: an inline expando-tray™, a drawer (interactive mockup—click History) and in-panel navigation.

We settled on what we call subviews (interactive mockup—click History). Subviews are partial overlays containing lists that slide in over the menu. Click anywhere in the partially visible menu layer to get back to the main menu—or close the entire menu and reopen it.

With the new menu layout Firefox should hopefully work just fine for most people out of the box. But by using the new Customization Mode you can really tailor Firefox to your needs. If you are interested in knowing more about why what is where, Zhenshuo Fang wrote a great post about it.

Customize All the Things!

Now we needed to figure out how to update Firefox’s aging customization experience. Things started off a little ambitious with the idea that we could combine toolbar customization, themes and add-ons into the same UI. This led to an early interactive demo (it does some stuff, hint: tools icon –> plus sign) to try and see if this would work.

This demo surfaced a few issues: 1) including add-ons was going to be complicated, and 2) we needed a direct representation of the menu panel instead of a proxy. This led to a bunch of mockups for a flatter interface sans add-ons integration. Eventually I made a second demo without the theme selector that is closer to what we ended up shipping. Then Blake Winton turned that into an awesome prototype that actually did something.

Final Customization Mode

The demos and prototypes helped us quickly get feedback from people on the idea. One of the complaints we heard was that it wasn’t obvious that you were entering a mode. We mocked up a lot of ideas for various ways to give visual feedback that you were in a mode and could now rearrange your stuff. We eventually settled on a subtle blueprint pattern and Madhava suggested we add some kind of animation with padding.

Wrapping it up

Thank you to everyone who put so much time and effort into making this happen.

If you want to know more about the people and process behind Firefox 29, Madhava has a good post with an overview.

I think the post-release glow is over now. Time to get back to making Firefox better.

May 09, 2014 03:43 PM

May 05, 2014

Christian Legnitto (LegNeato)

Firefox 29’s best feature

Apparently Firefox 29 had some major feature or something. I don’t know much about that. What I do know about is a new Firefox feature that will finally cause Firefox to beat Chrome once and for all. Yes ladies and gentlemen, I am talking about a revolutionary feature I’ve dubbed:

Scheme typo fixer-upper™©

Have you ever copy and pasted URLs into the location bar and seen this screen instead of the page you wanted?


Of course you have. So you double check the URL and go “ooooooooh, I missed the h in http when copying and pasting.” Stupid me. Well, at least Firefox gave me an obtuse message to tell me what is going on. How lovely.

So, this happened to me one too many times. And I got pissed. And I had a conversation with my browser. I was probably drinking so it made sense at the time. The conversation went something like this:

WTF Firefox. You KNOW what I meant. Stop being a dick and just do the right thing™.

And then it reminded me of one of the very first “cool” things I experienced at Facebook. Facebook uses Phabricator and Arcanist for their development workflow. Arc (like Mozilla’s Mach) is a local tool to run builds, send up diffs, manage patches, and things of that persuasion. One of the things you do with arc is run

arc build

Well guess what happened. My first day at Facebook I typed “arc biuld” instead of “arc build”. I’m such an idiot. They should have fired me. Much to my delight I saw this:


It says:

Assuming 'biuld' is the British spelling of 'build'...

And it hit me. Someone was THINKING. They encountered this problem, thought about how many people would do that typo, realized there was really only one possible intended result, and decided to have the computer just do the right thing™. Someone cared about engineer time so much they fixed it for every future engineer to come. The thought and whimsical message made such a large impression on me I insisted we add a similar feature to Buck, Facebook’s super-fast Android build tool:

$ ./bin/buck biuld
 No sign of buck.jar -- building Buck!

It uses Levenshtein distance to figure out what you really meant to type. Humans: 1, machines: 0. Sweet.
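Levenshtein distance is just the minimum number of single-character insertions, deletions and substitutions needed to turn one string into another. A compact JavaScript sketch (Buck itself is Java, so this is not its actual code):

  function levenshtein(a, b) {
    // single-row dynamic programming table; after processing i characters of a,
    // dp[j] is the edit distance between a.slice(0, i) and b.slice(0, j)
    let dp = Array.from({ length: b.length + 1 }, (_, j) => j);
    for (let i = 1; i <= a.length; i++) {
      let prev = dp[0];
      dp[0] = i;
      for (let j = 1; j <= b.length; j++) {
        let tmp = dp[j];
        dp[j] = Math.min(dp[j] + 1,      // delete from a
                         dp[j - 1] + 1,  // insert into a
                         prev + (a[i - 1] == b[j - 1] ? 0 : 1)); // substitute
        prev = tmp;
      }
    }
    return dp[b.length];
  }

  levenshtein("biuld", "build"); // 2

Anything at or below a small threshold (say 1 or 2) is a strong hint about what the user actually meant.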

So right about here I am likely supposed to point out Facebook (and specifically my team) is hiring. Cool, I’ve now done my duty as a manager–that should keep the recruiting attack dogs at bay.

Where was I? Oh yeah, man over machine. So I started thinking, which is never a good thing. This sort of thing happens to me every day and slows me down a lot. For example, if I type “git cone” instead of “git clone”


It says:

$ /usr/bin/git cone
git: 'cone' is not a git command. See 'git --help'.

Did you mean this?
        clone

You know DAMN WELL what I meant git, and you mock me by echoing it out right in front of me. You spit in my face and just sit there all smug about it. What a dick.

So I went to teach git some manners, and it turns out the feature is already there. If you set help.autocorrect, git will do the right thing™. There was a bug about making it the default, but I lost interest while reading it trying to figure out why they insist on making humans do the work of computers. As an aside, did you know the git developers only take contributions as patches to a mailing list? Coding like we’re in the 90s, whooooooo!
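For the record, turning it on is a single command:

  git config --global help.autocorrect 1

The value is the delay in tenths of a second before git runs the corrected command, so 1 fires almost immediately while something like 20 gives you two seconds to hit Ctrl-C.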

Get. To. The. Point.

What does this have to do with Firefox you might ask? What, you don’t like my stories? Fine! Remember the screenshot at the beginning:

I started thinking how stupid Firefox was being, how many wasted human-hours were spent reading the message and acting on it, and how much goodwill Firefox loses when its users roll their eyes at this screen.

And it made me angry. So I decided to fix it. How hard could it be? Alcohol may have been a factor in my difficulty calculation. But I did it! The proof is a screenshot of Google:


I typed in “ttps://” and was taken straight to Google automatically, without seeing an error screen! Firefox will now fix all these typos (a rough sketch of the mapping follows the list):

ttp:// → http://
ttps:// → https://
tps:// → https://
ps:// → https://
ile:// → file://
le:// → file://
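Conceptually this is nothing fancier than a prefix lookup on the typed input before the URI gets resolved. A minimal sketch of the idea in JavaScript - illustrative only, not the actual patch:

  // dropped-character typos of "http://", "https://" and "file://"
  const SCHEME_FIXUPS = {
    "ttp://":  "http://",
    "ttps://": "https://",
    "tps://":  "https://",
    "ps://":   "https://",
    "ile://":  "file://",
    "le://":   "file://",
  };

  function fixupSchemeTypo(input) {
    for (let typo of Object.keys(SCHEME_FIXUPS)) {
      if (input.startsWith(typo)) {
        return SCHEME_FIXUPS[typo] + input.slice(typo.length);
      }
    }
    return input; // jumbled typos like "hptts" are left alone
  }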

Humans: 2, machines: 0.

Err, ok, this feature doesn’t screenshot well. Just trust me it works. But you probably don’t trust me. I wouldn’t. Try it yourself in any flavor of Firefox > 29! Go ahead, I’ll wait.

Did it work? Of course it did! Oh damn, you tried “hptts” and it didn’t get fixed? My change only deals with dropped characters and not jumbled character typos. It’s not a bug because I documented it™. To be honest I am lazy and lost interest after I fixed my copy and paste pain but it would be trivial to extend my patch to cover the latter case. Get in touch and I can mentor you! Take a look at the code (part 1, part 2). And there are tests! I mainly wrote those to not be a hypocrite as I preach the unit test gospel to product engineers every day. Oh, and for “quality”.

Turns out this life-changing feature was a lot more involved than expected, for two reasons.

First, I naively assumed there would be one location where everything gets loaded from the location bar input. To me that was a sensible way of organizing things. I could then just do a string match on the URL scheme and replace it with the right stuff…easy peasy, right? Errr, no. Turns out Firefox is…interesting. And on a lot of platforms with different UI code.

I kept digging deeper into the stack, and when I got to where I needed to be (docshell) it turned out not to be the right place for the fix at all: I’d potentially break web compatibility. Whoops. Rather than doing a simple fix at a lower-level chokepoint, I needed to patch optional support into that low level and then hunt down every place Firefox takes in URLs and apply the typo fixup there. And then of course my first patch didn’t work due to a last-minute change and I had to fix things up in a second patch. See the gory details in the bug.

Second, Mozilla’s current setup is very developer-hostile. And that isn’t just “OMGWTFBBQ why isn’t Firefox on teh GitHubz!” and “lke dis if u cry evry tme u tuch c++”. Seriously, I do not know how Firefox developers get any work done. I would quit if I was forced to work on Firefox code all day. I’m not talking about code quality, I’m talking about the supporting stuff – from build times, to no linters, to crufty code-search tools, to horrible code review, to dealing with no CI. I think I’ve been spoiled at Facebook as we invest heavily in speeding up the development workflow (we’re hiring!). I have more thoughts on this that I intend to blog about later, but to my Mozilla friends: it doesn’t need to be like this. Expect more from your tools and development workflow!

Ahem. I’ll get off my soapbox and save my fire and brimstone tools talk for another day.

El fin.

Please enjoy the best feature Mozilla has shipped since Firefox 1.0 – the Scheme typo fixer-upper™©! If you encounter problems feel free to file bugs on me and I will promptly ignore them.

May 05, 2014 02:10 PM

April 30, 2014

Margaret Leibovic (margaret)

Firefox Hub Add-on Hackathon

For the past few months, the Firefox for Android team has been working on Firefox Hub, a project to make your home page more customizable and extensible. At its core, this feature is a set of new APIs that allows add-ons to add new content to the Firefox for Android home page.

These APIs are new in Firefox 30, but there are even more features available in Firefox 31, which is moving to Aurora this week. As we’ve been working on these APIs, we’ve been building plenty of demo add-ons ourselves, but we’re at the point where we want more developers to get involved!

Next week (May 5-9), we’re holding a distributed add-on hackathon to encourage more people to start building things with these APIs, and I want to encourage anyone who’s interested to participate!

This hackathon has three main goals:

To kick off this hackathon, I made an etherpad that links to documentation, example add-ons, and a long list of new add-on ideas. Since our community lives all over the world, we’ll hang out in #mobile on IRC to ask questions, report problems, and share progress on things we’re making. In addition to building add-ons, you can also participate by testing new add-ons as they’re made!

At the end of the week, everyone who participated in the hackathon will receive a special limited-edition open badge, as well as pride in contributing to Firefox for Android. And maybe I’ll try to dig up some special prizes for anyone who makes a really cool add-on :)

April 30, 2014 05:40 PM

April 29, 2014

Matthew Noorenberghe (MattN)

Thanks for reviewing screenshots of Firefox's new look!

Thanks to the 150 participants who, in just over 24 hours, reviewed screenshots of the new Firefox look which was released today! This was an amazing success for the first use of the online screenshot review tool and led to around 1000 pieces of feedback. I've now summarized all of the feedback (over 90 unique issues) and have begun filing bugs or associating reports with existing issues on file. The top contributors were syssgx, MarcN, db48x, Vykintas, & zombie. Send me a message to claim your prize. Given the success, we'll likely be using the tool again for UI changes in the future, so watch for participation opportunities in the coming months. You can download the new version of Firefox now or check for updates from the about dialog. Thanks!

April 29, 2014 09:50 AM

April 15, 2014

Matthew Noorenberghe (MattN)

An easy way to test the New Firefox Beta look and feel before it's released

Screenshot of the new Firefox UI on Windows 7 with the menu panel open

The new Firefox Beta is faster, simplified and easier to customize, and we need your help to test it out before it gets released in a few weeks’ time. There is now an easy way to review Firefox user interface changes without even installing the new version. “How is that possible?” you might ask. We have a collection of hundreds of screenshots of the new Firefox in various configurations (affecting features such as tabs, toolbars, themes, customization mode, and the new menu) that are ready for you to review. It's fast and easy to do in three simple steps:
  1. Open up the screenshot review tool and enter a nickname to log in (there may be a small reward for the most valuable contributions so keep that in mind when choosing)
  2. A random screenshot will be displayed where you can identify any visual issues related to the new user interface. Simply drag to select the region of the image and add a comment (and optionally a bug number). See an example.
  3. When you're done reviewing that image, simply click the button to get another and go to step 2. Endless fun ensues!
Of course, installing Firefox Beta, testing the functionality and filing bugs is still really valuable and encouraged. Areas to focus on include the new customization mode, menu panel, tabs, and Firefox Account Sync. So, what are you waiting for? Start reviewing now.

April 15, 2014 01:14 AM

April 07, 2014

Tim Taubert (ttaubert)

The Mozilla Build VM

Note: This post might be outdated as it has been turned into an MDN page. Please refer to the MDN page for the latest information about the Firefox Developer VM. It will also tell you the correct checksum to compare to after downloading.

If you have ever wondered what contributing to Firefox feels like, but you never had the time to read and follow our instructions to set up a build environment, or wanted to avoid screwing around with your precious system, then this might be for you.

This article will guide you through a small list of steps that in the end will leave you with a virtual machine ready to modify and build your own development version of Firefox.

I hope this will be valuable to novice programmers who do not have a full C++ development environment at hand, as well as to the more experienced folks with little time and lots of curiosity.

Install VirtualBox

Note: The Open Virtualization Format (OVF) is supported by other virtualization environments such as VMware. You can use one of those instead of VirtualBox if it is already installed.

Go to the VirtualBox Downloads page and download the latest version available for your operating system. Should you already have VirtualBox installed then please ensure you are running the latest version by checking for updates before continuing.

Download the Firefox Build Environment

Now is the time to download the virtual machine containing our development environment ready to modify and build Firefox. You can get it here:
(SHA-256 = <Please see MDN page linked at the top.>)

Downloading ~2.8 GB might take a while if you are on a slow connection, sorry.

Set up the virtual machine

Once the image has been downloaded you can double-click the .ova file and import the new virtual machine into VirtualBox. Please give it at least 2048MB of RAM (4096MB if you can) and as many processors as your host machine has available. Building Firefox takes up a lot of resources and you want it to build as fast as possible.

Now that your virtual machine is ready, boot it and wait for the Ubuntu desktop to be shown. A terminal will pop up automatically and do some last steps before we can get started. After a successful installation Sublime 2 should start automatically.

Note: Should you ever need root credentials, use “firefox-dev” as the password. If you want to change your Language and Keyboard settings then follow the instructions on How to change the UI Language in Ubuntu.

Build Firefox

Click Tools > Build to start the process. This might take a long time depending on the features of your host machine, please be patient. You can watch the build progress in the text editor’s console at the bottom. Once the build has finished you can use Tools > Run to start your custom Firefox build and check that everything works as expected.

Note: if you want to switch from an optimized to a debug build then choose Tools > Build System > Firefox (Debug) and hit Tools > Build again to start a debug build.

Now what?

You successfully built Firefox for the first time and wonder what’s next? How about picking a small bug for a start, contribute code and get your changes shipped to half a billion people? If that sounds compelling then take a look at Bugs Ahoy! and find something to work on that sounds interesting to you.

If you are interested in digging deeper into the build system or the version control system, or want to know more about how to create your first patch and post it to our bug tracker then take a look at our Code Firefox Lessons.

I would love to hear your feedback about the Firefox Build Environment! Please tell me what can be improved and what you would like to see in the next version. Do not hesitate to drop me a mail should you have a more detailed opinion.

April 07, 2014 04:00 PM

April 04, 2014

Tim Taubert (ttaubert)

Starting My Fourth Year at Mozilla

Today marks the beginning of my fourth year at Mozilla. It has been an amazing three years and the best job I could hope for. Since March I have been an Engineering Manager, working with a few highly intelligent and great people whom I am very grateful to call my team.

I am super excited about all the personal and professional challenges I will be facing this year. It is my core belief that it is all about growth and for that Mozilla is exactly the right place to be.


April 04, 2014 09:00 AM

March 27, 2014

Gavin Sharp (gavin)

even more new firefox reviewers

I’m pleased to announce two additions to the list of Firefox reviewers:

Please join me in congratulating Mike and Florian!

March 27, 2014 03:00 AM

March 23, 2014

Matthew Noorenberghe (MattN)

PSA: Deployed Firefox profiles should not contain profile lock files

In short

If you are making a base Firefox profile which will be deployed to other machines, be sure to delete the profile lock files before creating the template/image. The filenames to delete vary by operating system and can be found in the roaming profile directory. When a lock file exists, its modification timestamp needs to reflect the time when that profile was actually locked/used, so having a lock file from months ago in a base profile or image will lead to incorrect results.
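As a point of reference - and this is an assumption you should verify against your own profiles rather than rely on - the lock files are typically named as follows, so clearing them before imaging looks roughly like this:

  Windows:        del parent.lock
  Linux / OS X:   rm -f .parentlock lock

Either way, make sure no stale lock file ships in the template or image.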

Longer version

In the past (prior to bug 294260 in Firefox 13), the profile lock files (mentioned above) would be deleted upon successful shutdown of the Mozilla application. Since bug 294260, we now leave the profile lock files in place and simply update the existing file (and thus its modification time). We use the modification time and a preference to detect startup crashes related to the profile; in this case that means a crash occurring between the time the profile is locked and the point where the browser window has been displayed for 30 seconds. If the timestamp of the profile lock file doesn't match what was stored in the preferences, we assume that there was a crash during the last startup before it got to updating the preference. After three consecutive startup crashes detected within 6 hours, we will ask the user if they want to reset their profile or enter safe mode to resolve the crashes.

Offering to reset unused profiles

More recently, we added a new use of the last time of profile use in order to detect profiles which haven't been used in a while (60 days) and offer to reset them (bug 498181 in Firefox 25). The idea is that there are people returning to Firefox after having been away for a while using another browser, and they'd like to give Firefox another try. If they left Firefox due to technical issues such as slow startup or crashes, we don't want them to return to that problem, so we offer a reset as a way to get a fresher Firefox experience while still preserving key data such as bookmarks, history, passwords, etc. (the list is outlined in the reset dialog). If the profile lock file is part of a base profile or image (e.g. with DeepFreeze) and its timestamp is more than 60 days old, users may get prompted to reset immediately. Some users have already run into this, and that's what prompted this post. In summary, the profile lock file must accurately reflect when the profile was last locked, and deleting it from base profiles is the best way to ensure this. Let me know in the comments if you have any questions.

March 23, 2014 09:00 PM

March 18, 2014

Brian Bondy (bbondy)

Does Modern UI Firefox usage indicate Windows 8.1 modern UI is in trouble?

On Friday, Mozilla announced it would not ship its Modern UI Firefox browser due to low adoption.

Is Windows 8.1 modern UI in trouble?


Did Mozilla make the right decision with Metro given the current circumstances?


Modern UI Firefox usage, in Mozilla's measurements, is not necessarily a true reflection of Modern UI usage in general.

Modern UI Firefox Usage != Modern UI usage

I do believe that Microsoft's modern UI is important for touch hardware, and I do believe that touch hardware is something people are adopting and will adopt more.

I believe that the Modern UI Firefox usage was low, at least in part, for 2 specific reasons:

  1. Microsoft doesn't allow your browser to run in Modern UI unless it is the default browser. Several people could have had a Modern UI capable Firefox pre-release installed, but just never knew it.

  2. Microsoft makes it a lot harder to set your browser as the default in Windows 8. Before Windows 8, each browser could prompt you and then set your default for you. As of Windows 8 you need to ask first, then tell Microsoft to show a prompt that presents a list of browsers (confusing). And that only sets the HTTP default. If you want all defaults, such as HTML and HTTP, then you have to send users to the Control Panel, make them search for the browser, then make them select your browser and set all defaults.

It would be great if Microsoft could fix these issues around default status. More competition leads to better software, and having good software on your platform is important. Every Windows Modern UI user loses when there's only one Modern UI browser choice.

March 18, 2014 12:00 AM