Planet Firefox Mobile

October 29, 2014

Kartikaya Gupta

Building a NAS

I've been wanting to build a NAS (network-attached storage) box for a while now, and the ominous creaking noises from the laptop I was previously using as a file server prompted me to finally take action. I wanted to build rather than buy because (a) I wanted more control over the machine and OS, (b) I figured I'd learn something along the way, and (c) I thought it might be cheaper. This blog post documents the decisions and mistakes I made and the problems I ran into.

First step was figuring out the level of data redundancy and storage space I wanted. After reading up on the different RAID levels I figured 4 drives with 3 TB each in a RAID5 configuration would suit my needs for the next few years. I don't have a huge amount of data so the ~9TB of usable space sounded fine, and being able to survive single-drive failures sounded sufficient to me. For all critical data I keep a copy on a separate machine as well.
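For reference, RAID5 uses one drive's worth of space for parity, so an N-drive array gives N - 1 drives of usable capacity:

usable space = (4 - 1) × 3 TB = 9 TB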

I chose to go with software RAID rather than hardware because I've read horror stories of hardware RAID controllers going obsolete, leaving people unable to find a replacement and rendering the data unreadable. That didn't sound good. With an open-source software RAID controller, at least you can get the source code and have a shot at recovering your data if things go bad.

With this in mind I started looking at software options. A bit of searching took me to FreeNAS, which sounded exactly like what I wanted. However, after reading through random threads in the user forums, it seemed like the FreeNAS people are very focused on using ZFS on hardware setups with ECC RAM. From what I gleaned, using ZFS without ECC RAM is a bad idea, because errors in the RAM can cause ZFS to corrupt your data silently and unrecoverably (and worse, to propagate the corruption). A system that makes bad situations worse didn't sound so good to me.

I could have still gone with ZFS with ECC RAM but from some rudimentary searching it sounded like it would increase the cost significantly, and frankly I didn't see the point. So instead I decided to go with NAS4Free (which actually was the original FreeNAS before iXsystems bought the trademark and forked the code) which allows using a UFS file system in a software RAID5 configuration.

So with the software decisions made, it was time to pick hardware. I used this guide by Sam Kear as a starting point and modified a few things here and there. I ended up with this parts list that I mostly ordered from canadadirect.com. (Aside: I wish I had discovered pcpartpicker.com earlier in the process as it would have saved me a lot of time). They shipped things to me in 5 different packages which arrived on 4 different days using 3 different shipping services. Woo! The parts I didn't get from canadadirect.com I picked up at a local Canada Computers store. Then, last weekend, I put it all together.

It's been a while since I've built a box, so I screwed up a few things and had to rewind (twice) to fix them. Assembly took about 3 hours in total; somebody who knew what they were doing could have done it in less than one. I mostly blame the lack of documentation for the chassis, since there were a bunch of different screws and it wasn't obvious which ones I had to use for what. They all worked for mounting the motherboard, but only one kind was actually correct, and using the wrong one meant trouble later.

In terms of hardware compatibility I think my choices were mostly sound, but there were a few hitches. The case and motherboard both support up to 6 SATA drives (I'm using 4, giving me some room to grow). However, the PSU only came with 4 SATA power connectors, which means I'll need to get some adaptors, or maybe a different PSU, if I need to add drives. The other problem was that the chassis comes with three fans (two small ones at the front, one big one at the back) but there was only one chassis fan power connector on the motherboard. I plugged the big fan in, and so far the machine seems to be staying pretty cool, so I'm not too worried. It does seem like a waste to have those extra unused fans, though.

Finally, I booted it up using a monitor/keyboard borrowed from another machine, and ran memtest86 to make sure the RAM was good. It was, so I flashed the NAS4Free LiveUSB onto a USB drive and booted it up. Unfortunately after booting into NAS4Free my keyboard stopped working. I had to disable the USB 3.0 stuff in the BIOS to get around that. I don't really care about having USB 3.0 support on this machine so not a big deal. It took me some time to figure out what installation mode I wanted to use NAS4Free in. I decided to do a full install onto a second USB drive and not have a swap partition (figured hosting swap over USB would be slow and probably unnecessary).

So installing that was easy enough, and I was able to boot into the full NAS4Free install and configure it to have a software RAID5 on the four disks. Things generally seemed OK and I started copying stuff over... and then the box rebooted. It also managed to corrupt my installation somehow, so I had to start over from the LiveUSB stick and re-install. I had saved the config from the first time, so it was easy to get it back up again, and once again I started putting data on there. Again it rebooted, although this time it didn't corrupt my installation. This was getting worrying, particularly since the system log files provided no indication as to what went wrong.

My first suspicion was that the RAID wasn't fully initialized, and that copying data onto it resulted in badness. The array was "rebuilding", and I'm supposed to be able to use it while that happens, but I figured I might as well wait until it was done. Turns out it's going to be rebuilding for the next ~20 days, because RAID5 has to read/write the entire disk to initialize fully, and in the days of multi-terabyte disks this takes forever. So in retrospect perhaps RAID5 was a poor choice for such large disks.

Anyway in order to debug the rebooting, I looked up the FreeBSD kernel debugging documentation, and that requires having a swap partition that the kernel can dump a crash report to. So I reinstalled and set up a swap partition this time. This seemed to magically fix the rebooting problem entirely, so I suspect the RAID drivers just don't deal well when there's no swap, or something. Not an easy situation to debug if it only happens with no swap partition but you need a swap partition to get a kernel dump.

So, things were good, and I started copying more data over and configuring more stuff and so on. The next problem I ran into was that the USB drive to which I had installed NAS4Free started crapping out with read/write errors. This wasn't so great but by this point I'd already reinstalled it about 6 or 7 times, so I reinstalled again onto a different USB stick. The one that was crapping out seems to still work fine in other machines, so I'm not sure what the problem was there. The new one that I used, however, was extremely slow. Things that took seconds on the previous drive took minutes on this one. So I switched again to yet another drive, this time an old 2.5" internal drive that I have mounted in an enclosure through USB.

And finally, after installing the OS at least I've-lost-count-how-many times, I have a NAS that seems stable and appears to work well. To be fair, reinstalling the OS is a pretty painless process, and by the end I could do it in less than 10 minutes from sticking in the LiveUSB to a fully-configured working system. Being able to download the config file (which includes not just the NAS config but also user accounts and so on) makes it easy to restore your system to exactly the way it was. The only additional things I had to do were install a few FreeBSD packages and unpack a tarball into my home directory to get some stuff I wanted. At no point was any of the data on the RAID array itself lost or corrupted, so I'm pretty happy about that.

In conclusion, setup was a bit of a pain, mostly due to unclear documentation and flaky USB drives (or drivers) but now that I have it set up it seems to be working well. If I ever have to do it over I might go for something other than RAID5 just because of the long rebuild time but so far it hasn't been an actual problem.

October 29, 2014 02:24 AM

October 24, 2014

Nick Alexander

Building Fennec with Gradle and IntelliJ: first steps

Developing Fennec with Eclipse has been working well for quite some time now, but Eclipse is officially no longer supported by Google and the new standard is to build with Gradle and to edit in Android Studio or IntelliJ. I’ve got a provisional patch up at Bug 1041395; here is a companion demonstration screencast.

Instructions

./mach build && ./mach package
cd $OBJDIR/mobile/android
./gradlew build

The debug APK will be at $OBJDIR/mobile/android/base/app/build/outputs/apk/app-debug.apk.

The $OBJDIR/mobile/android/gradle directory can be imported into IntelliJ as follows:

  • File > Import Project
  • [select $OBJDIR/mobile/android/gradle]
  • Import project from external model > Gradle
  • [select Use default Gradle wrapper]

When prompted, do not add any files to git. You may need to re-open the project, or restart IntelliJ, to pick up a compiler language-level change.

Technical overview

Caveats

  • The Gradle build will "succeed" but crash on startup if the object directory has not been properly packaged.
  • Changes to preprocessed source code and resources (namely, strings.xml.in and the accompanying DTD files) are not recognized.
  • There’s no support for editing JavaScript.

How the Gradle project is laid out

To the greatest extent possible, the Gradle configuration lives in the source directory. The only Gradle configuration that lives in the object directory is installed when building the mobile/android/gradle directory.

At the time of writing, there are three sub-modules: app, base, and thirdparty.

app is the Fennec wrapper; it generates the org.mozilla.fennec.R resource package. base is the Gecko code; it generates the org.mozilla.gecko.R resource package. Together, app and base address the "two package namespaces" issue that has plagued Fennec from day one.

Due to limitations in the Android Gradle plugin, all test code is shoved into the app module. (The issue is that, at the time of writing, there is no support for test-only APKs.) For no particular reason, the compiled C/C++ libraries are included in the app module; they could be included in the base module. I expect base to be rebuilt slightly more frequently than app, so I'm hoping this choice will allow for faster incremental builds.

thirdparty is the external code we use in Fennec; it’s built as an Android library but uses no resources. It’s separate simply to allow the build system to cache the compiled and pre-dexed artifacts, hopefully allowing for faster incremental builds.

Recursive make backend details

The mobile/android/gradle directory writes the following into $OBJDIR/mobile/android/gradle:

  1. the Gradle wrapper;
  2. gradle.properties;
  3. symlinks to certain source and resource directories.

The Gradle wrapper is written to make it easy to build with Gradle from the object directory. The wrapper is intended to be checked into version control.

gradle.properties is the single source of per-object directory Gradle configuration, and provides the Gradle configuration access to configure/moz.build variables.

The symlinks are not necessary for the Gradle build itself, but they prevent nested directory errors and incorrect Java package scoping when the Gradle project is imported into IntelliJ. Because IntelliJ treats the Gradle project as authoritative, it’s not sufficient to fix these manually in IntelliJ after the initial import — IntelliJ reverts to the Gradle configuration after every build. Since there aren’t many symlinks, I’ve done them in the Makefile rather than at a higher level of abstraction (like a moz.build definition, or a custom build backend). In future, I expect to be able to remove all such symlinks by making our in-tree directory structures agree with what Gradle and IntelliJ expect.

Notes

Many thanks to ckitching for doing the first work on developing Fennec with IntelliJ. Thanks also to mhaigh for championing IntelliJ as the tool of choice for the Fennec front-end team, and to wesj for circulating updated IntelliJ usage notes.

October 24, 2014 05:00 AM

October 23, 2014

James Willcox

MP4 improvements in Firefox for Android

One of the things that has always been a bit of a struggle in Firefox for Android is getting reliable video decoding for H264. For a couple of years, we've been shipping an implementation that went through great heroics in order to use libstagefright directly. While it does work fine in many cases, we consistently get reports of videos not playing, not displaying correctly, or just crashing.

In Android 4.1, Google added the MediaCodec class to the SDK. This provides a blessed interface to the underlying libstagefright API, so presumably it will be far more reliable. This summer, my intern Martin McDonough worked on adding a decoding backend in Firefox for Android that uses this class. I expected him to be able to get something that sort of worked by the end of the internship, but he totally shocked me by having video on the screen inside of two weeks. This included some time spent modifying our JNI bindings generator to work against the Android SDK. You can view Martin's intern presentation on Air Mozilla.

While the API for MediaCodec seems relatively straightforward, there are several details you need to get right or the whole thing falls apart. Martin constantly ran into problems where it would throw IllegalStateException for no apparent reason. There was no error message or other explanation in the exception. This made development pretty frustrating, but he fought through it. It looks like Google has improved both the documentation and the error handling in the API as of Lollipop, so that's good to see.
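For readers who haven't used the API, here's a rough standalone sketch of the basic decode loop, rendering into a Surface. This is illustrative only (not the Firefox backend); real code would also supply the H.264 codec-specific data (SPS/PPS) in the MediaFormat and keep feeding and draining buffers in a loop.

import java.io.IOException;
import java.nio.ByteBuffer;

import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.Surface;

final class H264DecodeSketch {
    // Decode one already-demuxed H.264 sample and render it into `surface`.
    static void decodeOneSample(Surface surface, byte[] sample,
                                long presentationTimeUs,
                                int width, int height) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
        MediaCodec codec = MediaCodec.createDecoderByType("video/avc");
        // Giving MediaCodec a Surface means we never touch the decoded
        // (possibly vendor-specific) YUV data ourselves.
        codec.configure(format, surface, null, 0);
        codec.start();

        int inIndex = codec.dequeueInputBuffer(10000 /* microseconds */);
        if (inIndex >= 0) {
            ByteBuffer buffer = codec.getInputBuffers()[inIndex];
            buffer.clear();
            buffer.put(sample);
            codec.queueInputBuffer(inIndex, 0, sample.length, presentationTimeUs, 0);
        }

        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int outIndex = codec.dequeueOutputBuffer(info, 10000);
        if (outIndex >= 0) {
            // `true` asks MediaCodec to render this frame into the Surface.
            codec.releaseOutputBuffer(outIndex, true);
        }

        codec.stop();
        codec.release();
    }
}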

As Martin wrapped up his internship he was working on handling the video frames as output by the decoder. Ideally you would get some kind of sane YUV variation, but this often is not the case. Qualcomm devices frequently output in their own proprietary format, OMX_QCOM_COLOR_FormatYUV420PackedSemiPlanar64x32Tile2m8ka. You'll notice this doesn't even appear in the list of possibilities according to MediaCodecInfo.CodecCapabilities. It does, however, appear in the OMX headers, along with a handful of other proprietary formats. Great, so Android has this mostly-nice class to decode video, but you can't do anything with the output? Yeah. Kinda. It turns out we actually have code to handle this format for B2G, because we run on QC hardware there, so this specific case had a possible solution. But maybe there is a better way?

I know from my work on supporting Flash on Android that we use a SurfaceTexture there to render video layers from the plugin. It worked really well most of the time. We can use that with MediaCodec too. With this output path we don't ever see the raw data; it goes straight into the Surface attached to the SurfaceTexture. You can then composite it with OpenGL and the crazy format conversions are done by the GPU. Pretty nice! I think handling all the different YUV conversions would've been a huge source of pain, so I was happy to eliminate that entire class of bugs. I imagine the GPU conversions are probably faster, too.

There is one problem with this. Sometimes we need to do something with the video other than composite it onto the screen with OpenGL. One common usage is to draw the video into a canvas (either 2D or WebGL). Now we have a problem, because the only way to get stuff out of the SurfaceTexture (and the attached Surface) is to draw it with OpenGL. Initially, my plan to handle this was to ask the compositor to draw this single SurfaceTexture separately into a temporary FBO, read it back, and give me those bits. It worked, but boy was it ugly. There has to be a better way, right?

There is, but it's still not great. SurfaceTexture, as of Jelly Bean, allows you to attach and detach a GL context. Once attached, the updateTexImage() call updates whatever texture you attached. Detaching frees that texture, and makes the SurfaceTexture able to be attached to another texture (or GL context). My idea was to only attach the compositor to the SurfaceTexture while it was drawing it, and detach after. This would leave the SurfaceTexture able to be consumed by another GL context/texture. For doing the readback, we just attach to a context created specifically for this purpose on the main thread, blit the texture to an FBO, read the pixels, and detach. Performance is not great, as glReadPixels() always seems to be slow on mobile GPUs, but it works. And it doesn't involve IPC to the compositor.

I had to resort to a little hack to make some of this work well, though. Right now there is no way to create a SurfaceTexture in an initially detached state. You must always pass a texture in the constructor, so I pass 0 and then immediately call detachFromGLContext(). Pretty crappy, but it should be relatively safe. I filed an Android bug to request a no-arg constructor for SurfaceTexture more than two years ago, but nothing has happened. I'm not sure why Google even allows people to file stuff, honestly.
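In code, the constructor hack from that last paragraph is tiny. A hypothetical helper (not the actual Firefox code) would be:

import android.graphics.SurfaceTexture;

final class DetachedSurfaceTexture {
    // There is no no-arg constructor, so pass a dummy texture name (0)
    // and immediately detach so the SurfaceTexture starts out unattached.
    static SurfaceTexture create() {
        SurfaceTexture st = new SurfaceTexture(0);
        st.detachFromGLContext();
        return st;
    }
}

The readback path described above then boils down to: attachToGLContext(textureName), updateTexImage() to latch the latest decoded frame, blit that texture into an FBO and glReadPixels(), and finally detachFromGLContext() so the compositor (or another context) can attach again.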

tl;dr: Video decoding should be much better in Firefox for Android as of today's Nightly if you are on Jelly Bean or higher. Please give it a try, especially if you've had problems in the past. Also, file bugs if you have issues!

October 23, 2014 02:00 PM

October 22, 2014

Matt Brubeck

A little randomness for Hacker News

In systems that rely heavily on “most popular” lists, like Reddit or Hacker News, the rich get richer while the poor stay poor. Since most people only look at the top of the charts, anything that’s not already listed has a much harder time being seen. You need visibility to get ratings, and you need ratings to get visibility.

Aggregators try to address this problem by promoting new items as well as popular ones. But this is hard to do effectively. For example, the “new” page at Hacker News gets only a fraction of the front page’s traffic. Most users want to see the best content, not wade through an unfiltered stream of links. Thus, very little input is available to decide which links get promoted to the front page.

As an experiment, I wrote a userscript that uses the Hacker News API to search for new or low-ranked links and randomly insert just one or two of them into the front page. It’s also available as a bookmarklet for those who can’t or don’t want to install the user script.

Install user script (may require a browser extension)

Randomize HN (drag to bookmark bar, or right-click to bookmark)

This gives readers a chance to see and vote on links that they otherwise wouldn’t, without altering their habits or wading through a ton of unfiltered content. Each user will see just one or two links per visit, but thanks to randomness a much larger number of links will be seen by the overall user population. My belief, though I can’t prove it, is that widespread use of this feature would improve the quality of the selection process.
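To make that intuition concrete (a back-of-the-envelope model, not a measurement of real HN traffic): if each of U visitors is shown k links chosen uniformly at random from a pool of N candidate links, the expected number of distinct candidates that get at least one impression is

E[candidates seen at least once] = N × (1 - (1 - k/N)^U)

so even with k = 1 or 2 per visit, a pool of a few hundred candidates gets nearly complete coverage once U is in the thousands.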

The script isn’t perfect (search for FIXME in the source code for some known issues), but it works well enough to try out the idea. Unfortunately, the HN API doesn’t give access to all the data I’d like, and sometimes the script won’t find any suitable links to insert. (You can look at your browser’s console to see which items were randomly inserted.) Ideally, this feature would be built into Hacker News—and any other service that recommends “popular” items.

October 22, 2014 10:00 PM

October 19, 2014

Kartikaya Gupta

Google-free android usage

When I switched from using a BlackBerry to an Android phone a few years ago it really irked me that the only way to keep my contacts info on the phone was to also let Google sync them into their cloud. This may not be true universally (I think some Samsung phones will let you store contacts to the SD card) but it was true for the phone I was using then and is true on the Nexus 4 I'm using now. It took a lot of painful digging through Android source and googling, but I successfully ended up writing a bunch of code to get around this.

I've been meaning to put up the code and post this for a while, but kept procrastinating because the code wasn't generic/pretty enough to publish. It still isn't but it's better to post it anyway in case somebody finds it useful, so that's what I'm doing.

In a nutshell, what I wrote is an Android app that includes (a) an account authenticator, (b) a contacts sync adapter and (c) a calendar sync adapter. On a stock Android phone this will allow you to create an "account" on the device and add contacts/calendar entries to it.

Note that I wrote this to interface with the way I already have my data stored, so the account creation process actually tries to validate the entered credentials against a webhost, and the contacts sync adapter is a working one-way sync adapter that will download contact info from a remote server in vcard format and update the local database. The calendar sync adapter, though, is just a dummy. You're encouraged to rip out the parts that you don't want and use the rest as you see fit. It's mostly meant to be a working example of how this can be accomplished.
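If you want to write your own instead, the core of a sync adapter is quite small. Here's a generic sketch of the standard Android pieces (hypothetical names, not the actual pimple-android code); you still need to expose it from a Service that returns getSyncAdapterBinder(), declare it in the manifest with the sync-adapter metadata, and provide a matching account authenticator.

import android.accounts.Account;
import android.content.AbstractThreadedSyncAdapter;
import android.content.ContentProviderClient;
import android.content.Context;
import android.content.SyncResult;
import android.os.Bundle;

public class ContactsSyncAdapter extends AbstractThreadedSyncAdapter {
    public ContactsSyncAdapter(Context context, boolean autoInitialize) {
        super(context, autoInitialize);
    }

    @Override
    public void onPerformSync(Account account, Bundle extras, String authority,
                              ContentProviderClient provider, SyncResult syncResult) {
        // One-way sync: fetch vcards from the remote server and write them
        // into the local contacts database through `provider`.
    }
}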

The net effect is that you can store contacts and calendar entries on the device so they don't get synced to Google, but you can still use the built-in contacts and calendar apps to manipulate them. This benefits from much better integration with the rest of the OS than if you were to use a third-party contacts or calendar app.

Source code is on Github: staktrace/pimple-android.

October 19, 2014 03:42 AM

October 07, 2014

Lucas Rocha

Probing with Gradle

Up until now, Probe relied on dynamic view proxies generated at runtime to intercept View calls. Although very convenient, this approach greatly affects the time to inflate your layouts—which limits the number of use cases for the library, especially in more complex apps.

This is all changing now with Probe’s brand new Gradle plugin which seamlessly generates build-time proxies for your app. This means virtually no overhead at runtime!

Using Probe’s Gradle plugin is very simple. First, add the Gradle plugin as a dependency in your build script.

buildscript {
    ...
    dependencies {
        ...
        classpath 'org.lucasr.probe:gradle-plugin:0.1.3'
    }
}

Then apply the plugin to your app’s build.gradle.

apply plugin: 'org.lucasr.probe'

Probe’s proxy generation is disabled by default and needs to be explicitly enabled on specific build variants (build type + product flavour). For example, this is how you enable Probe proxies in debug builds.

probe {
    buildVariants {
        debug {
            enabled = true
        }
    }
}

And that’s all! You should now be able to deploy interceptors on any part of your UI. Here’s how you could deploy an OvermeasureInterceptor in an activity.

public final class MainActivity extends Activity {
   @Override
   protected void onCreate(Bundle savedInstanceState) {
       Probe.deploy(this, new OvermeasureInterceptor());
       super.onCreate(savedInstanceState);
       setContentView(R.id.main_activity);
   }
}

While working on this feature, I changed DexMaker to be an optional dependency, i.e. you have to explicitly add DexMaker as a build dependency in your app in order to use it.

This is my first Gradle plugin. There’s definitely a lot of room for improvement here. These features are available in the 0.1.3 release in Maven Central.

As usual, feedback, bug reports, and fixes are very welcome. Enjoy!

October 07, 2014 11:12 PM

September 29, 2014

William Lachance

Using Flexbox in web applications

Over the last few months, I discovered the joy that is CSS Flexbox, which solves the “how do I lay out this set of divs horizontally or vertically?” problem. I’ve used it in three projects so far.

When I talk to people about their troubles with CSS, layout comes up really high on the list. Historically, basic layout problems like a panel of vertical buttons have been ridiculously difficult, involving hacks with floating divs and absolute positioning, or JavaScript layout libraries. This is why people write articles entitled “Give up and use tables”.

Flexbox has pretty much put an end to these problems for me. There’s no longer any need to “give up and use tables” because using flexbox is pretty much just *like* using tables for layout, just with more uniform and predictable behaviour. :) It’s so great. I think we’re pretty close to Flexbox being supported across all the major browsers, so it’s fair to start using it for custom web applications where compatibility with (e.g.) IE8 is not an issue.

To try and spread the word, I wrote up a howto article on using flexbox for web applications on MDN, covering some of the common use cases I mention above. If you’ve been curious about flexbox but unsure how to use it, please have a look.

September 29, 2014 03:07 PM

September 23, 2014

Lucas Rocha

New Features in Picasso

I’ve always been a big fan of Picasso, the Android image loading library by the Square folks. It provides some powerful features with a rather simple API.

Recently, I started working on a set of new features for Picasso that will make it even more awesome: request handlers, request management, and request priorities. These features have all been merged to the main repo now. Let me give you a quick overview of what they enable you to do.

Request Handlers

Picasso supports a wide variety of image sources, from simple resources to content providers, network, and more. Sometimes though, you need to load images in unconventional ways that are not supported by default in Picasso.

Wouldn’t it be nice if you could easily integrate your custom image loading logic with Picasso? That’s what the new request handlers are about. All you need to do is subclass RequestHandler and implement a couple of methods. For example:

public class PonyRequestHandler extends RequestHandler {
    private static final String PONY_SCHEME = "pony";

    @Override public boolean canHandleRequest(Request data) {
        return PONY_SCHEME.equals(data.uri.getScheme());
    }

    @Override public Result load(Request data) {
         return new Result(somePonyBitmap, MEMORY);
    }
}

Then you register your request handler when instantiating Picasso:

Picasso picasso = new Picasso.Builder(context)
    .addRequestHandler(new PonyRequestHandler())
    .build();

Voilà! Now Picasso can handle pony URIs:

picasso.load("pony://somePonyName")
       .into(someImageView);

This pull request also involved rewriting all built-in bitmap loaders on top of the new API. This means you can also override the built-in request handlers if you need to.

Request Management

Even though Picasso handles view recycling, it does so in an inefficient way. For instance, if you do a fling gesture on a ListView, Picasso will still keep triggering and canceling requests blindly because there was no way to make it pause/resume requests according to the user interaction. Not anymore!

The new request management APIs allow you to tag requests that should be managed together. You can then pause, resume, or cancel requests associated with specific tags. The first thing you have to do is tag your requests as follows:

Picasso.with(context)
       .load("http://example.com/image.jpg")
       .tag(someTag)
       .into(someImageView);

Then you can pause and resume requests with this tag based on, say, the scroll state of a ListView. For example, Picasso’s sample app now has the following scroll listener:

public class SampleScrollListener implements AbsListView.OnScrollListener {
    ...
    @Override
    public void onScrollStateChanged(AbsListView view, int scrollState) {
        Picasso picasso = Picasso.with(context);
        if (scrollState == SCROLL_STATE_IDLE ||
            scrollState == SCROLL_STATE_TOUCH_SCROLL) {
            picasso.resumeTag(someTag);
        } else {
            picasso.pauseTag(someTag);
        }
    }
    ...
}

These APIs give you a much finer control over your image requests. The scroll listener is just the canonical use case.
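You can also cancel everything tied to a tag; assuming the cancel counterpart is named cancelTag (matching pauseTag/resumeTag above), cleaning up when a screen goes away would look something like this:

@Override
protected void onDestroy() {
    super.onDestroy();
    // Drop any in-flight requests tagged for this screen.
    Picasso.with(this).cancelTag(someTag);
}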

Request Priorities

It’s very common for images in your Android UI to have different priorities. For instance, you may want to give higher priority to the big hero image in your activity in relation to other secondary images in the same screen.

Up until now, there was no way to hint Picasso about the relative priorities between images. The new priority API allows you to tell Picasso about the intended order of your image requests. You can just do:

Picasso.with(context)
       .load("http://example.com/image.jpg")
       .priority(HIGH)
       .into(someImageView);

These priorities don’t guarantee a specific order; they just tilt the balance towards higher-priority requests.


That’s all for now. Big thanks to Jake Wharton and Dimitris Koutsogiorgas for the prompt code and API reviews!

You can try these new APIs now by fetching the latest Picasso code on Github. These features will probably be available in the 2.4 release. Enjoy!

September 23, 2014 03:52 PM

September 18, 2014

Kartikaya Gupta

Maker Party shout-out

I've blogged before about the power of web scale; about how important it is to ensure that everybody can use the web and to keep it as level of a playing field as possible. That's why I love hearing about announcements like this one: 127K Makers, 2513 Events, 86 Countries, and One Party That Just Won't Quit. Getting more people all around the world to learn about how the web works and keeping that playing field level is one of the reasons I love working at Mozilla. Even though I'm not directly involved in Maker Party, it's great to see projects like this having such a huge impact!

September 18, 2014 03:48 PM

Matt Brubeck

Let's build a browser engine! Part 6: Block layout

Welcome back to my series on building a toy HTML rendering engine:

This article will continue the layout module that we started in Part 5. This time, we’ll add the ability to lay out block boxes. These are boxes that are stacked vertically, such as headings and paragraphs.

To keep things simple, this code implements only normal flow: no floats, no absolute positioning, and no fixed positioning.

Traversing the Layout Tree

The entry point to this code is the layout function, which takes a LayoutBox and calculates its dimensions. We’ll break this function into three cases, and implement only one of them for now:

impl LayoutBox {
    /// Lay out a box and its descendants.
    fn layout(&mut self, containing_block: Dimensions) {
        match self.box_type {
            BlockNode(_) => self.layout_block(containing_block),
            InlineNode(_) => {} // TODO
            AnonymousBlock => {} // TODO
        }
    }

    // ...
}

A block’s layout depends on the dimensions of its containing block. For block boxes in normal flow, this is just the box’s parent. For the root element, it’s the size of the browser window (or “viewport”).

You may remember from the previous article that a block’s width depends on its parent, while its height depends on its children. This means that our code needs to traverse the tree top-down while calculating widths, so it can lay out the children after their parent’s width is known, and traverse bottom-up to calculate heights, so that a parent’s height is calculated after its children’s.

fn layout_block(&mut self, containing_block: Dimensions) {
    // Child width can depend on parent width, so we need to calculate
    // this box's width before laying out its children.
    self.calculate_block_width(containing_block);

    // Determine where the box is located within its container.
    self.calculate_block_position(containing_block);

    // Recursively lay out the children of this box.
    self.layout_block_children();

    // Parent height can depend on child height, so `calculate_height`
    // must be called *after* the children are laid out.
    self.calculate_block_height();
}

This function performs a single traversal of the layout tree, doing width calculations on the way down and height calculations on the way back up. A real layout engine might perform several tree traversals, some top-down and some bottom-up.

Calculating the Width

The width calculation is the first step in the block layout function, and also the most complicated. I’ll walk through it step by step. To start, we need the values of the CSS width property and all the left and right edge sizes:

fn calculate_block_width(&mut self, containing_block: Dimensions) {
    let style = self.get_style_node();

    // `width` has initial value `auto`.
    let auto = Keyword("auto".to_string());
    let mut width = style.value("width").unwrap_or(auto.clone());

    // margin, border, and padding have initial value 0.
    let zero = Length(0.0, Px);

    let mut margin_left = style.lookup("margin-left", "margin", &zero);
    let mut margin_right = style.lookup("margin-right", "margin", &zero);

    let border_left = style.lookup("border-left-width", "border-width", &zero);
    let border_right = style.lookup("border-right-width", "border-width", &zero);

    let padding_left = style.lookup("padding-left", "padding", &zero);
    let padding_right = style.lookup("padding-right", "padding", &zero);

    // ...
}

This uses a helper function called lookup, which just tries a series of values in sequence. If the first property isn’t set, it tries the second one. If that’s not set either, it returns the given default value. This provides an incomplete (but simple) implementation of shorthand properties and initial values.

Note: This is similar to the following code in, say, JavaScript or Ruby:

margin_left = style["margin-left"] || style["margin"] || zero;

Since a child can’t change its parent’s width, it needs to make sure its own width fits the parent’s. The CSS spec expresses this as a set of constraints and an algorithm for solving them. The following code implements that algorithm.

First we add up the margin, padding, border, and content widths. The to_px helper method converts lengths to their numerical values. If a property is set to 'auto', it returns 0 so it doesn’t affect the sum.

let total = [&margin_left, &margin_right, &border_left, &border_right,
             &padding_left, &padding_right, &width].iter().map(|v| v.to_px()).sum();

This is the minimum horizontal space needed for the box. If this isn’t equal to the container width, we’ll need to adjust something to make it equal.
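Written out, the constraint we are trying to satisfy (this is the equality given in the CSS spec) is:

margin-left + border-left + padding-left + width
    + padding-right + border-right + margin-right = width of containing block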

If the width or margins are set to 'auto', they can expand or contract to fit the available space. Following the spec, we first check if the box is too big. If so, we set any expandable margins to zero.

// If width is not auto and the total is wider than the container, treat auto margins as 0.
if width != auto && total > containing_block.width {
    if margin_left == auto {
        margin_left = Length(0.0, Px);
    }
    if margin_right == auto {
        margin_right = Length(0.0, Px);
    }
}

If the box is too large for its container, it overflows the container. If it’s too small, it will underflow, leaving extra space. We’ll calculate the underflow—the amount of extra space left in the container. (If this number is negative, it is actually an overflow.)

let underflow = containing_block.width - total;

We now follow the spec’s algorithm for eliminating any overflow or underflow by adjusting the expandable dimensions. If there are no 'auto' dimensions, we adjust the right margin. (Yes, this means the margin may be negative in the case of an overflow!)

match (width == auto, margin_left == auto, margin_right == auto) {
    // If the values are overconstrained, calculate margin_right.
    (false, false, false) => {
        margin_right = Length(margin_right.to_px() + underflow, Px);
    }

    // If exactly one size is auto, its used value follows from the equality.
    (false, false, true) => { margin_right = Length(underflow, Px); }
    (false, true, false) => { margin_left  = Length(underflow, Px); }

    // If width is set to auto, any other auto values become 0.
    (true, _, _) => {
        if margin_left == auto { margin_left = Length(0.0, Px); }
        if margin_right == auto { margin_right = Length(0.0, Px); }

        if underflow >= 0.0 {
            // Expand width to fill the underflow.
            width = Length(underflow, Px);
        } else {
            // Width can't be negative. Adjust the right margin instead.
            width = Length(0.0, Px);
            margin_right = Length(margin_right.to_px() + underflow, Px);
        }
    }

    // If margin-left and margin-right are both auto, their used values are equal.
    (false, true, true) => {
        margin_left = Length(underflow / 2.0, Px);
        margin_right = Length(underflow / 2.0, Px);
    }
}

At this point, the constraints are met and any 'auto' values have been converted to lengths. The results are the used values for the horizontal box dimensions, which we will store in the layout tree. You can see the final code in layout.rs.

Positioning

The next step is simpler. This function looks up the remaining margin/padding/border styles, and uses these along with the containing block dimensions to determine this block’s position on the page.

fn calculate_block_position(&mut self, containing_block: Dimensions) {
    let style = self.get_style_node();
    let d = &mut self.dimensions;

    // margin, border, and padding have initial value 0.
    let zero = Length(0.0, Px);

    // If margin-top or margin-bottom is `auto`, the used value is zero.
    d.margin.top = style.lookup("margin-top", "margin", &zero).to_px();
    d.margin.bottom = style.lookup("margin-bottom", "margin", &zero).to_px();

    d.border.top = style.lookup("border-top-width", "border-width", &zero).to_px();
    d.border.bottom = style.lookup("border-bottom-width", "border-width", &zero).to_px();

    d.padding.top = style.lookup("padding-top", "padding", &zero).to_px();
    d.padding.bottom = style.lookup("padding-bottom", "padding", &zero).to_px();

    // Position the box below all the previous boxes in the container.
    d.x = containing_block.x +
          d.margin.left + d.border.left + d.padding.left;
    d.y = containing_block.y + containing_block.height +
          d.margin.top + d.border.top + d.padding.top;
}

Take a close look at that last statement, which sets the y position. This is what gives block layout its distinctive vertical stacking behavior. For this to work, we’ll need to make sure the parent’s height is updated after laying out each child.

Children

Here’s the code that recursively lays out the box’s contents. As it loops through the child boxes, it keeps track of the total content height. This is used by the positioning code (above) to find the vertical position of the next child.

fn layout_block_children(&mut self) {
    let d = &mut self.dimensions;
    for child in self.children.iter_mut() {
        child.layout(*d);
        // Track the height so each child is laid out below the previous content.
        d.height = d.height + child.dimensions.margin_box_height();
    }
}

The total vertical space taken up by each child is the height of its margin box, which we calculate by adding up all the vertical dimensions.

impl Dimensions {
    /// Total height of a box including its margins, border, and padding.
    fn margin_box_height(&self) -> f32 {
        self.height + self.padding.top + self.padding.bottom
                    + self.border.top + self.border.bottom
                    + self.margin.top + self.margin.bottom
    }
}

For simplicity, this does not implement margin collapsing. A real layout engine would allow the bottom margin of one box to overlap the top margin of the next box, rather than placing each margin box completely below the previous one.

The ‘height’ Property

By default, the box’s height is equal to the height of its contents. But if the 'height' property is set to an explicit length, we’ll use that instead:

fn calculate_block_height(&mut self) {
    // If the height is set to an explicit length, use that exact length.
    match self.get_style_node().value("height") {
        Some(Length(h, Px)) => { self.dimensions.height = h; }
        _ => {}
    }
}

And that concludes the block layout algorithm. You can now call layout() on a styled HTML document, and it will spit out a bunch of rectangles with widths, heights, margins, etc. Cool, right?

Exercises

Some extra ideas for the ambitious implementer:

  1. Collapsing vertical margins.

  2. Relative positioning.

  3. Parallelize the layout process, and measure the effect on performance.

If you try the parallelization project, you may want to separate the width calculation and the height calculation into two distinct passes. The top-down traversal for width is easy to parallelize just by spawning a separate task for each child. The height calculation is a little trickier, since you need to go back and adjust the y position of each child after its siblings are laid out.

To Be Continued…

Thank you to everyone who’s followed along this far!

These articles are taking longer and longer to write, as I journey further into unfamiliar areas of layout and rendering. There will be a longer hiatus before the next part as I experiment with font and graphics code, but I’ll resume the series as soon as I can.

September 18, 2014 04:30 AM

September 16, 2014

Lucas Rocha

Introducing Probe

We’ve all heard of the best practices regarding layouts on Android: keep your view tree as simple as possible, avoid multi-pass layouts high up in the hierarchy, etc. But the truth is, it’s pretty hard to see what’s actually going on in your view tree in each UI traversal (measure → layout → draw).

We’re well served with developer options for tracking graphics performance—debug GPU overdraw, show hardware layers updates, profile GPU rendering, and others. However, there is a big gap in terms of development tools for tracking layout traversals and figuring out how your layouts actually behave. This is why I created Probe.

Probe is a small library that allows you to intercept view method calls during Android’s layout traversals e.g. onMeasure(), onLayout(), onDraw(), etc. Once a method call is intercepted, you can either do extra things on top of the view’s original implementation or completely override the method on-the-fly.

Using Probe is super simple. All you have to do is implement an Interceptor. Here’s an interceptor that completely overrides a view’s onDraw(). Calling super.onDraw() would call the view’s original implementation.

public class DrawGreen extends Interceptor {
    private final Paint mPaint;

    public DrawGreen() {
        mPaint = new Paint();
        mPaint.setColor(Color.GREEN);
    }

    @Override
    public void onDraw(View view, Canvas canvas) {
        canvas.drawPaint(mPaint);
    }
}
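Assuming the other callbacks follow the same pattern as onDraw() above (the View passed as the first argument, with super delegating to the view's original implementation), an interceptor that adds behaviour on top instead of replacing it might look like this hypothetical measurement logger:

public class LogMeasure extends Interceptor {
    @Override
    public void onMeasure(View view, int widthMeasureSpec, int heightMeasureSpec) {
        // Let the view measure itself as usual, then log the result.
        super.onMeasure(view, widthMeasureSpec, heightMeasureSpec);
        Log.d("Probe", view + " measured to " + view.getMeasuredWidth()
                + "x" + view.getMeasuredHeight());
    }
}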

Then deploy your Interceptor by inflating your layout with a Probe:

Probe probe = new Probe(this, new DrawGreen(), new Filter.ViewId(R.id.view2));
View root = probe.inflate(R.layout.main_activity, null);

Just to give you an idea of the kind of things you can do with Probe, I’ve already implemented a couple of built-in interceptors. OvermeasureInterceptor tints views according to the number of times they got measured in a single traversal, i.e. the equivalent of overdraw but for measurement.

LayoutBoundsInterceptor is equivalent to Android’s “Show layout bounds” developer option. The main difference is that you can show bounds only for specific views.

Under the hood, Probe uses Google’s DexMaker to generate dynamic View proxies during layout inflation. The stock ProxyBuilder implementation was not good enough for Probe because I wanted to avoid using reflection entirely after the proxy classes were generated. So I created a specialized View proxy builder that generates proxy classes tailored for Probe’s use case.

This means Probe takes longer than your usual LayoutInflater to inflate layout resources. There’s no use of reflection after layout inflation though. Your views should perform the same. For now, Probe is meant to be a developer tool only and I don’t recommend using it in production.

The code is available on Github. As usual, contributions are very welcome.

September 16, 2014 10:32 AM

September 15, 2014

William Lachance

mozregression 0.24

I just released mozregression 0.24. This would be a good time to note some of the user-visible fixes / additions that have gone in recently:

  1. Thanks to Sam Garrett, you can now specify a branch other than inbound to get finer-grained regression ranges from. E.g. if you’re pretty sure a regression occurred on fx-team, you can do something like:

    mozregression --inbound-branch fx-team -g 2014-09-13 -b 2014-09-14

  2. Fixed a bug where we could get an incorrect regression range (bug 1059856). Unfortunately the root cause of the bug is still open (it’s a bit tricky to match mozilla-central commits to those of other branches) but I think this most recent fix should make things work in 99.9% of cases. Let me know if I’m wrong.
  3. Thanks to Julien Pagès, we now download the inbound build metadata in parallel, which speeds up inbound bisection quite significantly.

If you know a bit of python, contributing to mozregression is a great way to have a high impact on Mozilla. Many platform developers use this project in their day-to-day work, but there’s still lots of room for improvement.

September 15, 2014 10:02 PM

September 12, 2014

Geoff Brown

Running my own AutoPhone

AutoPhone is a brilliant platform for running automated tests on physical mobile devices.

:bc maintains an AutoPhone instance running startup performance tests (aka “S1/S2 tests” or “throbber start/stop tests”) on a small farm of test phones; those tests run against all of our Firefox for Android builds and results are reported to PhoneDash, available for viewing at http://phonedash.mozilla.org/.

I have used phonedash.mozilla.org for a long time now, and reported regressions in bugs and in my monthly “Performance Check-up” posts, but I have never looked under the covers or tried to use AutoPhone myself — until this week.

All things considered, it is surprisingly easy to set up your own AutoPhone instance and run your own tests. You might want to do this to reproduce phonedash.mozilla.org results on your own computer, or to check for regressions on a feature before check-in.

Here’s what I did to run my own AutoPhone instance running S1/S2 tests against mozilla-inbound builds:

Install AutoPhone:

git clone https://github.com/mozilla/autophone

cd autophone

pip install -r requirements.txt

Install PhoneDash, to store and view results:

git clone https://github.com/markrcote/phonedash

Create a phonedash settings file, phonedash/server/settings.cfg with content:

[database]
SQL_TYPE=sqlite
SQL_DB=yourdb
SQL_SERVER=localhost
SQL_USER=
SQL_PASSWD=

Start phonedash:

python server.py <ip address of your computer>

It will log status messages to the console. Watch that for any errors, and to get a better understanding of what’s happening.

Prepare your device:

Connect your Android phone or tablet to your computer by USB. Multiple devices may be connected. Each device must be rooted. Check that you can see your devices with adb devices — and note the serial number(s) (see devices.ini below).

Configure your device:

cp devices.ini.example devices.ini

Edit devices.ini, changing the serial numbers to your device serial numbers and the device names to something meaningful to you. Here’s my simple devices.ini for one device I called “gbrown”:

[gbrown]
serialno=01498B300600B008

Configure autophone:

cp autophone.ini.example autophone.ini

Edit autophone.ini to make it your own. Most of the defaults are fine; here is mine:

[settings]
#clear_cache = False
#ipaddr = …
#port = 28001
#cachefile = autophone_cache.json
#logfile = autophone.log
loglevel = DEBUG
test_path = tests/manifest.ini
#emailcfg = email.ini
enable_pulse = True
enable_unittests = False
#cache_dir = builds
#override_build_dir = None
repos = mozilla-inbound
#buildtypes = opt
#build_cache_port = 28008
verbose = True

#build_cache_size = 20
#build_cache_expires = 7
#device_ready_retry_wait = 20
#device_ready_retry_attempts = 3
#device_battery_min = 90
#device_battery_max = 95
#phone_retry_limit = 2
#phone_retry_wait = 15
#phone_max_reboots = 3
#phone_ping_interval = 15
#phone_command_queue_timeout = 10
#phone_command_queue_timeout = 1
#phone_crash_window = 30
#phone_crash_limit = 5

python autophone.py -h provides help on options, which are analogues of the autophone.ini settings.

Configure your tests:

Notice that autophone.ini has a test path of tests/manifest.ini. By default, tests/manifest.ini is configured for S1/S2 tests — it points to configs/s1s2_settings.ini. We need to set up that file:

cd configs

cp s1s2_settings.ini.example s1s2_settings.ini

Edit s1s2_settings.ini to make it your own. Here’s mine:

[paths]
#source = files/
#dest = /mnt/sdcard/tests/autophone/s1test/
#profile = /data/local/tmp/profile

[locations]
# test locations can be empty to specify a local
# path on the device or can be a url to specify
# a web server.
local =
remote = http://192.168.0.82:8080/

[tests]
blank = blank.html
twitter = twitter.html

[settings]
iterations = 2
resulturl = http://192.168.0.82:8080/api/s1s2/

[signature]
id =
key =

Be sure to set the resulturl to match your PhoneDash instance.

If running local tests, copy your test files (like blank.html above) to the files directory. If running remote tests, be sure that your test files are served from the resulturl (if using PhoneDash, copy to the html directory).

Start autophone:

python autophone.py --config autophone.ini

With these settings, autophone will listen for new builds on mozilla-inbound, and start tests on your device(s) for each one. You should start to see your device reboot, then Firefox will be installed and startup tests will run. As more builds complete on mozilla-inbound, more tests will run.

autophone.py will print some diagnostics to the console, but much more detail is available in autophone.log — watch that to see what’s happening.

Check your phonedash instance for results — visit http://<ip address of your computer>:8080. At first this won’t have any data, but as autophone runs tests, you’ll start to see results. Here’s my instance after a few hours:

[Screenshot: my PhoneDash instance showing S1/S2 results after a few hours]


September 12, 2014 08:02 PM

September 11, 2014

William Lachance

Hacking on the Treeherder front end: refreshingly easy

Over the past two weeks, I’ve been working a bit on the Treeherder front end (our interface for managing build and test jobs from Mercurial changesets), trying to help get things in shape so that the sheriffs can feel comfortable transitioning to it from tbpl by the end of the quarter.

One thing that has pleasantly surprised me is just how easy it’s been to get going and be productive. The process looks like this on Linux or Mac:


git clone https://github.com/mozilla/treeherder-ui.git
cd treeherder-ui/webapp
./scripts/web-server.js

Then just load http://localhost:8000 in your favorite web browser (Firefox) and you should be good to go (it will load data from the actual Treeherder site). If you want to make modifications to the HTML, JavaScript, or CSS, just go ahead and do so with your favorite editor and the changes will be immediately reflected.

We have a fair backlog of issues to get through, many of them related to the front end. If you’re interested in helping out, please have a look:

https://wiki.mozilla.org/Auto-tools/Projects/Treeherder#Bugs_.26_Project_Tracking

If nothing jumps out at you, please drop by irc.mozilla.org #treeherder and we can probably find something for you to work on. We’re most active during Pacific Time working hours.

September 11, 2014 08:35 PM

September 08, 2014

Matt Brubeck

Let's build a browser engine! Part 5: Boxes

This is the latest in a series of articles about writing a simple HTML rendering engine:

This article will begin the layout module, which takes the style tree and translates it into a bunch of rectangles in a two-dimensional space. This is a big module, so I’m going to split it into several articles. Also, some of the code I share in this article may need to change as I write the code for the later parts.

The layout module’s input is the style tree from Part 4, and its output is yet another tree, the layout tree. This takes us one step further in our mini rendering pipeline:

I’ll start by talking about the basic HTML/CSS layout model. If you’ve ever learned to develop web pages you might be familiar with this already—but it may look a bit different from the implementer’s point of view.

The Box Model

Layout is all about boxes. A box is a rectangular section of a web page. It has a width, a height, and a position on the page. This rectangle is called the content area because it’s where the box’s content is drawn. The content may be text, image, video, or other boxes.

A box may also have padding, borders, and margins surrounding its content area. The CSS spec has a diagram showing how all these layers fit together.

Robinson stores a box’s content area and surrounding areas in the following structure. [Rust note: f32 is a 32-bit floating point type.]

// CSS box model. All sizes are in px.
struct Dimensions {
    // Top left corner of the content area, relative to the document origin:
    x: f32,
    y: f32,

    // Content area size:
    width: f32,
    height: f32,

    // Surrounding edges:
    padding: EdgeSizes,
    border: EdgeSizes,
    margin: EdgeSizes,
}

struct EdgeSizes {
    left: f32,
    right: f32,
    top: f32,
    bottom: f32,
}

Block and Inline Layout

Note: This section contains diagrams that won't make sense if you are reading them without the associated visual styles. If you are reading this in a feed reader, try opening the original page in a regular browser tab. I also included text descriptions for those of you using screen readers or other assistive technologies.

The CSS display property determines which type of box an element generates. CSS defines several box types, each with its own layout rules. I’m only going to talk about two of them: block and inline.

I’ll use this bit of pseudo-HTML to illustrate the difference:

<container>
  <a></a>
  <b></b>
  <c></c>
  <d></d>
</container>

Block boxes are placed vertically within their container, from top to bottom.

a, b, c, d { display: block; }

Description: The diagram below shows four rectangles in a vertical stack.

a
b
c
d

Inline boxes are placed horizontally within their container, from left to right. If they reach the right edge of the container, they will wrap around and continue on a new line below.

a, b, c, d { display: inline; }

Description: The diagram below shows boxes `a`, `b`, and `c` in a horizontal line from left to right, and box `d` in the next line.

a
b
c
d

Each box must contain only block children, or only inline children. When a DOM element contains a mix of block and inline children, the layout engine inserts anonymous boxes to separate the two types. (These boxes are “anonymous” because they aren’t associated with nodes in the DOM tree.)

In this example, the inline boxes b and c are surrounded by an anonymous block box, shown in pink:

a    { display: block; }
b, c { display: inline; }
d    { display: block; }

Description: The diagram below shows three boxes in a vertical stack. The first is labeled `a`; the second contains two boxes in a horizontal row labeled `b` and `c`; the third box in the stack is labeled `d`.

a
b
c
d

Note that content grows vertically by default. That is, adding children to a container generally makes it taller, not wider. Another way to say this is that, by default, the width of a block or line depends on its container’s width, while the height of a container depends on its children’s heights.

This gets more complicated if you override the default values for properties like width and height, and way more complicated if you want to support features like vertical writing.

The Layout Tree

The layout tree is a collection of boxes. A box has dimensions, and it may contain child boxes.

struct LayoutBox<'a> {
    dimensions: Dimensions,
    box_type: BoxType<'a>,
    children: Vec<LayoutBox<'a>>,
}

A box can be a block node, an inline node, or an anonymous block box. (This will need to change when I implement text layout, because line wrapping can cause a single inline node to split into multiple boxes. But it will do for now.)

enum BoxType<'a> {
    BlockNode(&'a StyledNode<'a>),
    InlineNode(&'a StyledNode<'a>),
    AnonymousBlock,
}

To build the layout tree, we need to look at the display property for each DOM node. I added some code to the style module to get the display value for a node. If there’s no specified value it returns the initial value, 'inline'.

enum Display {
    Inline,
    Block,
    DisplayNone,
}

impl StyledNode {
    /// Return the specified value of a property if it exists, otherwise `None`.
    fn value(&self, name: &str) -> Option<Value> {
        self.specified_values.find_equiv(&name).map(|v| v.clone())
    }

    /// The value of the `display` property (defaults to inline).
    fn display(&self) -> Display {
        match self.value("display") {
            Some(Keyword(s)) => match s.as_slice() {
                "block" => Block,
                "none" => DisplayNone,
                _ => Inline
            },
            _ => Inline
        }
    }
}

Now we can walk through the style tree, build a LayoutBox for each node, and then insert boxes for the node’s children. If a node’s display property is set to 'none' then it is not included in the layout tree.

/// Build the tree of LayoutBoxes, but don't perform any layout calculations yet.
fn build_layout_tree<'a>(style_node: &'a StyledNode<'a>) -> LayoutBox<'a> {
    // Create the root box.
    let mut root = LayoutBox::new(match style_node.display() {
        Block => BlockNode(style_node),
        Inline => InlineNode(style_node),
        DisplayNone => panic!("Root node has display: none.")
    });

    // Create the descendant boxes.
    for child in style_node.children.iter() {
        match child.display() {
            Block => root.children.push(build_layout_tree(child)),
            Inline => root.get_inline_container().children.push(build_layout_tree(child)),
            DisplayNone => {} // Skip nodes with `display: none;`
        }
    }
    return root;
}

impl LayoutBox {
    /// Constructor function
    fn new(box_type: BoxType) -> LayoutBox {
        LayoutBox {
            box_type: box_type,
            dimensions: Default::default(), // initially set all fields to 0.0
            children: Vec::new(),
        }
    }
}

If a block node contains an inline child, create an anonymous block box to contain it. If there are several inline children in a row, put them all in the same anonymous container.

impl LayoutBox {
    /// Where a new inline child should go.
    fn get_inline_container(&mut self) -> &mut LayoutBox {
        match self.box_type {
            InlineNode(_) | AnonymousBlock => self,
            BlockNode(_) => {
                // If we've just generated an anonymous block box, keep using it.
                // Otherwise, create a new one.
                match self.children.last() {
                    Some(&LayoutBox { box_type: AnonymousBlock,..}) => {}
                    _ => self.children.push(LayoutBox::new(AnonymousBlock))
                }
                self.children.last_mut().unwrap()
            }
        }
    }
}

This is intentionally simplified in a number of ways from the standard CSS box generation algorithm. For example, it doesn’t handle the case where an inline box contains a block-level child. Also, it generates an unnecessary anonymous box if a block-level node has only inline children.
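
For reference, once the style tree from the previous article is available, this whole stage boils down to a single call (a sketch; `style_root` stands for the output of Part 4's style_tree function):

// Hypothetical wiring: turn the style tree into a layout tree.
// The actual layout calculations come in Part 6.
let layout_root = build_layout_tree(&style_root);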

To Be Continued…

Whew, that took longer than I expected. I think I’ll stop here for now, but don’t worry: Part 6 is coming soon, and will cover block-level layout.

Once block layout is finished, we could jump ahead to the next stage of the pipeline: painting! I think I might do that, because then we can finally see the rendering engine’s output as pretty pictures instead of just numbers.

However, the pictures will just be a bunch of colored rectangles, unless we finish the layout module by implementing inline layout and text layout. If I don’t implement those before moving on to painting, I hope to come back to them afterward.

September 08, 2014 11:16 PM

Lucas Rocha

Introducing dspec

With all the recent focus on baseline grids, keylines, and spacing markers from Android’s material design, I found myself wondering how I could make it easier to check the correctness of my Android UI implementation against the intended spec.

Wouldn’t it be nice if you could easily provide the spec values as input and get it rendered on top of your UI for comparison? Enter dspec, a super simple way to define UI specs that can be rendered on top of Android UIs.

Design specs can be defined either programmatically through a simple API or via JSON files. Specs can define various aspects of the baseline grid, keylines, and spacing markers such as visibility, offset, size, color, etc.

Baseline grid, keylines, and spacing markers in action.

Given the responsive nature of Android UIs, the keylines and spacing markers are positioned in relation to predefined reference points (e.g. left, right, vertical center, etc) instead of absolute offsets.

The JSON files are Android resources which means you can easily adapt the spec according to different form factors e.g. different specs for phones and tablets. The JSON specs provide a simple way for designers to communicate their intent in a computer-readable way.

You can integrate a DesignSpec with your custom views by drawing it in your View’s onDraw(Canvas) method. But the simplest way to draw a spec on top of a view is to enclose it in a DesignSpecFrameLayout, which can take a designSpec XML attribute pointing to the spec resource. For example:

<DesignSpecFrameLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:designSpec="@raw/my_spec">
    ...
</DesignSpecFrameLayout>

I can’t wait to start using dspec in some of the new UI work we’re doing in Firefox for Android now. I hope you find it useful too. The code is available on GitHub. As usual, testing and fixes are very welcome. Enjoy!

September 08, 2014 01:52 PM

September 02, 2014

Geoff Brown

Firefox for Android Performance Measures – August check-up

My monthly review of Firefox for Android performance measurements. This month’s highlights:

 – Eideticker for Android is back!

 – small regression in ts_paint

 – small improvement in tp4m

 – small regression in time to throbber start / stop

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Firefox for Android, for Talos tests run on Android 4.0 Opt. The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcanvasmark

This test is not currently run on Android 4.0.

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

12 (start of period) – 12 (end of period)

There was a temporary regression in this test for much of the month, but it seems to be resolved now.

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.

50000 (start of period) – 50000 (end of period)

tprovider

Performance of history and bookmarks’ provider. Reports time (ms) to perform a group of database operations. Lower values are better.

520 (start of period) – 520 (end of period).

tsvgx

An svg-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode thus reflecting the maximum rendering throughput of each test. The reported value is the page load time, or, for animations/iterations – overall duration the sequence/animation took to complete. Lower values are better.

6300 (start of period) – 6300 (end of period).

tp4m

Generic page load test. Lower values are better.

940 (start of period) – 850 (end of period).

Improvement noted around August 21.

ts_paint

Startup performance test. Lower values are better.

3650 (start of period) – 3850 (end of period).

Note the slight regression around August 12, and perhaps another around August 27 – bug 1061878.

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

Note the regression in time to throbber start around August 14 — bug 1056176.

The same regression, less pronounced, is seen in time to throbber stop.

Eideticker

Eideticker for Android is back after a long rest – yahoo!!

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker


September 02, 2014 07:49 PM

August 25, 2014

Matt Brubeck

Let's build a browser engine! Part 4: Style

Welcome back to my series on building your own toy browser engine. If you’re just tuning in, you can find the previous episodes here:

This article will cover what the CSS standard calls assigning property values, or what I call the style module. This module takes DOM nodes and CSS rules as input, and matches them up to determine the value of each CSS property for any given node.

This part doesn’t contain a lot of code, since I’ve left out all the really complicated parts. However, I think what’s left is still quite interesting, and I’ll also explain how some of the missing pieces can be implemented.

The Style Tree

The output of robinson’s style module is something I call the style tree. Each node in this tree includes a pointer to a DOM node, plus its CSS property values:

/// Map from CSS property names to values.
type PropertyMap = HashMap<String, Value>;

/// A node with associated style data.
struct StyledNode<'a> {
    node: &'a Node, // pointer to a DOM node
    specified_values: PropertyMap,
    children: Vec<StyledNode<'a>>,
}

What’s with all the 'a stuff? These are lifetime annotations, part of how Rust guarantees that pointers are memory-safe without requiring garbage collection. If you are not working in Rust you can safely ignore them; they aren’t critical to the meaning of this code.

We could add style information directly to the dom::Node struct from Part 1 instead, but I wanted to keep this code out of the earlier “lessons.” This is also a good opportunity to talk about the parallel trees that inhabit most layout engines.

A browser engine module often takes one tree as input, and produces a different but related tree as output. For example, Gecko’s layout code takes a DOM tree and produces a frame tree, which is then used to build a view tree. Blink and WebKit transform the DOM tree into a render tree. Later stages in all these engines produce still more trees, including layer trees and widget trees.

The pipeline for our toy browser engine will look something like this after we complete a few more stages:

In my implementation, each node in the DOM tree produces exactly one node in the style tree. But in a more complicated pipeline stage, several input nodes could collapse into a single output node. Or one input node might expand into several output nodes, or be skipped completely. For example, the style tree could exclude elements whose display property is set to 'none'. (Instead this will happen in the layout stage, because my code turned out a bit simpler that way.)

Selector Matching

The first step in building the style tree is selector matching. This will be very easy, since my CSS parser supports only simple selectors. You can tell whether a simple selector matches an element just by looking at the element itself. Matching compound selectors would require traversing the DOM tree to look at the element’s siblings, parents, etc.

fn matches(elem: &ElementData, selector: &Selector) -> bool {
    match *selector {
        Simple(ref simple_selector) => matches_simple_selector(elem, simple_selector)
    }
}

To help, we’ll add some convenient ID and class accessors to our DOM element type. The class attribute can contain multiple class names separated by spaces, which we return in a hash table. [Note: The Rust types below look a bit hairy because we are passing around pointers rather than copying values. This code should be a lot more concise in languages that are not so concerned with this distinction.]

impl ElementData {
    fn get_attribute<'a>(&'a self, key: &str) -> Option<&'a String> {
        self.attributes.find_equiv(&key)
    }

    fn id<'a>(&'a self) -> Option<&'a String> {
        self.get_attribute("id")
    }

    fn classes<'a>(&'a self) -> HashSet<&'a str> {
        match self.get_attribute("class") {
            Some(classlist) => classlist.as_slice().split(' ').collect(),
            None => HashSet::new()
        }
    }
}

To test whether a simple selector matches an element, just look at each selector component, and return false if the element doesn’t have a matching class, ID, or tag name.

fn matches_simple_selector(elem: &ElementData, selector: &SimpleSelector) -> bool {
    // Check type selector
    if selector.tag_name.iter().any(|name| elem.tag_name != *name) {
        return false;
    }

    // Check ID selector
    if selector.id.iter().any(|id| elem.id() != Some(id)) {
        return false;
    }

    // Check class selectors
    let elem_classes = elem.classes();
    if selector.class.iter().any(|class| !elem_classes.contains(&class.as_slice())) {
        return false;
    }

    // We didn't find any non-matching selector components.
    return true;
}

Rust note: This function uses the any method, which returns true if an iterator contains an element that passes the provided test. This is the same as the any function in Python (or Haskell), or the some method in JavaScript.
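
For example (illustration only, not part of robinson):

// `any` returns true as soon as one element passes the test.
assert!([1, 2, 3].iter().any(|&n| n > 2));
assert!(![1, 2, 3].iter().any(|&n| n > 5));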

Building the Style Tree

Next we need to traverse the DOM tree. For each element in the tree, we will search the stylesheet for matching rules.

When comparing two rules that match the same element, we need to use the highest-specificity selector from each match. Because our CSS parser stores the selectors from most- to least-specific, we can stop as soon as we find a matching one, and return its specificity along with a pointer to the rule.

/// A single CSS rule and the specificity of its most specific matching selector.
type MatchedRule<'a> = (Specificity, &'a Rule);

/// If `rule` matches `elem`, return a `MatchedRule`. Otherwise return `None`.
fn match_rule<'a>(elem: &ElementData, rule: &'a Rule) -> Option<MatchedRule<'a>> {
    // Find the first (highest-specificity) matching selector.
    rule.selectors.iter().find(|selector| matches(elem, *selector))
        .map(|selector| (selector.specificity(), rule))
}

To find all the rules that match an element we call filter_map, which does a linear scan through the style sheet, checking every rule and throwing out ones that don’t match. A real browser engine would speed this up by storing the rules in multiple hash tables based on tag name, id, class, etc.

/// Find all CSS rules that match the given element.
fn matching_rules<'a>(elem: &ElementData, stylesheet: &'a Stylesheet) -> Vec<MatchedRule<'a>> {
    stylesheet.rules.iter().filter_map(|rule| match_rule(elem, rule)).collect()
}

Once we have the matching rules, we can find the specified values for the element. We insert each rule’s property values into a HashMap. We sort the matches by specificity, so the higher specificity rules are processed after the lower ones and can overwrite their values in the HashMap.

/// Apply styles to a single element, returning the specified values.
fn specified_values(elem: &ElementData, stylesheet: &Stylesheet) -> PropertyMap {
    let mut values = HashMap::new();
    let mut rules = matching_rules(elem, stylesheet);

    // Go through the rules from lowest to highest specificity.
    rules.sort_by(|&(a, _), &(b, _)| a.cmp(&b));
    for &(_, rule) in rules.iter() {
        for declaration in rule.declarations.iter() {
            values.insert(declaration.name.clone(), declaration.value.clone());
        }
    }
    return values;
}

Now we have everything we need to walk through the DOM tree and build the style tree. Note that selector matching works only on elements, so the specified values for a text node are just an empty map.

/// Apply a stylesheet to an entire DOM tree, returning a StyledNode tree.
pub fn style_tree<'a>(root: &'a Node, stylesheet: &'a Stylesheet) -> StyledNode<'a> {
    StyledNode {
        node: root,
        specified_values: match root.node_type {
            Element(ref elem) => specified_values(elem, stylesheet),
            Text(_) => HashMap::new()
        },
        children: root.children.iter().map(|child| style_tree(child, stylesheet)).collect(),
    }
}

That’s all of robinson’s code for building the style tree. Next I’ll talk about some glaring omissions.
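
For context, the pipeline so far can be driven roughly like this (a sketch: html::parse is the entry point shown in Part 2, while css::parse is an assumption about the css module's public interface):

// Hypothetical wiring of Parts 2 through 4.
let dom_root = html::parse(html_source);        // Part 2: HTML -> DOM tree
let stylesheet = css::parse(css_source);        // Part 3: CSS -> Stylesheet (assumed entry point)
let style_root = style_tree(&dom_root, &stylesheet);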

The Cascade

Style sheets provided by the author of a web page are called author style sheets. In addition to these, browsers also provide default styles via user agent style sheets. And they may allow users to add custom styles through user style sheets (like Gecko’s userContent.css).

The cascade defines which of these three “origins” takes precedence over another. There are six levels to the cascade: one for each origin’s “normal” declarations, plus one for each origin’s !important declarations.

Robinson’s style code does not implement the cascade; it uses only a single style sheet. The lack of a default style sheet means that HTML elements will not have any of the default styles you might expect. For example, the <head> element’s contents will not be hidden unless you explicitly add this rule to your style sheet:

head { display: none; }

Implementing the cascade should be fairly easy: Just track the origin of each rule, and sort declarations by origin and importance in addition to specificity. A simplified, two-level cascade should be enough to support the most common cases: normal user agent styles and normal author styles.
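
As a sketch of that idea (none of this is robinson code; the enum and the sort key are assumptions):

// Hypothetical: record where each rule came from, then sort matched rules
// by (origin, specificity) instead of specificity alone, so that author
// declarations override user agent declarations of equal specificity.
enum Origin {
    UserAgent,
    Author,
}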

Computed Values

In addition to the “specified values” mentioned above, CSS defines initial, computed, used, and actual values.

Initial values are defaults for properties that aren’t specified in the cascade. Computed values are based on specified values, but may have some property-specific normalization rules applied.

Implementing these correctly requires separate code for each property, based on its definition in the CSS specs. This work is necessary for a real-world browser engine, but I’m hoping to avoid it in this toy project. In later stages, code that uses these values will (sort of) simulate initial values by using a default when the specified value is missing.
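
In other words, later code might reach for a helper like this (a sketch; value_or is my name, not robinson's, and it assumes a `value` accessor that returns the specified value when one exists):

impl StyledNode {
    /// Hypothetical: the specified value of `name`, or `default` when the
    /// cascade produced nothing (a stand-in for a real initial value).
    fn value_or(&self, name: &str, default: Value) -> Value {
        self.value(name).unwrap_or(default)
    }
}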

Used values and actual values are calculated during and after layout, which I’ll cover in future articles.

Inheritance

If text nodes can’t match selectors, how do they get colors and fonts and other styles? Through the magic of inheritance.

When a property is inherited, any node without a cascaded value will receive its parent’s value for that property. Some properties, like 'color', are inherited by default; others only if the cascade specifies the special value 'inherit'.

My code does not support inheritance. To implement it, you could pass the parent’s style data into the specified_values function, and use a hard-coded lookup table to decide which properties should be inherited.
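
For example (a sketch only; the property list is a made-up subset):

/// Hypothetical table of properties that inherit by default.
fn is_inherited(name: &str) -> bool {
    match name {
        "color" | "font-family" | "font-size" | "line-height" => true,
        _ => false,
    }
}

// specified_values could then take the parent's PropertyMap, copy over the
// inherited entries first, and let the matched rules overwrite them.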

Style Attributes

Any HTML element can include a style attribute containing a list of CSS declarations. There are no selectors, because these declarations automatically apply only to the element itself.

<span style="color: red; background: yellow;">

If you want to support the style attribute, make the specified_values function check for the attribute. If the attribute is present, pass it to parse_declarations from the CSS parser. Apply the resulting declarations after the normal author declarations, since the attribute is more specific than any CSS selector.
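
Sketched out, the extra step might look roughly like this (the signature of parse_declarations is an assumption; treat this as a shape, not working robinson code):

/// Hypothetical: merge declarations from a style attribute into `values`.
/// Assumes `parse_declarations(text: String) -> Vec<Declaration>` exists as
/// a convenience wrapper around the CSS parser.
fn apply_style_attribute(elem: &ElementData, values: &mut PropertyMap) {
    match elem.get_attribute("style") {
        Some(css_text) => {
            for declaration in parse_declarations(css_text.clone()).iter() {
                values.insert(declaration.name.clone(), declaration.value.clone());
            }
        }
        None => {}
    }
}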

Exercises

In addition to writing your own selector matching and value assignment code, for further exercise you can implement one or more of the missing pieces discussed above, in your own project or a fork of robinson:

  1. Cascading
  2. Initial and/or computed values
  3. Inheritance
  4. The style attribute

Also, if you extended the CSS parser from Part 3 to include compound selectors, you can now implement matching for those compound selectors.

To Be Continued…

Part 5 will introduce the layout module. I haven’t finished the code for this yet, so there will be another delay before I can start writing the article. I plan to split layout into at least two articles (one for block layout and one for inline layout, probably).

In the meantime, I’d love to see anything you’ve created based on these articles or exercises. If your code is online somewhere, feel free to add a link to the comments below! So far I have seen Martin Tomasi’s Java implementation and Pohl Longsine’s Swift version.

August 25, 2014 10:45 PM

August 23, 2014

Chris Kitching

Preventing mercurial from eating itself

Mercurial doesn’t seem to handle being interrupted particularly well. That, coupled with my tendency to hit ^C when I do something stupid, leads to Mercurial ending up in an inconsistent state about once a week, causing me to manually restore my patch queue state with judicious use of `strip` (or otherwise).

I’ve just about had enough of this. A simple hack (that hopefully others will find useful) is to alias `hg` to a script containing:

hg "$@" &
wait $!

exit $?

It’s a sort of evil hack, but it does the job (provided you don’t kill the terminal you’re typing in; then again, I have yet to meet someone who suffers from reflexively executing `exit`). Now you can ^C Mercurial all you want and it will blithely ignore you. This seems preferable to it half-doing something, throwing a tantrum, and eating itself…


August 23, 2014 12:41 AM

August 18, 2014

Chris Kitching

A novel (well, not really) way of optimising nine-patches

While tinkering with shrinking Fennec’s apk size, I came across a potential optimisation for nine-patches that’s oddly missing from Android’s resource preprocessing steps.

 

There’s a pretty decent explanation of what nine-patches are over here.

 

When a nine-patch contains multiple scalable regions along an axis, the system guarantees to maintain their relative size (though no promises are made about aspect ratio of the entire image).

In the simpler (and more common) case where only a single scalable region exists along an axis, if all the columns (or rows) are identical, they may safely be collapsed to a single column (or row). The renderer will happily later upscale this single-pixel region as much as necessary with identical results.

Apparently this isn’t something that’s done automagically during the packaging step, so I’ve written a neat (and slightly badly-structured) utility for performing this optimisation, available here:

https://bitbucket.org/ckitching/shrinkninepatch

It takes a list of .9.png files as input and overwrites them with optimised versions (only when the optimisation is found to be safe). It does not currently perform any optimisation where multiple scalable regions exist, even though it should be safe to do so in some circumstances (when the regions each consist of homogeneous rows/columns and their sizes are reduced in a way that preserves their relative sizes).

Note that this program doesn’t preserve PNG indexes, so if you feed it an indexed png you’ll get an unindexed ARGB image as the result (causing the file to grow in size). Simply repeat your png quantisation step to obtain your optimised image.


August 18, 2014 08:30 AM

August 15, 2014

William Lachance

A new meditation app

I had some time on my hands two weekends ago and was feeling a bit of an itch to build something, so I decided to do a project I’ve had in the back of my head for a while: a meditation timer.

If you’ve been following this log, you’d know that meditation has been a pretty major interest of mine for the past year. The foundation of my practice is a daily round of seated meditation at home, where I have been attempting to follow the breath and generally try to connect with the world for a set period every day (usually varying between 10 and 30 minutes, depending on how much of a rush I’m in).

Clock watching is rather distracting while sitting so having a tool to notify you when a certain amount of time has elapsed is quite useful. Writing a smartphone app to do this is an obvious idea, and indeed approximately a zillion of these things have been written for Android and iOS. Unfortunately, most are not very good. Really, I just want something that does this:

  1. Select a meditation length (somewhere between 10 and 40 minutes).
  2. Sound a bell after a short preparation to demarcate the beginning of meditation.
  3. While the meditation period is ongoing, do a countdown of the time remaining (not strictly required, but useful for peace of mind in case you’re wondering whether you’ve really only sat for 25 minutes).
  4. Sound a bell when the meditation ends.

Yes, meditation can get more complex than that. In Zen practice, for example, sometimes you have several periods of varying length, broken up with kinhin (walking meditation). However, that mostly happens in the context of a formal setting (e.g. a Zendo) where you leave your smartphone at the door. Trying to shoehorn all that into an app needlessly complicates what should be simple.

Even worse are the apps which “chart” your progress or have other gimmicks to connect you to a virtual “community” of meditators. I have to say I find that kind of stuff really turns me off. Meditation should be about connecting with reality in a more fundamental way, not charting gamified statistics or interacting online. We already have way too much of that going on elsewhere in our lives without adding even more to it.

So, you might ask why the alarm feature of most clock apps isn’t sufficient? Really, it is most of the time. A specialized app can make selecting the interval slightly more convenient and we can preselect an appropriate bell sound up front. It’s also nice to hear something to demarcate the start of a meditation session. But honestly I didn’t have much of a reason to write this other than the fact that I could. Outside of work, I’ve been in a bit of a creative rut lately and felt like I needed to build something, anything, and put it out into the world (even if it’s tiny and only a very incremental improvement over what’s out there already). So here it is:

Screenshot: the meditation timer.

The app was written entirely in HTML5 so it should work fine on pretty much any reasonably modern device, desktop or mobile. I tested it on my Nexus 5 (Chrome, Firefox for Android)[1], FirefoxOS Flame, and on my laptop (Chrome, Firefox, Safari). It lives on a subdomain of this site or you can grab it from the Firefox Marketplace if you’re using some variant of Firefox (OS). The source, such as it is, can be found on github.

I should acknowledge taking some design inspiration from the Mind application for iOS, which has a similarly minimalistic take on things. Check that out too if you have an iPhone or iPad!

Happy meditating!

[1] Note that there isn’t a way to inhibit the screen/device from going to sleep with these browsers, which means that you might miss the ending bell. On FirefoxOS, I used the requestWakeLock API to make sure that doesn’t happen. I filed a bug to get this implemented on Firefox for Android.

August 15, 2014 02:02 AM

August 14, 2014

Sriram Ramasubramanian

Multiple Text Layout

The pretty basic unit for developing UI in Android is a View. But if we look closely, View is a UI widget that provides user interaction. It is composed of Drawables and text Layouts. We see drawables everywhere, starting with the background of a View. TextView has compound drawables too. However, TextView has only one layout. Is it possible to have more than one text layout in a View/TextView?

Let’s take an example. We have a simple ListView with each row having an image, text and some sub-text. Since TextView shows only one text Layout by default, we would need a LinearLayout with 2 or 3 views (2 TextViews in them) to achieve this layout. What if TextView could hold one more text layout? It’s just a private variable that can be created and drawn on the canvas. Even if it can hold and draw it, how would we be able to let TextView’s original layout account for this layout?

If we look at TextView’s onMeasure() closely, the available width for the layout accounts for the space occupied by the compound drawables. If we make TextView account for a larger compound drawable space on the right, the layout will constrain itself more. Now that the space is carved out, we can draw the layout in that space.

    private Layout mSubTextLayout;

    @Override
    public int getCompoundPaddingRight() {
        // Assumption: the layout has only one line.
        return super.getCompoundPaddingRight() + (int) mSubTextLayout.getLineWidth(0);
    }

Now we need to create a layout for the sub-text and draw it. Ideally it’s not good to create new objects inside onMeasure(). But if we take care of when and how we create the layouts, we don’t have to worry about this restriction. And what different kinds of Layouts can we create? TextView allows creating a BoringLayout, a StaticLayout or a DynamicLayout. BoringLayout can be used if the text is only a single line. StaticLayout is for multi-line layouts that cannot be changed after creation. DynamicLayout is for editable text, like in an EditText.

    @Override
    public void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        int width = MeasureSpec.getSize(widthMeasureSpec);

        // Create a layout for sub-text.
        mSubTextLayout = new StaticLayout(
                mSubText,
                mPaint,
                width,
                Alignment.ALIGN_NORMAL,
                1.0f,
                0.0f,
                true);

        // TextView doesn't know about mSubTextLayout.
        // It calculates the space using compound drawables' sizes.
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);
    }

The mPaint used here has all the attributes for the sub-text — like text color, shadow, text-size, etc. This is what determines the size used for a text layout.

    @Override
    public void onDraw(Canvas canvas) {
        // Do the default draw.
        super.onDraw(canvas);

        // Calculate the place to show the sub-text
        // using the padding, available width, height and
        // the sub-text width and height.
        // Note: The 'right' padding to use here is 'super.getCompoundPaddingRight()'
        // as we have faked the actual value.

        // Draw the sub-text.
        mSubTextLayout.draw(canvas);
    }

But hey, can’t we just use a Spannable text? Well… what if the name is really long and runs into multiple lines or needs to be ellipsized?

With this, we use the same TextView to draw two layouts, and that has helped us remove two Views! Happy hacking! ;)

P.S: The icons are from: http://www.tutorial9.net/downloads/108-mono-icons-huge-set-of-minimal-icons/


August 14, 2014 08:41 AM

August 13, 2014

Matt Brubeck

Let's build a browser engine! Part 3: CSS

This is the third in a series of articles on building a toy browser rendering engine. Want to build your own? Start at the beginning to learn more:

This article introduces code for reading Cascading Style Sheets (CSS). As usual, I won’t try to cover everything in the spec. Instead, I tried to implement just enough to illustrate some concepts and produce input for later stages in the rendering pipeline.

Anatomy of a Stylesheet

Here’s an example of CSS source code:

h1, h2, h3 { margin: auto; color: #cc0000; }
div.note { margin-bottom: 20px; padding: 10px; }
#answer { display: none; }

Next I’ll walk through the css module from my toy browser engine, robinson. The code is written in Rust, though the concepts should translate pretty easily into other programming languages. Reading the previous articles first might help you understand some of the code below.

A CSS stylesheet is a series of rules. (In the example stylesheet above, each line contains one rule.)

struct Stylesheet {
    rules: Vec<Rule>,
}

A rule includes one or more selectors separated by commas, followed by a series of declarations enclosed in braces.

struct Rule {
    selectors: Vec<Selector>,
    declarations: Vec<Declaration>,
}

A selector can be a simple selector, or it can be a chain of selectors joined by combinators. Robinson supports only simple selectors for now.

Note: Confusingly, the newer Selectors Level 3 standard uses the same terms to mean slightly different things. In this article I’ll mostly refer to CSS2.1. Although outdated, it’s a useful starting point because it’s smaller and more self-contained than CSS3 (which is split into myriad specs that reference both each other and CSS2.1).

In robinson, a simple selector can include a tag name, an ID prefixed by '#', any number of class names prefixed by '.', or some combination of the above. If the tag name is empty or '*' then it is a “universal selector” that can match any tag.

There are many other types of selector (especially in CSS3), but this will do for now.

enum Selector {
    Simple(SimpleSelector),
}

struct SimpleSelector {
    tag_name: Option<String>,
    id: Option<String>,
    class: Vec<String>,
}

A declaration is just a name/value pair, separated by a colon and ending with a semicolon. For example, "margin: auto;" is a declaration.

struct Declaration {
    name: String,
    value: Value,
}

My toy engine supports only a handful of CSS’s many value types.

enum Value {
    Keyword(String),
    Color(u8, u8, u8, u8), // RGBA
    Length(f32, Unit),
    // insert more values here
}

enum Unit { Px, /* insert more units here */ }

Rust note: u8 is an 8-bit unsigned integer, and f32 is a 32-bit float.

All other CSS syntax is unsupported, including @-rules, comments, and any selectors/values/units not mentioned above.
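
One small convenience that later stages will probably want is a way to read a Length as pixels. Here’s a sketch (robinson may define this differently or elsewhere):

impl Value {
    /// Return the size of a length in px, or zero for non-lengths.
    /// (A simplification, but robinson only has one unit anyway.)
    fn to_px(&self) -> f32 {
        match *self {
            Length(f, Px) => f,
            _ => 0.0
        }
    }
}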

Parsing

CSS has a regular grammar, making it easier to parse correctly than its quirky cousin HTML. When a standards-compliant CSS parser encounters a parse error, it discards the unrecognized part of the stylesheet but still processes the remaining portions. This is useful because it allows stylesheets to include new syntax but still produce well-defined output in older browsers.

Robinson uses a very simplistic (and totally not standards-compliant) parser, built the same way as the HTML parser from Part 2. Rather than go through the whole thing line-by-line again, I’ll just paste in a few snippets. For example, here is the code for parsing a single selector:

/// Parse one simple selector, e.g.: `type#id.class1.class2.class3`
fn parse_simple_selector(&mut self) -> SimpleSelector {
    let mut selector = SimpleSelector { tag_name: None, id: None, class: Vec::new() };
    while !self.eof() {
        match self.next_char() {
            '#' => {
                self.consume_char();
                selector.id = Some(self.parse_identifier());
            }
            '.' => {
                self.consume_char();
                selector.class.push(self.parse_identifier());
            }
            '*' => {
                // universal selector
                self.consume_char();
            }
            c if valid_identifier_char(c) => {
                selector.tag_name = Some(self.parse_identifier());
            }
            _ => break
        }
    }
    return selector;
}

Note the lack of error checking. Some malformed input like ### or *foo* will parse successfully and produce weird results. A real CSS parser would discard these invalid selectors.

Specificity

Specificity is one of the ways a rendering engine decides which style overrides the other in a conflict. If a stylesheet contains two rules that match an element, the rule with the matching selector of higher specificity can override values from the one with lower specificity.

The specificity of a selector is based on its components. An ID selector is more specific than a class selector, which is more specific than a tag selector. Within each of these “levels,” more selectors beats fewer.

pub type Specificity = (uint, uint, uint);

impl Selector {
    pub fn specificity(&self) -> Specificity {
        // http://www.w3.org/TR/selectors/#specificity
        let Simple(ref simple) = *self;
        let a = simple.id.iter().len();
        let b = simple.class.len();
        let c = simple.tag_name.iter().len();
        (a, b, c)
    }
}

(If we supported chained selectors, we could calculate the specificity of a chain just by adding up the specificities of its parts.)
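
Since Rust compares tuples lexicographically, the Specificity ordering already behaves the way CSS wants. For example:

// One ID outranks any number of classes or tag names, and one class
// outranks any number of tag names.
assert!((1, 0, 0) > (0, 2, 3));
assert!((0, 1, 0) > (0, 0, 7));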

The selectors for each rule are stored in a sorted vector, most-specific first. This will be important in matching, which I’ll cover in the next article.

/// Parse a rule set: `<selectors> { <declarations> }`.
fn parse_rule(&mut self) -> Rule {
    Rule {
        selectors: self.parse_selectors(),
        declarations: self.parse_declarations()
    }
}

/// Parse a comma-separated list of selectors.
fn parse_selectors(&mut self) -> Vec<Selector> {
    let mut selectors = Vec::new();
    loop {
        selectors.push(Simple(self.parse_simple_selector()));
        self.consume_whitespace();
        match self.next_char() {
            ',' => { self.consume_char(); self.consume_whitespace(); }
            '{' => break, // start of declarations
            c   => panic!("Unexpected character {} in selector list", c)
        }
    }
    // Return selectors with highest specificity first, for use in matching.
    selectors.sort_by(|a,b| b.specificity().cmp(&a.specificity()));
    return selectors;
}

The rest of the CSS parser is fairly straightforward. You can read the whole thing on GitHub. And if you didn’t already do it for Part 2, this would be a great time to try out a parser generator. My hand-rolled parser gets the job done for simple example files, but it has a lot of hacky bits and will fail badly if you violate its assumptions. Eventually I hope to replace it with one built on rust-peg or similar.
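
To give a flavour of what’s in the repository, the declaration-list parser looks roughly like this (a sketch built from the same helper methods as the selector parser; the actual code may differ in details):

/// Parse a rule body: `{ <declaration>; <declaration>; ... }`.
fn parse_declarations(&mut self) -> Vec<Declaration> {
    assert!(self.consume_char() == '{');
    let mut declarations = Vec::new();
    loop {
        self.consume_whitespace();
        if self.next_char() == '}' {
            self.consume_char();
            break;
        }
        // parse_declaration (not shown) reads one `name: value;` pair.
        declarations.push(self.parse_declaration());
    }
    return declarations;
}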

Exercises

As before, you should decide which of these exercises you want to do, and skip the rest:

  1. Implement your own simplified CSS parser and specificity calculation.

  2. Extend robinson’s CSS parser to support more values, or one or more selector combinators.

  3. Extend the CSS parser to discard any declaration that contains a parse error, and follow the error handling rules to resume parsing after the end of the declaration.

  4. Make the HTML parser pass the contents of any <style> nodes to the CSS parser, and return a Document object that includes a list of Stylesheets in addition to the DOM tree.

Shortcuts

Just like in Part 2, you can skip parsing by hard-coding CSS data structures directly into your program, or by writing them in an alternate format like JSON that you already have a parser for.

To Be Continued…

The next article will introduce the style module. This is where everything starts to come together, with selector matching to apply CSS styles to DOM nodes.

The pace of this series might slow down soon, since I’ll be busy later this month and I haven’t even written the code for some of the upcoming articles. I’ll keep them coming as fast as I can!

August 13, 2014 07:30 PM

August 11, 2014

Matt Brubeck

Let's build a browser engine! Part 2: HTML

This is the second in a series of articles on building a toy browser rendering engine:

This article is about parsing HTML source code to produce a tree of DOM nodes. Parsing is a fascinating topic, but I don’t have the time or expertise to give it the introduction it deserves. You can get a detailed introduction to parsing from any good course or book on compilers. Or get a hands-on start by going through the documentation for a parser generator that works with your chosen programming language.

HTML has its own unique parsing algorithm. Unlike parsers for most programming languages and file formats, the HTML parsing algorithm does not reject invalid input. Instead it includes specific error-handling instructions, so web browsers can agree on how to display every web page, even ones that don’t conform to the syntax rules. Web browsers have to do this to be usable: Since non-conforming HTML has been supported since the early days of the web, it is now used in a huge portion of existing web pages.

A Simple HTML Dialect

I didn’t even try to implement the standard HTML parsing algorithm. Instead I wrote a basic parser for a tiny subset of HTML syntax. My parser can handle simple pages like this:

<html>
    <body>
        <h1>Title</h1>
        <div id="main" class="test">
            <p>Hello <em>world</em>!</p>
        </div>
    </body>
</html>

The following syntax is allowed:

Everything else is unsupported, including:

At each stage of this project I’m writing more or less the minimum code needed to support the later stages. But if you want to learn more about parsing theory and tools, you can be much more ambitious in your own project!

Example Code

Next, let’s walk through my toy HTML parser, keeping in mind that this is just one way to do it (and probably not the best way). Its structure is based loosely on the tokenizer module from Servo’s cssparser library. It has no real error handling; in most cases, it just aborts when faced with unexpected syntax. The code is in Rust, but I hope it’s fairly readable to anyone who’s used similar-looking languages like Java, C++, or C#. It makes use of the DOM data structures from part 1.

The parser stores its input string and a current position within the string. The position is the index of the next character we haven’t processed yet.

struct Parser {
    pos: uint,
    input: String,
}

We can use this to implement some simple methods for peeking at the next characters in the input:

impl Parser {
    /// Read the next character without consuming it.
    fn next_char(&self) -> char {
        self.input.as_slice().char_at(self.pos)
    }

    /// Do the next characters start with the given string?
    fn starts_with(&self, s: &str) -> bool {
        self.input.as_slice().slice_from(self.pos).starts_with(s)
    }

    /// Return true if all input is consumed.
    fn eof(&self) -> bool {
        self.pos >= self.input.len()
    }

    // ...
}

Rust strings are stored as UTF-8 byte arrays. To go to the next character, we can’t just advance by one byte. Instead we use char_range_at which correctly handles multi-byte characters. (If our string used fixed-width characters, we could just increment pos.)

/// Return the current character, and advance to the next character.
fn consume_char(&mut self) -> char {
    let range = self.input.as_slice().char_range_at(self.pos);
    self.pos = range.next;
    return range.ch;
}

Often we will want to consume a string of consecutive characters. The consume_while method consumes characters that meet a given condition, and returns them as a string:

/// Consume characters until `test` returns false.
fn consume_while(&mut self, test: |char| -> bool) -> String {
    let mut result = String::new();
    while !self.eof() && test(self.next_char()) {
        result.push(self.consume_char());
    }
    return result;
}

We can use this to ignore a sequence of space characters, or to consume a string of alphanumeric characters:

/// Consume and discard zero or more whitespace characters.
fn consume_whitespace(&mut self) {
    self.consume_while(|c| c.is_whitespace());
}

/// Parse a tag or attribute name.
fn parse_tag_name(&mut self) -> String {
    self.consume_while(|c| match c {
        'a'...'z' | 'A'...'Z' | '0'...'9' => true,
        _ => false
    })
}

Now we’re ready to start parsing HTML. To parse a single node, we look at its first character to see if it is an element or a text node. In our simplified version of HTML, a text node can contain any character except <.

/// Parse a single node.
fn parse_node(&mut self) -> dom::Node {
    match self.next_char() {
        '<' => self.parse_element(),
        _   => self.parse_text()
    }
}

/// Parse a text node.
fn parse_text(&mut self) -> dom::Node {
    dom::text(self.consume_while(|c| c != '<'))
}

An element is more complicated. It includes opening and closing tags, and between them any number of child nodes:

/// Parse a single element, including its open tag, contents, and closing tag.
fn parse_element(&mut self) -> dom::Node {
    // Opening tag.
    assert!(self.consume_char() == '<');
    let tag_name = self.parse_tag_name();
    let attrs = self.parse_attributes();
    assert!(self.consume_char() == '>');

    // Contents.
    let children = self.parse_nodes();

    // Closing tag.
    assert!(self.consume_char() == '<');
    assert!(self.consume_char() == '/');
    assert!(self.parse_tag_name() == tag_name);
    assert!(self.consume_char() == '>');

    return dom::elem(tag_name, attrs, children);
}

Parsing attributes is pretty easy in our simplified syntax. Until we reach the end of the opening tag (>) we repeatedly look for a name followed by = and then a string enclosed in quotes.

/// Parse a single name="value" pair.
fn parse_attr(&mut self) -> (String, String) {
    let name = self.parse_tag_name();
    assert!(self.consume_char() == '=');
    let value = self.parse_attr_value();
    return (name, value);
}

/// Parse a quoted value.
fn parse_attr_value(&mut self) -> String {
    let open_quote = self.consume_char();
    assert!(open_quote == '"' || open_quote == '\'');
    let value = self.consume_while(|c| c != open_quote);
    assert!(self.consume_char() == open_quote);
    return value;
}

/// Parse a list of name="value" pairs, separated by whitespace.
fn parse_attributes(&mut self) -> dom::AttrMap {
    let mut attributes = HashMap::new();
    loop {
        self.consume_whitespace();
        if self.next_char() == '>' {
            break;
        }
        let (name, value) = self.parse_attr();
        attributes.insert(name, value);
    }
    return attributes;
}

To parse the child nodes, we recursively call parse_node in a loop until we reach the closing tag:

/// Parse a sequence of sibling nodes.
fn parse_nodes(&mut self) -> Vec<dom::Node> {
    let mut nodes = Vec::new();
    loop {
        self.consume_whitespace();
        if self.eof() || self.starts_with("</") {
            break;
        }
        nodes.push(self.parse_node());
    }
    return nodes;
}

Finally, we can put this all together to parse an entire HTML document into a DOM tree. This function will create a root node for the document if it doesn’t include one explicitly; this is similar to what a real HTML parser does.

/// Parse an HTML document and return the root element.
pub fn parse(source: String) -> dom::Node {
    let mut nodes = Parser { pos: 0u, input: source }.parse_nodes();

    // If the document contains a root element, just return it. Otherwise, create one.
    if nodes.len() == 1 {
        nodes.swap_remove(0).unwrap()
    } else {
        dom::elem("html".to_string(), HashMap::new(), nodes)
    }
}

That’s it! The entire code for the robinson HTML parser. The whole thing weighs in at just over 100 lines of code (not counting blank lines and comments). If you use a good library or parser generator, you can probably build a similar toy parser in even less space.

Exercises

Here are a few alternate ways to try this out yourself. As before, you can choose one or more of them and ignore the others.

  1. Build a parser (either “by hand” or with a library or parser generator) that takes a subset of HTML as input and produces a tree of DOM nodes.

  2. Modify robinson’s HTML parser to add some missing features, like comments. Or replace it with a better parser, perhaps built with a library or generator.

  3. Create an invalid HTML file that causes your parser (or mine) to fail. Modify the parser to recover from the error and produce a DOM tree for your test file.

Shortcuts

If you want to skip parsing completely, you can build a DOM tree programmatically instead, by adding some code like this to your program (in pseudo-code; adjust it to match the DOM code you wrote in Part 1):

// <html><body>Hello, world!</body></html>
let root = element("html");
let body = element("body");
root.children.push(body);
body.children.push(text("Hello, world!"));

Or you can find an existing HTML parser and incorporate it into your program.

The next article in this series will cover CSS data structures and parsing.

August 11, 2014 03:00 PM

August 08, 2014

Matt Brubeck

Let's build a browser engine! Part 1: Getting started

I’m building a toy HTML rendering engine, and I think you should too. This is the first in a series of articles:

The full series will describe the code I’ve written, and show how you can make your own. But first, let me explain why.

You’re building a what?

Let’s talk terminology. A browser engine is the portion of a web browser that works “under the hood” to fetch a web page from the internet, and translate its contents into forms you can read, watch, hear, etc. Blink, Gecko, WebKit, and Trident are browser engines. In contrast, the browser’s own UI—tabs, toolbar, menu and such—is called the chrome. Firefox and SeaMonkey are two browsers with different chrome but the same Gecko engine.

A browser engine includes many sub-components: an HTTP client, an HTML parser, a CSS parser, a JavaScript engine (itself composed of parsers, interpreters, and compilers), and much more. The many components involved in parsing web formats like HTML and CSS and translating them into what you see on-screen are sometimes called the layout engine or rendering engine.

Why a “toy” rendering engine?

A full-featured browser engine is hugely complex. Blink, Gecko, WebKit—these are millions of lines of code each. Even younger, simpler rendering engines like Servo and WeasyPrint are each tens of thousands of lines. Not the easiest thing for a newcomer to comprehend!

Speaking of hugely complex software: If you take a class on compilers or operating systems, at some point you will probably create or modify a “toy” compiler or kernel. This is a simple model designed for learning; it may never be run by anyone besides the person who wrote it. But making a toy system is a useful tool for learning how the real thing works. Even if you never build a real-world compiler or kernel, understanding how they work can help you make better use of them when writing your own programs.

So, if you want to become a browser developer, or just to understand what happens inside a browser engine, why not build a toy one? Like a toy compiler that implements a subset of a “real” programming language, a toy rendering engine could implement a small subset of HTML and CSS. It won’t replace the engine in your everyday browser, but should nonetheless illustrate the basic steps needed for rendering a simple HTML document.

Try this at home.

I hope I’ve convinced you to give it a try. This series will be easiest to follow if you already have some solid programming experience and know some high-level HTML and CSS concepts. However, if you’re just getting started with this stuff, or run into things you don’t understand, feel free to ask questions and I’ll try to make it clearer.

Before you start, a few remarks on some choices you can make:

On Programming Languages

You can build a toy layout engine in any programming language. Really! Go ahead and use a language you know and love. Or use this as an excuse to learn a new language if that sounds like fun.

If you want to start contributing to major browser engines like Gecko or WebKit, you might want to work in C++ because it’s the main language used in those engines, and using it will make it easier to compare your code to theirs. My own toy project, robinson, is written in Rust. I’m part of the Servo team at Mozilla, so I’ve become very fond of Rust programming. Plus, one of my goals with this project is to understand more of Servo’s implementation. (I’ve written a lot of browser chrome code, and a few small patches for Gecko, but before joining the Servo project I knew nothing about many areas of the browser engine.) Robinson sometimes uses simplified versions of Servo’s data structures and code. If you too want to start contributing to Servo, try some of the exercises in Rust!

On Libraries and Shortcuts

In a learning exercise like this, you have to decide whether it’s “cheating” to use someone else’s code instead of writing your own from scratch. My advice is to write your own code for the parts that you really want to understand, but don’t be shy about using libraries for everything else. Learning how to use a particular library can be a worthwhile exercise in itself.

I’m writing robinson not just for myself, but also to serve as example code for these articles and exercises. For this and other reasons, I want it to be as tiny and self-contained as possible. So far I’ve used no external code except for the Rust standard library. (This also side-steps the minor hassle of getting multiple dependencies to build with the same version of Rust while the language is still in development.) This rule isn’t set in stone, though. For example, I may decide later to use a graphics library rather than write my own low-level drawing code.

Another way to avoid writing code is to just leave things out. For example, robinson has no networking code yet; it can only read local files. In a toy program, it’s fine to just skip things if you feel like it. I’ll point out potential shortcuts like this as I go along, so you can bypass steps that don’t interest you and jump straight to the good stuff. You can always fill in the gaps later if you change your mind.

First Step: The DOM

Are you ready to write some code? We’ll start with something small: data structures for the DOM. Let’s look at robinson’s dom module.

The DOM is a tree of nodes. A node has zero or more children. (It also has various other attributes and methods, but we can ignore most of those for now.)

struct Node {
    // data common to all nodes:
    children: Vec<Node>,

    // data specific to each node type:
    node_type: NodeType,
}

There are several node types, but for now we will ignore most of them and say that a node is either an Element or a Text node. In a language with inheritance these would be subtypes of Node. In Rust they can be an enum (Rust’s keyword for a “tagged union” or “sum type”):

enum NodeType {
    Text(String),
    Element(ElementData),
}
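
For comparison, here is a rough sketch of the inheritance-based version in a language like Java; the class and field names are my own, not robinson’s:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Data common to all nodes lives in the base class.
abstract class Node {
    final List<Node> children = new ArrayList<Node>();
}

// A text node carries only its character data.
class Text extends Node {
    final String data;
    Text(String data) { this.data = data; }
}

// An element carries a tag name and an attribute map.
class Element extends Node {
    final String tagName;
    final Map<String, String> attributes = new HashMap<String, String>();
    Element(String tagName) { this.tagName = tagName; }
}

The enum version keeps the variants in one place and lets the compiler check that every match handles both cases.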

An element includes a tag name and any number of attributes, which can be stored as a map from names to values. Robinson doesn’t support namespaces, so it just stores tag and attribute names as simple strings.

struct ElementData {
    tag_name: String,
    attributes: AttrMap,
}

type AttrMap = HashMap<String, String>;

Finally, some constructor functions to make it easy to create new nodes:

fn text(data: String) -> Node {
    Node { children: Vec::new(), node_type: Text(data) }
}

fn elem(name: String, attrs: AttrMap, children: Vec<Node>) -> Node {
    Node {
        children: children,
        node_type: Element(ElementData {
            tag_name: name,
            attributes: attrs,
        })
    }
}

And that’s it! A full-blown DOM implementation would include a lot more data and dozens of methods, but this is all we need to get started. In the next article, we’ll add a parser that turns HTML source code into a tree of these DOM nodes.

Exercises

These are just a few suggested ways to follow along at home. Do the exercises that interest you and skip any that don’t.

  1. Start a new program in the language of your choice, and write code to represent a tree of DOM text nodes and elements.

  2. Install the latest version of Rust, then download and build robinson. Open up dom.rs and extend NodeType to include additional types like comment nodes.

  3. Write code to pretty-print a tree of DOM nodes.

References

For much more detailed information about browser engine internals, see Tali Garsiel’s wonderful How Browsers Work and its links to further resources.

For example code, here’s a short list of “small” open source web rendering engines. Most of them are many times bigger than robinson, but still way smaller than Gecko or WebKit. WebWhirr, at 2000 lines of code, is the only other one I would call a “toy” engine.

You may find these useful for inspiration or reference. If you know of any other similar projects—or if you start your own—please let me know!

August 08, 2014 04:40 PM

August 03, 2014

Geoff Brown

Firefox for Android Performance Measures – July check-up

My monthly review of Firefox for Android performance measurements. This month’s highlights:

- No significant regressions or improvements found!

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Firefox for Android, for Talos tests run on Android 4.0 Opt. The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcanvasmark

This test is not currently run on Android 4.0.

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

Screenshot from 2014-08-03 13:13:44

12 (start of period) – 12 (end of period)

The temporary regression of July 24 was caused by bug 1031107; resolved by bug 1044702.

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.

50000 (start of period) – 50000 (end of period)

tprovider

Performance of history and bookmarks’ provider. Reports time (ms) to perform a group of database operations. Lower values are better.

520 (start of period) – 520 (end of period).

tsvgx

An SVG-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode, thus reflecting the maximum rendering throughput of each test. The reported value is the page load time or, for animations/iterations, the overall duration the sequence/animation took to complete. Lower values are better.

6300 (start of period) – 6300 (end of period).

tp4m

Generic page load test. Lower values are better.

940 (start of period) – 940 (end of period).

ts_paint

Startup performance test. Lower values are better.

3600 (start of period) – 3650 (end of period).

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

Screenshot from 2014-08-03 13:39:26

Screenshot from 2014-08-03 13:45:17

Screenshot from 2014-08-03 13:49:07

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

The Eideticker dashboard is slowly coming back to life, but there are still not enough results to show graphs here. We’ll check back at the end of August.


August 03, 2014 07:51 PM

August 01, 2014

Margaret Leibovic

Firefox for Android: Search Experiments

Search is a large part of mobile browser usage, so we (the Firefox for Android team) decided to experiment with ways to improve our search experience for users. As an initial goal, we decided to look into how we can make search faster. To explore this space, we’re about to enable two new features in Nightly: a search activity and a home screen widget.

Android allows apps to register to handle an “assist” intent, which is triggered by the swipe-up gesture on Nexus devices. We decided to hook into this intent to launch a quick, lightweight search experience for Firefox for Android users.


Right now we’re using Yahoo! to power search suggestions and results, but we have patches in the works to let users choose their own search engine. Tapping on results will launch users back into their normal Firefox for Android experience.

We also created a simple home screen widget to help users quickly launch this search activity even if they’re not using a Nexus device. As a bonus, this widget also lets users quickly open a new tab in Firefox for Android.


We are still in the early phases of design and development, so be prepared to see changes as we iterate to improve this search experience. We have a few telemetry probes in place to let us gather data on how people are using these new search features, but we’d also love to hear your feedback!

You can find links to relevant bugs on our project wiki page. As always, discussion about Firefox for Android development happens on the mobile-firefox-dev mailing list and in #mobile on IRC. And we’re always looking for new contributors if you’d like to get involved!

Special shout-out to our awesome intern Eric for leading the initial search activity development, as well as Wes for implementing the home screen widget.

August 01, 2014 09:10 PM

July 31, 2014

Lucas Rocha

The new TwoWayView

What if writing custom view recycling layouts was a lot simpler? This question stuck in my mind since I started writing Android apps a few years ago.

The lack of proper extension hooks in the AbsListView API has been one of my biggest pain points on Android. The community has come up with different layout implementations that were largely based on AbsListView’s code but none of them really solved the framework problem.

So a few months ago, I finally set to work on a new API for TwoWayView that would provide a framework for custom view recycling layouts. I had made some good progress but then Google announced RecyclerView at I/O and everything changed.

At first sight, RecyclerView seemed to be an exact overlap with the new TwoWayView API. After some digging though, it became clear that RecyclerView was a superset of what I was working on. So I decided to embrace RecyclerView and rebuild TwoWayView on top of it.

The new TwoWayView is functional enough now. Time to get some early feedback. This post covers the upcoming API and the general-purpose layout managers that will ship with it.

Creating your own layouts

RecyclerView itself doesn’t actually do much. It implements the fundamental state handling around child views, touch events and adapter changes, then delegates the actual behaviour to separate components—LayoutManager, ItemDecoration, ItemAnimator, etc. This means that you still have to write some non-trivial code to create your own layouts.

LayoutManager is a low-level API. It simply gives you extension points to handle scrolling and layout. For most layouts, the general structure of a LayoutManager implementation is going to be very similar—recycle views out of parent bounds, add new views as the user scrolls, layout scrap list items, etc.
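
To give a sense of what that involves, here is a rough sketch (class name and structure are mine, with scrolling and recycling left out entirely) of a bare-bones vertical list written directly against RecyclerView.LayoutManager:

import android.support.v7.widget.RecyclerView;
import android.view.View;
import android.view.ViewGroup;

public class TrivialListLayoutManager extends RecyclerView.LayoutManager {
    @Override
    public RecyclerView.LayoutParams generateDefaultLayoutParams() {
        return new RecyclerView.LayoutParams(
                ViewGroup.LayoutParams.MATCH_PARENT,
                ViewGroup.LayoutParams.WRAP_CONTENT);
    }

    @Override
    public void onLayoutChildren(RecyclerView.Recycler recycler, RecyclerView.State state) {
        // Scrap every attached view, then lay out children from the top
        // until we run out of items or run off the bottom of the parent.
        detachAndScrapAttachedViews(recycler);
        int top = getPaddingTop();
        for (int i = 0; i < state.getItemCount() && top < getHeight(); i++) {
            final View child = recycler.getViewForPosition(i);
            addView(child);
            measureChildWithMargins(child, 0, 0);
            final int height = getDecoratedMeasuredHeight(child);
            layoutDecorated(child, getPaddingLeft(), top,
                    getWidth() - getPaddingRight(), top + height);
            top += height;
        }
    }

    @Override
    public boolean canScrollVertically() {
        return true;
    }

    // scrollVerticallyBy(), recycling views that leave the viewport, and
    // filling in new ones as the user scrolls are all still missing, and
    // that is where most of the repetitive work lives.
}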

Wouldn’t it be nice if you could implement LayoutManagers with a higher-level API that was more focused on the layout itself? Enter the new TwoWayView API.

TwoWayLayoutManager is a simple API on top of LayoutManager that does all the laborious work for you so that you can focus on how the child views are measured, placed, and detached from the RecyclerView.

To get a better idea of what the API looks like, have a look at these sample layouts: SimpleListLayout is a list layout and GridAndListLayout is a more complex example where the first N items are laid out as a grid and the remaining ones behave like a list. As you can see you only need to override a couple of simple methods to create your own layouts.

Built-in layouts

The new API is pretty nice but I also wanted to create a space for collaboration around general-purpose layout managers. So far, Google has only provided LinearLayoutManager. They might end up releasing a few more layouts later this year but, for now, that is all we got.

layouts

The new TwoWayView ships with a collection of four built-in layouts: List, Grid, Staggered Grid, and Spannable Grid.

These layouts support all RecyclerView features: item animations, decorations, scroll to position, smooth scroll to position, view state saving, etc. They can all be scrolled vertically and horizontally—this is the TwoWayView project after all ;-)

You probably know how the List and Grid layouts work. Staggered Grid arranges items with variable heights or widths into different columns or rows according to its orientation.

Spannable Grid is a grid layout with fixed-size cells that allows items to span multiple columns and rows. You can define the column and row spans as attributes in the child views as shown below.

<FrameLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:colSpan="2"
    app:rowSpan="3">
    ...
</FrameLayout>

Utilities

The new TwoWayView API will ship with a convenience view (TwoWayView) that can take a layoutManager XML attribute that points to a layout manager class.

<org.lucasr.twowayview.widget.TwoWayView
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:layoutManager="ListLayoutManager"/>

This way you can leverage the resource system to set layout manager depending on device features and configuration via styles.

You can also use ItemClickSupport to add ListView-style item (long) click listeners. You can easily plug in support for those in any RecyclerView (see sample).

I’m also planning to create pluggable item decorations for dividers, item spacing, list selectors, and more.


That’s all for now! The API is still in flux and will probably go through a few more iterations. The built-in layouts definitely need more testing.

You can help by filing (and fixing) bugs and giving feedback on the API. Maybe try using the built-in layouts in your apps and see what happens?

I hope TwoWayView becomes a productive collaboration space for RecyclerView extensions and layouts. Contributions are very welcome!

July 31, 2014 11:33 AM

July 30, 2014

Chris Peterson

Testing Add-on Compatibility With Multi-Process Firefox

“Electrolysis” (or “e10s” for short) is the project name for Mozilla’s multi-process Firefox. Sandboxing tabs into multiple processes will improve security and UI responsiveness. Firefox currently sandboxes plugins like Flash into a separate process, but sandboxing web content is more difficult because Firefox’s third-party add-ons were not designed for multiple processes. IE and Chrome use multiple processes today, but Google didn’t need to worry about add-on compatibility when designing Chrome’s multi-process sandbox because they didn’t have any. :)

And that’s where our Firefox Nightly testers come in! We can’t test every Firefox add-on ourselves. We’re asking for your help testing your favorite add-ons in Firefox Nightly’s multi-process mode. We’re tracking tested add-ons, those that work and those that need to be fixed, on the website arewee10syet.com (“Are We e10s Yet?”). Mozilla is hosting a QMO Testday on Friday August 1 where Mozilla QA and e10s developers will be available in Mozilla’s #testday IRC channel to answer questions.

To test an add-on:

  1. Install Firefox Nightly.
  2. Optional but recommended: create a new Firefox profile so you are testing the add-on without any other add-ons or old settings.
  3. Install the add-on you would like to test. See arewee10syet.com for some suggestions.
  4. e10s is disabled by default. Confirm that the add-on works as expected in Firefox Nightly before enabling e10s. You might find Firefox Nightly bugs that are not e10s’ fault. :)
  5. Now enable e10s by opening the about:config page and changing the browser.tabs.remote.autostart preference to true.
  6. Restart Firefox Nightly. When e10s is enabled, Firefox’s tab titles will be underlined. Tabs for special pages, like your home page or the new tab page, are not underlined, but tabs for most websites should be underlined.
  7. Confirm that the add-on still works as expected with e10s.
  8. To disable e10s, reset the browser.tabs.remote.autostart preference to false and restart Firefox.

Some e10s problems you might find include Firefox crashing or hanging. Add-ons that modify web page content, like Greasemonkey or AdBlock Plus, might appear to do nothing. But many add-ons will just work.

If the add-on works as expected, click the “it works” link on arewee10syet.com for that add-on or just email me so we can update our list of compatible add-ons.

If the add-on does not work as expected, click the add-on’s “Report bug” link on arewee10syet.com to file a bug report on Bugzilla. Please include the add-on’s name and version, steps to reproduce the problem, a description of what you expected to happen, and what actually happened. If Firefox crashed, include the most recent crash report IDs from about:crashes. If Firefox didn’t crash, copying the log messages from Firefox’s Browser Console (Tools menu > Web Developer menu > Browser Console menu item; not Web Console) into the bug may capture useful debugging information.

July 30, 2014 07:31 PM

Wes Johnston

Better tiles in Fennec

We recently reworked Firefox for Android’s homescreen to look a little prettier on first run by shipping “tile” icons and colors for the default sites. In Firefox 33, we’re allowing sites to designate their own tile images and colors by supporting msapplication-TileImage and msapplication-TileColor in Fennec. So, for example, you might start seeing site-branded tiles appear as you browse. Sites can add these with just a little markup in the page:

<meta name="msapplication-TileImage" content="images/myimage.png"/>
<meta name="msapplication-TileColor" content="#d83434"/>

As you can see above in the Boston Globe tile, sometimes we don’t have much to work with. Firefox for Android already supports the sizes attribute on favicon links, and our fabulous intern Chris Kitching improved things even more last year. In the absence of a tile, we’ll show a screenshot. If you’ve designated that Firefox shouldn’t cache the content of your site for security reasons, we’ll use the most appropriate size we can find and pull colors out of it for the background. But if sites can provide us with this information directly, it’s 1) much faster and 2) gives much better results.
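
As a purely illustrative sketch (a hypothetical helper, not Fennec’s actual code), “pulling colors out” of an icon could be as simple as averaging its pixels:

import android.graphics.Bitmap;
import android.graphics.Color;

public final class TileColors {
    private TileColors() {}

    // Average the pixels of a favicon to pick a tile background color.
    public static int averageColor(Bitmap icon) {
        final int width = icon.getWidth();
        final int height = icon.getHeight();
        final int count = width * height;
        if (count == 0) {
            return Color.WHITE;
        }
        long r = 0, g = 0, b = 0;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                final int pixel = icon.getPixel(x, y);
                r += Color.red(pixel);
                g += Color.green(pixel);
                b += Color.blue(pixel);
            }
        }
        return Color.rgb((int) (r / count), (int) (g / count), (int) (b / count));
    }
}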

AFAIK, there is no standard spec for these types of meta tags, and none in the works either. It’s a bit of the wild wild west right now. For instance, Apple supports apple-mobile-web-app-status-bar-style for designating the color of the status bar in certain situations, as well as a host of images for use in different situations.

Opera at one point supported using a minimized media query to designate a stylesheet for thumbnails (sadly they’ve removed all of those docs, so instead you just get a github link to an html file there). Gecko doesn’t have view-mode media query support currently, and not many sites have implemented it anyway, but it might in the future provide a standards-based alternative. That said, there are enough good reasons to know a “color” or a few different “logos” for an app or site that it might be worth coming up with some standards-based ways to list these things in pages.


July 30, 2014 04:30 PM

July 25, 2014

Mark Finkle

Firefox for Android: Collecting and Using Telemetry

Firefox 31 for Android is the first release where we collect telemetry data on user interactions. We created a simple “event” and “session” system, built on top of the current telemetry system that has been shipping in Firefox for many releases. The existing telemetry system is focused more on the platform features and tracking how various components are behaving in the wild. The new system is really focused on how people are interacting with the application itself.

Collecting Data

The basic system consists of two types of telemetry probes: event probes, which record individual user actions, and session probes, which mark the periods of activity during which those events occur.

We add the probes into any part of the application that we want to study, which is most of the application.
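
As a purely hypothetical illustration (the class and method names below are mine, not the real Fennec probe API), an event probe records a single action and how it was triggered, while a session probe brackets a period of activity:

import android.util.Log;

// Hypothetical sketch only; Fennec's real probe API differs.
public final class UITelemetry {
    private UITelemetry() {}

    // Event probe: one user action plus the way it was triggered.
    public static void sendEvent(String action, String method) {
        // A real implementation would queue this for the telemetry ping
        // rather than just logging it.
        Log.d("UITelemetry", "event: " + action + " via " + method);
    }

    // Session probes: mark the start and end of a period of activity so
    // that events can be correlated with what the user was doing.
    public static void startSession(String session) {
        Log.d("UITelemetry", "session start: " + session);
    }

    public static void stopSession(String session) {
        Log.d("UITelemetry", "session stop: " + session);
    }
}

A call site might then record something like sendEvent("reload", "menu") while a session opened when the browser came to the foreground is still active.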

Visualizing Data

The raw telemetry data is processed into summaries, one for Events and one for Sessions. In order to visualize the telemetry data, we created a simple dashboard (source code). It’s built using a great little library called PivotTable.js, which makes it easy to slice and dice the summary data. The dashboard has several predefined tables so you can start digging into various aspects of the data quickly. You can drag and drop the fields into the column or row headers to reorganize the table. You can also add filters to any of the fields, even those not used in the row/column headers. It’s a pretty slick library.

uitelemetry-screenshot-crop

Acting on Data

Now that we are collecting and studying the data, the goal is to find patterns that are unexpected or might warrant a closer inspection. Here are a few of the discoveries:

Page Reload: Even in our Nightly channel, people seem to be reloading the page quite a bit. Way more than we expected. It’s one of the Top 2 actions. Our current thinking includes several possibilities:

  1. Page gets stuck during a load and a Reload gets it going again
  2. Networking error of some kind, with a “Try again” button on the page. If the button does not solve the problem, a Reload might be attempted.
  3. Weather or some other update-able page where a Reload shows the current information.

We have started projects to explore the first two issues. The third issue might be fine as-is, or maybe we could add a feature to make updating pages easier? You can still see high uses of Reload (reload) on the dashboard.

Remove from Home Pages: The History page, primarily, and the Top Sites page see high uses of Remove (home_remove) to delete browsing information from the Home pages. People do this a lot; again, it’s one of the Top 2 actions. People will also do this repeatedly, over and over, clearing the entire list in a manual fashion. Firefox has a Clear History feature, but it must not be very discoverable. We also see people asking for easier ways of clearing history in our feedback, but it wasn’t until we saw the telemetry data that we understood how badly this was needed. This led us to add some features:

  1. Since the History page was the predominant source of the Removes, we added a Clear History button right on the page itself.
  2. We added a way to Clear History when quitting the application. This was a bit tricky since Android doesn’t really promote “Quitting” applications, but if a person wants to enable this feature, we add a Quit menu item to make the action explicit and in their control.
  3. With so many people wanting to clear their browsing history, we assumed they didn’t know that Private Browsing existed. No history is saved when using Private Browsing, so we’re adding some contextual hinting about the feature.

These features are included in Nightly and Aurora versions of Firefox. Telemetry is showing a marked decrease in Remove usage, which is great. We hope to see the trend continue into Beta next week.

External URLs: People open a lot of URLs from external applications, like Twitter, into Firefox. This wasn’t totally unexpected (it’s a common pattern on Android), but the degree to which it happened versus opening the browser directly was somewhat unexpected. Close to 50% of the URLs loaded into Firefox are from external applications. Less so in Nightly, Aurora and Beta, but even in those channels it’s almost 30%. We have started looking into ideas for making the process of opening URLs into Firefox a better experience.
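
For context, an external application typically hands Firefox a URL with a plain Android VIEW intent, roughly like this (ordinary Android code, nothing Fennec-specific; org.mozilla.firefox is the release package name):

import android.content.Context;
import android.content.Intent;
import android.net.Uri;

public final class OpenInFirefox {
    private OpenInFirefox() {}

    // Ask Android to open the given URL, explicitly in Firefox.
    public static void open(Context context, String url) {
        final Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(url));
        intent.setPackage("org.mozilla.firefox");
        context.startActivity(intent);
    }
}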

Saving Images: An unexpected discovery was how often people save images from web content (web_save_image). We haven’t spent much time considering this one. We think we are doing the “right thing” with the images as far as Android conventions are concerned, but there might be new features waiting to be implemented here as well.

Take a look at the data. What patterns do you see?

Here is the obligatory UI heatmap, also available from the dashboard:
uitelemetry-heatmap

July 25, 2014 03:08 AM

July 10, 2014

Nick Alexander

Build your own browser: A Maven repository for GeckoView

GeckoView is a project that lets you embed the Gecko rendering engine into your Android App. Slowly but surely, we’ve been making this process easier. It’s now really easy to include GeckoView in your Gradle-based application, thanks to a new Maven repository hosting Nightly GeckoView builds.

GeckoView is a long-time Fennec (Firefox for Android) side-project: you can see the GeckoView project page and the first GeckoView blog post. The first sample code, the original geckobrowser, is still working too, but in the year since the initial development, progress has been slow.

I think part of the reason that progress has been slow is that it’s quite difficult to embed GeckoView into an App — at least, it’s quite tricky if you use it as it’s packaged on Mozilla’s build infrastructure [1]. Now, there’s an easier way that takes advantage of Gradle’s excellent support for Maven repositories.

A Maven repository for GeckoView

A Jenkins job, running on ci.mozilla.org, builds a new AAR (Android ARchive) library file [2]. The new job runs every night at 5AM Pacific; Nightly builds are usually kicked off between 2 and 3AM Pacific, so the artifacts should usually be fresh.

The AAR files produced are versioned appropriately [3] and then published in the Maven repository hosted at https://ci.mozilla.org/job/mozilla-central-geckoview/mozilla-central_Maven_Repository. The AAR artifacts are pushed with groupId=org.mozilla.geckoview and artifactId=library; to refer to the latest AAR in Gradle, use:

repositories {
    maven {
        url 'https://ci.mozilla.org/job/mozilla-central-geckoview/mozilla-central_Maven_Repository'
    }
}

dependencies {
    compile 'com.android.support:support-v4:19.+'
    compile 'org.mozilla.geckoview:library:+'
}

That’s it; that’s all you need to build against GeckoView [4] [5]. For a worked example, keep reading.

Example: an updated geckobrowser

You can follow along with the repository at https://github.com/ncalexan/geckobrowser-gradle.

As of Android 19, the android create project tool can create Gradle projects, so let’s use it:

~/Mozilla/geckobrowser-gradle $ android create project \
  -a MainActivity -k org.mozilla.geckobrowser -t android-19 -g -p . -v 0.12
Error: Project folder '.' is not empty. Please consider using 'android update' instead.
Created directory /Users/nalexander/Mozilla/geckobrowser-gradle/src/main/java
Created directory /Users/nalexander/Mozilla/geckobrowser-gradle/src/main/java/org/mozilla/geckobrowser
Added file ./src/main/java/org/mozilla/geckobrowser/MainActivity.java
Created directory /Users/nalexander/Mozilla/geckobrowser-gradle/src/instrumentTest/java
Created directory /Users/nalexander/Mozilla/geckobrowser-gradle/src/instrumentTest/java/org/mozilla/geckobrowser
Added file ./src/instrumentTest/java/org/mozilla/geckobrowser/MainActivityTest.java
Created directory /Users/nalexander/Mozilla/geckobrowser-gradle/src/main/res
Created directory /Users/nalexander/Mozilla/geckobrowser-gradle/src/main/res/values
Added file ./src/main/res/values/strings.xml
Created directory /Users/nalexander/Mozilla/geckobrowser-gradle/src/main/res/layout
Added file ./src/main/res/layout/main.xml
Created directory /Users/nalexander/Mozilla/geckobrowser-gradle/src/main/res/drawable-xhdpi
Created directory /Users/nalexander/Mozilla/geckobrowser-gradle/src/main/res/drawable-hdpi
Created directory /Users/nalexander/Mozilla/geckobrowser-gradle/src/main/res/drawable-mdpi
Created directory /Users/nalexander/Mozilla/geckobrowser-gradle/src/main/res/drawable-ldpi
Added file ./src/main/AndroidManifest.xml
Added file ./build.gradle
Created directory /Users/nalexander/Mozilla/geckobrowser-gradle/gradle/wrapper

My new project is in directory geckobrowser-gradle and the original is in directory geckobrowser. Copy the Android resources (res/ directory) verbatim, and take MainActivity.java from the original geckobrowser; it will need some light editing:

~/Mozilla/geckobrowser-gradle $ rm -rf src/main/res && cp -R ../geckobrowser/res src/main
~/Mozilla/geckobrowser-gradle $ cp ../geckobrowser/src/com/starkravingfinkle/geckobrowser/MainActivity.java \
  src/main/java/org/mozilla/geckobrowser/MainActivity.java

We’re almost there; we just need to rename the package. Update the Java package in MainActivity.java like so:

--- a/src/main/java/org/mozilla/geckobrowser/MainActivity.java
+++ b/src/main/java/org/mozilla/geckobrowser/MainActivity.java
@@ -1,4 +1,4 @@
-package com.starkravingfinkle.geckobrowser;
+package org.mozilla.geckobrowser;

 import org.mozilla.gecko.GeckoView;
 import org.mozilla.gecko.GeckoView.Browser;

After building and pushing to device using ./gradlew build installDebug, I took the following screen capture:

Conclusion

That’s a browser — minimal but functional — built around GeckoView! And it uses "Android standard" packaging techniques: no fussing with manually managed ZIP files. I’ve worked on improving the packaging of Fennec and GeckoView so that things like this were at least feasible, and now that I’ve done it, I’d like to congratulate the Android Tools team for steadily improving the Android packaging story. It’s still not perfect [6], but it handles most functional requirements and is a huge improvement over the frustrating limitations of earlier iterations.

Finally, a plea: GeckoView needs consumers to drive its development. The Fennec team tries to move the project forward when possible, but without guiding lights and community involvement (other than Fennec itself, and the Fennec team), making GeckoView more functional always plays a poor second fiddle to making Fennec more awesome than it already is. But we need to bring the Open Web to more places than those where Fennec can take it, and for that, we need GeckoView, and your help.

Notes

[1]Every mozilla-central Nightly build includes GeckoView, in the form of two zip files. The first, geckoview_library.zip, contains compiled code (Java JARs) and Android resources that need to be included in the embedding App. The second, geckoview_assets.zip, contains specially compressed native code libraries (.so files) that need to be copied into the embedding App’s assets/ directory. Who likes manual dependency management? Not me!
[2]It’s not well-documented, but an assets/ directory packaged into an AAR will be merged with the embedding application’s assets, and we take advantage of this fact.
[3]The Maven version is the Gecko build id of the corresponding Nightly build; for example, 20140710071924 is a version from today.
[4]Technically, the incantation org.mozilla.geckoview:library:+ means to find the latest version of the library artifact (in this case, the appropriate AAR) in the group org.mozilla.geckoview. That’s living on the bleeding edge, to be sure. I think the current Jenkins job saves each Nightly it produces, but I’m not certain, and in any case, each AAR is roughly 35Mb, so I’m going to have to start culling pretty much immediately. I’m not sure what the right way to host such a large Maven repository is, so the version scheme is likely to change in the (near) future. If you depend on a specific version, download and install it into your local Maven repository manually.
[5]I did witness a transient error downloading from the Maven repository, perhaps during the first download. The content length was mis-reported and the AAR file was valid but contained redundant data. Let me know if you see this; I have only witnessed it the one time.
[6]For example there is only patchy Gradle plugin documentation, and with the shift to Gradle, there is little support for library dependency management at the aapt level. This matters for Fennec, since we don’t use Gradle, but would like to participate in the AAR ecosystem.

July 10, 2014 11:20 PM

Geoff Brown

New try aliases “xpcshell” and “robocop”

We now have two new try aliases which will be of interest to some mobile developers.

Android 2.3 tests run xpcshell tests in 3 chunks, which can be specified in a try push:

try: … -u xpcshell-1,xpcshell-2,xpcshell-3

but since all other test platforms run xpcshell as a single chunk, it’s easy to forget about Android 2.3’s chunks and push something like:

try: -b o -p all -u xpcshell -t none

…and then wonder why xpcshell tests didn’t run for Android 2.3!

As of today, a new try alias recognizes “xpcshell” to mean “run all the xpcshell test chunks”.

Similarly, a new try alias recognizes “robocop” to mean “run all the robocop test chunks”.

An example: https://tbpl.mozilla.org/?tree=Try&rev=e52bcf945dcd

tryaliases

How convenient!

(Of course, “-u xpcshell-1”, “-u robocop-2, robocop-3”, etc. still work and you should use them if you only need to run specific chunks.)

Thanks to :Callek and :RyanVM for making this happen.


July 10, 2014 04:55 PM

July 09, 2014

Nick Alexander

Bumpy landings: How to land a Fennec feature behind a build flag

Fennec (Firefox for Android) features are staged and ride the trains (Nightly, Aurora, and Beta) before reaching the Release audience. Features that land on Nightly may — or may not — continue to Aurora. To support rapid Nightly development, while letting code mature before it reaches Aurora, you should land your new feature behind a runtime preference or a build flag. Here’s a guide to landing behind such a flag.

Examples

We’ll use the following ticket as an example of landing a new build flag, code behind the flag, and build system integration:

  • Bug 1021864 landed the code and Android resources for the Search Activity. The Search Activity is an Android activity that depends on GeckoView. As such, it’s built as part of the main Fennec code-base, since GeckoView is essentially the same as Fennec. The build flag is MOZ_ANDROID_SEARCH_ACTIVITY.

You can also look at the following tickets, which are being actively developed during the Firefox for Android 33 cycle.

  • Bug 1024708 will land the code and Android manifest integration for the Mozilla Stumbler. The stumbler is an Android background service that uploads location sensor data to Mozilla. That location sensor data is then reflected to device users through the Mozilla Location Service. This helps device users locate themselves in the world, without requiring a GPS lock. Since the stumbler presents no UI directly, it’s built as a separate Java JAR, and integrated into the Fennec APK via manifest fragments.
  • Bug 1033560 "flipped the switch" to enable flinging videos to Google Chromecast devices. (The code landed in a long sequence of earlier tickets.) The Chromecast source code is part of the main Fennec code base, and parts of it are compiled (or manifest fragments included) conditionally.

Guide to landing

Let’s look at Bug 1021864. This ticket landed as a series of 5 commits [1].

Build flags

The interesting commits of Bug 1021864 are the second and the fifth. The second commit adds a build flag, MOZ_ANDROID_SEARCH_ACTIVITY, that defaults to not being set:

--- a/configure.in
+++ b/configure.in
@@ -3915,16 +3915,17 @@ if test -n "$MOZ_RTSP"; then
 fi
 MOZ_LOCALE_SWITCHER=
+MOZ_ANDROID_SEARCH_ACTIVITY=
 ACCESSIBILITY=1

And enables it just for mobile/android:

--- a/mobile/android/confvars.sh
+++ b/mobile/android/confvars.sh
@@ -69,8 +69,11 @@ fi
 # Enable second screen using native Android libraries
 MOZ_NATIVE_DEVICES=
+
+# Enable the Search Activity.
+MOZ_ANDROID_SEARCH_ACTIVITY=1

Guarding a feature behind the build flag

Use the preprocessor to conditionally include code (and other includes, etc) behind the build flag. For example, the first commit landed all the code for the initial version of the Search Activity with no build integration at all. Then, the third commit exposed the build flag:

--- a/mobile/android/base/locales/moz.build
+++ b/mobile/android/base/locales/moz.build
@@ -1,6 +1,8 @@
 # -*- Mode: python; c-basic-offset: 4; indent-tabs-mode: nil; tab-width: 40 -*-
 # vim: set filetype=python:
 # This Source Code Form is subject to the terms of the Mozilla Public
 # License, v. 2.0. If a copy of the MPL was not distributed with this
 # file, You can obtain one at http://mozilla.org/MPL/2.0/.

+if CONFIG['MOZ_ANDROID_SEARCH_ACTIVITY']:
+    DEFINES['MOZ_ANDROID_SEARCH_ACTIVITY'] = 1

and used the flag in a preprocessed Android resource file:

--- a/mobile/android/base/strings.xml.in
+++ b/mobile/android/base/strings.xml.in
@@ -3,16 +3,19 @@
 <!-- This Source Code Form is subject to the terms of the Mozilla Public
    - License, v. 2.0. If a copy of the MPL was not distributed with this
    - file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
   <string name="android_package_name_for_ui">@ANDROID_PACKAGE_NAME@</string>
+
+#ifdef MOZ_ANDROID_SEARCH_ACTIVITY
+#include ../search/strings/search_strings.xml.in
+#endif
+
 #include ../services/strings.xml.in

The fourth commit includes the bulk of the integration: the Java code itself is built and included in Fennec by the code changes in mobile/android/base/moz.build and mobile/android/base/Makefile.in.

Landing disabled

The fifth commit disables the Search Activity. This is because we wanted to land build-time disabled, work out initial issues using local and try builds, and then build-time enable when the feature stabilizes.

--- a/mobile/android/confvars.sh
+++ b/mobile/android/confvars.sh
 # Enable second screen using native Android libraries
 MOZ_NATIVE_DEVICES=

-# Enable the Search Activity.
-MOZ_ANDROID_SEARCH_ACTIVITY=1
+# Don't enable the Search Activity.
+# MOZ_ANDROID_SEARCH_ACTIVITY=1

Guide to enabling

To enable a feature behind a build time flag, we merely need to flip the switch in mobile/android/confvars.sh. This can be a conditional flip; for example, to enable only in Nightly:

if test "$NIGHTLY_BUILD"; then
  MOZ_ANDROID_SEARCH_ACTIVITY=1
else
  MOZ_ANDROID_SEARCH_ACTIVITY=
fi

Likewise, to enable only when not in Release or Beta:

if test ! "$RELEASE_BUILD"; then
  MOZ_ANDROID_SEARCH_ACTIVITY=1
else
  MOZ_ANDROID_SEARCH_ACTIVITY=
fi

See https://wiki.mozilla.org/Platform/Channel-specific_build_defines for details on the relevant flags.

July 09, 2014 05:24 PM

July 08, 2014

Nick Alexander

How the Android Eclipse build system integration works

Firefox for Android (Fennec) can be built with Eclipse, but it’s a delicate dance. This post runs through the technical details of what happens, and when, during an Eclipse build.

This is not intended to be a guide to using Eclipse to build Fennec. For such a guide, see https://wiki.mozilla.org/Mobile/Fennec/Android/Eclipse.

Theory

  1. The RecursiveMakeBackend writes Makefile and backend.mk files into the object directory for every directory in the tree. If the corresponding source directory’s moz.build file includes Eclipse project definitions, then the backend.mk includes special "Eclipse-only" recursive make targets, like:

    ANDROID_ECLIPSE_PROJECT_FennecResourcesBranding: .aapt.deps .locales.deps
      $(call py_action,process_install_manifest,\
        --no-remove --no-remove-all-directory-symlinks --no-remove-empty-directories\
        /Users/nalexander/Mozilla/gecko-dev/objdir-droid/android_eclipse/FennecResourcesBranding\
        /Users/nalexander/Mozilla/gecko-dev/objdir-droid/android_eclipse/FennecResourcesBranding.manifest)
    

    These targets are always written (based on the Eclipse projects defined in the moz.build files).

  2. The AndroidEclipseBackend writes Eclipse project files and support files to the top-level android_eclipse directory of the object directory. The directory layout looks something like:

    android_eclipse
      Fennec.manifest
      Fennec
       .classpath
       .externalToolBuilders
       .project
       .settings
       gen
       lint.xml
       project.properties
    

    At this point, the project files are in place, but things are barren — there are no source files, Android resource files, or Android manifest.

  3. The object directory is built and packaged using mach build && mach package. This prepares the C/C++ layer and writes libraries (.so files) that Fennec requires.

  4. The Eclipse project files written include directions for a special builder plugin to run as the first step of every build. Each time Eclipse requests a build (for example, after a file is modified), the builder plugin takes whatever action is required to prepare the Eclipse project for building.

    Currently, the plugin:

    1. checks if anything (interesting) has changed
    2. runs the single recursive make target ANDROID_ECLIPSE_PROJECT_Project written by the recursive make backend (if necessary)
    3. marks Eclipse resources as needing to be refreshed (if necessary).

    After this, the regular Eclipse/Android build steps happen: processing the Android manifest, packaging Android resources, building Java files, etc.

Practice

Let’s dig in to what the plugin really does. All the integration glue is in the ANDROID_ECLIPSE_PROJECT_Project recursive make target. This target is really an aggregate target that does two things: install files and aggregate dependencies.

Install needed files

The target calls the Python process_install_manifest build action to install needed files.

For each Eclipse project, the eclipse backend writes an install manifest file (named Project.manifest) that contains directions for files to copy and symlink into the project directory. When I noted earlier that the project directory was "barren", that was because we were seeing it before this target had run, and specifically before this manifest had been installed.

To compare, let’s execute:

$ mach build $OBJDIR/mobile/android/base/ANDROID_ECLIPSE_PROJECT_Fennec
/usr/bin/make -C /Users/nalexander/Mozilla/gecko-dev/objdir-droid -j8 -s backend.RecursiveMakeBackend
/usr/bin/make -C mobile/android/base -j8 -s ANDROID_ECLIPSE_PROJECT_Fennec
...
From /Users/nalexander/Mozilla/gecko-dev/objdir-droid/android_eclipse/Fennec:\
  Kept 1 existing; Added/updated 8; Removed 0 files and 0 directories.

That last line is the output from installing the manifest; it has populated the Fennec directory:

android_eclipse
  Fennec.manifest
  Fennec
    .classpath
    .externalToolBuilders
    .project
    .settings
    AndroidManifest.xml
    assets -> /Users/nalexander/Mozilla/gecko-dev/objdir-droid/dist/fennec/assets
    gen
    generated -> /Users/nalexander/Mozilla/gecko-dev/objdir-droid/mobile/android/base/generated
    java -> /Users/nalexander/Mozilla/gecko-dev/mobile/android/search/java
    libs
    lint.xml
    project.properties
    res
    src
    thirdparty -> /Users/nalexander/Mozilla/gecko-dev/mobile/android/thirdparty

Add build system dependencies

The target may depend on additional recursive make targets, as specified in the moz.build file using the recursive_make_targets list. So, for example, looking at mobile/android/base/moz.build, I can see that the FennecResourcesGenerated Eclipse project depends on the targets that capture dependencies on the Android manifest and all resources:

# Captures dependencies on Android manifest and all resources.
generated_recursive_make_targets = ['.aapt.deps', '.locales.deps']
generated = add_android_eclipse_library_project('FennecResourcesGenerated')
generated.recursive_make_targets += generated_recursive_make_targets

These additional recursive make targets are defined deeper in the build system: in this case, in mobile/android/base/Makefile.in.

In this way, the project-specific recursive make target both prepares the project directory, and provides a flexible extension point for additional build system dependencies. The custom Eclipse plugin merely invokes this make target (like we did above, with mach build).

An important note about build stages

It’s important to note that the packaging step must happen before the plugin runs: the per-project recursive make targets depend on the build and package artifacts. In the vanilla build system, the one built on mach and recursive make, there are three build stages:

  • configure and build-backend time
  • build (and incremental build) time
  • package time

Due to the unusual requirements of the Android build system, we add two new stages:

  • configure and build-backend time
  • Android build-backend time
  • build (and incremental build) time
  • package time
  • Eclipse incremental build time

At Android build-backend time, we don’t have the artifacts produced at package time. That’s why we have the install manifests detailed above, which process package artifacts lazily; and that’s why the Eclipse projects produced by the Android build-backend look barren. There has been a good deal of confusion about this lazy approach (which I think results from not having the Eclipse plugin installed), manifesting as reports like https://mail.mozilla.org/pipermail/mobile-firefox-dev/2014-July/000788.html.

Conclusion

There are a fair number of moving parts:

  • two Python build backends, that work in concert;
  • staged build targets, that can’t be run at build-backend time;
  • coupled make targets and Eclipse project builders, joined by a custom Eclipse plugin.

And yet, it all basically works. Don’t move too quickly or make any loud noises.

July 08, 2014 05:45 PM

July 07, 2014

William Lachance

Measuring frames per second and animation smoothness with Eideticker

[ For more information on the Eideticker software I'm referring to, see this entry ]

Just wanted to write up a few notes on using Eideticker to measure animation smoothness, since this is a topic that comes up pretty often and I wind up explaining these things repeatedly. ;)

When rendering web content, we want the screen to update something like 60 times per second (typical refresh rate of an LCD screen) when an animation or other change is occurring. When this isn’t happening, there is often a user perception of jank (a.k.a. things not working as they should). Generally we express how well we measure up to this ideal by counting the number of “frames per second” that we’re producing. If you’re reading this, you’re probably already familiar with the concept in outline. If you want to know more, you can check out the wikipedia article which goes into more detail.

At an internal level, this concept matches up conceptually with what Gecko is doing. The graphics pipeline produces frames inside graphics memory, which is then sent to the LCD display (whether it be connected to a laptop or a mobile phone) to be viewed. By instrumenting the code, we can see how often this is happening, and whether it is occurring at the right frequency to reach 60 fps. My understanding is that we have at least some code which does exactly this, though I’m not 100% up to date on how accurate it is.

But even assuming the best internal system monitoring, Eideticker might still be useful, because it measures what actually reaches the screen rather than what Gecko believes it has rendered.

Unfortunately, deriving this sort of information from a video capture is more complicated than you’d expect.

What does frames per second even mean?

Given a set of N frames captured from the device, the immediate solution when it comes to “frames per second” is to just compare frames against each other (e.g. by comparing the value of individual pixels) and then count the ones that are different as “unique frames”. Divide the total number of unique frames by the length of the capture and… voila? Frames per second? Not quite.
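
In code, that naive calculation looks something like the following sketch (my illustration of the idea, not Eideticker’s actual analysis code):

import java.util.Arrays;
import java.util.List;

public final class NaiveFps {
    private NaiveFps() {}

    // Count frames that differ from the previous one, then divide by the
    // capture length to get a naive "unique frames per second" figure.
    public static double uniqueFramesPerSecond(List<byte[]> frames, double captureSeconds) {
        int unique = frames.isEmpty() ? 0 : 1;  // the first frame is always "new"
        for (int i = 1; i < frames.size(); i++) {
            if (!Arrays.equals(frames.get(i), frames.get(i - 1))) {
                unique++;
            }
        }
        return unique / captureSeconds;
    }
}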

First off, there’s the inherent problem that sometimes the expected behaviour of a test is for the screen to be unchanging for a period of time. For example, at the very beginning of a capture (when we are waiting for the input event to be acknowledged) and at the end (when we are waiting for things to settle). Second, it’s also easy to imagine the display remaining static for a period of time in the middle of a capture (say in between gestures in a multi-part capture). In these cases, there will likely be no observable change on the screen and thus the number of frames counted will be artificially low, skewing the frames per second number down.

Measurement problems

Ok, so you might not consider that class of problem that big a deal. Maybe we could just not consider the frames at the beginning or end of the capture. And for pauses in the middle… as long as we get an absolute number at the end, we’re fine right? That’s at least enough to let us know that we’re getting better or worse, assuming that whatever we’re testing is behaving the same way between runs and we’re just trying to measure how many frames hit the screen.

I might agree with you there, but there are further problems that are specific to measuring on-screen performance using a high-speed camera, as we are currently doing with FirefoxOS.

An LCD updates gradually, and not all at once. Remnants of previous frames will remain on screen long past their interval. Take for example these five frames (sampled at 120fps) from a capture of a pan down in the FirefoxOS Contacts application (movie):

sidebyside

Note how, if you look closely, these 5 frames are actually the intersection of *three* separate frames. One with “Adam Card” at the top, another with “Barbara Bloomquist” at the top, then another with “Barbara Bloomquist” even further up. Between each frame, artifacts of the previous one are clearly visible.

Plausible sounding solutions:

Personally the last solution appeals to me the most, although it has the obvious disadvantage of being a “homebrew” metric that no one has ever heard of before, which might make it difficult to use to prove that performance is adequate — the numbers come with a long-winded explanation instead of being something that people immediately understand.

July 07, 2014 04:13 PM

July 06, 2014

Nick Alexander

How to connect Firefox for Android to self-hosted Firefox Account and Firefox Sync servers

Firefox 29 was a huge release. The two largest items were, by most measures, the new Australis Desktop theme, and the new Firefox Accounts sign-in to Firefox Sync. In the rush to land the new Firefox Account sign-in, we pared our feature set aggressively. One of the things that got delayed was support in Firefox for Android for connecting to self-hosted Firefox Account auth servers, and to self-hosted Firefox Sync servers. I’m thrilled to announce that support for such self-hosted servers has just landed in Firefox Nightly, and should make it to release as part of Firefox 33.

Background

Historically, connecting Firefox to a self-hosted Firefox Sync server was — if not easy — at least possible and supported in all products, including Firefox for Android. The new Firefox Accounts system introduces a redesigned, easy-to-use sign-up/sign-in flow that delivers great user value, but it’s complicated to host your own servers. Instead of talking to a single Firefox Sync server, you need to talk to a Firefox Accounts auth server and a Firefox Sync server, both working in tandem to provide your service. Dan Callahan has been leading the effort to clarify the self-hosted server side of the story, and we have work-in-progress documentation explaining how to use self-hosted servers (in Firefox Desktop).

Guide

Firefox for Android 33 lets you

  • specify your Firefox Account and Sync servers before connecting your device to an account, and
  • see what servers your device is talking to after you have connected your device.

Let’s see how to use the new features.

Install Firefox for Android 33

First, let’s install Firefox version 33 (or higher). You can download a recent Firefox Nightly here. Download the fennec-33.0a1.multi.android-arm.apk file and install it on your device. Open Firefox.

Install the fxa-custom-server-addon Firefox for Android add-on

Tap back several times to return to Firefox. We need to install a custom Firefox for Android add-on called fxa-custom-server-addon. This add-on lets us specify self-hosted servers [1].

Tap the link above in Firefox for Android, and click the (hopefully green!) Add to Firefox button. Since this add-on is hosted at the Firefox for Android Add-ons site, you should not see a warning when you install. (If you do, Allow the installation, and then Install the downloaded add-on — and please let me know.)

After you see a toast saying that the add-on is installed, tap the menu. You should see a new Custom Firefox Account menu item, right at the bottom.

Launch the Sync set-up flow with self-hosted server URLs

From the menu, tap the new Custom Firefox Account menu item. Now you can enter your self-hosted server URLs! The Save button closes the dialog but keeps your entered URLs for next time; it makes it easy to copy-and-paste your URLs from elsewhere. When you’ve got your URLs correct, tap Launch Setup. (It can help to flip your device to landscape mode.)

You should skip right past the Get Started screen and go directly to the sign-up/sign-in flow. Most importantly, you should see big boxes loudly announcing that you are using non-standard server URLs [2]. If you tap Already have an account? Sign in, you’ll see the sign-in flow, still with the non-standard server URL boxes.

The server URLs I’ve entered are Mozilla’s staging test servers. Our quality assurance team uses these servers to verify that new versions of the Sync client and server are working before they get released. The accounts and data on Mozilla’s staging servers are frequently deleted, so you shouldn’t use these staging test servers. (You should either self-host your own servers — that’s why you’re here — or use Mozilla’s standard production servers.)

I have already created an account on Mozilla’s staging test servers, so I’m going to sign in to it. I tap Sign in, and I see that my account has already been verified. I tap Back to browsing, and I’m back in Firefox, with my shiny new Firefox Account ready to Sync.

Note: When this was written, there was a bug with custom servers with non-standard ports (in Firefox for Android 33 and 34 only). Bug 1046020 fixes an issue where custom servers with non-standard ports (i.e., servers with a non-80 port for HTTP or non-443 port for HTTPS) failed to sync. The failure looked like an authentication error when requesting a token from the token server. The fix is in Nightly 34, has been uplifted to Firefox 33, and should be present in Aurora 33 by the end of August. Many thanks to user Ben Curtis for reporting and fixing this issue!

Verify that Firefox Sync is using self-hosted server URLs

Let’s verify that our device is connected to a Firefox Account and is healthy by inspecting our server URLs in the Firefox Account settings. Tap the menu, and then Settings > Sync. Observe that we have two new sections, labeled Account server and Sync server. You should see the self-hosted server URLs you entered in the sign-up/sign-in flow [3].

You can also watch the adb logcat while syncing to see what URLs are being used [4].

The fxa-custom-server-addon has done its job, and you can safely remove it entirely: tap the menu and then select Tools > Add-ons to uninstall it.

Conclusion

Firefox 33 allows you to

  • specify self-hosted Firefox Account and Sync servers when connecting a device, and
  • see what servers your device is connected to.

We hope Firefox for Android works well with your self-hosted servers!

Footnotes

[1] You can see the fxa-custom-server-addon’s source code, and you can write your own add-on using the brand-new Accounts.jsm add-on API that we built to support these features.
[2] The boxes are red because we absolutely don’t want a new user to accidentally use a non-standard, non-Mozilla server when they didn’t choose to do so explicitly. If changing the server is too easy, or the change is not visible, it opens up a possible attack on the user’s private Sync data.
[3]

Changing the (self-hosted) servers after you’ve connected your device is a non-goal. If you want to change servers, delete your account using the Android Settings App, and create a new one with the updated server URLs.

Why is this? There are two main reasons.

  • Sync on Android maintains a number of caches and partial sync states to provide an efficient sync experience. It’s very difficult to sync smoothly through a change of servers.

  • Sync on Android is not like Desktop. It’s not written in C++ and JavaScript like the rest of Firefox Desktop; as my colleague Richard says,

    It’s best to think of Sync on Android as being a separate pure-Java application that’s bundled with Firefox. Sync doesn’t use the same network stack, or any of the Gecko features that you might be used to from desktop.

This means that we can’t just edit a few preferences using about:config or some JavaScript. We would have to provide a Java user interface (or a Gecko-to-Java bridge that respects Sync’s runtime lifecycle) to allow such changes, and that is an ongoing maintenance burden.

[4]

The sure-fire way to know what Sync on Android is really doing is to observe the Android device log using adb logcat.

You’ll want to bump your log-level:

adb shell setprop log.tag.FxAccounts VERBOSE

Then, you can observe the log using:

adb logcat | grep FxAccounts

It’s best to observe the log while you force a sync from the Android Settings App. You should see output like:

D FxAccounts(...) fennec :: BaseResource :: HTTP GET https://token.stage.mozaws.net/1.0/sync/1.5
...
D FxAccounts(...) fennec :: BaseResource :: HTTP GET https://sync-4-us-east-1.stage.mozaws.net/...

See How to file a good Android Sync bug for details.

Changes

July 06, 2014 03:01 AM

July 03, 2014

Geoff Brown

Firefox for Android Performance Measures – June check-up

My monthly review of Firefox for Android performance measurements. June highlights:

- Talos values tracked here switch to Android 4.0, rather than Android 2.2

- Talos regressions in tcheck2 and tsvgx

- small regression in time to throbber stop

- Eideticker still not reporting results.

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Native Fennec. In all of my previous posts, this section has tracked Talos for Android 2.2 Opt. This month, and going forward, I switch to Android 4.0 Opt, since the Android 2.2 Opt tests are being phased out. The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcanvasmark

This test is not currently run on Android 4.0.

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

tcheck2

6 (start of period) – 12 (end of period)

Regression of June 17 – bug 1026742.

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.
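
For the curious, here is a rough sketch of how a score like that could be computed; the exact aggregation (summing the squared delays that exceed the 25 ms threshold) is my guess from the description above, not taken from the Talos source:

THRESHOLD_MS = 25

def panning_score(frame_delays_ms):
    # Square each frame delay that exceeds the threshold and sum them, so a
    # few long pauses dominate the score. Assumed aggregation; see above.
    return sum(d * d for d in frame_delays_ms if d > THRESHOLD_MS)

print(panning_score([16, 17, 16, 40, 16, 120, 16]))  # 40*40 + 120*120 = 16000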

50000 (start of period) – 50000 (end of period)

There was a large temporary regression between June 12 and June 14 – bug 1026798.

tprovider

Performance of the history and bookmarks provider. Reports the time (ms) to perform a group of database operations. Lower values are better.

520 (start of period) – 520 (end of period).

tsvgx

An svg-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode thus reflecting the maximum rendering throughput of each test. The reported value is the page load time, or, for animations/iterations – overall duration the sequence/animation took to complete. Lower values are better.

6100 (start of period) – 6300 (end of period).

Regression of June 16 – bug 1026551.

tp4m

Generic page load test. Lower values are better.

940 (start of period) – 940 (end of period).

ts_paint

Startup performance test. Lower values are better.

3600 (start of period) – 3600 (end of period).

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

throbstart

 

throbstop

“Time to throbber start” looks very flat for all devices, but “Time to throbber stop” has a slight upward trend, especially for nexus-s-2 — bug 1032249.

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

Eideticker results are still not available. We’ll check back at the end of July.


July 03, 2014 02:39 AM

Nick Alexander

Adding assets to the Fennec APK file

The Fennec Android package file includes static assets in the APK root, and in the assets directory. User jkraml recently asked how to add new assets, and we should write down how it works.

First, a word on how packaging (and re-packaging!) works. There are three phases:

  • build: the per-directory build targets (including the libs target) put files in place in the object directory;
  • stage: the packager copies the files listed in mobile/android/installer/package-manifest.in into a staging area;
  • package: the staged files are packed into the final APK.

Adding a file to the APK root

As an example, I’m going to take the build actions in mobile/android/base. The links below lead to a few example commits hosted on github.

First, create mobile/android/base/example.cert.

Then:

  • build: as part of the libs target, install the new file;
  • stage: add the new file to mobile/android/installer/package-manifest.in;
  • package: add the new file to $(DIST_FILES) so that it gets packed into the APK root.

Then run:

mach build-backend &&
mach build mobile/android/base &&
mach package

and you should see your new file in the root of $OBJDIR/dist/fennec-*.apk.
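
Since an APK is just a ZIP archive, a quick way to double-check that the file really landed is to list the package contents, for example with Python’s zipfile module (the object-directory path below is only a placeholder; adjust it to your build):

import glob
import zipfile

# An APK is a ZIP archive, so we can inspect it directly. The objdir path
# here is a placeholder; point the glob at your own object directory.
apk_path = glob.glob("objdir-android/dist/fennec-*.apk")[0]

with zipfile.ZipFile(apk_path) as apk:
    names = apk.namelist()
    print("example.cert in APK root:", "example.cert" in names)
    print("first few assets entries:", [n for n in names if n.startswith("assets/")][:5])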

Adding an entire directory to the APK root

Andre Natal asked how to add a directory to the APK root. It’s a little trickier than an individual file, mostly because installing a directory into the package staging area requires care. However, you can see that the form is very similar.

Adding a file to the assets directory

This is a tiny bit trickier:

  • build: exactly the same as before;
  • stage: similar, but observe that the new file is added in a section with destdir="assets";
  • package: a new list of files packed into assets/.

Conclusion

It’s not hard to add files to the Fennec APK root or the assets directory. As always, you can reach me (nalexander) directly on irc.mozilla.org, channel #mobile; on the Twitters; and via the public Fennec mailing list.

Updates

  • Monday 18 August 2014: Added section and commit showing how to add a directory to the APK root.

July 03, 2014 12:42 AM

June 27, 2014

William Lachance

End of Q2 Eideticker update: Flame tests, future plans

[ For more information on the Eideticker software I'm referring to, see this entry ]

Just wanted to give an update on where Eideticker is at the end of Q2 2014. The big news is that we’ve started to run startup tests against the Flame, the results of which are starting to appear on the dashboard:

eideticker-contacts-flame [link]

It is expected that these tests will provide a useful complement to the existing startup tests we’re running with b2gperf, in particular answering the “is this regression real?” question.

Pending work for Q3:

The above isn’t an exhaustive list: there’s much more that we have in mind for the future that’s not yet scheduled or defined well (e.g. get Eideticker reporting to Treeherder’s new performance module). If you have any questions or feedback on anything outlined above I’d love to hear it!

June 27, 2014 09:23 PM

June 25, 2014

Nick Alexander

Better Fennec builds with an Eclipse plugin

I’ve been working on making building Fennec better with Eclipse.

tl;dr: soon, the output of mach build-backend -b=AndroidEclipse will require a new plugin to work. The steps to get started are the same as before, except that you’ll need to install a new Eclipse plugin. The plugin can be installed from the Eclipse update site at http://people.mozilla.org/~nalexander/eclipse/update-site/.

I’ve taken some of my colleague Brian Nicholson’s work and modified it to provide a generic “run this command in Eclipse” builder that gives us faster, more predictable builds, with better error reporting. This is about to land as Bug 1029232.

What the patches on that bug do is replace the Eclipse "ExternalToolBuilder" invocations that we were using with a brand-new Eclipse plugin (plugin, feature, and update site hosted at https://github.com/ncalexan/fennec-eclipse). The plugin handles ignoring superfluous build requests; these were what led to the long (or even infinite!) build cycles, where Eclipse would continually build and rebuild projects. The plugin also marks errors in the Problems view, and shows output in the Android console output log.

I’ve uploaded a few mid-length screencasts showing how to install the plugin and demonstrating how the plugin is faster for editing preprocessed resources. The last two are of general interest: one shows a few advanced features of the Eclipse debugger and Android plugin; the other shows some of the Android layout features of the Android plugin.

As always, thanks to Brian Nicholson and my testers, especially Mike Comella and Richard Newman.

June 25, 2014 04:32 AM

June 11, 2014

William Lachance

Managing test manifests: ManifestDestiny -> manifestparser

Just wanted to make a quick announcement that ManifestDestiny, the python package we use internally here at Mozilla for declaratively managing lists of tests in Mochitest and other places, has been renamed to manifestparser. We kept the versioning the same (0.6), so the only thing you should need to change in your python package dependencies is a quick substitution of “ManifestDestiny” with “manifestparser”. We will keep ManifestDestiny around indefinitely on pypi, but only to make sure old stuff doesn’t break. New versions of the software will only be released under the name “manifestparser”.
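
If you haven’t used it before, basic usage looks roughly like the sketch below; the details (the TestManifest constructor arguments and the active_tests() keywords) are from memory and may differ slightly between versions, so treat it as a starting point rather than a reference:

# Rough sketch of reading a test manifest with manifestparser. The exact
# keyword arguments may vary between versions; check the package docs.
from manifestparser import TestManifest

manifest = TestManifest(manifests=["mochitest.ini"], strict=False)

# active_tests() filters out tests whose skip-if/run-if expressions don't
# match the values you pass in (platform, debug, etc.).
for test in manifest.active_tests(exists=False, os="android"):
    print(test["name"], test.get("skip-if", ""))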

Quick history lesson: “Manifest destiny” refers to a philosophy of exceptionalism and expansionism that was widely held by American settlers in the 19th century. The concept is considered offensive by some, as it was used to justify displacing and dispossessing Native Americans. Wikipedia’s article on the subject has a good summary if you want to learn more.

Here at Mozilla Tools & Automation, we’re most interested in creating software that everyone can feel good about depending on, so we agreed to rename it. When I raised this with my peers, there were no objections. I know these things are often the source of much drama in the free software world, but there’s really none to see here.

Happy manifest parsing!

June 11, 2014 02:44 PM

June 06, 2014

Mark Finkle

Firefox for Android: Casting videos and Roku support – Ready to test in Nightly

Firefox for Android Nightly builds now support casting HTML5 videos from a web page to a TV via a connected Roku streaming player. Using the system is simple, but it does require you to install a viewer application on your Roku device. Firefox support for the Roku viewer and the viewer itself are both currently pre-release. We’re excited to invite our Nightly channel users to help us test these new features, share feedback and file any bugs so we can continue to make improvements to performance and functionality.

Setup

To begin testing, first you’ll need to install the viewer application to your Roku. The viewer app, called Firefox for Roku Nightly, is currently a private channel. You can install it via this link: Firefox Nightly

Once installed, try loading this test page into your Firefox for Android Nightly browser: Casting Test

When Firefox has discovered your Roku, you should see the Media Control Bar with Cast and Play icons:

casting-onload

The Cast icon on the left of the video controls allows you to send the video to a device. You can also long-tap on the video to get the context menu, and cast from there too.

Hint: Make sure Firefox and the Roku are on the same Wifi network!

Once you have sent a video to a device, Firefox will display the Media Control Bar at the bottom of the application. This allows you to pause, play and close the video. You don’t need to stay on the original web page either. The Media Control Bar will stay visible as long as the video is playing, even as you change tabs or visit new web pages.

fennec-casting-pageaction-active

You’ll notice that Firefox displays an “active casting” indicator in the URL Bar when a video on the current web page is being cast to a device.

Limitations and Troubleshooting

Firefox currently limits casting to HTML5 video in H264 format. This is one of the formats most easily handled by Roku streaming players. We are working on other formats too.

Some web sites hide or customize the HTML5 video controls and some override the long-tap menu too. This can make starting to cast difficult, but the simple fallback is to start playing the video in the web page. If the video is H264 and Firefox can find your Roku, a “ready to cast” indicator will appear in the URL Bar. Just tap on that to start casting the video to your Roku.

If Firefox does not display the casting icons, it might be having a problem discovering your Roku on the network. Make sure your Android device and the Roku are on the same Wifi network. You can load about:devices into Firefox to see what devices Firefox has discovered.

This is a pre-release of video casting support. We need your help to test the system. Please remember to share your feedback and file any bugs. Happy testing!

June 06, 2014 03:45 PM

May 31, 2014

Mark Finkle

Firefox for Android: Your Feedback Matters!

Millions of people use Firefox for Android every day. It’s amazing to work on a product used by so many people. Unsurprisingly, some of those people send us feedback. We even have a simple system built into the application to make it easy to do. We have various systems to scan the feedback and look for trends. Sometimes, we even manually dig through the feedback for a given day. It takes time. There is a lot.

Your feedback is important, and I thought I’d point out a few recent features and fixes that were directly influenced by feedback:

Help Menu
Some people have a hard time discovering features or are not aware that Firefox supports some of the features they want. To make it easier to learn more about Firefox, we added a simple Help menu which directs you to SUMO, our online support system.

Managing Home Panels
Not everyone loves the Firefox Homepage (I do!), or more specifically, they don’t like some of the panels. We added a simple way for people to control the panels shown in Firefox’s Homepage. You can change the default panel. You can even hide all the panels. Use Settings > Customize > Home to get there.

Home panels

Improve Top Sites
The Top Sites panel in the Homepage is used by many people. At the same time, other people find that the thumbnails can reveal a bit too much of their browsing to others. We recently added support for respecting sites that might not want to be snapshot into thumbnails. In those cases, the thumbnail is replaced with a favicon and a favicon-influenced background color. The Facebook and Twitter thumbnails show the effect below:

fennec-private-thumbnails

We also added the ability to remove thumbnails using the long-tap menu.

Manage Search Engines
People also like to be able to manage their search engines. They like to switch the default. They like to hide some of the built-in engines. They like to add new engines. We have a simple system for managing search engines. Use Settings > Customize > Search to get there.

fennec-search-mgr

Clear History
We have a lot of feedback from people who want to clear their browsing history quickly and easily. We are not sure if the Settings > Privacy > Clear private data method is too hard to find or too time consuming to use, but it’s apparent people need other methods. We added a quick access method at the bottom of the History panel in the Homepage.

clear-history

We are working on a Clear data on exit approach too.

Quickly Switch to a Newly Opened Tab
When you long-tap on a link in a webpage, you get a menu that allows you to Open in New Tab or Open in New Private Tab. Both of those open the new tab in the background. Feedback indicates that some people really want to switch to the new tab. We already show an Android toast to let you know the tab was opened. Now we add a button to the toast allowing you to quickly switch to the tab too.

switch-to-tab

Undo Closing a Tab
Closing tabs can be awkward for people. Sometimes the [x] is too easy to hit by mistake or swiping to close is unexpected. In any case, we added the ability to undo closing a tab. Again, we use a button toast.

undo-close-tab

Offer to Setup Sync from Tabs Tray
We feel that syncing your desktop and mobile browsing data makes browsing on mobile devices much easier. Figuring out how to setup the Sync feature in Firefox might not be obvious. We added a simple banner to the Homepage to let you know the feature exists. We also added a setup entry point in the Sync area of the Tabs Tray.

fennec-setup-sync

We’ll continue to make changes based on your feedback, so keep sending it to us. Thanks for using Firefox for Android!

May 31, 2014 03:28 AM

May 30, 2014

Geoff Brown

Firefox for Android Performance Measures – May check-up

My monthly review of Firefox for Android performance measurements. May highlights:

- slight regressions in tcanvasmark and trobopan

- small regression in time to throbber stop

- Eideticker still not reporting results.

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Native Fennec (Android 2.2 opt). The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcanvasmark

This test runs the third-party CanvasMark benchmark suite, which measures the browser’s ability to render a variety of canvas animations at a smooth framerate as the scenes grow more complex. Results are a score “based on the length of time the browser was able to maintain the test scene at greater than 30 FPS, multiplied by a weighting for the complexity of each test type”. Higher values are better.
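
As a toy illustration of that scoring scheme (my reading of the quoted description, not CanvasMark’s actual code):

def canvasmark_style_score(scenes):
    # scenes: list of (seconds_above_30fps, complexity_weight) pairs.
    # Weighted sum per the quoted description; not the benchmark's real code.
    return sum(seconds * weight for seconds, weight in scenes)

print(canvasmark_style_score([(12.0, 100), (8.5, 250), (3.0, 400)]))  # 4525.0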

6300 (start of period) – 5700 (end of period).

Regression of May 12 – bug 1009646.

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

9 (start of period) – 9 (end of period)

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.

110000 (start of period) – 130000 (end of period)

This regression just happened today and has not triggered a Talos alert; I don’t have a bug number yet.

tprovider

Performance of the history and bookmarks provider. Reports the time (ms) to perform a group of database operations. Lower values are better.

425 (start of period) – 425 (end of period).

tsvgx

An svg-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode thus reflecting the maximum rendering throughput of each test. The reported value is the page load time, or, for animations/iterations – overall duration the sequence/animation took to complete. Lower values are better.

7300 (start of period) – 7300 (end of period).

tp4m

Generic page load test. Lower values are better.

750 (start of period) – 750 (end of period).

ts_paint

Startup performance test. Lower values are better.

3600 (start of period) – 3600 (end of period).

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

The improvement on May 2 was due to a change in the test setup (sut vs adb).

The small regression of May 11 is tracked in bug 1018463.

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

Eideticker results are still not available. We’ll check back at the end of June.


May 30, 2014 10:31 PM

May 22, 2014

Kartikaya Gupta

Cracking libxul

For a while now I've been wanting to take a look inside libxul to see why it's so big. In particular I wanted to know what the impact of using templates so heavily in our code was - things like nsTArray and nsRefPtr are probably used on hundreds of different types throughout our codebase. Last night I was having trouble sleeping so I decided to crack open libxul and see if I could figure it out. I didn't persist enough to get the exact answers I wanted, but I got close enough. It was also kind of fun and I figured I'd post about it, partly as an educational thing and partly to inspire others to dig deeper into this.

First step: build libxul. I had a debug build on my Linux machine with recent gecko, so I just used the libxul.so from that.

Second step: disassemble libxul.

objdump -d libxul.so > libxul.disasm


Although I've looked at disassemblies before, I had to look at the file in vim a little bit to figure out the best way to parse it to get what I wanted, which was the size of every function defined in the library. This turned out to be a fairly simple awk script.

Third step: get function sizes. (snippet below is reformatted for easier reading)

awk 'BEGIN { addr=0; label="";}
     /:$/ && !/Disassembly of section/ { naddr = sprintf("%d", "0x" $1);
                                         print (naddr-addr), label;
                                         addr=naddr;
                                         label=$2 }'
    libxul.disasm > libxul.sizes


For those of you unfamiliar with awk, this identifies every line that ends in a colon but doesn't contain the text "Disassembly of section" (I determined this would be sufficient to match the line that starts off every function disassembly). It then takes the address (which is in hex in the dump), converts it to decimal, and subtracts the address of the previous matching line from it, which gives the size of the previous function. Finally it dumps out the size/name pairs. I inspected the file to make sure it looked ok, and removed a bad line at the top of the file (easier to fix it manually than fix the awk script).

Now that I had the size of each function, I did a quick sanity check to make sure it added up to a reasonable number:

awk '{ total += $1 } END { print total }' libxul.sizes
40263032


The value spit out is around 40 megs. This seemed to be in the right order of magnitude for code in libxul so I proceeded further.

Fourth step: see what's biggest!

sort -rn libxul.sizes | head -n 20
57984 <_ZL9InterpretP9JSContextRN2js8RunStateE>:
43798 <_ZN20nsHtml5AttributeName17initializeStaticsEv>:
41614 <_ZN22nsWindowMemoryReporter14CollectReportsEP25nsIMemoryReporterCallbackP11nsISupports>:
39792 <_Z7JS_Initv>:
32722 <vp9_fdct32x32_sse2>:
28674 <encode_mcu_huff>:
24365 <_Z7yyparseP13TParseContext>:
21800 <_ZN18nsHtml5ElementName17initializeStaticsEv>:
20558 <_ZN7mozilla3dom14PContentParent17OnMessageReceivedERKN3IPC7MessageE.part.1247>:
20302 <_ZN16nsHtml5Tokenizer9stateLoopI23nsHtml5ViewSourcePolicyEEiiDsiPDsbii>:
18367 <sctp_setopt>:
17900 <vp9_find_best_sub_pixel_comp_tree>:
16952 <_ZN7mozilla3dom13PBrowserChild17OnMessageReceivedERKN3IPC7MessageE>:
16096 <vp9_sad64x64x4d_sse2>:
15996 <_ZN7mozilla12_GLOBAL__N_119WebGLImageConverter3runILNS_16WebGLTexelFormatE17EEEvS3_NS_29WebGLTexelPremultiplicationOpE>:
15594 <_ZN7mozilla12_GLOBAL__N_119WebGLImageConverter3runILNS_16WebGLTexelFormatE16EEEvS3_NS_29WebGLTexelPremultiplicationOpE>:
14963 <vp9_idct32x32_1024_add_sse2>:
14838 <_ZN7mozilla12_GLOBAL__N_119WebGLImageConverter3runILNS_16WebGLTexelFormatE4EEEvS3_NS_29WebGLTexelPremultiplicationOpE>:
14792 <_ZN7mozilla12_GLOBAL__N_119WebGLImageConverter3runILNS_16WebGLTexelFormatE21EEEvS3_NS_29WebGLTexelPremultiplicationOpE>:
14740 <_ZN16nsHtml5Tokenizer9stateLoopI19nsHtml5SilentPolicyEEiiDsiPDsbii>:


That output looks reasonable. Top of the list is something to do with interpreting JS, followed by some HTML name static initializer thing. Guessing from the symbol names it seems like everything there would be pretty big. So far so good.

Fifth step: see how much space nsTArray takes up. As you can see above, the function names in the disassembly are mangled, and while I could spend some time trying to figure out how to demangle them it didn't seem particularly worth the time. Instead I just looked for symbols that started with nsTArray_Impl which by visual inspection seemed to match what I was looking for, and would at least give me a ballpark figure.

grep "<_ZN13nsTArray_Impl" libxul.sizes | awk '{ total += $1 } END { print total }'
377522


That's around 377k of stuff just to deal with nsTArray_Impl functions. You can compare that to the total libxul number and the largest functions listed above to get a sense of how much that is. I did the same for nsRefPtr and got 92k. Looking for ZNSt6vector, which I presume is the std::vector class, returned 101k.

That more or less answered the questions I had and gave me an idea of how much space was being used by a particular template class. I tried a few more things like grouping by the first 20 characters of the function name and summing up the sizes, but it didn't give particularly useful results. I had hoped it would approximate the total size taken up by each class but because of the variability in name lengths I would really need a demangler before being able to get that.
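
As a possible follow-up (not something from the original experiment), piping the symbols through c++filt and grouping the demangled names would get most of the way to those per-class totals. A rough sketch:

import subprocess
from collections import defaultdict

# Rough follow-on sketch: demangle the symbol names in libxul.sizes with
# c++filt, then sum function sizes grouped by a crude prefix of the
# demangled name.

sizes = []
mangled = []
with open("libxul.sizes") as f:
    for line in f:
        parts = line.split(None, 1)
        if len(parts) != 2:
            continue
        sizes.append(int(parts[0]))
        mangled.append(parts[1].strip().strip("<>:"))

# c++filt reads one mangled name per line and prints the demangled form.
demangled = subprocess.run(
    ["c++filt"], input="\n".join(mangled), capture_output=True, text=True
).stdout.splitlines()

totals = defaultdict(int)
for size, name in zip(sizes, demangled):
    # Group on everything before the first '<' or '(' -- crude, but enough
    # to lump all the nsTArray_Impl<...> instantiations together.
    key = name.split("(")[0].split("<")[0]
    totals[key] += size

for key, total in sorted(totals.items(), key=lambda kv: -kv[1])[:20]:
    print(total, key)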

May 22, 2014 02:02 PM

May 20, 2014

Chris Peterson

JS Work Week 2014

Mozilla’s SpiderMonkey (JS) and Low-Level Tools engineering teams convened at Mozilla’s chilly Toronto office in March to plan our 2014 roadmap.

To start the week, we reviewed Mozilla’s 2014 organizational goals. If a Mozilla team is working on projects that do not advance the organization’s stated goals, then something is out of sync. The goals where the JS team can most effectively contribute are “Scale Firefox OS” (sell 10M Firefox OS phones) and “Get Firefox on a Growth Trajectory” (increase total users and hours of usage). Knowing that Mozilla plans to sell 10M Firefox OS phones helps us prioritize optimizations for Tarako (the $25 Firefox OS phone) over larger devices like Firefox OS tablets, TVs, or dishwashers.

Security was a hot topic after Mozilla’s recent beating in Pwn2Own 2014. Christian Holler (“decoder”) and Gary Kwong gave presentations on OOM and Windows fuzzing, respectively. Bill McCloskey discussed the current status of Electrolysis (e10s), Firefox’s multiprocess browser architecture that will reduce UI jank and add sandboxing to contain security exploits. e10s is currently available for testing in the Nightly channel; just select “File > New e10s Window” to open a new e10s window. (This works out of the box on OS X today, but requires an OMTC pref change on Windows and Linux.)

The ES6 spec is feature-frozen and should be signed off by the end of 2014. Jason Orendorff asked for help implementing remaining ES6 features like Modules and let/const scoping. Proposed improvements to Firefox’s web developer tools included live editing of code in the JS debugger and exposing JIT optimization feedback.

Thinker Lee and Ting-Yuan Huang, from Mozilla’s Firefox OS team in Taipei, presented some of the challenges they’ve faced with Tarako, a Firefox OS phone with only 128 MB RAM. They’re using zram to compress unused memory pages instead of paging them to flash storage. Thinker and Ting-Yuan had suggestions for tuning SpiderMonkey’s GC to avoid problems where the GC runs in background apps or inadvertently touches compressed zram pages.

Till Schneidereit led a brainstorming session about improving SpiderMonkey’s embedding API. Ideas included promoting SpiderMonkey as a scripting language solution for game engines (like 0 A.D.) or revisiting SpiderNode, a 2012 experiment to link Node.js with SpiderMonkey instead of V8. SpiderNode might be interesting for our testing or to Node developers who would like to use SpiderMonkey’s more extensive support for ES6 features or remote debugging tools. ES6 on the server doesn’t have the browser compatibility limitations that front-end web development does. The meeting notes and further discussion continued on the SpiderMonkey mailing list. New Mozilla contributor Sarat Adiraj soon posted his patches to revive SpiderNode in bug 1005411.

For the work week’s finale, Mozilla’s GC developers Terrence Cole, Steve Fink, and Jon Coppeard landed their generational garbage collector (GGC), a major redesign of SpiderMonkey’s GC. GGC will improve JS performance and lay the foundation for implementing a compacting GC to reduce JS memory usage later this year. GGC is riding the trains and should ship in Firefox 31 (July 2014).

May 20, 2014 06:53 AM

May 10, 2014

Fennec Nightly News

First-run Gets a Facelift

Nightly now has a cleaner, fresher looking first-run appearance. Compare the old (top) and new (bottom) looks:

May 10, 2014 04:41 PM

May 08, 2014

William Lachance

mozregression: New maintainer, issues tracked in bugzilla

Just wanted to give some quick updates on mozregression, your favorite regression-finding tool for Firefox:

  1. I moved all issue tracking in mozregression to bugzilla from github issues. Github unfortunately doesn’t really scale to handle notifications sensibly when you’re part of a large organization like Mozilla, which meant many problems were flying past me unseen. File your new bugs in bugzilla, they’re now much more likely to be acted upon.
  2. Sam Garrett has stepped up to be co-maintainer of the project with me. He’s been doing a great job whacking out a bunch of bugs and keeping things running reliably, and it was time to give him some recognition and power to keep things moving forward. :)
  3. On that note, I just released mozregression 0.17, which now shows the revision number when running a build (a request from the graphics team, bug 1007238) and handles respins of nightly builds correctly (bug 1000422). Both of these were fixed by Sam.

If you’re interested in contributing to Mozilla and are somewhat familiar with python, mozregression is a great place to start. The codebase is quite approachable and the impact will be high — as I’ve found out over the last few months, people all over the Mozilla organization (managers, developers, QA …) use it in the course of their work and it saves tons of their time. A list of currently open bugs is here.

May 08, 2014 10:31 PM