Planet Firefox Mobile

September 16, 2014

Lucas Rocha

Introducing Probe

We’ve all heard of the best practices regarding layouts on Android: keep your view tree as simple as possible, avoid multi-pass layouts high up in the hierarchy, etc. But the truth is, it’s pretty hard to see what’s actually going on in your view tree in each platform traversal (measure → layout → draw).

We’re well served with developer options for tracking graphics performance—debug GPU overdraw, show hardware layers updates, profile GPU rendering, and others. However, there is a big gap in terms of development tools for tracking layout traversals and figuring out how your layouts actually behave. This is why I created Probe.

Probe is a small library that allows you to intercept view method calls during Android’s layout traversals, e.g. onMeasure(), onLayout(), and onDraw(). Once a method call is intercepted, you can either do extra things on top of the view’s original implementation or completely override the method on the fly.

Using Probe is super simple. All you have to do is implement an Interceptor. Here’s an interceptor that completely overrides a view’s onDraw(). Calling super.onDraw() would call the view’s original implementation.

public class DrawGreen extends Interceptor {
    private final Paint mPaint;

    public DrawGreen() {
        mPaint = new Paint();
        mPaint.setColor(Color.GREEN);
    }

    @Override
    public void onDraw(View view, Canvas canvas) {
        canvas.drawPaint(mPaint);
    }
}
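
Probe also supports the other case mentioned above: running the view’s original implementation and then doing extra work on top of it. As a minimal sketch (the class name and alpha value here are made up for illustration), an interceptor can call super first and then draw over the result:

public class TintGreen extends Interceptor {
    private final Paint mPaint;

    public TintGreen() {
        mPaint = new Paint();
        mPaint.setColor(Color.GREEN);
        mPaint.setAlpha(60); // Translucent, so the original content stays visible.
    }

    @Override
    public void onDraw(View view, Canvas canvas) {
        // Run the view's original drawing first...
        super.onDraw(view, canvas);
        // ...then tint the result.
        canvas.drawPaint(mPaint);
    }
}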

Then deploy your Interceptor by inflating your layout with a Probe:

Probe probe = new Probe(this, new DrawGreen(), new Filter.ViewId(R.id.view2));
View root = probe.inflate(R.layout.main_activity, null);

Just to give you an idea of the kind of things you can do with Probe, I’ve already implemented a couple of built-in interceptors. OvermeasureInterceptor tints views according to the number of times they were measured in a single traversal, i.e. the equivalent of overdraw, but for measurement.

LayoutBoundsInterceptor is equivalent to Android’s “Show layout bounds” developer option. The main difference is that you can show bounds only for specific views.
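
Deploying a built-in interceptor follows the same pattern as the example above. A hypothetical sketch, showing layout bounds for a single view only (I’m assuming LayoutBoundsInterceptor has a no-argument constructor; check the project’s README for the actual signature):

// Assumption: no-argument constructor; the real API may differ.
Probe probe = new Probe(this, new LayoutBoundsInterceptor(), new Filter.ViewId(R.id.view2));
View root = probe.inflate(R.layout.main_activity, null);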

Under the hood, Probe uses Google’s DexMaker to generate dynamic View proxies during layout inflation. The stock ProxyBuilder implementation was not good enough for Probe because I wanted to avoid using reflection entirely after the proxy classes were generated. So I created a specialized View proxy builder that generates proxy classes tailored for Probe’s use case.

This means Probe takes longer than a plain LayoutInflater to inflate layout resources, but there’s no use of reflection after inflation, so your views should perform the same. For now, Probe is meant to be a developer tool only and I don’t recommend using it in production.
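
One simple way to honor that recommendation is to route inflation through Probe only in debug builds and fall back to the stock inflater otherwise. A sketch, assuming a Gradle-generated BuildConfig and an Activity context:

View root;
if (BuildConfig.DEBUG) {
    // Debug builds: inflate through Probe so the interceptor runs.
    Probe probe = new Probe(this, new DrawGreen(), new Filter.ViewId(R.id.view2));
    root = probe.inflate(R.layout.main_activity, null);
} else {
    // Release builds: use the regular LayoutInflater.
    root = getLayoutInflater().inflate(R.layout.main_activity, null);
}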

The code is available on GitHub. As usual, contributions are very welcome.

September 16, 2014 10:32 AM

September 15, 2014

William Lachance

mozregression 0.24

I just released mozregression 0.24. This would be a good time to note some of the user-visible fixes / additions that have gone in recently:

  1. Thanks to Sam Garrett, you can now specify a branch other than inbound to get finer-grained regression ranges. For example, if you’re pretty sure a regression occurred on fx-team, you can do something like:

    mozregression --inbound-branch fx-team -g 2014-09-13 -b 2014-09-14

  2. Fixed a bug where we could get an incorrect regression range (bug 1059856). Unfortunately the root cause of the bug is still open (it’s a bit tricky to match mozilla-central commits to those of other branches), but I think this most recent fix should make things work in 99.9% of cases. Let me know if I’m wrong.
  3. Thanks to Julien Pagès, we now download the inbound build metadata in parallel, which speeds up inbound bisection quite significantly

If you know a bit of python, contributing to mozregression is a great way to have a high impact on Mozilla. Many platform developers use this project in their day-to-day work, but there’s still lots of room for improvement.

September 15, 2014 10:02 PM

September 12, 2014

Geoff Brown

Running my own AutoPhone

AutoPhone is a brilliant platform for running automated tests on physical mobile devices.

:bc maintains an AutoPhone instance running startup performance tests (aka “S1/S2 tests” or “throbber start/stop tests”) on a small farm of test phones; those tests run against all of our Firefox for Android builds and results are reported to PhoneDash, available for viewing at http://phonedash.mozilla.org/.

I have used phonedash.mozilla.org for a long time now, and reported regressions in bugs and in my monthly “Performance Check-up” posts, but I have never looked under the covers or tried to use AutoPhone myself — until this week.

All things considered, it is surprisingly easy to set up your own AutoPhone instance and run your own tests. You might want to do this to reproduce phonedash.mozilla.org results on your own computer, or to check for regressions on a feature before check-in.

Here’s what I did to run my own AutoPhone instance running S1/S2 tests against mozilla-inbound builds:

Install AutoPhone:

git clone https://github.com/mozilla/autophone

cd autophone

pip install -r requirements.txt

Install PhoneDash, to store and view results:

git clone https://github.com/markrcote/phonedash

Create a phonedash settings file, phonedash/server/settings.cfg with content:

[database]
SQL_TYPE=sqlite
SQL_DB=yourdb
SQL_SERVER=localhost
SQL_USER=
SQL_PASSWD=

Start phonedash:

python server.py <ip address of your computer>

It will log status messages to the console. Watch that for any errors, and to get a better understanding of what’s happening.

Prepare your device:

Connect your Android phone or tablet to your computer by USB. Multiple devices may be connected. Each device must be rooted. Check that you can see your devices with adb devices — and note the serial number(s) (see devices.ini below).

Configure your device:

cp devices.ini.example devices.ini

Edit devices.ini, changing the serial numbers to your device serial numbers and the device names to something meaningful to you. Here’s my simple devices.ini for one device I called “gbrown”:

[gbrown]
serialno=01498B300600B008

Configure autophone:

cp autophone.ini.example autophone.ini

Edit autophone.ini to make it your own. Most of the defaults are fine; here is mine:

[settings]
#clear_cache = False
#ipaddr = …
#port = 28001
#cachefile = autophone_cache.json
#logfile = autophone.log
loglevel = DEBUG
test_path = tests/manifest.ini
#emailcfg = email.ini
enable_pulse = True
enable_unittests = False
#cache_dir = builds
#override_build_dir = None
repos = mozilla-inbound
#buildtypes = opt
#build_cache_port = 28008
verbose = True

#build_cache_size = 20
#build_cache_expires = 7
#device_ready_retry_wait = 20
#device_ready_retry_attempts = 3
#device_battery_min = 90
#device_battery_max = 95
#phone_retry_limit = 2
#phone_retry_wait = 15
#phone_max_reboots = 3
#phone_ping_interval = 15
#phone_command_queue_timeout = 10
#phone_command_queue_timeout = 1
#phone_crash_window = 30
#phone_crash_limit = 5

python autophone.py -h provides help on options, which are analogues of the autophone.ini settings.

Configure your tests:

Notice that autophone.ini has a test path of tests/manifest.ini. By default, tests/manifest.ini is configured for S1/S2 tests — it points to configs/s1s2_settings.ini. We need to set up that file:

cd configs

cp s1s2_settings.ini.example s1s2_settings.ini

Edit s1s2_settings.ini to make it your own. Here’s mine:

[paths]
#source = files/
#dest = /mnt/sdcard/tests/autophone/s1test/
#profile = /data/local/tmp/profile

[locations]
# test locations can be empty to specify a local
# path on the device or can be a url to specify
# a web server.
local =
remote = http://192.168.0.82:8080/

[tests]
blank = blank.html
twitter = twitter.html

[settings]
iterations = 2
resulturl = http://192.168.0.82:8080/api/s1s2/

[signature]
id =
key =

Be sure to set the resulturl to match your PhoneDash instance.

If running local tests, copy your test files (like blank.html above) to the files directory. If running remote tests, be sure that your test files are served from the resulturl (if using PhoneDash, copy to the html directory).

Start autophone:

python autophone.py --config autophone.ini

With these settings, autophone will listen for new builds on mozilla-inbound, and start tests on your device(s) for each one. You should start to see your device reboot, then Firefox will be installed and startup tests will run. As more builds complete on mozilla-inbound, more tests will run.

autophone.py will print some diagnostics to the console, but much more detail is available in autophone.log — watch that to see what’s happening.

Check your phonedash instance for results — visit http://<ip address of your computer>:8080. At first this won’t have any data, but as autophone runs tests, you’ll start to see results. Here’s my instance after a few hours:

[Screenshot: my PhoneDash instance]


September 12, 2014 08:02 PM

September 11, 2014

William Lachance

Hacking on the Treeherder front end: refreshingly easy

Over the past two weeks, I’ve been working a bit on the Treeherder front end (our interface for managing build and test jobs from mercurial changesets), trying to help get things in shape so that the sheriffs can feel comfortable transitioning to it from tbpl by the end of the quarter.

One thing that has pleasantly surprised me is just how easy it’s been to get going and be productive. The process looks like this on Linux or Mac:


git clone https://github.com/mozilla/treeherder-ui.git
cd treeherder-ui/webapp
./scripts/web-server.js

Then just load http://localhost:8000 in your favorite web browser (Firefox) and you should be good to go (it will load data from the actual Treeherder site). If you want to make modifications to the HTML, JavaScript, or CSS, just go ahead and do so with your favorite editor and the changes will be immediately reflected.

We have a fair backlog of issues to get through, many of them related to the front end. If you’re interested in helping out, please have a look:

https://wiki.mozilla.org/Auto-tools/Projects/Treeherder#Bugs_.26_Project_Tracking

If nothing jumps out at you, please drop by irc.mozilla.org #treeherder and we can probably find something for you to work on. We’re most active during Pacific Time working hours.

September 11, 2014 08:35 PM

September 08, 2014

Matt Brubeck

Let's build a browser engine! Part 5: Boxes

This is the latest in a series of articles about writing a simple HTML rendering engine:

This article will begin the layout module, which takes the style tree and translates it into a bunch of rectangles in a two-dimensional space. This is a big module, so I’m going to split it into several articles. Also, some of the code I share in this article may need to change as I write the code for the later parts.

The layout module’s input is the style tree from Part 4, and its output is yet another tree, the layout tree. This takes us one step further in our mini rendering pipeline:

I’ll start by talking about the basic HTML/CSS layout model. If you’ve ever learned to develop web pages you might be familiar with this already—but it may look a bit different from the implementer’s point of view.

The Box Model

Layout is all about boxes. A box is a rectangular section of a web page. It has a width, a height, and a position on the page. This rectangle is called the content area because it’s where the box’s content is drawn. The content may be text, image, video, or other boxes.

A box may also have padding, borders, and margins surrounding its content area. The CSS spec has a diagram showing how all these layers fit together.

Robinson stores a box’s content area and surrounding areas in the following structure. [Rust note: f32 is a 32-bit floating point type.]

// CSS box model. All sizes are in px.
struct Dimensions {
    // Top left corner of the content area, relative to the document origin:
    x: f32,
    y: f32,

    // Content area size:
    width: f32,
    height: f32,

    // Surrounding edges:
    padding: EdgeSizes,
    border: EdgeSizes,
    margin: EdgeSizes,
}

struct EdgeSizes {
    left: f32,
    right: f32,
    top: f32,
    bottom: f32,
}

Block and Inline Layout

Note: This section contains diagrams that won't make sense if you are reading them without the associated visual styles. If you are reading this in a feed reader, try opening the original page in a regular browser tab. I also included text descriptions for those of you using screen readers or other assistive technologies.

The CSS display property determines which type of box an element generates. CSS defines several box types, each with its own layout rules. I’m only going to talk about two of them: block and inline.

I’ll use this bit of pseudo-HTML to illustrate the difference:

<container>
  <a></a>
  <b></b>
  <c></c>
  <d></d>
</container>

Block boxes are placed vertically within their container, from top to bottom.

a, b, c, d { display: block; }

Description: The diagram below shows four rectangles in a vertical stack.


Inline boxes are placed horizontally within their container, from left to right. If they reach the right edge of the container, they will wrap around and continue on a new line below.

a, b, c, d { display: inline; }

Description: The diagram below shows boxes `a`, `b`, and `c` in a horizontal line from left to right, and box `d` in the next line.


Each box must contain only block children, or only inline children. When a DOM element contains a mix of block and inline children, the layout engine inserts anonymous boxes to separate the two types. (These boxes are “anonymous” because they aren’t associated with nodes in the DOM tree.)

In this example, the inline boxes b and c are surrounded by an anonymous block box, shown in pink:

a    { display: block; }
b, c { display: inline; }
d    { display: block; }

Description: The diagram below shows three boxes in a vertical stack. The first is labeled `a`; the second contains two boxes in a horizontal row labeled `b` and `c`; the third box in the stack is labeled `d`.


Note that content grows vertically by default. That is, adding children to a container generally makes it taller, not wider. Another way to say this is that, by default, the width of a block or line depends on its container’s width, while the height of a container depends on its children’s heights.

This gets more complicated if you override the default values for properties like width and height, and way more complicated if you want to support features like vertical writing.

The Layout Tree

The layout tree is a collection of boxes. A box has dimensions, and it may contain child boxes.

struct LayoutBox<'a> {
    dimensions: Dimensions,
    box_type: BoxType<'a>,
    children: Vec<LayoutBox<'a>>,
}

A box can be a block node, an inline node, or an anonymous block box. (This will need to change when I implement text layout, because line wrapping can cause a single inline node to split into multiple boxes. But it will do for now.)

enum BoxType<'a> {
    BlockNode(&'a StyledNode<'a>),
    InlineNode(&'a StyledNode<'a>),
    AnonymousBlock,
}

To build the layout tree, we need to look at the display property for each DOM node. I added some code to the style module to get the display value for a node. If there’s no specified value it returns the initial value, 'inline'.

enum Display {
    Inline,
    Block,
    DisplayNone,
}

impl StyledNode {
    /// Return the specified value of a property if it exists, otherwise `None`.
    fn value(&self, name: &str) -> Option<Value> {
        self.specified_values.find_equiv(&name).map(|v| v.clone())
    }

    /// The value of the `display` property (defaults to inline).
    fn display(&self) -> Display {
        match self.value("display") {
            Some(Keyword(s)) => match s.as_slice() {
                "block" => Block,
                "none" => DisplayNone,
                _ => Inline
            },
            _ => Inline
        }
    }
}

Now we can walk through the style tree, build a LayoutBox for each node, and then insert boxes for the node’s children. If a node’s display property is set to 'none' then it is not included in the layout tree.

/// Build the tree of LayoutBoxes, but don't perform any layout calculations yet.
fn build_layout_tree<'a>(style_node: &'a StyledNode<'a>) -> LayoutBox<'a> {
    // Create the root box.
    let mut root = LayoutBox::new(match style_node.display() {
        Block => BlockNode(style_node),
        Inline => InlineNode(style_node),
        DisplayNone => fail!("Root node has display: none.")
    });

    // Create the descendant boxes.
    for child in style_node.children.iter() {
        match child.display() {
            Block => root.children.push(build_layout_tree(child)),
            Inline => root.get_inline_container().children.push(build_layout_tree(child)),
            DisplayNone => {} // Skip nodes with `display: none;`
        }
    }
    return root;
}

impl LayoutBox {
    /// Constructor function
    fn new(box_type: BoxType) -> LayoutBox {
        LayoutBox {
            box_type: box_type,
            dimensions: Default::default(), // initially set all fields to 0.0
            children: Vec::new(),
        }
    }
}

If a block node contains an inline child, create an anonymous block box to contain it. If there are several inline children in a row, put them all in the same anonymous container.

impl LayoutBox {
    /// Where a new inline child should go.
    fn get_inline_container(&mut self) -> &mut LayoutBox {
        match self.box_type {
            InlineNode(_) | AnonymousBlock => self,
            BlockNode(_) => {
                // If we've just generated an anonymous block box, keep using it.
                // Otherwise, create a new one.
                match self.children.last() {
                    Some(&LayoutBox { box_type: AnonymousBlock,..}) => {}
                    _ => self.children.push(LayoutBox::new(AnonymousBlock))
                }
                self.children.mut_last().unwrap()
            }
        }
    }
}

This is intentionally simplified in a number of ways from the standard CSS box generation algorithm. For example, it doesn’t handle the case where an inline box contains a block-level child. Also, it generates an unnecessary anonymous box if a block-level node has only inline children.

To Be Continued…

Whew, that took longer than I expected. I think I’ll stop here for now, but don’t worry: Part 6 is coming soon, and will cover block-level layout.

Once block layout is finished, we could jump ahead to the next stage of the pipeline: painting! I think I might do that, because then we can finally see the rendering engine’s output as pretty pictures instead of just numbers.

However, the pictures will just be a bunch of colored rectangles, unless we finish the layout module by implementing inline layout and text layout. If I don’t implement those before moving on to painting, I hope to come back to them afterward.

September 08, 2014 11:16 PM

Lucas Rocha

Introducing dspec

With all the recent focus on baseline grids, keylines, and spacing markers from Android’s material design, I found myself wondering how I could make it easier to check the correctness of my Android UI implementation against the intended spec.

Wouldn’t it be nice if you could easily provide the spec values as input and get it rendered on top of your UI for comparison? Enter dspec, a super simple way to define UI specs that can be rendered on top of Android UIs.

Design specs can be defined either programmatically through a simple API or via JSON files. Specs can define various aspects of the baseline grid, keylines, and spacing markers such as visibility, offset, size, color, etc.

Baseline grid, keylines, and spacing markers in action.

Given the responsive nature of Android UIs, the keylines and spacing markers are positioned in relation to predefined reference points (e.g. left, right, vertical center, etc) instead of absolute offsets.

The JSON files are Android resources which means you can easily adapt the spec according to different form factors e.g. different specs for phones and tablets. The JSON specs provide a simple way for designers to communicate their intent in a computer-readable way.

You can integrate a DesignSpec with your custom views by drawing it in your View’s onDraw(Canvas) method. But the simplest way to draw a spec on top of a view is to enclose it in a DesignSpecFrameLayout—which can take a designSpec XML attribute pointing to the spec resource. For example:

<DesignSpecFrameLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:designSpec="@raw/my_spec">
    ...
</DesignSpecFrameLayout>

I can’t wait to start using dspec in some of the new UI work we’re doing in Firefox for Android now. I hope you find it useful too. The code is available on GitHub. As usual, testing and fixes are very welcome. Enjoy!

September 08, 2014 01:52 PM

September 02, 2014

Geoff Brown

Firefox for Android Performance Measures – August check-up

My monthly review of Firefox for Android performance measurements. This month’s highlights:

 – Eideticker for Android is back!

 – small regression in ts_paint

 – small improvement in tp4m

 – small regression in time to throbber start / stop

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Firefox for Android, for Talos tests run on Android 4.0 Opt. The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcanvasmark

This test is not currently run on Android 4.0.

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

12 (start of period) – 12 (end of period)

There was a temporary regression in this test for much of the month, but it seems to be resolved now.

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.

50000 (start of period) – 50000 (end of period)

tprovider

Performance of history and bookmarks’ provider. Reports time (ms) to perform a group of database operations. Lower values are better.

520 (start of period) – 520 (end of period).

tsvgx

An SVG-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode, thus reflecting the maximum rendering throughput of each test. The reported value is the page load time, or, for animations/iterations, the overall duration the sequence/animation took to complete. Lower values are better.

6300 (start of period) – 6300 (end of period).

tp4m

Generic page load test. Lower values are better.

940 (start of period) – 850 (end of period).

[Graph: tp4m]

Improvement noted around August 21.

ts_paint

Startup performance test. Lower values are better.

3650 (start of period) – 3850 (end of period).

[Graph: ts_paint]

Note the slight regression around August 12, and perhaps another around August 27 – bug 1061878.

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

[Graph: time to throbber start]

Note the regression in time to throbber start around August 14 — bug 1056176.

[Graph: time to throbber stop]

The same regression, less pronounced, is seen in time to throbber stop.

Eideticker

Eideticker for Android is back after a long rest – yahoo!!

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

[Graphs: Eideticker results]


September 02, 2014 07:49 PM

August 25, 2014

Matt Brubeck

Let's build a browser engine! Part 4: Style

Welcome back to my series on building your own toy browser engine. If you’re just tuning in, you can find the previous episodes here:

This article will cover what the CSS standard calls assigning property values, or what I call the style module. This module takes DOM nodes and CSS rules as input, and matches them up to determine the value of each CSS property for any given node.

This part doesn’t contain a lot of code, since I’ve left out all the really complicated parts. However, I think what’s left is still quite interesting, and I’ll also explain how some of the missing pieces can be implemented.

The Style Tree

The output of robinson’s style module is something I call the style tree. Each node in this tree includes a pointer to a DOM node, plus its CSS property values:

/// Map from CSS property names to values.
type PropertyMap = HashMap<String, Value>;

/// A node with associated style data.
struct StyledNode<'a> {
    node: &'a Node, // pointer to a DOM node
    specified_values: PropertyMap,
    children: Vec<StyledNode<'a>>,
}

What’s with all the 'a stuff? These are lifetime annotations, part of how Rust guarantees that pointers are memory-safe without requiring garbage collection. If you are not working in Rust you can safely ignore them; they aren’t critical to the meaning of this code.

We could add style information directly to the dom::Node struct from Part 1 instead, but I wanted to keep this code out of the earlier “lessons.” This is also a good opportunity to talk about the parallel trees that inhabit most layout engines.

A browser engine module often takes one tree as input, and produces a different but related tree as output. For example, Gecko’s layout code takes a DOM tree and produces a frame tree, which is then used to build a view tree. Blink and WebKit transform the DOM tree into a render tree. Later stages in all these engines produce still more trees, including layer trees and widget trees.

The pipeline for our toy browser engines will look something like this after we complete a few more stages:

In my implementation, each node in the DOM tree produces exactly one node in the style tree. But in a more complicated pipeline stage, several input nodes could collapse into a single output node. Or one input node might expand into several output nodes, or be skipped completely. For example, the style tree could exclude elements whose display property is set to 'none'. (Instead this will happen in the layout stage, because my code turned out a bit simpler that way.)

Selector Matching

The first step in building the style tree is selector matching. This will be very easy, since my CSS parser supports only simple selectors. You can tell whether a simple selector matches an element just by looking at the element itself. Matching compound selectors would require traversing the DOM tree to look at the element’s siblings, parents, etc.

fn matches(elem: &ElementData, selector: &Selector) -> bool {
    match *selector {
        Simple(ref simple_selector) => matches_simple_selector(elem, simple_selector)
    }
}

To help, we’ll add some convenient ID and class accessors to our DOM element type. The class attribute can contain multiple class names separated by spaces, which we return in a hash table. [Note: The Rust types below look a bit hairy because we are passing around pointers rather than copying values. This code should be a lot more concise in languages that are not so concerned with this distinction.]

impl ElementData {
    fn get_attribute<'a>(&'a self, key: &str) -> Option<&'a String> {
        self.attributes.find_equiv(&key)
    }

    fn id<'a>(&'a self) -> Option<&'a String> {
        self.get_attribute("id")
    }

    fn classes<'a>(&'a self) -> HashSet<&'a str> {
        match self.get_attribute("class") {
            Some(classlist) => classlist.as_slice().split(' ').collect(),
            None => HashSet::new()
        }
    }
}

To test whether a simple selector matches an element, just look at each selector component, and return false if the element doesn’t have a matching class, ID, or tag name.

fn matches_simple_selector(elem: &ElementData, selector: &SimpleSelector) -> bool {
    // Check type selector
    if selector.tag_name.iter().any(|name| elem.tag_name != *name) {
        return false;
    }

    // Check ID selector
    if selector.id.iter().any(|id| elem.id() != Some(id)) {
        return false;
    }

    // Check class selectors
    let elem_classes = elem.classes();
    if selector.class.iter().any(|class| !elem_classes.contains(&class.as_slice())) {
        return false;
    }

    // We didn't find any non-matching selector components.
    return true;
}

Rust note: This function uses the any method, which returns true if an iterator contains an element that passes the provided test. This is the same as the any function in Python (or Haskell), or the some method in JavaScript.

When comparing two rules that match the same element, we need to use the highest-specificity selector from each match. Because our CSS parser stores the selectors from most- to least-specific, we can stop as soon as we find a matching one, and return its specificity along with a pointer to the rule.

/// A single CSS rule and the specificity of its most specific matching selector.
type MatchedRule<'a> = (Specificity, &'a Rule);

/// If `rule` matches `elem`, return a `MatchedRule`. Otherwise return `None`.
fn match_rule<'a>(elem: &ElementData, rule: &'a Rule) -> Option<MatchedRule<'a>> {
    // Find the first (highest-specificity) matching selector.
    rule.selectors.iter().find(|selector| matches(elem, *selector))
        .map(|selector| (selector.specificity(), rule))
}

To find all the rules that match an element we call filter_map, which does a linear scan through the style sheet, checking every rule and throwing out ones that don’t match. A real browser engine would speed this up by storing the rules in multiple hash tables based on tag name, id, class, etc.

/// Find all CSS rules that match the given element.
fn matching_rules<'a>(elem: &ElementData, stylesheet: &'a Stylesheet) -> Vec<MatchedRule<'a>> {
    stylesheet.rules.iter().filter_map(|rule| match_rule(elem, rule)).collect()
}

Once we have the matching rules, we can find the specified values for the element. We insert each rule’s property values into a HashMap. We sort the matches by specificity, so the higher specificity rules are processed after the lower ones and can overwrite their values in the HashMap.

/// Apply styles to a single element, returning the specified styles.
fn specified_values(elem: &ElementData, stylesheet: &Stylesheet) -> PropertyMap {
    let mut values = HashMap::new();
    let mut rules = matching_rules(elem, stylesheet);

    // Go through the rules from lowest to highest specificity.
    rules.sort_by(|&(a, _), &(b, _)| a.cmp(&b));
    for &(_, rule) in rules.iter() {
        for declaration in rule.declarations.iter() {
            values.insert(declaration.name.clone(), declaration.value.clone());
        }
    }
    return values;
}

Now we have everything we need to walk through the DOM tree and build the style tree. Note that selector matching works only on elements, so the specified values for a text node are just an empty map.

/// Apply a stylesheet to an entire DOM tree, returning a StyledNode tree.
pub fn style_tree<'a>(root: &'a Node, stylesheet: &'a Stylesheet) -> StyledNode<'a> {
    StyledNode {
        node: root,
        specified_values: match root.node_type {
            Element(ref elem) => specified_values(elem, stylesheet),
            Text(_) => HashMap::new()
        },
        children: root.children.iter().map(|child| style_tree(child, stylesheet)).collect(),
    }
}

That’s all of robinson’s code for building the style tree. Next I’ll talk about some glaring omissions.

The Cascade

Style sheets provided by the author of a web page are called author style sheets. In addition to these, browsers also provide default styles via user agent style sheets. And they may allow users to add custom styles through user style sheets (like Gecko’s userContent.css).

The cascade defines which of these three “origins” takes precedence over another. There are six levels to the cascade: one for each origin’s “normal” declarations, plus one for each origin’s !important declarations.

Robinson’s style code does not implement the cascade; it uses only a single style sheet. The lack of a default style sheet means that HTML elements will not have any of the default styles you might expect. For example, the <head> element’s contents will not be hidden unless you explicitly add this rule to your style sheet:

head { display: none; }

Implementing the cascade should be fairly easy: Just track the origin of each rule, and sort declarations by origin and importance in addition to specificity. A simplified, two-level cascade should be enough to support the most common cases: normal user agent styles and normal author styles.

Computed Values

In addition to the “specified values” mentioned above, CSS defines initial, computed, used, and actual values.

Initial values are defaults for properties that aren’t specified in the cascade. Computed values are based on specified values, but may have some property-specific normalization rules applied.

Implementing these correctly requires separate code for each property, based on its definition in the CSS specs. This work is necessary for a real-world browser engine, but I’m hoping to avoid it in this toy project. In later stages, code that uses these values will (sort of) simulate initial values by using a default when the specified value is missing.

Used values and actual values are calculated during and after layout, which I’ll cover in future articles.

Inheritance

If text nodes can’t match selectors, how do they get colors and fonts and other styles? Through the magic of inheritance.

When a property is inherited, any node without a cascaded value will receive its parent’s value for that property. Some properties, like 'color', are inherited by default; others only if the cascade specifies the special value 'inherit'.

My code does not support inheritance. To implement it, you could pass the parent’s style data into the specified_values function, and use a hard-coded lookup table to decide which properties should be inherited.

Style Attributes

Any HTML element can include a style attribute containing a list of CSS declarations. There are no selectors, because these declarations automatically apply only to the element itself.

<span style="color: red; background: yellow;">

If you want to support the style attribute, make the specified_values function check for the attribute. If the attribute is present, pass it to parse_declarations from the CSS parser. Apply the resulting declarations after the normal author declarations, since the attribute is more specific than any CSS selector.

Exercises

In addition to writing your own selector matching and value assignment code, for further exercise you can implement one or more of the missing pieces discussed above, in your own project or a fork of robinson:

  1. Cascading
  2. Initial and/or computed values
  3. Inheritance
  4. The style attribute

Also, if you extended the CSS parser from Part 3 to include compound selectors, you can now implement matching for those compound selectors.

To be continued…

Part 5 will introduce the layout module. I haven’t finished the code for this yet, so there will be another short delay before I can start writing the article. I plan to split layout into at least two articles (one for block layout and one for inline layout, probably).

In the meantime, I’d love to see anything you’ve created based on these articles or exercises. If your code is online somewhere, feel free to add a link to the comments below! So far I have seen Martin Tomasi’s Java implementation and Pohl Longsine’s Swift version.

August 25, 2014 10:45 PM

August 23, 2014

Chris Kitching

Preventing mercurial from eating itself

Mercurial doesn’t seem to handle being interrupted particularly well. That, coupled with my tendency to hit ^C when I do something stupid, leads to Mercurial ending up in an inconsistent state about once a week, causing me to manually restore my patch queue state with judicious use of `strip` (or otherwise).

I’ve just about had enough of this. A simple hack (that hopefully others will find useful) is to alias `hg` to a script containing:

hg "$@" &
wait
exit $?

Sort of an evil hack, but it does the job (provided you don’t kill the terminal you’re typing in: but I’ve yet to meet someone who suffers from reflexively executing `exit`). Now you can ^C Mercurial all you want and it will blithely ignore you. This seems preferable to it half-doing something, throwing a tantrum, and eating itself…


August 23, 2014 12:41 AM

August 18, 2014

Chris Kitching

A novel (well, not really) way of optimising nine-patches

While tinkering with shrinking Fennec’s apk size, I came across a potential optimisation for nine-patches that’s oddly missing from Android’s resource preprocessing steps.

There’s a pretty decent explanation of what nine-patches are over here.

When a nine-patch contains multiple scalable regions along an axis, the system guarantees to maintain their relative size (though no promises are made about aspect ratio of the entire image).

In the simpler (and more common) case where only a single scalable region exists along an axis, if all the columns (or rows) are identical, they may safely be collapsed to a single column (or row). The renderer will happily later upscale this single-pixel region as much as necessary with identical results.
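
To make the idea concrete, here is a rough, hypothetical Java sketch of the core check: whether every column in a candidate region is pixel-for-pixel identical. It is not the actual shrinkninepatch code, which also has to parse the nine-patch markers and rewrite the PNG:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class ColumnCheck {
    /** Returns true if every column in [startX, endX) matches the first column. */
    static boolean columnsIdentical(BufferedImage image, int startX, int endX) {
        for (int x = startX + 1; x < endX; x++) {
            for (int y = 0; y < image.getHeight(); y++) {
                if (image.getRGB(x, y) != image.getRGB(startX, y)) {
                    return false;
                }
            }
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        BufferedImage image = ImageIO.read(new File(args[0]));
        // In the real tool the region bounds come from the black markers in the
        // nine-patch border; they are hard-coded here purely for illustration.
        System.out.println(columnsIdentical(image, 10, 20));
    }
}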

Apparently this isn’t something that’s done automagically during the packaging step, so I’ve written a neat (and slightly badly-structured) utility for performing this optimisation, available here:

https://bitbucket.org/ckitching/shrinkninepatch

It takes a list of .9.png files as input and overwrites them with optimised versions (if the optimisation is found to be safe). It does not currently perform any optimisation where multiple scalable regions exist, even though it should be safe to do so in some circumstances (if each region consists of homogeneous rows/columns and their sizes are reduced in a way that preserves their relative sizes).

Note that this program doesn’t preserve PNG indexes, so if you feed it an indexed png you’ll get an unindexed ARGB image as the result (causing the file to grow in size). Simply repeat your png quantisation step to obtain your optimised image.


August 18, 2014 08:30 AM

August 15, 2014

William Lachance

A new meditation app

I had some time on my hands two weekends ago and was feeling a bit of an itch to build something, so I decided to do a project I’ve had in the back of my head for a while: a meditation timer.

If you’ve been following this log, you’d know that meditation has been a pretty major interest of mine for the past year. The foundation of my practice is a daily round of seated meditation at home, where I have been attempting to follow the breath and generally try to connect with the world for a set period every day (usually varying between 10 and 30 minutes, depending on how much of a rush I’m in).

Clock watching is rather distracting while sitting so having a tool to notify you when a certain amount of time has elapsed is quite useful. Writing a smartphone app to do this is an obvious idea, and indeed approximately a zillion of these things have been written for Android and iOS. Unfortunately, most are not very good. Really, I just want something that does this:

  1. Select a meditation length (somewhere between 10 and 40 minutes).
  2. Sound a bell after a short preparation to demarcate the beginning of meditation.
  3. While the meditation period is ongoing, do a countdown of the time remaining (not strictly required, but useful for peace of mind in case you’re wondering whether you’ve really only sat for 25 minutes).
  4. Sound a bell when the meditation ends.

Yes, meditation can get more complex than that. In Zen practice, for example, sometimes you have several periods of varying length, broken up with kinhin (walking meditation). However, that mostly happens in the context of a formal setting (e.g. a Zendo) where you leave your smartphone at the door. Trying to shoehorn all that into an app needlessly complicates what should be simple.

Even worse are the apps which “chart” your progress or have other gimmicks to connect you to a virtual “community” of meditators. I have to say I find that kind of stuff really turns me off. Meditation should be about connecting with reality in a more fundamental way, not charting gamified statistics or interacting online. We already have way too much of that going on elsewhere in our lives without adding even more to it.

So, you might ask why the alarm feature of most clock apps isn’t sufficient? Really, it is most of the time. A specialized app can make selecting the interval slightly more convenient and we can preselect an appropriate bell sound up front. It’s also nice to hear something to demarcate the start of a meditation session. But honestly I didn’t have much of a reason to write this other than the fact that I could. Outside of work, I’ve been in a bit of a creative rut lately and felt like I needed to build something, anything, and put it out into the world (even if it’s tiny and only a very incremental improvement over what’s out there already). So here it is:

[Screenshot: meditation timer]

The app was written entirely in HTML5 so it should work fine on pretty much any reasonably modern device, desktop or mobile. I tested it on my Nexus 5 (Chrome, Firefox for Android)[1], FirefoxOS Flame, and on my laptop (Chrome, Firefox, Safari). It lives on a subdomain of this site or you can grab it from the Firefox Marketplace if you’re using some variant of Firefox (OS). The source, such as it is, can be found on github.

I should acknowledge taking some design inspiration from the Mind application for iOS, which has a similarly minimalistic take on things. Check that out too if you have an iPhone or iPad!

Happy meditating!

[1] Note that there isn’t a way to inhibit the screen/device from going to sleep with these browsers, which means that you might miss the ending bell. On FirefoxOS, I used the requestWakeLock API to make sure that doesn’t happen. I filed a bug to get this implemented on Firefox for Android.

August 15, 2014 02:02 AM

August 14, 2014

Sriram Ramasubramanian

Multiple Text Layout

The pretty basic unit for developing UI in Android is a View. But if we look closely, View is a UI widget that provides user interaction. It comprises Drawables and text Layouts. We see drawables everywhere — right from the background of a View. TextView has compound drawables too. However, TextView has only one layout. Is it possible to have more than one text layout in a View/TextView?

Multiple Text Layout

Let’s take an example. We have a simple ListView with each row having an image, text and some sub-text. Since TextView shows only one text Layout by default, we would need a LinearLayout with 2 or 3 views (2 TextViews in them) to achieve this layout. What if TextView could hold one more text layout? It’s just a private variable that can be created and drawn on the canvas. Even if it can hold and draw it, how would we be able to let TextView’s original layout account for this extra layout?

If we look at TextView’s onMeasure() closely, the available width for the layout accounts for the space occupied by the compound drawables. If we make TextView account for a larger compound drawable space on the right, the layout will constrain itself more. Now that the space is carved out, we can draw the layout in that space.

    private Layout mSubTextLayout;

    @Override
    public int getCompoundPaddingRight() {
        // Assumption: the layout has only one line.
        return super.getCompoundPaddingRight() + (int) mSubTextLayout.getLineWidth(0);
    }

Now we need to create a layout for the sub-text and draw it. Ideally it’s not good to create new objects inside onMeasure(). But if we take care of when and how we create the layouts, we don’t have to worry about this restriction. And what different kinds of Layouts can we create? TextView allows creating a BoringLayout, a StaticLayout or a DynamicLayout. BoringLayout can be used if the text is only a single line. StaticLayout is for multi-line layouts that cannot be changed after creation. DynamicLayout is for editable text, like in an EditText.

    @Override
    public void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        int width = MeasureSpec.getSize(widthMeasureSpec);

        // Create a layout for sub-text.
        mSubTextLayout = new StaticLayout(
                mSubText,
                mPaint,
                width,
                Alignment.ALIGN_NORMAL,
                1.0f,
                0.0f,
                true);

        // TextView doesn't know about mSubTextLayout.
        // It calculates the space using compound drawables' sizes.
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);
    }

The mPaint used here has all the attributes for the sub-text — like text color, shadow, text-size, etc. This is what determines the size used for a text layout.
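
A hedged sketch of how the sub-text and its paint might be initialized (the field names follow the snippets above; the color and size values are just examples). Note that StaticLayout’s constructor expects a TextPaint rather than a plain Paint:

    private CharSequence mSubText;
    private TextPaint mPaint;

    private void initSubText(Context context, CharSequence subText) {
        mSubText = subText;

        // Anti-aliased paint carrying the sub-text attributes.
        mPaint = new TextPaint(Paint.ANTI_ALIAS_FLAG);
        mPaint.setColor(Color.GRAY);
        // Example size: 12sp converted to pixels.
        mPaint.setTextSize(TypedValue.applyDimension(
                TypedValue.COMPLEX_UNIT_SP, 12,
                context.getResources().getDisplayMetrics()));
    }

With the layout measured and the space carved out, the remaining step is to draw it in onDraw():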

    @Override
    public void onDraw(Canvas canvas) {
        // Do the default draw.
        super.onDraw(canvas);

        // Calculate the place to show the sub-text
        // using the padding, available width, height and
        // the sub-text width and height.
        // Note: The 'right' padding to use here is 'super.getCompoundPaddingRight()'
        // as we have faked the actual value.

        // Draw the sub-text.
        mSubTextLayout.draw(canvas);
    }

But hey, can’t we just use a Spannable text? Well… what if the name is really long and runs into multiple lines or needs to be ellipsized?
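
For comparison, a span-based version would look roughly like the sketch below ('title', 'subText' and 'textView' are assumed variables). It styles the sub-text inside a single layout, which is exactly what breaks down once the main text wraps or gets ellipsized:

// Single-TextView alternative using spans (sketch only).
SpannableStringBuilder builder = new SpannableStringBuilder();
builder.append(title);

int start = builder.length();
builder.append("\n").append(subText);
builder.setSpan(new RelativeSizeSpan(0.8f),
        start, builder.length(), Spanned.SPAN_EXCLUSIVE_EXCLUSIVE);
builder.setSpan(new ForegroundColorSpan(Color.GRAY),
        start, builder.length(), Spanned.SPAN_EXCLUSIVE_EXCLUSIVE);

textView.setText(builder);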

With the extra layout, we use the same TextView to draw two text layouts, and that has helped us remove two Views! Happy hacking! ;)

P.S: The icons are from: http://www.tutorial9.net/downloads/108-mono-icons-huge-set-of-minimal-icons/


August 14, 2014 08:41 AM

August 13, 2014

Matt Brubeck

Let's build a browser engine! Part 3: CSS

This is the third in a series of articles on building a toy browser rendering engine. Want to build your own? Start at the beginning to learn more:

This article introduces code for reading Cascading Style Sheets (CSS). As usual, I won’t try to cover everything in the spec. Instead, I tried to implement just enough to illustrate some concepts and produce input for later stages in the rendering pipeline.

Anatomy of a Stylesheet

Here’s an example of CSS source code:

h1, h2, h3 { margin: auto; color: #cc0000; }
div.note { margin-bottom: 20px; padding: 10px; }
#answer { display: none; }

Next I’ll walk through the css module from my toy browser engine, robinson. The code is written in Rust, though the concepts should translate pretty easily into other programming languages. Reading the previous articles first might help you understand some of the code below.

A CSS stylesheet is a series of rules. (In the example stylesheet above, each line contains one rule.)

struct Stylesheet {
    rules: Vec<Rule>,
}

A rule includes one or more selectors separated by commas, followed by a series of declarations enclosed in braces.

struct Rule {
    selectors: Vec<Selector>,
    declarations: Vec<Declaration>,
}

A selector can be a simple selector, or it can be a chain of selectors joined by combinators. Robinson supports only simple selectors for now.

Note: Confusingly, the newer Selectors Level 3 standard uses the same terms to mean slightly different things. In this article I’ll mostly refer to CSS2.1. Although outdated, it’s a useful starting point because it’s smaller and more self-contained than CSS3 (which is split into myriad specs that reference both each other and CSS2.1).

In robinson, a simple selector can include a tag name, an ID prefixed by '#', any number of class names prefixed by '.', or some combination of the above. If the tag name is empty or '*' then it is a “universal selector” that can match any tag.

There are many other types of selector (especially in CSS3), but this will do for now.

enum Selector {
    Simple(SimpleSelector),
}

struct SimpleSelector {
    tag_name: Option<String>,
    id: Option<String>,
    class: Vec<String>,
}

A declaration is just a name/value pair, separated by a colon and ending with a semicolon. For example, "margin: auto;" is a declaration.

struct Declaration {
    name: String,
    value: Value,
}

My toy engine supports only a handful of CSS’s many value types.

enum Value {
    Keyword(String),
    Color(u8, u8, u8, u8), // RGBA
    Length(f32, Unit),
    // insert more values here
}

enum Unit { Px, /* insert more units here */ }

Rust note: u8 is an 8-bit unsigned integer, and f32 is a 32-bit float.

All other CSS syntax is unsupported, including @-rules, comments, and any selectors/values/units not mentioned above.

Parsing

CSS has a regular grammar, making it easier to parse correctly than its quirky cousin HTML. When a standards-compliant CSS parser encounters a parse error, it discards the unrecognized part of the stylesheet but still processes the remaining portions. This is useful because it allows stylesheets to include new syntax but still produce well-defined output in older browsers.

Robinson uses a very simplistic (and totally not standards-compliant) parser, built the same way as the HTML parser from Part 2. Rather than go through the whole thing line-by-line again, I’ll just paste in a few snippets. For example, here is the code for parsing a single selector:

/// Parse one simple selector, e.g.: `type#id.class1.class2.class3`
    fn parse_simple_selector(&mut self) -> SimpleSelector {
        let mut selector = SimpleSelector { tag_name: None, id: None, class: Vec::new() };
        while !self.eof() {
            match self.next_char() {
                '#' => {
                    self.consume_char();
                    selector.id = Some(self.parse_identifier());
                }
                '.' => {
                    self.consume_char();
                    selector.class.push(self.parse_identifier());
                }
                '*' => {
                    // universal selector
                    self.consume_char();
                }
                c if valid_identifier_char(c) => {
                    selector.tag_name = Some(self.parse_identifier());
                }
                _ => break
            }
        }
        return selector;
    }

Note the lack of error checking. Some malformed input like ### or *foo* will parse successfully and produce weird results. A real CSS parser would discard these invalid selectors.

Specificity

Specificity is one of the ways a rendering engine decides which style overrides the other in a conflict. If a stylesheet contains two rules that match an element, the rule with the matching selector of higher specificity can override values from the one with lower specificity.

The specificity of a selector is based on its components. An ID selector is more specific than a class selector, which is more specific than a tag selector. Within each of these “levels,” more selectors beats fewer.

pub type Specificity = (uint, uint, uint);

impl Selector {
    pub fn specificity(&self) -> Specificity {
        // http://www.w3.org/TR/selectors/#specificity
        let Simple(ref simple) = *self;
        let a = simple.id.iter().len();
        let b = simple.class.len();
        let c = simple.tag_name.iter().len();
        (a, b, c)
    }
}

(If we supported chained selectors, we could calculate the specificity of a chain just by adding up the specificities of its parts.)

The selectors for each rule are stored in a sorted vector, most-specific first. This will be important in matching, which I’ll cover in the next article.

/// Parse a rule set: `<selectors> { <declarations> }`.
    fn parse_rule(&mut self) -> Rule {
        Rule {
            selectors: self.parse_selectors(),
            declarations: self.parse_declarations()
        }
    }

    /// Parse a comma-separated list of selectors.
    fn parse_selectors(&mut self) -> Vec<Selector> {
        let mut selectors = Vec::new();
        loop {
            selectors.push(Simple(self.parse_simple_selector()));
            self.consume_whitespace();
            match self.next_char() {
                ',' => { self.consume_char(); self.consume_whitespace(); }
                '{' => break, // start of declarations
                c   => fail!("Unexpected character {} in selector list", c)
            }
        }
        // Return selectors with highest specificity first, for use in matching.
        selectors.sort_by(|a,b| b.specificity().cmp(&a.specificity()));
        return selectors;
    }

The rest of the CSS parser is fairly straightforward. You can read the whole thing on GitHub. And if you didn’t already do it for Part 2, this would be a great time to try out a parser generator. My hand-rolled parser gets the job done for simple example files, but it has a lot of hacky bits and will fail badly if you violate its assumptions. Eventually I hope to replace it with one built on rust-peg or similar.

Exercises

As before, you should decide which of these exercises you want to do, and skip the rest:

  1. Implement your own simplified CSS parser and specificity calculation.

  2. Extend robinson’s CSS parser to support more values, or one or more selector combinators.

  3. Extend the CSS parser to discard any declaration that contains a parse error, and follow the error handling rules to resume parsing after the end of the declaration.

  4. Make the HTML parser pass the contents of any <style> nodes to the CSS parser, and return a Document object that includes a list of Stylesheets in addition to the DOM tree.

Shortcuts

Just like in Part 2, you can skip parsing by hard-coding CSS data structures directly into your program, or by writing them in an alternate format like JSON that you already have a parser for.

To Be Continued…

The next article will introduce the style module. This is where everything starts to come together, with selector matching to apply CSS styles to DOM nodes.

The pace of this series might slow down soon, since I’ll be busy later this month and I haven’t even written the code for some of the upcoming articles. I’ll keep them coming as fast as I can!

August 13, 2014 07:30 PM

August 11, 2014

Matt Brubeck

Let's build a browser engine! Part 2: HTML

This is the second in a series of articles on building a toy browser rendering engine:

This article is about parsing HTML source code to produce a tree of DOM nodes. Parsing is a fascinating topic, but I don’t have the time or expertise to give it the introduction it deserves. You can get a detailed introduction to parsing from any good course or book on compilers. Or get a hands-on start by going through the documentation for a parser generator that works with your chosen programming language.

HTML has its own unique parsing algorithm. Unlike parsers for most programming languages and file formats, the HTML parsing algorithm does not reject invalid input. Instead it includes specific error-handling instructions, so web browsers can agree on how to display every web page, even ones that don’t conform to the syntax rules. Web browsers have to do this to be usable: Since non-conforming HTML has been supported since the early days of the web, it is now used in a huge portion of existing web pages.

A Simple HTML Dialect

I didn’t even try to implement the standard HTML parsing algorithm. Instead I wrote a basic parser for a tiny subset of HTML syntax. My parser can handle simple pages like this:

<html>
    <body>
        <h1>Title</h1>
        <div id="main" class="test">
            <p>Hello <em>world</em>!</p>
        </div>
    </body>
</html>

The following syntax is allowed: balanced tags (<p>...</p>), attributes with quoted values (id="main"), and text nodes (<em>world</em>).

Everything else is unsupported, including comments, doctype declarations, escaped characters and CDATA sections, self-closing tags, error handling for malformed markup, namespaces, and character encoding detection.

At each stage of this project I’m writing more or less the minimum code needed to support the later stages. But if you want to learn more about parsing theory and tools, you can be much more ambitious in your own project!

Example Code

Next, let’s walk through my toy HTML parser, keeping in mind that this is just one way to do it (and probably not the best way). Its structure is based loosely on the tokenizer module from Servo’s cssparser library. It has no real error handling; in most cases, it just aborts when faced with unexpected syntax. The code is in Rust, but I hope it’s fairly readable to anyone who’s used similar-looking languages like Java, C++, or C#. It makes use of the DOM data structures from part 1.

The parser stores its input string and a current position within the string. The position is the index of the next character we haven’t processed yet.

struct Parser {
    pos: uint,
    input: String,
}

We can use this to implement some simple methods for peeking at the next characters in the input:

impl Parser {
    /// Read the next character without consuming it.
    fn next_char(&self) -> char {
        self.input.as_slice().char_at(self.pos)
    }

    /// Do the next characters start with the given string?
    fn starts_with(&self, s: &str) -> bool {
        self.input.as_slice().slice_from(self.pos).starts_with(s)
    }

    /// Return true if all input is consumed.
    fn eof(&self) -> bool {
        self.pos >= self.input.len()
    }

    // ...
}

Rust strings are stored as UTF-8 byte arrays. To go to the next character, we can’t just advance by one byte. Instead we use char_range_at which correctly handles multi-byte characters. (If our string used fixed-width characters, we could just increment pos.)

/// Return the current character, and advance to the next character.
    fn consume_char(&mut self) -> char {
        let range = self.input.as_slice().char_range_at(self.pos);
        self.pos = range.next;
        return range.ch;
    }

Often we will want to consume a string of consecutive characters. The consume_while method consumes characters that meet a given condition, and returns them as a string:

/// Consume characters until `test` returns false.
    fn consume_while(&mut self, test: |char| -> bool) -> String {
        let mut result = String::new();
        while !self.eof() && test(self.next_char()) {
            result.push_char(self.consume_char());
        }
        return result;
    }

We can use this to ignore a sequence of space characters, or to consume a string of alphanumeric characters:

/// Consume and discard zero or more whitespace characters.
    fn consume_whitespace(&mut self) {
        self.consume_while(|c| c.is_whitespace());
    }

    /// Parse a tag or attribute name.
    fn parse_tag_name(&mut self) -> String {
        self.consume_while(|c| match c {
            'a'..'z' | 'A'..'Z' | '0'..'9' => true,
            _ => false
        })
    }

Now we’re ready to start parsing HTML. To parse a single node, we look at its first character to see if it is an element or a text node. In our simplified version of HTML, a text node can contain any character except <.

/// Parse a single node.
    fn parse_node(&mut self) -> dom::Node {
        match self.next_char() {
            '<' => self.parse_element(),
            _   => self.parse_text()
        }
    }

    /// Parse a text node.
    fn parse_text(&mut self) -> dom::Node {
        dom::text(self.consume_while(|c| c != '<'))
    }

An element is more complicated. It includes opening and closing tags, and between them any number of child nodes:

/// Parse a single element, including its open tag, contents, and closing tag.
    fn parse_element(&mut self) -> dom::Node {
        // Opening tag.
        assert!(self.consume_char() == '<');
        let tag_name = self.parse_tag_name();
        let attrs = self.parse_attributes();
        assert!(self.consume_char() == '>');

        // Contents.
        let children = self.parse_nodes();

        // Closing tag.
        assert!(self.consume_char() == '<');
        assert!(self.consume_char() == '/');
        assert!(self.parse_tag_name() == tag_name);
        assert!(self.consume_char() == '>');

        return dom::elem(tag_name, attrs, children);
    }

Parsing attributes is pretty easy in our simplified syntax. Until we reach the end of the opening tag (>) we repeatedly look for a name followed by = and then a string enclosed in quotes.

/// Parse a single name="value" pair.
    fn parse_attr(&mut self) -> (String, String) {
        let name = self.parse_tag_name();
        assert!(self.consume_char() == '=');
        let value = self.parse_attr_value();
        return (name, value);
    }

    /// Parse a quoted value.
    fn parse_attr_value(&mut self) -> String {
        let open_quote = self.consume_char();
        assert!(open_quote == '"' || open_quote == '\'');
        let value = self.consume_while(|c| c != open_quote);
        assert!(self.consume_char() == open_quote);
        return value;
    }

    /// Parse a list of name="value" pairs, separated by whitespace.
    fn parse_attributes(&mut self) -> dom::AttrMap {
        let mut attributes = HashMap::new();
        loop {
            self.consume_whitespace();
            if self.next_char() == '>' {
                break;
            }
            let (name, value) = self.parse_attr();
            attributes.insert(name, value);
        }
        return attributes;
    }

To parse the child nodes, we recursively call parse_node in a loop until we reach the closing tag:

/// Parse a sequence of sibling nodes.
    fn parse_nodes(&mut self) -> Vec<dom::Node> {
        let mut nodes = vec!();
        loop {
            self.consume_whitespace();
            if self.eof() || self.starts_with("</") {
                break;
            }
            nodes.push(self.parse_node());
        }
        return nodes;
    }

Finally, we can put this all together to parse an entire HTML document into a DOM tree. This function will create a root node for the document if it doesn’t include one explicitly; this is similar to what a real HTML parser does.

/// Parse an HTML document and return the root element.
pub fn parse(source: String) -> dom::Node {
    let mut nodes = Parser { pos: 0u, input: source }.parse_nodes();

    // If the document contains a root element, just return it. Otherwise, create one.
    if nodes.len() == 1 {
        nodes.swap_remove(0).unwrap()
    } else {
        dom::elem("html".to_string(), HashMap::new(), nodes)
    }
}

That’s it! The entire code for the robinson HTML parser. The whole thing weighs in at just over 100 lines of code (not counting blank lines and comments). If you use a good library or parser generator, you can probably build a similar toy parser in even less space.

Exercises

Here are a few alternate ways to try this out yourself. As before, you can choose one or more of them and ignore the others.

  1. Build a parser (either “by hand” or with a library or parser generator) that takes a subset of HTML as input and produces a tree of DOM nodes.

  2. Modify robinson’s HTML parser to add some missing features, like comments. Or replace it with a better parser, perhaps built with a library or generator.

  3. Create an invalid HTML file that causes your parser (or mine) to fail. Modify the parser to recover from the error and produce a DOM tree for your test file.

Shortcuts

If you want to skip parsing completely, you can build a DOM tree programmatically instead, by adding some code like this to your program (in pseudo-code; adjust it to match the DOM code you wrote in Part 1):

// <html><body>Hello, world!</body></html>
let root = element("html");
let body = element("body");
root.children.push(body);
body.children.push(text("Hello, world!"));

Or you can find an existing HTML parser and incorporate it into your program.

The next article in this series will cover CSS data structures and parsing.

August 11, 2014 03:00 PM

August 08, 2014

Matt Brubeck

Let's build a browser engine! Part 1: Getting started

I’m building a toy HTML rendering engine, and I think you should too. This is the first in a series of articles:

The full series will describe the code I’ve written, and show how you can make your own. But first, let me explain why.

You’re building a what?

Let’s talk terminology. A browser engine is the portion of a web browser that works “under the hood” to fetch a web page from the internet, and translate its contents into forms you can read, watch, hear, etc. Blink, Gecko, WebKit, and Trident are browser engines. In contrast, the browser’s own UI—tabs, toolbar, menu and such—is called the chrome. Firefox and SeaMonkey are two browsers with different chrome but the same Gecko engine.

A browser engine includes many sub-components: an HTTP client, an HTML parser, a CSS parser, a JavaScript engine (itself composed of parsers, interpreters, and compilers), and much more. The many components involved in parsing web formats like HTML and CSS and translating them into what you see on-screen are sometimes called the layout engine or rendering engine.

Why a “toy” rendering engine?

A full-featured browser engine is hugely complex. Blink, Gecko, WebKit—these are millions of lines of code each. Even younger, simpler rendering engines like Servo and WeasyPrint are each tens of thousands of lines. Not the easiest thing for a newcomer to comprehend!

Speaking of hugely complex software: If you take a class on compilers or operating systems, at some point you will probably create or modify a “toy” compiler or kernel. This is a simple model designed for learning; it may never be run by anyone besides the person who wrote it. But making a toy system is a useful tool for learning how the real thing works. Even if you never build a real-world compiler or kernel, understanding how they work can help you make better use of them when writing your own programs.

So, if you want to become a browser developer, or just to understand what happens inside a browser engine, why not build a toy one? Like a toy compiler that implements a subset of a “real” programming language, a toy rendering engine could implement a small subset of HTML and CSS. It won’t replace the engine in your everyday browser, but should nonetheless illustrate the basic steps needed for rendering a simple HTML document.

Try this at home.

I hope I’ve convinced you to give it a try. This series will be easiest to follow if you already have some solid programming experience and know some high-level HTML and CSS concepts. However, if you’re just getting started with this stuff, or run into things you don’t understand, feel free to ask questions and I’ll try to make it clearer.

Before you start, a few remarks on some choices you can make:

On Programming Languages

You can build a toy layout engine in any programming language. Really! Go ahead and use a language you know and love. Or use this as an excuse to learn a new language if that sounds like fun.

If you want to start contributing to major browser engines like Gecko or WebKit, you might want to work in C++ because it’s the main language used in those engines, and using it will make it easier to compare your code to theirs. My own toy project, robinson, is written in Rust. I’m part of the Servo team at Mozilla, so I’ve become very fond of Rust programming. Plus, one of my goals with this project is to understand more of Servo’s implementation. (I’ve written a lot of browser chrome code, and a few small patches for Gecko, but before joining the Servo project I knew nothing about many areas of the browser engine.) Robinson sometimes uses simplified versions of Servo’s data structures and code. If you too want to start contributing to Servo, try some of the exercises in Rust!

On Libraries and Shortcuts

In a learning exercise like this, you have to decide whether it’s “cheating” to use someone else’s code instead of writing your own from scratch. My advice is to write your own code for the parts that you really want to understand, but don’t be shy about using libraries for everything else. Learning how to use a particular library can be a worthwhile exercise in itself.

I’m writing robinson not just for myself, but also to serve as example code for these articles and exercises. For this and other reasons, I want it to be as tiny and self-contained as possible. So far I’ve used no external code except for the Rust standard library. (This also side-steps the minor hassle of getting multiple dependencies to build with the same version of Rust while the language is still in development.) This rule isn’t set in stone, though. For example, I may decide later to use a graphics library rather than write my own low-level drawing code.

Another way to avoid writing code is to just leave things out. For example, robinson has no networking code yet; it can only read local files. In a toy program, it’s fine to just skip things if you feel like it. I’ll point out potential shortcuts like this as I go along, so you can bypass steps that don’t interest you and jump straight to the good stuff. You can always fill in the gaps later if you change your mind.

First Step: The DOM

Are you ready to write some code? We’ll start with something small: data structures for the DOM. Let’s look at robinson’s dom module.

The DOM is a tree of nodes. A node has zero or more children. (It also has various other attributes and methods, but we can ignore most of those for now.)

struct Node {
    // data common to all nodes:
    children: Vec<Node>,

    // data specific to each node type:
    node_type: NodeType,
}

There are several node types, but for now we will ignore most of them and say that a node is either an Element or a Text node. In a language with inheritance these would be subtypes of Node. In Rust they can be an enum (Rust’s keyword for a “tagged union” or “sum type”):

enum NodeType {
    Text(String),
    Element(ElementData),
}

An element includes a tag name and any number of attributes, which can be stored as a map from names to values. Robinson doesn’t support namespaces, so it just stores tag and attribute names as simple strings.

struct ElementData {
    tag_name: String,
    attributes: AttrMap,
}

type AttrMap = HashMap<String, String>;

Finally, some constructor functions to make it easy to create new nodes:

fn text(data: String) -> Node {
    Node { children: vec![], node_type: Text(data) }
}

fn elem(name: String, attrs: AttrMap, children: Vec<Node>) -> Node {
    Node {
        children: children,
        node_type: Element(ElementData {
            tag_name: name,
            attributes: attrs,
        })
    }
}

And that’s it! A full-blown DOM implementation would include a lot more data and dozens of methods, but this is all we need to get started. In the next article, we’ll add a parser that turns HTML source code into a tree of these DOM nodes.

Exercises

These are just a few suggested ways to follow along at home. Do the exercises that interest you and skip any that don’t.

  1. Start a new program in the language of your choice, and write code to represent a tree of DOM text nodes and elements.

  2. Install the latest version of Rust, then download and build robinson. Open up dom.rs and extend NodeType to include additional types like comment nodes.

  3. Write code to pretty-print a tree of DOM nodes.

References

For much more detailed information about browser engine internals, see Tali Garsiel’s wonderful How Browsers Work and its links to further resources.

For example code, here’s a short list of “small” open source web rendering engines. Most of them are many times bigger than robinson, but still way smaller than Gecko or WebKit. WebWhirr, at 2000 lines of code, is the only other one I would call a “toy” engine.

You may find these useful for inspiration or reference. If you know of any other similar projects—or if you start your own—please let me know!

August 08, 2014 04:40 PM

August 03, 2014

Geoff Brown

Firefox for Android Performance Measures – July check-up

My monthly review of Firefox for Android performance measurements. This month’s highlights:

- No significant regressions or improvements found!

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Firefox for Android, for Talos tests run on Android 4.0 Opt. The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcanvasmark

This test is not currently run on Android 4.0.

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

[Graph: tcheck2]

12 (start of period) – 12 (end of period)

The temporary regression of July 24 was caused by bug 1031107; resolved by bug 1044702.

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.

50000 (start of period) – 50000 (end of period)

tprovider

Performance of history and bookmarks’ provider. Reports time (ms) to perform a group of database operations. Lower values are better.

520 (start of period) – 520 (end of period).

tsvgx

An svg-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode thus reflecting the maximum rendering throughput of each test. The reported value is the page load time, or, for animations/iterations – overall duration the sequence/animation took to complete. Lower values are better.

6300 (start of period) – 6300 (end of period).

tp4m

Generic page load test. Lower values are better.

940 (start of period) – 940 (end of period).

ts_paint

Startup performance test. Lower values are better.

3600 (start of period) – 3650 (end of period).

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

[Graphs: time to throbber start and time to throbber stop, by device]

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

The Eideticker dashboard is slowly coming back to life, but there are still not enough results to show graphs here. We’ll check back at the end of August.


August 03, 2014 07:51 PM

August 01, 2014

Margaret Leibovic

Firefox for Android: Search Experiments

Search is a large part of mobile browser usage, so we (the Firefox for Android team) decided to experiment with ways to improve our search experience for users. As an initial goal, we decided to look into how we can make search faster. To explore this space, we’re about to enable two new features in Nightly: a search activity and a home screen widget.

Android allows apps to register to handle an “assist” intent, which is triggered by the swipe-up gesture on Nexus devices. We decided to hook into this intent to launch a quick, lightweight search experience for Firefox for Android users.

Right now we’re using Yahoo! to power search suggestions and results, but we have patches in the works to let users choose their own search engine. Tapping on results will launch users back into their normal Firefox for Android experience.

We also created a simple home screen widget to help users quickly launch this search activity even if they’re not using a Nexus device. As a bonus, this widget also lets users quickly open a new tab in Firefox for Android.

We are still in the early phases of design and development, so be prepared to see changes as we iterate to improve this search experience. We have a few telemetry probes in place to let us gather data on how people are using these new search features, but we’d also love to hear your feedback!

You can find links to relevant bugs on our project wiki page. As always, discussion about Firefox for Android development happens on the mobile-firefox-dev mailing list and in #mobile on IRC. And we’re always looking for new contributors if you’d like to get involved!

Special shout-out to our awesome intern Eric for leading the initial search activity development, as well as Wes for implementing the home screen widget.

August 01, 2014 09:10 PM

July 31, 2014

Lucas Rocha

The new TwoWayView

What if writing custom view recycling layouts was a lot simpler? This question has been stuck in my mind since I started writing Android apps a few years ago.

The lack of proper extension hooks in the AbsListView API has been one of my biggest pain points on Android. The community has come up with different layout implementations that were largely based on AbsListView‘s code but none of them really solved the framework problem.

So a few months ago, I finally set to work on a new API for TwoWayView that would provide a framework for custom view recycling layouts. I had made some good progress but then Google announced RecyclerView at I/O and everything changed.

At first sight, RecyclerView seemed to be an exact overlap with the new TwoWayView API. After some digging though, it became clear that RecyclerView was a superset of what I was working on. So I decided to embrace RecyclerView and rebuild TwoWayView on top of it.

The new TwoWayView is functional enough now. Time to get some early feedback. This post covers the upcoming API and the general-purpose layout managers that will ship with it.

Creating your own layouts

RecyclerView itself doesn’t actually do much. It implements the fundamental state handling around child views, touch events and adapter changes, then delegates the actual behaviour to separate components—LayoutManager, ItemDecoration, ItemAnimator, etc. This means that you still have to write some non-trivial code to create your own layouts.

LayoutManager is a low-level API. It simply gives you extension points to handle scrolling and layout. For most layouts, the general structure of a LayoutManager implementation is going to be very similar—recycle views out of parent bounds, add new views as the user scrolls, layout scrap list items, etc.

Wouldn’t it be nice if you could implement LayoutManagers with a higher-level API that was more focused on the layout itself? Enter the new TwoWayView API.

TwoWayLayoutManager is a simple API on top of LayoutManager that does all the laborious work for you so that you can focus on how the child views are measured, placed, and detached from the RecyclerView.

To get a better idea of what the API looks like, have a look at these sample layouts: SimpleListLayout is a list layout and GridAndListLayout is a more complex example where the first N items are laid out as a grid and the remaining ones behave like a list. As you can see, you only need to override a couple of simple methods to create your own layouts.

Built-in layouts

The new API is pretty nice but I also wanted to create a space for collaboration around general-purpose layout managers. So far, Google has only provided LinearLayoutManager. They might end up releasing a few more layouts later this year but, for now, that is all we got.

[Image: the built-in layouts]

The new TwoWayView ships with a collection of four built-in layouts: List, Grid, Staggered Grid, and Spannable Grid.

These layouts support all RecyclerView features: item animations, decorations, scroll to position, smooth scroll to position, view state saving, etc. They can all be scrolled vertically and horizontally—this is the TwoWayView project after all ;-)

You probably know how the List and Grid layouts work. Staggered Grid arranges items with variable heights or widths into different columns or rows according to its orientation.

Spannable Grid is a grid layout with fixed-size cells that allows items to span multiple columns and rows. You can define the column and row spans as attributes in the child views as shown below.

<FrameLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:colSpan="2"
    app:rowSpan="3">
    ...
</FrameLayout>

Utilities

The new TwoWayView API will ship with a convenience view (TwoWayView) that can take a layoutManager XML attribute that points to a layout manager class.

<org.lucasr.twowayview.TwoWayView
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:layoutManager="ListLayoutManager"/>

This way you can leverage the resource system to set the layout manager depending on device features and configuration via styles.

You can also use ItemClickSupport to add ListView-style item click and long-click listeners. You can easily plug support for those into any RecyclerView (see the sample).
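
A minimal sketch of how this might be wired up; the exact class and callback names here are assumptions based on the sample and may differ from the shipped API:

// Sketch only: treat these names as approximate.
ItemClickSupport itemClick = ItemClickSupport.addTo(mRecyclerView);
itemClick.setOnItemClickListener(new ItemClickSupport.OnItemClickListener() {
    @Override
    public void onItemClick(RecyclerView parent, View child, int position, long id) {
        // Handle a tap on the item at 'position', ListView-style.
    }
});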

I’m also planning to create pluggable item decorations for dividers, item spacing, list selectors, and more.


That’s all for now! The API is still in flux and will probably go through a few more iterations. The built-in layouts definitely need more testing.

You can help by filing (and fixing) bugs and giving feedback on the API. Maybe try using the built-in layouts in your apps and see what happens?

I hope TwoWayView becomes a productive collaboration space for RecyclerView extensions and layouts. Contributions are very welcome!

July 31, 2014 11:33 AM

July 30, 2014

Chris Peterson

Testing Add-on Compatibility With Multi-Process Firefox

“Electrolysis” (or “e10s” for short) is the project name for Mozilla’s multi-process Firefox. Sandboxing tabs into multiple processes will improve security and UI responsiveness. Firefox currently sandboxes plugins like Flash into a separate process, but sandboxing web content is more difficult because Firefox’s third-party add-ons were not designed for multiple processes. IE and Chrome use multiple processes today, but Google didn’t need to worry about add-on compatibility when designing Chrome’s multi-process sandbox because they didn’t have any. :)

And that’s where our Firefox Nightly testers come in! We can’t test every Firefox add-on ourselves. We’re asking for your help testing your favorite add-ons in Firefox Nightly’s multi-process mode. We’re tracking tested add-ons, those that work and those that need to be fixed, on the website arewee10syet.com (“Are We e10s Yet?”). Mozilla is hosting a QMO Testday on Friday August 1 where Mozilla QA and e10s developers will be available in Mozilla’s #testday IRC channel to answer questions.

To test an add-on:

  1. Install Firefox Nightly.
  2. Optional but recommended: create a new Firefox profile so you are testing the add-on without any other add-ons or old settings.
  3. Install the add-on you would like to test. See arewee10syet.com for some suggestions.
  4. e10s is disabled by default. Confirm that the add-on works as expected in Firefox Nightly before enabling e10s. You might find Firefox Nightly bugs that are not e10s’ fault. :)
  5. Now enable e10s by opening the about:config page and changing the browser.tabs.remote.autostart preference to true.
  6. Restart Firefox Nightly. When e10s is enabled, Firefox’s tab titles will be underlined. Tabs for special pages, like your home page or the new tab page, are not underlined, but tabs for most websites should be underlined.
  7. Confirm that the add-on still works as expected with e10s.
  8. To disable e10s, reset the browser.tabs.remote.autostart preference to false and restart Firefox.

Some e10s problems you might find include Firefox crashing or hanging. Add-ons that modify web page content, like Greasemonkey or AdBlock Plus, might appear to do nothing. But many add-ons will just work.

If the add-on works as expected, click the “it works” link on arewee10syet.com for that add-on or just email me so we can update our list of compatible add-ons.

If the add-on does not work as expected, click the add-on’s “Report bug” link on arewee10syet.com to file a bug report on Bugzilla. Please include the add-on’s name and version, steps to reproduce the problem, a description of what you expected to happen, and what actually happened. If Firefox crashed, include the most recent crash report IDs from about:crashes. If Firefox didn’t crash, copying the log messages from Firefox’s Browser Console (Tools menu > Web Developer menu > Browser Console menu item; not Web Console) to the bug might provide useful debugging information.

July 30, 2014 07:31 PM

Wes Johnston

Better tiles in Fennec

We recently reworked Firefox for Android‘s homescreen to look a little prettier on first run by shipping “tile” icons and colors for the default sites. In Firefox 33, we’re allowing sites to designate their own tile images and colors by supporting the msapplication-TileImage and msapplication-TileColor meta tags in Fennec. So, for example, you might start seeing tiles that look like:

appear as you browse. Sites can add these with just a little markup in the page:

<meta name="msapplication-TileImage" content="images/myimage.png"/>
<meta name="msapplication-TileColor" content="#d83434"/>

As you can see above in the Boston Globe tile, sometimes we don’t have much to work with. Firefox for Android already supports the sizes attribute on favicon links, and our fabulous intern Chris Kitching improved things even more last year. In the absence of a tile, we’ll show a screenshot. If you’ve designated that Firefox shouldn’t cache the content of your site for security reasons, we’ll use the most appropriate favicon size we can find and pull colors out of it for the background. But if sites provide us with this information directly, it 1) is much faster and 2) gives much better results.
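
As a rough illustration of the color-extraction idea (a naive sketch only, not Fennec’s actual implementation), averaging the icon’s mostly-opaque pixels is one way to pick a background color:

// Naive sketch: average the icon's mostly-opaque pixels to get a tile color.
public static int averageColor(Bitmap icon) {
    long r = 0, g = 0, b = 0, count = 0;
    for (int y = 0; y < icon.getHeight(); y++) {
        for (int x = 0; x < icon.getWidth(); x++) {
            int pixel = icon.getPixel(x, y);
            if (Color.alpha(pixel) < 128) {
                continue; // Skip mostly-transparent pixels.
            }
            r += Color.red(pixel);
            g += Color.green(pixel);
            b += Color.blue(pixel);
            count++;
        }
    }
    return count == 0 ? Color.WHITE
                      : Color.rgb((int) (r / count), (int) (g / count), (int) (b / count));
}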

AFAIK, there is no standard spec for these types of meta tags, and none in the works either. It’s a bit of the wild wild west right now. For instance, Apple supports apple-mobile-web-app-status-bar-style for designating the color of the status bar in certain situations, as well as a host of images for use in different situations.

Opera at one point supported using a “minimized” view-mode media query to designate a stylesheet for thumbnails (sadly they’ve removed all of those docs, so instead you just get a GitHub link to an HTML file there). Gecko doesn’t have view-mode media query support currently, and not many sites have implemented it anyway, but it might in the future provide a standards-based alternative. That said, there are enough useful reasons to know a “color” or a few different “logos” for an app or site that it might be useful to come up with some standards-based ways to list these things in pages.


July 30, 2014 04:30 PM

July 25, 2014

Mark Finkle

Firefox for Android: Collecting and Using Telemetry

Firefox 31 for Android is the first release where we collect telemetry data on user interactions. We created a simple “event” and “session” system, built on top of the current telemetry system that has been shipping in Firefox for many releases. The existing telemetry system is focused more on the platform features and tracking how various components are behaving in the wild. The new system is really focused on how people are interacting with the application itself.

Collecting Data

The basic system consists of two types of telemetry probes: events, which record discrete user actions in the UI, and sessions, which record the UI surface the person is in when those events happen.

We add the probes into any part of the application that we want to study, which is most of the application.
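
To make that concrete, here is a minimal sketch of what a probe call site could look like. The class and method names below are hypothetical and only illustrate the shape of an event/session API, not Fennec’s actual code:

// Hypothetical probe API, for illustration only.
public final class UITelemetry {
    // Record a discrete user action, e.g. ("action", "menu", "reload").
    public static void sendEvent(String action, String method, String extras) {
        // ... queue the event for inclusion in the telemetry ping ...
    }

    // Mark that a UI surface (e.g. "home.history") became active.
    public static void startSession(String name) { /* ... */ }

    // Mark that the UI surface was dismissed.
    public static void stopSession(String name) { /* ... */ }
}

// At a call site, e.g. in the menu handler for Reload:
// UITelemetry.sendEvent("action", "menu", "reload");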

Visualizing Data

The raw telemetry data is processed into summaries, one for Events and one for Sessions. In order to visualize the telemetry data, we created a simple dashboard (source code). It’s built using a great little library called PivotTable.js, which makes it easy to slice and dice the summary data. The dashboard has several predefined tables so you can start digging into various aspects of the data quickly. You can drag and drop the fields into the column or row headers to reorganize the table. You can also add filters to any of the fields, even those not used in the row/column headers. It’s a pretty slick library.

[Screenshot: UI telemetry dashboard]

Acting on Data

Now that we are collecting and studying the data, the goal is to find patterns that are unexpected or might warrant a closer inspection. Here are a few of the discoveries:

Page Reload: Even in our Nightly channel, people seem to be reloading the page quite a bit. Way more than we expected. It’s one of the Top 2 actions. Our current thinking includes several possibilities:

  1. Page gets stuck during a load and a Reload gets it going again
  2. Networking error of some kind, with a “Try again” button on the page. If the button does not solve the problem, a Reload might be attempted.
  3. Weather or some other update-able page where a Reload shows the current information.

We have started projects to explore the first two issues. The third issue might be fine as-is, or maybe we could add a feature to make updating pages easier? You can still see high uses of Reload (reload) on the dashboard.

Remove from Home Pages: The History page, primarily, and the Top Sites page see high uses of Remove (home_remove) to delete browsing information from the Home pages. People do this a lot; again, it’s one of the Top 2 actions. People will do this repeatedly, over and over, clearing the entire list in a manual fashion. Firefox has a Clear History feature, but it must not be very discoverable. We also see people asking for easier ways of clearing history in our feedback, but it wasn’t until we saw the telemetry data that we understood how badly this was needed. This led us to add some features:

  1. Since the History page was the predominant source of the Removes, we added a Clear History button right on the page itself.
  2. We added a way to Clear History when quitting the application. This was a bit tricky since Android doesn’t really promote “Quitting” applications, but if a person wants to enable this feature, we add a Quit menu item to make the action explicit and in their control.
  3. With so many people wanting to clear their browsing history, we assumed they didn’t know that Private Browsing existed. No history is saved when using Private Browsing, so we’re adding some contextual hinting about the feature.

These features are included in Nightly and Aurora versions of Firefox. Telemetry is showing a marked decrease in Remove usage, which is great. We hope to see the trend continue into Beta next week.

External URLs: People open a lot of URLs from external applications, like Twitter, into Firefox. This wasn’t totally unexpected (it’s a common pattern on Android), but the degree to which it happened versus opening the browser directly was somewhat unexpected. Close to 50% of the URLs loaded into Firefox come from external applications. The share is lower in Nightly, Aurora and Beta, but even those channels are at almost 30%. We have started looking into ideas for making the process of opening URLs into Firefox a better experience.

Saving Images: An unexpected discovery was how often people save images from web content (web_save_image). We haven’t spent much time considering this one. We think we are doing the “right thing” with the images as far as Android conventions are concerned, but there might be new features waiting to be implemented here as well.

Take a look at the data. What patterns do you see?

Here is the obligatory UI heatmap, also available from the dashboard:
[Image: UI telemetry heatmap]

July 25, 2014 03:08 AM

July 10, 2014

Geoff Brown

New try aliases “xpcshell” and “robocop”

We now have two new try aliases which will be of interest to some mobile developers.

Android 2.3 tests run xpcshell tests in 3 chunks, which can be specified in a try push:

try: … -u xpcshell-1,xpcshell-2,xpcshell-3

but since all other test platforms run xpcshell as a single chunk, it’s easy to forget about Android 2.3’s chunks and push something like:

try: -b o -p all -u xpcshell -t none

…and then wonder why xpcshell tests didn’t run for Android 2.3!

As of today, a new try alias recognizes “xpcshell” to mean “run all the xpcshell test chunks”.

Similarly, a new try alias recognizes “robocop” to mean “run all the robocop test chunks”.

An example: https://tbpl.mozilla.org/?tree=Try&rev=e52bcf945dcd

[Screenshot: try results showing the expanded xpcshell and robocop chunks]

How convenient!

(Of course, “-u xpcshell-1”, “-u robocop-2, robocop-3”, etc. still work and you should use them if you only need to run specific chunks.)

Thanks to :Callek and :RyanVM for making this happen.


July 10, 2014 04:55 PM

July 07, 2014

William Lachance

Measuring frames per second and animation smoothness with Eideticker

[ For more information on the Eideticker software I'm referring to, see this entry ]

Just wanted to write up a few notes on using Eideticker to measure animation smoothness, since this is a topic that comes up pretty often and I wind up explaining these things repeatedly. ;)

When rendering web content, we want the screen to update something like 60 times per second (typical refresh rate of an LCD screen) when an animation or other change is occurring. When this isn’t happening, there is often a user perception of jank (a.k.a. things not working as they should). Generally we express how well we measure up to this ideal by counting the number of “frames per second” that we’re producing. If you’re reading this, you’re probably already familiar with the concept in outline. If you want to know more, you can check out the wikipedia article which goes into more detail.

At an internal level, this concept matches up conceptually with what Gecko is doing. The graphics pipeline produces frames inside graphics memory, which is then sent to the LCD display (whether it be connected to a laptop or a mobile phone) to be viewed. By instrumenting the code, we can see how often this is happening, and whether it is occurring at the right frequency to reach 60 fps. My understanding is that we have at least some code which does exactly this, though I’m not 100% up to date on how accurate it is.

But even assuming the best internal system monitoring, Eideticker might still be useful because:

Unfortunately, deriving this sort of information from a video capture is more complicated than you’d expect.

What does frames per second even mean?

Given a set of N frames captured from the device, the immediate solution when it comes to “frames per second” is to just compare frames against each other (e.g. by comparing the value of individual pixels) and then count the ones that are different as “unique frames”. Divide the total number of unique frames by the length of the capture and… voila? Frames per second? Not quite.
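
To make the naive approach concrete, here is a small sketch (not Eideticker’s actual code) that counts “unique” frames in a directory of captured PNG frames and divides by the capture length:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class NaiveFps {
    // Returns true if any pixel differs between two same-sized frames.
    static boolean framesDiffer(BufferedImage a, BufferedImage b) {
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                if (a.getRGB(x, y) != b.getRGB(x, y)) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        // Usage: java NaiveFps <capture length in seconds>
        // Assumes a "frames" directory of same-sized PNGs named in capture order.
        double captureSeconds = Double.parseDouble(args[0]);
        File[] files = new File("frames").listFiles();
        java.util.Arrays.sort(files);

        int uniqueFrames = 1;
        BufferedImage previous = ImageIO.read(files[0]);
        for (int i = 1; i < files.length; i++) {
            BufferedImage current = ImageIO.read(files[i]);
            if (framesDiffer(previous, current)) {
                uniqueFrames++;
            }
            previous = current;
        }
        System.out.println("naive fps: " + (uniqueFrames / captureSeconds));
    }
}

The problems described next apply directly to this kind of calculation.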

First off, there’s the inherent problem that sometimes the expected behaviour of a test is for the screen to be unchanging for a period of time. For example, at the very beginning of a capture (when we are waiting for the input event to be acknowledged) and at the end (when we are waiting for things to settle). Second, it’s also easy to imagine the display remaining static for a period of time in the middle of a capture (say in between gestures in a multi-part capture). In these cases, there will likely be no observable change on the screen and thus the number of frames counted will be artificially low, skewing the frames per second number down.

Measurement problems

Ok, so you might not consider that class of problem that big a deal. Maybe we could just not consider the frames at the beginning or end of the capture. And for pauses in the middle… as long as we get an absolute number at the end, we’re fine right? That’s at least enough to let us know that we’re getting better or worse, assuming that whatever we’re testing is behaving the same way between runs and we’re just trying to measure how many frames hit the screen.

I might agree with you there, but there are further problems that are specific to measuring on-screen performance using a high-speed camera, as we currently do with FirefoxOS.

An LCD updates gradually, and not all at once. Remnants of previous frames will remain on screen long past their interval. Take for example these five frames (sampled at 120fps) from a capture of a pan down in the FirefoxOS Contacts application (movie):

[Image: five consecutive captured frames shown side by side]

Note how, if you look closely, these 5 frames are actually the intersection of *three* separate frames: one with “Adam Card” at the top, another with “Barbara Bloomquist” at the top, then another with “Barbara Bloomquist” even further up. Between each frame, artifacts of the previous one are clearly visible.

Plausible sounding solutions:

Personally the last solution appeals to me the most, although it has the obvious disadvantage of being a “homebrew” metric that no one has ever heard of before, which might make it difficult to use to prove that performance is adequate — the numbers come with a long-winded explanation instead of being something that people immediately understand.

July 07, 2014 04:13 PM

July 03, 2014

Geoff Brown

Firefox for Android Performance Measures – June check-up

My monthly review of Firefox for Android performance measurements. June highlights:

- Talos values tracked here switch to Android 4.0, rather than Android 2.2

- Talos regressions in tcheck2 and tsvgx

- small regression in time to throbber stop

- Eideticker still not reporting results.

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Native Fennec. In all of my previous posts, this section has tracked Talos for Android 2.2 Opt. This month, and going forward, I switch to Android 4.0 Opt, since the Android 2.2 Opt tests are being phased out. The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcanvasmark

This test is not currently run on Android 4.0.

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

[Graph: tcheck2]

6 (start of period) – 12 (end of period)

Regression of June 17 – bug 1026742.

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.

50000 (start of period) – 50000 (end of period)

There was a large temporary regression between June 12 and June 14 – bug 1026798.

tprovider

Performance of history and bookmarks’ provider. Reports time (ms) to perform a group of database operations. Lower values are better.

520 (start of period) – 520 (end of period).

tsvgx

An svg-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode thus reflecting the maximum rendering throughput of each test. The reported value is the page load time, or, for animations/iterations – overall duration the sequence/animation took to complete. Lower values are better.

6100 (start of period) – 6300 (end of period).

Regression of June 16 – bug 1026551.

tp4m

Generic page load test. Lower values are better.

940 (start of period) – 940 (end of period).

ts_paint

Startup performance test. Lower values are better.

3600 (start of period) – 3600 (end of period).

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

[Graphs: time to throbber start and time to throbber stop, by device]

“Time to throbber start” looks very flat for all devices, but “Time to throbber stop” has a slight upward trend, especially for nexus-s-2 — bug 1032249.

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

Eideticker results are still not available. We’ll check back at the end of July.


July 03, 2014 02:39 AM

June 27, 2014

William Lachance

End of Q2 Eideticker update: Flame tests, future plans

[ For more information on the Eideticker software I'm referring to, see this entry ]

Just wanted to give an update on where Eideticker is at the end of Q2 2014. The big news is that we’ve started to run startup tests against the Flame, the results of which are starting to appear on the dashboard:

[Graph: Eideticker contacts startup results on the Flame]

It is expected that these tests will provide a useful complement to the existing startup tests we’re running with b2gperf, in particular answering the “is this regression real?” question.

Pending work for Q3:

The above isn’t an exhaustive list: there’s much more that we have in mind for the future that’s not yet scheduled or defined well (e.g. get Eideticker reporting to Treeherder’s new performance module). If you have any questions or feedback on anything outlined above I’d love to hear it!

June 27, 2014 09:23 PM

June 11, 2014

William Lachance

Managing test manifests: ManifestDestiny -> manifestparser

Just wanted to make a quick announcement that ManifestDestiny, the python package we use internally here at Mozilla for declaratively managing lists of tests in Mochitest and other places, has been renamed to manifestparser. We kept the versioning the same (0.6), so the only thing you should need to change in your python package dependencies is a quick substitution of “ManifestDestiny” with “manifestparser”. We will keep ManifestDestiny around indefinitely on pypi, but only to make sure old stuff doesn’t break. New versions of the software will only be released under the name “manifestparser”.

Quick history lesson: “Manifest destiny” refers to a philosophy of exceptionalism and expansionism that was widely held by American settlers in the 19th century. The concept is considered offensive by some, as it was used to justify displacing and dispossessing Native Americans. Wikipedia’s article on the subject has a good summary if you want to learn more.

Here at Mozilla Tools & Automation, we’re most interested in creating software that everyone can feel good about depending on, so we agreed to rename it. When I raised this with my peers, there were no objections. I know these things are often the source of much drama in the free software world, but there’s really none to see here.

Happy manifest parsing!

June 11, 2014 02:44 PM

June 06, 2014

Mark Finkle

Firefox for Android: Casting videos and Roku support – Ready to test in Nightly

Firefox for Android Nightly builds now support casting HTML5 videos from a web page to a TV via a connected Roku streaming player. Using the system is simple, but it does require you to install a viewer application on your Roku device. Firefox support for the Roku viewer and the viewer itself are both currently pre-release. We’re excited to invite our Nightly channel users to help us test these new features, share feedback and file any bugs so we can continue to make improvements to performance and functionality.

Setup

To begin testing, first you’ll need to install the viewer application to your Roku. The viewer app, called Firefox for Roku Nightly, is currently a private channel. You can install it via this link: Firefox Nightly

Once installed, try loading this test page into your Firefox for Android Nightly browser: Casting Test

When Firefox has discovered your Roku, you should see the Media Control Bar with Cast and Play icons:

casting-onload

The Cast icon on the left of the video controls allows you to send the video to a device. You can also long-tap on the video to get the context menu, and cast from there too.

Hint: Make sure Firefox and the Roku are on the same Wifi network!

Once you have sent a video to a device, Firefox will display the Media Control Bar in the bottom of the application. This allows you to pause, play and close the video. You don’t need to stay on the original web page either. The Media Control Bar will stay visible as long as the video is playing, even as you change tabs or visit new web pages.

fennec-casting-pageaction-active

You’ll notice that Firefox displays an “active casting” indicator in the URL Bar when a video on the current web page is being cast to a device.

Limitations and Troubleshooting

Firefox currently limits casting to HTML5 video in H264 format. This is one of the formats most easily handled by Roku streaming players. We are working on other formats too.

Some web sites hide or customize the HTML5 video controls and some override the long-tap menu too. This can make starting to cast difficult, but the simple fallback is to start playing the video in the web page. If the video is H264 and Firefox can find your Roku, a “ready to cast” indicator will appear in the URL Bar. Just tap on that to start casting the video to your Roku.

If Firefox does not display the casting icons, it might be having a problem discovering your Roku on the network. Make sure your Android device and the Roku are on the same Wifi network. You can load about:devices into Firefox to see what devices Firefox has discovered.

This is a pre-release of video casting support. We need your help to test the system. Please remember to share your feedback and file any bugs. Happy testing!

June 06, 2014 03:45 PM

May 31, 2014

Mark Finkle

Firefox for Android: Your Feedback Matters!

Millions of people use Firefox for Android every day. It’s amazing to work on a product used by so many people. Unsurprisingly, some of those people send us feedback. We even have a simple system built into the application to make it easy to do. We have various systems to scan the feedback and look for trends. Sometimes, we even manually dig through the feedback for a given day. It takes time. There is a lot.

Your feedback is important and I thought I’d point out a few recent features and fixes that were directly influenced from feedback:

Help Menu
Some people have a hard time discovering features, or are not aware that Firefox supports some of the features they want. To make it easier to learn more about Firefox, we added a simple Help menu which directs you to SUMO, our online support system.

Managing Home Panels
Not everyone loves the Firefox Homepage (I do!), or more specifically, they don’t like some of the panels. We added a simple way for people to control the panels shown in Firefox’s Homepage. You can change the default panel. You can even hide all the panels. Use Settings > Customize > Home to get there.

Home panels

Improve Top Sites
The Top Sites panel in the Homepage is used by many people. At the same time, other people find that the thumbnails can reveal a bit too much of their browsing to others. We recently added support for respecting sites that might not want to be snapshot into thumbnails. In those cases, the thumbnail is replaced with a favicon and a favicon-influenced background color. The Facebook and Twitter thumbnails show the effect below:

fennec-private-thumbnails

We also added the ability to remove thumbnails using the long-tap menu.

Manage Search Engines
People also like to be able to manage their search engines. They like to switch the default. They like to hide some of the built-in engines. They like to add new engines. We have a simple system for managing search engines. Use Settings > Customize > Search to get there.

fennec-search-mgr

Clear History
We have a lot of feedback from people who want to clear their browsing history quickly and easily. We are not sure if the Settings > Privacy > Clear private data method is too hard to find or too time consuming to use, but it’s apparent people need other methods. We added a quick access method at the bottom of the History panel in the Homepage.

clear-history

We are also working on a Clear data on exit approach.

Quickly Switch to a Newly Opened Tab
When you long-tap on a link in a webpage, you get a menu that allows you to Open in New Tab or Open in New Private Tab. Both of those open the new tab in the background. Feedback indicates that some people really want to switch to the new tab. We already show an Android toast to let you know the tab was opened. Now we add a button to the toast allowing you to quickly switch to the tab too.

switch-to-tab

Undo Closing a Tab
Closing tabs can be awkward for people. Sometimes the [x] is too easy to hit by mistake or swiping to close is unexpected. In any case, we added the ability to undo closing a tab. Again, we use a button toast.

undo-close-tab

Offer to Setup Sync from Tabs Tray
We feel that syncing your desktop and mobile browsing data makes browsing on mobile devices much easier. Figuring out how to setup the Sync feature in Firefox might not be obvious. We added a simple banner to the Homepage to let you know the feature exists. We also added a setup entry point in the Sync area of the Tabs Tray.

fennec-setup-sync

We’ll continue to make changes based on your feedback, so keep sending it to us. Thanks for using Firefox for Android!

May 31, 2014 03:28 AM

May 30, 2014

Geoff Brown

Firefox for Android Performance Measures – May check-up

My monthly review of Firefox for Android performance measurements. May highlights:

- slight regressions in tcanvasmark and trobopan

- small regression in time to throbber stop

- Eideticker still not reporting results.

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Native Fennec (Android 2.2 opt). The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcanvasmark

This test runs the third-party CanvasMark benchmark suite, which measures the browser’s ability to render a variety of canvas animations at a smooth framerate as the scenes grow more complex. Results are a score “based on the length of time the browser was able to maintain the test scene at greater than 30 FPS, multiplied by a weighting for the complexity of each test type”. Higher values are better.

Image

6300 (start of period) – 5700 (end of period).

Regression of May 12 – bug 1009646.

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

9 (start of period) – 9 (end of period)

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.

Image

110000 (start of period) – 130000 (end of period)

This regression just happened today and has not triggered a Talos alert yet — I don’t have a bug number yet.

tprovider

Performance of history and bookmarks’ provider. Reports time (ms) to perform a group of database operations. Lower values are better.

425 (start of period) – 425 (end of period).

tsvgx

An svg-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode thus reflecting the maximum rendering throughput of each test. The reported value is the page load time, or, for animations/iterations – overall duration the sequence/animation took to complete. Lower values are better.

7300 (start of period) – 7300 (end of period).

tp4m

Generic page load test. Lower values are better.

750 (start of period) – 750 (end of period).

ts_paint

Startup performance test. Lower values are better.

3600 (start of period) – 3600 (end of period).

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

Image

Image

The improvement on May 2 was due to a change in the test setup (sut vs adb).

The small regression of May 11 is tracked in bug 1018463.

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

Eideticker results are still not available. We’ll check back at the end of June.


May 30, 2014 10:31 PM

May 22, 2014

Kartikaya Gupta

Cracking libxul

For a while now I've been wanting to take a look inside libxul to see why it's so big. In particular I wanted to know what the impact of using templates so heavily in our code was - things like nsTArray and nsRefPtr are probably used on hundreds of different types throughout our codebase. Last night I was having trouble sleeping so I decided to crack open libxul and see if I could figure it out. I didn't persist enough to get the exact answers I wanted, but I got close enough. It was also kind of fun and I figured I'd post about it partly as an educational thing and partly to inspire others to dig deeper into this.

First step: build libxul. I had a debug build on my Linux machine with recent gecko, so I just used the libxul.so from that.

Second step: disassemble libxul.

objdump -d libxul.so > libxul.disasm


Although I've looked at disassemblies before, I had to look at the file in vim a little bit to figure out the best way to parse it to get what I wanted, which was the size of every function defined in the library. This turned out to be a fairly simple awk script.

Third step: get function sizes. (snippet below is reformatted for easier reading)

awk 'BEGIN { addr=0; label="";}
     /:$/ && !/Disassembly of section/ { naddr = sprintf("%d", "0x" $1);
                                         print (naddr-addr), label;
                                         addr=naddr;
                                         label=$2 }'
    libxul.disasm > libxul.sizes


For those of you unfamiliar with awk, this identifies every line that ends in a colon, but doesn't have the text "Disassembly of section" (I determined this would be sufficient to match the line that starts off every function disassembly). It then takes the address (which is in hex in the dump), converts it to decimal, and subtracts the previous matching line's address from it, which gives the size of the preceding function. Finally it dumps out the size/name pairs. I inspected the file to make sure it looked ok, and removed a bad line at the top of the file (easier to fix it manually than fix the awk script).

Now that I had the size of each function, I did a quick sanity check to make sure it added up to a reasonable number:

awk '{ total += $1 } END { print total }' libxul.sizes
40263032


The value spit out is around 40 megs. This seemed to be in the right order of magnitude for code in libxul so I proceeded further.

Fourth step: see what's biggest!

sort -rn libxul.sizes | head -n 20
57984 <_ZL9InterpretP9JSContextRN2js8RunStateE>:
43798 <_ZN20nsHtml5AttributeName17initializeStaticsEv>:
41614 <_ZN22nsWindowMemoryReporter14CollectReportsEP25nsIMemoryReporterCallbackP11nsISupports>:
39792 <_Z7JS_Initv>:
32722 <vp9_fdct32x32_sse2>:
28674 <encode_mcu_huff>:
24365 <_Z7yyparseP13TParseContext>:
21800 <_ZN18nsHtml5ElementName17initializeStaticsEv>:
20558 <_ZN7mozilla3dom14PContentParent17OnMessageReceivedERKN3IPC7MessageE.part.1247>:
20302 <_ZN16nsHtml5Tokenizer9stateLoopI23nsHtml5ViewSourcePolicyEEiiDsiPDsbii>:
18367 <sctp_setopt>:
17900 <vp9_find_best_sub_pixel_comp_tree>:
16952 <_ZN7mozilla3dom13PBrowserChild17OnMessageReceivedERKN3IPC7MessageE>:
16096 <vp9_sad64x64x4d_sse2>:
15996 <_ZN7mozilla12_GLOBAL__N_119WebGLImageConverter3runILNS_16WebGLTexelFormatE17EEEvS3_NS_29WebGLTexelPremultiplicationOpE>:
15594 <_ZN7mozilla12_GLOBAL__N_119WebGLImageConverter3runILNS_16WebGLTexelFormatE16EEEvS3_NS_29WebGLTexelPremultiplicationOpE>:
14963 <vp9_idct32x32_1024_add_sse2>:
14838 <_ZN7mozilla12_GLOBAL__N_119WebGLImageConverter3runILNS_16WebGLTexelFormatE4EEEvS3_NS_29WebGLTexelPremultiplicationOpE>:
14792 <_ZN7mozilla12_GLOBAL__N_119WebGLImageConverter3runILNS_16WebGLTexelFormatE21EEEvS3_NS_29WebGLTexelPremultiplicationOpE>:
14740 <_ZN16nsHtml5Tokenizer9stateLoopI19nsHtml5SilentPolicyEEiiDsiPDsbii>:


That output looks reasonable. Top of the list is something to do with interpreting JS, followed by some HTML name static initializer thing. Guessing from the symbol names it seems like everything there would be pretty big. So far so good.

Fifth step: see how much space nsTArray takes up. As you can see above, the function names in the disassembly are mangled, and while I could spend some time trying to figure out how to demangle them it didn't seem particularly worth the time. Instead I just looked for symbols that started with nsTArray_Impl which by visual inspection seemed to match what I was looking for, and would at least give me a ballpark figure.

grep "<_ZN13nsTArray_Impl" libxul.sizes | awk '{ total += $1 } END { print total }'
377522


That's around 377k of stuff just to deal with nsTArray_Impl functions. You can compare that to the total libxul number and the largest functions listed above to get a sense of how much that is. I did the same for nsRefPtr and got 92k. Looking for ZNSt6vector, which I presume is the std::vector class, returned 101k.

That more or less answered the questions I had and gave me an idea of how much space was being used by a particular template class. I tried a few more things like grouping by the first 20 characters of the function name and summing up the sizes, but it didn't give particularly useful results. I had hoped it would approximate the total size taken up by each class but because of the variability in name lengths I would really need a demangler before being able to get that.

May 22, 2014 02:02 PM

May 20, 2014

Chris Peterson

JS Work Week 2014

Mozilla’s SpiderMonkey (JS) and Low-Level Tools engineering teams convened at Mozilla’s chilly Toronto office in March to plan our 2014 roadmap.

To start the week, we reviewed Mozilla’s 2014 organizational goals. If a Mozilla team is working on projects that do not advance the organization’s stated goals, then something is out of sync. The goals where the JS team can most effectively contribute are “Scale Firefox OS” (sell 10M Firefox OS phones) and “Get Firefox on a Growth Trajectory” (increase total users and hours of usage). Knowing that Mozilla plans to sell 10M Firefox OS phones helps us prioritize optimizations for Tarako (the $25 Firefox OS phone) over larger devices like Firefox OS tablets, TVs, or dishwashers.

Security was a hot topic after Mozilla’s recent beating in Pwn2Own 2014. Christian Holler (“decoder”) and Gary Kwong gave presentations on OOM and Windows fuzzing, respectively. Bill McCloskey discussed the current status of Electrolysis (e10s), Firefox’s multiprocess browser that will reduce UI jank and implement sandboxing to contain security exploits. e10s is currently available for testing in the Nightly channel; just select “File > New e10s Window” to open a new e10s window. (This works out of the box on OS X today, but requires an OMTC pref change on Windows and Linux.)

The ES6 spec is feature-frozen and should be signed off by the end of 2014. Jason Orendorff asked for help implementing remaining ES6 features like Modules and let/const scoping. Proposed improvements to Firefox’s web developer tools included live editing of code in the JS debugger and exposing JIT optimization feedback.

Thinker Lee and Ting-Yuan Huang, from Mozilla’s Firefox OS team in Taipei, presented some of the challenges they’ve faced with Tarako, a Firefox OS phone with only 128 MB RAM. They’re using zram to compress unused memory pages instead of paging them to flash storage. Thinker and Ting-Yuan had suggestions for tuning SpiderMonkey’s GC to avoid problems where the GC runs in background apps or inadvertently touches compressed zram pages.

Till Schneidereit led a brainstorming session about improving SpiderMonkey’s embedding API. Ideas included promoting SpiderMonkey as a scripting language solution for game engines (like 0 A.D.) or revisiting SpiderNode, a 2012 experiment to link Node.js with SpiderMonkey instead of V8. SpiderNode might be interesting for our testing or to Node developers who would like to use SpiderMonkey’s more extensive support for ES6 features or remote debugging tools. ES6 on the server doesn’t have the browser compatibility limitations that front-end web development does. The meeting notes and further discussion continued on the SpiderMonkey mailing list. New Mozilla contributor Sarat Adiraj soon posted his patches to revive SpiderNode in bug 1005411.

For the work week’s finale, Mozilla’s GC developers Terrence Cole, Steve Fink, and Jon Coppeard landed their generational garbage collector (GGC), a major redesign of SpiderMonkey’s GC. GGC will improve JS performance and lay the foundation for implementing a compacting GC to reduce JS memory usage later this year. GGC is riding the trains and should ship in Firefox 31 (July 2014).

May 20, 2014 06:53 AM

May 10, 2014

Fennec Nightly News

First-run Gets a Facelift

Nightly now has a cleaner, fresher looking first-run appearance. Compare the old (top) and new (bottom) looks:

May 10, 2014 04:41 PM

May 08, 2014

William Lachance

mozregression: New maintainer, issues tracked in bugzilla

Just wanted to give some quick updates on mozregression, your favorite regression-finding tool for Firefox:

  1. I moved all issue tracking in mozregression to bugzilla from github issues. Github unfortunately doesn’t really scale to handle notifications sensibly when you’re part of a large organization like Mozilla, which meant many problems were flying past me unseen. File your new bugs in bugzilla, they’re now much more likely to be acted upon.
  2. Sam Garrett has stepped up to be co-maintainer of the project with me. He’s been doing a great job whacking out a bunch of bugs and keeping things running reliably, and it was time to give him some recognition and power to keep things moving forward. :)
  3. On that note, I just released mozregression 0.17, which now shows the revision number when running a build (a request from the graphics team, bug 1007238) and handles respins of nightly builds correctly (bug 1000422). Both of these were fixed by Sam.

If you’re interested in contributing to Mozilla and are somewhat familiar with python, mozregression is a great place to start. The codebase is quite approachable and the impact will be high — as I’ve found out over the last few months, people all over the Mozilla organization (managers, developers, QA …) use it in the course of their work and it saves tons of their time. A list of currently open bugs is here.

May 08, 2014 10:31 PM

May 02, 2014

Brian Nicholson

FloatingHintEditText: An Android floating label implementation

 

Working on the Firefox for Android requestAutocomplete implementation, we’ve been brainstorming some ways to make the upcoming autofill UI pleasant to use. An early prototype of the payment edit dialog:

Empty EditText payment dialog

Payment dialog with empty EditText fields.

This seems innocuous enough — just a few standard EditTexts ready to be joyously filled in by the user. But here’s what that form looks like after it has been filled out:

Filled EditText payment dialog (before)

Payment dialog filled in, using standard EditText fields.

Notice how all of the hints have been replaced by our entered text. We’ve lost all the context of what we’re entering, and we’re left figuring out what the fields represent based on what we entered (the bottom row is particularly cryptic). I’d like to take this opportunity to re-emphasize that this is an early prototype; the astute reader might point out that we could use better visual cues to indicate what these fields are. For instance, we might show a “/” between the month and year to indicate a date, and we might put the CVC number directly next to the credit card number to more clearly relate the two fields. These layout flaws notwithstanding, the user is still required to make assumptions about the meaning of each field based on the layout and entered data.

Since most (hopefully, all) of our users have a memory span greater than one second, it may not even seem like an issue that the hints disappear — we probably won’t forget what we’re typing as we’re typing it. But these fields are stored for use later, meaning that if we try to edit this data in the future, we’ll jump right back to the screen above, and we won’t get to see those helpful hints. The only way to show a hint for a given field would be to erase the contents of that field, and that’s obviously a terrible user experience.

To fix this problem, we decided to go with a recently trending pattern: float labels. The concept, introduced by Matt Smith, uses the simple solution of “floating” the labels above the text to prevent the hint from simply disappearing. Here’s our same filled-out form using floating labels:

Filled EditText payment dialog (after)

Payment dialog filled in, using FloatingHintEditText fields.

Much better. The hints are always visible, so if we return to this same dialog in the future to edit our payment information, we don’t lose context about the fields we’re entering.

This custom View — FloatingHintEditText — is implemented as a subclass of EditText. By hooking into onDraw to get the canvas, we can paint our floating hint directly onto the EditText. To transition from the normal hint to the floating one, this implementation linearly interpolates the size/position for each draw frame to create an animation, calling invalidate() to immediately trigger each step.
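Just to make that concrete before you dig into the real source linked below, here is a rough, hypothetical sketch of the general approach (the class name, scale factor, and step count are all mine, not the library’s): the hint’s text size and baseline are interpolated between the normal and floating positions on each draw pass, with invalidate() scheduling the next frame.

import android.content.Context;
import android.graphics.Canvas;
import android.text.TextPaint;
import android.text.TextUtils;
import android.util.AttributeSet;
import android.widget.EditText;

// Hypothetical sketch of a floating-hint EditText; not the FloatingHintEditText source.
public class FloatingHintSketch extends EditText {
    private static final float FLOATING_SCALE = 0.6f; // assumed size of the floated hint
    private static final int STEPS = 8;               // assumed number of animation frames

    private final TextPaint mHintPaint = new TextPaint();
    private float mFraction;        // 0 = hint in its normal place, 1 = fully floated
    private boolean mWasEmpty = true;

    public FloatingHintSketch(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    @Override
    protected void onTextChanged(CharSequence text, int start, int before, int after) {
        super.onTextChanged(text, start, before, after);
        boolean isEmpty = TextUtils.isEmpty(text);
        if (mWasEmpty != isEmpty) {
            mWasEmpty = isEmpty;
            mFraction = 0f;   // restart the animation when the field gains its first character
            invalidate();
        }
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        if (getHint() == null || mWasEmpty) {
            return;           // while empty, EditText draws the normal hint itself
        }

        // Interpolate the hint's text size and baseline between the two states.
        float normalSize = getTextSize();
        float floatingSize = normalSize * FLOATING_SCALE;
        float size = normalSize + (floatingSize - normalSize) * mFraction;
        float normalY = getBaseline();
        float floatingY = getPaddingTop() + floatingSize;
        float y = normalY + (floatingY - normalY) * mFraction;

        mHintPaint.set(getPaint());
        mHintPaint.setTextSize(size);
        mHintPaint.setColor(getCurrentHintTextColor());
        canvas.drawText(getHint().toString(),
                getCompoundPaddingLeft() + getScrollX(), y, mHintPaint);

        if (mFraction < 1f) {
            mFraction = Math.min(1f, mFraction + 1f / STEPS);
            invalidate();     // schedule the next animation step
        }
    }
}

Driving the animation from onDraw() keeps everything inside the view itself; the trade-off is that the animation speed is tied to how often the view is redrawn.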

The code is available on GitHub; at under 150 lines, the implementation is fairly straightforward. You can also check out the screencast demo.

Some limitations:

Enjoy!

The post FloatingHintEditText: An Android floating label implementation appeared first on Brian Nicholson.

May 02, 2014 07:57 AM

April 30, 2014

Geoff Brown

Firefox for Android Performance Measures – April check-up

My monthly review of Firefox for Android performance measurements. April highlights:

- No Talos regressions, no Throbber Start/Stop regressions.

- tcheck2 improvement.

- Eideticker still not reporting results.

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Native Fennec (Android 2.2 opt). The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcanvasmark

This test runs the third-party CanvasMark benchmark suite, which measures the browser’s ability to render a variety of canvas animations at a smooth framerate as the scenes grow more complex. Results are a score “based on the length of time the browser was able to maintain the test scene at greater than 30 FPS, multiplied by a weighting for the complexity of each test type”. Higher values are better.

6300 (start of period) – 6300 (end of period).

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

Image

24 (start of period) – 9 (end of period)

Note significant improvement and noise reduction.

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.

110000 (start of period) – 110000 (end of period)

tprovider

Performance of history and bookmarks’ provider. Reports time (ms) to perform a group of database operations. Lower values are better.

425 (start of period) – 425 (end of period).

tsvgx

An svg-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode thus reflecting the maximum rendering throughput of each test. The reported value is the page load time, or, for animations/iterations – overall duration the sequence/animation took to complete. Lower values are better.

7300 (start of period) – 7300 (end of period).

ts_paint

Startup performance test. Lower values are better.

3600 (start of period) – 3600 (end of period).

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

Image

Image

No regressions noted this month.

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

Eideticker results are still not available. We’ll check back at the end of May.


April 30, 2014 11:21 PM

Margaret Leibovic

Firefox Hub Add-on Hackathon

For the past few months, the Firefox for Android team has been working on Firefox Hub, a project to make your home page more customizable and extensible. At its core, this feature is a set of new APIs that allows add-ons to add new content to the Firefox for Android home page.

These APIs are new in Firefox 30, but there are even more features available in Firefox 31, which is moving to Aurora this week. As we’ve been working on these APIs, we’ve been building plenty of demo add-ons ourselves, but we’re at the point where we want more developers to get involved!

Next week (May 5-9), we’re holding a distributed add-on hackathon to encourage more people to start building things with these APIs, and I want to encourage anyone who’s interested to participate!

This hackathon has three main goals:

To kick off this hackathon, I made an etherpad that links to documentation, example add-ons, and a long list of new add-on ideas. Since our community lives all over the world, we’ll hang out in #mobile on irc.mozilla.org to ask questions, report problems, and share progress on things we’re making. In addition to building add-ons, you can also participate by testing new add-ons as they’re made!

At the end of the week, everyone who participated in the hackathon will receive a special limited-edition open badge, as well as pride in contributing to Firefox for Android. And maybe I’ll try to dig up some special prizes for anyone who makes a really cool add-on :)

April 30, 2014 05:40 PM

April 22, 2014

William Lachance

PyCon 2014 impressions: ipython notebook is the future & more

This year’s PyCon US (Python Conference) was in my city of residence (Montréal) so I took the opportunity to go and see what was up in the world of the language I use the most at Mozilla. It was pretty great!

ipython

The highlight for me was learning about the possibilities of ipython notebooks, an absolutely fantastic interactive tool for debugging python in a live browser-based environment. I’d heard about it before, but it wasn’t immediately apparent how it would really improve things — it seemed to be just a less convenient interface to the python console that required me to futz around with my web browser. Watching a few presentations on the topic made me realize how wrong I was. It’s already changed the way I work with Eideticker data, for the better.

Using ipython to analyze some eideticker data

I think the basic premise is really quite simple: a better interface for typing in, experimenting with, and running python code. If you stop and think about it, the modern web interface supports a much richer vocabulary of interactive concepts than the console (or even text editors like emacs): there’s no reason we shouldn’t take advantage of it.

Here are the (IMO) killer features that make it worth using:

To learn more about how to use ipython notebooks for data analysis, I highly recommend Julia Evans’ talk Diving into Open Data with IPython Notebook & Pandas, which you can find on pyvideo.org.

Other Good Talks

I saw some other good talks at the conference, here are some of them:

April 22, 2014 09:36 PM

April 14, 2014

Ian Barlow

Notes from UX Immersion Mobile Conference 2014

Last week I was in Denver for a three day conference put on by User Interface Engineering. I met lots of great people, and the workshops and talks were fantastic. Would highly recommend to anyone looking for a good UX conference to attend.

http://uxim14.uie.com/

Brad Frost

Screen Shot 2014-04-13 at 10.22.59 AM

Screen Shot 2014-04-13 at 10.23.06 AM

Screen Shot 2014-04-13 at 10.23.13 AM

We don’t know what will be under the Christmas tree in two years, but that is what we need to design for.
Principles of Adaptive Design
Tools
Atomic Design

Screen Shot 2014-04-13 at 10.29.36 AM

More details on Atomic Design here: http://bradfrostweb.com/blog/post/atomic-web-design/

 


 

Ben Callahan

Screen Shot 2014-04-13 at 10.34.57 AM

Screen Shot 2014-04-13 at 6.52.15 PM

Dissecting Design

Part 1: Establish the Aesthetic

Use tools you are comfortable with to establish the aesthetic

 

Part 2: Solve the Problem

You best solve problems using tools you are fluent with

 

Part 3: Refine the Solution

Efficiency is key with refining a design solution

 

Group improvisation

Screen Shot 2014-04-13 at 10.38.29 AM

The fact is, there is no one way to design for screens. Every project is different. Every team is different. It’s interesting to look at it as a form of group improvisation, where everyone is contributing in the way that makes this particular project work.

“Group improvisation is a challenge. Aside from the weighty technical problem of collective coherent thinking, there is the very human, even social need for sympathy from all members to bend for the common result.”

Group Improvisation requires individuals on a team to be…

 

Ben’s Theory on Web Process

Create guidelines instead of rigid processes. “The amount of process required is inversely proportional to the skill, humility, and empathy of your team.”

More details on Dissecting Design here: http://seesparkbox.com/foundry/dissecting_design


 

Luke Wroblewski

Screen Shot 2014-04-13 at 10.41.26 AM

Mobile Growth

Mobile shopping in US

Paypal mobile payments

Mobile revenue

Screen Shot 2014-04-13 at 10.42.39 AM

We’ve only had about 6 years to figure out mobile design, vs 30 years of figuring out PCs. We have lots to learn. And more importantly, lots to unlearn.

On the hamburger menu
On the importance of good inputs

Screen Shot 2014-04-13 at 10.43.50 AM

On Startups
Idea: Preemptive customer service

They were watching the user logs, and when they saw bugs they fixed them before users complained, and then reached out to let them know they had fixed something. User feedback was 100% positive. Brilliant.

Screen Shot 2014-04-13 at 10.44.26 AM

 


 

 

Jared Spool

Designing Designers

Job interview test

Side comment about unintentional design: What happens when you spend time working with everything in the system *except* the user’s experience

The need for design talent is growing, massively. How do we staff it?

IBM is investing 100M to expand design business. 1000+ UX designers are going to IBM. This means all the big corporations are going to start hiring UX like crazy. How do we as the design community even staff that? Especially since today, all design unicorns are self taught.

How to become a design unicorn <3
  1. Train yourself
  2. Practice your skills
  3. Deconstruct as many designs as you can
  4. Seek out feedback (and listen to it)
  5. Teach others

 

It doesn’t happen like this in school, though.

Tying the education problem back to Unintentional Design. We focused so much on the system that we forgot what we were actually trying to do.

Changes to education system?

What if design school were more like medical education (which combines theory and craft)? This idea of pre-med, medical school, internships, residencies, and finally fellowships.

Changes to our workplace?

Jared is exploring this idea with The Center Centre — formerly known as the Unicorn Institute


 

Nate Schutta

JQuery Mobile Prototyping Workshop

“If a picture is worth a thousand words, how many meetings is a prototype worth?”

Useful links:


April 14, 2014 01:41 AM

April 01, 2014

Geoff Brown

Android 2.3 Opt tests on tbpl

Today, we started running some Android 2.3 Opt tests on tbpl:

Image

“Android 2.3 Opt” tests run on emulators running Android 2.3. The emulator is simply the Android arm emulator, taken from the Android SDK (version 18). The emulator runs a special build of Gingerbread (2.3.7), patched and built specifically to support our Android tests. The emulator is running on an aws ec2 host. Android 2.3 Opt runs one emulator at a time on a host (unlike the Android x86 emulator tests, which run up to 4 emulators concurrently on one ix host).

Android 2.3 Opt tests generally run slower than tests run on devices. We have found that tests will run faster on faster hosts; for instance, if we run the emulator on an aws m3.large instance (more memory, more cpu), mochitests run in about 1/3 of the time that they do currently, on m1.medium instances.

Reftests – plain reftests, js reftests, and crashtests – run particularly slowly. In fact, they take so long that we cannot run them to completion with a reasonable number of test chunks. We are investigating more and also considering the simple solution: running on different hosts.

We have no plans to run Talos tests on Android 2.3 Opt; we think there is limited value in running performance tests on emulators.

Android 2.3 Opt tests are supported on try — “try: -b o -p android …” You can also request that a slave be loaned to you for debugging more intense problems: https://wiki.mozilla.org/ReleaseEngineering/How_To/Request_a_slave. In my experience, these methods – try and slave loans – are more effective at reproducing test results than running an emulator locally: The host seems to affect the emulator’s behavior in significant and unpredictable ways.

Once the Android 2.3 Opt tests are running reliably, we hope to stop the corresponding tests on Android 2.2 Opt, reducing the burden on our old and limited population of Tegra boards.

As with any new test platform, we had to disable some tests to get a clean run suitable for tbpl. These are tracked in bug 979921.

There are also a few unresolved issues causing infrequent problems in active tests. These are tracked in bug 967704.


April 01, 2014 08:51 PM

March 31, 2014

Geoff Brown

Firefox for Android Performance Measures – March check-up

My monthly review of Firefox for Android performance measurements. March highlights:

- 3 throbber start/stop regressions

- Eideticker not reporting results for the last couple of weeks.

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Native Fennec (Android 2.2 opt). The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcanvasmark

This test runs the third-party CanvasMark benchmark suite, which measures the browser’s ability to render a variety of canvas animations at a smooth framerate as the scenes grow more complex. Results are a score “based on the length of time the browser was able to maintain the test scene at greater than 30 FPS, multiplied by a weighting for the complexity of each test type”. Higher values are better.

7200 (start of period) – 6300 (end of period).

Regression of March 5 – bug 980423 (disable skia-gl).

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

24 (start of period) – 24 (end of period)

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.

110000 (start of period) – 110000 (end of period)

tprovider

Performance of history and bookmarks’ provider. Reports time (ms) to perform a group of database operations. Lower values are better.

375 (start of period) – 425 (end of period).

Regression of March 29 – bug 990101. (test modified)

tsvgx

An svg-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode thus reflecting the maximum rendering throughput of each test. The reported value is the page load time, or, for animations/iterations – overall duration the sequence/animation took to complete. Lower values are better.

7600 (start of period) – 7300 (end of period).

tp4m

Generic page load test. Lower values are better.

710 (start of period) – 750 (end of period).

No specific regression identified.

ts_paint

Startup performance test. Lower values are better.

3600 (start of period) – 3600 (end of period).

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

3 regressions were reported this month: bug 980757, bug 982864, bug 986416.

:bc continued his work on noise reduction in March. Changes in the test setup have likely affected the phonedash graphs this month. We’ll check back at the end of April.

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

Eideticker results for the last couple of weeks are not available. We’ll check back at the end of April.


March 31, 2014 09:58 PM

Kartikaya Gupta

Brendan as CEO

I would not vote for Brendan if he were running for president. However I fully support him as CEO of Mozilla.

Why the difference? Simply because as Mozilla's CEO, his personal views on LGBT (at least what one can infer from monetary support to Prop 8) do not have any measurable chance of making any difference in what Mozilla does or Mozilla's mission. It's not like we're going to ship Firefox OS phones to everybody... except LGBT individuals. There's a zero chance of that happening.

From what I've read so far (and I would love to be corrected) it seems like people who are asking Brendan to step down are doing so as a matter of principle rather than a matter of possible consequence. They feel very strongly about LGBT equality, and rightly so. And therefore they do not want to see any person who is at all opposed to that cause take any position of power, as a general principle. This totally makes sense, and given two CEO candidates who are identical except for their views on LGBT issues, I too would pick the pro-LGBT one.

But that's not the situation we have. I don't know who the other CEO candidates are or were, but I can say with confidence that there's nobody else in the world who can match Brendan in some areas that are very relevant to Mozilla's mission. I don't know exactly what qualities we need in a CEO right now but I'm pretty sure that dedication and commitment to Mozilla's mission, as well as technical expertise, are going to be pretty high on that list. That's why I support Brendan as CEO despite his views.

If you're reading this, you are probably a strong supporter of Mozilla's mission. If you don't want Brendan as CEO because of his views, it's because you are being forced into making a tough choice - you have to choose between the "open web" affiliation on your personal identity and the "LGBT" affiliation on your personal identity. That's a hard choice for anybody, and I don't think anybody can fault you regardless of what you choose.

If you choose to go further and boycott Mozilla and Mozilla's products because of the CEO's views, you have a right to do that too. However I would like to understand how you think this will help with either the open web or LGBT rights. I believe that switching from Firefox to Chrome will not change Brendan or anybody else's views on LGBT rights, and will actively harm the open web. The only winner there is Google's revenue stream. If you disagree with this I would love to know why. You may wish to boycott Mozilla products as a matter of principle, and I can't argue with that. But please make sure that the benefit you gain from doing so outweighs the cost.

March 31, 2014 09:37 PM

Geoff Brown

Android 4.0 Debug tests on tbpl

Today, we started running some Android 4.0 Debug mochitests and js-reftests on tbpl.

Screenshot from 2014-03-31 12:13:10b

“Android 4.0 Debug” tests run on Pandaboards running Android 4.0, just like the existing “Android 4.0 Opt” tests which have been running for some time. Unlike the Opt tests, the Debug tests run debug builds, with more log messages than Opt and notably, assertions. The “complete logcats” can be very useful for these jobs — see Complete logcats for Android tests.

Other test suites – the remaining mochitests chunks, robocop, reftests, etc – run on Android 4.0 Debug only on the Cedar tree at this time. They mostly work, but have failures that make them too unreliable to run on trunk trees. Would you like to see more Android 4.0 Debug tests running? A few test failures are all that is blocking us from running the remainder of our test suites. See Bug 940068 for the list of known failures.

 


March 31, 2014 06:34 PM

March 23, 2014

Kartikaya Gupta

Javascript login shell

I was playing around with node.js this weekend, and I realized that it's not that hard to end up with a Javascript-based login shell. A basic one can be obtained by simply installing node.js and ShellJS on top of it. Add CoffeeScript to get a slightly less verbose syntax. For example:

kats@kgupta-pc shelljs$ coffee
coffee> require './global.js'
{}
coffee> ls()
[ 'LICENSE',
  'README.md',
  'bin',
  'global.js',
  'make.js',
  'package.json',
  'scripts',
  'shell.js',
  'src',
  'test' ]
coffee> cat 'global.js'
'var shell = require(\'./shell.js\');\nfor (var cmd in shell)\n  global[cmd] = shell[cmd];\n'
coffee> cp('global.js', 'foo.tmp')
undefined
coffee> cat 'foo.tmp'
'var shell = require(\'./shell.js\');\nfor (var cmd in shell)\n  global[cmd] = shell[cmd];\n'
coffee> rm 'foo.tmp'
undefined
coffee> 


Basically, if you're in a JS REPL (node.js or coffee) and you have access to functions that wrap shell utilities (which is what ShellJS provides some of) then you can use that setup as your login shell instead of bash or zsh or whatever else you might be using.

I'm a big fan of bash but I am sometimes frustrated with some things in it, such as hard-to-use variable manipulation and the fact that loops sometimes create subshells and make state manipulation hard. Being able to write scripts in JS instead of bash would solve that quite nicely. There are probably other use cases in which having a JS shell as your login shell would be quite handy.

March 23, 2014 03:46 PM

March 16, 2014

William Lachance

Upcoming travels to Japan and Taiwan

Just a quick note that I’ll shortly be travelling from the frozen land of Montreal, Canada to Japan and Taiwan over the next week, with no particular agenda other than to explore and meet people. If any Mozillians are interested in meeting up for food or drink, and discussion of FirefoxOS performance, Eideticker, entropy or anything else… feel free to contact me at wrlach@gmail.com.

Exact itinerary:

I will also be in Taipei the week of the March 31st, though I expect most of my time to be occupied with discussions/activities inside the Taipei office about FirefoxOS performance matters (the Firefox performance team is having a work week there, and I’m tagging along to talk about / hack on Eideticker and other automation stuff).

March 16, 2014 09:19 PM

March 15, 2014

Kartikaya Gupta

The Project Premortem

The procedure is simple: when the organization has almost come to an important decision but has not formally committed itself, [Gary] Klein proposes gathering for a brief session a group of individuals who are knowledgeable about the decision. The premise of the session is a short speech: "Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster."


(From Thinking, Fast and Slow, by Daniel Kahneman)


When I first read about this, it immediately struck me as a very simple but powerful way to mitigate failure. I was interested in trying it out, so I wrote a pre-mortem story for Firefox OS. The thing I wrote turned out to be more like something a journalist would write in some internet rag 5 years from now, but I found it very illuminating to go through the exercise and think of different ways in which we could fail, and to isolate the one I thought most likely.

In fact, I would really like to encourage more people to go through this exercise and have everybody post their results somewhere. I would love to read about what keeps you up at night with respect to the project, what you think our weak points are, and what we need to watch out for. By understanding each others' worries and fears, I feel that we can do a better job of accommodating them in our day-to-day work, and work together more cohesively to Get The Job Done (TM).

Please comment on this post if you are interested in trying this out. I would be very happy to coordinate stuff so that people write out their thoughts and submit them, and we post all the results together (even anonymously if so desired). That way nobody is biased by anybody else's writing.

March 15, 2014 02:33 AM

March 14, 2014

William Lachance

It’s all about the entropy

[ For more information on the Eideticker software I'm referring to, see this entry ]

So recently I’ve been exploring new and different methods of measuring things that we care about on FirefoxOS — like startup time or amount of checkerboarding. With Android, where we have a mostly clean signal, these measurements were pretty straightforward. Want to measure startup times? Just capture a video of Firefox starting, then compare the frames pixel by pixel to see how much they differ. When the pixels aren’t that different anymore, we’re “done”. Likewise, to measure checkerboarding we just calculated the areas of the screen where things were not completely drawn yet, frame-by-frame.

On FirefoxOS, where we’re using a camera to measure these things, it has not been so simple. I’ve already discussed this with respect to startup time in a previous post. One of the ideas I talk about there is “entropy” (or the amount of unique information in the frame). It turns out that this is a pretty deep concept, and is useful for even more things than I thought of at the time. Since this is probably a concept that people are going to be thinking/talking about for a while, it’s worth going into a little more detail about the math behind it.

The wikipedia article on information theoretic entropy is a pretty good introduction. You should read it. It all boils down to this formula:

wikipedia-entropy-formula
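For reference, the image above is just the standard Shannon entropy formula (the code later in this post uses base 2, so the values come out in bits):

H(X) = -\sum_i p(x_i) \log_2 p(x_i)

where p(x_i) is the probability of the i-th distinct value occurring in the sample.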

You can see this section of the wikipedia article (and the various articles that it links to) if you want to break down where that comes from, but the short answer is that given a set of random samples, the more different values there are, the higher the entropy will be. Look at it from a probabilistic point of view: take a random set of data and try to predict what future data will look like. If the data is highly random, it will be harder to predict what comes next. Conversely, if it is more uniform it is easier to predict what form it will take.

Another, possibly more accessible way of thinking about the entropy of a given set of data would be “how well would it compress?”. For example, a bitmap image with nothing but black in it could compress very well as there’s essentially only 1 piece of unique information in it repeated many times — the black pixel. On the other hand, a bitmap image of completely randomly generated pixels would probably compress very badly, as almost every pixel represents several dimensions of unique information. For all the statistics terminology, etc. that’s all the above formula is trying to say.

So we have a model of entropy, now what? For Eideticker, the question is — how can we break the frame data we’re gathering down into a form that’s amenable to this kind of analysis? The approach I took (on the recommendation of this article) was to create a histogram with 256 bins (representing the number of distinct possibilities in a black & white capture) out of all the pixels in the frame, then run the formula over that. The exact function I wound up using looks like this:


import math

import numpy
from scipy import ndimage

def _get_frame_entropy((i, capture, sobelized)):
    frame = capture.get_frame(i, True).astype('float')
    if sobelized:
        frame = ndimage.median_filter(frame, 3)

        dx = ndimage.sobel(frame, 0)  # horizontal derivative
        dy = ndimage.sobel(frame, 1)  # vertical derivative
        frame = numpy.hypot(dx, dy)  # magnitude
        frame *= 255.0 / numpy.max(frame)  # normalize (Q&D)

    histogram = numpy.histogram(frame, bins=256)[0]
    histogram_length = sum(histogram)
    samples_probability = [float(h) / histogram_length for h in histogram]
    entropy = -sum([p * math.log(p, 2) for p in samples_probability if p != 0])

    return entropy

[Context]

The “sobelized” bit allows us to optionally convolve the frame with a sobel filter before running the entropy calculation, which removes most of the data in the capture except for the edges. This is especially useful for FirefoxOS, where the signal has quite a bit of random noise from ambient lighting that artificially inflates the entropy values even in places where there is little actual “information”.

This type of transformation often reveals very interesting information about what’s going on in an eideticker test. For example, take this video of the user panning down in the contacts app:

If you graph the entropies of the frames of the capture using the formula above, you get a graph like this:

contacts scrolling entropy graph
[Link to original]

The Y axis represents entropy, as calculated by the code above. There is no inherently “right” value for this — it all depends on the application you’re testing and what you expect to see displayed on the screen. In general though, higher values are better as it indicates more frames of the capture are “complete”.

The region at the beginning where it is at about 5.0 represents the contacts app with a set of contacts fully displayed (at startup). The “flat” regions where the entropy is at roughly 4.25? Those are the areas where the app is “checkerboarding” (blanking out while waiting for the graphics or layout engine to draw contact information). Click through to the original and swipe over the graph to see what I mean.

It’s easy to see what a hypothetical ideal end state would be for this capture: a graph with a smooth entropy of about 5.0 (similar to the start state, where all contacts are fully drawn in). We can track our progress towards this goal (or our deviation from it), by watching the eideticker b2g dashboard and seeing if the summation of the entropy values for frames over the entire test increases or decreases over time. If we see it generally increase, that probably means we’re seeing less checkerboarding in the capture. If we see it decrease, that might mean we’re now seeing checkerboarding where we weren’t before.

It’s too early to say for sure, but over the past few days the trend has been positive:

entropy-levels-climbing
[Link to original]

(note that there were some problems in the way the tests were being run before, so results before the 12th should not be considered valid)

So one concept, at least two relevant metrics we can measure with it (startup time and checkerboarding). Are there any more? Almost certainly, let’s find them!

March 14, 2014 11:52 PM

March 13, 2014

Lucas Rocha

How Android transitions work

One of the biggest highlights of the Android KitKat release was the new Transitions framework which provides a very convenient API to animate UIs between different states.

The Transitions framework got me curious about how it orchestrates layout rounds and animations between UI states. This post documents my understanding of how transitions are implemented in Android after skimming through the source code for a bit. I’ve sprinkled a bunch of source code links throughout the post to make it easier to follow.

Although this post does contain a few development tips, this is not a tutorial on how to use transitions in your apps. If that’s what you’re looking for, I recommend reading Mark Allison’s tutorials on the topic.

With that said, let’s get started.

The framework

Android’s Transitions framework is essentially a mechanism for animating layout changes e.g. adding, removing, moving, resizing, showing, or hiding views.

The framework is built around three core entities: scene root, scenes, and transitions. A scene root is an ordinary ViewGroup that defines the piece of the UI on which the transitions will run. A scene is a thin wrapper around a ViewGroup representing a specific layout state of the scene root.

Finally, and most importantly, a transition is the component responsible for capturing layout differences and generating animators to switch UI states. The execution of any transition always follows these steps:

  1. Capture start state
  2. Perform layout changes
  3. Capture end state
  4. Run animators

The process as a whole is managed by the TransitionManager but most of the steps above (except for step 2) are performed by the transition. Step 2 might be either a scene switch or an arbitrary layout change.

How it works

Let’s take the simplest possible way of triggering a transition and go through what happens under the hood. So, here’s a little code sample:

TransitionManager.beginDelayedTransition(sceneRoot);

View child = sceneRoot.findViewById(R.id.child);
LayoutParams params = child.getLayoutParams();
params.width = 150;
params.height = 25;
child.setLayoutParams(params);

This code triggers an AutoTransition on the given scene root animating child to its new size.

The first thing the TransitionManager will do in beginDelayedTransition() is check whether there is a pending delayed transition on the same scene root, and just bail if there is one. This means only the first beginDelayedTransition() call within the same rendering frame will take effect.

Next, it will reuse a static AutoTransition instance. You could also provide your own transition using a variant of the same method. In any case, it will always clone the given transition instance to ensure a fresh start—consequently allowing you to safely reuse Transition instances across beginDelayedTransition() calls.

It then moves on to capturing the start state. If you set target view IDs on your transition, it will only capture values for those views. Otherwise it will capture the start state recursively for all views under the scene root. So, please, set target views on all your transitions, especially if your scene root is a deep container with tons of children.
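For reference, a quick sketch of what setting targets looks like, reusing the scene root and view ID from the earlier sample:

// Sketch: restrict the transition to specific target views so the framework
// doesn't capture start/end state for every view under the scene root.
Transition transition = new ChangeBounds();
transition.addTarget(R.id.child);
TransitionManager.beginDelayedTransition(sceneRoot, transition);
// ...then apply the layout change to R.id.child as before.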

An interesting detail here: the state capturing code in Transition has special treatment for ListViews using adapters with stable IDs. It will mark the ListView children as having transient state to prevent them from being recycled during the transition. This means you can very easily perform transitions when adding or removing items to/from a ListView. Just call beginDelayedTransition() before updating your adapter and an AutoTransition will do the magic for you (see this gist for a quick sample).
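A minimal sketch of that pattern, assuming an adapter whose hasStableIds() returns true and a backing list called mItems (the names are placeholders of mine):

// Sketch: assuming mAdapter reports stable IDs, wrapping an adapter update
// in a delayed transition animates the resulting ListView layout change.
TransitionManager.beginDelayedTransition(mListView);
mItems.remove(position);            // mItems backs mAdapter (placeholder names)
mAdapter.notifyDataSetChanged();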

The state of each view taking part in a transition is stored in TransitionValues instances, which are essentially a Map with an associated View. This is one part of the API that feels a bit hand-wavy. Maybe TransitionValues should have been better encapsulated?

Transition subclasses fill the TransitionValues instances with the bits of the View state that they’re interested in. The ChangeBounds transition, for example, will capture the view bounds (left, top, right, bottom) and location on screen.
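To make the TransitionValues contract a bit more concrete, here’s a bare-bones hypothetical transition that only animates alpha; the class is illustrative (not part of the framework) and the property key is an arbitrary string of my own choosing:

// Sketch: a minimal custom Transition that captures a single property
// (alpha) into TransitionValues and animates between start and end.
public class AlphaTransition extends Transition {
    private static final String PROP_ALPHA = "example:alphaTransition:alpha";

    private void captureValues(TransitionValues transitionValues) {
        transitionValues.values.put(PROP_ALPHA, transitionValues.view.getAlpha());
    }

    @Override
    public void captureStartValues(TransitionValues transitionValues) {
        captureValues(transitionValues);
    }

    @Override
    public void captureEndValues(TransitionValues transitionValues) {
        captureValues(transitionValues);
    }

    @Override
    public Animator createAnimator(ViewGroup sceneRoot,
                                   TransitionValues startValues,
                                   TransitionValues endValues) {
        if (startValues == null || endValues == null) {
            return null;
        }
        float startAlpha = (Float) startValues.values.get(PROP_ALPHA);
        float endAlpha = (Float) endValues.values.get(PROP_ALPHA);
        if (startAlpha == endAlpha) {
            return null;
        }
        // Animate from the captured start value to the captured end value.
        return ObjectAnimator.ofFloat(endValues.view, View.ALPHA, startAlpha, endAlpha);
    }
}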

Once the start state is fully captured, beginDelayedTransition() will exit whatever previous scene was set on the scene root, set the current scene to null (as this is not a scene switch), and finally wait for the next rendering frame.

TransitionManager waits for the next rendering frame by adding an OnPreDrawListener, which is invoked once all views have been properly measured and laid out, and are ready to be drawn on screen (Step 2). In other words, when the OnPreDrawListener is triggered, all the views involved in the transition have their target sizes and positions in the layout. This means it’s time to capture the end state (Step 3) for all of them, following the same logic as the start state capturing described before.

With both the start and end states for all views, the transition now has enough data to animate the views around. It will first update or cancel any running transitions for the same views and then create new animators with the new TransitionValues (Step 4).

The transition will use the start state for each view to “reset” the UI to its original state before animating the views to their end state. This is only possible because this code runs just before the next rendering frame is drawn, i.e. inside an OnPreDrawListener.

Finally, the animators are started in the defined order (together or sequentially) and magic happens on screen.

Switching scenes

The code path for switching scenes is very similar to beginDelayedTransition()—the main difference being in how the layout changes take place.

go() and transitionTo() differ only in how they get their transition instance: the former will just use an AutoTransition, while the latter will use the transition defined by the TransitionManager (e.g. via its toScene and fromScene attributes).

Maybe the most relevant aspect of scene transitions is that they effectively replace the contents of the scene root. When a new scene is entered, it will remove all views from the scene root and then add itself to it.

So make sure you update any class members (in your Activity, Fragment, custom View, etc.) holding view references when you switch to a new scene. You’ll also have to re-establish any dynamic state held by the previous scene. For example, if you loaded an image from the cloud into an ImageView in the previous scene, you’ll have to transfer this state to the new scene somehow.
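A short sketch of that, assuming this code runs in an Activity and using placeholder layout and view IDs of my own:

// Sketch: switch scenes with TransitionManager.go(). Because entering the
// scene replaces the scene root's children, any view references held before
// the switch are stale and must be looked up again.
Scene endScene = Scene.getSceneForLayout(sceneRoot, R.layout.scene_end, this);
TransitionManager.go(endScene, new ChangeBounds());

// Re-fetch references and re-apply any dynamic state (e.g. a loaded image).
ImageView photo = (ImageView) sceneRoot.findViewById(R.id.photo);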

Some extras

Here are some curious details in how certain transitions are implemented that are worth mentioning.

The ChangeBounds transition is interesting in that it animates, as the name implies, the view bounds. But how does it do this without triggering layouts between rendering frames? It animates the view frame (left, top, right, and bottom), which triggers size changes as you’d expect. But the view frame is reset on every layout() call, which could make the transition unreliable. ChangeBounds avoids this problem by suppressing layout changes on the scene root while the transition is running.

The Fade transition fades views in or out according to their presence and visibility between layout or scene changes. Among other things, it fades out the views that have been removed from the scene root, which is an interesting detail because those views will not be in the tree anymore on the next rendering frame. Fade temporarily adds the removed views to the scene root’s overlay to animate them out during the transition.
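The same overlay mechanism is available to application code via ViewGroupOverlay (API 18+). Here’s a rough sketch of mine (not how Fade itself is implemented) of fading out a view you’re removing by hand:

// Sketch: fade out a removed view via its parent's overlay, roughly the same
// trick Fade relies on (ViewGroupOverlay is available from API 18).
private void removeWithFade(final ViewGroup parent, final View child) {
    parent.removeView(child);
    parent.getOverlay().add(child);   // keep the view drawn on top of the parent

    child.animate()
         .alpha(0f)
         .setDuration(200)
         .withEndAction(new Runnable() {
             @Override
             public void run() {
                 parent.getOverlay().remove(child);
                 child.setAlpha(1f);   // reset in case the view is reused later
             }
         });
}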

Wrapping up

The overall architecture of the Transitions framework is fairly simple—most of the complexity is in the Transition subclasses to handle all types of layout changes and edge cases.

The trick of capturing start and end states before and after an OnPreDrawListener can easily be applied outside the Transitions framework, within the limitations of not having access to certain private APIs such as ViewGroup’s suppressLayout().

As a quick exercise, I sketched a LinearLayout that animates layout changes using the same technique—it’s just a quick hack, don’t use it in production! The same idea could be applied to implement transitions in a backwards compatible way for pre-KitKat devices, among other things.
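As a rough illustration of the same trick applied by hand (my own sketch, with no layout suppression, so a re-layout during the animation can still interfere): record the start position, let the layout change happen, then animate from the old position on the next pre-draw pass.

// Sketch: call this right before making a layout change that moves `view`.
// The OnPreDrawListener fires after measure/layout, when the view already
// has its end position, so we can animate from the recorded start position.
private void animateNextLayoutChange(final View view) {
    final int startLeft = view.getLeft();
    final int startTop = view.getTop();

    view.getViewTreeObserver().addOnPreDrawListener(
            new ViewTreeObserver.OnPreDrawListener() {
        @Override
        public boolean onPreDraw() {
            view.getViewTreeObserver().removeOnPreDrawListener(this);

            // The view now has its end position; animate from the old one.
            int dx = startLeft - view.getLeft();
            int dy = startTop - view.getTop();
            view.setTranslationX(dx);
            view.setTranslationY(dy);
            view.animate().translationX(0f).translationY(0f).setDuration(200);
            return true;
        }
    });
}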

That’s all for now. I hope you enjoyed it!

March 13, 2014 02:15 PM

March 09, 2014

William Lachance

Eideticker for FirefoxOS: Becoming more useful

[ For more information on the Eideticker software I'm referring to, see this entry ]

Time for a long overdue eideticker-for-firefoxos update. Last time we were here (almost 5 months ago! Man, time flies), I was discussing methodologies for measuring startup performance. Since then, Dave Hunt and I have been doing lots of work to make Eideticker more robust and useful. Notably, we now have a setup in London running a suite of Eideticker tests on the latest version of FirefoxOS on the Inari on a daily basis, reporting to http://eideticker.mozilla.org/b2g.

b2g-contacts-startup-dashboard

There were more than a few false starts, and some of the earlier data is not entirely to be trusted… but it now seems to be chugging along nicely, hopefully providing startup numbers that serve as a useful counterpoint to the datazilla startup numbers we’ve already been collecting for some time. There still seem to be some minor problems, but in general I am becoming more and more confident in it as time goes on.

One feature that I am particularly proud of is the detail view, which enables you to see frame-by-frame what’s going on. Click on any datapoint on the graph, then open up the view that gives an account of what eideticker is measuring. Hover over the graph and you can see what the video looks like at any point in the capture. This not only lets you know that something regressed, but how. For example, in the messages app, you can scan through this view to see exactly when the first message shows up, and what exact state the application is in when Eideticker says it’s “done loading”.

Capture Detail View
[link to original]

(apologies for the low quality of the video — should be fixed with this bug next week)

As it turns out, this view has also proven to be particularly useful when working with the new entropy measurements in Eideticker which I’ve been using to measure checkerboarding (redraw delay) on FirefoxOS. More on that next week.

March 09, 2014 08:31 PM

March 03, 2014

Geoff Brown

Firefox for Android Performance Measures – February check-up

My monthly review of Firefox for Android performance measurements.

February highlights:

- Regressions in tcanvasmark, tcheck2, and tsvgx; improvement in ts-paint.

- Improvements in some eideticker startup measurements.

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Native Fennec (Android 2.2 opt). The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcanvasmark

This test runs the third-party CanvasMark benchmark suite, which measures the browser’s ability to render a variety of canvas animations at a smooth framerate as the scenes grow more complex. Results are a score “based on the length of time the browser was able to maintain the test scene at greater than 30 FPS, multiplied by a weighting for the complexity of each test type”. Higher values are better.

7800 (start of period) – 7200 (end of period).

Regression on Feb 19 – bug 978958.

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with a page. Lower values are better.

tcheck2

2.7 (start of period) – 24 (end of period)

Regression on Feb 25 – bug 976563.

trobopan

Panning performance test. The value is the square of frame delays (in ms, greater than 25 ms) encountered while panning. Lower values are better.

110000 (start of period) – 110000 (end of period)

tprovider

Performance of the history and bookmarks provider. Reports the time (ms) to perform a group of database operations. Lower values are better.

375 (start of period) – 375 (end of period).

tsvgx

An SVG-only test that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode, thus reflecting the maximum rendering throughput of each test. The reported value is the page load time or, for animations/iterations, the overall duration the sequence/animation took to complete. Lower values are better.

7500 (start of period) – 7600 (end of period).

This test both improved and regressed slightly over the month, for a slight overall regression. Bug 978878.

tp4m

Generic page load test. Lower values are better.

700 (start of period) – 710 (end of period).

No specific regression identified.

ts_paint

Startup performance test. Lower values are better.

4300 (start of period) – 3600 (end of period).

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

“Time to throbber start” measures the time from process launch to the start of the throbber animation. Smaller values are better.

throbberstart

“Time to throbber stop” measures the time from process launch to the end of the throbber animation. Smaller values are better.

throbberstop

:bc has been working on reducing noise in these results — notice the improvement. And there is more to come!

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

There is some improvement from last month’s startup measurements:

eide1

eide2

eide3

awsy

See https://www.areweslimyet.com/mobile/ for content and background information.

I did not notice any big changes this month.


March 03, 2014 09:51 PM