Planet Firefox Mobile

April 22, 2014

William Lachance

PyCon 2014 impressions: ipython notebook is the future & more

This year’s PyCon US (Python Conference) was in my city of residence (Montréal) so I took the opportunity to go and see what was up in the world of the language I use the most at Mozilla. It was pretty great!

ipython

The highlight for me was learning about the possibilities of ipython notebooks, an absolutely fantastic interactive tool for debugging python in a live browser-based environment. I’d heard about it before, but it wasn’t immediately apparent how it would really improve things — it seemed to be just a less convenient interface to the python console that required me to futz around with my web browser. Watching a few presentations on the topic made me realize how wrong I was. It’s already changed the way I work with Eideticker data, for the better.

Using ipython to analyze some eideticker data

I think the basic premise is really quite simple: a better interface for typing in, experimenting with, and running python code. If you stop and think about it, the modern web interface supports a much richer vocabulary of interactive concepts than the console (or even text editors like emacs): there’s no reason we shouldn’t take advantage of it.

Here are the (IMO) killer features that make it worth using:

To learn more about how to use ipython notebooks for data analysis, I highly recommend Julia Evans’ talk Diving into Open Data with IPython Notebook & Pandas, which you can find on pyvideo.org.

Other Good Talks

I saw some other good talks at the conference; here are some of them:

April 22, 2014 09:36 PM

April 14, 2014

Ian Barlow

Notes from UX Immersion Mobile Conference 2014

Last week I was in Denver for a three-day conference put on by User Interface Engineering. I met lots of great people, and the workshops and talks were fantastic. I’d highly recommend it to anyone looking for a good UX conference to attend.

http://uxim14.uie.com/

Brad Frost

We don’t know what will be under the Christmas tree in two years, but that is what we need to design for.
Principles of Adaptive Design
Tools
Atomic Design

More details on Atomic Design here: http://bradfrostweb.com/blog/post/atomic-web-design/

Ben Callahan

Dissecting Design

Part 1: Establish the Aesthetic

Use tools you are comfortable with to establish the aesthetic

Part 2: Solve the Problem

You best solve problems using tools you are fluent with

Part 3: Refine the Solution

Efficiency is key with refining a design solution

Group improvisation

The fact is, there is no one way to design for screens. Every project is different. Every team is different. It’s interesting to look at it as a form of group improvisation, where everyone is contributing in the way that makes this particular project work.

“Group improvisation is a challenge. Aside from the weighty technical problem of collective coherent thinking, there is the very human, even social need for sympathy from all members to bend for the common result.”

Group Improvisation requires individuals on a team to be…

Ben’s Theory on Web Process

Create guidelines instead of rigid processes. “The amount of process required is inversely proportional to the skill, humility, and empathy of your team.”

More details on Dissecting Design here: http://seesparkbox.com/foundry/dissecting_design

Luke Wroblewski

Mobile Growth

Mobile shopping in US

Paypal mobile payments

Mobile revenue

We’ve only had about 6 years to figure out mobile design, vs 30 years of figuring out PCs. We have lots to learn. And more importantly, lots to unlearn.

On the hamburger menu
On the importance of good inputs

On Startups
Idea: Preemptive customer service

They were watching the user logs, and when they saw bugs they fixed them before users complained, and then reached out to let them know they had fixed something. User feedback was 100% positive. Brilliant.

Jared Spool

Designing Designers

Job interview test

Side comment about unintentional design: What happens when you spend time working with everything in the system *except* the user’s experience

The need for design talent is growing, massively. How do we staff it?

IBM is investing $100M to expand its design business, with 1000+ UX designers going to IBM. This means all the big corporations are going to start hiring UX like crazy. How do we as the design community even staff that? Especially since today, all design unicorns are self-taught.

How to become a design unicorn <3
  1. Train yourself
  2. Practice your skills
  3. Deconstruct as many designs as you can
  4. Seek out feedback (and listen to it)
  5. Teach others

It doesn’t happen like this in school, though.

Tying the education problem back to Unintentional Design. We focused so much on the system that we forgot what we were actually trying to do.

Changes to education system?

What if design school were more like medical education, which combines theory and craft? Think pre-med, medical school, internships, residencies, and finally fellowships.

Changes to our workplace?

Jared is exploring this idea with The Center Centre — formerly known as the Unicorn Institute

Nate Schutta

JQuery Mobile Prototyping Workshop

“If a picture is worth a thousand words, how many meetings is a prototype worth?”

Useful links:


April 14, 2014 01:41 AM

April 01, 2014

Geoff Brown

Android 2.3 Opt tests on tbpl

Today, we started running some Android 2.3 Opt tests on tbpl.

“Android 2.3 Opt” tests run on emulators running Android 2.3. The emulator is simply the Android ARM emulator, taken from the Android SDK (version 18). The emulator runs a special build of Gingerbread (2.3.7), patched and built specifically to support our Android tests. The emulator runs on an AWS EC2 host. Android 2.3 Opt runs one emulator at a time on a host (unlike the Android x86 emulator tests, which run up to 4 emulators concurrently on one ix host).

Android 2.3 Opt tests generally run slower than tests run on devices. We have found that tests run faster on faster hosts; for instance, if we run the emulator on an AWS m3.large instance (more memory, more CPU), mochitests run in about 1/3 of the time that they do currently on m1.medium instances.

Reftests – plain reftests, js reftests, and crashtests – run particularly slowly. In fact, they take so long that we cannot run them to completion with a reasonable number of test chunks. We are investigating more and also considering the simple solution: running on different hosts.

We have no plans to run Talos tests on Android 2.3 Opt; we think there is limited value in running performance tests on emulators.

Android 2.3 Opt tests are supported on try — “try: -b o -p android …” You can also request that a slave be loaned to you for debugging more intense problems: https://wiki.mozilla.org/ReleaseEngineering/How_To/Request_a_slave. In my experience, these methods – try and slave loans – are more effective at reproducing test results than running an emulator locally: The host seems to affect the emulator’s behavior in significant and unpredictable ways.

Once the Android 2.3 Opt tests are running reliably, we hope to stop the corresponding tests on Android 2.2 Opt, reducing the burden on our old and limited population of Tegra boards.

As with any new test platform, we had to disable some tests to get a clean run suitable for tbpl. These are tracked in bug 979921.

There are also a few unresolved issues causing infrequent problems in active tests. These are tracked in bug 967704.


April 01, 2014 08:51 PM

March 31, 2014

Geoff Brown

Firefox for Android Performance Measures – March check-up

My monthly review of Firefox for Android performance measurements. March highlights:

- 3 throbber start/stop regressions

- Eideticker not reporting results for the last couple of weeks.

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Native Fennec (Android 2.2 opt). The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcanvasmark

This test runs the third-party CanvasMark benchmark suite, which measures the browser’s ability to render a variety of canvas animations at a smooth framerate as the scenes grow more complex. Results are a score “based on the length of time the browser was able to maintain the test scene at greater than 30 FPS, multiplied by a weighting for the complexity of each test type”. Higher values are better.

7200 (start of period) – 6300 (end of period).

Regression of March 5 – bug 980423 (disable skia-gl).

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

24 (start of period) – 24 (end of period)

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.

110000 (start of period) – 110000 (end of period)

tprovider

Performance of the history and bookmarks provider. Reports time (ms) to perform a group of database operations. Lower values are better.

375 (start of period) – 425 (end of period).

Regression of March 29 – bug 990101. (test modified)

tsvgx

An svg-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode thus reflecting the maximum rendering throughput of each test. The reported value is the page load time, or, for animations/iterations – overall duration the sequence/animation took to complete. Lower values are better.

7600 (start of period) – 7300 (end of period).

tp4m

Generic page load test. Lower values are better.

710 (start of period) – 750 (end of period).

No specific regression identified.

ts_paint

Startup performance test. Lower values are better.

3600 (start of period) – 3600 (end of period).

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

3 regressions were reported this month: bug 980757, bug 982864, bug 986416.

:bc continued his work on noise reduction in March. Changes in the test setup have likely affected the phonedash graphs this month. We’ll check back at the end of April.

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

Eideticker results for the last couple of weeks are not available. We’ll check back at the end of April.


March 31, 2014 09:58 PM

Kartikaya Gupta

Brendan as CEO

I would not vote for Brendan if he were running for president. However I fully support him as CEO of Mozilla.

Why the difference? Simply because as Mozilla's CEO, his personal views on LGBT issues (at least what one can infer from his monetary support of Prop 8) do not have any measurable chance of making any difference in what Mozilla does or Mozilla's mission. It's not like we're going to ship Firefox OS phones to everybody... except LGBT individuals. There's a zero chance of that happening.

From what I've read so far (and I would love to be corrected) it seems like people who are asking Brendan to step down are doing so as a matter of principle rather than a matter of possible consequence. They feel very strongly about LGBT equality, and rightly so. And therefore they do not want to see any person who is at all opposed to that cause take any position of power, as a general principle. This totally makes sense, and given two CEO candidates who are identical except for their views on LGBT issues, I too would pick the pro-LGBT one.

But that's not the situation we have. I don't know who the other CEO candidates are or were, but I can say with confidence that there's nobody else in the world who can match Brendan in some areas that are very relevant to Mozilla's mission. I don't know exactly what qualities we need in a CEO right now but I'm pretty sure that dedication and commitment to Mozilla's mission, as well as technical expertise, are going to be pretty high on that list. That's why I support Brendan as CEO despite his views.

If you're reading this, you are probably a strong supporter of Mozilla's mission. If you don't want Brendan as CEO because of his views, it's because you are being forced into making a tough choice - you have to choose between the "open web" affiliation on your personal identity and the "LGBT" affiliation on your personal identity. That's a hard choice for anybody, and I don't think anybody can fault you regardless of what you choose.

If you choose to go further and boycott Mozilla and Mozilla's products because of the CEO's views, you have a right to do that too. However I would like to understand how you think this will help with either the open web or LGBT rights. I believe that switching from Firefox to Chrome will not change Brendan or anybody else's views on LGBT rights, and will actively harm the open web. The only winner there is Google's revenue stream. If you disagree with this I would love to know why. You may wish to boycott Mozilla products as a matter of principle, and I can't argue with that. But please make sure that the benefit you gain from doing so outweighs the cost.

March 31, 2014 09:37 PM

Geoff Brown

Android 4.0 Debug tests on tbpl

Today, we started running some Android 4.0 Debug mochitests and js-reftests on tbpl.

“Android 4.0 Debug” tests run on Pandaboards running Android 4.0, just like the existing “Android 4.0 Opt” tests which have been running for some time. Unlike the Opt tests, the Debug tests run debug builds, with more log messages than Opt and notably, assertions. The “complete logcats” can be very useful for these jobs — see Complete logcats for Android tests.

Other test suites – the remaining mochitests chunks, robocop, reftests, etc – run on Android 4.0 Debug only on the Cedar tree at this time. They mostly work, but have failures that make them too unreliable to run on trunk trees. Would you like to see more Android 4.0 Debug tests running? A few test failures are all that is blocking us from running the remainder of our test suites. See Bug 940068 for the list of known failures.

 


March 31, 2014 06:34 PM

March 23, 2014

Kartikaya Gupta

Javascript login shell

I was playing around with node.js this weekend, and I realized that it's not that hard to end up with a Javascript-based login shell. A basic one can be obtained by simply installing node.js and ShellJS on top of it. Add CoffeeScript to get a slightly less verbose syntax. For example:

kats@kgupta-pc shelljs$ coffee
coffee> require './global.js'
{}
coffee> ls()
[ 'LICENSE',
  'README.md',
  'bin',
  'global.js',
  'make.js',
  'package.json',
  'scripts',
  'shell.js',
  'src',
  'test' ]
coffee> cat 'global.js'
'var shell = require(\'./shell.js\');\nfor (var cmd in shell)\n  global[cmd] = shell[cmd];\n'
coffee> cp('global.js', 'foo.tmp')
undefined
coffee> cat 'foo.tmp'
'var shell = require(\'./shell.js\');\nfor (var cmd in shell)\n  global[cmd] = shell[cmd];\n'
coffee> rm 'foo.tmp'
undefined
coffee> 


Basically, if you're in a JS REPL (node.js or coffee) and you have access to functions that wrap shell utilities (which is what ShellJS provides), then you can use that setup as your login shell instead of bash or zsh or whatever else you might be using.

I'm a big fan of bash but I am sometimes frustrated with some things in it, such as hard-to-use variable manipulation and the fact that loops sometimes create subshells and make state manipulation hard. Being able to write scripts in JS instead of bash would solve that quite nicely. There are probably other use cases in which having a JS shell as your login shell would be quite handy.

March 23, 2014 03:46 PM

March 16, 2014

William Lachance

Upcoming travels to Japan and Taiwan

Just a quick note that I’ll shortly be travelling from the frozen land of Montreal, Canada to Japan and Taiwan over the next week, with no particular agenda other than to explore and meet people. If any Mozillians are interested in meeting up for food or drink, and discussion of FirefoxOS performance, Eideticker, entropy or anything else… feel free to contact me at wrlach@gmail.com.

Exact itinerary:

I will also be in Taipei the week of March 31st, though I expect most of my time to be occupied with discussions/activities inside the Taipei office about FirefoxOS performance matters (the Firefox performance team is having a work week there, and I’m tagging along to talk about / hack on Eideticker and other automation stuff).

March 16, 2014 09:19 PM

March 15, 2014

Kartikaya Gupta

The Project Premortem

The procedure is simple: when the organization has almost come to an important decision but has not formally committed itself, [Gary] Klein proposes gathering for a brief session a group of individuals who are knowledgeable about the decision. The premise of the session is a short speech: "Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster."


(From Thinking, Fast and Slow, by Daniel Kahneman)


When I first read about this, it immediately struck me as a very simple but powerful way to mitigate failure. I was interested in trying it out, so I wrote a pre-mortem story for Firefox OS. The thing I wrote turned out to be more like something a journalist would write in some internet rag 5 years from now, but I found it very illuminating to go through the exercise and think of different ways in which we could fail, and to isolate the one I thought most likely.

In fact, I would really like to encourage more people to go through this exercise and have everybody post their results somewhere. I would love to read about what keeps you up at night with respect to the project, what you think our weak points are, and what we need to watch out for. By understanding each other's worries and fears, I feel that we can do a better job of accommodating them in our day-to-day work, and work together more cohesively to Get The Job Done (TM).

Please comment on this post if you are interested in trying this out. I would be very happy to coordinate stuff so that people write out their thoughts and submit them, and we post all the results together (even anonymously if so desired). That way nobody is biased by anybody else's writing.

March 15, 2014 02:33 AM

March 14, 2014

William Lachance

It’s all about the entropy

[ For more information on the Eideticker software I'm referring to, see this entry ]

So recently I’ve been exploring new and different methods of measuring things that we care about on FirefoxOS — like startup time or amount of checkerboarding. With Android, where we have a mostly clean signal, these measurements were pretty straightforward. Want to measure startup times? Just capture a video of Firefox starting, then compare the frames pixel by pixel to see how much they differ. When the pixels aren’t that different anymore, we’re “done”. Likewise, to measure checkerboarding we just calculated the areas of the screen where things were not completely drawn yet, frame-by-frame.
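The Android-style pixel comparison can be sketched in a few lines of numpy. This is a toy sketch: the frame arrays, threshold value, and helper names here are hypothetical, not Eideticker's actual API.

```python
import numpy as np

def frames_differ(frame_a, frame_b, threshold=0.001):
    """Return True if the fraction of differing pixels exceeds threshold."""
    diff = np.abs(frame_a.astype('float') - frame_b.astype('float'))
    changed_fraction = np.count_nonzero(diff) / diff.size
    return changed_fraction > threshold

def startup_done_index(frames, threshold=0.001):
    """Index of the first frame after which the capture stops changing."""
    for i in range(len(frames) - 1):
        if not frames_differ(frames[i], frames[i + 1], threshold):
            return i
    return len(frames) - 1
```

With a clean signal like Android's, a scheme this simple is enough: once consecutive frames stop differing, startup is "done".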

On FirefoxOS, where we’re using a camera to measure these things, it has not been so simple. I’ve already discussed this with respect to startup time in a previous post. One of the ideas I talk about there is “entropy” (or the amount of unique information in the frame). It turns out that this is a pretty deep concept, and is useful for even more things than I thought of at the time. Since this is probably a concept that people are going to be thinking/talking about for a while, it’s worth going into a little more detail about the math behind it.

The wikipedia article on information theoretic entropy is a pretty good introduction. You should read it. It all boils down to this formula:

H(X) = −Σᵢ p(xᵢ) log₂ p(xᵢ)

You can see this section of the wikipedia article (and the various articles that it links to) if you want to break down where that comes from, but the short answer is that given a set of random samples, the more different values there are, the higher the entropy will be. Look at it from a probabilistic point of view: take a random set of data and try to predict what future data will look like. If it is highly random, it will be harder to predict what comes next. Conversely, if it is more uniform, it is easier to predict what form it will take.

Another, possibly more accessible way of thinking about the entropy of a given set of data would be “how well would it compress?”. For example, a bitmap image with nothing but black in it could compress very well, as there’s essentially only 1 piece of unique information in it repeated many times — the black pixel. On the other hand, a bitmap image of completely randomly generated pixels would probably compress very badly, as almost every pixel represents several dimensions of unique information. For all the statistics terminology, that’s all the above formula is trying to say.
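This intuition is easy to check numerically with a toy sketch (not Eideticker code): a uniform "image" has zero entropy, while random pixels approach the 8-bit maximum of log2(256) = 8 bits.

```python
import math

import numpy as np

def shannon_entropy(values, bins=256):
    """Shannon entropy (in bits) of the histogram of the given values."""
    histogram = np.histogram(values, bins=bins)[0]
    probabilities = histogram / histogram.sum()
    return -sum(p * math.log(p, 2) for p in probabilities if p > 0)

flat = np.zeros(10000)                    # the all-black bitmap: one repeated value
noisy = np.random.randint(0, 256, 10000)  # randomly generated pixels

# shannon_entropy(flat) is 0.0; shannon_entropy(noisy) comes out close to 8.
```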

So we have a model of entropy, now what? For Eideticker, the question is — how can we break the frame data we’re gathering down into a form that’s amenable to this kind of analysis? The approach I took (on the recommendation of this article) was to create a histogram with 256 bins (representing the number of distinct possibilities in a black & white capture) out of all the pixels in the frame, then run the formula over that. The exact function I wound up using looks like this:


import math

import numpy
from scipy import ndimage

def _get_frame_entropy(args):
    # unpacked tuple argument: frame index, capture, and whether to sobelize
    (i, capture, sobelized) = args
    frame = capture.get_frame(i, True).astype('float')
    if sobelized:
        frame = ndimage.median_filter(frame, 3)

        dx = ndimage.sobel(frame, 0)  # horizontal derivative
        dy = ndimage.sobel(frame, 1)  # vertical derivative
        frame = numpy.hypot(dx, dy)  # magnitude
        frame *= 255.0 / numpy.max(frame)  # normalize (Q&D)

    histogram = numpy.histogram(frame, bins=256)[0]
    histogram_length = sum(histogram)
    samples_probability = [float(h) / histogram_length for h in histogram]
    entropy = -sum([p * math.log(p, 2) for p in samples_probability if p != 0])

    return entropy

[Context]

The “sobelized” bit allows us to optionally convolve the frame with a sobel filter before running the entropy calculation, which removes most of the data in the capture except for the edges. This is especially useful for FirefoxOS, where the signal has quite a bit of random noise from ambient lighting that artificially inflates the entropy values even in places where there is little actual “information”.

This type of transformation often reveals very interesting information about what’s going on in an eideticker test. For example, take this video of the user panning down in the contacts app:

If you graph the entropies of the frames of the capture using the formula above, you get a graph like this:

contacts scrolling entropy graph
[Link to original]

The Y axis represents entropy, as calculated by the code above. There is no inherently “right” value for this — it all depends on the application you’re testing and what you expect to see displayed on the screen. In general though, higher values are better as it indicates more frames of the capture are “complete”.

The region at the beginning where it is at about 5.0 represents the contacts app with a set of contacts fully displayed (at startup). The “flat” regions where the entropy is at roughly 4.25? Those are the areas where the app is “checkerboarding” (blanking out waiting for graphics or layout engine to draw contact information). Click through to the original and swipe over the graph to see what I mean.

It’s easy to see what a hypothetical ideal end state would be for this capture: a graph with a smooth entropy of about 5.0 (similar to the start state, where all contacts are fully drawn in). We can track our progress towards this goal (or our deviation from it), by watching the eideticker b2g dashboard and seeing if the summation of the entropy values for frames over the entire test increases or decreases over time. If we see it generally increase, that probably means we’re seeing less checkerboarding in the capture. If we see it decrease, that might mean we’re now seeing checkerboarding where we weren’t before.
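The aggregate metric described above is deliberately simple: just the per-frame entropies summed over the capture. A sketch with made-up numbers (the dashboard's exact bookkeeping may differ):

```python
def capture_entropy_score(frame_entropies):
    """Sum of per-frame entropies: higher suggests more fully drawn frames."""
    return sum(frame_entropies)

# A capture that checkerboards (dipping to ~4.25) scores lower than one
# whose frames stay fully drawn at ~5.0 throughout.
checkerboarded = [5.0, 4.25, 4.25, 5.0]
fully_drawn = [5.0, 5.0, 5.0, 5.0]
```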

It’s too early to say for sure, but over the past few days the trend has been positive:

entropy-levels-climbing
[Link to original]

(note that there were some problems in the way the tests were being run before, so results before the 12th should not be considered valid)

So one concept, at least two relevant metrics we can measure with it (startup time and checkerboarding). Are there any more? Almost certainly, let’s find them!

March 14, 2014 11:52 PM

March 13, 2014

Lucas Rocha

How Android transitions work

One of the biggest highlights of the Android KitKat release was the new Transitions framework which provides a very convenient API to animate UIs between different states.

The Transitions framework got me curious about how it orchestrates layout rounds and animations between UI states. This post documents my understanding of how transitions are implemented in Android after skimming through the source code for a bit. I’ve sprinkled a bunch of source code links throughout the post to make it easier to follow.

Although this post does contain a few development tips, this is not a tutorial on how to use transitions in your apps. If that’s what you’re looking for, I recommend reading Mark Allison’s tutorials on the topic.

With that said, let’s get started.

The framework

Android’s Transitions framework is essentially a mechanism for animating layout changes e.g. adding, removing, moving, resizing, showing, or hiding views.

The framework is built around three core entities: scene root, scenes, and transitions. A scene root is an ordinary ViewGroup that defines the piece of the UI on which the transitions will run. A scene is a thin wrapper around a ViewGroup representing a specific layout state of the scene root.

Finally, and most importantly, a transition is the component responsible for capturing layout differences and generating animators to switch UI states. The execution of any transition always follows these steps:

  1. Capture start state
  2. Perform layout changes
  3. Capture end state
  4. Run animators

The process as a whole is managed by the TransitionManager but most of the steps above (except for step 2) are performed by the transition. Step 2 might be either a scene switch or an arbitrary layout change.

How it works

Let’s take the simplest possible way of triggering a transition and go through what happens under the hood. So, here’s a little code sample:

TransitionManager.beginDelayedTransition(sceneRoot);

View child = sceneRoot.findViewById(R.id.child);
LayoutParams params = child.getLayoutParams();
params.width = 150;
params.height = 25;
child.setLayoutParams(params);

This code triggers an AutoTransition on the given scene root animating child to its new size.

The first thing the TransitionManager will do in beginDelayedTransition() is check whether there is a pending delayed transition on the same scene root and just bail if there is one. This means only the first beginDelayedTransition() call within the same rendering frame will take effect.

Next, it will reuse a static AutoTransition instance. You could also provide your own transition using a variant of the same method. In any case, it will always clone the given transition instance to ensure a fresh start—consequently allowing you to safely reuse Transition instances across beginDelayedTransition() calls.

It then moves on to capturing the start state. If you set target view IDs on your transition, it will only capture values for those. Otherwise it will capture the start state recursively for all views under the scene root. So, please, set target views on all your transitions, especially if your scene root is a deep container with tons of children.

An interesting detail here: the state capturing code in Transition has special treatment for ListViews using adapters with stable IDs. It will mark the ListView children as having transient state to keep them from being recycled during the transition. This means you can very easily perform transitions when adding or removing items to/from a ListView. Just call beginDelayedTransition() before updating your adapter and an AutoTransition will do the magic for you—see this gist for a quick sample.

The state of each view taking part in a transition is stored in TransitionValues instances, which are essentially a Map with an associated View. This is one part of the API that feels a bit hand-wavy. Maybe TransitionValues should have been better encapsulated?

Transition subclasses fill the TransitionValues instances with the bits of the View state that they’re interested in. The ChangeBounds transition, for example, will capture the view bounds (left, top, right, bottom) and location on screen.

Once the start state is fully captured, beginDelayedTransition() will exit whatever previous scene was set in the scene root, set the current scene to null (as this is not a scene switch), and finally wait for the next rendering frame.

TransitionManager waits for the next rendering frame by adding an OnPreDrawListener, which is invoked once all views have been properly measured and laid out, and are ready to be drawn on screen (Step 2). In other words, when the OnPreDrawListener is triggered, all the views involved in the transition have their target sizes and positions in the layout. This means it’s time to capture the end state (Step 3) for all of them—following the same logic as the start state capturing described before.

With both the start and end states for all views, the transition now has enough data to animate the views around. It will first update or cancel any running transitions for the same views and then create new animators with the new TransitionValues (Step 4).

The transitions will use the start state for each view to “reset” the UI to its original state before animating them to their end state. This is only possible because this code runs just before the next rendering frame is drawn i.e. inside an OnPreDrawListener.

Finally, the animators are started in the defined order (together or sequentially) and magic happens on screen.

Switching scenes

The code path for switching scenes is very similar to beginDelayedTransition()—the main difference being in how the layout changes take place.

Calling go() or transitionTo() only differ in how they get their transition instance. The former will just use an AutoTransition and the latter will get the transition defined by the TransitionManager e.g. toScene and fromScene attributes.

Maybe the most relevant aspect of scene transitions is that they effectively replace the contents of the scene root. When a new scene is entered, it will remove all views from the scene root and then add itself to it.

So make sure you update any class members (in your Activity, Fragment, custom View, etc.) holding view references when you switch to a new scene. You’ll also have to re-establish any dynamic state held by the previous scene. For example, if you loaded an image from the cloud into an ImageView in the previous scene, you’ll have to transfer this state to the new scene somehow.

Some extras

Here are some curious details in how certain transitions are implemented that are worth mentioning.

The ChangeBounds transition is interesting in that it animates, as the name implies, the view bounds. But how does it do this without triggering layouts between rendering frames? It animates the view frame (left, top, right, and bottom) which triggers size changes as you’d expect. But the view frame is reset on every layout() call, which could make the transition unreliable. ChangeBounds avoids this problem by suppressing layout changes on the scene root while the transition is running.

The Fade transition fades views in or out according to their presence and visibility between layout or scene changes. Among other things, it fades out the views that have been removed from the scene root, which is an interesting detail because those views will not be in the tree anymore on the next rendering frame. Fade temporarily adds the removed views to the scene root‘s overlay to animate them out during the transition.

Wrapping up

The overall architecture of the Transitions framework is fairly simple—most of the complexity is in the Transition subclasses to handle all types of layout changes and edge cases.

The trick of capturing start and end states before and after an OnPreDrawListener can easily be applied outside the Transitions framework—within the limitations of not having access to certain private APIs such as ViewGroup‘s suppressLayout().

As a quick exercise, I sketched a LinearLayout that animates layout changes using the same technique—it’s just a quick hack, don’t use it in production! The same idea could be applied to implement transitions in a backwards compatible way for pre-KitKat devices, among other things.

That’s all for now. I hope you enjoyed it!

March 13, 2014 02:15 PM

March 09, 2014

William Lachance

Eideticker for FirefoxOS: Becoming more useful

[ For more information on the Eideticker software I'm referring to, see this entry ]

Time for a long overdue eideticker-for-firefoxos update. Last time we were here (almost 5 months ago! man, time flies), I was discussing methodologies for measuring startup performance. Since then, Dave Hunt and I have been doing lots of work to make Eideticker more robust and useful. Notably, we now have a setup in London running a suite of Eideticker tests on the latest version of FirefoxOS on the Inari on a daily basis, reporting to http://eideticker.mozilla.org/b2g.

b2g-contacts-startup-dashboard

There were more than a few false starts, and some of the earlier data is not entirely to be trusted… but it now seems to be chugging along nicely, hopefully providing startup numbers that serve as a useful counterpoint to the datazilla startup numbers we’ve already been collecting for some time. There still seem to be some minor problems, but in general I am becoming more and more confident in it as time goes on.

One feature that I am particularly proud of is the detail view, which enables you to see frame-by-frame what’s going on. Click on any datapoint on the graph, then open up the view that gives an account of what eideticker is measuring. Hover over the graph and you can see what the video looks like at any point in the capture. This not only lets you know that something regressed, but how. For example, in the messages app, you can scan through this view to see exactly when the first message shows up, and what exact state the application is in when Eideticker says it’s “done loading”.

Capture Detail View
[link to original]

(apologies for the low quality of the video — should be fixed with this bug next week)

As it turns out, this view has also proven to be particularly useful when working with the new entropy measurements in Eideticker which I’ve been using to measure checkerboarding (redraw delay) on FirefoxOS. More on that next week.

March 09, 2014 08:31 PM

March 03, 2014

Geoff Brown

Firefox for Android Performance Measures – February check-up

My monthly review of Firefox for Android performance measurements.

February highlights:

- Regressions in tcanvasmark, tcheck2, and tsvgx; improvement in ts-paint.

- Improvements in some eideticker startup measurements.

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Native Fennec (Android 2.2 opt). The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcanvasmark

This test runs the third-party CanvasMark benchmark suite, which measures the browser’s ability to render a variety of canvas animations at a smooth framerate as the scenes grow more complex. Results are a score “based on the length of time the browser was able to maintain the test scene at greater than 30 FPS, multiplied by a weighting for the complexity of each test type”. Higher values are better.
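Read literally, that scoring rule is just a weighted sum over the benchmark’s scenes. An illustrative sketch of one plausible reading (not the actual CanvasMark code):

```python
def canvasmark_score(tests):
    """tests: list of (seconds_above_30fps, complexity_weight) per scene.
    Higher scores mean the browser sustained >30 FPS longer on harder scenes."""
    return sum(seconds * weight for seconds, weight in tests)

# Two scenes: an easy one held for 10s, a complex one held for 5s.
print(canvasmark_score([(10.0, 100), (5.0, 400)]))  # 3000.0
```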

7800 (start of period) – 7200 (end of period).

Regression on Feb 19 – bug 978958.

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

tcheck2

2.7 (start of period) – 24 (end of period)

Regression of Feb 25: bug 976563.

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.
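One plausible reading of that metric, sketched in Python (illustrative only, not the actual Talos implementation): sum the squares of frame delays over the 25 ms threshold, so a single long stall is penalized far more than many short ones.

```python
def trobopan_score(frame_times_ms, threshold_ms=25):
    """Sum of squared frame delays that exceed the threshold.
    Squaring weights one long pause much more heavily than
    several short ones."""
    return sum(t ** 2 for t in frame_times_ms if t > threshold_ms)

# One 100 ms stall scores far worse than four 30 ms ones.
print(trobopan_score([16, 16, 100]))         # 10000
print(trobopan_score([30, 30, 30, 30, 16]))  # 3600
```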

110000 (start of period) – 110000 (end of period)

tprovider

Performance of history and bookmarks’ provider. Reports time (ms) to perform a group of database operations. Lower values are better.

375 (start of period) – 375 (end of period).

tsvgx

An svg-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode thus reflecting the maximum rendering throughput of each test. The reported value is the page load time, or, for animations/iterations – overall duration the sequence/animation took to complete. Lower values are better.

7500 (start of period) – 7600 (end of period).

This test both improved and regressed slightly over the month, for a slight overall regression. Bug 978878.

tp4m

Generic page load test. Lower values are better.

700 (start of period) – 710 (end of period).

No specific regression identified.

ts_paint

Startup performance test. Lower values are better.

4300 (start of period) – 3600 (end of period).

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

“Time to throbber start” measures the time from process launch to the start of the throbber animation. Smaller values are better.

throbberstart

“Time to throbber stop” measures the time from process launch to the end of the throbber animation. Smaller values are better.

throbberstop

:bc has been working on reducing noise in these results — notice the improvement. And there is more to come!

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

There is some improvement from last month’s startup measurements:

eide1

eide2

eide3

awsy

See https://www.areweslimyet.com/mobile/ for content and background information.

I did not notice any big changes this month.


March 03, 2014 09:51 PM

February 12, 2014

Geoff Brown

Complete logcats for Android tests

“Logcats” – those Android logs you see when you execute “adb logcat” – are an essential part of debugging Firefox for Android. For a long time, we have included logcats in our Android test logs on tbpl: After a test run, we run logcat on the device, collect the output and dump it to the test log. Sometimes those logcats are very useful; other times, they are too little, too late. A typical problem is that a failure occurs early in a test run, but does not cause the test to fail immediately; by the time the test ends, the fixed-size logcat buffer has filled up and overwritten the earlier, important messages. How frustrating!

Beginning today, Android 4.0 and Android 4.2 x86 test jobs offer “complete logcats”: logcat is run for the duration of the test job, the output is collected continuously, and dumped to a file. At the end of the test job, the file is uploaded to an AWS server, and a link is displayed in tbpl. Here’s a sample of a tbpl summary:

Image

Notice the (blobuploader) line? Open that link (yes, it’s long and awkward — there’s a bug for that!) and you have a complete logcat showing what was happening on the device for the duration of the test job.

We have not changed the “old” logcat features in test logs: We still run logcat at the end of most jobs and dump the output to the test log. That might be more convenient in some cases.

Are you wondering what “blobuploader” means? Curious about how the AWS upload works? That’s the “blobber” project at work. See http://atlee.ca/posts/blobber-is-live.html and https://air.mozilla.org/intern-presentation-tabara/.

Unfortunately, the Android 2.2 (Tegra) test jobs use an older infrastructure which makes it difficult to implement blobber and complete logcats. There are no logcats-via-blobber for Android 2.2 — it’s only available for Android 4.0 and the newer Android emulator tests.

Happy test debugging!


February 12, 2014 05:08 PM

February 10, 2014

Fennec Nightly News

Customize the Home page

You can now customize the Firefox Home page in the Settings. Use Settings > Customize > Home to set the default panel or hide other panels. You can even hide all the panels.

If you don’t like Top Sites, hide it and set another panel as the default.

February 10, 2014 04:58 AM

February 04, 2014

Margaret Leibovic

WIP: Home Page Customization in Firefox for Android

In Firefox 26, we released a completely revamped version of the Firefox for Android Home screen, creating a centralized place to find all of your stuff. While this is certainly awesome, we’ve been working to make this new home screen customizable and extensible. Our goal is to give users control of what they see on their home screen, including interesting new content provided by add-ons. For the past two months, we’ve been making steady progress laying the ground work for this feature, but last week the team got together in San Francisco to bring all the pieces together for the first time.

Firefox for Android has a native Java UI that’s partially driven by JavaScript logic behind the scenes. To allow JavaScript add-ons to make their own home page panels, we came up with two sets of APIs for storing and displaying data:

image

During the first half of our hack week, we agreed on a working first version of these APIs, and we hooked up our native Java UI to HomeProvider data stored from JS. After that, we started to dig into the bugs necessary to flesh out a first version of this feature.

image

Many of these patches are still waiting to land, so unfortunately there’s nothing to show in Nightly yet. But stay tuned, exciting things are coming soon!

February 04, 2014 03:45 AM

February 03, 2014

Geoff Brown

Firefox for Android Performance Measures – January check-up

My monthly review of Firefox for Android performance measurements.

January highlights:

- only minor Talos regressions

- Eideticker startup regressions

- inconsistent improvement in many awsy measures

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Native Fennec (Android 2.2 opt). The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcanvasmark

This test runs the third-party CanvasMark benchmark suite, which measures the browser’s ability to render a variety of canvas animations at a smooth framerate as the scenes grow more complex. Results are a score “based on the length of time the browser was able to maintain the test scene at greater than 30 FPS, multiplied by a weighting for the complexity of each test type”. Higher values are better.

7800 (start of period) – 7800 (end of period).

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

rck2

2.5 (start of period) – 2.7 (end of period)

Jan 16 regression – bug 961869.

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.

110000 (start of period) – 110000 (end of period)

tprovider

Performance of history and bookmarks’ provider. Reports time (ms) to perform a group of database operations. Lower values are better.

375 (start of period) – 375 (end of period).

tsvgx

An svg-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode thus reflecting the maximum rendering throughput of each test. The reported value is the page load time, or, for animations/iterations – overall duration the sequence/animation took to complete. Lower values are better.

svg

7200 (start of period) – 7500 (end of period).

Regression of Jan 7 – bug 958129.

tp4m

Generic page load test. Lower values are better.

700 (start of period) – 700 (end of period).

ts_paint

Startup performance test. Lower values are better.

4300 (start of period) – 4300 (end of period).

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

“Time to throbber start” measures the time from process launch to the start of the throbber animation. Smaller values are better.

throbber_start

There is so much data here, it is hard to see what is happening – bug 967052. I filtered out many of the devices to get this:

throbber_start-2

I think existing, long-running devices are showing no regressions, and some of the new devices are exhibiting a lot of noise — a problem that :bc is working to correct.

“Time to throbber stop” measures the time from process launch to the end of the throbber animation. Smaller values are better.

throbber_stop

A similar story here, I think.

But there was a regression for some devices on Jan 24 – bug 964323.

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

Let’s look at our startup numbers this month:

eide1 eide2 eide3 eide4 eide5 eide6 eide7 eide8

Regressions noted in bugs 964307 and 966580.

awsy

See https://www.areweslimyet.com/mobile/ for content and background information.

awsy1

awsy2

awsy3

There seems to be an improvement in several of the measurements, but it is inconsistent — it varies from one test run to the next. I wonder what that’s about.


February 03, 2014 11:46 PM

January 21, 2014

Geoff Brown

Android x86 tests (S4) on tbpl

Today, we started running Robocop and xpcshell tests in an Android x86 emulator environment on tbpl.

Firefox for Android has been running on Android x86 for over a year now [1] and we have had Android x86 builds on tbpl for nearly as long [2], but our attempts to run test suites on Android x86 [3] have been more problematic. (See [4] to get an appreciation of the complications.)

This is the first of our Android test jobs to run in an emulator (Android 2.2 test jobs run on Tegra boards and Android 4.0 jobs run on Panda boards). The Android x86 tests run in Android x86 emulators running Android 4.2. The emulators run on Linux 64-bit in-house machines, and we run up to 4 emulators at a time on each Linux machine.

The new tests are labelled “Android 4.2 x86 Opt” on tbpl and look like this:

Screen shot 2014-01-21 at 12.58.45 PM

Each “set” contains up to 4 test jobs, reflecting the set of emulators that are run in parallel on each Linux box. For now, only set S4 is run on trunk trees; S4 contains xpcshell tests and robocop tests, broken up into 3 chunks, robocop-1, robocop-2, and robocop-3:

Image

Other test suites – mochitests, reftests, etc – run on Android 4.2 x86 Opt only on the Cedar and Ash trees at this time. They mostly work, but have intermittent infrastructure failures that make them too unreliable to run on trunk trees  (bug 927602 is the main issue).

Image

If you need to debug a test failure on this platform, try server support is available, or you can borrow a releng machine and run the mozharness job yourself.

[1] http://starkravingfinkle.org/blog/2012/11/firefox-for-android-running-on-android-x86/

[2] http://oduinn.com/blog/2012/12/20/android-x86-builds-now-on-tbpl-mozilla-org/

[3] http://armenzg.blogspot.ca/2013/09/running-hidden-firefox-for-android-42.html

[4] https://bugzilla.mozilla.org/showdependencytree.cgi?id=891959&hide_resolved=0


January 21, 2014 08:00 PM

January 20, 2014

Mark Finkle

Firefox for Android: Page Load Performance

One of the common types of feedback we get about Firefox for Android is that it’s slow. Other browsers get the same feedback and it’s an ongoing struggle. I mean, is anything ever really fast enough?

We tend to separate performance into three categories: Startup, Page Load and UX Responsiveness. Lately, we have been focusing on the Page Load performance. We separate further into Objective (real timing) and Subjective (perceived timing). If something “feels” slow it can be just as bad as something that is measurably slow. We have a few testing frameworks that help us track objective and subjective performance. We also use Java, JavaScript and C++ profiling to look for slow code.

To start, we have been focusing on anything that is not directly part of the Gecko networking stack. This means we are looking at all the code that executes while Gecko is loading a page. In general, we want to reduce this code as much as possible. Some of the things we turned up include:

Some of these were small improvements, while others, like the proxy lookups, were significant for “desktop” pages. I’d like to expand on two of the improvements:

Predictive Networking Hints
Gecko networking has a feature called Speculative Connections, where it’s possible for the networking system to start opening TCP connections and even begin the SSL handshake, when needed. We use this feature when we have a pretty good idea that a connection might be opened. We now use the feature in three cases:

Animating the Page Load Spinner
Firefox for Android has used the animated spinner as a page load indicator for a long time. We use the Android animation framework to “spin” an image. Keeping the spinner moving smoothly is pretty important for perceived performance. A stuck spinner doesn’t look good. Profiling showed a lot of time was being taken to keep the animation moving, so we did a test and removed it. Our performance testing frameworks showed a variety of improvements in both objective and perceived tests.

We decided to move to a progressbar, but not a real progressbar widget. We wanted to control the rendering. We did not want the same animation rendering issues to happen again. We also use only a handful of “trigger” points, since listening to true network progress is also time consuming. The result is an objective page load improvement that ranges from ~5% on multi-core, faster devices to ~20% on single-core, slower devices.

fennec-throbber-and-progressbar-on-cnn

The progressbar is currently allowed to “stall” during a page load, which can be disconcerting to some people. We will experiment with ways to improve this too.

Install Firefox for Android Nightly and let us know what you think of the page load improvements.

January 20, 2014 05:22 PM

Fennec Nightly News

ProgressBar replaces Spinner

The spinner animation shown while loading a page has been replaced with a progressbar. Not only is the progressbar more subtle, it also improves page load performance from 5% (faster devices) to 20% (slower devices). Take a look and let us know what you think.

January 20, 2014 02:04 PM

January 07, 2014

Geoff Brown

Firefox for Android Performance Measures – 2013 in review

Let’s review our performance measures for 2013.

Highlights:

- significant regressions in “time to throbber start/stop” and Eideticker startup tests

- most Talos measurements stable, or regressions addressed

- slight, gradual improvements to frame rate and responsiveness

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Native Fennec (Android 2.2 opt). The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcanvasmark

This test runs the third-party CanvasMark benchmark suite, which measures the browser’s ability to render a variety of canvas animations at a smooth framerate as the scenes grow more complex. Results are a score “based on the length of time the browser was able to maintain the test scene at greater than 30 FPS, multiplied by a weighting for the complexity of each test type”. Higher values are better.

tcanvas

7800 (start of period) – 7700 (end of period).

This test was introduced in September and has been fairly stable ever since. There does however seem to be a slight, gradual regression over this period.

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

tcheck2

4.4 (start of period) – 2.8 (end of period)

This test saw lots of regressions and improvements over 2013, ending on a stable high note.

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.

tpan

14000 (start of period) – 110000 (end of period)

The nature of this test measurement makes it one of the most variable Talos tests. We overcame the worst of the Sept/Oct regression, but still ended the year worse off than we started.

tprovider

Performance of history and bookmarks’ provider. Reports time (ms) to perform a group of database operations. Lower values are better.

troboprovider

375 (start of period) – 375 (end of period).

This test has hardly ever reported significant change — is it a useful test?

tsvgx

An svg-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode thus reflecting the maximum rendering throughput of each test. The reported value is the page load time, or, for animations/iterations – overall duration the sequence/animation took to complete. Lower values are better.

tsvg

1200 (start of period) – 7200 (end of period).

Introduced in September; the regression of Nov 27 is tracked in bug 944429.

tp4m

Generic page load test. Lower values are better.

tp4m

700 (start of period) – 700 (end of period).

This version of tp4m was introduced in September; no significant changes here.

ts_paint

Startup performance test. Lower values are better.

tspaint

4300 (start of period) – 4300 (end of period)

Introduced in September; there are no significant regressions here, but there is a lot of variability, possibly related to the frequent test failures — see bug 781107.

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

“Time to throbber start” measures the time from process launch to the start of the throbber animation. Smaller values are better.

throbberstart1

There is so much data here, it is hard to see what is happening, but a troubling upward trend over the year is evident.

“Time to throbber stop” measures the time from process launch to the end of the throbber animation. Smaller values are better.

throbberstop1

Again, there is a lot of data here. Here’s another graph that hides the data for all but a few devices:

throbberstop2

Evidently we have lost a lot of ground over the year, with an increase in “time to throbber stop” of nearly 80% for some devices.

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

Eideticker confirms that startup time has regressed over the year:

eide1

eide2

Most checkerboarding and frame rate measurements have been steady, or show slight improvement:

eide5

Responsiveness — a new measurement added this year — is similarly steady with slight improvement:

eide3

eide4

awsy

See https://www.areweslimyet.com/mobile/ for content and background information.

awsy1

awsy2

awsy3

There is an upward trend here for many of the measurements, but what I find striking is how often we have managed to keep these numbers stable while adding new features.


January 07, 2014 04:01 PM

January 01, 2014

Lucas Rocha

Reconnecting

I’m just back from a well deserved 2-week vacation in Salvador—the city where I was born and raised. While in Brazil, I managed to stay (mostly) offline. I intentionally left all my gadgetry at home. No laptop, no tablet. Only an old-ish smartphone with no data connection.

Offline breaks are very refreshing. They give you perspective. I feel we, from tech circles, are always too distracted, too busy, too close to these things.

Stepping back is important. Idle time is as important as productive time. The world already has enough bullshit. Let’s not make it worse.

Here’s to a more thoughtful, mindful, and meaningful online experience for all of us. Happy new year!

January 01, 2014 10:09 PM

December 20, 2013

Wes Johnston

Action modes for Android

Assuming all goes well, Firefox will soon be shipping with our first support for Android’s ActionModes for text selection. We previously allowed text selection via some special context menu options when people long pressed on selected text. This replaces that implementation with a nice new UI:

Image

If you’ve got an add-on that adds something to the selected text context menu:

  1. You’re likely broken in Nightly and Aurora builds right now and
  2. you can easily update your add-on to show in this menu as well.

Some astute developers have already updated themselves, but adding support is pretty easy (updated MDN docs here). We’ve added two methods to the SelectionHandler:

  1. let actionId = SelectionHandler.addAction({
       label: "My addon",
       id: "my_id",
       icon: "resource://myaddon/icon", // You MUST include an icon to be shown in the ActionBar,
                                        // otherwise you'll always be put into the overflow menu.
       action: function(aElement) {
         // do something when clicked
       },
       selector: {
         // Just because this is being shown, doesn't mean there is a selection.
         // We may just be showing the cursor handle to let you move the cursor position.
         matches: function(aElement) { return SelectionHandler.isSelectionActive(); }
       }
     });

  2. SelectionHandler.remove(actionId);

Text selection has had an exciting life in Firefox for Android, and is likely going to go through another transition as we land better platform support for touch friendly handles for not just Android, but also Firefox OS, and Metro Firefox. But this is a good move for the UI. We implemented it as a compatibility layer, so it should be available for all Android versions from Froyo to the newest KitKat devices.


December 20, 2013 11:11 PM

December 17, 2013

Sriram Ramasubramanian

DroidCon India ’13

Last month I presented on “Optimizing UI Performance in Android Apps” at DroidCon India. The event was organized well by the hasgeek folks. It was a great pleasure to present to Bangaloreans — having worked there for three years! There were a lot of startups solving really big problems in Android. One of them had a medical app with 24 different graphs to be shown; he wanted to zoom in and out of any of those without causing bitmap OOMs. Another group worked on a ski resort map: they had huge bitmaps, one superimposed over another, for different ski routes. And another app had a ton of emoji to show without hitting the disk for each one. There were a few kids just out of school jumping into Android development before joining a big company; a few were interested in joining startups and consultancies right out of school. Overall it was a great experience.

Here’s a link to the slides and a video of the talk.


December 17, 2013 04:35 PM

December 16, 2013

Chris Lord

Linking CSS properties with scroll position: A proposal

As I, and many others have written before, on mobile, rendering/processing of JS is done asynchronously to responding to the user scrolling, so that we can maintain touch response and screen update. We basically have no chance of consistently hitting 60fps if we don’t do this (and you can witness what happens if you don’t by running desktop Firefox (for now)). This does mean, however, that you end up with bugs like this, where people respond in JavaScript to the scroll position changing and end up with jerky animation because there are no guarantees about the frequency or timeliness of scroll position updates. It also means that neat parallax sites like this can’t be done in quite the same way on mobile. Although this is currently only a problem on mobile, this will eventually affect desktop too. I believe that Internet Explorer already uses asynchronous composition on the desktop, and I think that’s the way we’re going in Firefox too. It’d be great to have a solution for this problem first.

It’s obvious that we could do with a way of declaring a link between a CSS property and the scroll position. My immediate thought is to do this via CSS. I had this idea for a syntax:

scroll-transition-(x|y): <transition-declaration> [, <transition-declaration>]*

    where transition-declaration = <property>( <transition-stop> [, <transition-stop>]+ )
      and transition-stop        = <relative-scroll-position> <property-value>

This would work quite similarly to standard transitions, where a limited number of properties would be supported, and perhaps their interpolation could be defined in the same way too. Relative scroll position is 0px when the scroll position of the particular axis matches the element’s offset position. This would lead to declarations like this:

scroll-transition-y: opacity( 0px 0%, 100px 100%, 200px 0% ), transform( 0px scale(1%), 100px scale(100%), 200px scale(1%) );

This would define a transition that would grow and fade in an element as the user scrolled it towards 100px down the page, then shrink and fade out as you scrolled beyond that point.
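The interpolation such a declaration implies is ordinary piecewise-linear blending between scroll stops. A quick Python sketch of the idea (illustrative only; the helper name is made up):

```python
def interpolate_stops(scroll_px, stops):
    """stops: sorted list of (relative_scroll_px, value) pairs.
    Linearly interpolate the property value at scroll_px,
    clamping outside the defined range."""
    if scroll_px <= stops[0][0]:
        return stops[0][1]
    for (x0, v0), (x1, v1) in zip(stops, stops[1:]):
        if scroll_px <= x1:
            t = (scroll_px - x0) / (x1 - x0)
            return v0 + t * (v1 - v0)
    return stops[-1][1]

# opacity( 0px 0%, 100px 100%, 200px 0% ) from the declaration above
opacity_stops = [(0, 0.0), (100, 1.0), (200, 0.0)]
print(interpolate_stops(50, opacity_stops))   # 0.5 (fading in)
print(interpolate_stops(150, opacity_stops))  # 0.5 (fading out)
```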

But then Paul Rouget made me aware that Anthony Ricaud had the same idea, but instead of this slightly arcane syntax, to tie it to CSS animation keyframes. I think this is more easily implemented (at least in Firefox’s case), more flexible and more easily expressed by designers too. Much like transitions and animations, these need not be mutually exclusive though, I suppose (though the interactions between them might mean as a platform developer, it’d be in my best interests to suggest that they should :)).

I’m not aware of any proposal of this suggestion, so I’ll describe the syntax that I would expect. I think it should inherit from the CSS animation spec, but prefix the animation-* properties with scroll-. Instead of animation-duration, you would have scroll-animation-bounds. scroll-animation-bounds would describe a vector, the distance along which would determine the position of the animation. Imagine that this vector was actually a plane, that extended infinitely, perpendicular to its direction of travel; your distance along the vector is unaffected by your distance to the vector. In other words, if you had a scroll-animation-bounds that described a line going straight down, your horizontal scroll position wouldn’t affect the animation. Animation keyframes would be defined in the exact same way.
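The “distance along the vector, unaffected by distance to the vector” description is exactly scalar projection. A tiny Python sketch (hypothetical helper, not a proposed API):

```python
import math

def scroll_progress(scroll_xy, bounds_xy):
    """Distance travelled along the bounds vector, ignoring any
    perpendicular offset -- i.e. the scalar projection of the
    scroll position onto the bounds direction."""
    bx, by = bounds_xy
    length = math.hypot(bx, by)
    sx, sy = scroll_xy
    return (sx * bx + sy * by) / length

# A bounds vector pointing straight down: horizontal scroll has no effect.
print(scroll_progress((50, 100), (0, 200)))  # 100.0
print(scroll_progress((0, 100), (0, 200)))   # 100.0
```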

[Edit] Paul Rouget makes the suggestion that rather than having a prefixed copy of animation, that a new property be introduced, animation-controller, of which the default would be time, but a new option could be scroll. We would still need an equivalent to duration, so I would re-purpose my above-suggested property as animation-scroll-bounds.

What do people think about either of these suggestions? I’d love to hear some conversation/suggestions/criticisms in the comments, after which perhaps I can submit a revised proposal and begin an implementation.

December 16, 2013 11:20 AM

December 11, 2013

James Willcox

Flash on Android 4.4 KitKat

There has been some talk recently about the Flash situation on Android 4.4. While it's no secret that Adobe discontinued support for Flash on Android a while back, there are still a lot of folks using it on a daily basis. The Firefox for Android team consistently gets feedback about it, so it didn't take long to find out that things were amiss on KitKat.

I looked into the problem a few weeks ago in bug 935676, and found that some reserved functions were made virtual, breaking binary compatibility. I initially wanted to find a workaround that involved injecting the missing symbols, but that seems to be a bit of a dead end. I ended up making things work by unpacking the Flash APK with apktool, and modifying libflashplayer.so with a hex editor to replace the references to the missing symbols with something else. The functions in question aren't actually being called, so changing them to anything that exists works (I think I used pipe). It was necessary to pad each field with null characters to keep the size of the symbol table unchanged. After that I just repacked with apktool, installed, and everything seemed to work.
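The symbol rename he describes (null-padding the shorter replacement so the symbol table keeps its size) can be sketched like this. Purely illustrative: real ELF editing needs care with string-table offsets, and the symbol name below is made up:

```javascript
// Replace a symbol name inside a binary blob, null-padding the shorter
// replacement so every byte offset after it stays unchanged.
function patchSymbol(bytes, oldName, newName) {
  if (newName.length > oldName.length) {
    throw new Error('replacement must not be longer than the original');
  }
  var needle = Buffer.from(oldName + '\0');
  var index = bytes.indexOf(needle);
  if (index === -1) return bytes;
  // Pad with nulls to exactly the length of the old name plus terminator.
  var padded = newName + '\0'.repeat(oldName.length - newName.length) + '\0';
  Buffer.from(padded, 'ascii').copy(bytes, index);
  return bytes;
}

// Hypothetical missing symbol, replaced with "pipe" as in the post:
var blob = Buffer.from('..._ZN7MissingFnEv\0...', 'ascii');
patchSymbol(blob, '_ZN7MissingFnEv', 'pipe');
// blob now contains "pipe" plus null padding; total length is unchanged
```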

There is apparently an APK floating around that makes Flash work in other browsers on KitKat, but not Firefox. The above solution should allow someone to make an APK that works everywhere, so give it a shot if you are so inclined. I am not going to publish my own APK because of reasons.

December 11, 2013 12:30 PM

December 10, 2013

Lucas Rocha

Firefox for Android in 2013

Since our big rewrite last year, we’ve released new features of all sizes and shapes in Firefox for Android—as well as tons of bug fixes, of course. The feedback has been amazingly positive.

This was a year of consolidation for us, and I think we’ve succeeded in getting Firefox for Android in a much better place in the mobile browser space. We’ve gone from an (embarrassing) 3.5 average rating on Google Play to a solid 4.4 in just over a year (!). And we’re wrapping up 2013 as a pre-installed browser in a few devices—hopefully the first of many!

We’ve just released Firefox for Android 26 today, our last release this year. This is my favourite release by a mile. Besides bringing a much better UX, the new Home screen lays the ground for some of the most exciting stuff we’ll be releasing next year.

A lot of what we do in Firefox for Android is so incremental that it’s sometimes hard to see how all the releases add up. If you haven’t tried Firefox for Android yet, here is my personal list of things that I believe sets it apart from the crowd.

All your stuff, one tap

The new Home in Firefox for Android 26 gives you instant access to all your data (history, bookmarks, reading list, top sites) through a fluid set of swipable pages. They are easily accessible at any time—when the app starts, when you create a new tab, or when you tap on the location bar.

You can always search your browsing data by tapping on the location bar. As an extra help, we also show search suggestions from your default search engine as well as auto-completing domains you’ve visited before. You’ll usually find what you’re looking for by just typing a couple of letters.

Top Sites, History, and Search.


Great for reading

Firefox for Android does a couple of special things for readers. Every time you access a page with long-form content—such as a news article or an essay—we offer you an option to switch to Reader Mode.

Reader Mode removes all the visual clutter from the original page and presents the content in a distraction-free UI—where you can set your own text size and color scheme for comfortable reading. This is especially useful on mobile browsers as there are still many websites that don’t provide a mobile-friendly layout.

Reader Mode in Firefox for Android


Secondly, we bundle nice default fonts for web content. This makes a subtle yet noticeable difference on a lot of websites.

Last but not least, we make it very easy to save content to read later—either by adding pages to Firefox’s reading list or by using our quickshare feature to save it to your favourite app, such as Pocket or Evernote.

Make it yours

Add-ons are big in desktop Firefox. And we want Firefox for Android to be no different. We provide several JavaScript APIs that allow developers to extend the browser with new features. As a user, you can benefit from add-ons like Adblock Plus and Lastpass.

If you’re into blingy UIs, you can install some lightweight themes. Furthermore, you can install and use any web search engine of your choice.

Lightweight theme, Add-ons, and Search Engines


Smooth panning and zooming

An all-new panning and zooming framework was built as part of the big native rewrite last year. The main focus areas were performance and reliability. The (mobile) graphics team has released major improvements since then and some of this framework is going to be shared across most (if not all) platforms soon.

From a user perspective, this means you get consistently smooth panning and zooming in Firefox for Android.

Fast-paced development

We develop Firefox for Android through a series of fast-paced 6-week development cycles. In each cycle, we try to keep a balance between general housekeeping (bug fixes and polishing) and new features. This means you get a better browser every 6 weeks.

Open and transparent

Firefox for Android is the only truly open-source mobile browser. There, I said it. We’re a community of paid staff and volunteers. We’re always mentoring new contributors. Our roadmap is public. Everything we’re working on is being proposed, reviewed, and discussed in Bugzilla and our mailing list. Let us know if you’d like to get involved by the way :-)


That’s it. I hope this post got you curious enough to try Firefox for Android today. Do we still have work to do? Hell yeah. While 2013 was a year of consolidation, I expect 2014 to be the year of excitement and expansion for Firefox on Android. This means we’ll have to set an even higher bar in terms of quality and, at the same time, make sure we’re always working on features our users actually care about.

2014 will be awesome. Can’t wait! In the meantime, install Firefox for Android and let us know what you think!

December 10, 2013 05:15 PM

November 29, 2013

Geoff Brown

Firefox for Android Performance Measures – November check-up

My monthly review of Firefox for Android performance measurements.

November highlights:

- significant improvement in tcheck2

- significant regression in tsvgx (SVG-ASAP) – bug 944429

- more devices added to phonedash

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Native Fennec (Android 2.2 opt). The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcanvasmark

This test runs the third-party CanvasMark benchmark suite, which measures the browser’s ability to render a variety of canvas animations at a smooth framerate as the scenes grow more complex. Results are a score “based on the length of time the browser was able to maintain the test scene at greater than 30 FPS, multiplied by a weighting for the complexity of each test type”. Higher values are better.

7800 (start of period) – 7800 (end of period).

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.


30 (start of period) – 2.5 (end of period)

The regressions of October were more than reversed.

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.

140000 (start of period) – 140000 (end of period)

tprovider

Performance of history and bookmarks’ provider. Reports time (ms) to perform a group of database operations. Lower values are better.

375 (start of period) – 375 (end of period).

tsvgx

An SVG-only test that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode, thus reflecting the maximum rendering throughput of each test. The reported value is the page load time or, for animations/iterations, the overall duration the sequence/animation took to complete. Lower values are better.


1400 (start of period) – 7500 (end of period).

Regression of Nov 27 – bug 944429


tp4m

Generic page load test. Lower values are better.

700 (start of period) – 740 (end of period)

Slight regression of Nov 27 – bug 944429


ts_paint

Startup performance test. Lower values are better.

4300 (start of period) – 4300 (end of period)

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).


“Time to throbber start” measures the time from process launch to the start of the throbber animation. Smaller values are better.

Time to throbber start was generally stable during this period.


“Time to throbber stop” measures the time from process launch to the end of the throbber animation. Smaller values are better.

Time to throbber stop was generally stable during this period.

Note that many new devices were added this month.

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

Many of the eideticker graphs were generally stable this month.


awsy

See https://www.areweslimyet.com/mobile/ for content and background information.


awsy graphs were generally stable this month.



November 29, 2013 06:01 PM

Chris Lord

Efficient animation for games on the (mobile) web

Drawing on some of my limited HTML5 games experience, and marginally less limited general games and app writing experience, I’d like to write a bit about efficient animation for games on the web. I usually prefer to write about my experiences, rather than just straight advice-giving, so I apologise profusely for how condescending this will likely sound. I’ll try to improve in the future :)

There are a few things worth knowing that will really help your game (or indeed app) run better and use less battery life, especially on low-end devices. I think it’s worth getting some of these things down, as there’s evidence to suggest (in popular and widely-used UI libraries, for example) that it isn’t necessarily common knowledge. I’d also love to know if I’m just being delightfully/frustratingly naive in my assumptions.

First off, let’s get the basic stuff out of the way.

Help the browser help you

If you’re using DOM for your UI, which I’d certainly recommend, you really ought to use CSS transitions and/or animations, rather than JavaScript-powered animations. Though JS animations can be easier to express at times, unless you have a great need to synchronise UI animation state with game animation state, you’re unlikely to be able to do a better job than the browser. The reason for this is that CSS transitions/animations are much higher level than JavaScript, and express a very specific intent. Because of this, the browser can make some assumptions that it can’t easily make when you’re manually tweaking values in JavaScript. To take a concrete example, if you start a CSS transition to move something from off-screen so that it’s fully visible on-screen, the browser knows that the related content will end up completely visible to the user and can pre-render that content. When you animate position with JavaScript, the browser can’t easily make that same assumption, and so you might end up causing it to draw only the newly-exposed region of content, which may introduce slow-down. There are signals at the beginning and end of animations that allow you to attach JS callbacks and gain a rudimentary form of synchronisation (though there are no guarantees on how promptly these callbacks will happen).

Speaking of assumptions the browser can make, you want to avoid causing it to have to relayout during animations. In this vein, it’s worth trying to stick to animating only transform and opacity properties. Though some browsers make some effort for other properties to be fast, these are pretty much the only ones semi-guaranteed to be fast across all browsers. Something to be careful of is that overflow may end up causing relayouting, or other expensive calculations. If you’re setting a transform on something that would overlap its container’s bounds, you may want to set overflow: hidden on that container for the duration of the animation.

Use requestAnimationFrame

When you’re animating canvas content, or when your DOM animations absolutely must synchronise with canvas content animations, do make sure to use requestAnimationFrame. Assuming you’re running in an arbitrary browsing session, you can never really know how long the browser will take to draw a particular frame. requestAnimationFrame causes the browser to redraw and call your function before that frame gets to the screen. The downside of using this vs. setTimeout is that your animations must be time-based instead of frame-based; i.e. you must keep track of time and set your animation properties based on elapsed time. requestAnimationFrame includes a time-stamp in its callback function prototype, which you most definitely should use (as opposed to using the Date object), as this will be the time the frame began rendering, and ought to make your animations look more fluid. You may have a callback that ends up looking something like this:

var startTime = -1;
var animationLength = 2000; // Animation length in milliseconds

function doAnimation(timestamp) {
 // Calculate animation progress
 var progress = 0;
 if (startTime < 0) {
   startTime = timestamp;
 } else {
   progress = Math.min(1.0, (timestamp - startTime) /
                            animationLength);
 }

 // Do animation ...

 if (progress < 1.0) {
   requestAnimationFrame(doAnimation);
 }
}

// Start animation
requestAnimationFrame(doAnimation);

You’ll note that I set startTime to -1 at the beginning, when I could just as easily set the time using the Date object and avoid the extra code in the animation callback. I do this so that any setup or processes that happen between the start of the animation and the callback being processed don’t affect the start of the animation, and so that all the animations I start before the frame is processed are synchronised.
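The progress value computed above is linear in time; in practice you would often map it through an easing curve before applying it to a property. A small sketch of that step (the easeOutQuad function is my own illustrative choice, not from the post):

```javascript
// Map linear progress [0, 1] through an easing curve, then onto a
// property's value range.
function easeOutQuad(t) {
  return t * (2 - t);              // decelerating curve: fast start, slow finish
}

function animatedValue(startValue, endValue, progress) {
  // Clamp so late frames past the animation's end don't overshoot.
  var eased = easeOutQuad(Math.min(1.0, Math.max(0.0, progress)));
  return startValue + (endValue - startValue) * eased;
}

// Halfway through a fade from opacity 0 to 1:
animatedValue(0, 1, 0.5);          // 0.75, already three quarters done
```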

To save battery life, it’s best to only draw when there are things going on, so that would mean calling requestAnimationFrame (or your refresh function, which in turn calls that) in response to events happening in your game. Unfortunately, this makes it very easy to end up drawing things multiple times per frame. I would recommend keeping track of when requestAnimationFrame has been called and only having a single handler for it. As far as I know, there aren’t solid guarantees of what order things will be called in with requestAnimationFrame (though in my experience, it’s in the order in which they were requested), so this also helps cut out any ambiguity. An easy way to do this is to declare your own refresh function that sets a flag when it calls requestAnimationFrame. When the callback is executed, you can unset that flag so that calls to that function will request a new frame again, like this:

function redraw() {
  drawPending = false;

  // Do drawing ...
}

var drawPending = false;
function requestRedraw() {
  if (!drawPending) {
    drawPending = true;
    requestAnimationFrame(redraw);
  }
}

Following this pattern, or something similar, means that no matter how many times you call requestRedraw, your drawing function will only be called once per frame.
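To see the coalescing in isolation outside a browser, here is a small harness with a stubbed requestAnimationFrame (the stub is purely illustrative), showing that repeated requestRedraw calls schedule only one frame:

```javascript
// Stub requestAnimationFrame so the pattern can be exercised anywhere.
var queued = [];
function requestAnimationFrame(cb) { queued.push(cb); }

var drawCount = 0;
var drawPending = false;

function redraw() {
  drawPending = false;
  drawCount++;                     // stand-in for actual drawing
}

function requestRedraw() {
  if (!drawPending) {
    drawPending = true;
    requestAnimationFrame(redraw);
  }
}

// Many redraw requests within one frame...
requestRedraw();
requestRedraw();
requestRedraw();

// ...but only one callback was queued, so redraw runs once for the frame.
queued.splice(0).forEach(function (cb) { cb(); });
// drawCount === 1
```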

Remember that when you do drawing in requestAnimationFrame (and in general), you may be blocking the browser from updating other things. Try to keep unnecessary work outside of your animation functions. For example, it may make sense for animation setup to happen in a timeout callback rather than a requestAnimationFrame callback, and likewise if you have a computationally heavy thing that will happen at the end of an animation. Though I think it’s certainly overkill for simple games, you may want to consider using Worker threads. It’s worth trying to batch similar operations, and to schedule them at a time when screen updates are unlikely to occur, or when such updates are of a more subtle nature. Modern console games, for example, tend to prioritise framerate during player movement and combat, but may prioritise image quality or physics detail when compromise to framerate and input response would be less noticeable.

Measure performance

One of the reasons I bring this topic up, is that there exist some popular animation-related libraries, or popular UI toolkits with animation functions, that still do things like using setTimeout to drive their animations, drive all their animations completely individually, or other similar things that aren’t conducive to maintaining a high frame-rate. One of the goals for my game Puzzowl is for it to be a solid 60fps on reasonable hardware (for the record, it’s almost there on Galaxy Nexus-class hardware) and playable on low-end (almost there on a Geeksphone Keon). I’d have liked to use as much third party software as possible, but most of what I tried was either too complicated for simple use-cases, or had performance issues on mobile.

How I came to this conclusion is more important than the conclusion itself, however. To begin with, my priority was to write the code quickly to iterate on gameplay (and I’d certainly recommend doing this). I assumed that my own, naive code was making the game slower than I’d like. To an extent, this was true; I found plenty to optimise in my own code, but it got to the point where I knew what I was doing ought to perform quite well, and I still wasn’t quite there. At this point, I turned to the Firefox JavaScript profiler, and this told me almost exactly what low-hanging fruit was left to address to improve performance. As it turned out, I suffered from some of the things I’ve mentioned in this post; my animation code had some corner cases that could cause redraws to happen several times per frame, some of my animations caused Firefox to need to redraw everything (they were fine in other browsers, as it happens – that particular issue is now fixed), and some of the third party code I was using was poorly optimised.

A take-away

To help combat poor animation performance, I wrote Animator.js. It’s a simple animation library, and I’d like to think it’s efficient and easy to use. It’s heavily influenced by various parts of Clutter, but I’ve tried to avoid scope-creep. It does one thing, and it does it well (or adequately, at least). Animator.js is a fire-and-forget style animation library, designed to be used with games, or other situations where you need many, synchronised, custom animations. It includes a handful of built-in tweening functions, the facility to add your own, and helper functions for animating object properties. I use it to drive all the drawing updates and transitions in Puzzowl, by overriding its requestAnimationFrame function with a custom version that makes the request, but appends the game’s drawing function onto the end of the callback, like so:

animator.requestAnimationFrame =
  function(callback) {
    requestAnimationFrame(function(t) {
      callback(t);
      redraw();
    });
  };

My game’s redraw function does all drawing, and my animation callbacks just update state. When I request a redraw outside of animations, I just check the animator’s activeAnimations property first to stop from mistakenly drawing multiple times in a single animation frame. This gives me nice, synchronised animations at very low cost. Puzzowl isn’t out yet, but there’s a little screencast of it running on a Nexus 5:

Alternative, low-framerate YouTube link.

November 29, 2013 02:31 PM

November 28, 2013

William Lachance

mozregression now supports inbound builds

Just wanted to send out a quick note that I recently added inbound support to mozregression for desktop builds of Firefox on Windows, Mac, and Linux.

For the uninitiated, mozregression is an automated tool that lets you bisect through builds of Firefox to find out when a problem was introduced. You give it the last known good date, the last known bad date and off it will go, automatically pulling down builds to test. After each iteration, it will ask you whether this build was good or bad, update the regression range accordingly, and then the cycle repeats until there are no more intermediate builds.

Previously, it would only use nightlies, which meant a one-day granularity — and thus pretty wide regression ranges, made wider in the last year by the fact that so much more now lands in the tree over the course of a day. However, with inbound support (using the new inbound archive) we now have the potential to get a much tighter range, which should be super helpful for developers. Best of all, mozregression doesn’t require any particularly advanced skills to use, which means everyone in the Mozilla community can help out.
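The bisection itself is the classic halving loop; here is a hypothetical sketch of what mozregression automates, where the date list stands in for an ordered set of builds and isBad() stands in for the user's good/bad verdict after trying each one:

```javascript
// Binary search for the regression range between a known-good and a
// known-bad build.
function bisect(dates, isBad) {
  var lo = 0, hi = dates.length - 1; // dates[lo] known good, dates[hi] known bad
  while (hi - lo > 1) {
    var mid = (lo + hi) >> 1;
    if (isBad(dates[mid])) {
      hi = mid;                      // regression is at or before mid
    } else {
      lo = mid;                      // regression is after mid
    }
  }
  return [dates[lo], dates[hi]];     // tightest [last good, first bad] range
}

// A month of nightly builds, with a regression landing on day 17:
var dates = [];
for (var d = 1; d <= 30; d++) dates.push(d);
var range = bisect(dates, function (d) { return d >= 17; });
// range is [16, 17] after about five tries instead of thirty
```

The same loop applies to inbound builds; the only difference is that the list is per-push rather than per-day, so the final range is much tighter.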

For anyone interested, there’s quite a bit of scope to improve mozregression to make it do more things (FirefoxOS support, easier installation…). Feel free to check out the repository, the issues list (I just added an easy one which would make a great first bug) and ask questions on irc.mozilla.org#ateam!

November 28, 2013 06:14 PM

November 27, 2013

Naoki Hirata

VM for B2G

It took a while to hunt down a site that allowed for free hosting of 50 gigs or more and then post the file up.  There are certain modifications I had made from yesterday to the image as well.

  1. There are certain files I shouldn’t host for the unagi, so having the directories pulled for them didn’t make sense; I decided to get rid of those. Instead there’s a ~/Project/B2G folder that’s waiting for you to run config.sh with whatever device you have. See https://developer.mozilla.org/en-US/docs/Mozilla/Firefox_OS/Preparing_for_your_first_B2G_build
  2. I decided to download the marionette test cases and also install virtualenvwrapper and virtualenv.  see: https://developer.mozilla.org/en-US/docs/Marionette/Running_Tests and http://doughellmann.com/2008/05/virtualenvwrapper.html
  3. I also have a separate ~/Projects/gaia folder so that you can build your own gaia.  Here are some tips and tricks : https://wiki.mozilla.org/B2G/QA/Tips_And_Tricks#B2G_Compiling_Gaia

The username is Ubuntu and the password is reverse:

https://mega.co.nz/#!9w9ihJwB!YF-zymendf-uzLnhYlssjUFcW9l6XJp31Vx4uyEoDn8

This build is outdated… see https://shizen008.wordpress.com/2013/11/11/images-for-b2gemulator/ for a newer version.


Filed under: mobifx, mobile, Planet, QA, QMO, Uncategorized Tagged: B2G, gaia, mobifx, mobile, Planet, QA, QMO

November 27, 2013 12:11 AM

November 23, 2013

Chris Peterson

Cloaking plugin names to limit browser fingerprinting in Firefox

Web analytics software often tracks people using a “fingerprint” of their browsers’ unique characteristics. Bumper stickers are a good analogy. If you see a blue Volkswagen with the same uncommon bumper stickers as a blue Volkswagen you saw yesterday, there is a very good chance it is the same person driving the same car. If you don’t want people to recognize your car, you can remove your bumper stickers or display the same bumper stickers seen on many other cars. The list of plugins and fonts installed on your computer are like bumper stickers on your browser.

For more technical details of fingerprinting, see Mozilla’s Fingerprinting wiki, the EFF’s Panopticlick demo, or the HTML Standard’s “hidden plugins” description.

I landed a fix for bug 757726 so Firefox 28 will “cloak” uncommon plugin names from navigator.plugins[] enumeration. This change does not disable any plugins; it just hides some plugin names.

If you find that a website no longer recognizes your installed plugin when running Firefox 28, this is likely a side effect of bug 757726. Please file a new bug blocking bug 757726 so we can fix our whitelist of uncloaked plugin names or have a web compatibility evangelist reach out to the website author to fix their code.

This code change will reduce browser uniqueness by “cloaking” uncommon plugin names from navigator.plugins[] enumeration. If a website does not use the “Adobe Acrobat NPAPI Plug-in, Version 11.0.02” plugin, why does it need to know that the “Adobe Acrobat NPAPI Plug-in, Version 11.0.02” plugin is installed? If a website does need to know whether the plugin is installed or meets minimum version requirements, it can still check navigator.plugins["Adobe Acrobat NPAPI Plug-in, Version 11.0.02"] or navigator.mimeTypes["application/vnd.fdf"].enabledPlugin (to work around problem plugins that short-sightedly include version numbers in their names).

For example, the following JavaScript reveals my installed plugins:

for (plugin of navigator.plugins) console.log(plugin.name);
“Shockwave Flash”
“QuickTime Plug-in 7.7.3”
“Default Browser Helper”
“Unity Player”
“Google Earth Plug-in”
“Silverlight Plug-In”
“Java Applet Plug-in”
“Adobe Acrobat NPAPI Plug-in, Version 11.0.02”
“WacomTabletPlugin”

navigator.plugins["Unity Player"].name // get cloaked plugin by name
“Unity Player”

But with plugin cloaking, the same JavaScript will not reveal as much personally-identifying information about my browser:

for (plugin of navigator.plugins) console.log(plugin.name);
“Shockwave Flash”
“QuickTime Plug-in 7.7.3”
“Java Applet Plug-in”

navigator.plugins["Unity Player"].name // get cloaked plugin by name
“Unity Player”

In theory, all plugin names could be cloaked because web content can query navigator.plugins[] by plugin name. Unfortunately, we could not cloak all plugin names because many popular websites check for Flash or QuickTime by enumerating navigator.plugins[] and comparing plugin names one by one, instead of just asking for navigator.plugins["Shockwave Flash"] by name. These websites should be fixed.
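For site authors, the robust check is a named lookup rather than enumeration. A minimal sketch, using a stand-in object in place of the real navigator.plugins host object:

```javascript
// A site that needs a specific plugin should ask for it by name; named
// lookups keep working under cloaking, unlike enumeration-based checks.
function hasPlugin(plugins, name) {
  return plugins[name] !== undefined;
}

// Stand-in for navigator.plugins (illustrative only; the real host object
// also answers named lookups for entries hidden from enumeration):
var fakePlugins = { 'Shockwave Flash': { name: 'Shockwave Flash' } };
hasPlugin(fakePlugins, 'Shockwave Flash'); // true
```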

The policy of which plugin names are uncloaked can be changed in the about:config pref “plugins.enumerable_names”. The pref’s value is a comma-separated list of plugin name prefixes (so the prefix “QuickTime” will match both “QuickTime Plug-in 6.4” and “QuickTime Plug-in 7.7.3”). The default pref cloaks all plugin names except Flash, Shockwave (Director), Java, and QuickTime. To cloak all plugin names, set the pref to the empty string “”. To cloak no plugin names, set the pref to the magic value “*”.
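My reading of that prefix-matching behaviour, as an illustrative sketch (not the actual Firefox implementation):

```javascript
// A plugin name is uncloaked if it starts with any prefix in the
// comma-separated pref value, with "*" and "" as special cases.
function isUncloaked(prefValue, pluginName) {
  if (prefValue === '*') return true;   // magic value: cloak nothing
  if (prefValue === '') return false;   // empty string: cloak everything
  return prefValue.split(',').some(function (prefix) {
    return pluginName.indexOf(prefix.trim()) === 0;
  });
}

isUncloaked('Shockwave,QuickTime,Java', 'QuickTime Plug-in 7.7.3'); // true
isUncloaked('Shockwave,QuickTime,Java', 'Unity Player');            // false
```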

I started hacking on this patch in my spare time 13 months ago. I finally found some time to complete it. :)

November 23, 2013 07:36 PM

November 10, 2013

Matt Brubeck

Better automated detection of Firefox performance regressions

Last spring I spent some of my spare time improving the automated script that detects regressions in Talos and other Firefox performance data. I’m finally writing up some of that work in case it’s useful or interesting to anyone else.

Talos is a system for running performance benchmarks; we use it to run a suite of benchmarks every time a change is pushed to the Firefox source repository. The Talos test harness reports these results to the graph server which stores them and can plot the recorded data to show how it changes over time.

Like most performance measurements, Talos benchmarks can be noisy. We need to use statistics to separate signal from noise. To determine whether a change to the source code caused a change in the benchmark results, an automated script takes multiple datapoints from before and after each push. It computes the average and standard deviation of the “before” datapoints and the “after” datapoints, and uses a Student’s t-test to estimate the likelihood that the datasets are significantly different. If the t-test exceeds a certain threshold, the script sends email to the author(s) of the responsible patches and to the dev-tree-management mailing list.
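The core comparison can be sketched as a two-sample t statistic. This is a simplification; the real script's windowing, thresholds, and exact t-test variant differ:

```javascript
// Simplified two-sample t statistic between "before" and "after" windows.
function mean(xs) {
  return xs.reduce(function (a, b) { return a + b; }, 0) / xs.length;
}

function variance(xs) {
  var m = mean(xs);
  return xs.reduce(function (a, x) { return a + (x - m) * (x - m); }, 0) /
         (xs.length - 1);
}

function tScore(before, after) {
  var se = Math.sqrt(variance(before) / before.length +
                     variance(after) / after.length);
  return Math.abs(mean(after) - mean(before)) / se;
}

// A clear jump between windows produces a large t score (about 24.5 here):
tScore([10, 11, 10, 11], [20, 21, 20, 21]);
```

When the score exceeds the alert threshold, the push is flagged as a candidate regression point.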

By nature, these statistical estimates can never be 100% certain. However, we need to make sure that they are correct as often as possible. False negatives mean that real regressions go undetected. But false positives will generate extra work, and may cause developers to ignore future regression alerts. I started inspecting graph server data and regression alerts by hand, recording and investigating any false negatives or false positives I found, and filed bugs to fix the causes of those errors.

Some of these were straightforward implementation bugs, like one where an infinite t-test score (near certain likelihood of regression) was treated as a zero score (no regression at all). Others involved tuning the number of datapoints and the threshold for sending alerts. I also added some unit tests and regression tests, including some past datasets with known regressions. Now we can ensure that future revisions of the script still identify the correct regression points in these datasets.

Some fixes required more involved changes to the analysis. For example, if one code change actually caused a regression, the pushes right before or after that change will also appear somewhat likely to be responsible for the regression (because they will also have large differences in average score between their “before” and “after” windows). If multiple pushes in a row had t-test scores over the threshold, the script used to send an alert for the first of those pushes, even if it was not the most likely culprit. Now the script blames the push with the highest t-test score, which is almost always the guilty party. This change had the biggest impact in reducing incorrect regression alerts.

After those changes, there was still one common cause of false alarms that I found. The regression analyzer compares the 12 datapoints before each push to the 12 datapoints after it. But these 12-point moving averages could change suddenly not just at the point where a regression happened, but also at an unrelated point that happens to be 12 pushes earlier or later. This caused spooky action at a distance where a regression in one push would cause a false alarm in a completely different push. To fix this, we now compute weighted averages with “triangular” weighting functions that give more weight to the point being analyzed, and fall off gradually with increasing distance from that point. This smooths out changes at the opposite ends of the moving windows.
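A one-sided simplification of the triangular weighting described might look like this (illustrative only, not the actual analysis script):

```javascript
// Weighted moving average with a triangular window: full weight at the
// point being analysed, falling off linearly with distance behind it.
function triangularAverage(data, index, windowSize) {
  var sum = 0, totalWeight = 0;
  for (var i = 0; i < windowSize; i++) {
    var j = index - i;
    if (j < 0) break;
    var weight = windowSize - i;     // windowSize, windowSize - 1, ..., 1
    sum += data[j] * weight;
    totalWeight += weight;
  }
  return sum / totalWeight;
}

// Distant points contribute little, so a step change far from the analysed
// point no longer flips the average abruptly:
triangularAverage([1, 1, 1, 10, 10, 10], 5, 3); // 10
```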

There are still occasional errors in regression detection, but as far as I can tell most of them are caused by genuinely misleading random noise or bimodal data. If you see any problems with regression emails, please file a bug (and CC :mbrubeck) and we’ll take a look at it. And if you’d like to help reduce useless alert emails, maybe you’d like to help fix bug 914756.

November 10, 2013 07:53 PM

A good time to try Firefox for Metro

“Firefox for Metro” is our project to build a new Firefox user interface designed for touch-screen devices running Windows 8. (“Metro” was Microsoft’s code name for the new touch-friendly user interface mode in Windows 8.) I’m part of the small team working on this project.

For the past year we’ve been fairly quiet, partly because the browser has been under heavy construction and not really suitable for regular use. It started as a fork of the old Fennec (mobile Firefox) UI, plus a new port of Gecko’s widget layer to Microsoft’s WinRT API. We spent part of that time ripping out and rebuilding old Fennec features to make them work on Windows 8, and finding and fixing bugs in the new widget code. More recently we’ve been focused on reworking the touch input layer. With a ton of help from the graphics team, we replaced Fennec’s old multi-process JavaScript touch support with a new off-main-thread compositing backend for the Windows Direct3D API, and added WinRT support to the async pan/zoom module that implements touch scrolling and zooming on Firefox OS.

All this work is still underway, but in the past week we finally reached a tipping point where I’m able to use Firefox for Metro for most of my everyday browsing. There are still bugs, and we are still actively working on performance and completing the UI work, but I’m now finding very few cases where I need to switch to another browser because of a problem with Firefox for Metro. If you are using Windows 8 (especially on a touch-screen PC) and are the type of brave person who uses Firefox nightly builds, this would be a great time to try Metro-style Firefox and let us know what you think!

Looking to the future, here are some of our remaining development priorities for the first release of Firefox for Metro:

And here are some things that I hope we can spend more time on once that work has shipped:

If you want to contribute to any of this work, please check out our developer documentation and come chat with us in #windev on irc.mozilla.org or on our project mailing list!

November 10, 2013 05:20 PM

October 30, 2013

Geoff Brown

Firefox for Android Performance Measures – September/October check-up

I didn’t get a chance to write up a performance check-up post at the end of September, so this entry reviews September and October.

Highlights:

 - New Talos tcanvasmark test added in September

 - Talos tsvgx replaced the previous SVG test

 - Talos Tp4m and Ts tests updated

 - New “responsiveness” measures added to Eideticker tests

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Native Fennec (Android 2.2 opt). The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcanvasmark

This newly added (September 2013) test runs the third-party CanvasMark benchmark suite, which measures the browser’s ability to render a variety of canvas animations at a smooth framerate as the scenes grow more complex. Results are a score “based on the length of time the browser was able to maintain the test scene at greater than 30 FPS, multiplied by a weighting for the complexity of each test type”. Higher values are better.

Image

7800 (start of period) – 7800 (end of period).

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.

Image

High variability and regressions in August/September — see bug 913683.

15 (Sept 1) – 30 (Oct 30)

trobopan

Panning performance test. The value is the sum of squared frame delays (time in ms over a 25 ms threshold) encountered while panning. Lower values are better.
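One plausible reading of that metric, as a sketch (the actual Talos implementation may compute it differently):

```python
def trobopan_score(frame_delays_ms, threshold=25):
    """Each frame delay over the threshold contributes its squared
    overage, so a few long janks hurt far more than many tiny ones."""
    return sum((d - threshold) ** 2 for d in frame_delays_ms if d > threshold)
```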

Image

16000000 (Sept 1) – 140000 (Oct 30)

The very significant regression from August (bug 908823) was fixed in September.

Image

A closer look at October reveals another, smaller, regression.

Regression of October 23 – see bug 877203.

tprovider

Performance of the history and bookmarks provider. Reports the time (ms) to perform a group of database operations. Lower values are better.

375 (Sept 1) – 375 (Oct 30).

tsvgx

This test was introduced in September, replacing the previous SVG test (see Replacing tscroll,tsvg with tscrollx,tsvgx).

An svg-only number that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) iterates in unlimited frame-rate mode, thus reflecting the maximum rendering throughput of each test. The reported value is the page load time or, for animations/iterations, the overall duration the sequence/animation took to complete.

Image

1200 (Sept 19) – 1350 (Oct 30)

Regression of Oct 17 – see bug 923193.

tp4m

This test was updated in September, replacing tp4m_nochrome.

Generic page load test. Lower values are better.

Image

700 (Sept 19) – 700 (Oct 30).

tp4m_main_rss

136000000 (Sept 19) – 136000000 (Oct 30).

ts_paint

Another test updated in September.

Startup performance test. Lower values are better.

Image

4300 (Sept 19) – 4300 (Oct 30).

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

Image

“Time to throbber start” measures the time from process launch to the start of the throbber animation. Smaller values are better.

Time to throbber start was pretty stable during this period.

Image

“Time to throbber stop” measures the time from process launch to the end of the throbber animation. Smaller values are better.

Time to throbber stop was pretty stable during this period.

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

Eideticker now includes a “responsiveness” measure for several of the existing tests. See http://wrla.ch/blog/2013/10/first-eideticker-responsiveness-tests/ for background information.

Image

Most of the other measures seem fairly stable for September/October. Here are some of the more active graphs:

Image

Image

awsy

See https://www.areweslimyet.com/mobile/ for content and background information.

Image

Image

Image


October 30, 2013 10:36 PM

October 29, 2013

Margaret Leibovic

MozFest Recap: Building Your First Mobile Web App

This past weekend I attended Mozilla Festival in London, and it was incredibly energizing! Everyone at MozFest was so excited about the web, but unlike your average tech conference, there was a much more diverse set of participants, including hackers, makers, journalists, educators, and even lots of kids. The focus of this year’s MozFest was about creating a web-literate world, so I decided to facilitate a session on building your first mobile web app.

Although projects like Webmaker and Appmaker are designed to make content creation more accessible for everyone, I wanted to create some learning materials that could put people on the path towards becoming more serious web developers. This path includes learning the fundamentals of HTML/CSS/JS, as well as how to use the developer tools built into browsers, and popular collaboration tools like Github.

To teach these basics, I made a slide deck that walks you through building a simple stopwatch app that’s hosted using Github pages. At the start of my session, I presented these slides (and of course threw in some Firefox dev tools demos!), but then I let everyone loose to start hacking at their own speeds. There were about 25 participants with a wide range of ability levels, and I think this format worked well to keep everyone engaged (and gave me the opportunity to offer more individual help).

My hope is that other people might find these resources useful, so please use them and share them! Also, because they’re open source, pull requests are welcome :)

October 29, 2013 08:27 PM

October 28, 2013

Chris Lord

Sabbatical Over

Aww, my 8-week sabbatical is now over. I wish I had more time, but I feel I used it well and there are certainly lots of Firefox bugs I want to work on too, so perhaps it’s about that time now (also, it’s not that long till Christmas anyway!)

So, what did I do on my sabbatical?

As I mentioned in the previous post, I took the time off primarily to work on a game, and that’s pretty much what I did. Except, I ended up working on two games. After realising the scope for our first game was much larger than we’d reckoned for, we decided to work on a smaller puzzle game too. I had a prototype working in a day; that prototype was rewritten in another day because DOM is slow, then rewritten again in a third day because, as it turns out, canvas isn’t particularly fast either. After that, it’s been polish and refinement; it still isn’t done, but it’s fun to play and there’s promise. We’re not sure what the long-term plan is for this, but I’d like to package it with a runtime and distribute it on the major mobile app-stores (it runs in every modern browser, IE included).

The first project ended up being a first-person, rogue-like, dungeon crawler. None of those genres are known for being particularly brief or trivial games, so I’m not sure what we expected, but yes, it’s a lot of work. In this time, we’ve gotten our idea of the game a bit more solid, designed some interaction, worked on various bits of art (texture-sets, rough monsters) and have an engine that lets you walk around an area, pick things up and features deferred, per-pixel lighting. It doesn’t run very well on your average phone at the moment, and it has layout bugs in WebKit/Blink based browsers. IE11’s WebGL also isn’t complete enough to render it as it is, though I expect I could get a basic version of it working there. I’ve put this on the back-burner slightly to focus on smaller projects that can be demoed and completed in a reasonable time-frame, but I hope to have the time to return to it intermittently and gradually bring it up to the point where it’s recognisable as a game.

You can read a short paragraph and see a screenshot of both of these games at our team website, or see a few more on our Twitter feed.

What did I learn on my sabbatical?

Well, despite what many people are pretty eager to say, the web really isn’t ready as a games platform. Or an app platform, in my humble opinion. You can get around the issues if you have a decent knowledge of how rendering engines are implemented and a reasonable grasp of debugging and profiling tools, but there are too many performance and layout bugs for it to be comfortable right now, considering the alternatives. While it isn’t ready, I can say that it’s going to be amazing when it is. You really can write an app that, with relatively little effort, will run everywhere. Between CSS media queries, viewport units and flexbox, you can finally, easily write a responsive layout that can be markedly different for desktop, tablet and phone, and CSS transitions and a little JavaScript give you great expressive power for UI animations. WebGL is good enough for writing most mobile games you see, if you can avoid jank caused by garbage collection and reflow. Technologies like CocoonJS make this really easy to deploy too.

Given how positive that all sounds, why isn’t it ready? These are the top bugs I encountered while working on some games (from a mobile specific viewpoint):

WebGL cannot be relied upon

WebGL has finally hit Chrome for Android release version, and has been enabled in Firefox and Opera for Android for ages now. The aforementioned CocoonJS lets you use it on iOS too, even. Availability isn’t the problem. The problem is that it frequently crashes the browser, or you frequently lose context, for no good reason. Changing the orientation of your phone, or resizing the browser on desktop has often caused the browser to crash in my testing. I’ve had lost contexts when my app is the only page running, no DOM manipulation is happening, no textures are being created or destroyed and the phone isn’t visibly busy with anything else. You can handle it, but having to recreate everything when this happens is not a great user experience. This happens frequently enough to be noticeable, and annoying. This seems to vary a lot per phone, but is not something I’ve experienced with native development at this scale.

An aside, Chrome also has an odd bug that causes a security exception if you load an image (on the same domain), render it scaled into a canvas, then try to upload that canvas. This, unfortunately, means we can’t use WebGL on Chrome in our puzzle game.

Canvas performance isn’t great

Canvas ought to be enough for simple 2d games, and there are certainly lots of compelling demos about, but I find it’s near impossible to get 60fps, full-screen, full-resolution performance out of even quite simple cases, across browsers. Chrome has great canvas acceleration and Firefox has an accelerated canvas too (possibly Aurora+ only at the moment), and it does work, but not well enough that you can rely on it. My puzzle game uses canvas as a fallback renderer on mobile, when WebGL isn’t an option, but it has markedly worse performance.

Porting to Chrome is a pain

A bit controversial, and perhaps a pot/kettle situation coming from a Firefox developer, but it seems that if Chrome isn’t your primary target, you’re going to have fun porting to it later. I don’t want to get into specifics, but I’ve found that Chrome often lays out differently (and incorrectly, according to specification) when compared to Firefox and IE10+, especially when flexbox becomes involved. Its transform implementation is quite buggy too, and often ignores the set perspective. There’s also the small annoyance that some features that are unprefixed in other browsers are still prefixed in Chrome (animations, 3d transforms). I actually found Chrome to be more of a pain than IE. In modern IE (10+), things tend to either work, or not work. I had fewer situations where something purported to work, but was buggy or incorrectly implemented.

Another aside, touch input in Chrome for Android has unacceptable latency and there doesn’t seem to be any way of working around it. No such issue in Firefox.

Appcache is awful

Uh, seriously. Who thought it was a good idea that appcache should work entirely independently of the browser cache? Because it isn’t a good idea. Took me a while to figure out that I have to change my server settings so that the browser won’t cache images/documents independently of appcache, breaking appcache updates. I tend to think that the most obvious and useful way for something to work should be how it works by default, and this is really not the case here.

Aside, Firefox has a bug that means that any two pages that have the same appcache manifest will cause a browser crash when accessing the second page. This includes an installed version of an online page using the same manifest.

CSS transitions/animations leak implementation details

This is the most annoying one, and I’ll make sure to file bugs about this in Firefox at least. Because setting of style properties gets coalesced, animations often don’t run. Removing display:none from an element and setting a style class to run a transition on it won’t work unless you force a reflow in-between. Similarly, switching to one style class, then back again won’t cause the animation on the first style-class to re-run. This is the case at least in Firefox and Chrome, I’ve not tested in IE. I can’t believe that this behaviour is explicitly specified, and it’s certainly extremely unintuitive. There are plenty of articles that talk about working around this, I’m kind of amazed that we haven’t fixed this yet. I’m equally concerned about the bad habits that this encourages too.

DOM rendering is slow

One of the big strengths of HTML5 as an app platform is how expressive HTML/CSS are and how easily you can create user interfaces with them, then visually tweak and debug them. You would naturally want to use this in any app or game that you were developing for the web primarily. Except, at least for games, if you use the DOM for your UI, you are going to spend an awful lot of time profiling, tweaking and making seemingly irrelevant changes to your CSS to try and improve rendering speed. This is no good at all, in my opinion, as this is the big advantage that the web has over native development. If you’re using WebGL only, you may as well just develop a native app and port it to wherever you want it, because using WebGL doesn’t make cross-device testing any easier and it certainly introduces a performance penalty. On the other hand, if you have a simple game, or a UI-heavy game, the web makes that much easier to work on. The one exception to this seems to be IE, which has absolutely stellar rendering performance. Well done IE.

This has been my experience with making web apps. Although those problems exist, when things come together, the result is quite beautiful. My puzzle game, though there are still browser-specific bugs to work around and performance issues to fix, works across varying sizes and specifications of phone, in every major, modern browser. It even allows you to install it in Firefox as a dedicated app, or add it to your homescreen in iOS and Chrome beta. Being able to point someone to a URL to play a game, with no further requirement, and no limitation of distribution or questionable agreements to adhere to, is a real game-changer. I love that the web fosters creativity and empowers the individual, despite the best efforts of various powers that be. We have work to do, but the future’s bright.

October 28, 2013 09:22 AM

October 17, 2013

Mark Finkle

GeckoView: Embedding Gecko in your Android Application

Firefox for Android is a great browser, bringing a modern HTML rendering engine to Android 2.2 and newer. One of the things we have been hoping to do for a long time now is make it possible for other Android applications to embed the Gecko rendering engine. Over the last few months we started a side project to make this possible. We call it GeckoView.

As mentioned in the project page, we don’t intend GeckoView to be a drop-in replacement for WebView. Internally, Gecko is very different from WebKit and trying to expose the same features using the same APIs just wouldn’t be scalable or maintainable. That said, we want it to feel Android-ish and you should be comfortable with using it in your applications.

We have started to build GeckoView as part of our nightly Firefox for Android builds. You can find the library ZIPs in our latest nightly FTP folder. We are in the process of improving the APIs used to embed GeckoView. The current API is very basic. Most of that work is happening in these bugs:

If you want to start playing around with GeckoView, you can try the demo application I have on Github. It links to some pre-built GeckoView libraries.

We’d love your feedback! We use the Firefox for Android mailing list to discuss status, issues and feedback.

Note: We’re having some Tech Talks at Mozilla’s London office on Monday (Oct 21). One of the topics is GeckoView. If you’re around or in town for Droidcon, please stop by.

October 17, 2013 09:33 PM

William Lachance

Automatically measuring startup / load time with Eideticker

So we’ve been using Eideticker to automatically measure startup/pageload times for about a year now on Android, and more recently on FirefoxOS as well (albeit not automatically). This gives us nice and pretty graphs like this:

flot-startup-times-gn

Ok, so we’re generating numbers and graphing them. That’s great. But what’s really going on behind the scenes? I’m glad you asked. The story is a bit different depending on which platform you’re talking about.

Android

On Android we connect Eideticker to the device’s HDMI out, so we count on a nearly pixel-perfect signal. In practice, it isn’t quite, but it is within a few RGB values that we can easily filter for. This lets us come up with a pretty good mechanism for determining when a page load or app startup is finished: just compare frames, and say we’ve “stopped” when the pixel differences between frames are negligible (previously defined at 2048 pixels, now 4096 — see below). Eideticker’s new frame difference view lets us see how this works. Look at this graph of application startup:

frame-difference-android-startup
[Link to original]

What’s going on here? Well, we see some huge jumps in the beginning. This represents the animated transitions that Android makes as we transition from the SUTAgent application (don’t ask) to the beginnings of the Firefox for Android browser chrome. You’ll notice though that there are some more changes that come in around the 3 second mark. This is when the site bookmarks are fully loaded. If you load the original page (link above) and swipe your mouse over the graph, you can see what’s going on for yourself. :)

This approach is not completely without problems. It turns out that there is sometimes some minor churn in the display even when the app is for all intents and purposes started. For example, sometimes the scrollbar fading out of view can result in a significantish pixel value change, so I recently upped the threshold of pixels that are different from 2048 to 4096. We also recently encountered a silly problem with a random automation app displaying “toasts” which caused results to artificially spike. More tweaking may still be required. However, on the whole I’m pretty happy with this solution. It gives useful, undeniably objective results whose meaning is easy to understand.
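The frame-difference heuristic above can be sketched as a toy model, treating each frame as a flat list of pixel values (the real Eideticker code of course works on captured RGB video frames, and the fuzz value here is illustrative):

```python
def frames_differ(frame_a, frame_b, threshold=4096, fuzz=2):
    """True if more than `threshold` pixels changed by more than `fuzz`,
    filtering out the near-identical noise in the HDMI signal."""
    changed = sum(1 for a, b in zip(frame_a, frame_b) if abs(a - b) > fuzz)
    return changed >= threshold

def startup_end(frames, threshold=4096, fuzz=2):
    """Index of the last frame with a significant change: the point at
    which we say startup (or page load) has 'stopped'."""
    last_change = 0
    for i in range(1, len(frames)):
        if frames_differ(frames[i - 1], frames[i], threshold, fuzz):
            last_change = i
    return last_change
```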

FirefoxOS

So as mentioned previously, we use a camera on FirefoxOS to record output instead of HDMI output. Pretty unsurprisingly, this is much noisier. See this movie of the contacts app starting and note all the random lighting changes, for example:

My experience has been that pixel differences can be so great between visually identical frames on an eideticker capture on these devices that it’s pretty much impossible to settle on when startup is done using the frame difference method. It’s of course possible to detect very large scale changes, but the small scale ones (like the contacts actually appearing in the example above) are very hard to distinguish from random differences in the amount of light absorbed by the camera sensor. Tricks like using median filtering (a.k.a. “blurring”) help a bit, but not much. Take a look at this graph, for example:

plotly-contacts-load-pixeldiff
[Link to original]

You’ll note that the pixel differences during “static” parts of the capture are highly variable. This is because the pixel difference depends heavily on how “bright” each frame is: parts of the capture which are black (e.g. a contacts icon with a black background) have a much lower difference between them than parts that are bright (e.g. the contacts screen fully loaded).

After a day or so of experimenting and research, I settled on an approach which seems to work pretty reliably. Instead of comparing the frames directly, I measure the entropy of the histogram of colours used in each frame (essentially just an indication of brightness in this case, see this article for more on calculating it), then compare that of each frame with the average of the same measure over 5 previous frames (to account for the fact that two frames may be arbitrarily different, but it is unlikely that a sequence of frames will be). This seems to work much better than frame difference in this environment: although there are plenty of minute differences in light absorption in a capture from this camera, the overall color composition stays mostly the same. See this graph:

plotly-contacts-load-entropy
[Link to original]

If you look closely, you can see some minor variance in the entropy differences depending on the state of the screen, but it’s not nearly as pronounced as before. In practice, I’ve been able to get extremely consistent numbers with a reasonable threshold of 0.05.
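A rough sketch of that approach, simplified to one brightness value per pixel (the real analysis builds a colour histogram per captured frame; the window size of 5 and the threshold of 0.05 are the values from the post):

```python
import math
from collections import Counter

def frame_entropy(pixels):
    """Shannon entropy of the histogram of pixel values in one frame."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def last_significant_change(frames, window=5, threshold=0.05):
    """Index of the last frame whose entropy differs from the average
    entropy of the preceding `window` frames by more than `threshold`."""
    entropies = [frame_entropy(f) for f in frames]
    last = 0
    for i in range(window, len(entropies)):
        avg = sum(entropies[i - window:i]) / window
        if abs(entropies[i] - avg) > threshold:
            last = i
    return last
```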

In Eideticker I’ve tried to steer away from using really complicated math or algorithms to measure things, unless all the alternatives fail. In that sense, I really liked the simplicity of “pixel differences” and am not thrilled about having to resort to this: hopefully the concepts in this case (histograms and entropy) are simple enough that most people will be able to understand my methodology, if they care to. Likely I will need to come up with something else for measuring responsiveness and animation smoothness (frames per second), as likely we can’t count on light composition changing the same way for those cases. My initial thought was to use edge detection (which, while somewhat complex to calculate, is at least easy to understand conceptually) but am open to other ideas.

October 17, 2013 05:12 PM

October 15, 2013

Margaret Leibovic

Remote Developer Tools and Firefox for Android

A lot of the Firefox for Android UI is written in Java, but under the hood there’s still quite a bit of JS that controls core browser logic. We also use HTML to create the content of most of our chrome-privileged pages, such as about:addons. Today I found myself working on a polish bug for about:apps. In the past, tweaking the CSS for these pages has been a pain, since testing changes required pushing a new APK to your device. However, I remembered that the remote inspector recently landed for Firefox 26, so I decided to try it out. Verdict: it was amazing!

imageimage

Setting up the remote developer tools is as easy as flipping prefs on desktop and mobile (these prefs are now exposed in the UI!), running an adb command to set up port forwarding, and hitting a button. The directions are really well documented on MDN, so you should look there for more detailed instructions.

While playing around with the inspector, I also realized that remote chrome debugging is enabled for Firefox for Android. It wasn’t immediately obvious to me that in order to attach the debugger to the main chrome window, you need to select the “Main Process” link on the desktop “Connect to remote device” page, but once I did that, I had access to all of the browser chrome JS. And since add-ons for Firefox for Android are written in JS, this means that you can easily debug all of your add-on code!

image

Big thanks to the Firefox Developer Tools Team for doing so much awesome work. And I know they’re always looking for new contributors to help them out!

October 15, 2013 04:42 AM

October 14, 2013

Sriram Ramasubramanian

Number Tweening

Timely is a beautiful Android app. Waking up to see the numbers animate is like a morning bliss. I was intrigued by how the numbers are animated in that app. It’s not a simple fade-out-fade-in effect. It has some kind of folding in it — but not the same as the airport status boards. UiAutomator says that the entire block is one custom view. By profiling the GPU, I found that something is drawn to the screen consistently — even when there is a pause between animations. What could it be? That’s when Christopher Nolan came into my dreams and asked, “Are you watching closely?”.

The numbers are not drawn directly from a font as a TextView; instead they are constructed from multiple segments. The transition from one number to another happens by shape tweening the segments. Interesting! But how does a line change to a curve and back to a line? That’s where bezier curves help us. A bezier curve has two end points, say A and B, and two control points, say C1 and C2. The curve is drawn by.. alright, no math here! :P And the interesting fact is, if C1 = A and C2 = B, we get a straight line from A to B. So a cubic bezier can behave as a curve and a line — that’s all we need.
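That degenerate case is easy to check numerically: with C1 = A and C2 = B, every point of the cubic B(t) = (1-t)^3*A + 3(1-t)^2*t*C1 + 3(1-t)*t^2*C2 + t^3*B lies on the segment AB (the motion along the segment is eased, but the shape is a straight line). A quick sketch, evaluating one coordinate at a time:

```python
def cubic_bezier(a, c1, c2, b, t):
    """One coordinate of a cubic bezier at parameter t in [0, 1]."""
    u = 1 - t
    return u**3 * a + 3 * u**2 * t * c1 + 3 * u * t**2 * c2 + t**3 * b
```

For a segment from (0, 0) to (6, 3), the curve with C1 = A and C2 = B always satisfies y = x / 2, i.e. it never leaves the line.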

Numbers

The first part of the problem is to identify the control points. Numbers like 1 and 7 need only two control points. However, numbers like 0, 3 and 8 need 5 control points. If we have to map 4 segments between numbers, then all 10 digits should use five control points. The image above doesn’t show all the points in digit 1. You could assume that points 3, 4 and 5 are shared. The digit 7 doesn’t need 5 control points, but they are added so that the animation from 7 to 8 looks nice. The next part is to draw the curves with these control points. If you are using an application like Adobe Illustrator, trying this out is easy. The following image shows how the curves are created for number 3. But wait! The choice of font also matters. They chose Futura Light, which has nice curves and straight lines. If you were to try this with Roboto — good luck! ;)

Number-3

Now that we’ve identified the end points and the control points for each segment, lets mash up some code.

public class NumberView extends View {

    private final Interpolator mInterpolator;
    private final Paint mPaint;
    private final Path mPath;

    // Numbers currently shown.
    private int mCurrent = 0;
    private int mNext = 1;

    // Frame of transition between current and next frames.
    private int mFrame = 0;
    
    // The 5 end points. (Note: The last end point is the first end point of the next segment.)
    private final float[][][] mPoints = {
        {{44.5f, 100}, {100, 18}, {156, 100}, {100, 180}, {44.5f, 100}}, // 0
        ... and for the other numbers.
    };

    // The set of the "first" control points of each segment.
    private final float[][][] mControlPoint1 = { ... };

    // The set of the "second" control points of each segment.
    private final float[][][] mControlPoint2 = { ... };
    
    public NumberView(Context context, AttributeSet attrs) {
        super(context, attrs);
        
        setWillNotDraw(false);
        mInterpolator = new AccelerateDecelerateInterpolator();
        
        // A new paint with the style as stroke.
        mPaint = new Paint();
        mPaint.setAntiAlias(true);
        mPaint.setColor(Color.BLACK);
        mPaint.setStrokeWidth(5.0f);
        mPaint.setStyle(Paint.Style.STROKE);
        
        mPath = new Path();
    }
    
    @Override
    public void onDraw(Canvas canvas) {
        super.onDraw(canvas);

        // Frames 0 and 1 are the first pause.
        // Frames 8 and 9 are the last pause.
        // Constrain current frame to be between 0 and 6.
        final int currentFrame;
        if (mFrame < 2) {
            currentFrame = 0;
        } else if (mFrame > 8) {
            currentFrame = 6;
        } else {
            currentFrame = mFrame - 2;
        }
        
        // A factor of the difference between current
        // and next frame based on interpolation.
        // Only 6 frames are used between the transition.
        final float factor = mInterpolator.getInterpolation(currentFrame / 6.0f);
        
        // Reset the path.
        mPath.reset();

        final float[][] current = mPoints[mCurrent];
        final float[][] next = mPoints[mNext];

        final float[][] curr1 = mControlPoint1[mCurrent];
        final float[][] next1 = mControlPoint1[mNext];

        final float[][] curr2 = mControlPoint2[mCurrent];
        final float[][] next2 = mControlPoint2[mNext];
        
        // First point.
        mPath.moveTo(current[0][0] + ((next[0][0] - current[0][0]) * factor),
                     current[0][1] + ((next[0][1] - current[0][1]) * factor));
        
        // Rest of the points connected as bezier curve.
        for (int i = 1; i < 5; i++) {
            mPath.cubicTo(curr1[i-1][0] + ((next1[i-1][0] - curr1[i-1][0]) * factor),
                          curr1[i-1][1] + ((next1[i-1][1] - curr1[i-1][1]) * factor),
                          curr2[i-1][0] + ((next2[i-1][0] - curr2[i-1][0]) * factor),
                          curr2[i-1][1] + ((next2[i-1][1] - curr2[i-1][1]) * factor),
                          current[i][0] + ((next[i][0] - current[i][0]) * factor),
                          current[i][1] + ((next[i][1] - current[i][1]) * factor));
        }

        // Draw the path.
        canvas.drawPath(mPath, mPaint);

        // Next frame.
        mFrame++;

        // Each number change has 10 frames. Reset.
        if (mFrame == 10) {
            // Reset to zarro.
            mFrame = 0;

            mCurrent = mNext;
            mNext++;

            // Reset to zarro.
            if (mNext == 10) {
                mNext = 0;
            }
        }

        // Callback for the next frame.
        postInvalidateDelayed(100);
    }
}

Given a set of end points and control points, the idea is to draw 4 cubic bezier curves. We create a path and move to the first end point. From there, we draw a bezier curve to the second end point using the given control points. This end point becomes the “first” end point for the next segment, and so on. This logic alone would draw the numbers without any animation. To kick in some animation bits, each transition is defined by 10 frames, called at a 100ms interval. The first two and the last two frames are static — that’s how the number stops for a heartbeat. The in-between 6 frames map the end points and control points for the segments between the numbers, and draw a part of the transition based on the interpolated value. To put this in simpler terms, let’s take a “first” end point that moves from (100, 100) to (124, 124). The first of the six frames would draw (100, 100). The second would draw (100 + (124 − 100)/6, 100 + (124 − 100)/6), that is (104, 104). The third of the six frames would draw (108, 108) and the last would finally end at (124, 124). Now extend this argument to the other end and control points. This would morph the curves from one number to another — with the in-between curves looking twisted and turned ;)
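The per-frame arithmetic above can be sketched in plain Java. The points (100, 100) to (124, 124) and the 6-step factor are taken from the example; a linear interpolator is assumed, so factor = frame / 6:

```java
public class LerpDemo {
    // Linear interpolation of one coordinate between the current
    // and next values, as done for each end/control point per frame.
    static float lerp(float current, float next, float factor) {
        return current + (next - current) * factor;
    }

    public static void main(String[] args) {
        // The "first" end point moving from (100, 100) to (124, 124).
        for (int frame = 0; frame <= 6; frame++) {
            float factor = frame / 6.0f;
            System.out.println("frame " + frame + ": ("
                    + lerp(100f, 124f, factor) + ", "
                    + lerp(100f, 124f, factor) + ")");
        }
    }
}
```

The same lerp is applied to every end point and control point in the draw loop, which is why the curves appear to flow from one digit into the next.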

As you can see in the code, all 6 points are calculated in each draw frame. This might not be a good idea. Since we know the interpolation, these values could be pre-calculated and held as a giant array of points. It would also be nice to have all the points be float values between 0 and 1, as that would allow scaling the numbers to any sized canvas easily. If you were to deal with a different width for each digit, have other text showing up that could also animate, and calculate where each digit should go in a single View — that’s when I bow and respect the Timely app! They’ve done much more work than what this blog post covers. This post is just to give an idea of how such things can be done. Their work is a piece of art! To see how this animation works, try an HTML version here.
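A minimal sketch of the two suggestions above: pre-calculating the interpolated values once, and keeping points normalized to [0, 1] so they scale to any canvas. The 7-entry table (factors 0/6 through 6/6) and the linear factor are assumptions matching the code earlier:

```java
import java.util.Arrays;

public class PrecalcDemo {
    static final int STEPS = 7; // factors 0/6 .. 6/6

    // Precompute all interpolated values for one coordinate, so the
    // draw loop only has to index into this table per frame.
    static float[] precompute(float current, float next) {
        float[] table = new float[STEPS];
        for (int i = 0; i < STEPS; i++) {
            table[i] = current + (next - current) * (i / 6.0f);
        }
        return table;
    }

    // With points stored in [0, 1], scaling to a canvas is a multiply.
    static float toCanvas(float normalized, float canvasSize) {
        return normalized * canvasSize;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(precompute(0.25f, 0.75f)));
        System.out.println(toCanvas(0.5f, 480f));
    }
}
```

In the real view, one such table per end point and control point would replace the per-frame arithmetic inside onDraw().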


October 14, 2013 03:27 PM

Lucas Rocha

“Mozilla on Android” talks in London

Mark Finkle and Margaret Leibovic are flying to London next week to attend Droidcon and the Mozilla Festival. So we thought it would be a good opportunity to give a series of tech talks on the present and future of Mozilla products and web platform on Android in our London MozSpace.

We’ll be talking about Firefox for Android, our beloved mobile browser; GeckoView, the upcoming API for embedding Gecko in Android apps; the latest add-on APIs to extend Firefox for Android; as well as some cool demos of Mozilla’s runtime platform and developer tools for Open Web apps on Android.

The talks will be followed by free drinks and pizzas in our community space. Interested? Register now at the event page.

And by the way, if you’re attending Droidcon this year, I’ll be giving a talk covering some of the most interesting technical details behind Firefox for Android.

Hope to see you next week!

October 14, 2013 01:20 PM

October 12, 2013

Kartikaya Gupta

A little Webmaker success story

My girlfriend was writing a blog post and wanted to put some tables in. Unfortunately the Wordpress interface wasn't being very helpful with adding tables so she took a PNG screenshot of the tables in Word and inserted those in. While the solution worked fine for her, I was aware that turning text into images is generally not a great solution for the web, for a number of reasons (bad for accessibility, less dynamic layout, slower page load, etc.).

I realized this was a perfect opportunity for a little webmaking experience. I fired up Thimble on my laptop, and with her looking over my shoulder, created a simple two-row table with the style she wanted, showing her some of the different style attributes that could be used. Once I had the start of her table, I published it as a "make". She was then able to view it on her laptop and remix it, putting in the rest of the data she wanted. She also created a second table (complete with bullet points and red text), and pasted the HTML back into the Wordpress HTML editor.

I think the part I like best about tools like Thimble is that they are very low-overhead to use. You don't need to sign in to start using it, and the interface is simple and obvious. For publishing the make, I did have to sign in, but with Persona, so I didn't have to create a new password for it. (If it required creating a more heavyweight account I would have just copied the HTML into pastebin or something, which is not nearly as cool.) And of course, the live preview of the HTML in Thimble is pretty neat too :)

Oh, and you should definitely read her blog post: Get Moving!

October 12, 2013 08:51 PM

October 08, 2013

William Lachance

First Eideticker Responsiveness Tests

[ For more information on the Eideticker software I'm referring to, see this entry ]

Time for another update on Eideticker. In the last quarter, I’ve been working on two main items:

  1. Responsiveness tests (Android / FirefoxOS)
  2. Eideticker for FirefoxOS

The focus of this post is the responsiveness work. I’ll talk about Eideticker for FirefoxOS soon. :)

So what do I mean by responsiveness? At a high level, I mean how quickly one sees a response after performing an action on the device. For example, if I perform a swipe gesture to scroll the content down while browsing CNN.com, how long does it take after I start the gesture for the content to visibly scroll down? If you break it down, there’s a multi-step process that happens behind the scenes after a user action like this:

input-events

If there is a significant delay anywhere in the steps above, the user experience is likely to be bad. Usability research suggests that any lag consistently above 100 milliseconds will lead the user to perceive things as laggy. To keep our users happy, we need to do our bit to make sure that we respond quickly at all levels that we control (just the application layer on Android, but pretty much everything on FirefoxOS). Even if we can’t complete the work required on our end to fully respond to the user’s action, we should at least display something to acknowledge that things have changed.

But you can’t improve what you can’t measure. Fortunately, we have the means to calculate the time delta between most of the steps above. I learned from Taras Glek this weekend that it should be possible to simulate an actual capacitive touch event on a modern touch screen. We can recognize when the hardware event is available to be consumed by userspace by monitoring the `/dev/input` subsystem. And once the event reaches the application (the Android or FirefoxOS application), there’s no reason we can’t add instrumentation in all sorts of places to track the processing of both the event and the rendering of the response.
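For illustration, here is a hedged sketch of what consuming one such event record could look like. It assumes the 16-byte struct input_event layout of a 32-bit Linux kernel (two 32-bit timeval words, then a 16-bit type, a 16-bit code, and a 32-bit value); 64-bit kernels use 64-bit time fields, so the offsets would differ there:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class InputEventParser {
    // struct input_event on a 32-bit kernel: tv_sec, tv_usec (32-bit each),
    // __u16 type, __u16 code, __s32 value -- 16 bytes total.
    static final int EVENT_SIZE = 16;

    // Parse one record into { sec, usec, type, code, value }.
    static long[] parse(byte[] record) {
        ByteBuffer buf = ByteBuffer.wrap(record).order(ByteOrder.LITTLE_ENDIAN);
        long sec = buf.getInt() & 0xFFFFFFFFL;
        long usec = buf.getInt() & 0xFFFFFFFFL;
        int type = buf.getShort() & 0xFFFF;  // e.g. EV_ABS == 3
        int code = buf.getShort() & 0xFFFF;  // e.g. ABS_MT_POSITION_X == 0x35
        int value = buf.getInt();
        return new long[] { sec, usec, type, code, value };
    }

    // Helper to build a record, handy for checking the layout.
    static byte[] makeRecord(long sec, long usec, int type, int code, int value) {
        ByteBuffer buf = ByteBuffer.allocate(EVENT_SIZE).order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt((int) sec).putInt((int) usec)
           .putShort((short) type).putShort((short) code).putInt(value);
        return buf.array();
    }
}
```

The kernel timestamp in each record is what lets us say exactly when the hardware event became visible to userspace.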

My working hypothesis is that it’s application-level latency (i.e. the time between the application receiving the event and being able to act on it) that dominates, so that’s what I decided to measure. This is purely based on intuition and by no means proven, so we should test this (it would certainly be an interesting exercise!). However, even if it turns out that there are significant problems here, we still care about the other bits of the stack — there’s lots of potentially-latency-introducing churn there and the risk of regression in our own code is probably higher than it is elsewhere since it changes so much.

Last year, I wrote a tool called Orangutan that can directly inject input events into an input device on Android or FirefoxOS. It seemed like a fairly straightforward extension of the tool to output timestamps when these events were registered. It was. By synchronizing the time between the device and the machine doing the capturing, we can then correlate the input timestamps to the captured frames. To help visualize what’s going on, I generated this view:

taskjs-framediff-view

[Link to original]

The X axis in that graph represents time. The Y axis represents the difference between the frame at that time and the previous one, in number of pixels. The red segments represent periods in the capture when events are ongoing (different colours are used only to distinguish distinct events). [1]

For a first pass at measuring responsiveness, I decided to measure the time between the first event being initiated and there being a significant frame difference (i.e. an observable response to the action). You can see some preliminary results on the eideticker dashboard:

taskjs-responsiveness

[Link to original]

The results seemed highly variable at first because I was synchronizing time between the device and an external ntp server, rather than the host machine. I believe this is now fixed, hopefully giving us results that will indicate when regressions occur. As time goes by, we may want to craft some special Eideticker tests specifically for responsiveness (e.g. a site with heavy javascript background processing).
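The metric described above (time from the first injected event to the first observable frame change) boils down to a simple scan once the clocks are aligned. This is a sketch with made-up numbers, not Eideticker's actual code:

```java
public class ResponsivenessDemo {
    // Milliseconds from the first input event to the first frame whose
    // pixel difference from the previous frame exceeds the threshold.
    static long timeToResponse(long eventStartMs, long[] frameTimesMs,
                               int[] frameDiffs, int threshold) {
        for (int i = 0; i < frameDiffs.length; i++) {
            if (frameTimesMs[i] >= eventStartMs && frameDiffs[i] > threshold) {
                return frameTimesMs[i] - eventStartMs;
            }
        }
        return -1; // no observable response in the capture
    }

    public static void main(String[] args) {
        long[] times = { 0, 17, 33, 50, 67, 83 };   // capture timestamps (ms)
        int[] diffs  = { 0, 0, 2, 1, 4800, 9000 };  // pixels changed per frame
        // Event injected at t = 10ms; first big frame change is at t = 67ms.
        System.out.println(timeToResponse(10, times, diffs, 100)); // prints 57
    }
}
```

The threshold matters: a few pixels of difference can be video-capture noise, so "significant" has to mean well above that noise floor.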

[1] Incidentally, these “frame difference” graphs are also quite useful for understanding where and how application startup has regressed in Fennec — try opening these two startup views side-by-side (before/after a large regression) and spot the difference: [1] and [2])

October 08, 2013 12:36 AM

September 27, 2013

Margaret Leibovic

Home Page Snippets for Firefox for Android

Have you ever noticed the messages that appear under the search box on desktop Firefox’s home page? We call these messages snippets, and they’re dynamically created from web content hosted by Mozilla. These snippets provide an easy way to communicate with desktop users, but right now there’s no equivalent feature for Firefox for Android.

Since we first released the native version of Firefox for Android, we’ve included a hard-coded promotional banner as part of our home page — you’ve probably seen it encourage you to set up Firefox Sync. However, we’ve been working on a complete rewrite of the home page for Firefox 26 (exciting!), and we’re taking the opportunity to replace this with a new customizable banner.

Because this banner is part of our native Java UI, we can’t just load web content the way desktop does. Instead, we decided to build a JS API to let add-ons customize it. We came up with a simple API that lets developers add and remove items from a set of messages that rotates through this banner. Right now, these messages just consist of an icon, some text, and a click handler. We put this banner API in a new Home.jsm module, since we have plans to add more home page customization APIs to this module in the future!

As a demo, I made an add-on that fetches data from a basic snippets server, and turns that data into messages that are added to the home banner. This server currently serves data from the 5 most recent @FennecNightly tweets, which is actually pretty relevant to Nightly users. If you’re using Nightly, I’d encourage you to try it out, and as always, please file any bugs you find!

September 27, 2013 01:05 AM

September 25, 2013

Mark Finkle

Firefox for Android: Team Meetup, Brainstorming and Hacking

Last week, the Firefox for Android team, and some friends, had a team meetup at the Mozilla Toronto office. As is typical for Mozilla, the team is quite distributed so getting together, face to face, is refreshing. The agenda for the week was fairly simple: Brainstorm new feature ideas, discuss ways to make our workflow better, and provide some time for fun hacking.

We spent most of our time brainstorming, first at a high level, then we picked a few ideas/concepts to drill into. The high-level list ended up with over 150 ideas. These ranged from blue-sky features and building on existing features, to performance and UX improvements and removing technical debt. Some of the areas where we had deeper discussions included:

We also took some time to examine our workflow. We found some rough edges we intend to smooth out. We also ended up with a better understanding of our current, somewhat organic, workflow. Look for more write-ups from the team on this as we pull the information together. One technical outcome of the discussions was a critical examination of our automated testing situation. We decided that we depend entirely too much on Robotium for testing our Java UI and non-UI code. Plans are underway to add some JUnit test support for the non-UI code.

The Android team is very committed to working with contributors and has been doing a great job attracting and mentoring code contributors. Last week they started discussing how to attract other types of contributors, focusing on bug triage as the next possible area. Desktop Firefox has had some great bug triage support from the community, so it seems like a natural candidate. Look for more information about that effort coming soon.

There was also some time for hacking on code. Some of the hacking was pure fun stuff. I saw a twitterbot and an IRCbot created. There was also a lot of discussion and hacking on add-on APIs that provide more integration hooks for add-on developers into the Java UI. Of course there is usually a fire that needs to be put out, and we had one this time as well. The front-end team quickly pulled together to implement a late-breaking design change to the new Home page. It’s been baking on Nightly for a few days now and will start getting uplifted to Aurora by the end of the week.

All in all, it was a great week. I’m looking forward to seeing what happens next!

September 25, 2013 03:29 PM

Sriram Ramasubramanian

Yo Zuck, Fix This!

Having worked on Firefox for Android since the native re-write, and constantly improving its performance, I know the entire UI architecture of the app. While most optimizations had gone in before I developed Droid Inspector, the tool was still useful with the new about:home that we’ve been working on lately. On the other hand, I’ve always wanted to know if I was doing things the right way, and how the top applications in the Android market were developed. Though UiAutomator can show a tree hierarchy of any app, it isn’t as good as Droid Inspector. What if I could look at other good apps using Droid Inspector? What about Facebook through Droid Inspector’s eyes? And I did it! :D

Disclaimer: The following investigation is out of my own interest. The views and recommendations are my own and not my employer’s! Also, I never got a special copy of Facebook with my droidinspector-server.jar packed in it. When you can build AOSP and run it on a device, you can really investigate any app with Droid Inspector! Can’t you? ;)

Feed with 250 views

My initial impression when the tree structure loaded was: “OHHMG! Why does it have sooooo maaaany vieeeews??” The hierarchy had more than 250 views for what was seen on screen. Before we delve into the details, here are the good parts:

  1. They use Fragments, and properly remove the unused ones. For example, the left menu is removed when not needed.
  2. Each feed item has gone through an overdraw pass already. The light blue background of the app and the white background of the feed item (though they feel like two different entries) are drawn by a single Drawable. This removes the potential overdraw (of the white area blocking the blue background). Kudos!
  3. They have used compound drawables in a couple of places.
  4. Each entry in the list of feeds is called a RecycleViewWrapper. In their first native re-write blog post, they mentioned a better recycler for the list view items. This view probably recycles the bitmaps when not used. Nice!
  5. The list is gesture aware. So, if a user is scrolling the list, images loaded in the background are not added to the rows until the scrolling stops. This avoids a re-layout pass that might have happened while the user is still scrolling. Also, one need not cause a re-layout pass for a view that is going to be moved to the scrap heap in a few milliseconds. This is a really good approach. If you want to use a custom library, you could try Lucas Rocha‘s Smoothie, which does this job for free.
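Item 5 above, stripped of the Android framework specifics, is essentially a deferral queue keyed on scroll state. A rough sketch of the idea (the names and structure are mine, not Facebook's or Smoothie's):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class DeferredBinder {
    private final Queue<String> pending = new ArrayDeque<>();
    private final List<String> bound = new ArrayList<>();
    private boolean scrolling;

    // Called when the list starts or stops scrolling.
    void onScrollStateChanged(boolean isScrolling) {
        scrolling = isScrolling;
        if (!scrolling) {
            // Idle again: bind everything that arrived mid-scroll,
            // so the re-layout happens once, not once per image.
            while (!pending.isEmpty()) {
                bound.add(pending.poll());
            }
        }
    }

    // Called when a background image load finishes.
    void onImageLoaded(String url) {
        if (scrolling) {
            pending.add(url); // defer: no re-layout while the finger is down
        } else {
            bound.add(url);
        }
    }

    List<String> boundImages() {
        return bound;
    }
}
```

In a real list this would hang off the scroll listener, with the bind step setting the bitmap on the row view.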

Overview

Now the parts that could be optimized easily. As I mentioned earlier, each feed item has its own “compressed” background. That way one layer of overdraw is removed. But the page still shows a blue background. This isn’t actually needed, as the feed items cover the entire screen. Also, there is a black screen at the far back (pretty close to the root of the hierarchy). This is never seen (or probably shown as a part of touching the image). It could be hidden until needed. There is also a blue background at the top of the entire phone screen area. As per the hierarchy, this is called “RefreshableListViewOverflowItem”. Maybe the new entries are added here. But still, this needn’t be shown when there is nothing to show. Oh! We just removed two layers of overdraw in a jiffy. :D Now let’s start removing unwanted views.

Cover Image

Cover Images: Just like the web interface, all the images are http url links. Until these images are loaded, as we can see ourselves while scrolling, a nice blue color is shown. But then, there are a couple of problems. First, the blue image is not removed once the image is loaded. Overdraw? Definitely! The other problem is that the blue is not a background of an ImageView. It’s actually the background of a ViewGroup that holds the ImageView. The idea behind this approach is that it allows an image to fade in while the background is still seen. But then, can’t we have an alpha animation on a Drawable? Wouldn’t the “source” image then just fade in while the blue background still waits? That removes one layer from the view hierarchy. Oh wait! That removes many such “one layers” from the view hierarchy, as all ImageViews are implemented this way! Win!

Inline Cover Image

The cover photos shown inline in the feed, say for friends having a birthday, have a mild black overlay on them, to emphasize the profile photo. This works like the UrlImage above. The problem here is that the black overlay is a separate View added to the hierarchy. Ummm… Shaders? Right! Adding a ComposeShader with a LinearGradient of the overlay color (for both start and stop) and a BitmapShader of the actual image will draw this in one go. That removes (a) overdraw, and (b) one view in the hierarchy. Oh! We do that with our lightweight themes in Firefox! Also, there is an obscured white background for the profile image (actual profile image hidden in the screenshot for privacy). That… is another overdraw. What’s the probability of someone’s name running past the device width? (Not all have a long last name like mine: “Ramasubramanian”.) But still, the two TextViews (obscured for privacy) can be combined into one single TextView. The small change in text color/size of the sub text can be made a Spannable string. We just removed two views that way — the sub text’s view and the parent of both the views.

Horizontal Scroll View

Moving to the profile page. The horizontal scrollable list of photos, friends, map, books, apps, etc. is… not a list! Instead it’s a LinearLayout with each individual tile part of the hierarchy all the time. Basically this is not backed by an adapter. Now look closely. The photos tile in turn has 6 tiles in it. Each one is a UrlImage without the background removed. The easiest fix is for the server to combine all these images and send them as a single image. If that cannot be done, a custom view can be implemented where the six images are drawn to the canvas inside “one” draw() call. That means we just removed 12 views from the hierarchy (and the 6 overdraws caused by the background). No, we shouldn’t stop there! :D There is a small black filter on top of these UrlImage views. This isn’t a simple color filter like the cover photo above, but a shader can still do the job efficiently. Each tile has two FrameLayouts at the minimum. They are redundant and not even needed. That’s like 20 unnecessary views added as part of 10 tiles. This is all sandwiched inside a LinearLayout. Now if all these images can be drawn as part of a single Drawable or an image, this can easily be made a compound drawable for the TextView. We just reduced 5 layers for each tile to a single view — and there are 10-12 such tiles. That was a massacre of 60 views!

Details

Look up above a bit. The unordered list of details about a person has a small icon and a text describing each entry. Unfortunately each image is again a UrlImage. But aren’t these pre-determined images? Can’t they be shipped with the APK itself? Also, why aren’t they compound drawables? What would generally take one single view is using four views here!

Feed Item

If you are still with me, let’s move on to the major part — the feed item! This is the integral part of Facebook and its app. This has to be the “most optimized” of all. To give a comparison, the search results list in the Firefox app has to be the most optimized and allow smooooooth scrolling. Unfortunately each feed item feels bloated. The entire hierarchy of one feed item couldn’t fit on the side screen in the screenshot. The background of the item is optimized — nice! But the drawable doesn’t define a proper drawable padding. What could be merely tweaked by adding a padding to the background is done by adding two Views of height 8dp at the top and bottom. The UrlImage (again) adds a background. In this case, a small shadow is shown for the ImageView from this background. “Mmmm… a 9-png-drawable as a background for the ImageView? Won’t that work?” is probably what you are thinking. And that would work just fine in this case. Each button in the bottom bar shows an image and a text view. The reason they weren’t made compound drawables is that the like button has a nice animation. But then, the drawable in a compound drawable can have animations too. That would probably make each of them a single TextView. The “FeedbackActionButtonBar” holding the three buttons draws a pale white background. Since each post will have this bar, it could be merged with the optimized background. That would save one more layer of overdraw.

But the biggest problem in the feed is that, to make use of list view recycling, the Facebook app adds all possible things to each feed item. This causes the hierarchy to bloat. The better option is to classify the different item layouts and load different row item types. If that’s hard, the simple solution would be to have ViewStubs until some piece actually needs to be inflated. This reduces the layout pass on each row.

After all these simpler fixes (esp. the UrlImage view problem), the Facebook app could run at least 50% faster. There are even more things that can be fixed by just looking at the Droid Inspector output. As my intention is to show how apps can use Droid Inspector to optimize their UI, and not to tarnish Facebook’s image, I’ll stop here. :D I hope Facebook will act upon this at the earliest and fix all these issues, as they have more than 500M users!

P.S.: I know it’s a bit hard to look through the images and understand them easily. I’ve put up a similar HTML capture (needs WebGL and CSS Flex box) here and there for you to try.


September 25, 2013 08:56 AM

September 01, 2013

Chris Lord

Sabbatical

As of Friday night, I am now on a two month unpaid leave. There are a few reasons I want to do this. It’s getting towards the 3-year point at Mozilla, and that’s usually the sort of time I get itchy feet to try something new. I also think I may have been getting a bit close to burn-out, which is obviously no good. I love my job at Mozilla and I think they’ve spoiled me too much for me to easily work elsewhere even if that wasn’t the case, so that’s one reason to take an extended break.

I still think Mozilla is a great place to work, where there are always opportunities to learn, to expand your horizons and to meet new people. An unfortunate consequence of that, though, is that I think it’s also quite high-stress. Not the kind of obvious stress you get from tight deadlines and other external pressures, but a more subtle, internal stress that you get from constantly striving to keep up and be the best you can be. Mozilla’s big enough now that it’s not uncommon to see people leave, but it does seem that a disproportionate amount of them cite stress or needing time to deal with life issues as part of the reason for moving on. Maybe we need to get better at recognising that, or at encouraging people to take more personal time?

Another reason though, and the primary reason, is that I want to spend some serious time working on creating a game. Those who know me know that I’m quite an avid gamer, and I’ve always had an interest in games development (I even spoke about it at Guadec some years back). Pre-employment, a lot of my spare time was spent developing games. Mostly embarrassingly poor efforts when I view them now, but it’s something I used to be quite passionate about. At some point, I think I decided that I preferred app development to games development, and went down that route. Given that I haven’t really been doing app development since joining Mozilla, it feels like a good time to revisit games development. If you’re interested in hearing about that, you may want to follow this Twitter account. We’ve started already, and I like to think that what we have planned, though very highly influenced by existing games, provides some fun, original twists. Let’s see how this goes :)

September 01, 2013 11:50 AM

August 30, 2013

Geoff Brown

Firefox for Android Performance Measures – August check-up

The August 2013 summary of performance measures for Firefox for Android.

There are quite a few regressions this month, in several Talos tests, time to throbber stop, and some Eideticker tests.

Talos

This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Native Fennec (Android 2.2 opt). The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.

tcheck2

Measure of “checkerboarding” during simulation of real user interaction with a page. Lower values are better.

Image

6.5 (start of month) – 7.5 (end of month).

Regression on Aug 9 — see bug 903482.

trobopan

Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.

Image

63000 (start of month) – 1600000 (end of month).

Regression on Aug 22 — see bug 908823.
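For reference, one plausible reading of the trobopan formula described above (summing the squares of frame delays that exceed 25 ms) looks like this; the exact Talos implementation may differ, and the delay values here are made up:

```java
public class PanMetricDemo {
    // Sum of squared frame delays above 25 ms: long janks are
    // penalized quadratically, smooth frames contribute nothing.
    static long panPenalty(long[] frameDelaysMs) {
        long total = 0;
        for (long delay : frameDelaysMs) {
            if (delay > 25) {
                total += delay * delay;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // 16ms frames are smooth; the 40ms and 120ms frames dominate.
        System.out.println(panPenalty(new long[] { 16, 16, 40, 120 }));
        // prints 16000 (40*40 + 120*120)
    }
}
```

The quadratic weighting explains why this metric can jump by an order of magnitude, as it did this month, when a few long pauses appear.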

tprovider

Performance of the history and bookmarks provider. Reports the time (ms) to perform a group of database operations. Lower values are better.

375 (start of month) – 375 (end of month).

tsvg_nochrome

Page load test for svg. Lower values are better.

4000 (start of month) – 4070 (end of month).

Minor regression Aug 23 – bug 908810.

tp4m_nochrome

Generic page load test. Lower values are better.

590 (start of month) – 650 (end of month).

No specific regression identified.

tp4m_main_rss_nochrome

Image

131000000 (start of month) – 138000000 (end of month).

Gradual change + regression of Aug 21 – bug 908747.

tp4m_shutdown_nochrome

25000000 (start of month) – 25000000 (end of month).

ts

Startup performance test. Lower values are better.

3800 (start of month) – 4300 (end of month).

No specific regression identified.

ts_shutdown

Shutdown performance test. Lower values are better.

25000000 (start of month) – 25000000 (end of month).

Throbber Start / Throbber Stop

These graphs are taken from http://phonedash.mozilla.org.  Browser startup performance is measured on real phones (a variety of popular devices).

“Time to throbber start” measures the time from process launch to the start of the throbber animation. Smaller values are better.

Image

Time to throbber start was pretty stable throughout the month.

“Time to throbber stop” measures the time from process launch to the end of the throbber animation. Smaller values are better.

Image

There is little change for some devices, but significant regressions for other devices. See bug 904088 and bug 909380.

Eideticker

These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.

More info at: https://wiki.mozilla.org/Project_Eideticker

Most of the eideticker graphs are quite stable for August, but there are a couple of regressions.

Image

Image

awsy

See https://www.areweslimyet.com/mobile/ for content and background information.

Most awsy measurements are trending down (improving) this month — good to see!

Image

Image

Image


August 30, 2013 07:52 PM

August 26, 2013

Sriram Ramasubramanian

Droid Inspector

The concept of overdraw has been given so much importance in the Android world. During Google I/O, a few talks emphasized the importance of reducing overdraw. Yet there is not a single tool that can help us identify overdraw easily. We at Mozilla worked on a new interface for about:home for the past few months. That meant all my work on reducing overdraw in the previous version of about:home would go unused going forward. A tool that is as simple to use as taking a screenshot can reduce the engineering effort in such cases.

The standard Android tools like UiAutomatorViewer and HierarchyViewer are good. However, they show the UI in a flat mode. It requires some sort of photographic memory to map the different views in the hierarchy to see if something would cause overdraw. Is there an easier and faster way to identify overdraw issues?

Droid Inspector

Introducing Droid Inspector! Taking cues from Firebug and Web Inspector, I’ve been working on this tool for the past few months. Unlike UiAutomatorViewer, this tool shows a 3D view of the app by capturing the bitmap of individual Views. This helps visualize the app easily. Also, the background and the content layers — on a TextView the text is the content, while on an ImageView the image is the content — are shown separately. Remember Romain Guy’s example app causing overdraw issues because the background was not cleared once the image got loaded? Tracking such issues is so much easier. Additionally, Droid Inspector shows a box model to help make apps pixel perfect. By switching between 3D and 2D modes one can see the perspective view of the app and its overdraw with a single click. Click on individual properties to see them in the hierarchy and vice versa. One can also show/hide either the background or the content to see how it affects the overdraw. Droid Inspector is available in two flavors: an Eclipse plugin that integrates with DDMS, and a web-based plugin. Give it a spin and let me know how much it helps.


August 26, 2013 05:35 PM