Firefox Test Pilot is an approach to experimenting with new browser features in the open: we plan, design, build, and evaluate experimental features with as much transparency as possible, in line with Mozilla’s mission and values. In this second year of Test Pilot, we’re making a concerted effort to make our work even more public by sharing all of the user research we conduct on our experiments.
What we already do in the open
From the beginning, Test Pilot has strived to work in the open. For example, the majority of our meetings are open to the public. We publish our documentation in places like the Mozilla Wiki and GitHub. The latter has also been a place where people trying our experiments can suggest and vote on enhancements.
People using our Tab Center experiment reached out to us on Twitter when they heard that the experiment would no longer be supported starting with Firefox 56.
Whenever an experiment graduates, we publish — for anyone to see — what we learned from that experiment, including the metrics that informed those learnings.
Transparency beyond code
Even with these efforts, we realize that, historically, transparency in open source development has largely focused on code and the conversations that happen around that code. What that focus has neglected are other important parts of our process for creating the products we ship, including user research.
Test Pilot user research has not been shared widely in the past because of the extensive personally identifiable information (PII) contained in our qualitative research. In most cases, we have not removed PII from our research findings because, for the Test Pilot team, those details are often critical for helping us build greater empathy for people using our experiments and for internalizing insights from the research we conduct. For example, a grimace and silence captured on video from a research participant trying to use one of our experiments is generally much more affecting and memorable than a written, anonymized quote from a research participant about how an experiment was confusing.
The benefits of making user research more open
We will continue to leverage user research containing PII internally when the Test Pilot team can learn from it. However, we realize that there are benefits that we can reap by making our user research more shareable beyond our team. With a new commitment to sharing our user research in the form of blog posts here on Medium, links to our research reports, and embedding insights from user research in public conversations happening in other channels like Github, we aim to:
Possibly expand who can contribute to our work to include people who will share ideas and feedback on our research approaches
Increase accountability around how we leverage user research findings to inform the design and development of our experiments
Share ideas about open research
We welcome additional ideas about how we can make our user research more open while maintaining its rigor and integrity. If there are open research initiatives that you think we can learn from, let us know. We look forward to hearing your thoughts on the research we share moving forward.
Originally published at medium.com on July 26, 2017.
Last year, the Firefox User Research team conducted a series of formative research projects studying multi-device task continuity. While these previous studies broadly investigated types of task flows and strategies for continuity across devices, they did not focus on the functionality, usability, or user goals behind these specific workflows.
For most users, interaction with browsers can be viewed as a series of specific, repeatable workflows. Within the idea of a “workflow” is the theory of “flow.” Flow has been defined as:
a state of mind experienced by people who are deeply involved in an activity. For example, sometimes while surfing the Net, people become so focused on their pursuit that they lose track of time and temporarily forget about their surroundings and usual concerns…Flow has been described as an intrinsically enjoyable experience.¹
As new features and service integrations are introduced to existing products, there is a risk that unarticulated assumptions about usage context and user mental models could create obstacles for our users. Our goal for this research was to identify these obstacles and gain a detailed understanding of the behaviors, motivations, and strategies behind current browser-based user workflows and related device or app-based workflows. These insights will help us develop products, services, and features for our users.
Primary Research Questions
How can we understand users’ current behaviors to develop new workflows within the browser?
How do workflows & “flow” states differ between and among different devices?
In which current browser workflows do users encounter obstacles? What are these obstacles?
Are there types of workflows for specific types of users and their goals? What are they?
How are users’ unmet workflow needs being met outside of the browser? And how might we meet those needs in the browser?
In order to understand users’ workflows, we employed a three-part, mixed method approach.
The first phase of our study was a twenty-question survey deployed to 1,000 respondents in Germany provided by SSI’s standard international general population panel. We asked participants to select the Internet activities they had engaged in during the previous week. Participants were also asked questions about their browser usage on multiple devices as well as their perceptions of privacy. We modeled this survey on Pew Research Center’s “The Internet and Daily Life” study.
In the second phase, a separate group of 26 German participants was recruited from four major German cities: Cologne, Hamburg, Munich, and Leipzig. These participants represented a diverse range of demographic groups, and half of the participants used Firefox as their primary browser on at least one of their devices. Participants were asked to download a mobile app called Paco. Paco cued participants up to seven times daily, asking them about their current Internet activity, the context for it, and their mental state while completing it.
In the final phase of the study, we selected 11 of the participants from the Experience Sampling segment from Hamburg, Munich, and Leipzig. Over the course of three weeks, we visited these participants in their homes and conducted 90-minute interview and observation sessions. Based on the survey results and experience sampling observations, we explored a small set of participants’ workflows in detail.
Field Team Participation
The Firefox User Research team believes it is important to involve a wide variety of staff members in the experience of in-context research and analysis activities. Members of the Firefox product management and UX design teams accompanied the research team for these in-home interviews in Germany. After the interviews, the whole team met in Toronto for a week to absorb and analyze the data collected from the three segments. The results presented here are based on the analysis provided by the team.
Based on our research, we define a workflow as a habitual, frequently employed set of discrete steps that users build into a larger activity. Users employ the tools they have at hand (e.g., tabs, bookmarks, screenshots) to achieve a goal. Workflows can also span across multiple devices, range from simple to technically sophisticated, exist across noncontinuous durations of time, and contain multiple decisions within them.
We observed that workflows appear to be naturally constructed actions to participants. Their workflows were so unconscious or self-evident that participants often found it challenging to articulate and reconstruct them. Examples of workflows include: Comparison shopping, checking email, checking news updates, and sharing an image with someone else.
Based on our study, we have developed a general two-part model to illustrate a workflow.
Part 1: Workflows are constructed from discrete steps. These steps are atomic and include actions like typing in a URL, pressing a button, taking a screenshot, sending a text message, saving a bookmark, etc. We mean “atomic” in the sense that the steps are simple, irreducible actions in the browser or other software tools. When employed alone, these actions can achieve a simple result (e.g. creating a bookmark). Users build up the atomic actions into larger actions that constitute a workflow.
Part 2: Outside factors can influence the choices users make for both a whole workflow and steps within a workflow. These factors include software components, physical components, and psycho/social/cultural factors.
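As an illustration only (not part of the research itself), the two-part model above could be sketched as a small data structure, where a workflow composes atomic steps and records the outside factors that shape it. All names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Step:
    """An atomic, irreducible action (e.g. open a tab, save a bookmark)."""
    name: str

@dataclass
class Workflow:
    """A habitual set of discrete steps built into a larger activity (Part 1),
    shaped by outside factors (Part 2)."""
    goal: str
    steps: list[Step] = field(default_factory=list)
    software_factors: list[str] = field(default_factory=list)
    physical_factors: list[str] = field(default_factory=list)
    psycho_social_factors: list[str] = field(default_factory=list)

# Example: comparison shopping, one of the workflows observed in the study
shopping = Workflow(
    goal="comparison shopping",
    steps=[Step("type URL"), Step("open tab"), Step("open tab"),
           Step("take screenshot")],
    software_factors=["tabs", "bookmarks"],
    physical_factors=["device form factor", "network availability"],
    psycho_social_factors=["memory"],
)
print(len(shopping.steps))  # → 4
```

The point of the sketch is that the steps stay atomic and reusable, while the factors are attached to the workflow as a whole rather than to any single step.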
Factors Influencing Workflows
While workflows are composed from atomic building blocks of tools, there is a great deal more that influences their construction and adoption among users.
Software components are features of the operating system, the browser, and the specs of web technology that allow users to complete small atomic tasks. Some software components also constrain users into limited tasks or are obstacles to some workflows.
The basic building blocks of the browser are the features, tools, and preferences that allow users to complete tasks with the browser. Some examples include: Tabs, bookmarks, screenshots, authentication, and notifications.
Physical components are the devices and technology infrastructure that inform how users interact with software and the Internet. These components employ software but it is users’ physical interaction with them that makes these factors distinct. Some examples include: Access to the internet, network availability, and device form factors.
Psycho/Social/Cultural influences are contextual, social, and cognitive factors that affect users’ approaches to and decisions about their workflows.
Memory: Participants used memory to fill in gaps in their workflows where technology does not support persistence. For example, when comparison shopping, a user with multiple tabs open to compare prices relies on memory to keep in mind the prices for the same item in the other tabs.
Control: Participants exercised control over the role of technology in their lives either actively or passively. For example, some participants believed that they received too many notifications from apps and services, and often did not understand how to change these settings. This experience eroded their sense of control over their technology and forced these participants to develop alternate strategies for regaining control over these interruptions. For others, notifications were seen as a benefit. For example, one of our Leipzig participants used home automation tools and their associated notifications on his mobile devices to give him more control over his home environment.
Other examples of psycho/social/cultural factors we observed included: Work/personal divides, identity management, fashion trends in technology adoption, and privacy concerns.
Using the Workflows Model
When analyzing current user workflows, the parts of the model should be cues to examine how the workflow is constructed and what factors influence its construction. When building new features, it can be helpful to ask the following questions to determine viability:
Are the steps we are creating truly atomic and usable in multiple workflows?
Are we supplying software components that give flexibility to a workflow?
What effect will physical factors have on the atomic components in the workflow?
How do psycho-social-cultural factors influence users’ choices about the components they are using in the workflow?
Design Principles & Recommendations
New features should be atomic elements, not complete user workflows.
Don’t be prescriptive, but facilitate efficiency.
Give users the tools to build their own workflows.
While software and physical components are important, psycho/social/cultural factors are equally important and influential over users’ workflow decisions.
Make it easy for users to actively control notifications and other flow disruptors.
Leverage online content to support and improve offline experiences.
Help users bridge the gap between primary-device workflows and secondary devices.
Make it easy for users to manage a variety of identities across various devices and services.
Help users manage memory gaps related to revisiting and curating saved content.
Future Research Phases
The Firefox User Research team conducted additional phases of this research in Canada, the United States, Japan, and Vietnam. Check back for updates on our work.
¹ Pace, S. (2004). A grounded theory of the flow experiences of Web users. International journal of human-computer studies, 60(3), 327–363.
As our Firefox UX team has grown from about 15 to 45 people, the products we build receive more design attention. Designers can now continue to focus on their product after the initial design is finished, instead of having to move on to the next project. This is great, as it helps us improve our products step by step. But it also takes increasing effort to keep this growing team in sync and able to answer all the questions posed to us in a timely manner.
For engineers and new designers especially, it is often difficult to get timely answers to simple questions. Those answers are often in the original spec, which is too often hard to locate. Or worse, they may be in the mind of a designer who has since left, or who receives too many questions to respond promptly.
In a survey we ran in early 2017, developers reported feeling that they
spend too much time identifying the right specs to build from,
spend too much time waiting for feedback from designers, and
spend too much time mapping new designs to existing UI elements.
In the same survey, designers reported feeling that they
spend too much time identifying current UI to re-use in their designs, and
spend too much time re-building current UI to use in their designs.
All those repetitive tasks people feel they spend too much time on ultimately keep us from tackling newer and bigger challenges. So, actually, let‘s not spend our time on those.
Let’s help people spend time on what they love to do.
Let’s build tools that help developers know what a given UI should look like, without them needing to wait for feedback from designers. And let’s use that system for designers to identify UI we already built, and to learn how they can re-use it.
We are happy to receive feedback and contributions on the current content of the system, as well as on what content to add next.
Photon Design System
Based on what we learned from people, we are building our design system to help people:
find what they are looking for easily,
understand the context of that quickly, and
more deeply understand Firefox Design.
Currently the Photon Design System covers fundamental design elements like icons, colors, typography, and copywriting, as well as our design principles and guidelines on how to design for scale. Defining those has already helped designers better align across products and features, and developers have a definitive source to fall back on when a design does not specify a color, icon, or other element.
With all the design fundamentals in place we are starting to combine them into defined components that can easily be reused to create consistent Firefox UI across all platforms, from mobile to desktop, and from web-based to native. This will add value for people working on Firefox products, as well as help people working on extensions for Firefox.
If you are working on Firefox UI
We would love to learn from you what principles, patterns & components your team’s work touches, and what you feel is worth documenting for others to learn from, and use in their UI.
Like many organizations, Mozilla Firefox has been experimenting with the Google Ventures Design Sprint method as one way to quickly align teams and explore product ideas. Last fall I had the opportunity to facilitate a design sprint for our New Mobile Experiences team to explore new ways of connecting people to the mobile web. Our team had a productive week and you can read more about our experience in this post on the Sprint Stories website.
What is the most important thing to do in preparing for a Sprint?
“The most important thing is planning, planning, and then more planning. I want to emphasize the fact that there are unknowns during a Design Sprint, so you have to even plan for those. And what that looks like is the agenda, how you want to pace the activities, and who you want in the room.” — Ratna Desai, Google UX Lead
When I was planning my first Design Sprint as a facilitator, I was grateful for the detailed daily outlines in the GV Library, as well as the Monday morning kickoff presentation. But, given the popularity of Design Sprints, I was surprised that I was unable to find (at the time at least) examples of daily presentations that I could repurpose to keep us on track. There is a lot to cover in a 5-day Design Sprint. It’s a challenge to even remember what is coming next, let alone keep everything timely and orderly. I relied heavily on these Keynote slides to pace our activities, and I hope that they can serve as a resource for other first-time Design Sprint facilitators by cutting down on some of your planning time.
These slides closely follow the basic approach described in the Sprint book, but could be easily modified to suit your specific needs. They also include some presenter notes to help you describe each activity. Shout out to Unsplash for the amazing photos.
10:00 — Introductions. Sprint overview. Monday overview.
10:15 — Set a long term goal. List sprint questions.
11:15 — Break
11:30 — Make a map
12:30 — Expert questions
1:00 — Lunch
2:00 — Ask the experts. Update the map. Make “How might we” notes.
4:00 — Break
4:15 — Organize the HMW notes. Vote on HMW notes.
4:35 — Pick a target
We launched Focus as a free content blocker for Safari on iOS back in December 2015. The idea was to bring control back into users’ hands. We wanted to let them dictate how they wanted to spend their browsing data, even if they weren’t using Firefox.
About a year later, we wanted to give the product an update. We wanted to experiment more on Mobile. But how?
What if we could give mobile users an app that only allowed private browsing?
Our team was very small. So we needed to set expectations early — who’s responsible for what?
We tried to stay focused (heh) on a small set of well-defined goals. Everyone had a part to play. A clearly defined timeline helped us avoid scope creep too. It set the tone for our discussions and made it a lot easier to prioritize issues.
We had weekly check-ins to make sure everyone was on the same page. All of our meetings were clearly directed and to the point. As things came up they would also be filed as GitHub issues so we always had a way to track our discussions.
It’s important to remember that getting 100% consensus is pretty much impossible. But “moving together” doesn’t necessarily mean everyone has to think the same. It’s natural to have doubts. Trust each other and try to move forward as a team. Define what success looks like together. Then go out, and gather some data.
Share early and often
Even if you aren’t finished, design comps/mock ups can really help provide a shared vision. Try to share them more! I’m guilty too sometimes. I know it can be hard. But I’ve always been glad that I did.
Where possible, we made all design changes “on device” using Buddybuild. This helped us arrive at decisions quicker, and with more confidence. It was great for design and engineering issues but it also kept the entire team in sync (especially useful when working across offices and timezones). Basically, it was our way of sharing early and often.
Then there was Slack. Lots of it. Our team was small, but distributed. Slack made it easy for anyone to give feedback, report/resolve GitHub issues, or even just give an encouraging 👍.
Sharing early might have pitfalls too. But it can get everyone involved and excited about the final product. Visual mock-ups create a common language and help avoid miscommunication later on. Share the vision, spread the workload. Insightful feedback can uncover interesting opportunities, but don’t let criticism stall you either.
Originally, Focus was a slightly different product. I didn’t work on the first version myself (that was mostly Darrin) but Brian (one of our engineers) did.
As designers, we should be opinionated but we also need to be flexible. Whatever decision we arrive at right now might not be ideal, but maybe that’s OK. Who knows, maybe sometimes we don’t have the right answer. Remember, it’s iterative!
Brian and I shared the same timezone but not the same office. So we spent a fair amount of time on Vidyo. Being on a video call over a long period of time can be really tiring. But it helped us hammer out the details quickly. When something didn’t make sense, we didn’t have to wait for the next weekly meeting to chat.
This is probably terrible of me, but I actually didn’t have a “full design spec” until towards the end of the project.
Unfortunately, there isn’t a one-size-fits-all type of solution here. For us, Vidyo and Slack made the most sense but there were still drawbacks. Everyone’s different. Every team is different. Find a solution that works for you and your team. The key is to be flexible, to adapt.
There are always more opportunities to learn, and to grow. Within our products and ourselves as practitioners too. In the end, I really think it’s a constant process.
If you’re interested in contributing, here are the GitHub repos for Firefox Focus on iOS, and Android. Currently, we’re working towards the initial release of Firefox Focus on Android. Send any feedback, questions and comments you might have!
My name is Philip Walmsley, and I am a Senior Visual Designer on the Firefox UX team. I am also one of the people tasked with making addons.mozilla.org (or, “AMO”) a great place to list and find Firefox extensions and themes.
There are a lot of changes happening in the Firefox and Add-ons ecosystem this year (Quantum, Photon, Web Extensions, etc.), and one of them is a visual and functional redesign of AMO. This has been a long time coming! The internet has progressed in leaps and bounds since our little site was launched many years ago, and it’s time to give it some love. We’ve currently got a top-to-bottom redesign in the works, with the goal of making add-ons more accessible to more users.
I’m here to talk with you about one part of the add-ons experience: ratings and reviews. We have found a few issues with our existing approach:
Some users just want to leave a rating and not write a review. Sometimes this is referred to as “blank page syndrome,” sometimes a user is just in a time-crunch, sometimes a user might have accessibility issues. Forcing users to do both leads to glib, unhelpful, and vague reviews.
On that note, what if there was a better way to get reviews from users that may not speak your native tongue? What if instead of writing a review, a user had the option to select tags or qualities describing their experience with an add-on? This would greatly benefit devs (‘80% of the global community think my extension is “Easy to use”!’) and other users (‘80% of the global community believe this extension is “Easy to use”!’).
We don’t do a very good job of triaging users’ actual issues: A user might love an extension but have an (unbeknownst to them) easily solved technical problem. Instead of leaving a negative 1-star review for this extension that keeps acting weird, can we guide that user to the developer or Mozilla support?
We also don’t do a great job of facilitating developer/user communication within AMO. Wouldn’t it be great if you could rectify a user’s issue from within the reviews section on your extension page, changing a negative rating to a positive one?
As you can see, we’ve got quite a few issues here. Let’s simplify and tackle them one by one: Experience, Tags, Triage.
The star rating has its place. It is very useful in systems where the rating you leave is relevant to you and you alone. Your music library, for example: you know why you rate one song at two stars and another at four. It is a very personal but very arbitrary way of rating something. Unfortunately, this rating system doesn’t scale well when more than one person is reviewing the same thing: If I love something but rate it two stars because it lacks a particular feature, what does that mean to other users or the overall aggregated rating? It drags down the review of a great add-on, and as other users scan reviews and see 2-stars, they might leave and try to find something else. Not great.
What if instead of stars, we used emotions?
Some of you might have seen these in airports or restrooms. It is a straightforward and fast way for a group of people to indicate “Yep, this restroom is sparkling and well-stocked, great experience.” Or “Someone needs to get in here with a mop, PRONTO.” It changes throughout the day, and an attendant can address issues as they arise. Or, through regular maintenance, they can achieve a happy face rating all day.
What if we applied this method to add-ons? What if the first thing we asked a user once they had used an extension for a day or so was: “How are you enjoying this extension?” and presented them with three faces: Grinning, Meh, and Sad. At a very high level, this gives users and developers a clear, overall impression of how people feel about using this add-on (“90% grinning face for this extension? People must like it, let’s give it a try.”).
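To make the “overall impression” above concrete, here is a minimal sketch of how such emotion ratings might be aggregated into the percentages mentioned. The rating labels and function name are illustrative assumptions, not an actual AMO implementation:

```python
from collections import Counter

EMOTIONS = ("grinning", "meh", "sad")

def aggregate(ratings):
    """Return the share of each emotion as a percentage of all ratings."""
    counts = Counter(ratings)
    total = sum(counts.values()) or 1  # avoid division by zero
    return {e: round(100 * counts[e] / total) for e in EMOTIONS}

# Nine happy users and one unhappy one: "90% grinning face"
print(aggregate(["grinning"] * 9 + ["sad"]))
# → {'grinning': 90, 'meh': 0, 'sad': 10}
```

A per-emotion breakdown like this keeps the signal interpretable at a glance, which is the property the star average loses.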
So! A user has contributed some useful rating data, which is awesome. At this point, they can leave the flow and continue on their merry way, or we can prompt them to quickly leave a few more bits of even MORE useful review data…
Writing a review is hard. Let me rephrase that: Writing a good review is hard. It’s easy to fire off something saying “This add-on is just ok.” It’s hard to write a review explaining in detail why the add-on is “just ok.” Some (read: most) users don’t want to write a detailed review, for many reasons: time, interest, accessibility, etc. What if we provided a way for these users to give feedback in a quick and straightforward way? What if, instead of staring down a blank text field, we displayed a series of tags or descriptors based on the emotion rating the user just gave?
For example, I just clicked a smiling face to review an extension I’m enjoying. Right after that, a grid of tags with associated icons pops up. Words like “fast”, “stable”, “easy to use”, “well-designed”, “fun”, etc. I liked the speed of this extension, so I click “fast” and “stable” and submit my options. And success: I have submitted two more pieces of data that are useful to devs and users. Developers can find out what users like about their add-on, and users can see what other users are thinking before committing to downloading. We can pop up different tags based on the emotion selected: if a user taps Meh or Sad, we can pop up tags to find out why the user selected that initially. The result is actionable review data that can be translated across all languages spoken by our users! Pretty cool.
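As a rough sketch of the emotion-dependent prompt described above (the tag lists and names here are illustrative assumptions, not the final AMO design), the tag grid could be a simple lookup keyed by the emotion the user just selected:

```python
# Hypothetical tag sets keyed by the emotion rating the user just gave.
TAGS_BY_EMOTION = {
    "grinning": ["fast", "stable", "easy to use", "well-designed", "fun"],
    "meh": ["confusing", "slow", "missing features", "hard to configure"],
    "sad": ["crashes", "doesn't work", "slows down my browser"],
}

def tags_for(emotion):
    """Return the tags to display after the user picks an emotion rating."""
    return TAGS_BY_EMOTION.get(emotion, [])

# A happy user sees positive descriptors to choose from
print(tags_for("grinning")[:2])  # → ['fast', 'stable']
```

Because the tags are fixed vocabulary rather than free text, each selection can be localized and counted directly.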
Finally, we reach triage. Once a user submits tag review data, we can present them with a few more options. If a user is happy with this extension and wants to contribute even more, we can present them with an opportunity to write a review, or share it with friends, or contact the developer personally to give them kudos. If a user selected Meh, we could suggest reading some developer-provided documentation, contacting support, or writing a review. If the user selected Sad, we’d show them developer or Mozilla support, extension documentation, file a bug/issue, or write a review. That way we can make sure a user gets the help they need, and we can avoid unnecessary poor reviews. All of these options will also be available on the add-on page as well, so a user always has access to these different actions. If a user leaves a review expressing frustration with an add-on, devs will be able to reply to the review in-line, so other users can see issues being addressed. Once a dev has responded, we will ask the user if this has solved their problem and if they’d like to update their review.
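The triage branching described above could likewise be sketched as a mapping from the selected emotion to follow-up options. The option names are illustrative, not the final AMO design:

```python
# Hypothetical mapping from a user's emotion rating to follow-up actions,
# mirroring the triage flow described above.
TRIAGE_OPTIONS = {
    "grinning": ["write a review", "share with friends",
                 "contact the developer to give kudos"],
    "meh": ["read developer documentation", "contact support",
            "write a review"],
    "sad": ["developer or Mozilla support", "extension documentation",
            "file a bug/issue", "write a review"],
}

def triage(emotion):
    """Return the follow-up options to present for a given emotion rating."""
    return TRIAGE_OPTIONS.get(emotion, ["write a review"])

print(triage("sad")[0])  # → developer or Mozilla support
```

Routing unhappy users toward support before the review box is what turns an easily solved technical problem into a support ticket instead of a 1-star review.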
We’ve covered a lot here! Keep in mind that this is still in the early proposal stage and things will change. And that’s good; we want to change this for the better. Is there anything we’ve missed? Other ideas? What’s good about our current rating and review flow? What’s bad? We’d love constructive feedback from AMO users, extension developers, and theme artists.
Please visit this Discourse post to continue the discussion, and thanks for reading!
Philip (@pwalm) Senior Visual Designer, Firefox UX