Performance/Fenix
This page describes some basics of Fenix performance. For an in-depth look at some specific topics, see:
- Best Practices for tips to write performant code
- Getting Started for comparison of profiling and benchmarking tools
Performance testing
Performance tests can have a goal of preventing regressions, measuring absolute performance as experienced by users, or measuring performance against a baseline (e.g. comparing fenix to fennec). It is difficult for a single test to address all of these goals, so we tend to focus on preventing regressions.
List of tests running in fenix
The perftest team is working to dynamically generate the list of tests that run on the fenix application. Some progress can be seen in this query and this treeherder page. Until then, we manually list the tests below.
As of Feb. 23, 2021, we run at least the following performance tests on fenix:
- Page load duration: see the query above for a list of sites (sometimes run in automation, sometimes run manually; todo: details)
- Media playback tests (TODO: details; in the query above, they are prefixed with ytp)
- Start up duration (see Terminology for start up type definitions)
  - COLD VIEW tests on mach perftest. These run once per merge to fenix's master branch, on unreleased Nightly builds, so we can identify the commit that caused a regression
  - COLD MAIN & VIEW tests on FNPRMS. These run nightly against production Nightly builds and are being transitioned out in favor of mach perftest
- Speedometer: JS responsiveness tests (todo: details)
- Tier 3 Unity WebGL tests (todo: details)
There are other tests that run on desktop and cover other parts of the platform. We also have other methodologies to check for excessive resource use, including lint rules and UI tests that measure things such as view hierarchy depth and number of inflations (see "Preventing regressions automatically" below).
Notable gaps in our test coverage include:
- Duration testing for front-end UI flows such as the search experience
- Testing on non-Nightly builds (does this apply outside of start up?)
Preventing regressions automatically
We use the following measures:
- Crash on main thread IO in debug builds using StrictMode (code); see the sketch after this list
- Use our StartupExcessiveResourceUseTest, for which we are Code Owners, to:
  - Avoid StrictMode suppressions
  - Avoid runBlocking calls
  - Avoid additional component initialization
  - Avoid increasing the view hierarchy depth
  - Avoid having ConstraintLayout as a RecyclerView child
  - Avoid increasing the number of inflations
- Use lint to avoid multiple ConstraintLayouts in the same file (code)
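As a rough illustration of the first measure, the following Kotlin sketch shows how StrictMode can be configured to crash on main-thread disk and network IO in debug builds. It is a simplified example, not Fenix's exact policy, and the function name is an assumption:

```kotlin
import android.os.StrictMode

// Simplified sketch (not Fenix's exact setup): make accidental main-thread IO
// fatal in debug builds so regressions are caught during development.
fun enableStrictModeIfDebug(isDebugBuild: Boolean) {
    if (!isDebugBuild) return
    StrictMode.setThreadPolicy(
        StrictMode.ThreadPolicy.Builder()
            .detectDiskReads()
            .detectDiskWrites()
            .detectNetwork()
            .penaltyDeath() // crash instead of only logging the violation
            .build()
    )
}
```

A policy like this is typically installed early, e.g. in Application.onCreate(), so that violations during start up are caught as well.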
Glossary
Start up "type"
This is an aggregation of all of the variables that make up a start up, described more fully below. Currently, these variables are:
- state
- path
For example, a type of start up could be described as cold_main.
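To make the naming concrete, here is a minimal Kotlin sketch (not Fenix's actual telemetry code) showing how a type label such as cold_main can be composed from the two variables; the enum and function names are assumptions for illustration:

```kotlin
enum class StartupState { COLD, WARM, HOT }
enum class StartupPath { MAIN, VIEW }

// Compose the start up "type" label from its two variables.
fun startupType(state: StartupState, path: StartupPath): String =
    "${state.name.lowercase()}_${path.name.lowercase()}"

fun main() {
    println(startupType(StartupState.COLD, StartupPath.MAIN)) // prints "cold_main"
}
```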
Start up "state": COLD/WARM/HOT
"State" refers to how cached the application is, which will impact how quickly it starts up.
Google Play provides a set of definitions; ours are similar, but not identical, to them:
- COLD = starting up "from scratch": the process and HomeActivity need to be created
- WARM = the process is already created but HomeActivity needs to be created (or recreated)
- HOT = basically just foregrounding the app: the process and HomeActivity are already created
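One way to approximate these states from Activity lifecycle callbacks is sketched below. This is illustrative only, not necessarily how Fenix's own startup telemetry computes it, and the names are assumptions:

```kotlin
// Illustrative approximation of COLD/WARM/HOT based on Activity lifecycle.
object StartupStateTracker {
    enum class State { COLD, WARM, HOT }

    // Becomes true once any Activity has been created in this process.
    private var activityCreatedInThisProcess = false

    // Call from HomeActivity.onCreate(): COLD if the process is fresh,
    // WARM if the process already existed and only the Activity had to be (re)created.
    fun recordCreate(): State {
        val state = if (activityCreatedInThisProcess) State.WARM else State.COLD
        activityCreatedInThisProcess = true
        return state
    }

    // Call from HomeActivity.onRestart(): process and Activity both already exist, so HOT.
    fun recordRestart(): State = State.HOT
}
```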
Start up "path": MAIN/VIEW
"Path" refers to the code path taken for this start up. We name these after the action
inside the Intent
s received by the app such as ACTION_MAIN
that tell the app what to do:
- MAIN = a start up where the app icon was clicked. If there are no existing tabs, the homescreen will be shown. If there are existing tabs, the last selected one will be restored
- VIEW = a start up where a link was clicked. In the default case, a new tab will be opened and the URL will be loaded
Caveat: if an Intent is invalid, we may end up on a different screen (and thus take a different code path) than the one specified by the Intent. For example, an invalid VIEW Intent may instead be treated as a MAIN Intent.
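For reference, here is a hedged Kotlin sketch using standard Android Intent APIs to show where the MAIN and VIEW names come from; the package name and function names are assumptions for illustration (release, Beta, and Nightly builds use different application IDs):

```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri

// Assumed package name for illustration; adjust for the build you are targeting.
private const val FENIX_PACKAGE = "org.mozilla.fenix"

// MAIN path: the Intent a launcher sends when the app icon is tapped.
fun mainIntent(): Intent =
    Intent(Intent.ACTION_MAIN).apply {
        addCategory(Intent.CATEGORY_LAUNCHER)
        setPackage(FENIX_PACKAGE)
    }

// VIEW path: the Intent another app sends when a link should be opened in the browser.
fun viewIntent(url: String): Intent =
    Intent(Intent.ACTION_VIEW, Uri.parse(url)).apply {
        setPackage(FENIX_PACKAGE)
    }

// Triggering a VIEW start up from another app.
fun launchViewStartup(context: Context) {
    context.startActivity(viewIntent("https://example.org"))
}
```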