Performance/Fenix/Performance reviews



The trade-offs for each technique are mentioned in their respective section.
== Benchmark remotely ==
Benchmarks can now be run remotely in CI/automation. This section is in progress.


== Benchmark locally ==
A benchmark is an automated test that measures performance, usually the duration from point A to point B. Automated benchmarks have similar trade-offs to automated functionality tests when compared to one-off manual testing: they can continuously catch regressions and they minimize human error. For manual benchmarks in particular, it can be tricky to be consistent about how each test run is aggregated into the results. However, automated benchmarks are time-consuming and difficult to write, so sometimes it's better to perform manual tests.
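The point-A-to-point-B measurement and the run-aggregation concern above can be sketched as a small harness. This is a minimal illustration, not Fenix's actual benchmark tooling: `workload` is a hypothetical stand-in for the code under test, and taking the median of many runs is one common way to aggregate samples consistently.

```java
import java.util.Arrays;

public class ManualBenchmark {
    // Hypothetical workload standing in for the code under test.
    static void workload() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i;
        if (sum < 0) throw new IllegalStateException("unreachable; defeats dead-code elimination");
    }

    /**
     * Runs the task `iterations` times and returns the median duration in
     * nanoseconds. The median is used so a single outlier run (GC pause,
     * background work) does not skew the reported result.
     */
    static long medianDurationNanos(Runnable task, int iterations) {
        long[] samples = new long[iterations];
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime(); // point A
            task.run();
            samples[i] = System.nanoTime() - start; // point B
        }
        Arrays.sort(samples);
        return samples[iterations / 2];
    }

    public static void main(String[] args) {
        long median = medianDurationNanos(ManualBenchmark::workload, 25);
        System.out.println("median ns: " + median);
    }
}
```

Deciding on the aggregation up front (median, mean, or a percentile) is what keeps manual benchmark runs comparable from one session to the next.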


See the [[#Benchmark remotely|Benchmark remotely]] section for information about how you can run these tests in CI/automation. When benchmarking locally, '''please use a low-end device.'''


'''To benchmark, do the following:'''