The trade-offs for each technique are mentioned in their respective section.
== Benchmark remotely ==
You can now run benchmarks remotely in CI/automation. This section is in progress.
== Benchmark locally ==
A benchmark is an automated test that measures performance, usually the duration from point A to point B. Automated benchmarks have similar trade-offs to automated functionality tests when compared to one-off manual testing: they can continuously catch regressions and minimize human error. For manual benchmarks in particular, it can be tricky to be consistent about how we aggregate each test run into the results.
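For illustration, here is a minimal sketch of that idea in Python; the workload function, run count, and choice of median as the aggregate are assumptions for the example, not part of this page:

<syntaxhighlight lang="python">
import statistics
import time

def benchmark(fn, runs=10):
    """Run fn several times and return the median duration in seconds.

    The median is one common way to aggregate runs; it is less
    sensitive to outliers than the mean. (Assumed aggregate for
    this sketch.)
    """
    durations = []
    for _ in range(runs):
        start = time.perf_counter()  # point A
        fn()
        durations.append(time.perf_counter() - start)  # point B
    return statistics.median(durations)

# Hypothetical workload: time a simple computation.
print(f"median: {benchmark(lambda: sum(range(1_000_000))):.4f}s")
</syntaxhighlight>

Automating the loop and the aggregation like this removes the run-to-run inconsistency that creeps into manual benchmarking.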
See the [[#Benchmark remotely|Benchmark remotely]] section for information about how you can run these tests in CI/automation.
'''To benchmark, do the following:'''