{{DISPLAYTITLE:Running Performance Tests}}
= Prerequisites =
To run the tests you will need to have the [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Source_Code source code] available locally, and satisfy the requirements for running [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/mach mach] commands. If you want to run tests against the same infrastructure as our continuous integration then you will need to follow the [[ReleaseEngineering/TryServer|try server documentation]].  
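As a rough sketch of a first-time setup (the exact steps are covered in the linked documentation; this assumes a Mercurial checkout of mozilla-central):<br/>
'''> hg clone https://hg.mozilla.org/mozilla-central/ && cd mozilla-central'''<br/>
'''> ./mach bootstrap'''<br/>
'''./mach bootstrap''' installs the dependencies that mach commands need on your machine.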


= Identifying tests to run =
To be able to run one or more performance tests you first need to know which tests you would like to run. If you're responding to a regression bug then the test names will be listed in the bug report. The performance tests and frameworks available can be found in our [[TestEngineering/Performance#Projects|projects list]]. You can also see our new [https://firefox-source-docs.mozilla.org/testing/perfdocs/index.html performance test documentation], which will eventually replace our wiki pages.
= Running tests locally =
== Webext ==
Webextension tests can be run locally via mach. The command is:<br/>
'''> ./mach raptor-test -t [test-name]'''<br/>
e.g.<br/>
'''> ./mach raptor-test -t raptor-tp6-amazon-firefox'''<br/>


There are extra options you can use, such as '''--cold''', which runs the cold version of the test.<br/>
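For example, combining the test above with that flag:<br/>
'''> ./mach raptor-test -t raptor-tp6-amazon-firefox --cold'''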
 
For the full list of extra options, use:<br/>
'''> ./mach raptor-test -t [test-name] --help'''
 
You can find all the tests available to run by using '''--print-tests'''.
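For example, to list every available test:<br/>
'''> ./mach raptor-test --print-tests'''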
 
== Browsertime ==
Browsertime tests can be run locally via mach. The command is:<br/>
'''> ./mach raptor-test -t [test-name] --browsertime'''<br/>
e.g.<br/>
'''> ./mach raptor-test -t amazon --browsertime'''<br/>
 
There are extra options you can use, such as:<br/>
'''--app''', which selects the application used to run the test<br/>
'''--cold''', which runs the cold version of the test.<br/>
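For example, combining these options (a sketch; '''--app chrome''' assumes a local Chrome install):<br/>
'''> ./mach raptor-test -t amazon --browsertime --app chrome --cold'''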
 
For the full list of extra options, use:<br/>
'''> ./mach raptor-test -t [test-name] --browsertime --help'''
 
You can find all the tests available to run by using '''--print-tests'''.
 
== Talos ==
Follow [[TestEngineering/Performance/Talos/Running| this guide]] to find out how to run talos tests locally.
 
== AWSY ==
Follow [[Project_Fission/Memory| this guide]] to find out how to run AWSY tests locally.
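For reference, AWSY tests are also driven through mach (see the linked guide for setup details):<br/>
'''> ./mach awsy-test'''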
 
= Scheduling tests on try =
Follow [[ReleaseEngineering/TryServer#How_to_push_to_try|this guide]] to schedule jobs against the try server.
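As a quick sketch, a typical performance push with '''mach try fuzzy''' looks like this (the query is only an example; adjust the platform and suite to your needs):<br/>
'''> ./mach try fuzzy -q="'linux64 'raptor-tp6-1-firefox"'''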
 
== Rebuilds ==
Due to the variance in performance test results it is a good idea to schedule multiple rebuilds. We typically recommend 3 rebuilds. This can be achieved by adding <code>--rebuild 3</code> to your try syntax.
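For example:<br/>
'''> ./mach try fuzzy --rebuild 3'''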
 
== Presets ==
If you're unsure which tests to run, there are some [https://firefox-source-docs.mozilla.org/tools/try/presets.html mach try presets] that can help:
 
;perf
: Runs all performance (raptor and talos) tasks across all platforms. Android hardware platforms are excluded due to resource limitations.
 
;perf-chrome
: Runs the talos tests most likely to change when making a change to the browser chrome. This skips a number of talos jobs that are unlikely to be affected in order to conserve resources.
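For example, to schedule the perf preset (note that it pushes a large number of jobs):<br/>
'''> ./mach try --preset perf'''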
 
== Scheduling hidden jobs ==
Some jobs are hidden by default to reduce them being scheduled unintentionally. These are typically jobs that run on limited pools of hardware such as mobile devices. To make these available to your try run add the <code>--full</code> option.
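For example, with '''mach try fuzzy''':<br/>
'''> ./mach try fuzzy --full'''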
 
== Viewing test results ==
After pushing the jobs to try, you will be given a link to the Treeherder job view. There you can see whether the tests failed or passed, view their results, and find plenty of other information. We do not recommend using the graph view to look at data-point trends for the try repo (it is useful for the other repos); instead, use the compare view to make a thorough comparison between two pushes. You can also compare two pushes from different repos as long as they contain comparable jobs.
 
== Comparing results from multiple try jobs ==
[[TestEngineering/Performance/Sheriffing/CompareView| Follow this guide]] to be able to compare results from multiple try jobs.
