TestEngineering/Performance/RunningTests

= Prerequisites =


To run the tests you will need to have the [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Source_Code source code] available locally, and satisfy the requirements for running [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/mach mach] commands. If you want to run tests against the same infrastructure as our continuous integration then you will need to follow the documentation at [[ReleaseEngineering/TryServer]].  
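If you have not yet set up a build environment, a typical first step (assuming a fresh clone of mozilla-central; your setup may differ) is:<br />
'''$ ./mach bootstrap'''<br />
which installs the build and test dependencies for your platform.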
= Identifying tests to run =
To be able to run one or more performance tests you first need to know which tests you would like to run. If you're responding to a regression bug then the test names will be listed in the bug report. The performance tests and frameworks available can be found in our [[TestEngineering/Performance#Projects|projects list]]. You can also see our new [https://firefox-source-docs.mozilla.org/testing/perfdocs/index.html performance test documentation], which will eventually replace our wiki pages.


= Running tests locally =
As long as you have [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/mach mach] installed and configured, you can run a performance test either locally or on try. To get familiar with the available mach commands, first run:<br />
'''$ ./mach help'''
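The exact command for running a single test locally depends on the framework. As a sketch (the flags and test names below may have changed; check '''$ ./mach help''' and the [https://firefox-source-docs.mozilla.org/testing/perfdocs/index.html performance test documentation] for the current options), a Raptor page-load test and a Talos suite can be run with something like:<br />
'''$ ./mach raptor-test -t raptor-tp6-1-firefox'''<br />
'''$ ./mach talos-test -a tsvgx'''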


= Scheduling tests on try =
Follow [[ReleaseEngineering/TryServer#How_to_push_to_try|this guide]] to schedule jobs against the try server. Running a performance test on try (with results shown in Treeherder) uses Mozilla's integration environment, meaning it won't use any of your workstation's resources to get the results.
<br />
The command:<br />
'''$ ./mach try chooser'''<br />
opens a web UI where you can choose the tests to run, the platform, and the build type.<br />
[[File:Try chooser.png|800 px|try chooser]]<br />
or<br />
'''$ ./mach try fuzzy'''<br />
which gives you a fuzzy-matching CLI in your console to choose the tests you want to run.<br />
[[File:Try fuzzy.png|800 px|try fuzzy]]<br /><br />
If you know exactly how some words in the test signature are spelled, such as the platform, test name, or build type, you can use a single quote ‘ to filter the results down to your preference.<br />
[[File:Try fuzzy filter platform test.png|800 px|Try fuzzy filter platform test]]<br /><br />
If you know exactly how the signature starts, you can put a ^ in front of your search string and the results will narrow down to just the signatures that begin with it.<br />
[[File:Try fuzzy filter starts with.png|800 px|try fuzzy filter starts with]]<br /><br />
If you want to skip the chooser or the fuzzy interface and push the test to try directly from the command line, you can use the -q option (for query). The --no-push option allows you to check the jobs you're pushing by printing them to the command line without actually pushing them to try. The -m option is pretty handy for attaching a message when you push the same test several times (and believe me, you will!).<br />
'''$ ./mach try fuzzy --full -q="'linux64 'raptor-tp6-1-firefox" -m="base for [bug/commit]" --no-push''' <br />
[[File:Try fuzzy filter platform test.png|600 px|try fuzzy filter platform test]]<br /><br />
And narrowing the results down further:<br />
'''$ ./mach try fuzzy --full -q="'linux64-shippable/opt-raptor-tp6-1-firefox" -m="base for [bug/commit]" --no-push''' <br />
[[File:Mach try fuzzy filter platform build type test.png|600 px|mach try fuzzy filter platform build type test]]<br /><br />
Or just specifying how the test signature starts:<br />
[[File:Try fuzzy query filter starts with.png|600 px|try fuzzy query filter starts with]]<br /><br />
Once you have found your query, you just need to remove --no-push to push to try.<br /><br />


These are just some examples of what performance sheriffs do when pushing tests to try. You can run the command below at any time to get the full help for the try command.<br />
'''$ ./mach try --help'''

== Rebuilds ==
Due to the variance in performance test results it is a good idea to schedule multiple rebuilds. We typically recommend 5 rebuilds. This can be achieved by adding <code>--rebuild 5</code> to your try syntax, as in the example below.
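A sketch (the query is illustrative; substitute the one you built above):<br />
'''$ ./mach try fuzzy --full -q="'linux64 'raptor-tp6-1-firefox" --rebuild 5'''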
 
== Presets ==
If you're unsure which tests to run, there are some [https://firefox-source-docs.mozilla.org/tools/try/presets.html mach try presets] that can help (see the example after this list):
 
;perf
: Runs all performance (raptor and talos) tasks across all platforms. Android hardware platforms are excluded due to resource limitations. All jobs are scheduled to run 5 times.
 
;perf-chrome
: Runs the talos tests most likely to change when making a change to the browser chrome. This skips a number of talos jobs that are unlikely to be affected in order to conserve resources.
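A preset is scheduled by passing its name to <code>--preset</code>; for example:<br />
'''$ ./mach try --preset perf'''<br />
See the linked presets documentation for the full list of presets and their definitions.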
 
== Scheduling hidden jobs ==
Some jobs are hidden by default to reduce the chance of them being scheduled unintentionally. These are typically jobs that run on limited pools of hardware, such as mobile devices. To make these available to your try run, add the <code>--full</code> option, as shown below.
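For example, combined with a query (the query here is illustrative; Android hardware jobs are one example of a limited pool):<br />
'''$ ./mach try fuzzy --full -q="'android 'raptor"'''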
 
== Viewing test results ==
{{todo|Treeherder, Perfherder graphs}}
 
== Comparing results from multiple try jobs ==
{{todo|Perfherder compare view}}
