TestEngineering/Performance/Talos/Misc
Revision as of 15:24, 1 March 2018
Adding a new test
Adding a new performance test or modifying an existing test is much easier than most people think. In general, creating a patch for talos involves the following steps:
- determine if this is a startup test or a page load test and create the appropriate folder for your test in talos
- add all files and resources for the test (make sure there is no external network accesses) to that folder
- file a bug in the testing:talos component with your patch
What we need to know about tests
When adding a new test, we really need to understand what we are doing. Here are some questions that you should know the answer to before adding a new test:
- What does this test measure?
- Does this test overlap with any existing test?
- What is the unit of measurement that we are recording?
- What would constitute a regression?
- What is the expected range in the results over time?
- Are there variables or conditions which would affect this test?
- browser configuration (prefs, environment variables)?
- OS, resources, time of day, etc... ?
- Independent of observation? Will this test produce the same number regardless of what was run before it?
- What considerations are there for how this test should be run and what tools are required?
Please document the answers to these questions on Buildbot/Talos/Tests.
Making a Pageloader test work
A pageloader test is the most common. Using the pageloader extension allows for most of the reporting and talos integration to be done for you. Here is what you need to do:
- Create a new directory in the tests subdirectory of talos
- Create a manifest
- add a <testname>.manifest file in your directory (svg example)
- raw page load will just be a single url line (svg example)
- internal measurements will have a % prior to the url (svg example)
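For concreteness, a manifest is just a list of page URLs, one per line; a leading % marks a page that reports its own internal measurements instead of being timed as a raw page load. A hypothetical tests/mytest/mytest.manifest could contain either of these two lines, depending on the style of measurement:

```
http://localhost/tests/mytest/mytest-raw.html
% http://localhost/tests/mytest/mytest-internal.html
```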
- Add your tests
- for self reporting tests, use [tpRecordTime]
- tsvgx example - single page/value
- tart example - multiple pages/values
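To make the self-reporting idea concrete, here is a minimal sketch of a page that reports a single value. The page content and timing logic are hypothetical; tpRecordTime itself is supplied by the pageloader extension when the page runs under talos, which is why the sketch guards the call:

```html
<html>
<head>
<script>
var start = performance.now();
window.addEventListener("load", function () {
  // run the workload being measured here, then report elapsed ms;
  // tpRecordTime is defined by the talos pageloader extension
  var value = performance.now() - start;
  if (window.tpRecordTime) {
    tpRecordTime(value);  // single page/value (tsvgx-style)
  }
});
</script>
</head>
<body>hypothetical self-reporting test page</body>
</html>
```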
- Add your test definition to talos via [test.py] to add a new class for your test. ([tscrollx example])
- Add an item for tpmanifest, ${talos} is replaced with the current running directory.
- example: tpmanifest = '${talos}/tests/scroll/scroll.manifest'
- Add an item for tpcycles (we recommend 1) and tppagecycles (we recommend 25 for a simple page, and 5 for an internal recording benchmark)
- Add an item for filters. ([tcanvasmark example])
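Putting the test.py items together, a pageloader test definition is essentially a class whose attributes configure the harness. The sketch below is self-contained (it defines a stand-in base class rather than importing talos), so treat it as an illustration of the shape, not the real talos API; in an actual patch you would subclass the class already defined in testing/talos/talos/test.py:

```python
# Stand-in for talos's real pageloader base class, so this sketch runs
# on its own; a real patch subclasses the one in testing/talos/talos/test.py.
class PageloaderTest:
    tpmanifest = None
    tpcycles = 1
    tppagecycles = 1
    filters = None

class myscroll(PageloaderTest):
    # ${talos} is expanded by the harness to the current running directory
    tpmanifest = '${talos}/tests/scroll/scroll.manifest'
    tpcycles = 1        # cycle through the manifest once
    tppagecycles = 25   # reload each page 25 times (simple page);
                        # use 5 for an internal recording benchmark
    # placeholder filter spec; real definitions build this with
    # talos's filter helpers
    filters = ['ignore_first', 'median']
```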
Making a Startup test work
A startup test is designed to load the browser with a single page many times. This is useful for tests which shouldn't have any extensions, can handle all loading and measuring themselves, or require the measurement of the browser startup. Here is what you need to do:
- Create a new directory in the startup_test subdirectory of talos
- Add your tests to the folder; these will be accessed by a raw URL
- the tests need to report [__start_report<value>__end_report] so talos will find it ([tspaint example])
- if you plan on doing shutdown times, you need to add in [__startTimestamp<value>__endTimestamp]. ([tspaint example])
- Include [MozillaFileLogger.js] and [quit.js] in your script ([tspaint example])
- Add your test definition to talos via [test.py] to add a new class for your test. ([tresize example])
- Add an item for cycles, we recommend 20.
- Add an item for url; this is relative to the talos directory and is what Firefox will load
- Add tpmozafterpaint and set it to 'True' by default, or 'False' if your test does other internal measurements unrelated to rendering a page
- Add an item for filters. ([tresize example])
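As with the pageloader case, the startup-test items above map onto class attributes in test.py. The sketch below defines a stand-in base class so it runs on its own; the class and file names are hypothetical, and a real patch would subclass talos's startup-test base class in testing/talos/talos/test.py:

```python
# Stand-in for talos's startup-test base class, so this sketch runs alone.
class StartupTest:
    cycles = 20
    url = None
    tpmozafterpaint = True
    filters = None

class mystartup(StartupTest):
    cycles = 20              # recommended: restart the browser 20 times
    # relative to the talos directory; this is what Firefox will load
    url = 'startup_test/mystartup/mystartup.html'
    tpmozafterpaint = True   # set False if the test does its own internal
                             # measurements unrelated to rendering a page
    filters = ['ignore_first', 'median']  # placeholder filter spec
```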
Steps to add a test to production
- [File a bug] to add tests to [talos] [example].
- for aurora/central on desktop only - just land your new test to your in-tree commit (on inbound/autoland) and ensure that you adjust talos.json to run the test.
- NOTE: one exception is adding an entirely new suite; this requires a [buildbot] patch to add the new scheduled job. Example: [Bug 886288 - deploy new tsvgx and tscrollx talos tests to m-c only]
- if you add a new suite, edit talos.yml to add the testname to a new suite, then add it to test-sets.yml for all the different talos test-sets.
- Update talos
- just land code in-tree (inbound/autoland)
- Ensure the test follows the branch through the [release stages]
- if we are just updating talos.json, this will be automatic
- if this is an update to an existing test that could change the numbers, this needs to be treated as a new test and run side by side for a week to get a new baseline for the numbers.
- document the test on Buildbot/Talos/Tests
- NOTE: this should be updated every time the branch moves closer to release. Once on release, we don't need to update it
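For the new-suite case, the taskcluster side amounts to config entries along these lines. The keys are simplified and the suite name is hypothetical; check the real talos.yml and test-sets.yml for the exact schema before landing anything:

```yaml
# taskcluster/ci/test/talos.yml (keys simplified; suite name hypothetical)
talos-mysuite:
    description: "Talos mysuite run"
    suite: talos
    treeherder-symbol: T(ms)

# taskcluster/ci/test/test-sets.yml: list the new job in each talos test set
talos:
    - talos-chrome
    - talos-mysuite
```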
While that is a laundry list of items to do, if you are a developer of a component, just talk to the a*team ([jmaher]) and they will handle the majority of the steps above.
Background Information
Hardware Profile of machines used in automation
2018 - Firefox 60 and forward:
- linux64, win10x64 (we also test win32 builds on win10x64 instead of win7):
HPE Moonshot 1500 System (45 cartridges per 4.3U)
- 1500W Hot Plug redundant (1+1) Power Supply
- 45 m710x ProLiant cartridges
- 1 Intel E3-1585Lv5 3.0GHz CPU
- 8GB DDR4 2400MHz RAM
- 1 256GB PCIe M.2 2280 SSD
- 1 64GB SATA M.2 2242 SSD
- 1 Intel Iris Pro Graphics P580
Older IX hardware for versions prior to Firefox 60:
- linux64, win7x32, win10x64:
iX21X4 2U Neutron "Gemini" Series Four Node Hot-Pluggable Server (4 nodes per 2U)
- 920W High-Efficiency redundant (1+1) Power Supply
- 1 Intel X3450 CPU per node
- 8GB total: 2 x 4GB DDR3 1333MHz ECC/REG RAM per node
- 1 WD5003ABYX hard drive per node
- 1 NVIDIA GeForce GT 610 GPU per node
- OSX 10.8 (mtnlion):
- Model Name: Mac mini
- Model Identifier: Macmini5,3
- Processor Name: Intel Core i7
- Processor Speed: 2 GHz
- Number of Processors: 1
- Total Number of Cores: 4
- L2 Cache (per Core): 256 KB
- L3 Cache: 6 MB
- Memory: 8 GB
- Disk: 2 x 500G SATA drives (RAID)
- All Mac Minis have EDID devices attached that set the resolution at 1600x1200.
Naming convention
't' is pre-pended to the names to represent 'test'. Thus, ts = 'test startup', tp = 'test pageload', tdhtml = 'test dhtml'.
History of tp Tests
tp4m
This is a smaller pageset (21 pages) designed for mobile Firefox. This is a blend of regular and mobile friendly pages.
We landed on this on April 18th, 2011 in bug 648307. This runs for Android and Maemo mobile builds only.
tp5
Updated web page test set to 100 pages from April 8th, 2011. Effort was made for the pages to no longer be splash screens/login pages/home pages but to be pages that better reflect the actual content of the site in question.
tp6
Created June 2017 with pages recorded via mitmproxy using modern google, amazon, youtube, and facebook. Ideally this will contain more realistic user accounts that have full content; in addition, we would have more than 4 sites, up to the top 10 or maybe top 20.