TestEngineering/Performance

''This page was redirected to [[Performance/Tools]] in the revision of 17:57, 24 July 2023; the content below is the last full revision.''
{{DISPLAYTITLE:Firefox Performance Test Engineering 🔥🦊⏱️}}[[File:Fxperftest.png|thumb|right]]

== Who we are ==
* Dave Hunt [:davehunt] 🇬🇧
* Stephen Donner [:stephend] 🇺🇸
* Rob Wood [:rwood] 🇨🇦
* Greg Mierzwinski [:sparky] 🇨🇦
* Ionuț Goldan [:igoldan] 🇷🇴
* Florian Strugariu [:bebe] 🇷🇴
* Marian Raiciof [:marauder] 🇷🇴
* Alex Ionescu [:alexandrui] 🇷🇴
* Alex Irimovici [:air] 🇷🇴


== New contributors ==
If you are a new contributor, or would like to start contributing, you can find a guide to help you [[/NewContributors|here]].

== Where to find us ==
* Matrix: [https://chat.mozilla.org/#/room/#perftest:mozilla.org #perftest]
* Slack: #perftest


== What we don't do ==
* Own all performance tests. We work on the test harnesses and tools used for performance testing, but the tests themselves are often developed outside of our team. Every test should have an owner, who is responsible for responding to questions about the test and may be asked to assist when it detects a regression.
* Review all performance tests. As with test ownership, we enable others to contribute performance tests. We can provide advice and reviews, but we do not require them for landing test changes.
* Maintain the infrastructure the tests run on.
* Maintain the continuous integration pipeline.
* Maintain the reporting tools.


== Team purpose ==
To support the infrastructure and creation of automated tests for evaluating the performance of Firefox products. This provides value by exposing gaps in coverage, revealing areas where we can make performance gains, identifying performance regressions in a timely manner, and providing key performance metrics that help determine how Firefox measures up against release criteria.

== Meetings ==
{{/Meetings}}


== What we do ==
We provide automated testing support for measuring the performance of Firefox products. Our work includes:
* Identifying gaps in performance test infrastructure and monitoring.
* Designing and building performance test infrastructure and monitoring solutions.
* Supporting Firefox engineers in writing performance tests.
* Supporting Firefox engineers in investigating regressions identified by tests.
* Collaborating with release operations on infrastructure requirements.
* Standing up performance tests in continuous integration environments.
* Monitoring performance test results and identifying potential regressions.
* Supporting performance sheriffs with tools to assist in identifying regressions.
* Developing test plans for performance testing.
* Running ad hoc manual or partially automated performance testing.
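The monitoring work above can be illustrated with a toy example. This is a hedged sketch only, not our production sheriffing logic (which lives in Perfherder); the function name and the 2% threshold are made up for illustration:

```python
from statistics import median

def detect_regression(baseline, candidate, threshold_pct=2.0):
    """Flag a potential regression when the candidate median is more than
    threshold_pct percent worse than the baseline median. Assumes lower
    values are better (e.g. page load time in milliseconds)."""
    base = median(baseline)
    change_pct = (median(candidate) - base) / base * 100
    return change_pct > threshold_pct, change_pct

# Hypothetical page load times (ms) before and after a patch
flagged, change = detect_regression([510, 500, 495, 505], [540, 535, 545, 538])
```

Real sheriffing also accounts for noise (multiple retriggers, confidence measures) before filing a regression bug; a raw percent-change check like this would over-report on noisy tests.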


== Task Workflow ==
See also: [[/Triage Process/]] and [[/Review Process/]]

=== New requests ===
New requests can be created via [https://moz-pi-test.atlassian.net/secure/CreateIssue.jspa?issuetype=1&pid=10012 Jira Service Desk] (requires an account) or via [https://mana.mozilla.org/wiki/display/PI/Performance+Test+Engineering Mana].
 
=== Triage ===
Occurs weekly on Mondays at 14:45 UTC using the [https://moz-pi-test.atlassian.net/issues/?filter=10019 Untriaged] filter.
# Set status to '''Current Quarter''' if work is to be considered for the current quarter
#* Set due date if there is a deadline, else use last working day in quarter
#* Set priority if known to be '''<span style="color:red">high</span>''' or '''<span style="color:green">low</span>''', else use '''<span style="color:orange">medium</span>'''
#* Set assignee to be responsible for planning
# Set status to '''Future Quarter''' if work is to be considered for next quarter planning
# Set deferred date to bump to a future triage date
 
=== Current quarter planning ===
Occurs regularly using the [https://moz-pi-test.atlassian.net/issues/?filter=10034 Needs Planning (me)] and [https://moz-pi-test.atlassian.net/issues/?filter=10033 Needs Planning] filters.
 
# Create links for blockers, dependencies, etc.
# Create subtasks if needed, for each subtask:
#* Set priority if known to be '''<span style="color:red">high</span>''' or '''<span style="color:green">low</span>''', else use '''<span style="color:orange">medium</span>'''
#* Set assignee to be responsible for development
#* Set due date if there is a deadline, else reflect parent task due date
#* Create links for blockers, dependencies, etc.
#* Set time estimate (in weeks)
#* If no further planning is needed, set status to '''Ready'''
# If no subtasks are created:
#* Set assignee to be responsible for development
#* Set time estimate (in weeks)
# If no further planning is needed, set status to '''Ready'''
 
=== Weekly review ===
Occurs weekly on Mondays at 15:30 UTC using the [https://moz-pi-test.atlassian.net/issues/?filter=10038 Blocked], [https://moz-pi-test.atlassian.net/issues/?filter=10015 Overdue] and [https://moz-pi-test.atlassian.net/issues/?filter=10017 Upcoming] filters.
 
# Review due date, priority, and status
 
=== Estimates ===
Occurs regularly using the [https://moz-pi-test.atlassian.net/issues/?filter=10035 Needs Estimate (me)] and [https://moz-pi-test.atlassian.net/issues/?filter=10032 Needs Estimate] filters.
 
# Set time estimate (in weeks)
 
=== Next quarter planning ===
Occurs each quarter using the [https://moz-pi-test.atlassian.net/issues/?filter=10036 Future Quarter] filter.
 
# To be considered for the upcoming quarter:
#* Set status to '''Current Quarter'''
#* Set due date if there is a deadline, else use last working day in next quarter
#* Set priority if known to be high or low, else use medium
#* Set assignee to be responsible for planning
# To be reconsidered for a future quarter, leave status as '''Future Quarter'''
 
=== Current quarter review ===
Occurs regularly using the [https://moz-pi-test.atlassian.net/issues/?filter=10040 Current Quarter] filter.
 
# Determine time (weeks) remaining in quarter
# Review assignee, due date, priority, and status
 
=== Development ===
Occurs whenever we are free to take on new work, using the [https://moz-pi-test.atlassian.net/issues/?filter=10041 Ready (me)] and [https://moz-pi-test.atlassian.net/issues/?filter=10039 Ready] filters.
 
# Set status to '''Dev'''
# Raise a tracking bug or issue against the appropriate project
# Add link to the tracking bug or issue
 
=== Blocked ===
Occurs whenever a task is blocked.
 
# Set status to '''Blocked'''
# If task is blocked by other task(s):
#* Add link(s) to the blocking task(s)
# Add a comment detailing the circumstances of the blocker(s).
 
== Objectives ==
 
=== [https://docs.google.com/spreadsheets/d/1P1d7rzA27t3OHljraxapNgDl-1Hsc_novoj78w8qog0/edit#gid=584732224 2019/Q3] ===
* KR1.3: Fenix Startup Test Development [rwood] {{notstarted|}} [%]
* KR3.1: Video QoE testing for desktop [marauder] {{ok|}} [90%]
* KR3.2: Video QoE testing for Fenix [marauder] {{ok|}} [90%]
* KR3.4: Complete MVP for browsertime in CI [rwood] {{ok|}}  [100%]
* KR4.1: Migrate CI from Chromium to Chrome [sparky] {{notstarted|}} [80%]
* KR4.2: Power tests on macOS laptops [sparky] {{ok|}} [100%]
* KR6.6: Improve perf sheriff efficiency [igoldan] {{ok|}} [70%]
 
=== 2019/H1 ===
* Increase regression detection '''coverage''' to page load on Android products including WebView comparisons
* Increase quality standards by measuring and alerting on power usage for Android and ARM64
* Build dashboards to improve '''visibility''' of how we are tracking against our release criteria for Android
* Reduce noise in test results and add annotations to provide '''clarity''' of signal for regressions
* Develop measurements and mechanisms for reporting on our tools, policies and documentation for how to improve '''clarity''' and '''efficiency''' of risk assessments
 
=== [https://docs.google.com/spreadsheets/d/1P1d7rzA27t3OHljraxapNgDl-1Hsc_novoj78w8qog0/edit#gid=104649774 2019/Q2] ===
* Meet Fenix blocking performance release criteria
** Fenix 1.0 geomean cold page load time to the onload event is >20% faster than Fennec 64 for tp6m reference sites (2019Q1 snapshots) on Fenix reference hardware
* Perform well on priority use cases for Qualcomm's H2 product launches
** Firefox v68 for Windows 10 on ARM battery utilization is within 20% of Edge on Qualcomm 8xx series reference hardware when watching the tp6m YouTube reference video to completion
** The proportion of dropped frames during playback of the tp6m YouTube reference video at 1080p resolution does not exceed 4% on Firefox v68 on the Qualcomm 8xx series reference hardware.
** Firefox v68 for Windows 10 on ARM geomean cold page load time to the onload event is within 20% of Edge for tp6 reference sites on Qualcomm 8xx series reference hardware.
* Meet blocking performance objectives for GeckoView-powered Fire TV pitch
** The proportion of dropped frames during playback of the tp6m 4K YouTube reference video using GeckoView-powered Firefox for Fire TV does not exceed 4% on the Fire TV 4k stick.
* Lay foundation to support performance work in H2
** Develop key metrics and scenarios needed to build CI-integrated tests for at least three of video quality, scroll responsiveness, input responsiveness, memory use, CPU utilization, and heat.
* Make the Firefox desktop browser feel fast and responsive
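Several of these criteria are expressed as a geometric mean across reference sites, or as a proportion of dropped frames. As a hedged illustration of the arithmetic only, with made-up numbers (not real measurements from these objectives):

```python
from math import prod

def geomean(values):
    """Geometric mean: the n-th root of the product of n values."""
    return prod(values) ** (1 / len(values))

# Hypothetical cold page load times (ms) for three reference sites
fenix  = [800.0, 1200.0, 900.0]
fennec = [1100.0, 1600.0, 1300.0]
improvement_pct = (geomean(fennec) - geomean(fenix)) / geomean(fennec) * 100
# The ">20% faster than Fennec" criterion compares improvement_pct against 20

# Hypothetical dropped-frame proportion; the playback criteria cap this at 4%
dropped_pct = 30 / 1000 * 100
```

The geometric mean is used rather than the arithmetic mean so that a single slow reference site cannot dominate the aggregate: a 2x slowdown on one site is weighted the same as a 2x slowdown on any other.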
 
== Projects ==
See also: [[/Fenix/]] and [[/Raptor/Browsertime/]]
 
=== Raptor ===
Raptor is a Python testing framework for running browser benchmark and page-load performance tests. It is cross-browser compatible and currently runs in production on Mozilla's Taskcluster against Firefox Desktop, Firefox for Android, and Google Chromium.
 
* Owner: Rob Wood [:rwood]
* Source: https://searchfox.org/mozilla-central/source/testing/raptor
* Good first bugs: https://codetribute.mozilla.org/projects/automation?project=Raptor
* Documentation: https://wiki.mozilla.org/Performance_sheriffing/Raptor
 
=== Talos ===
Talos is a versatile Python performance testing framework that runs on Windows, macOS, and Linux. It was created to serve as a test runner for the performance tests Mozilla was already running in 2007, as well as to provide an extensible framework for new tests as they were created.
 
* Owner: Rob Wood [:rwood], Joel Maher [:jmaher]
* Source: https://searchfox.org/mozilla-central/source/testing/talos
* Good first bugs: https://codetribute.mozilla.org/projects/automation?project=Talos
* Documentation: https://wiki.mozilla.org/Performance_sheriffing/Talos
 
=== WebPageTest ===
* Owner: Stephen Donner [:stephend]
* Source: https://github.com/mozilla/wpt-api
* Good first bugs: https://github.com/mozilla/wpt-api/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22
* Documentation: https://mozilla-wpt-api-docs.readthedocs.io/en/master/
 
=== Perfherder ===
Perfherder is an interactive dashboard intended to allow monitoring and analysis of automated performance tests run against Mozilla products (currently Firefox and Firefox for Android).
Perfherder is part of the Treeherder project.
 
* Location: https://treeherder.mozilla.org/perf.html
* Owner: Ionuț Goldan [:igoldan]
* Source: https://github.com/mozilla/treeherder
* Good first bugs: https://codetribute.mozilla.org/projects/reporting?project=Perfherder
* User documentation: https://wiki.mozilla.org/EngineeringProductivity/Projects/Perfherder
* Developer documentation: https://treeherder.readthedocs.io/
 
== Onboarding ==
Welcome to the team! You are encouraged to improve the [[/Onboarding|onboarding page]]. If you need to ask questions that are not already covered, please update the page so that the next person has a better onboarding experience.

== Dashboards ==

=== Firefox Are We Fast Yet Dashboard ===
Shows a variety of benchmarks, run on a variety of platforms. Meant to be a detailed view of performance.

* Location: https://arewefastyet.com
* Owner: Armen Zambrano Gasparnian [:armen]
* Source: https://github.com/mozilla-frontend-infra/firefox-performance-dashboard

=== Firefox Health Dashboard ===
Tracks metrics and statistics that are important for monitoring performance improvements. Meant to be a high-level view.

* Location: https://health.graphics/
* Owner: Armen Zambrano Gasparnian [:armen], Kyle Lahnakoski [:ekyle]
* Source: https://github.com/mozilla-frontend-infra/firefox-health-dashboard/
* Good first bugs: https://github.com/mozilla-frontend-infra/firefox-health-dashboard/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22

== Results ==
See our {{wip|[[/Results|results page]]}}.


== Resources ==
* [[/Glossary/]]
* [https://docs.google.com/document/d/1SswqYIAm4h8vlwfMc0pfGEJwXpFECyubGDezRZHHPFE/edit Strategies for investigating intermittents]
* [https://docs.google.com/document/d/1HV2_z8hwhI2w8EbURtkYjpikVG5g9QeKEPo9h5msuRs/edit Following up perf bugs]
* [https://docs.google.com/document/d/103SRVVcE2SZNYP3kFXGeiVQrusH2Wj2yv8SWaDvB9SM/edit Excessive Android device queue response plan]
* [https://docs.google.com/document/d/1jVSaTlMZx-9DCjJurDpZJElm3yxN7N3OyjVh7hmwBb4/edit Perf bisection workflows]
* [[/FAQ#How_can_I_do_a_bisection.3F|Bisection Workflow]]
* [[/Sheriffing/CompareView|CompareView]]
