Auto-tools/Projects/AddonStartupPerf
= Goal =

Get addon startup impact (and, in the future, general performance and unittest failures) automated and reported on a regular basis.

= Pieces =

'''addon fetcher''' - look at a feed of popular addons and download the latest version of each into the database.
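
A minimal sketch of what the fetcher could look like, assuming a hypothetical feed URL and a local SQLite table; neither the URL nor the schema is a real AMO interface:

<pre>
import sqlite3
import urllib.request

import feedparser  # third-party feed parser; any would do

FEED_URL = "https://example.com/popular-addons.rss"  # placeholder, not a real feed

def fetch_addons(db_path="addons.db"):
    """Download the latest version of each addon in the feed into the db."""
    db = sqlite3.connect(db_path)
    db.execute("""CREATE TABLE IF NOT EXISTS addons
                  (name TEXT PRIMARY KEY, version TEXT, xpi BLOB)""")
    for entry in feedparser.parse(FEED_URL).entries:
        # entry.link is assumed to point at the addon's latest .xpi
        xpi = urllib.request.urlopen(entry.link).read()
        db.execute("INSERT OR REPLACE INTO addons VALUES (?, ?, ?)",
                   (entry.title, entry.get("version", "unknown"), xpi))
    db.commit()
</pre>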


'''profile generator''' - create a clean profile with a given set of addons/preferences (Firefox will be run once with these addons to finish installing them).
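
A rough sketch of profile generation, treating the profile as just a directory with a prefs.js and an extensions/ folder; the function name and calling convention are illustrative, not dirtyharry's actual API:

<pre>
import json
import os
import shutil

def make_profile(profile_dir, xpi_paths, prefs):
    """Create a clean profile containing the given addons and preferences."""
    os.makedirs(os.path.join(profile_dir, "extensions"), exist_ok=True)
    for xpi in xpi_paths:
        # Firefox picks up XPIs dropped into the profile's extensions/ dir
        shutil.copy(xpi, os.path.join(profile_dir, "extensions"))
    with open(os.path.join(profile_dir, "prefs.js"), "w") as f:
        for name, value in prefs.items():
            # json.dumps yields valid JS literals: true/false, numbers, "strings"
            f.write("user_pref(%s, %s);\n" % (json.dumps(name), json.dumps(value)))
</pre>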


'''Firefox runner''' - a Firefox runner that can receive results asynchronously (jsbridge or something like it that is not an extension - perhaps <tt>dump</tt> with the dump preference turned on).
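
One way to realize the <tt>dump</tt> idea, sketched below: enable the <tt>browser.dom.window.dump.enabled</tt> preference and scan Firefox's stdout for a result marker. The <tt>__startup_time:</tt> marker is an invented convention a test harness would have to emit, not something Firefox prints on its own:

<pre>
import subprocess

def run_firefox(binary, profile_dir, marker="__startup_time:"):
    """Run Firefox and return the startup time (ms) the harness dumps."""
    proc = subprocess.Popen([binary, "-profile", profile_dir, "-no-remote"],
                            stdout=subprocess.PIPE, text=True)
    try:
        for line in proc.stdout:
            if line.startswith(marker):
                return int(line[len(marker):].strip())
    finally:
        proc.terminate()  # the startup test is done once the number arrives
    return None
</pre>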

'''worker''' - checks the database for addons; for each addon: get a profile with the addon installed, add a listener to the runner, run Firefox with the profile, have the listener collect the performance numbers, and put them in the database.
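
Tying the pieces together, a hypothetical worker loop; <tt>make_profile</tt> and <tt>run_firefox</tt> are the sketches above, not real interfaces:

<pre>
import os
import sqlite3
import tempfile

def work(db_path, firefox_binary):
    """For each stored addon, build a profile and record its startup time."""
    db = sqlite3.connect(db_path)
    db.execute("""CREATE TABLE IF NOT EXISTS results
                  (addon TEXT, version TEXT, ts_ms INTEGER)""")
    rows = db.execute("SELECT name, version, xpi FROM addons").fetchall()
    for name, version, xpi in rows:
        profile = tempfile.mkdtemp(prefix="addonperf-")
        xpi_path = os.path.join(profile, name + ".xpi")
        with open(xpi_path, "wb") as f:  # write the XPI back out of the db
            f.write(xpi)
        make_profile(profile, [xpi_path],
                     {"browser.dom.window.dump.enabled": True})
        ts = run_firefox(firefox_binary, profile)
        db.execute("INSERT INTO results VALUES (?, ?, ?)", (name, version, ts))
        db.commit()
</pre>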

'''numbers feed''' - numbers from the database, published for other people to consume.
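
The feed could start as nothing more than a CSV export of the results table; a sketch, assuming the schema used above:

<pre>
import csv
import sqlite3

def export_results(db_path, out_path="results.csv"):
    """Publish the results table as CSV for downstream consumers."""
    db = sqlite3.connect(db_path)
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["addon", "version", "ts_ms"])
        writer.writerows(db.execute(
            "SELECT addon, version, ts_ms FROM results ORDER BY ts_ms DESC"))
    db.close()
</pre>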
= Questions =

* Talos?
* cold startup
* jsbridge, results communicating

= Current Manual Testing =

== code ==

https://github.com/jonallengriffin/dirtyharry

== results ==

Top 500 addons

'''Linux:'''

* [http://github.com/jonallengriffin/dirtyharry/blob/master/results/results_sorted_linux0.csv sorted average Talos Ts]
* [http://github.com/jonallengriffin/dirtyharry/blob/master/results/raw_results_linux0.csv Talos Ts]

= Automated Testing =
 
== Limitations  ==
 
*no plans to allow addons to 'call home' - we will still be working in the Talos testing environment, where we are proxied to localhost, so there will be no live web interaction
*no current plans to interact with the addon (no clicks, no visiting specific pages)
**this sort of perf test would have to be designed/built per-addon to get the most bang for the buck
 
== Plans  ==
 
*integrate into buildbot and have it run on production machines
**how frequently?
**which tests?
***for now, we'll limit to clean ts starts with just the addon installed
**where would the list of addons be maintained?
**where do we download the addons from?
*results reported as .csv files
**where should these be sent?
**do we want the data on the graph server?
**how do we compare the results? (one possible comparison is sketched after this list)
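
As a sketch of one possible comparison (not a settled plan): compute each addon's overhead against a baseline run, reading two of the proposed .csv files. The file layout and column names are assumptions carried over from the sketches above:

<pre>
import csv

def compare(baseline_csv, addon_csv):
    """Print each addon's Ts overhead relative to a clean-profile baseline."""
    with open(baseline_csv) as f:
        # assumed single-row baseline produced by a run with no addons
        baseline = float(next(csv.DictReader(f))["ts_ms"])
    with open(addon_csv) as f:
        for row in csv.DictReader(f):
            overhead = float(row["ts_ms"]) - baseline
            print("%-30s +%.0f ms (%.0f%%)" % (
                row["addon"], overhead, 100 * overhead / baseline))
</pre>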
 
= Information for Add-on Authors =
 
*How to test for performance differences yourselves: [[Firefox/Projects/StartupPerformance/MeasuringStartup|Measuring Startup]].
*Authors of add-ons with the greatest performance impact are being contacted about this, almost always with suggestions on where to improve.
* [http://blog.mozilla.com/addons/2010/06/14/improve-extension-startup-performance/ How to Improve Extension Startup Performance].
