Auto-tools/Projects/AddonStartupPerf

= Current Manual Testing =
== code ==
http://github.com/harthur/dirtyharry


== results ==


Top 500 add-ons


'''Linux:'''
[http://github.com/harthur/dirtyharry/blob/master/results/results_sorted_linux0.csv sorted average Talos Ts]
[http://github.com/harthur/dirtyharry/blob/master/results/raw_results_linux0.csv Talos Ts]
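For reference, here is a minimal sketch of how the sorted-average file could be derived from the raw results. The column names (<code>addon</code>, <code>ts_ms</code>) are assumptions for illustration, not the actual schema of the linked CSVs.

<pre>
# Sketch: derive per-add-on average Ts from raw Talos runs.
# ASSUMED input layout: "addon,ts_ms" with one row per run; the real
# raw_results CSV may differ.
import csv
from collections import defaultdict

def sorted_average_ts(raw_csv_path):
    runs = defaultdict(list)
    with open(raw_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            runs[row["addon"]].append(float(row["ts_ms"]))
    # Average each add-on's runs, slowest first, as in the sorted file.
    averages = {addon: sum(ts) / len(ts) for addon, ts in runs.items()}
    return sorted(averages.items(), key=lambda item: item[1], reverse=True)

for addon, avg in sorted_average_ts("raw_results_linux0.csv"):
    print(f"{addon},{avg:.1f}")
</pre>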
= Automated Testing =
== Limitations ==
*no plans to allow add-ons to 'call home': we will still be working in the Talos testing environment, where we are proxied to localhost, so there will be no live web interaction (a sketch of this isolation follows the list)
*no current plans to interact with the add-on (no clicks, no visiting specific pages)
**this sort of perf test would have to be designed and built per add-on to get the most bang for the buck
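To make the localhost isolation concrete, here is a hedged sketch (not the actual Talos configuration) of Firefox proxy prefs that keep a test profile off the live web; any 'call home' request simply dead-ends at localhost.

<pre>
# Sketch: write a user.js whose proxy prefs send all HTTP(S) traffic
# to localhost, approximating the Talos environment. The pref values
# below are illustrative; the real harness configures profiles itself.
import json
import os

PREFS = {
    "network.proxy.type": 1,  # 1 = manual proxy configuration
    "network.proxy.http": "localhost",
    "network.proxy.http_port": 80,
    "network.proxy.ssl": "localhost",
    "network.proxy.ssl_port": 80,
}

def write_user_js(profile_dir):
    os.makedirs(profile_dir, exist_ok=True)
    with open(os.path.join(profile_dir, "user.js"), "w") as f:
        for name, value in PREFS.items():
            f.write(f"user_pref({json.dumps(name)}, {json.dumps(value)});\n")

write_user_js("/tmp/addon-test-profile")  # profile path is a placeholder
</pre>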
== Plans ==
*integrate into buildbot and have it run on production machines
**how frequently?  
**which tests?  
***for now, we'll limit this to clean Ts starts with just the add-on installed
**where would the list of add-ons be maintained?
**where do we download the add-ons from?
*results reported as .csv files  
**where should these be sent?  
**do we want the data on the graph server?  
**how do we compare the results? (one possible approach is sketched below)
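As one possible answer to the comparison question, a minimal sketch that reports the Ts delta between a clean baseline run and a run with the add-on installed. The file names, the single <code>ts_ms</code> column, and the 5% threshold are placeholders, not settled decisions.

<pre>
# Sketch: compare an add-on run against a clean baseline run.
# ASSUMED layout for both CSVs: a single "ts_ms" column, one row per run.
import csv

def mean_ts(path):
    with open(path, newline="") as f:
        values = [float(row["ts_ms"]) for row in csv.DictReader(f)]
    return sum(values) / len(values)

def compare(baseline_csv, addon_csv, threshold_pct=5.0):
    base = mean_ts(baseline_csv)
    addon = mean_ts(addon_csv)
    delta_pct = (addon - base) / base * 100
    verdict = "REGRESSION" if delta_pct > threshold_pct else "ok"
    print(f"baseline {base:.1f} ms, with add-on {addon:.1f} ms "
          f"({delta_pct:+.1f}%): {verdict}")

compare("baseline_ts.csv", "addon_ts.csv")  # file names are placeholders
</pre>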
 
= Information for Add-on Authors =
 
*How to test for performance differences yourselves: [[Firefox/Projects/StartupPerformance/MeasuringStartup|Measuring Startup]] (a rough sketch of the technique follows this list).
*Authors of add-ons with the greatest performance impact are being contacted about this, almost always with suggestions on where to improve.
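For a rough feel of the timestamp-in-the-URL technique described on that page, here is a hedged sketch: launch Firefox against a local page that subtracts the launch time (passed in the URL fragment) from the time the page runs. The Firefox path is a placeholder, and the linked wiki page remains the authoritative procedure.

<pre>
# Sketch: crude startup timing via a timestamp in the URL fragment.
import os
import subprocess
import tempfile
import time

# Page that reports page-run time minus launch time from the URL hash.
PAGE = """<html><body><script>
document.write((Date.now() - parseInt(location.hash.slice(1))) + ' ms');
</script></body></html>"""

def time_startup(firefox="/usr/bin/firefox"):  # binary path is a placeholder
    path = os.path.join(tempfile.mkdtemp(), "startup.html")
    with open(path, "w") as f:
        f.write(PAGE)
    launch_ms = int(time.time() * 1000)
    # The elapsed startup time is displayed in the loaded page.
    subprocess.Popen([firefox, f"file://{path}#{launch_ms}"])

time_startup()
</pre>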