Dromaeo

The Dromaeo JavaScript Performance Test Suite. Named after the Dromaeosaurs - or 'running lizard' - and created by John Resig (jresig at mozilla.com).

How to Use It

Dromaeo: http://dromaeo.com/

When you visit the main Dromaeo page you are presented with a collection of tests which can be run. Generally these tests are designed to be more "real world" in nature (testing a number of features simultaneously). Each test has a full description explaining what it does, along with an indication of what is being tested.

If you don't wish to run all of the available tests, and would rather run only a subset of them, you can filter the tests via the URL, like so:

Full regular expression support is available, so something like the following will work as well:
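
The exact filter syntax isn't reproduced here, but a minimal sketch of the idea - assuming the query string holds a regular expression matched against test names - might look like this (the parameter handling and test names are assumptions for illustration, not the suite's actual code):

  // Hypothetical test names; the real suite defines its own list.
  var allTests = ["3d-mesh", "string-base64", "dom-query", "dom-traverse"];

  // Treat everything after the "?" as a regular expression filter.
  var filter = decodeURIComponent(window.location.search.replace(/^\?/, ""));

  // Run only the tests whose names match the filter (or all tests if no filter).
  var selected = filter
    ? allTests.filter(function (name) { return new RegExp(filter).test(name); })
    : allTests;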

Running the Tests

To begin you can run the tests by simply hitting the 'Run' button. It should be capable of running in Firefox 2+, Safari 3+, Opera 9+, and Internet Explorer 6+. Please report any problems that you may have.

You can pause/resume the tests at any time, using the 'Pause' button (it won't affect the final numbers).

Viewing the Results

After the tests have finished running you'll be presented with a full breakdown of the tests (including sub-tests). Each individual sub-test will have its mean value presented in conjunction with an error spread of the results.

Additionally, you'll be given a URL for your results, saved on the server, which you can refer back to at any point, like the following:

Once you've received a URL (and associated unique ID) for your test results, you can use it to compare against other result sets, like so:

Downloading the Suite

If you wish to run the suite offline you can download it from the following location:

The result should be identical to the suite running on dromaeo.com. You'll need to have PHP support if you wish to save results to the central server.

Methodology

There are a number of techniques that the Dromaeo suite uses in order to achieve accurate results. Together they provide a solid foundation for meaningful performance analysis.

Versioning

All tests have an automatic version number, meaning that when the contents of a test change, its results will no longer be used for comparison against mismatched results. This is handled automatically by the suite. For example, if you run the test "3D Mesh Transformation v115" and then (at some later point, after changes) run "3D Mesh Transformation v116" and try to compare the results - no comparison will be allowed.

This is especially important because it allows a tangible upgrade process to exist for tests, since bugs or adjustments will inevitably arise and cause some amount of conflict. With versioning baked completely into the process, updated tests can be introduced without their results ever being compared against stale ones.
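
A rough sketch of how such a version check could work (illustrative only - the field names are assumptions, not the suite's actual code):

  // Comparison is only permitted when both stored results carry the same
  // test name and version number.
  function canCompare(resultA, resultB) {
    return resultA.testName === resultB.testName &&
           resultA.version === resultB.version;
  }

  // e.g. comparing "3D Mesh Transformation v115" against v116:
  canCompare({ testName: "3D Mesh Transformation", version: 115 },
             { testName: "3D Mesh Transformation", version: 116 });  // => false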

Saved Results

All test results are automatically saved to the server and stored in a database for later retrieval and analysis. This is important for a number of reasons:

  • Encoding test result data in the URL (such as via a query string) is not a scalable solution - and provides too little granularity.
  • Server-side storage provides effectively limitless room and can hold additional information for later use (such as min, max, and deviation).
  • The simple URLs are easy to pass around and are quickly identifiable.
  • Simple test IDs make test comparisons trivial for the end user.

This solution is much more scalable and highly usable, making it a good long-term approach.
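
As a hedged illustration, a saved record might hold something like the following - the field names and values here are assumptions for the sake of example, not the actual database schema:

  var savedResult = {
    id: 42,                            // simple numeric ID used in the result URL
    test: "3D Mesh Transformation",
    version: 115,
    runs: [318, 322, 315, 330, 319],   // individual timings from each run
    mean: 320.8,
    min: 315,
    max: 330,
    deviation: 5.7                     // sample standard deviation of the runs
  };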

Statistical Confidence

A number of checks are put in place to make sure that statistically significant results can be provided to the user. Some of the techniques used are:

  • All tests are run at least 5 times. If a sufficiently small margin of error cannot be reached, more runs are completed (up to 10).
  • All results are fit to a t-distribution, set to a 95% confidence interval (using standard t-table values). 95% confidence (+/-) error intervals are provided with all results.
  • All comparisons between results take the error intervals into account - declaring a tie for result sets whose error ranges overlap.

The net result is a system that's able to provide consistent results and present them in a fair, and meaningful, way.
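
A rough sketch of the statistics described above, using standard two-sided 95% t-values and overlap-based tie detection (this is illustrative, not the suite's actual code):

  // Two-sided 95% t-values indexed by degrees of freedom (n - 1), for n = 5..10 runs.
  var T95 = { 4: 2.776, 5: 2.571, 6: 2.447, 7: 2.365, 8: 2.306, 9: 2.262 };

  function confidenceInterval(runs) {
    var n = runs.length, mean = 0, variance = 0, i;
    for (i = 0; i < n; i++) mean += runs[i];
    mean /= n;
    for (i = 0; i < n; i++) variance += (runs[i] - mean) * (runs[i] - mean);
    variance /= (n - 1);
    var error = T95[n - 1] * Math.sqrt(variance / n);   // +/- margin of error
    return { mean: mean, error: error };
  }

  // Two result sets are considered statistically tied if their intervals overlap.
  function statisticallyTied(runsA, runsB) {
    var a = confidenceInterval(runsA), b = confidenceInterval(runsB);
    return Math.abs(a.mean - b.mean) <= a.error + b.error;
  }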

Script Speed vs. Rendering Speed

When analyzing JavaScript performance it's important to make sure that you're measuring only the immediate performance of the script. Accidentally including work that triggers rendering can introduce slowdowns from unrelated parts of the browser.

For example, in the SunSpider suite tests were loaded within iframes, with timestamp-logging calls initiated before and after each test completed. When coalesced updates landed for Firefox 3, a significant improvement in the SunSpider results was found - even though the change had absolutely nothing to do with JavaScript performance, only with actual rendering speed.

To combat this issue all tests are pre-loaded by the browser and are run on demand. This adds no extra overhead and yields much more accurate results.
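
A simplified sketch of that approach, assuming a preload queue and timing only the test function itself (the names, including the reportResult hook, are illustrative rather than the suite's actual API):

  var pending = [];

  function preloadTest(name, fn) {
    pending.push({ name: name, fn: fn });   // loaded up front, nothing runs yet
  }

  function runNext() {
    var test = pending.shift();
    if (!test) return;
    var start = new Date().getTime();
    test.fn();                              // only the script itself is inside the timed window
    var elapsed = new Date().getTime() - start;
    reportResult(test.name, elapsed);       // hypothetical reporting hook
    setTimeout(runNext, 0);                 // yield to the browser between tests
  }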

Test Sources

Tests are brought in from a number of locations. Generally, however, tests are chosen based upon how complicated they are and how applicable the result is to practical JavaScript code.

The vast majority of the tests are brought in from the Computer Language Shootout and adapted to run within the suite (often with greater levels of complexity).

All the tests in this suite are also capable of running in the JavaScript Engine Speed suite released last year, the results of which can be seen here.