Mobile/Fennec Unittests

From MozillaWiki
This is a basic page that will be a central spot to learn more about unit tests on Fennec.
== Overview ==
 
[[Mobile/Fennec|Fennec]] unittests have evolved over time.  Originally we ported the tests to run entirely [[https://wiki.mozilla.org/Mobile/Fennec_Unittests#On_Device on device]] by installing Python on our phone (a Nokia N810) and running the tests there.
 
In recent times, we have found that running everything on the device is not always possible or the best idea, so we have started to run tests through a [[https://wiki.mozilla.org/Mobile/Fennec_Unittests#Remote_Testing Remote Testing]] setup.
 
The last piece of this puzzle is to make [[https://wiki.mozilla.org/Mobile/Fennec_Unittests#Reporting Reporting]] useful by having no failures.


== General ==
Fennec will run all the unittests that come with Firefox with few exceptions.  We are interested in:


* Mochitest
* Mochitest-Chrome
* Mochitest-Browser-Chrome (used for fennec specific tests)
* Reftest
* Crashtest
* XPCShell
* [[https://wiki.mozilla.org/Mobile/Fennec_TestDev#NSPR_Unit_Tests NSPR]]
 
In general there will be some tests that are unique to Fennec and will not run on Firefox, for example automating the tab strip, the bookmark manager, or panning and zooming.  We will store all the Fennec-specific unittests in [[http://mxr.mozilla.org/mobile-browser/source/chrome/tests/ mobile-browser/chrome/tests]].
 
On the flip side there are specific tests in Firefox that we will want to exclude from Fennec.  For example:
* RSS feeds (not supported for Fennec at the moment)
* browser-chrome and some chrome tests (chrome elements are different)
* private browsing (not supported for Fennec at the moment)
 
These excluded tests are tracked in {{bug|464081}}.  Ideally they will not be included in 'make package-tests'.  This will be done by wrapping the tests in #ifdef's that exclude them: for example, if an #ifdef removes the RSS feed source, the same #ifdef should remove the feed tests.  In general, all tests that we don't want should live in browser/, since we should have no problem sharing the rest of the code.
 
== On Device ==
 
Originally we battled to get the tests running outside of a build tree and to resolve some limited-resource issues.  [[https://wiki.mozilla.org/Mobile/Fennec_Unittests/On_Device On Device]] testing is done for the Maemo platform (N810 and now N900), and our automated Tinderbox runs use this method.
 
These techniques are also still the most reliable method for running performance tests on a device.
 
== Remote Testing ==
 
Remote testing was designed as a solution for Windows Mobile, where we had no reliable way to run Python or a local webserver, or even enough resources to do so.
 
This method requires a small, lightweight agent which runs on the device, while all the tests run on a host machine (like your desktop or an objdir).  The test harnesses have options to run through a proxy (devicemanager.py) which talks to the agent on the device.  The tests themselves have also been adjusted to work with an arbitrary webserver (your desktop, set up via the test harness scripts).
 
[[https://wiki.mozilla.org/Mobile/Fennec_Unittests/Remote_Testing Here are the details]] of requirements, setup, usage, and remaining work items.
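The host/device split described above can be sketched roughly as follows.  This is an illustrative Python mock, not the real devicemanager.py API; the names (FakeAgent, DeviceManagerSketch, push_file, launch_process) are invented for the example.

```python
# Sketch of the remote-testing proxy pattern: the harness on the host
# talks to a lightweight agent on the device through a proxy object,
# instead of touching the device directly.  All names here are
# hypothetical stand-ins, not the actual devicemanager.py interface.

class FakeAgent:
    """Stands in for the lightweight agent running on the device."""
    def __init__(self):
        self.files = {}

    def handle(self, command, *args):
        if command == "push":
            name, data = args
            self.files[name] = data        # pretend to write to the device
            return "OK"
        if command == "exec":
            (cmd,) = args
            return "ran: %s" % cmd         # pretend to launch a process
        return "unknown command"

class DeviceManagerSketch:
    """Host-side proxy: the harness calls these methods, and the proxy
    forwards each request to the on-device agent."""
    def __init__(self, agent):
        self.agent = agent

    def push_file(self, name, data):
        return self.agent.handle("push", name, data)

    def launch_process(self, cmd):
        return self.agent.handle("exec", cmd)

dm = DeviceManagerSketch(FakeAgent())
print(dm.push_file("mochitest.ini", b"[test_foo.html]"))
print(dm.launch_process("fennec http://host:8888/tests"))
```

The point of the indirection is that the harness code stays identical whether the "device" is a phone reached over the network or a local mock like the one above.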
 
== Desktop ==
 
This is the easiest setup to start working with and things should work well on desktop before moving to device.


If you are on Windows, you need the Mozilla build tools described at [[https://developer.mozilla.org/en/Windows_Build_Prerequisites Windows Prerequisites]], available as an [[http://ftp.mozilla.org/pub/mozilla.org/mozilla/libraries/win32/MozillaBuildSetup-1.4.exe .exe installer]].


To get started with unittests, download a fennec build and a tests build.  These can be found on the [[http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-mobile-1.9.2/ ftp server]]:
* [[http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-mobile-1.9.2/fennec-1.0b6pre.en-US.win32.zip Windows Desktop Fennec Build]]
* [[http://ftp.mozilla.org/pub/mozilla.org/firefox/tinderbox-builds/mozilla-1.9.2-win32-unittest/1258636588/firefox-3.6b4pre.en-US.win32.tests.tar.bz2 win32 tests.tar.bz2]]
** NOTE: these are the regular desktop tests; we don't build win32 desktop tests specifically for fennec, and there are very few differences in the test packages

Next unpack these to a local directory such as c:\tests (so that you have a c:\tests\fennec, c:\tests\mochitest, c:\tests\reftest, c:\tests\xpcshell, etc. directory structure).


Now in your shell (use c:\mozilla-build\start-msvc*.bat to get mingw32 on windows), cd to c:\tests and we can start running tests:
*mochitest
**python mochitest/runtests.py --appname=fennec/fennec.exe --xre-path=fennec/xulrunner --certificate-path=certs --utility-path=bin --autorun --logfile mochitest.log --close-when-done
*reftest
**python reftest/runreftest.py --appname=fennec/fennec.exe reftest/tests/layout/reftests/reftest.list
*crashtest
**python reftest/runreftest.py --appname=fennec/fennec.exe reftest/tests/testing/crashtest/crashtests.list
*xpcshell
**python xpcshell/runxpcshelltests.py --manifest=xpcshell/tests/all-test-dirs.list fennec/xulrunner/xpcshell.exe
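The invocations above can be wrapped in a small driver script so you don't have to retype them.  This is a convenience sketch, not part of the test packages; by default it only prints the commands (dry run), and the paths assume the c:\tests layout from this page.

```python
# Sketch of a driver that runs the suites above in order from c:\tests.
# The commands mirror this page; dry_run=True (the default) just prints
# them so the script is safe to inspect before actually running anything.
import subprocess

APP = "fennec/fennec.exe"
SUITES = {
    "mochitest": ["python", "mochitest/runtests.py",
                  "--appname=" + APP, "--xre-path=fennec/xulrunner",
                  "--certificate-path=certs", "--utility-path=bin",
                  "--autorun", "--logfile", "mochitest.log",
                  "--close-when-done"],
    "reftest":   ["python", "reftest/runreftest.py", "--appname=" + APP,
                  "reftest/tests/layout/reftests/reftest.list"],
    "crashtest": ["python", "reftest/runreftest.py", "--appname=" + APP,
                  "reftest/tests/testing/crashtest/crashtests.list"],
    "xpcshell":  ["python", "xpcshell/runxpcshelltests.py",
                  "--manifest=xpcshell/tests/all-test-dirs.list",
                  "fennec/xulrunner/xpcshell.exe"],
}

def run_suites(dry_run=True, cwd="c:/tests"):
    for name, cmd in SUITES.items():
        print("[%s] %s" % (name, " ".join(cmd)))
        if not dry_run:
            subprocess.call(cmd, cwd=cwd)   # actually launch the harness

run_suites()  # dry run: prints the four commands
```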


== Desktop (Build Tree) ==


TODO


== Maemo ==
Automation on Maemo is fairly straightforward.  There have been two major changes in order to get this working:
* tests running outside of the source tree - {{bug|421611}} - resolved
* splitting tests into smaller chunks - resolved by maemkit
Other issues that have surfaced are the need for fonts (Hebrew fonts are required for reftest, {{bug|471711}}) and the need for multiple devices to run the tests faster (one device takes 26 hours for a debug run, 12 hours for release).
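The chunk-splitting that maemkit resolved can be sketched as follows; chunk_tests is an invented helper for illustration, not maemkit's actual interface.

```python
# Sketch of splitting one long test run into smaller chunks, so a slow
# device can work through them a piece at a time (or several devices
# can share the load).  Hypothetical helper, not maemkit's real API.

def chunk_tests(tests, total_chunks, this_chunk):
    """Return the 1-based `this_chunk` of `total_chunks` roughly equal
    slices of `tests`."""
    if not 1 <= this_chunk <= total_chunks:
        raise ValueError("this_chunk out of range")
    per = -(-len(tests) // total_chunks)   # ceiling division
    start = (this_chunk - 1) * per
    return tests[start:start + per]

tests = ["test_%03d" % i for i in range(10)]
print(chunk_tests(tests, 4, 1))  # first three tests
print(chunk_tests(tests, 4, 4))  # the leftover tail
```

Each device (or each pass on one device) then runs only its own slice, and the harness logs are concatenated afterward.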
What is left is:
* Stabilizing the results that are generated from tinderbox.
* Fixing the existing bugs.
* Investigating all unknown failures.
* Providing an out-of-band toolset to diff results between runs
** it is difficult to compare a test run of 75000 tests between two runs
** need to establish a baseline (will be moving)
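The diff toolset described in the last bullet could look something like this minimal sketch; diff_runs and the status strings are hypothetical, not an existing tool.

```python
# Sketch of diffing two test runs: given pass/fail maps from a baseline
# run and a current run, report only the tests whose status changed,
# so a 75000-test run can be compared at a glance.  Data is illustrative.

def diff_runs(baseline, current):
    """Return {test: (old_status, new_status)} for every changed test."""
    changed = {}
    for test in set(baseline) | set(current):
        old = baseline.get(test, "missing")
        new = current.get(test, "missing")
        if old != new:
            changed[test] = (old, new)
    return changed

baseline = {"test_a": "PASS", "test_b": "FAIL", "test_c": "PASS"}
current  = {"test_a": "PASS", "test_b": "PASS", "test_d": "FAIL"}
print(diff_runs(baseline, current))
```

Tests that appear or disappear between runs show up as "missing" on one side, which is exactly the noise a moving baseline produces.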


== Windows Mobile ==


Windows Mobile is not under active development anymore, and there are no working builds of Windows Mobile Fennec.  We are leaving this section here in case the OS changes and we pick Windows Mobile back up.
 
We went down a couple paths here, but ended up with the Remote Testing solution.  This is fully implemented for Windows Mobile with a fully functioning agent and working tests.
 
== Android ==
 
Initial development is underway for both the browser and the tests.
 
== Reporting ==
 
Currently there are thousands of test failures when running the Firefox tests on Fennec.  With so many failures it is next to impossible to detect if a checkin caused a new test to fail.  As a result nobody really looks at the results of the automation.
 
A few months ago we sat down and decided to fix all the issues with [[https://wiki.mozilla.org/Mobile/Fennec_Unittests/green reftest, crashtest, and xpcshell]].  We have made great progress on this, but still have a long way to go.
 
Once that is done, we need to revisit the mochitest, chrome and browser-chrome tests.  The technique we will use in the short term is [[https://elvis314.wordpress.com/2010/04/27/filtering-mochitests-for-remote-testing/ filtering out failing tests]] and only running ones that pass.
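A minimal sketch of that short-term filtering approach, assuming a known-failures list; the function and file names here are hypothetical, not the real harness hook.

```python
# Sketch of filtering out known-failing tests so the automation only
# runs tests expected to pass, making any new failure stand out.
# KNOWN_FAILURES and the test file names are illustrative.

KNOWN_FAILURES = {"test_feeds.html", "test_private_browsing.html"}

def filter_manifest(all_tests, known_failures=KNOWN_FAILURES):
    """Return the tests we actually run, skipping known failures."""
    return [t for t in all_tests if t not in known_failures]

manifest = ["test_tabs.html", "test_feeds.html", "test_pan_zoom.html"]
print(filter_manifest(manifest))  # ['test_tabs.html', 'test_pan_zoom.html']
```

The trade-off is that known-bad tests stop being tracked at all, so the failure list needs to be revisited as features (feeds, private browsing) land on Fennec.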

Latest revision as of 21:02, 7 May 2010