'''Repository''': https://github.com/mozilla/mozbase

'''Documentation''': http://mozbase.readthedocs.org

'''Bugs''':

''Mozbase requires python 2.5, 2.6, or 2.7.''

'''However, Talos still requires compatibility with python 2.4 ({{bug|734466}}), so mozbase dependencies in http://hg.mozilla.org/build/talos/file/tip/setup.py *must* be kept compatible with python 2.4.'''
== Packages ==

MozBase is composed of several [https://developer.mozilla.org/en/Python python] packages. These packages work together to form the basis of a test harness:

* Firefox is launched via [https://github.com/mozilla/mozbase/tree/master/mozrunner mozrunner],
** which sets up a profile with preferences and extensions using [https://github.com/mozilla/mozbase/tree/master/mozprofile mozprofile]
** and runs the application under test using [https://github.com/mozilla/mozbase/tree/master/mozprocess mozprocess]
* [https://github.com/mozilla/mozbase/tree/master/mozinstall mozinstall] is used to install the test application
* A test harness may direct Firefox to load web pages; for testing, these may be served using [https://github.com/mozilla/mozbase/tree/master/mozhttpd mozhttpd]
* The machine environment is introspected by [https://github.com/mozilla/mozbase/tree/master/mozinfo mozinfo]
* A test manifest may be read to determine the tests to be run; these manifests are processed by [https://github.com/mozilla/mozbase/tree/master/manifestdestiny ManifestDestiny]
* For mobile testing, the test runner communicates with the test agent using [https://github.com/mozilla/mozbase/tree/master/mozdevice mozdevice]
=== Process Management - mozprocess package ===

Cross-platform process management.

See [https://github.com/mozilla/mozbase/blob/master/mozprocess/mozprocess/processhandler.py processhandler.py] for the mozprocess API.

'''Goals:'''

* ability to reliably terminate processes across platforms

'''Status:'''

* implemented: https://github.com/mozilla/mozbase/tree/master/mozprocess
* [https://github.com/mozilla/mozbase/blob/master/mozprocess/README.md documentation]
* [https://github.com/mozilla/mozbase/tree/master/mozprocess/tests tests]
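The termination goal can be sketched with the standard library's <tt>subprocess</tt> module (modern Python shown for brevity; this is illustrative only, and the real mozprocess <tt>ProcessHandler</tt> API is richer, handling process groups and streaming output):

```python
import subprocess
import sys

def run_and_wait(cmd, timeout=30):
    """Run a command and reliably terminate it if it exceeds the timeout.

    Minimal sketch only: the real mozprocess also kills whole process
    trees and streams output line by line while the process runs.
    """
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    try:
        out, _ = proc.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()  # force-terminate on timeout, then reap the process
        out, _ = proc.communicate()
    return proc.returncode, out

# run a trivial child process to completion
code, out = run_and_wait([sys.executable, '-c', 'print("hello")'])
```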
=== Profile Management - mozprofile package ===

The mozprofile package is complete, with the exception that it might need the ability to install plugins as well as addons. We may expand these interfaces to use a packaged xpcshell to insert state-specific items into the profile (e.g. fire up xpcshell and use JS + XPCOM to create a set of bookmarks).

You can find the code in the [https://github.com/mozilla/mozbase/tree/master/mozprofile MozProfile package].

'''Status:'''

* implemented: https://github.com/mozilla/mozbase/tree/master/mozprofile
* [https://github.com/mozilla/mozbase/blob/master/mozprofile/README.md documentation]
* [https://github.com/mozilla/mozbase/tree/master/mozprofile/tests tests]
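The core of the preference handling described above can be sketched in a few lines (illustrative only, not the mozprofile API; the real package also handles addons, permissions, and profile cleanup):

```python
import os
import tempfile

def write_prefs(profile_dir, prefs):
    """Write preferences into a profile's user.js (minimal sketch).

    This only shows the core user.js format Firefox reads at startup;
    mozprofile itself does much more.
    """
    path = os.path.join(profile_dir, 'user.js')
    with open(path, 'w') as f:
        for name, value in sorted(prefs.items()):
            # booleans are written bare as true/false, strings are quoted
            if isinstance(value, bool):
                value = 'true' if value else 'false'
            elif isinstance(value, str):
                value = '"%s"' % value
            f.write('user_pref("%s", %s);\n' % (name, value))
    return path

profile = tempfile.mkdtemp()
prefs_path = write_prefs(profile, {'browser.shell.checkDefaultBrowser': False,
                                   'browser.startup.homepage': 'about:blank'})
written = open(prefs_path).read()
```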
==== Profile Creation in Existing Mozilla Python Code ====

Currently several test harnesses modify profiles. The collected knowledge should be upstreamed to the mozprofile package, and the existing code made to use this package.

1) [https://wiki.mozilla.org/Buildbot/Talos Talos] has profile directories with prefs.js and other related files already in them. We copy those to the tmp dir and point Firefox at it. In addition, we create a user.js file from prefs stored in the .config file. The last thing we copy is the extensions (such as pageloader). {{bug|694638}}

2) reftest creates a profile by writing user.js and putting files in the folder to set up the extension. It runs Firefox with the -silent flag, which allows for full registration of the reftest handler (as we have a cli flag that needs to be parsed).

3) mochitest follows suit and creates a profile, but has a HUGE pac config and a lot of permissions.sqlite insertions. While that is the extent of the complexity (prefs.js, user.js, permissions.sqlite, extensions), it is a lot of stuff that is all hacked into automation.py.in.

These and other test harnesses should be brought up to speed to use mozprofile and, correspondingly, mozprofile built out to fill the needs of the harnesses.
=== Platform Information - mozinfo package ===

mozinfo wraps python utilities that gather system information.

'''Status:'''

* implemented: https://github.com/mozilla/mozbase/tree/master/mozinfo
* [https://github.com/mozilla/mozbase/blob/master/mozinfo/README.md documentation]
* duplicate in mozilla-central: http://mxr.mozilla.org/mozilla-central/source/build/mozinfo.py
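The kind of introspection involved can be sketched with the standard library (a sketch of the sort of data mozinfo exposes, not its actual API; the key names below are illustrative):

```python
import platform
import sys

def gather_info():
    """Collect basic machine facts, normalizing the OS name the way a
    test harness would want ('linux'/'mac'/'win')."""
    system = sys.platform
    if system.startswith('linux'):
        os_name = 'linux'
    elif system == 'darwin':
        os_name = 'mac'
    elif system in ('win32', 'cygwin'):
        os_name = 'win'
    else:
        os_name = system
    return {
        'os': os_name,
        'processor': platform.machine(),
        'python_version': platform.python_version(),
        # sys.maxsize distinguishes 32-bit from 64-bit interpreters
        'bits': 64 if sys.maxsize > 2 ** 32 else 32,
    }

info = gather_info()
```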
=== Test Manifests - ManifestDestiny package ===

We have one manifest parser that is used across the test systems. The parser reads ".ini" files in which each section header represents a test path and the section's key-value pairs are the test's metadata, for consumption by the caller. The code is in a working state and in use; it is found in the [https://github.com/mozilla/mozbase/tree/master/manifestdestiny/ ManifestDestiny package]. The project page is at https://wiki.mozilla.org/Auto-tools/Projects/ManifestDestiny

'''Status:'''

* Implemented: https://github.com/mozilla/mozbase/tree/master/manifestdestiny/
* Documentation: https://github.com/mozilla/mozbase/blob/master/manifestdestiny/README.md
* Tests: https://github.com/mozilla/mozbase/tree/master/manifestdestiny/tests
* Used by: [https://wiki.mozilla.org/Auto-tools/Projects/Mozmill mozmill], xpcshell

Currently reftest uses a different syntax. These are planned to be unified in the future.
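The manifest shape described above — section header as test path, key-value pairs as metadata — can be sketched with the standard library's ini parser (the manifest contents and keys below are made up for illustration; ManifestDestiny itself has its own parser with extra features such as includes and conditional expressions):

```python
import configparser

MANIFEST = """\
[test_login.py]
skip-if = os == 'win'

[test_search.py]
disabled = bug 123456
"""

def read_manifest(text):
    """Parse a manifest where each section is a test path and the
    key-value pairs are that test's metadata."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    # one dict per test, with the section name carried along
    return [dict(parser[section], name=section)
            for section in parser.sections()]

tests = read_manifest(MANIFEST)
```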
=== Android Device Access - mozdevice package ===

This provides an abstraction called DeviceManager, useful for interacting with an Android device. There are two variants of this class: one which allows you to interact with an agent process on the device using a custom TCP/IP protocol (DeviceManagerSUT), and another that allows you to interact with the device using Android's adb interface.

Because DeviceManager is the backbone of mobile and B2G automated testing, always run your changes through [[TryServer|try]]. To save time and resources, you can run just the Android tests with the [[Build:TryChooser|TryChooser]] syntax "try: -b o -p android -u all -t none".

'''Status:'''

* implemented: https://github.com/mozilla/mozbase/tree/master/mozdevice
* documentation: ''NEEDED''
* tests: https://github.com/mozilla/mozbase/tree/master/mozdevice/tests , https://github.com/mozilla/mozbase/tree/master/mozdevice/sut_tests
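The adb-backed variant boils down to building and running adb command lines. A minimal sketch (class and method names here are hypothetical, not the mozdevice API; command construction is shown without actually invoking adb):

```python
class ADBDevice(object):
    """Tiny sketch of an adb-backed device wrapper. Commands are built
    as argument lists, which a real implementation would execute via
    subprocess and whose output it would parse."""

    def __init__(self, serial=None, adb='adb'):
        self.adb = adb
        self.serial = serial

    def _cmd(self, *args):
        # target a specific device when a serial number is given
        base = [self.adb]
        if self.serial:
            base += ['-s', self.serial]
        return base + list(args)

    def push_cmd(self, local, remote):
        return self._cmd('push', local, remote)

    def shell_cmd(self, command):
        return self._cmd('shell', command)

dev = ADBDevice(serial='emulator-5554')
```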
=== Unified Logging - mozlog package ===

This package will need to be implemented in both JavaScript and Python so that it is accessible from both sides of the test harnesses and we can get away from hand-formatted "print" and "dump" statements.

See also: https://developer.mozilla.org/en/Test_log_format

'''Status:'''

* Initial development is at https://github.com/mozilla/mozbase/tree/master/mozlog
* Partially implemented in [https://github.com/mozautomation/mozmill/blob/master/mozmill/mozmill/logger.py Mozmill] and mochitest's [http://mxr.mozilla.org/mozilla-central/source/testing/mochitest/tests/SimpleTest/MozillaFileLogger.js MozillaFileLogger].
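The idea of replacing hand-formatted print statements with a single structured line format can be sketched on top of the standard <tt>logging</tt> module (the line format below is illustrative; see the Test_log_format page for the actual specification):

```python
import io
import logging

def make_test_logger(stream):
    """Build a logger that emits one structured line per test result."""
    logger = logging.getLogger('mozlog-sketch')
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler(stream)
    handler.setFormatter(logging.Formatter('%(message)s'))
    logger.handlers = [handler]  # replace any handlers from earlier runs
    return logger

def log_result(logger, status, test, message=''):
    # "TEST-<STATUS> | <test name> | <message>" -- every harness emits
    # the same shape, so log consumers need only one parser
    logger.info('TEST-%s | %s | %s' % (status, test, message))

buf = io.StringIO()
log = make_test_logger(buf)
log_result(log, 'PASS', 'test_foo', 'ok')
```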
=== Test Instantiation - moztest package ===

Moztest is a package that allows you to store and process test results.

You can use the classes in the <tt>results</tt> submodule to store results, and then the classes in the <tt>output</tt> subpackage to get useful representations of them (for example xUnit, Autolog).

You can store environment data (e.g. the OS that was used) using the <tt>TestContext</tt> class.

Moztest supports two ways of storing test results: either creating them live, while running the tests, or creating them after the tests have been run.

==== Creating the results objects while running the tests ====

1. Instantiate a <tt>TestResult</tt> object:
<pre>
t = TestResult('example', test_class='doc', context=TestContext(product='Kuma'), result_expected='PASS')
</pre>

The test's <tt>time_start</tt> property will be set to the current time.

2. Finalize the object (assuming the test passed):
<pre>
t.finish('PASS')
</pre>

The test's <tt>time_end</tt> property will be set to the current time as expected.

Or, if the test failed:

<pre>
t.finish('FAIL', output=['Traceback:', 'Line ..', ..], reason='AssertionError: True is not False')
</pre>

After you call <tt>finish()</tt>, the test's <tt>result</tt> property will contain the standard string for the corresponding result, for example <tt>UNEXPECTED-FAIL</tt>.

Also, you can use the <tt>duration</tt> property to see how long a test took.

You manage test result data by using a <tt>TestResultCollection</tt>. The <tt>output</tt> subpackage expects <tt>TestResultCollection</tt> objects as well. These behave like lists; in fact, they are lists.

The general pattern is this:

<pre>
collection = TestResultCollection(suite_name='Example test run')
for test in tests_to_run:
    tr = TestResult(...)
    # run test..
    tr.finish(...)
    collection.append(tr)
    # not automated because of the multiple ways of adding items to a list
    collection.time_taken += tr.duration
</pre>
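The start/finish/duration lifecycle described above can be illustrated with a minimal stand-in class (this is not the real moztest implementation; only the property names follow the description above, and the real class takes more parameters):

```python
import time

class TestResult(object):
    """Minimal stand-in showing the TestResult lifecycle: time_start is
    captured at construction, finish() records the outcome and end time."""

    def __init__(self, name, result_expected='PASS'):
        self.name = name
        self.result_expected = result_expected
        self.time_start = time.time()
        self.time_end = None
        self.result = None

    def finish(self, result):
        self.time_end = time.time()
        # a failure that was expected to pass becomes UNEXPECTED-FAIL,
        # matching the standard result strings mentioned above
        if result == 'FAIL' and self.result_expected == 'PASS':
            self.result = 'UNEXPECTED-FAIL'
        else:
            self.result = result

    @property
    def duration(self):
        return (self.time_end or time.time()) - self.time_start

t = TestResult('example')
t.finish('PASS')
```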
==== Creating the results objects after running the tests ====

One use case would be: you have some <i>insert test harness here</i> results and you want to generate an xUnit file with the data.

The pattern differs depending on the type of the results.

If they are python <tt>unittest</tt>-based results (e.g. <tt>Marionette</tt>), you can use a convenience classmethod of <tt>TestResultCollection</tt>:

<pre>
# assuming results is an iterable (say, a list) of unittest result objects
collection = TestResultCollection.from_unittest_results(*results)
# you can optionally specify a context parameter as well
</pre>

If the results you have are not based on python's <tt>unittest</tt> results (for example, <tt>XPCShellTest</tt> results are not), the general pattern is something like this (the way in which you get the relevant data from the test may vary):

<pre>
collection = TestResultCollection('example suite')
for result in results:  # these would be the results you already have
    duration = result.get('time', 0)  # or some other way of finding the duration

    # figure out the result of the test
    if 'skipped' in result:
        outcome = 'SKIPPED'
    elif 'todo' in result:
        outcome = 'KNOWN-FAIL'
    elif result['passed']:
        outcome = 'PASS'
    else:
        outcome = 'UNEXPECTED-FAIL'

    # find its output, and maybe even the reason
    output = None
    if 'failure' in result:
        output = result['failure']['text']

    # if you only know the duration (no start & end times), just pass in 0 as the start time
    t = TestResult(name=result['name'], test_class='ExampleTestClass',
                   time_start=0, context=context)
    # pass in the result and Moztest will infer your expected and actual results
    t.finish(result=outcome, time_end=duration, output=output)

    collection.append(t)
    collection.time_taken += duration
</pre>
==== Managing test data - TestResultCollection's methods ====

<pre>
# get a list of unique test contexts used
contexts = collection.contexts

# get a list of tests that had errors
errors = collection.tests_with_result('ERROR')

# get a list of tests that took longer than one minute
long_tests = collection.filter(lambda t: t.duration > 60)

# get a new TestResultCollection with a subset of the tests
new_collection = collection.subset(lambda t: t.context == desired_context)
</pre>
=== OS Environment Handling - MozEnv ===

This is currently not implemented. I envision something that wraps os.environ but provides good methods for adding and removing attributes from the environment.

<pre>
interface Environment {
    void __init__(dict env = None)  # Defaults to os.environ if None

    void add(string name, string value)

    void remove(string name, string value)

    # TODO: Might make this the native __str__ in python so we don't need a method
    # called out, but explicitly stating we should have a way to dump the environment
    # to a string (for logging or display or debugging)
    string to_string()
}
</pre>

'''Status:'''
* not yet implemented
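The proposed interface could look something like this in Python (a sketch of a proposal, not a shipped API; <tt>remove</tt> is simplified here to take only a name):

```python
import os

class Environment(object):
    """Sketch of the proposed MozEnv wrapper. Wraps a copy of the
    environment so mutations don't leak back into os.environ."""

    def __init__(self, env=None):
        self.env = dict(os.environ if env is None else env)

    def add(self, name, value):
        self.env[name] = value

    def remove(self, name):
        self.env.pop(name, None)

    def to_string(self):
        # stable, sorted NAME=value dump for logging and debugging
        return '\n'.join('%s=%s' % (k, v)
                         for k, v in sorted(self.env.items()))

e = Environment({'MOZ_CRASHREPORTER': '1'})
e.add('DISPLAY', ':0')
```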
=== Command Line Parsing - mozoptions ===

Implemented to varying degrees in every single test harness. This is simply a specialized subclass of the python optparse (or argparse, come python 2.7) parser. We should use it to define common options across all the test harnesses, and allow each harness to add in a callback for verification of these options. Otherwise, the methods are the standard optparse methods.

'''Status:''' not implemented
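Since mozoptions is not implemented, the following is purely a sketch of the idea (class name, option names, and the <tt>verify_usage</tt> hook are all hypothetical): a shared optparse subclass defining common options, with a place for harness-specific verification callbacks.

```python
from optparse import OptionParser

class MozOptionParser(OptionParser):
    """Hypothetical shared parser: common harness options are defined
    once, and each harness may register callbacks to verify them."""

    def __init__(self, **kwargs):
        OptionParser.__init__(self, **kwargs)
        self.verifiers = []
        # options every harness tends to need
        self.add_option('--binary', dest='binary',
                        help='path to the application binary under test')
        self.add_option('--profile', dest='profile',
                        help='path to the profile to use')
        self.add_option('--log-file', dest='log_file',
                        help='file to log output to')

    def verify_usage(self, options):
        # each harness-registered callback may raise or fix up options
        for verify in self.verifiers:
            verify(options)
        return options

parser = MozOptionParser()
options, args = parser.parse_args(['--binary', '/usr/bin/firefox'])
```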
=== File Handling - mozfile ===

Common file-related code.

'''Status:''' proposed: {{bug|774916}}

== Development Practices ==