DevTools/mochitests coding standards
Revision as of 17:57, 11 June 2014
This article gives some suggestions about how to write devtools browser-chrome mochitests.
The following suggestions assume you want to write new tests for a given new feature or bug that you are working on. If instead you are fixing a failing test then it should be easy enough to change the test's code according to what's in the file already.
One of the first things to keep in mind when creating tests is that it's almost always a better idea to create a new test file rather than add new test cases to an existing one.
- This prevents test files from growing to the point where they time out for running too long (test machines may be under a lot of stress at times and run a lot slower than your regular local environment).
- It also makes tests more maintainable: with many small files, it's easier to track down a problem than in one huge file.
Adding a new browser chrome test
Creating the new file
The first thing you need to do is create a file. This file should go next to the code it's testing; we already have test directories for this. For instance, an inspector test would go in the following directory:
browser/devtools/inspector/test/
Naming the new file
Naming your file is important: it helps other people get a feeling for what it is supposed to test. Having said that, the name shouldn't be too long either.
Here is a good naming convention:
browser_<panel>_<short-description>[_N].js.
Where:
- <panel> is one of debugger, markupview, inspector, ruleview, etc.
- <short-description> should be something short, like 3 to 4 words, separated by hyphens (-)
- optionally add a number at the end if you have several files testing the same thing.
Here's one real example:
browser_ruleview_completion-existing-property_01.js
Note that not all devtools tests are consistently named; the most important thing is to be consistent with how other tests in the same test folder are named.
Referencing the new file
For your test to be run, it needs to be referenced in the browser.ini file that you'll find in the same directory. For example:
browser/devtools/debugger/test/browser.ini
You should add a line with your filename between square brackets and make sure that the list of files is always sorted by alphabetical order.
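For instance, a minimal browser.ini might look like this (the test file names here are hypothetical, just to show the shape):

```ini
[DEFAULT]
support-files =
  head.js

[browser_inspector_keyboard-shortcuts.js]
[browser_inspector_search-filter.js]
[browser_inspector_select-last-selected.js]
```

Note that the bracketed test entries are kept in alphabetical order.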
Adding support files
Sometimes your test may need to open an HTML file in a tab, and this file may in turn need to load CSS or JavaScript. For this to work, you'll need to create these files in the same directory and also reference them in the browser.ini file.
There's also a naming convention for support files:
doc_<support-some-test>.html
But again, follow the style of the other support files currently in the same test directory.
Then reference your new support file in the support-files section of browser.ini and also make sure this section is in alphabetical order.
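For instance, a support-files section might look like this (the file names are hypothetical):

```ini
[DEFAULT]
support-files =
  doc_completion-existing-property.html
  doc_simple-markup.html
  head.js

[browser_ruleview_completion-existing-property_01.js]
```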
Support files can be accessed via a local server that is started while tests are running. This server is accessible at http://example.com (see the section about head.js below for more information).
Leveraging helpers in head.js
At the time of writing, each panel in devtools has its own test directory with its own head.js, so you'll find different things in each panel's head.js file.
head.js is a special support file that is loaded in the scope the test runs in before the test starts. It contains global helpers that are useful for most tests. Read through the head.js file in your test directory to see what functions are there and therefore avoid duplicating code.
As an example, the head.js files in the markupview and styleinspector (at the time of writing) test folders contain these useful functions and constants:
- Base URLs for support files: TEST_URL_ROOT. This avoids having to duplicate the http://example.com/browser/browser/devtools/styleinspector/ URL fragment in all tests,
- waitForExplicitFinish() is called in head.js once and for all. All tests are asynchronous, so there's no need to call it again in each and every test,
- asyncTest(function*() {...}): makes it easy to define an asynchronous test that can yield promises,
- auto-cleanup: the toolbox is closed automatically and all tabs are closed,
- tab addTab(url)
- {toolbox, inspector} openInspector()
- {toolbox, inspector, view} openRuleView()
- selectNode(selectorOrNode, inspector)
- node getNode(selectorOrNode)
- ...
Here is what the basic structure of a test looks like:
/* vim: set ft=javascript ts=2 et sw=2 tw=80: */
/* Any copyright is dedicated to the Public Domain.
   http://creativecommons.org/publicdomain/zero/1.0/ */

"use strict";

// A detailed description of what the test is supposed to test

const TEST_URL = TEST_URL_ROOT + "doc_some_test_page.html";

let test = asyncTest(function*() {
  yield addTab(TEST_URL);
  let {toolbox, inspector, view} = yield openRuleView();
  yield selectNode("#testNode", inspector);

  yield checkSomethingFirst(view);
  yield checkSomethingElse(view);
});

function* checkSomethingFirst(view) {
  /* ... do something ... this function can yield */
}

function* checkSomethingElse(view) {
  /* ... do something ... this function can yield */
}
Asynchronous tests
Most (if not all) browser chrome devtools tests are asynchronous. One of the reasons why they are asynchronous is that the code needs to register event handlers for various user interactions in the tools and then simulate these interactions. Another reason is that most devtools operations are done asynchronously via the debugger protocol.
Here are a few things to keep in mind with regards to asynchronous testing:
- head.js already calls waitForExplicitFinish(), so there's no need for your new test to do it too. Calling that function makes it mandatory to call finish() when the test ends; asyncTest() does this automatically in its promise handler, so if you use asyncTest() you don't need to worry about calling finish() either.
- Using asyncTest with a generator function means that you can yield calls to functions that return promises. It also means your main test function can be written as synchronous code would be, simply adding yield before calls to asynchronous functions. Here is, for example, a for loop:
for (let i = 0; i < testData.length; i++) {
  yield testCompletion(testData[i], editor, view);
}
Each call to testCompletion is asynchronous, but the code doesn't need nested callbacks to maintain an index; a standard for loop can be used.
- Define your test functions as generators that yield; there's no need for them to be tasks since they are called from one already. In some cases you'll need to return promises anyway (if you're adding a new helper function to head.js, for example). If this is the case, it is sometimes best to define your function like so:
let myHelperFunction = Task.async(function*() { ... });
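The yield-on-promise mechanics that Task.async and asyncTest rely on can be sketched outside the harness with a minimal task runner (the run helper below is purely illustrative and not part of head.js):

```javascript
// Minimal task runner: drives a generator, resolving each yielded
// promise and feeding its value back into the generator.
function run(genFunc) {
  return new Promise((resolve, reject) => {
    const gen = genFunc();
    function step(nextValue) {
      let result;
      try {
        result = gen.next(nextValue);
      } catch (e) {
        return reject(e);
      }
      if (result.done) {
        return resolve(result.value);
      }
      Promise.resolve(result.value).then(step, reject);
    }
    step(undefined);
  });
}

// Usage: asynchronous steps read like synchronous code.
function delayed(value) {
  return new Promise(resolve => setTimeout(() => resolve(value), 10));
}

run(function*() {
  const a = yield delayed(1);
  const b = yield delayed(2);
  return a + b;
}).then(sum => console.log("sum:", sum)); // logs "sum: 3"
```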
Writing clean, maintainable test code
Test code is as important as the feature code itself: it helps avoid regressions, of course, but it also helps with understanding complex parts of the code that would otherwise be hard to grasp.
Since we find ourselves working with test code a large portion of our time, we should spend the time and energy it takes to make this time enjoyable.
Logs and comments
Reading test output logs isn't exactly fun and takes time, but it is sometimes necessary. Make sure your test generates enough logs by using:
info("doing something now")
If the test fails, it helps a lot to know around which lines it failed.
One good rule of thumb is if you're about to add a JS line comment in your test to explain what the code below is about to test, write the same comment in an info() instead.
Also add a description at the top of the file to help understand what this test is about. The file name is often not long enough to convey everything you need to know about the test. Understanding a test often teaches you about the feature itself.
Not really a comment, but don't forget to "use strict";
Callbacks and promises
Avoid multiple nested callbacks or chained promises; they make the code hard to read. Thanks to our task-based asyncTest function (see the markupview head.js support file, for instance), it's easy to write asynchronous code that looks like flat, synchronous code.
Clean up after yourself
Do not expose global variables in your test file; they may end up causing bugs that are hard to track down. Most functions in head.js return useful instances of the devtools panels, and you can pass these as arguments to your sub-functions; there's no need to store them in the global scope. This avoids having to remember to nullify them at the end.
If your test needs to toggle user preferences, make sure you reset these preferences when the test ends. Do not reset them at the end of the test function though because if your test fails, the preferences will never be reset. Use the registerCleanupFunction helper instead. It may be a good idea to do the reset in head.js, once and for all.
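The reason registerCleanupFunction is safer than resetting at the end of the test body can be sketched with a minimal, stand-alone cleanup registry (this is an illustration of the pattern, not the real mochitest harness, and the preference store below is hypothetical):

```javascript
// Minimal sketch of a cleanup registry: cleanup functions run even
// when the test body throws, unlike resets placed at the end of the test.
const cleanupFunctions = [];

function registerCleanupFunction(fn) {
  cleanupFunctions.push(fn);
}

function runTest(testFn) {
  try {
    testFn();
  } finally {
    // Runs whether the test passed or failed.
    while (cleanupFunctions.length) {
      cleanupFunctions.pop()();
    }
  }
}

// Hypothetical preference store standing in for the real preferences API.
const prefs = { devtoolsEnabled: false };

registerCleanupFunction(() => {
  prefs.devtoolsEnabled = false;
});

try {
  runTest(() => {
    prefs.devtoolsEnabled = true;
    throw new Error("simulated test failure");
  });
} catch (e) {
  // The test failed, but the preference was still reset.
}
console.log(prefs.devtoolsEnabled); // logs false
```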
Write small, maintainable code
Split your main test function into smaller test functions with self explanatory names.
Make sure your test files are small. If you are working on a new feature, you can create a new test each time you add a new piece of functionality, a new button in the UI for instance. This helps keep tests small and incremental, and can also help you write tests while coding.
If your test is just a sequence of functions being called to do the same thing over and over again, it may be better to describe the test steps in an array instead and have just one function that runs each item of the array. See the following example:
const TESTS = [
  {desc: "add a class", cssSelector: "#id1", mutate: function() {...},
   expectedAttributes: {class: "c"}},
  {desc: "change href", cssSelector: "a.the-link", mutate: function() {...},
   expectedAttributes: {href: "..."}},
  ...
];

let test = asyncTest(function*() {
  yield addTab("...");
  let {toolbox, inspector} = yield openInspector();

  for (let step of TESTS) {
    info("Testing step: " + step.desc);
    yield selectNode(step.cssSelector, inspector);
    step.mutate();
    assertExpectedAttributes(getNode(step.cssSelector), step.expectedAttributes);
  }
});

function assertExpectedAttributes(node, attrs) {
  ...
}
As shown in this code example, you can add as many test cases as you want in the TESTS array and the actual test code will remain very short, and easy to understand and maintain (note that when looping through test arrays, it's always a good idea to add a "desc" property that will be used in an info() log output).
Avoid exceptions
Even when they're not failing the test, exceptions are bad because they pollute the logs and make them harder to read. They're also bad because, when your test is run as part of a test suite and another, unrelated, test fails, the exceptions may give misleading information to the person fixing that unrelated test.
After your test has run locally, just make sure it doesn't output exceptions by scrolling through the logs.
Often, non-blocking exceptions are caused by hanging protocol requests that haven't been responded to yet when the tools get closed at the end of the test. Make sure you register for the right events and give the tools time to update themselves before moving on.
Avoid test timeouts
When tests fail, it's far better to have them fail and end immediately with an exception that will help fix it rather than have them hang until they hit the timeout and get killed.
Adding new helpers
In some rare cases, you may want to extract some common code from your test to use it in another test. If this common part isn't common enough to live in head.js, it may be a good idea to create a helper file to avoid duplication. Here's how to create a helper file:
- Create a new file in your test directory, the naming convention should be help_<description_of_the_helper>.js
- Add it to the browser.ini support-files section, making sure it is sorted alphabetically
- Load the helper file in the tests
- browser/devtools/markupview/test/head.js has a handy loadHelperScript(fileName) function that you can use.
- The file will be loaded in the test global scope, so any global function or variables it defines will be available (just like head.js).