Labs/Jetpack/Reboot/Best Practices
The following are best practices for reboot development. Nothing is (yet) set in stone, so feel free to propose and debate changes.
Style Guidelines
See Labs/Jetpack/Reboot/Style_Guide for official guidelines for general coding style and structure.
Testing
Testing has been designed from the ground up to be as easy as possible to get into. Just make a directory called tests in your package, create a file called test-something.js, and have it export functions that each take a single argument called test. Running cfx test will automatically find and call all these functions, passing in a Test Runner Object as the argument.
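A minimal test file might look like this (the module name and its add() function are made up for illustration; assertEqual() is one of the Test Runner Object's assertion methods):

    // packages/my-package/tests/test-my-module.js
    var myModule = require("my-module");  // hypothetical module under test

    exports.testAddition = function(test) {
      test.assertEqual(myModule.add(2, 2), 4,
                       "add() should sum its arguments");
    };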
More notes:
- Unit tests should be written so that all dependencies—i.e., what's not being tested—are stubbed/faked/mocked. This also ensures that the tests run as quickly as possible, which makes the test-driven-development process of "write a little bit of code, run the test suite" as easy as possible.
- At the same time, integration tests with the Mozilla platform are important because the platform is constantly evolving and we need to be sure that our code doesn't break as it changes underneath us; we also need to be able to ensure that our code works fine under different Mozilla applications.
- If your module is going to be used by lots and lots of other modules, as is often the case with e.g. xhr, consider actually writing a mock class and making it available to other modules (see the sketch after this list). This way others won't have to constantly re-create such objects for their tests.
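A sketch of that idea (the class and its behavior are invented for illustration, not the actual xhr mock):

    // mock-xhr.js -- a fake that other modules can require() in their tests
    function MockXMLHttpRequest() {
      this.requests = [];  // record calls so tests can make assertions
    }

    MockXMLHttpRequest.prototype = {
      open: function(method, url) {
        this.requests.push({method: method, url: url});
      },
      send: function(body) {
        // respond immediately with canned data instead of hitting the network
        this.responseText = this.cannedResponseText || "";
        this.readyState = 4;
        if (this.onreadystatechange)
          this.onreadystatechange();
      }
    };

    exports.MockXMLHttpRequest = MockXMLHttpRequest;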
Module Unloading
All code that touches the Mozilla platform needs to properly manage its resource use and unload all resources currently in use when an unload signal is sent to it. See the xhr module source code for a great example of this.
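For instance, a module that registers a listener with a platform service might arrange its cleanup like this (a sketch assuming the unload module's when() API; the service and listener here are hypothetical):

    var unload = require("unload");

    var service = getSomePlatformService();  // hypothetical platform service
    var listener = { observe: function(subject, topic, data) { /* ... */ } };
    service.addListener(listener);

    // release the listener when an unload signal is sent to this module
    unload.when(function(reason) {
      service.removeListener(listener);
    });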
Testing to ensure that you're unloading your resources properly is also important. It generally consists of using the test runner's (currently undocumented) makeSandboxedLoader() method, which creates a new module loader. Your test can then use the new instance of your module to allocate resources, unload them, and finally check that nothing has leaked. See the xhr module test suite for an example of this.
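A leak test along those lines might look like this (the getRequestCount() accessor is a guess at what a module could expose for this purpose; consult the actual xhr test suite for the real thing):

    exports.testNoLeaks = function(test) {
      var loader = test.makeSandboxedLoader();
      var xhr = loader.require("xhr");

      // allocate a resource through the sandboxed module instance
      var req = new xhr.XMLHttpRequest();
      req.open("GET", "http://example.com/");
      req.send(null);

      // unload the loader, then verify the module released everything
      loader.unload();
      test.assertEqual(xhr.getRequestCount(), 0,
                       "unloading should abort and release all requests");
    };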
Module Instantiation
Aside from having a lifecycle that's independent of the host application, a module can actually have multiple instances of itself running inside an application at once (and those instances can even be different versions of the module).
This means that you shouldn't mutate your outside environment in ways that could collide with other instances of your module: for example, don't add a property to every browser window's global namespace called __myModule_secret_id.
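For example (both snippets are illustrative; the property name comes from the warning above, and generateId() is a hypothetical helper):

    // Bad: every loaded instance of this module fights over the same property.
    browserWindow.__myModule_secret_id = generateId();

    // Better: keep the state module-local, so each loaded instance of the
    // module carries its own copy and nothing collides.
    var taggedWindows = [];
    function tagWindow(browserWindow) {
      taggedWindows.push({window: browserWindow, id: generateId()});
    }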
Memory Leak Prevention
- The --times (or -x) option to cfx is useful for detecting leaks. It causes multiple iterations of your test suite to be run, with memory statistics displayed after each iteration. It's difficult to tell from raw heap numbers whether your code leaks from one iteration to the next, but if you run e.g. 5000 iterations and memory use keeps climbing, you know you have a leak.
- Use memory.track(this) on the first line of an object's constructor (see the sketch after this list) to make sure you don't create extra instances of it from one test suite iteration to the next. The number of tracked objects in the testing sandbox is displayed as part of the memory statistics at the end of each iteration, and it should not increase from one run to the next.
- The test runner automatically keeps track of the global scope of all loaded modules and makes sure that they're garbage collected when your test suite is finished. If they're not, the test runner will display a warning at the end of the test run. This could mean that e.g. a function belonging to a "leaking" module has been registered as a callback to Mozilla platform code and never unregistered. However, in some cases the leak warnings appear to be erroneous: running e.g. an even number of iterations of the test suite reports leaks while running an odd number doesn't.
- Add atul-packages to your packages directory and then pass --extra-packages=nsjetpack to cfx when you run your tests. This will cause the test runner to enable optional JS memory profiling functionality and display extra JS object statistics between iterations. In particular, a "diff" of the JS heap from one iteration to the next is displayed, which can be helpful in pinpointing where a leak is occurring.
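The memory.track() pattern from the second bullet, sketched out (assuming a memory module that exposes track()):

    var memory = require("memory");

    function Thing(name) {
      memory.track(this);  // register this instance with the leak tracker
      this.name = name;
      // ... rest of the constructor ...
    }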
Exception Logging
We want to be able to track uncaught exceptions in a much richer way than Mozilla's nsIConsoleService currently allows. For instance, we want to be able to filter messages on a per-Jetpack basis, possibly even a per-module basis, and we also want full stack tracebacks.
In order to do this without making pervasive modifications to the Mozilla platform, we're simply catching exceptions before they propagate into platform code, and logging them via a call to console.exception().
Any platform event handlers your code registers should be "wrapped" so that this is the case, as in the sketch below.
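A minimal sketch of such a wrapper (the helper name, domWindow, and onLoad are invented for illustration):

    // Wraps a callback so that uncaught exceptions are logged via
    // console.exception() instead of propagating into platform code.
    function wrapForPlatform(callback) {
      return function() {
        try {
          return callback.apply(this, arguments);
        } catch (e) {
          console.exception(e);
        }
      };
    }

    // Register the wrapped function, not the raw one.
    domWindow.addEventListener("load", wrapForPlatform(onLoad), false);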
More Stuff
We also need to cover:
- how to namespace packages (e.g., require("foo/bar/baz"))
- out-of-code documentation (e.g., tutorials and guides)
- security
- localization
- API design
- Some Cuddlefish modules, like file.js, take pains to be broadly compatible as JS modules and loadable via script tags in XUL documents, in addition to being CommonJS modules used in Jetpack. Do we really want to go that route for all modules?
- When creating objects, do we want jetpack.thing() or new jetpack.Thing()?
- Atul has found the case of forgetting the "new" operator to be unforgivingly difficult to debug.
- Atul has also found that "new" operator problems are hard to debug when the operator associates with a different operand than one intends, e.g. new require("foo").Bar() (see the example below).
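To illustrate that last pitfall:

    // Parses as (new require("foo")).Bar(): "new" grabs the first argument
    // list, so this constructs require itself, not Bar.
    var broken = new require("foo").Bar();

    // Parenthesizing (or assigning the module to a variable first)
    // constructs Bar as intended:
    var works = new (require("foo").Bar)();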