Auto-tools/Projects/Bisect in the cloud

= Initial Notes =
[https://www.evernote.com/shard/s63/sh/ab854f47-6f2c-4244-963d-8e63ce541380/c2842d83df9bafd86c60dcadc1d2bfbf David's original notes]
= JMaher notes/questions =
* For tbpl/buildbot/mozharness-based automation we need to run test chunks, not individual test cases/files. In general this is necessary because test cases can have side effects on later test cases run in the same browser session.
* Could this fit into tbpl easily?
* How do we work around intermittent oranges? Say M1 (webgl_conformance.html) failed: do we care if a different test case in M1 fails? Do we run the chunk 3 or 4 times to ensure it is consistently green? (See the retry sketch after this list.)
** If M1 fails in a different test case, we would have to treat the run as a failure; it is too hard to differentiate based on the individual test case, especially if this is to be fully automated.
* How would we determine the last known good run of this?  Would we have to give a changeset id or something like that?
* Will this tool be web based, command line based, or both?
* To make this fully automated, we would need a server somewhere that could take the two changesets (good and bad), manage the bisection, and email the results (see the bisection sketch after this list).
* Scheduling jobs via buildbot is not too hard, but with the current implementation it could get tricky. Take, for example, the scenario where we don't run M2 and M3 on all builds, but we do on PGO and nightly builds. We would have a different buildbot configuration for the M2/M3 runs, and it would apply only to PGO and nightly. Somehow we would need to take the builds from the CI pushes and apply the nightly config to them.
* Would this only test on a given platform, or a set of platforms?
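A minimal sketch of the retry idea above, assuming a hypothetical <code>schedule_chunk_job()</code> helper that triggers one run of a chunk (e.g. M1) against a build of a given changeset and reports its colour; none of these names are existing tbpl/buildbot APIs, they are placeholders for whatever the scheduling backend ends up being.

<syntaxhighlight lang="python">
def schedule_chunk_job(changeset, chunk):
    """Placeholder: trigger a job for `chunk` against a build of `changeset`
    and wait for its result ("green", "orange", or "red" in the real system)."""
    raise NotImplementedError("hook this up to the scheduling backend")


def run_chunk(changeset, chunk, retries=4):
    """Return "good" only if the chunk is green on every one of `retries` runs,
    so a single intermittent orange is not mistaken for a good changeset."""
    for _ in range(retries):
        if schedule_chunk_job(changeset, chunk) != "green":
            # Any non-green run fails the whole chunk, even if the failing
            # test case differs from the one that started the bisection.
            return "bad"
    return "good"
</syntaxhighlight>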
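And a sketch of how the server side could drive the bisection, assuming it is handed an ordered list of pushes between the known-good and known-bad changesets (e.g. from pushlog) and a <code>run_chunk()</code> callable like the one above; the email step uses only the Python standard library, and the SMTP host and addresses are placeholders.

<syntaxhighlight lang="python">
import smtplib
from email.message import EmailMessage


def bisect(changesets, chunk, run_chunk):
    """Binary-search an ordered list of pushes.

    changesets[0] must be the known-good changeset and changesets[-1] the
    known-bad one; returns the first changeset whose chunk run is not
    consistently green.
    """
    lo, hi = 0, len(changesets) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if run_chunk(changesets[mid], chunk) == "good":
            lo = mid   # regression is after this push
        else:
            hi = mid   # regression is at or before this push
    return changesets[hi]


def email_result(first_bad, chunk, to_addr):
    """Mail the outcome once the bisection finishes (placeholder addresses)."""
    msg = EmailMessage()
    msg["Subject"] = "Bisection result for %s" % chunk
    msg["From"] = "bisect-bot@example.com"
    msg["To"] = to_addr
    msg.set_content("First bad changeset: %s" % first_bad)
    with smtplib.SMTP("localhost") as server:
        server.send_message(msg)
</syntaxhighlight>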