== Bug Triage ==
; ''Methodology for bug triage''
* Hold twice-weekly triage sessions for the bug states listed below, as follows:
** Mondays from 4pm-5pm Eastern Standard Time (time slot intended for the US West coast)
** Fridays from 9am-10am Eastern Standard Time (time slot for the US East coast and Europe)
** In #qa on irc.mozilla.org
; Queries (in order of priority)
* verification of FIXED bugs:
** We only have time for fixes that are part of a new feature, or a major rework of an existing one. The main task is determining whether the level of automated testing in these focus areas is sufficient.
*** If it is determined that a fix cannot have sufficient automation coverage, flip the flags to in-testsuite- and qe-verify?. These fixes must be verified manually, so ensure there are clear STRs. (Once verified, flip the flag to qe-verify+; note that this is not part of this triage process.)
*** If sufficient automation exists, flip the flag to in-testsuite+.
*** If automation is possible but insufficient, flip the flags to in-testsuite? and qe-verify?. Manual verification, as above, will be needed if an automated test won't be added in a timely manner. (See the first sketch after this list for making these flag changes through the Bugzilla REST API.)
* [http://mzl.la/1BFmneX unconfirmed all]: see if there are any bugs that need reproducing or need clearer STRs
* [http://mzl.la/1wSHQgk unconfirmed general]: move bugs into the appropriate sub-component
* intermittent failures: developers feel this is the least useful task we can be doing, but if time and interest allow, they suggest:
** 1. Getting an idea of how reproducible the issue is. For example, can you reproduce the failure in 20 test runs locally (see the second sketch after this list)? Can you reproduce it on the try server? What if you got a loan of a slave and ran the tests on it? If the failure happens on Linux and you have a machine that engineers can log into remotely, capturing an [http://rr-project.org/ rr] trace of the failure would be tremendously helpful.
** 2. When did it start to happen? Has it happened for as long as the test has existed, or did it start well after the test was originally written? Can you go back on TBPL and retrigger test runs there to try to narrow down when the failure started to happen? (Being able to reproduce locally obviously helps with bisection as well.)
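For reference, here is a minimal sketch of making the flag changes above through the Bugzilla REST API (the bug update endpoint accepts a <code>flags</code> array). The bug number, API key, and test-suite details are placeholders, and the exact flag names should be confirmed on the bug being triaged; this is an illustration of the flag flip, not a prescribed tool.

<pre>
# Sketch only: flip triage flags through the Bugzilla REST API.
# The API key and bug number below are placeholders; confirm the flag
# names (in-testsuite, qe-verify) on the actual bug before updating it.
import requests

BUGZILLA = "https://bugzilla.mozilla.org/rest"
API_KEY = "YOUR_API_KEY"  # placeholder, not a real key

def set_flags(bug_id, flag_changes):
    """Update flags on a bug, e.g.
    [{"name": "in-testsuite", "status": "-"}, {"name": "qe-verify", "status": "?"}]"""
    resp = requests.put(
        "%s/bug/%d" % (BUGZILLA, bug_id),
        params={"api_key": API_KEY},
        json={"flags": flag_changes},
    )
    resp.raise_for_status()
    return resp.json()

# Example: a fix that can't get automation coverage and needs manual verification.
# set_flags(1234567, [{"name": "in-testsuite", "status": "-"},
#                     {"name": "qe-verify", "status": "?"}])
</pre>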
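For point 1 on intermittent failures, here is a minimal sketch of counting local reproductions. The mach command and test path are assumptions for illustration; substitute whatever actually runs the failing test in your local build.

<pre>
# Sketch only: rerun a single test locally and count how often it fails.
# The mach command and test path below are hypothetical placeholders.
import subprocess

TEST_CMD = ["./mach", "mochitest", "path/to/failing_test.html"]  # hypothetical
RUNS = 20

failures = 0
for i in range(RUNS):
    status = subprocess.call(TEST_CMD)
    if status != 0:
        failures += 1
    print("run %d/%d: %s" % (i + 1, RUNS, "FAILED" if status else "passed"))

print("reproduced the failure %d times in %d runs" % (failures, RUNS))
</pre>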
== Risks ==