== Bug Triage ==
; ''Methodology for bug triage''
* Criteria for determining priority (??? discussion item)
* Minimum criteria for internal verification (qe-verify+)
; Queries (in order of priority)
* [http://mzl.la/1BFmneX unconfirmed all]: see if there are any bugs that need reproducing or need clearer STRs
* [http://mzl.la/1wSHQgk unconfirmed general]: move bugs into the appropriate sub-component
* intermittent failures: developers feel this is the least useful task we can be doing, but if time and interest allow, they suggest:
** 1. Getting an idea of how reproducible the issue is. For example, can you reproduce the failure in 20 test runs locally? Can you reproduce it on the try server? What if you got a slave on loan and ran the tests on it? If the failure happens on Linux and you have a machine that engineers can log into remotely, capturing an [http://rr-project.org/ rr] trace of the failure would be tremendously helpful.
** 2. When did it start to happen? Has it happened for as long as the test has existed, or did it start well after the test was originally written? Can you go back on TBPL and retrigger test runs there to try to narrow down when the failure started? (Being able to reproduce locally obviously helps with bisection as well.)
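The reproducibility check in step 1 can be sketched as a small shell helper. This is only a sketch: the commented-out `./mach mochitest` invocation is a placeholder for whatever command actually runs the suspect test.

```shell
# Sketch of step 1: run the suspect test repeatedly and report how
# often it fails, to get a rough reproducibility rate.
# count_failures CMD RUNS: run CMD RUNS times, print "FAILS/RUNS runs failed".
count_failures() {
    cmd=$1
    runs=$2
    fails=0
    i=1
    while [ "$i" -le "$runs" ]; do
        # Count a failure whenever the test command exits non-zero.
        $cmd >/dev/null 2>&1 || fails=$((fails + 1))
        i=$((i + 1))
    done
    echo "$fails/$runs runs failed"
}

# Placeholder invocation; substitute the real command for the failing test:
# count_failures "./mach mochitest path/to/test.html" 20
```

A failure rate from a loop like this ("3/20 runs failed" vs. "0/20 runs failed") is also useful evidence to paste into the bug when requesting retriggers or a slave loan.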
== Risks ==