== November 17 2023 ==
=== Trends ===
Paul: I've set up the metrics for the TREND OKR, and we can go through each label present in the list and used by QA, to see which are relevant and which are irrelevant, and to provide more clarity on some of them and on the process in which they are used.
Honza: What happens when you can use more than 1 label for an issue?
Raul: There are cases where we do that, and we have reports that we have moved to `needsdiagnosis`, as QA was unable to pinpoint exactly which label fits the report. For example, the video is not playing because the play button is not responding. That is a case where 2 labels can be used.
Honza: Is there a way we can improve this?
Raul: We could use just 1 label even where more than 1 label would apply. This means that QA should stick to the 1 label they see as the best fit for the issue.
Honza: Can we create more labels?
Paul: If the need arises, sure.
Honza: I can see the `graphic glitch` label. What exactly does that mean?
Raul: Elements not being rendered properly, items broken from a graphic point of view, text overflowing, elements overlapping other elements, cut-off text, etc.
Honza: So, would it be better to rename this label? Something that is more related to the issue.
Paul: We could try using `layout` instead of the `graphic glitch` title.
Honza: Sure, that sounds better.
Honza: What tools are you using for the `performance` trend label? What helps you identify an issue as being related to `performance`?
Raul: We are using the Task Manager and the `about:performance` page to see and compare whether Firefox is using more resources than another browser.
Paul: Regarding the `other` or `unknown` label, should we keep it?
Honza: We could keep it and use it in cases where we are not sure how to classify the issue, for example, when we are trying to pick the best fit between 3 or more labels.
Honza: We can amend the label list after this meeting. We will certainly use this label system in the new dashboard, and most likely it will evolve.
Honza: We will discuss the labels used for TRENDS with the team, to make further clarifications. Thanks for that.
== October 31 2023 ==
=== QA Triage Trends ===
Since our last meeting, we've started using the new format for submitting the QA Triage Trends.
We are looking for feedback: https://github.com/mozilla/webcompat-team-okrs/issues/275#issuecomment-1772768903
Regarding the trend metrics, we are still working on the document.
We were wondering if the total number of issues (per label) received each month, regardless of whether they are reproducible or not, would be enough for the metrics.
Paul: Should we go this deep, or is it enough to mention a link and the total number of issues per milestone, instead of copying the link for each issue?
Honza: I am also thinking about the new system. We can search for individual reports by label. An important thing to add: if we are not sure about a label, it is best not to label the issue. We should only use categories where we are certain. To answer your question, the numbers are more important: is that trend growing or not? Higher management wants to have some kind of metrics to see the impact the platform is making. We are trying to understand all the user reports, estimate trends, which issues have the most impact, and how to prioritize them, so that we can give all the relevant info to the platform team. Whatever numbers we have should help higher management see if the platform team is going in the right direction.
Paul: How do we measure the impact?
Honza: That is the big question. They need to know whether the actions we are taking are making things better or worse. We cannot base this metric on the number of reports, getting more reports vs. getting fewer reports, since the goal of the system is to have as much data as possible. It is in our interest to have more reports. We might want to identify things like Firefox not being supported, and we can somehow follow that trend. We would not collect the number of reports, but the domains involved. We can measure how quickly the platform fixes issues. We can do issue scoring, like the State of Webcompat report: the top 20 issues that we think the platform should fix. Or the popularity of sites, using Firefox Telemetry and comparing it with Chrome Telemetry. Like, are there sites used by a lot of people just in Chrome? We should come up with the same kind of data to see if we are going in the right direction. The trends and the numbers are part of that goal. Are we able to spot some trends, and trust the numbers?
The overall numbers indicate the trend, which is our main focus.
Honza: For the new system, I am interested in the QA triage process. That means that you would assign labels to issues. Right now you are using GitHub labels, which would likely remain the same.
Paul: Or we could assign the issues directly to the QA member.
Raul: It would help us if we could mass-assign a batch directly to a member, without assigning each issue manually.
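A minimal sketch of what such mass assignment could look like against the GitHub REST API. This is not the team's actual tooling; the repo slug, token, and issue numbers below are placeholders.

```python
# Illustrative only: bulk-assign a batch of issues to one QA member via the
# GitHub REST API. Repo slug, token, and issue numbers are placeholders.
import requests

GITHUB_API = "https://api.github.com"
REPO = "webcompat/web-bugs"  # assumed repo for this sketch
HEADERS = {
    "Authorization": "token ghp_...",  # a personal access token with repo scope
    "Accept": "application/vnd.github+json",
}

def assign_batch(issue_numbers, assignee):
    """Add `assignee` to every issue in the batch, one API call per issue."""
    for number in issue_numbers:
        url = f"{GITHUB_API}/repos/{REPO}/issues/{number}/assignees"
        resp = requests.post(url, headers=HEADERS, json={"assignees": [assignee]})
        resp.raise_for_status()

# e.g. hand the whole week's triage batch to one person in a single run
assign_batch([125001, 125002, 125003], "qa-member")
```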
Paul: Right now we are using keywords for specific OKRs. I'm not sure if it's feasible to add a label to each issue (e.g. for counting the number of issues triaged in a week; for now, we use "[qa_44/2023]" for triaged or "[inv_44/2023]" for investigated, where 44 stands for week 44). The new dashboard should have something in order to count the issues received each week.
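For illustration, counting such title keywords can be scripted against the GitHub search API. The repo slug and token below are assumptions, and GitHub search normalises some punctuation, so the counts should be treated as approximate.

```python
# Illustrative only: count issues whose title carries a weekly keyword such
# as "[qa_44/2023]", using the GitHub search API.
import requests

def count_keyword(repo, keyword, token):
    """Return how many issues in `repo` mention `keyword` in their title."""
    resp = requests.get(
        "https://api.github.com/search/issues",
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github+json",
        },
        params={"q": f'repo:{repo} "{keyword}" in:title'},
    )
    resp.raise_for_status()
    return resp.json()["total_count"]

# e.g. issues triaged vs. investigated in week 44 of 2023
triaged = count_keyword("webcompat/web-bugs", "[qa_44/2023]", "ghp_...")
investigated = count_keyword("webcompat/web-bugs", "[inv_44/2023]", "ghp_...")
print(f"week 44: {triaged} triaged, {investigated} investigated")
```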
Raul: These keywords help us a lot when counting our issues in different metrics/reports.
Honza: The current system is based on the reports received on GitHub. But in the new system, it will be harder to filter that out because more reports would be received. Maybe we would have to mass-close issues that have the same domain.
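A hedged sketch of what mass-closing reports for one domain could look like, assuming issue titles start with the domain; the repo slug and token are placeholders, not a confirmed convention of the new system.

```python
# Illustrative only: close every open issue whose title starts with a given
# domain. Repo slug, token, and the title convention are assumptions.
import requests

GITHUB_API = "https://api.github.com"
HEADERS = {
    "Authorization": "token ghp_...",  # placeholder token
    "Accept": "application/vnd.github+json",
}

def close_reports_for_domain(repo, domain):
    """Page through open issues and close the ones matching `domain`."""
    page = 1
    while True:
        resp = requests.get(
            f"{GITHUB_API}/repos/{repo}/issues",
            headers=HEADERS,
            params={"state": "open", "per_page": 100, "page": page},
        )
        resp.raise_for_status()
        issues = resp.json()
        if not issues:
            break
        for issue in issues:
            if "pull_request" in issue:  # this endpoint also returns PRs
                continue
            if issue["title"].startswith(domain):
                requests.patch(
                    f"{GITHUB_API}/repos/{repo}/issues/{issue['number']}",
                    headers=HEADERS,
                    json={"state": "closed"},
                ).raise_for_status()
        page += 1

close_reports_for_domain("webcompat/web-bugs", "example.com")
```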
Paul: The bot also closes issues that it finds inappropriate or irrelevant.