== April 21 2023 ==
=== 1. Top 100 testing (SV) ===
It seems that the Desktop QA team is also running tests for checking build compatibility with websites.
- I've discussed this with them to see if they want to continue doing it, so we don't overlap.
- We'll have a meeting on Wednesday to discuss the coverage.
Paul: Initially I discussed this with the mobile team and we decided to move the task to the WebCompat team. We are still waiting for a response from the Desktop QA team on whether they would let us do the task instead.
Honza: So there's Firefox desktop QA doing similar things, like testing Firefox builds, and there's also the mobile team doing web compatibility testing.
Paul: The mobile team won't be doing that anymore since they are switching...
Honza: So mobile would give up on testing compatibility on those websites? From my understanding, different builds are available for different OSes (mobile and desktop).
Paul: When this OKR was created, we did not consider that other teams were testing this, since it seems more like a WebCompat team thing. Now we have to decide whether they will keep testing this, or whether we should take on this OKR instead of the other teams inside the organization.
Honza: Given that 5 platforms would be tested, how does this look from the workload point of view?
Paul: The other teams run the tests over one quarter for the Top 100 sites, 30 sites per round. Their estimates show that it takes them around 20 minutes per website, about 11 hours for 32 websites. For about 100 websites that would be around the 33-hour mark, but maybe we should test more extensively, going deeper than they do, so that would mean roughly double. The plan will look more doable after we talk with the Desktop team.
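For reference, a back-of-the-envelope check of those figures, assuming the quoted ~20 minutes per website (the doubled number for deeper testing is only an extrapolation from the discussion above):

```python
# Rough check of the workload estimates quoted above,
# assuming roughly 20 minutes per website.
MINUTES_PER_SITE = 20

hours_32 = 32 * MINUTES_PER_SITE / 60     # ~10.7 h, i.e. the quoted ~11 h for 32 sites
hours_100 = 100 * MINUTES_PER_SITE / 60   # ~33.3 h, i.e. the ~33-hour mark for 100 sites
hours_deeper = 2 * hours_100              # ~67 h if deeper testing roughly doubles the effort

print(f"{hours_32:.1f} h, {hours_100:.1f} h, {hours_deeper:.1f} h")
```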
Honza: How much time would be left for other things, since this is not the only OKR? How much time should we spend on this? Unless they are testing different features.
Paul: I think it is general. I think we could test the top 100 websites twice a year.
Raul: I've seen from their previous tests that they cover features such as saved logins, but from our point of view that's not webcompat.
Honza: We will lay out a plan to see what is covered from our point of view.
Honza: Is mobile covered?
Paul: Both mobile and desktop.
Honza: How much time is needed for mobile?
Paul: About 20 minutes per website. We need to cover mobile as well. We plan to run it twice a year. We will see what the stakeholder has to say.
=== UX Research (Honza) ===
Honza: We have the results that summarize all the responses given by people. James looked at that document and provided a small summary. Since the feedback was related to social sites, I was curious whether there was anything overlapping, like common ground, to identify the same set of priorities/conclusions. I did not see any overlaps there, but maybe there is something I am missing.
Paul: From which websites did the reports come?
Honza: It was user research; no specific sites were targeted. The feedback given in the survey is related to webcompat. I was curious whether there are any overlaps in this. But maybe we could coordinate a little more, sync with these efforts, and see if people are saying the same things as our findings from our testing.
Paul: We might not get to the same conclusions due to hardware availability, as some users have a ton of different configurations.
Honza: Maybe we can get the results once the survey is completed and adjust our testing accordingly, perhaps concentrating and narrowing our efforts on the sites we pick and the testing we are doing.
Paul: That sounds like a plan, as we can concentrate on the areas mentioned in the reports. We would know where to stress the application more.
Honza: I will try to keep you informed about the results.
Raul: As Paul said, we can either run tests for the top 10 or test a specific feature that fails on certain websites.
Honza: [This is the output](https://docs.google.com/document/d/1TQu-R95zkeF4rsEYRBMk8-Xf5WBhDs1J3qllKQ216Mc/edit) from James, and this is the original [document](https://docs.google.com/document/d/1xnq33IbwSjL7DV5pHhP83bVTnKND4LZMr2LG4uF-8qU/edit).
=== Interventions testing (Honza) ===
Honza: Are there any differences between the automated tests and the ones that are run manually?
Raul: I'm guessing the automated tests are the same as the manual ones. We have runs for both manual and automated tests. Some tests require a 2FA authenticator, and these will fail when running the automation suite. Geographical restrictions, environmental restrictions, and incomplete login credentials are also taken into account for automated runs, as these tests will fail if the correct setup is not available.
Paul: Could we mark which tests can be run manually and which ones can be run automatically?
Raul: We run the automated tests at the end of the manual run for interventions. Usually there is a high number of failed tests on the first automated run, which is lower on the second run.
Some tests need to be run manually because they require authentication and/or a VPN.
At the end of the runs, we have a clear view of why some automated tests fail.
Honza: Then, as Paul says, we should make a list of which ones can be run manually and which ones can be automated.
Honza: Could you add a new column in the doc and classify which ones are which?
Raul: Sure, we could try to do that. Usually, Tom knows better which ones can be automated and which ones have to be tested manually.
Honza: Yes, Tom knows more about this, so please feel free to contact him and sync on this subject.
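As a side note, one way such a manual/automated split could be expressed in an automated suite is sketched below, assuming a hypothetical pytest-based runner; the marker names (needs_2fa, needs_vpn), the RUN_MANUAL_ONLY switch, and the conftest hook are illustrative assumptions, not the team's actual setup.

```python
# Minimal sketch only, assuming a hypothetical pytest-based suite; marker names
# and the environment switch are illustrative, not the team's real conventions.
# --- conftest.py ---
import os
import pytest

# Markers for tests that cannot run unattended (2FA prompts, VPN-only regions).
# They would normally also be registered under [pytest] markers in pytest.ini.
MANUAL_ONLY = {"needs_2fa", "needs_vpn"}

def pytest_collection_modifyitems(config, items):
    """Skip manual-only tests in automated runs unless RUN_MANUAL_ONLY=1 is set."""
    if os.environ.get("RUN_MANUAL_ONLY") == "1":
        return
    skip = pytest.mark.skip(reason="manual-only: requires 2FA and/or VPN")
    for item in items:
        if MANUAL_ONLY & {marker.name for marker in item.iter_markers()}:
            item.add_marker(skip)

# --- in a test module ---
@pytest.mark.needs_2fa
def test_login_intervention_on_some_site():
    # Placeholder body; a real test would drive the browser against the target site.
    assert True
```

A column in the shared test doc could then mirror these markers, so the manual/automated classification lives in one place.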
== April 4 2023 ==
=== Google offline feature (Honza) ===