QA/Platform/DOM

Summary

This page documents the strategy for testing and ensuring the quality of Gecko's DOM code.

QA Contacts

Goal

Support DOM efforts to bring Firefox back to the forefront of developer mindshare by exposing more of the web platform to the community and ensuring the quality of the new features being implemented for the web platform.

Priorities - Q1 2015

  1. Bug triage and documentation to further contributor involvement
  2. Define and establish base-line quality metrics for DOM
  3. Service Workers spec coverage in web platform tests
  4. Picture Tag spec coverage in web platform tests

To Do

1) Establishing a Baseline

  • Define "high quality" for this project and its stakeholders, and establish metrics
  • Define how bugs are logged and triaged, and establish metrics on bug flow (see the sketch after this list)
  • Review and document existing automated/manual test coverage
  • Document dependencies this code has on other components
  • Document prefs for enabling/disabling features, default values and their effect
  • Document special hardware, software, and/or skills required to contribute
  • Narrow down "DOM" into a set of well-scoped areas so that each area can be prioritized and given its own strategy
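
For the bug-flow metrics above, one possible starting point is the Bugzilla REST API. The sketch below is illustrative only: the sub-component names and query fields are assumptions and should be replaced with whatever baseline the team agrees on.

  # Sketch: count open bugs per DOM sub-component via the Bugzilla REST API.
  # The component names are placeholders; adjust them to the actual Core
  # components on bugzilla.mozilla.org.
  import requests

  BUGZILLA = "https://bugzilla.mozilla.org/rest/bug"
  COMPONENTS = ["DOM", "DOM: Events", "DOM: Workers"]

  def count_open_bugs(component):
      params = {
          "product": "Core",
          "component": component,
          "resolution": "---",       # open bugs only
          "include_fields": "id",    # keep the response small
          "limit": 0,                # 0 = return all matches
      }
      resp = requests.get(BUGZILLA, params=params)
      resp.raise_for_status()
      return len(resp.json()["bugs"])

  if __name__ == "__main__":
      for component in COMPONENTS:
          print("%-15s %d" % (component, count_open_bugs(component)))

Run on a regular schedule, this gives a simple open-bug trend per sub-component to compare against whatever quality bar is agreed on.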

2) Developing a Strategy

  • Identify the features and their primary use cases
  • Define minimum acceptance and develop smoke tests to ensure it is covered
  • Define end-to-end tests and how those should be implemented
  • Define areas community can contribute, establish metrics to measure community success
  • Define bug triage process including how bugs are prioritized
  • Define the criteria for what qualifies a bug as needing QE verification
  • Define best practices and begin on-boarding community members to support them

3) Develop an execution plan

  • Divide the strategy into work that can be accomplished within two weeks
  • Document a roadmap of what's being worked on, what's next on the list, and what's down the road
  • Determine how success of each sprint will be measured and make adjustments based on these measurements
  • Review accomplishments and setbacks at the end of each sprint

4) Establish Milestones

  • Code not riding the trains: verify merged branches do *not* have the code and that nothing has regressed
  • Code riding the trains: verify automation, manual test coverage, metrics in place to qualify
  • Define go-no-go requirements for each branch milestone
  • Define any all-hands testing requirements for the Beta phase

Template

To be edited as necessary

Introduction

Brief description of the area/feature(s) covered by this document.

The primary purpose here is to provide enough information about this functional area that a new person will be able to achieve a basic understanding of the feature, and to provide links to any documentation or engineering docs for further reading if needed. If this is a new feature, include which release it is targeted at.

Testing Approach

High level overview of the testing methodologies used in each type of testing done for this project (Manual and Automated)

The purpose of this section is to provide guidance on how this area can be tested and which methodologies are most likely to be productive. For example, when testing WebRTC the manual approach would be to initiate call connections between two clients and verify audio and video quality; the automated approach would be to use predictable data sources for the audio and video streams, allowing you to perform data analysis on the call statistics. Additionally, you will want to provide some guidance on what can and cannot be tested.

Include:

  • Examples of things to watch for.
  • What are some of the common errors and issues that this testing is targeted at finding
  • Filing Bugs
    • How bugs are reported
    • What component(s) should they be filed under
    • Define keywords, whiteboard tags, and other flags or verbiage that are expected to be used when reporting bugs

Get Involved

How can volunteers and community members become involved in this project?
  • Links to One and Done tasks
  • Links to Moztrap tests
  • Good First Verify in bugzilla
  • Links to any tutorials and other QA introductory material
  • Contact information and Meetings schedules and information on how to join

Requirements

What are the minimum requirements for becoming involved (hardware, software, skills)?
  • Describe the required test environment and provide instructions on how to create it.
  • If special skills are required, provide links to any tutorials that may be available on the subject.
  • If special hardware is required, provide steps on how to verify that testers' systems meet the minimum requirements.

Related Prefs

Define any preferences or about:config options that are related to the behavior of this area

Describe what the pref or option does, what values should be used, and how they will change the behavior of the browser. Be sure to include what the default value should be.
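
As an illustration only, the snippet below writes user_pref() lines into a scratch profile so that a feature pref can be flipped before a manual test run. The pref names are assumptions for the features named in this document's priorities and should be confirmed in about:config for the build under test.

  # Sketch: flip DOM feature prefs in a scratch profile via user.js.
  # The pref names below are assumptions; confirm them in about:config.
  import os

  PREFS = {
      "dom.serviceWorkers.enabled": True,  # assumed pref gating Service Workers
      "dom.image.picture.enabled": True,   # assumed pref gating <picture>
  }

  def write_user_js(profile_dir, prefs):
      """Append user_pref() lines to the profile's user.js."""
      path = os.path.join(profile_dir, "user.js")
      with open(path, "a") as f:
          for name, value in sorted(prefs.items()):
              f.write('user_pref("%s", %s);\n' % (name, "true" if value else "false"))

  if __name__ == "__main__":
      write_user_js("/tmp/dom-test-profile", PREFS)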

Related Features

What other features are either directly related to or can be affected by changes made to this feature?

For instance, changes to the JavaScript engine can affect Emscripten and asm.js. WebRTC has dependencies on graphics (OpenH264) and networking.

Test Cases

Define the test cases required to test this feature/area.

Include which tests can and should be automated, which framework is used, and how often they should be executed. Provide links to the repository(ies) for automated tests.

  • Smoke
    • Describe basic smoke tests required to prove minimum acceptance (see the sketch after this list)
  • Functional
    • List each major functional area to be tested and basic concepts for testing
  • End-to-end User Stories
    • Describe primary use cases
  • Exploratory
    • Describe some related areas and user stories that may be useful to explore
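
As a sketch of what an automated smoke test could look like, the snippet below drives Firefox through the Marionette Python client, loads a page, and checks that a DOM API is exposed. The import path, port, and serviceWorker check are assumptions; they depend on the client version installed and on which feature the smoke test targets, and a Firefox instance must already be listening for Marionette.

  # Sketch: minimal DOM smoke test driven through the Marionette client.
  # Assumes Firefox is running with Marionette enabled on localhost:2828;
  # the import path may differ depending on the client version installed.
  from marionette_driver.marionette import Marionette

  def smoke_test():
      client = Marionette(host="localhost", port=2828)
      client.start_session()
      try:
          client.navigate("https://example.com/")
          # Check that the (assumed) Service Worker API is exposed to content.
          exposed = client.execute_script("return 'serviceWorker' in navigator;")
          assert exposed, "navigator.serviceWorker is not exposed"
      finally:
          client.delete_session()

  if __name__ == "__main__":
      smoke_test()
      print("smoke test passed")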

Bug Triage

Methodology for bug triage
  • Criteria for determining priority
  • Minimum criteria for internal verification (qe-verify+)
Queries
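
No canonical triage queries are defined yet; as a placeholder, the sketch below pulls open, unprioritized Core::DOM bugs from the Bugzilla REST API so a triage meeting has a working list. The component and field choices are assumptions to be replaced once the triage criteria above are settled.

  # Sketch: fetch open, unprioritized Core::DOM bugs as a triage queue.
  # Component and field choices are placeholders for the real triage criteria.
  import requests

  def untriaged_dom_bugs(limit=50):
      params = {
          "product": "Core",
          "component": "DOM",
          "resolution": "---",  # open bugs only
          "priority": "--",     # no priority set yet
          "include_fields": "id,summary,creation_time",
          "limit": limit,
      }
      resp = requests.get("https://bugzilla.mozilla.org/rest/bug", params=params)
      resp.raise_for_status()
      return resp.json()["bugs"]

  if __name__ == "__main__":
      for bug in untriaged_dom_bugs():
          print(bug["id"], bug["summary"])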

Risks

What are the primary areas of risk involved in this area?

For example, Graphics runs the risk of not having a broad enough test bed to provide coverage for edge-case testing, which may result in unexpected behavior when released to a wider audience.

Reporting and Status

Describe how test results are reported and provide links to any automated test reports or dashboards.
  • List milestones and current progress
  • Include bug queries for tracked bugs
  • Sign-off status for each release tested.

Ramp Up

People
  • Lead: Andrew Overholt
  • Mentors: Ehsan Akhgari, Anne van Kesteren, Josh Matthews, Boris Zbarsky
2015 Roadmap
  • Web Platform Tests
    • Automated cross-browser testing of features that involve UI (e.g. permission UI)
  • Web Components
  • Service Workers
  • Making bugs actionable
    • Communicating with the bug reporter to ask them about the specifics of their report
    • Attempting to reproduce the bugs, perhaps by writing test cases
    • Gaining an understanding of who works on what in DOM, to be able to get the right eyes on bugs
    • Providing regression ranges and ideally identifying the commit that regressed something
    • Using various debugging tools (such as the built-in devtools, gdb, address sanitizer, rr, etc.) to gain more information on bugs
  • Exploratory testing of new DOM features before we ship them, in particular breaking things by doing stuff the developer didn't think of; if it's automated (e.g. fuzzers), so much the better
Documentation
  • How to bisect
    • If you are working on a bug that needs a regression range and you have reproducible steps, try http://mozilla.github.io/mozregression/. If you can narrow it down to a one-day regression window, add that information to the bug (see the sketch below).
  • wiki
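
As an illustration of how a bisection run might be kicked off non-interactively, the sketch below shells out to mozregression with a good/bad nightly date range. The flag names and dates are assumptions; check mozregression --help for the installed version.

  # Sketch: narrow a regression window by driving mozregression from Python.
  # The --good/--bad flags and the dates are assumptions; verify against
  # `mozregression --help` before relying on this.
  import subprocess

  def bisect(good_date, bad_date):
      subprocess.check_call([
          "mozregression",
          "--good", good_date,  # last known good nightly date
          "--bad", bad_date,    # first known bad nightly date
      ])

  if __name__ == "__main__":
      bisect("2015-01-01", "2015-01-27")
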
Mailing Lists
  • mozilla.dev.platform
  • mozilla.dev.webapi

Meetings

Kickoff

Good starting points: etherpad

  • tracking down regression ranges for intermittents (Ehsan will needinfo us on some bugs)
  • web platform tests: docs, repo - start with reaching out to ms2ger/jgraham to find a good-first-issue to work on (DOM, HTML)
  • incoming triage
    • interesting to see what the volume of bug work is per sub-component
    • moving bugs from DOM:Core General to something more appropriate
    • consult bz's etherpad
  • community
    • work with ms2ger to find ways we can involve community

Not discussed:

  • outgoing triage
  • features

Next Steps:

  • Anthony to email the team about outgoing triage (testing fixes or making sure fixes have sufficient coverage), features (what we need to be paying attention to), and incoming bug triage (all of DOM, -1m to start, then 24hr going forward; if the load becomes too big we'll prioritize subcomponents and seek to involve the community to pick up the slack)
  • Ehsan to needinfo us on regression window wanted bugs
  • Tracy & Anthony to select some work from web platform tests
  • Tracy & Anthony to figure out goals and document (review *this* document as well)
  • Set up a triage meeting where the DOM team walks us through different kinds of bugs and teaches us what to do for each