QA/Date Time Input Types


Revision History

This section records the modifications made to this wiki page. A new row is added each time the content of this document is updated (small corrections for typographical errors do not need to be recorded). Each description summarizes the differences from the prior version: which sections were updated and to what extent.

Date | Version | Author | Description
08/02/2016 | 1.0 | Cynthia Tang | Initial draft

Overview

Purpose

This document details:

  • The test scope, focus areas and objectives
  • The test responsibilities
  • The test strategy for the levels and types of test for this release
  • The entry and exit criteria
  • The basis of the test estimates
  • Any risks, issues, assumptions and test dependencies
  • The test schedule and major milestones
  • The test deliverables

Scope

This wiki details the testing that will be performed for the Date/Time Input Types project. It defines the overall testing requirements and provides an integrated view of the project's test activities. Its purpose is to document:

  • What will be tested
  • How testing will be performed

Ownership

Engineering Program Manager: Wesley Huang

Engineers: Jessica Jong (platform: DOM/layout), Scott Wu (front-end: XUL element/HTML content)

UX Designers: Morpheus Chen, Tina Hsieh

Visual designer: Helen Huang

TDC QA: Cynthia Tang, William Hsu

QA: Rares Bologa (PM for QA team)

Testing summary

Scope of Testing

In Scope

This project enables the feature on Desktop whereby HTML input fields can contain a date or a time, or let users select a week of a year.

The project aims to provide basic inputs/pickers (date, month, time, datetime, and week) for web authors and, at the same time, to meet the HTML standard criteria, including "Preset Value", "Preset List", "Step", and "Max/Min".
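
To make these criteria concrete, here is a minimal sketch (TypeScript against the standard DOM; the constraint values are made up for illustration) of the kind of probe a test page can use. An input whose type is unsupported falls back to "text", which is a handy manual check while the pickers are still landing:

  // Feature-detect the new input types; an unsupported type reports "text".
  const types = ["date", "month", "week", "time", "datetime-local"];
  for (const t of types) {
    const input = document.createElement("input");
    input.setAttribute("type", t);
    console.log(t, input.type === t ? "supported" : "unsupported");
  }

  // Exercise "Preset Value", "Step", and "Max/Min" on a date input; "Preset
  // List" would use the list attribute plus a <datalist> element.
  const probe = document.createElement("input");
  probe.setAttribute("type", "date");
  probe.min = "2016-01-01";   // Max/Min: allowed range
  probe.max = "2016-12-31";
  probe.step = "7";           // Step: every 7th day counted from min
  probe.value = "2016-08-05"; // Preset Value: 31 whole weeks after min, so on-step
  console.log(probe.validity.valid); // true when in range and step-aligned (assuming support)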

Beyond the functions listed above, the project shall also take care of localization (including the different date/time formats per locale) and accessibility (input via cursor only, cursor plus keyboard, or keyboard only). RTL is currently out of scope.

Last but not least, under the current scope only the Gregorian calendar is considered.
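
As one concrete illustration of why per-locale formats need coverage, the snippet below prints the same date under three locales' default patterns (Intl.DateTimeFormat is used here purely for illustration; it is not necessarily the machinery the pickers themselves use):

  // The same calendar date rendered with three locales' default patterns.
  const sample = new Date(2016, 7, 2); // months are 0-based, so this is 2016-08-02
  for (const locale of ["en-US", "de-DE", "ja-JP"]) {
    console.log(locale, new Intl.DateTimeFormat(locale).format(sample));
  }
  // Typical output: en-US "8/2/2016", de-DE "2.8.2016", ja-JP "2016/8/2"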

The scope of our testing is the Date/Time Input Types feature and its functionality. The testing effort will focus on the following areas:

  1. Release Acceptance testing
  2. Compatibility testing - Backward compatibility
  3. Continuous integration
  4. Destructive testing (Forced-Error Test)
  5. Functional and non-functional (scalability and other performance) testing
  6. Performance testing
  7. Regression testing
  8. Usability testing

Out of Scope

The following areas/features are considered out of scope and will not be handled in this test plan.

  1. Accessibility testing (the UX team will handle it)
  2. L10n testing (the L10n team will handle it)
  3. Security-hole scanning (the Security team will handle it; if any patch raises a security concern, we will add the "sec-review" flag to the bug)


Requirements for testing

Environments [TBD]

Testing will be performed on the following OSes:

  • Windows XP (x86)
  • Windows Vista (x86 & x64)
  • Windows 7 (x86 & x64)
  • Windows 8.1 (x86 & x64)
  • Windows 10 (x86 & x64)
  • Ubuntu 14.04 (x86 & x64)
  • Ubuntu 15.04 (x86 & x64)
  • Fedora 23 (x64)
  • Mac OS X 10.9
  • Mac OS X 10.10
  • Mac OS X 10.11

Test Strategy

Test Objectives

This section details the progression test objectives that will be covered. Please note that this is at a high level. For large projects, a suite of test cases would be created that references directly back to this master plan. This can be documented in bullet form or in a table similar to the one below.

Ref | Function | Test Objective | Evaluation Criteria | Test Type | Owners
1 | Time picker (w/o l10n) | Verify that the time picker works well | Test Link | Manual | Eng Team
2 | Date picker (w/o l10n) | Verify that the date picker works well | | Manual | Eng Team
3 | Time picker l10n integration | All l10n should be correct | | Manual | Eng Team
4 | Date picker l10n integration | All l10n should be correct | | Manual | Eng Team
5 | Month picker | Verify that the month picker works well | | Manual | Eng Team
6 | Week picker | Verify that the week picker works well | TBD | Manual | -

Builds

This section should contain links to builds with the feature:

  • Nightly builds with the fix are available at the link
  • Links for Aurora builds
  • Links for Beta builds

Test Execution Schedule

The following table identifies the anticipated testing period available for test execution.

Project phase | Start Date | End Date
Start project | 07/01/2016 | -
Study documentation/specs received from developers | 07/15/2016 | -
QA - Test plan creation | 08/02/2016 |
QA - Test cases/Env preparation | |
QA - Nightly Testing | |
QA - Aurora Testing | |
QA - Beta Testing | |
Release in Test Pilot | |
Release | |

Testing Tools

Detail the tools to be used for testing; see, for example, the following table:

Process | Tool
Test plan creation | Mozilla wiki
Test case creation | Google docs, MozTrap
Test case execution | TBD
Bug management | Bugzilla

Status

Overview

  • [PLANNED] Nightly 51: first landed on 08.27.2016
  • Track the dates and build numbers where the feature is merged to Aurora
  • Track the dates and build numbers where the feature is merged to Release/Beta

Testing risks and mitigation

TESTING RISKS

Risks can be organized into the following categories:

  • Test planning and scheduling: This risk arises when there is no separate test plan, only highly incomplete and superficial summaries in other planning documents. Test plans are also often ignored once they are written. Regarding the schedule, the time allotted for testing is often inadequate for the amount of testing that should be performed in TDC, especially when testing is primarily manual.
  • Stakeholder involvement: The wrong mindset introduces mistaken ideas about testing and wrong testing expectations, and leaves stakeholders inadequately committed to and supportive of the testing effort. Therefore, we must align stakeholder expectations with reality before we kick off testing.
  • Process integration: This risk often occurs when testing and engineering processes are poorly integrated, for example when a "one-size-fits-all" approach is taken to testing regardless of the specific needs of the project.
  • Test communication risk: This problem often occurs when test documents are not maintained or communication is inadequate.

RISK MITIGATION

The QA team will use the following flow to address risks:

  • Risk Identification: Risks can be identified from a number of sources, e.g., project objectives, risk lists from past projects, prior knowledge, understanding of the system architecture or design, prior bug reports, and complaints. For example, if certain areas of the system are unstable and those areas are being developed further in the current project, that should be listed as a risk. It is good to document the identified risks in detail so that they stay in project memory and can be clearly communicated to project stakeholders.
  • Risk Prioritization: Once a risk is fully understood, it is easy to prioritize it by two measures applied to each risk: (1) Risk Impact and (2) Risk Probability. Risk Impact is estimated in tangible terms or on a scale (e.g., 10 to 1, or High to Low). Risk Probability is estimated between 0 (no chance of occurrence) and 1 (certain to occur), or on a scale (10 to 1, or High to Low). For each risk, the product of Risk Impact and Risk Probability gives the Risk Magnitude (see the sketch after this list). Sorting by Risk Magnitude in descending order gives a list in which the risks at the top are the most serious and need to be managed closely.
  • Risk Treatment : Each risk in the risk list is subject to one or more of the following Risk Treatments.
    1. Risk Avoidance : For example, if there is a risk related to a new feature, it is possible to postpone this feature to a later release.
    2. Risk Transfer: For example, if the risk is insufficient security testing of a feature, it may be possible to borrow outside expertise (e.g., an engineer) to perform the security testing.
    3. Risk Mitigation : The objective of Risk Mitigation is to reduce the Risk Impact or Risk Probability or both. For example, if the QA team is new and does not have prior system knowledge, a risk mitigation treatment may be to have a knowledgeable team member join the team to train others on-the-fly.
    4. Risk Acceptance: This happens when no viable mitigation is available, for reasons such as resources. For example, if all testers are in the same location, risk acceptance means there is no additional QA resource elsewhere; when a holiday comes, some tests will stop, which may be a concern for the project.
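
Below is a minimal sketch of the prioritization arithmetic described above, with entirely hypothetical risk entries and scores, just to show how Risk Magnitude orders the list:

  interface Risk { name: string; impact: number; probability: number; }

  // Hypothetical entries; real values come from the risk identification step.
  const risks: Risk[] = [
    { name: "Manual-only testing vs. tight schedule", impact: 8, probability: 0.7 },
    { name: "Stakeholder expectations misaligned",    impact: 6, probability: 0.4 },
    { name: "Test documents not maintained",          impact: 4, probability: 0.5 },
  ];

  // Risk Magnitude = Risk Impact x Risk Probability, sorted in descending order.
  const ranked = risks
    .map(r => ({ ...r, magnitude: r.impact * r.probability }))
    .sort((a, b) => b.magnitude - a.magnitude);

  ranked.forEach(r => console.log(r.name, r.magnitude));
  // 8 x 0.7 = 5.6 ranks first (manage closely), then 6 x 0.4 = 2.4, then 4 x 0.5 = 2.0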

References

Testcases

Overview

  • Summary of testing scenarios

Test Areas

Test Area | Covered | Details
Private Window | YES |
Multi-process Enabled | YES |
Multi-process Disabled | YES |
Theme (high contrast) | NO |
UI | |
Mouse-only operation | YES |
Keyboard-only operation | NO |
Display (HiDPI) | NO |
Interaction (scroll, zoom) | YES |
Usable with a screen reader | N/A | e.g. with NVDA
Usability and/or discoverability testing | YES | UX team will help with it
Help/Support | |
Help/support interface required | TBD | Make sure a link to the support/help page exists and is easily reachable.
Support documents planned (written) | TBD | Make sure support documents are written and correct.
Install/Upgrade | |
Feature upgrades/downgrades data as expected | N/A |
Does sync work across upgrades | YES |
Requires install testing | NO |
Affects first-run or onboarding | N/A |
Does this affect partner builds? Partner build testing | N/A | yes/no options; add a comment with details about who will lead testing
Enterprise | | Raise the topic with developers to see whether the feature is expected to behave differently on ESR builds
Enterprise administration | N/A |
Network proxies/autoconfig | N/A |
ESR behavior changes | N/A |
Locked preferences | N/A |
Data Monitoring | |
Temporary or permanent telemetry monitoring | N/A | List of error conditions to monitor
Telemetry correctness testing | N/A |
Server integration testing | N/A |
Offline and server failure testing | N/A |
Load testing | N/A |
Add-ons | | If add-ons are available for testing the feature, or the feature will affect some add-ons, then API testing should be done for the add-on.
Add-on API required? | N/A |
Comprehensive API testing | N/A |
Permissions | N/A |
Testing with existing/popular add-ons | N/A |
Security | | Matt Wobensmith is in charge of security. We should contact his team to see whether security testing is necessary for this feature.
3rd-party security review | N/A |
Privilege escalation testing | ? |
Fuzzing | ? |
Web Compatibility | | Depends on the feature
Testing against target sites | NO |
Survey of many sites for compatibility | NO |
Interoperability | | Depends on the feature
Common protocol/data format with other software: specification available; interop testing with other common clients or servers | NO |
Coordinated testing/interop across the Firefoxes: Desktop, Android, iOS | ? |
Interaction of this feature with other browser features | YES |

Test suite

  • Full Test suite - [Full Functional test suite]
  • Smoke Test suite - [Smoke test suite]
  • Regression Test suite - N/A (using scenarios from Full Functional for now)

Bug Work

Meta: Bug 888320 - [meta] Implement all time and date related input types

Ship Bugs

No results.

0 Total; 0 Open (0%); 0 Resolved (0%); 0 Verified (0%);


Sign off

Criteria

Checklist

  • All test cases should be executed
  • Has sufficient automated test coverage (as measured by code coverage tools) - coordinate with RelMan
  • All blocker and critical bugs must be fixed and verified, or have an agreed-upon timeline for being fixed (as determined by Engineering/RelMan/QA)

Results

Nightly testing

List of OSes that will be covered by testing

  • Link for the tests run
    • Daily Smoke, use template from link
    • Full Test suite, use template from link
    • Regression Test suite, if needed/available

Merge to Aurora Sign-off

List of OSes that will be covered by testing

  • Link for the tests run
    • Full Test suite

Checklist

Exit Criteria | Status | Notes/Details
Testing Prerequisites (specs, use cases) | [IN PROGRESS] |
Testing Infrastructure setup | NO |
Test Plan Creation | [IN PROGRESS] |
Test Cases Creation | [IN PROGRESS] |
Full Functional Tests Execution | [NOT STARTED] |
Automation Coverage | [NOT STARTED] |
Performance Testing | [NOT STARTED] |
All Defects Logged | [NOT STARTED] |
Critical/Blockers Fixed and Verified | [NOT STARTED] |
Daily Status Report (email/etherpad statuses/gdoc with results) | [NOT STARTED] |
Metrics/Telemetry | N/A |
QA Signoff - Nightly Release | | Email to be sent
QA Aurora - Full Testing | |
QA Signoff - Aurora Release | | Email to be sent
QA Beta - Full Testing | |
QA Signoff - Beta Release | | Email to be sent