QA/Execution/Web Testing/roles/buildmaster


Introduction

Create a set of guidelines for investigating failures in our public Jenkins instance. This would be used by whoever is monitoring for failures, but would also be valuable for community members who want to dive in. It should include how to identify failures, how to determine if they're already known, how to replicate them locally, how to determine if they're application bugs or test bugs, where to raise them, who to notify, and even how to fix them (if they're test failures) and submit pull requests. This could form part of a boot camp similar to those of other teams. -- from our 2015 Q2 goals brainstorm.

Open Questions

  • In the interests of reducing complexity, do we need tiers? Are we anticipating that there will be so many failures that some need prioritizing over others?
  • On the topic of sending daily emails: I think this would be extra work and would be considered noise by most recipients. It could instead be done via a whiteboard entry and Bugzilla whines.
  • On the topic of checking builds once a day: there are several methods for doing this, such as viewing the web dashboard, subscribing to RSS feeds, reading e-mail alerts, or watching IRC notifications. We could consider other options via Jenkins plugins.

Rotation

Warning: This template is no longer in use. Please directly include the Web QA BuildMaster Rotation page instead.

The Web QA Buildmaster Rotation page contains the past and upcoming schedule.

These entries are in reverse chronological order.

  • 2016-05-05 - 2016-05-19 - stephend
  • 2016-04-21 - 2016-05-05 - mbrandt
  • 2016-04-07 - 2016-04-21 - davehunt
  • 2016-03-24 - 2016-04-07 - rbillings
  • 2016-03-10 - 2016-03-24 - krupa

Definition

  • The buildmaster role lasts two weeks.
  • Edit the Jenkins description to show your name as the current buildmaster.
  • The buildmaster is the point of contact for open issues/bugs.
  • The role includes filing bugs/issues, sending out emails, and investigating issues.
  • It does NOT include escalation paths, prioritizing fixes, or following up with other teams.
  • Send a daily email listing the generally prioritized GitHub issues to be fixed and any blocking bugs that were filed.
  • Check builds at least once per day (there are several methods for doing this: view the web dashboard, subscribe to RSS feeds, read e-mail alerts, or watch IRC notifications; we could consider other options via Jenkins plugins, and the first sketch after this list shows one way to poll Jenkins directly).
    • Investigate failures.
    • If it is a locator issue, if you have a question, or if you wonder whether the test is still valid or important, file a GitHub issue (the second sketch after this list shows what a typical locator fix looks like).
    • File bugs against projects based on the info below, then contact the noted team members.
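One low-effort way to check builds without watching the dashboard is to poll the Jenkins JSON API. The following is a minimal sketch, not an official tool: the instance URL is a placeholder, and it assumes anonymous read access to our public instance.

  import requests

  # Placeholder for the public Jenkins instance URL; substitute the real one.
  JENKINS_URL = "https://jenkins.example.org"

  def failing_jobs(jenkins_url=JENKINS_URL):
      """Return (name, url) pairs for jobs whose last build failed."""
      # Each job's "color" encodes its status; "red" means the last build
      # failed ("red_anime" means it is building again after a failure).
      response = requests.get(
          jenkins_url + "/api/json",
          params={"tree": "jobs[name,url,color]"},
          timeout=30,
      )
      response.raise_for_status()
      jobs = response.json().get("jobs", [])
      return [(job["name"], job["url"]) for job in jobs
              if job.get("color", "").startswith("red")]

  if __name__ == "__main__":
      for name, url in failing_jobs():
          print("FAILING: %s -> %s" % (name, url))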
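For context on what a "locator issue" looks like: our suites drive Selenium through page objects, and a common test-only failure is a locator that went stale after a front-end change. The page object and selectors below are hypothetical, purely to illustrate the kind of one-line fix that belongs in a pull request rather than a bug report.

  from selenium.webdriver.common.by import By

  class HomePage:
      """Hypothetical page object; the selectors are illustrative only."""

      # Old locator: a site redesign removed this id, so the test now
      # times out waiting for the element.
      # _login_button = (By.ID, "login")

      # Updated locator matching the current markup.
      _login_button = (By.CSS_SELECTOR, "a.login-link")

      def __init__(self, driver):
          self.driver = driver

      def click_login(self):
          self.driver.find_element(*self._login_button).click()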

Support Tiers

Tier 1

  • Marketplace
  • AMO
  • Mozilla.org

Tier 2

  • SUMO
  • Socorro

Tier 3

  • BIDPOM
  • Moztrap
  • One and Done

Tier 4 (Unsupported)

  • Affiliates
  • Mozillians

Projects

amo

bidpom

  • Failures here trend towards low priority.
  • On failure: contact John Morrison [jrgm] if it looks infrastructure-related (timeouts, buttons not loading, etc.); bob or davehunt are the ones to fix test issues.
  • IRC: #mozwebqa
  • If it is a known bug, file a bug and also needinfo him, especially if you know who checked in the change that made it fail.

Hello (Loop)

Marketplace

mozilla.org

Moztrap

mozwebqa dashboard

One and Done

Socorro

Sumo

QMO

FAQ

Who do I contact if the issue is related to Persona?

If you trace an issue to Persona (the sign-on service) you should contact :jrgm in #persona. You can also raise issues in the project's GitHub repository.

Why is the failure only happening on Sauce Labs?

It could be that the failure is only presenting itself on specific browser window sizes. Sauce Labs uses virtual machines with screen resolutions that may differ from our internal Selenium Grid. You could try specifying a browser window size to make the results consistent, or at least consider how the size of the browser might affect the tests that are failing.
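If you suspect a window-size difference, one option is to set an explicit size at the start of the session so local runs, grid runs, and Sauce Labs runs all match. Here is a minimal sketch using the Selenium Python bindings; the 1280x1024 resolution is an arbitrary example, not a recommendation.

  from selenium import webdriver

  driver = webdriver.Firefox()
  try:
      # Force a known window size so layout-dependent locators behave
      # the same in every environment; 1280x1024 is just an example.
      driver.set_window_size(1280, 1024)
      driver.get("https://www.mozilla.org/")
      # ... re-run the steps that were failing on Sauce Labs ...
  finally:
      driver.quit()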