Engagement/Mozilla.org Durable Team/Testing Playbook

  1. Testing: Why
    1. Why test?
      1. Testing gives us data to optimize the user experience, leading to increased conversion (downloads, Accounts sign-ups, newsletter sign-ups, etc.) that supports key performance indicators.
    2. Test tracker: https://docs.google.com/spreadsheets/d/1BSAR6EJj_lToGNNNp3k9RjRTlGcVrpRfZvEw9OwH_88/edit#gid=0
  2. Planning. Define the following in Bugzilla/Google Docs linked from the wiki page:
    1. Hypothesis
    2. Test Plan
    3. Measurement requirements
  3. Implementation
    1. Choose testing tool(s)
      1. What tool do we use to split traffic?
        1. Optimizely offers the most detailed targeting options
        2. Custom JS keeps the page weight lighter and doesn't depend on third-party tools (see the sketch after the Implementation section below)
        3. GA (Google Analytics)
      2. What tool do we use to run the test?
        1. When do we use GA?
          1. More control over the code changes
          2. More complex changes in design and page functionality
          3. Pages that change based on information in the browser (e.g. the Welcome page changes based on whether your browser is set as default)
          4. Segmenting results
          5. Multiple pages
        2. When do we use Optimizely?
          1. Simple changes
            1. Copy: testing many different versions
            2. Design: basic changes
          2. Optimizely can be used to direct traffic to any page.
          3. Basic user-agent targeting
    2. Review
      1. Checklist for reviewing the Optimizely setup: https://gist.github.com/jpetto/30396fbfdd62794d8e02
        1. Does the test look and work as expected on the demo server?
        2. Are correct measurements being reported in GA?
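
A minimal sketch of the custom JS approach referenced above, assuming the page already loads the standard analytics.js ga() snippet and that a custom dimension has been created for the test. The cookie name, dimension slot, variant ids, and user-agent condition are all illustrative, not the team's actual setup. It gates enrollment on a basic user-agent check, assigns a variant with a first-party cookie, and exposes the variant to GA so the review step can confirm the right measurements are being reported:

  // Enroll only eligible visitors (basic user-agent targeting), assign a
  // variant with a first-party cookie, and report the assignment to GA.
  // The cookie name, dimension slot, and variant ids are placeholders.
  (function () {
    if (!/\bFirefox\//.test(navigator.userAgent)) {
      return; // outside the illustrative target audience; show the default page
    }
    var COOKIE = 'moz-ab-variant';
    var match = document.cookie.match(new RegExp(COOKIE + '=([^;]+)'));
    var variant = match ? match[1] : (Math.random() < 0.5 ? 'control' : 'variant-b');
    if (!match) {
      document.cookie = COOKIE + '=' + variant + '; path=/; max-age=' + 60 * 60 * 24 * 30;
    }
    // Apply the variant experience, e.g. a class the stylesheet hooks into.
    if (variant === 'variant-b') {
      document.documentElement.className += ' ab-variant-b';
    }
    // Set the variant dimension before the pageview is sent so it attaches
    // to the hit and results can be segmented by variant in GA.
    if (typeof ga === 'function') {
      ga('set', 'dimension1', variant);
    }
  })();

Keeping this inline and dependency-free is what makes the custom JS option lighter than loading a third-party snippet; the trade-off is that targeting and reporting have to be wired up by hand.
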
  4. Reporting
    1. Tests run in Optimizely: use simple Optimizely reports
    2. Tests run in GA: work with the Analytics team to pull/build more complex reports
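
For the GA-run tests above, the conversion itself also has to reach GA as an event before the Analytics team can build reports around it. A minimal sketch, again assuming the standard analytics.js ga() snippet; the form id and the event category/action/label strings are placeholders:

  // Send a GA event when the tracked goal completes (here, a newsletter
  // sign-up). Combined with the variant dimension set at assignment time,
  // reports can then compare conversion rates by variant.
  var form = document.getElementById('newsletter-form'); // placeholder id
  if (form) {
    form.addEventListener('submit', function () {
      if (typeof ga === 'function') {
        ga('send', 'event', 'ab-test', 'conversion', 'newsletter-signup');
      }
    });
  }

Tests run entirely in Optimizely report their goals through the Optimizely dashboard, so this wiring is only needed when GA is the system of record.
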
  5. Next steps
    1. Review results
      1. Newsletter conversion (Data Studio): https://datastudio.google.com/#/reporting/0B6voOaUZL-jwcGg1ZVZvSUJ4dUU
      2. Participation tasks: https://docs.google.com/presentation/d/15izvYKdGkCdczRu1jjjsid2KbNqAiezknH7AnQeDwto/edit?ts=56e33125#slide=id.p
    2. Deploy winning tests globally with the L10N team
    3. Define additional hypotheses and tests based on test data