Engagement/Mozilla.org Durable Team/Testing Playbook
# Testing: Why
## Why test?
### Testing gives us data to optimize the user experience, leading to increased conversion (downloads, Accounts sign-ups, newsletter sign-ups, etc.) supporting key performance indicators.
## Backlog (link to come)
# Planning. Define the following in Bugzilla/Google Docs linked from the [https://docs.google.com/spreadsheets/d/1BSAR6EJj_lToGNNNp3k9RjRTlGcVrpRfZvEw9OwH_88/edit#gid=0 test tracker]:
## Hypothesis
## Test plan
## Measurement requirements
# Implementation
## Choose testing tool(s)
### What tool do we use to split traffic?
#### Optimizely offers the most detailed targeting options.
#### Custom JS keeps the page weight lighter and doesn't depend on third-party tools.
#### GA
### What tool do we use to run the test?
#### When do we use GA?
##### More control over the code changes
##### More complex changes in design and page functionality
##### Pages that change based on information in the browser (e.g. the Welcome page changes based on whether the browser is set as default)
##### Segmenting results
##### Multiple pages
#### When do we use Optimizely?
##### Simple changes
###### Copy: testing many different versions
###### Design: basic changes
##### Optimizely can be used to direct traffic to any page.
##### Basic user-agent targeting
#### When do we use funnel cakes?
##### Funnel cakes are special Firefox builds used to measure the impact of onboarding-flow changes on user retention (the primary use case for this team).
##### Funnel cake setup process (link to come)
## Review
### [https://gist.github.com/jpetto/30396fbfdd62794d8e02 Checklist] for reviewing Optimizely setup
#### Does the test look and work as expected on the demo server?
#### Are the correct measurements being reported in GA?
# Reporting
## Tests run in Optimizely: use simple Optimizely reports
## Tests run in GA: work with the Analytics team to pull/build more complex reports
# Next steps
## Review results
### [https://datastudio.google.com/#/reporting/0B6voOaUZL-jwcGg1ZVZvSUJ4dUU Newsletter conversion]
### [https://docs.google.com/presentation/d/15izvYKdGkCdczRu1jjjsid2KbNqAiezknH7AnQeDwto/edit?ts=56e33125#slide=id.p Participation tasks]
## Deploy winning tests globally with the L10N team
## Define additional hypotheses and tests based on test data
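The "custom JS" traffic-split option above can be sketched as follows. This is a minimal illustration, not the team's actual implementation: the function name, variant labels, and cookie name are assumptions made for the example.

```javascript
// Minimal sketch of a custom-JS traffic splitter (hypothetical names).
// A visitor is assigned to a variant once; the assignment is then kept
// sticky (e.g. in a cookie) so the same visitor always sees the same
// variation for the life of the test.

// Pure bucketing helper: maps a random draw in [0, 1) to a variant label
// according to the given weights (e.g. a 50/50 control vs. treatment split).
function pickVariant(draw, weights) {
  var total = weights.reduce(function (sum, w) { return sum + w.weight; }, 0);
  var threshold = draw * total;
  var cumulative = 0;
  for (var i = 0; i < weights.length; i++) {
    cumulative += weights[i].weight;
    if (threshold < cumulative) {
      return weights[i].name;
    }
  }
  return weights[weights.length - 1].name; // fallback for the draw === 1 edge
}

// In page code (browser only), the assignment would be persisted roughly as:
//   var variant = readCookie('ab-test-id') ||
//                 pickVariant(Math.random(), [{name: 'control', weight: 1},
//                                             {name: 'treatment', weight: 1}]);
//   writeCookie('ab-test-id', variant);
```

Keeping the bucketing logic pure like this also makes it easy to verify the split ratios outside the browser.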

Latest revision as of 23:54, 4 April 2016
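For the "segmenting results" and GA reporting steps in the playbook above, one common pattern is tagging each hit with the visitor's variant via a custom dimension so the Analytics team can segment reports by variant. The sketch below assumes Universal Analytics (analytics.js) and a custom dimension slot (`dimension1`) configured for experiment variants; both the slot index and the event names are illustrative assumptions.

```javascript
// Sketch: carrying the test variant on Google Analytics hits so that
// results can be segmented by variant in GA reports.

// Build the fields object for a GA event hit that records the assignment.
// "dimension1" stands in for whichever custom dimension slot is configured
// for experiment variants (an assumption for this example).
function gaHitFields(variant) {
  return {
    hitType: 'event',
    eventCategory: 'ab-test',
    eventAction: 'assigned',
    eventLabel: variant,
    dimension1: variant // custom dimension used to segment reports
  };
}

// In the page, the hit would be sent with analytics.js as:
//   ga('send', gaHitFields(variant));
```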
