Foundation/Metrics/Optimizely Process

This is our process for running front-end tests on Mozilla Foundation websites and tools. The process is new, so please give feedback on it so we can improve both these guides and the process itself.

This is a detailed working document on how to run tests. You may also be interested in this higher-level introduction document: A/B Testing on Webmaker.

Golden rules for A/B testing

  • Testing is continuous; it is never finished
  • Be brave: You can always be testing something else, so focus on potential impact
  • Traffic is opportunity. At any given time, we should have a test running on each of our sites and tools
  • There are no fails. Every test is a learning opportunity (especially if it makes something worse)
  • It's a team sport. Testing is only as good as the number of people who see the results

How to get something tested

  1. If you have quick ideas, dump them here: Webmaker test ideas Etherpad
  2. If you have something specific you definitely want 'actioned', scroll down and follow the guide on how to file the bug and fill out the report template

What tests are we running?

And what did we learn from previous tests?

How do we choose?

Choosing which test to run next is a judgement call combining the following:

  • How long a test will take to run
  • How difficult a test is to set up
  • The potential impact of making the change

A formula to calculate this would be artificial as the impact of a test is hard to predict. Just keep these three things in mind and continually look to have the biggest impact. And when it's hard to decide, remember that testing anything is better than testing nothing.

Our process to run a test

  1. Open a Bugzilla ticket under the component the test is being run in, to assign the test and track its progress
  2. Add [optimizely] to the ticket's whiteboard
  3. Estimate time to complete the test and record this in the ticket: Test duration calculator (a rough sketch of this calculation appears after this list)
  4. Make a copy of this Google doc template and fill out the content: Mofo A/B testing report template
  5. Add the URL of your test report into your ticket
  6. Add your test into the Webmaker testing hub
  7. Set up your test in Optimizely, or ask someone to do this for you
  8. Add the preview URL from Optimizely into the ticket for review:
    • Technical review: make sure we haven't broken other things on the page
    • Content review: with design and copy people as appropriate
  9. Put the test live in Optimizely
  10. Update the status to live on the Webmaker Testing Hub
  11. People affected by testing should watch the wiki page for updates
  12. Announce new test to:
    • webmaker on IRC
    • webmaker@lists.mozilla.org
  13. Let the test run until results are statistically significant (Optimizely will do the calculations for you)
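
Step 13 leaves the significance calculation to Optimizely, and step 3 points at the Test duration calculator. As a rough illustration of what those two calculations involve, here is a minimal sketch in Python (standard library only). It is not the official calculator or Optimizely's stats engine; the function names, the two-proportion z-test approach, and the example numbers are all illustrative assumptions.

  from math import sqrt, ceil
  from statistics import NormalDist

  _norm = NormalDist()

  def sample_size_per_variant(baseline_rate, min_detectable_lift,
                              alpha=0.05, power=0.80):
      """Visitors needed in each variant to detect a relative lift in
      conversion rate with a two-sided two-proportion z-test."""
      p1 = baseline_rate
      p2 = baseline_rate * (1 + min_detectable_lift)
      p_bar = (p1 + p2) / 2
      z_alpha = _norm.inv_cdf(1 - alpha / 2)
      z_power = _norm.inv_cdf(power)
      numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                   + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
      return ceil(numerator / (p2 - p1) ** 2)

  def estimated_days(daily_visitors, n_variants, baseline_rate, min_detectable_lift):
      """Rough test duration: required visitors per variant divided by the
      traffic each variant receives per day (assumes an even traffic split)."""
      per_variant = sample_size_per_variant(baseline_rate, min_detectable_lift)
      return ceil(per_variant / (daily_visitors / n_variants))

  def p_value(conversions_a, visitors_a, conversions_b, visitors_b):
      """Two-sided p-value for the difference between two conversion rates,
      using a pooled two-proportion z-test."""
      p_a = conversions_a / visitors_a
      p_b = conversions_b / visitors_b
      p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
      se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
      z = (p_b - p_a) / se
      return 2 * (1 - _norm.cdf(abs(z)))

  # Example: 3% baseline conversion, hoping to detect a 20% relative lift,
  # 4,000 visitors/day split across two variants.
  print(sample_size_per_variant(0.03, 0.20))   # ~14,000 visitors per variant
  print(estimated_days(4000, 2, 0.03, 0.20))   # ~7 days
  print(p_value(120, 4000, 150, 4000))         # ~0.06: not yet significant at 0.05

The point of the sketch is not to replace Optimizely's reporting, but to show why pages with low traffic or small expected lifts translate into long test durations, which feeds directly into the prioritisation judgement above.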

Concluding a test

  1. Some tests don't produce significant results. If this happens, don't be afraid to close the test. There is always something else we can be testing that might have more impact.
  2. Add the results and Optimizely screenshots into your test write-up
  3. Move your test into the 'Completed Tests' section in the Webmaker testing hub
  4. Announce the write-up to:
    • webmaker on IRC
    • webmaker@lists.mozilla.org
  5. Close the ticket
  6. Share this on the next weekly team call and get peer review on your conclusions

Weekly updates

In the weekly cross-team calls, include a short segment on testing:

  1. Announce any tests closed that week and link to the write-up for peer review
  2. Remind people to log new test ideas
  3. Ask if any planned tests need priority

Potential complications caused by A/B testing

When we run A/B tests on the interface of our tools, this can cause confusion for the people using them and teaching with them. Every test we set up requires finding a balance between the potential confusion caused and the potential improvements made.

All change causes complications (especially in documentation), but the most important thing we can do here is make our testing process highly visible. If our community know we are running tests, and they know why (i.e. to make the tools better), then they can use this story as part of their teaching rather than having it cause problems for them.

Some parts of this testing process can be time-consuming to implement, but let's be careful not to skip the communication part of the process.