QA/Execution/Web Testing/Goals/2016/Q2
From MozillaWiki
Web QA Q2 2016 Goals
Discussion Etherpad here: https://public.etherpad-mozilla.org/p/webqa-2016-q2-goals-brainstorm
Team Goals and Themes
Previous Themes
- Move our test-automation code into each project's main development repository to enable faster feedback and visibility for developers
- Prototyping the future with new tools, skills, processes
New Themes
- Improve consistency and stability of automated tests
- Reduce dependency on buildmaster role
Individual Goals
Krupa
- Work with devs and ops to review and improve AMO's release process
- Work on the WebExtensions release for Firefox 48 to ensure cross-platform coverage
- Check if there are opportunities to engage with Chrome WebExtensions before the 48 release. Need to discuss with Amy and Dan before committing to this.
Stephen
- Get a Dockerized OWASP ZAP (CLI) instance up and running in Web QA's Jenkins against a staged instance of one of our key sites (AMO, Mozilla.org, or MDN), triggered on a cron schedule and/or on demand
- Document the goals, process, and progress (e.g. in blog posts) to help raise awareness
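A minimal sketch of what the Jenkins job's shell step could look like, assuming the official owasp/zap2docker-stable image and its baseline scan script; the target URL is a placeholder for whichever staging site is chosen:

```shell
# Hypothetical Jenkins shell step: run ZAP's baseline (passive) scan from
# the official Docker image against a staged site. The URL is a placeholder.
docker pull owasp/zap2docker-stable
docker run -t owasp/zap2docker-stable zap-baseline.py \
    -t https://staging.example.org \
    -m 5   # spider for at most 5 minutes before the passive scan
```

zap-baseline.py exits non-zero when alerts are raised, which would let Jenkins mark a cron or on-demand build as failed.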
Dave
- Set up UI functional tests for the add-ons website to run against pull requests
- The UI functional tests currently live in https://github.com/mozilla/Addon-Tests and are run against deployed instances of the add-ons website. This deliverable means the UI functional tests will run whenever a contributor submits a patch for consideration, against an instance of the application that includes the change. This will shorten the feedback loop for failures and help prevent regressions from being introduced.
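As a rough sketch, the per-pull-request CI step might look like the following; the per-PR deployment URL and the `--baseurl` option are assumptions based on the suite's pytest setup, not confirmed project configuration:

```shell
# Hypothetical CI step for a pull request build of Addon-Tests.
# The target URL is a placeholder for a deployment that includes the change.
pip install -r requirements.txt
py.test --baseurl=https://addons-pr.example.com tests/
```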
Rebecca
- Ramp up and own Shavar deployments
- Learn Docker and run containers locally; assist with the Docker-ization of One and Done
Matt
- Dockerize One and Done
- MDN - Create a community-oriented test plan
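A rough sketch of what the One and Done Docker-ization could look like once a Dockerfile lands in the repository; the Dockerfile itself is the deliverable, and the image name and port below are placeholders:

```shell
# Hypothetical workflow: build and run One and Done (a Django app) in a
# container. Assumes a Dockerfile at the repository root; names and the
# port mapping are placeholders.
git clone https://github.com/mozilla/oneanddone.git
cd oneanddone
docker build -t oneanddone .
docker run -d -p 8000:8000 oneanddone
```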
Open-ended Questions for Q2
- How can we increase community contribution?
- One and Done: import good first bugs
- Bugsahoy
- Creating Outreachy tasks for evaluating candidates
- Clean up the backlog on the test-automation bugs dashboard (and GitHub issues)
- How can we improve GitHub issue and review discovery?
- The Gaia team has experience here; it might help to talk with them
- Use Bugzilla as much as possible, primarily for history and discoverability
- Our dashboard could track Bugzilla components as well as GitHub issues, to combine the two views
- Justin Potts had a redesign that might help
- TestRail test plans could generate both manual and automated test cases