B2G/QA/Test Plan Review

Feature Test Cases Review

Overview

The following document summarizes the process for executing an effective review of test cases for B2G features.

Internal QA Workflow

Internally, you should aim to have at least one reviewer examine your test cases against two main themes:

  • Quality of the test cases themselves
  • Understandability of the test cases


Quality of the test cases examines whether the test cases for the feature sufficiently cover the happy path and negative cases for each user story. Understandability of the test cases examines whether a person who did not create the test cases can understand them well enough to run them without ambiguity.

To set up a review for feature test cases, generate a MozTrap query containing the relevant test cases for the feature and provide it to the reviewer (see the sketch after this list for one way to pull those cases programmatically). The reviewer can then send their review results in the following form:

  • Quality of test cases - review+ for pass, review- for needs work
  • Understandability of test cases - review+ for pass, review- for needs work
  • Additional comments explaining rationale behind review decisions
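
One practical way to pull the relevant cases is a small script against MozTrap's REST API. The sketch below is illustrative only: the /api/v1/caseversion/ endpoint path, the case__suites__name filter, and the assumption that the feature's cases are grouped into a suite named after the feature are all assumptions to verify against your own MozTrap instance.

  import json
  import urllib.parse
  import urllib.request

  # Assumed MozTrap endpoint -- verify the path against your instance.
  MOZTRAP_API = "https://moztrap.mozilla.org/api/v1/caseversion/"
  SUITE_NAME = "Dialer"  # hypothetical suite grouping the feature's cases

  def fetch_case_titles(suite_name):
      """Return the titles of the test cases in the given suite."""
      params = urllib.parse.urlencode({
          "format": "json",
          "case__suites__name": suite_name,  # assumed filter name
          "limit": 0,                        # Tastypie convention: no paging
      })
      with urllib.request.urlopen(MOZTRAP_API + "?" + params) as resp:
          data = json.load(resp)
      return [case["name"] for case in data["objects"]]

  if __name__ == "__main__":
      for title in fetch_case_titles(SUITE_NAME):
          print(title)

The resulting list of titles can double as a checklist to hand to the reviewer alongside the MozTrap query itself.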


Here are guidelines for recognizing when improvements are needed (i.e. when to issue a review-):

  • If only one or a few happy path cases are covered and other obvious non-happy path cases are not called out, issue a review- so that test case quality can be improved
  • If the test cases are vague about what they are intending to do, issue a review- so that test case understandability can be improved


Note: One reviewer is the minimum requirement, but that should not stop you from getting more reviewers to look at your test cases if you think there's value in having more reviewers.

External Workflow

Externally, you should aim to gather feedback on your feature from reviewers in the following roles:

  • Developer Lead
  • UX Lead
  • Product Lead


The feedback to seek from these parties is a high-level review of your test case definitions, validating that they sufficiently cover the known development code flows, UX flows, and product requirements. The details and understandability of the test cases are not worth exposing to these parties: that ground is already covered by the internal QA review workflow, and the extra overhead could reduce the chance that the review actually takes place.

To set up a review of test cases with external parties, generate a high-level list of the test cases, titles only, in an etherpad or shared document that the external parties can interact with (see the sketch after this list for one way to generate such a list). Each external party can then send their review results in the following form, on a per-role basis:

  • Development Code Flow Coverage by Developer Lead - review+ for sufficient coverage, review- for coverage needing improvement
  • UX Flow Coverage by UX Lead - review+ for sufficient coverage, review- for coverage needing improvement
  • Requirements Coverage by Product Lead - review+ for sufficient coverage, review- for coverage needing improvement
  • Additional comments explaining rationale behind review decisions
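
If you already have the case titles (for example, from the MozTrap sketch above), a few lines of Python can render a paste-ready etherpad skeleton. The role labels and the "review?" placeholder below are illustrative conventions for this sketch, not a mandated format.

  REVIEW_ROLES = [
      "Development Code Flow Coverage (Developer Lead)",
      "UX Flow Coverage (UX Lead)",
      "Requirements Coverage (Product Lead)",
  ]

  def build_etherpad_text(feature, titles):
      """Render a titles-only checklist plus per-role review lines."""
      lines = ["%s - Feature Test Case Review" % feature, ""]
      lines += ["* %s" % title for title in titles]
      lines.append("")
      # Each lead replaces "review?" with review+ or review- and comments.
      lines += ["%s: review?" % role for role in REVIEW_ROLES]
      return "\n".join(lines)

  if __name__ == "__main__":
      titles = [
          # Placeholder titles -- paste your feature's MozTrap output here.
          "Dialer - place an outgoing call",
          "Dialer - reject an incoming call",
      ]
      print(build_etherpad_text("Dialer", titles))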


Note: One reviewer per role (development, UX, product) is the minimum requirement, but that should not stop you from getting more reviewers to look at your test cases if you think there's value in having more reviewers.