QA/e10s Multi
Overview
Purpose
This wiki details the testing that will be performed by QA and other teams for e10s-multi. It defines the overall testing requirements and provides an integrated view of the project test activities. Its purpose is to document:
- Areas of risk
- What will be tested
- How testing will be performed
- Criteria
- Deliverables
- Ownership
- Schedule
- What data will be monitored
The goal is to help ensure the best possible release and long-term maintenance of this feature.
Risk Analysis
Risk area | Requirement | Status |
---|---|---|
Performance | No browser responsiveness regressions | PASS |
Memory Usage | No regressions in comparable system memory usage | PASS |
Stability | No regressions in crash rate | PASS |
Major site compatibility | No regressions at target websites | PASS |
Testing Strategy
Scope
Test Areas | Covered |
---|---|
Private Window | Yes |
Multi-Process Enabled | Yes |
Single-Content Process Enabled | Yes (baseline) |
non-e10s | Yes (baseline) |
Stability | Yes |
UI | |
Interaction (scroll, zoom) | Yes |
Multi-Tab/Window | Yes |
Web Compatibility | |
Testing against target sites | Yes |
Data Monitoring | |
Temporary or permanent telemetry monitoring | Yes |
If it's not listed above, it is currently out of scope.
Objectives
Criteria Description | Metric | Single-Content Process | Multi-Content Process | Criteria Met? | QA Owner |
---|---|---|---|---|---|
Manual testing | Tests passed | Passed in e10s release | results (1) | Yes (2017-04-05) | SV - Andrei |
Unit testing | Automated tests pass | perfherder | perfherder | Yes (2017-05-09) | Gabor |
UI | Tab/Window responsiveness | Jank Analysis | Jank Analysis | Yes (2017-06-02) | tracy |
Memory monitoring | System memory use | Memory Use Analysis | Memory Use Analysis | Yes (2017-06-02) | tracy |
Stability | Crash rate stable | Stability Analysis | Stability Analysis | Yes (2017-05-16) | jimm |
(1) The manual test results include known bugs; none of them are specific to e10s-multi.
Manual Testcases
- Focus on observed performance of target sites notorious for performance issues; broadly, these tend to be webmail services (Gmail), social networks (Facebook, Twitter), or productivity tools (Google Apps, Office 365)
- Test the creation of multiple content processes and the assignment of tabs to those processes
- Check for a reasonable load balance across processes when opening many tabs from various sources (see the sketch below for one way to spot-check this)
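As a rough aid for the load-balance check above, the following sketch counts running Firefox content processes from outside the browser. It assumes the psutil Python package is installed and that content processes can be recognized by a "-contentproc" argument on their command line; the helper name and the recognition heuristic are illustrative rather than official tooling, so verify them on the platform under test (about:support and about:memory remain the authoritative views).

```python
import psutil

# Illustrative heuristic (assumption): Firefox content processes carry a
# "-contentproc" argument on their command line. Verify on your platform.
CONTENT_FLAG = "-contentproc"

def count_content_processes():
    """Count processes that look like Firefox content processes."""
    count = 0
    for proc in psutil.process_iter(["name", "cmdline"]):
        try:
            name = (proc.info["name"] or "").lower()
            cmdline = proc.info["cmdline"] or []
            if "firefox" in name and CONTENT_FLAG in cmdline:
                count += 1
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue  # process exited or is not accessible; skip it
    return count

if __name__ == "__main__":
    # With dom.ipc.processCount at 4 and many tabs open, expect roughly four
    # content processes (plus possible helper processes that this simple
    # heuristic deliberately ignores).
    print("Firefox content processes:", count_content_processes())
```

Repeating the count while opening tabs from different sources gives a quick sanity check that new tabs are spread across processes rather than piling into one.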
Environments
Full testing will be performed using Nightly desktop builds on:
- Windows
- Mac OS X
- Linux
Channel-dependent settings (configs) and environment setup
- On Nightly, dom.ipc.processCount should default to 4 (see the sketch below for one way to check whether a test profile overrides it)
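A minimal sketch for that check, assuming any override of dom.ipc.processCount is written to the profile's prefs.js as a user_pref(...) line; a pref left at its built-in default normally does not appear in that file, so a result of None means the Nightly default of 4 is in effect. The function name and profile path are illustrative.

```python
import re
from pathlib import Path

# Illustrative helper (assumption): overrides are stored as
#   user_pref("dom.ipc.processCount", <n>);
# in the profile's prefs.js. A pref at its built-in default is usually
# absent from the file, so None means "no override, default in effect".
def read_process_count_override(profile_dir):
    prefs = Path(profile_dir) / "prefs.js"
    if not prefs.exists():
        return None
    match = re.search(
        r'user_pref\("dom\.ipc\.processCount",\s*(\d+)\);',
        prefs.read_text(encoding="utf-8"),
    )
    return int(match.group(1)) if match else None

if __name__ == "__main__":
    # Hypothetical profile path; point this at the test profile in use.
    print("dom.ipc.processCount override:",
          read_process_count_override("/path/to/test-profile"))
```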
Project Information
Ownership
Project Manager: Erin Lancaster
Developer contacts: Blake Kaplan, Gabor Krizsanits
QA: Tracy Walker
QA Peer: Andrei Vaida (SV)
Builds
This section will list links to builds that include the feature.
Schedule
The following table identifies the anticipated testing period available for test execution.
Project phase | Start Date | End Date |
---|---|---|
Start project | Q3/2016 | Q3/2017 |
QA - Test plan creation | Q1/2017 | Q1/2017 |
QA - Test cases/Env preparation | Q1/2017 | Q2/2017 |
QA - Nightly Testing | Q1/2017 | Q2/2017 |
QA - Aurora Testing | Q2/2017 | Q2/2017 |
QA - Beta Testing (1) | Q2/2017 | Q3/2017 |
Release Date (1) | FF54 | 2017-06-13 |
(1) Note: Beta testing is targeted at Firefox 54. If everything looks good, e10s-multi could ship to release with Firefox 54 on 2017-06-13; otherwise the target is Firefox 55.
References
- List and links for specs
- Meta bug: Turn e10s-multi on in Nightly