Auto-tools/Goals/2011Q3
Goals
This time around, we are limited to three goals. Coming up with three goals that are general enough to cover what we do yet specific enough to actually be goals is not easy, so beneath each goal I list the projects I think it covers.
- [DONE] Support developers by reducing the overall end-to-end time from push to test results and by finalizing tools for crash analysis and streamlining sheriffing.
  - [MISSED] bughunter
  - [DONE] gofaster
  - [DONE] WOO
ANALYSIS: Calling this one done even though bughunter is marked as missed. Our goal with bughunter was to deliver a UI by the end of the quarter; that UI has landed in a patch but is not yet deployed. We are about 95% of where we wanted to be on this goal.
- [MISSED] Support future development efforts (E10S, new automation harnesses, etc.) by leading the effort to make our tests E10S-ready and by ensuring we have concrete building blocks for new harnesses to ease harness-automation integration issues.
  - [MISSED] SpecialPowers/test moving (i.e., getting rid of enablePrivilege)
  - [DONE] Mozbase
  - [DONE] webux/autolog (as part of our tool-chain integration)
  - [NEW] Any new harnesses we need (Rust?) (decided that new harnesses weren't a priority this quarter, except for user responsiveness)
  - [DONE] bugzilla improvements: these aren't called out explicitly, but I put them in this category because the improvements we're doing are in support of future development.
ANALYSIS: SpecialPowers was a huge part of our goals this quarter. Unfortunately, one of our SpecialPowers patches uncovered an unrelated memory leak in the platform, and further SpecialPowers work has been blocked on getting that bug resolved. Once it is resolved, we can land the three dependent patches, which will put us 90% of the way to completing our SpecialPowers goal; I'm reserving the remaining 10% for a necessary find/fix step. Because this was a core element of the goal, I'm marking the entire goal as a miss, even though good work was done on Mozbase, autolog, and bugzilla.
- [DONE] Continue to improve our performance testing: complete the addons performance testing service for startup time, create a plan with Release Engineering to improve the talos deployment process, and codify new approaches to performance measurement for user responsiveness.
  - [DONE] Eideticker
  - [DONE] talos addons
  - [DONE] talos xperf
  - [ON TRACK] talos better deployment
  - [ON TRACK] talos-izing user responsiveness metric for E10S
ANALYSIS: Marking this as done. We completed the talos xperf and addons projects. We made great progress on creating a talos suite for user responsiveness: that code is now in testing, and we're waiting on the final equation from the developers for use in the "official" test. We got video capture for Eideticker working and are already working toward integrating that capture with talos tests; that work will continue in Q4. While we did not reach any lasting conclusions on streamlining talos deployment in general, we did perform three talos deployments this quarter that went quite well, so the current process seems to be working. I believe the spirit of this goal has been met, and much of this ongoing effort will continue into Q4.