QA/TDAI/TestDev Presentation 2009-09-11

End of Sprint II Review

Welcome to the end of sprint review. We will be reviewing the status of the completed projects for the second sprint of the third quarter. This will follow the same spirit of the earlier sprint review where each developer does a demo and gives a status of their project. Feel free to ask questions, but we will be pressing forward quickly so that we can get through all the projects. If we don't get to your question, feel free to add it to the Notes section beneath each topic below.

This time we are using GoToMeeting for demos. The instructions below are for remote attendees *only* (our demo license limits us to fifteen connections).

1. Go to gotomeeting.com (you must use Firefox 3.5; if you have a Mac, Minefield will NOT work).
2. Allow the application to install itself.
3. Start the application.
4. Put in the meeting number that we give you.

The conference call will be on the normal Mozilla call system, conference room number 304.

  • 650.903.0800 x92, conf: 304
  • 1.800.707.2533 PIN 369, conf: 304

It will be held at 10AM and should finish by 11:30AM (PDT)

The Projects & Presenters

Fennec Log Parser by Joel Maher & Hans Sebastian

  • Notes and Questions
    • In the last few weeks, we have wired it to tinderbox: a cron job pulls the real data in from the tinderbox logs
    • Intermediate post layer to mitigate the performance issues with the large "new failure" query
    • Also have the new failures list - tracks new failures that haven't been seen before.
    • Think about other types of queries that we can pull out of this data now that we have it.
    • Only showing reftest results; some columns have zeros in them, which is due to tests crashing on maemo, devices hanging, etc.
    • mochi* aren't running on maemo
    • xpcshell tests are running, but the way we differentiate checks and tests is messing up our counts for the logs.
    • Need to figure out the proper build step to process the build log, so that we only upload builds that have the full end-to-end run.
    • <Bob> If you're not going to put results up when you have a device crash how do you track that?
      • Aki and the releng crew have some means to track them. What we care about here is tracking new failures on tests, so getting partial results in won't help. As far as tracking the devices, we need to find a better way to flash them more frequently
      • The releng team manages keeping the tests alive. The build team uses the buildbot waterfall to track failing devices.
      • Whether the devices are running would be an interesting piece of data to have.
    • <Henrik> Mozmill results one day - how difficult would it be to use CouchDB views to do the same thing for Mozmill?
      • It's actually easier. The reporting system is already built into mozmill and so there is no real parsing involved. The per test indexes and views have to be written, which isn't hard but isn't done yet.
    • <TODO> Modify the log parser to deal with partial logs (a sketch of one way to detect them follows this list)
    • Are we flushing on every write?
      • Yes, mostly. Something to consider
    • When you parse the tinderbox data, what do you get?
      • Gets the tinderbox logs, which contain everything. If a run crashes, a cell still shows up in tinderbox, but we don't get any test results; we do get the stdout up to that point in time. So for the runs that show zero, did those never attempt to run? No; we parse the summary at the end of the log file rather than the stdout, so if that summary doesn't show up we will not see those tests.
    • We should modify that to be more accurate.
      • Failure is a relative concept... what Mikeal said.
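
A minimal sketch, in Python, of how the partial-log detection and the "new failures" list discussed above might work; the end-of-run marker, the failure pattern, and the function names are assumptions for illustration, not the parser's actual code.

import re

# Assumed end-of-run marker; the real tinderbox logs may use a different
# sentinel to indicate that the suite actually finished.
END_OF_RUN_MARKER = "Running tests: end."
FAILURE_PATTERN = re.compile(r"TEST-UNEXPECTED-(?:FAIL|PASS) \| (\S+) \|")

def parse_log(log_text):
    """Return (is_complete, failures) for one tinderbox log.

    Only logs containing the end-of-run marker count as complete, so
    partial logs from crashed or hung devices can be skipped instead of
    being uploaded with misleading zero counts.
    """
    failures = FAILURE_PATTERN.findall(log_text)
    return END_OF_RUN_MARKER in log_text, failures

def new_failures(failures, known_failures):
    """Failures that have never been seen in earlier runs."""
    return [f for f in failures if f not in known_failures]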

TopCrash Automated Analysis & Reproduction by Bob Clary

  • Notes and Questions
    • Moving to full automation; getting URLs and pushing them to CouchDB is not automated yet (a sketch of what the push step could look like follows this list)
    • Need to add logic about which crashes we care about and how often certain URLs have crashed.
    • Tomcat running on multiple machines/VMs
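
A minimal sketch of what the not-yet-automated push of URLs into CouchDB could look like, using CouchDB's plain HTTP API; the database name, document shape, and server address are placeholders, not the real setup.

import json
import urllib.request

# Placeholder CouchDB location and database name, for illustration only.
COUCHDB_DB_URL = "http://localhost:5984/topcrash_urls"

def push_crash_url(signature, url, crash_count):
    """Store one crashing URL as a document via CouchDB's HTTP API."""
    doc = {
        "signature": signature,    # top-crash signature the URL was reported with
        "url": url,                # page URL associated with the crash reports
        "crash_count": crash_count,
    }
    request = urllib.request.Request(
        COUCHDB_DB_URL,
        data=json.dumps(doc).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",             # POST to the database lets CouchDB assign an id
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())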

Desktop & Mobile QAC by Heather Arthur and Aaron Train

  • Notes and Questions
    • Demos the stats tab and shows the stars; mostly written by Aaron
    • Demos the fennec QAC
      • Something useful: prepopulate the BuildID in the Description field on the fennec QAC

XBL2 Test Harness Plans and Research by Heather Arthur

  • Notes and Questions
    • Same idea as XBL, but more features, simpler to use
    • Has testplan up at wiki.mozilla.org/User:Harthur/XBL2/Testplan
      • most of these tests will be content, so this would be mochitest
      • Has a shadow tree with elements in the DOM that aren't supposed to be accessed or seen.
      • Mikeal - may want to use mozmill because we will have to deal with this in mozmill in order to get to these elements for testing shadowtrees.
    • Tests similar to mochitest, some reftests to test scoping of styles, something to test prefetching

Test Case Manager and Brasstacks Infrastructure by Mikeal Rogers

  • Notes and Questions
    • Moved TCM workflows to project-specific workflows
    • Doesn't have the runner yet.
    • On the product page we want the ability to show the collections and to define which collections we want people to run.
    • Has a tag for root that has a funky tag cloud
    • The runner is coming up next.
    • After the runner is generating some feedback, plan to polish the editor UI
    • How usable is the tag cloud?
      • Could be useful for creating collections?
      • How is the size of the words determined?
      • jQuery does it based on a frequency list
      • Based on the number of testcases that have this tag on this product.
    • Doesn't seem to be usable when people want to run a specific test area.
      • The intended use is for people creating collections
      • But that isn't something people care about when creating collections.
    • The collection idea is nice for putting a bunch of different things together, but the tag cloud doesn't seem that useful when doing this, and having it in the workflow when doing that isn't helpful.
      • Having it as part of the workflow takes away from the collection workflow
    • For a release, say 3.5.7: if there are changes that went in for only the tab portion and I want to make a test suite that focuses only on that portion, will this allow me to do that?
      • Yes. We could also specify a run for the test cases?
      • You'd point people testing 3.5.7 at this collection.
      • Have code in the runner that prioritizes collections for people running a specific version.
  • how do we address forking testcases?
    • We need to be smarter and detect the changes between the forks.
    • Collapse the test cases, and perhaps undo the dupes.
    • Need to test for identical-ness in test cases when we do the import (a sketch of one possible check follows this list).
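
A minimal sketch of the identical-ness check mentioned above, assuming each test case can be reduced to a dictionary of its user-visible fields; the field names and the normalization are assumptions, not the TCM's actual schema.

import hashlib
import json

# Assumed set of fields that decide whether two test cases are "the same";
# the real TCM documents in CouchDB will have their own schema.
IDENTITY_FIELDS = ("summary", "steps", "expected_results", "tags")

def fingerprint(testcase):
    """Stable hash over the identity fields, with whitespace collapsed
    so formatting-only edits do not look like a new test case."""
    normalized = {
        field: " ".join(str(testcase.get(field, "")).split())
        for field in IDENTITY_FIELDS
    }
    payload = json.dumps(normalized, sort_keys=True).encode("utf-8")
    return hashlib.sha1(payload).hexdigest()

def find_duplicates(existing, imported):
    """Map each imported duplicate to the existing test case it matches."""
    known = {fingerprint(tc): tc for tc in existing}
    return {i: known[fingerprint(tc)]
            for i, tc in enumerate(imported)
            if fingerprint(tc) in known}

Hashing normalized fields rather than whole documents means that formatting-only edits on import do not show up as forks.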

Mochitest on WINCE and WINMO by Joel Maher and Clint Talbert

  • Notes and Questions
    • Items on blog
    • 75 tests don't work on Firefox running remote tests
    • Got them running on winmo and wince.
    • On winmo there is a bug loading the tests in the iframe, so we had to switch to running them one at a time, since that loads each test outside the iframe (a sketch of that approach follows this list).
    • Ran a directory of tests on tegra and it worked fine with the remote web server
    • This approach relies on ActiveSync, which hasn't been the most stable option on wince or for releng.
    • The big challenge is how we get rid of ActiveSync.
    • still need python on the device, so cleaning up pythonce.
    • Cleaning up the speed issues and the ActiveSync dependency.
    • Running remotely should make it easy to run across multiple devices, and it does, but it's unclear how many devices can hit one server.
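
A minimal sketch of the one-test-at-a-time approach described above; the device object and its methods are hypothetical stand-ins for whatever drives the winmo/wince device, not the actual harness code.

import time

# Placeholder address of the remote web server that hosts the mochitests.
WEB_SERVER = "http://192.168.1.10:8888"

def run_tests_individually(device, test_paths, timeout=300):
    """Launch each mochitest in its own browser invocation on the device.

    Running one test per invocation works around the winmo bug with
    loading tests in the iframe, at the cost of extra startup time.
    """
    results = {}
    for path in test_paths:
        url = "%s/tests/%s?closeWhenDone=1" % (WEB_SERVER, path)
        device.launch_fennec(url)        # hypothetical device-control API
        finished = device.wait_for_exit(timeout)
        results[path] = "FINISHED" if finished else "TIMEOUT/CRASH"
        device.cleanup()                 # hypothetical: kill leftovers, reset profile
        time.sleep(2)                    # give the device a moment to settle
    return results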

Remote Reftest and Electrolysis Test Infrastructure by Jonathan Griffin

  • Notes and Questions
    • Added animation tests from the W3C SVG1.1 testsuite
    • Some tests are hanging; that's probably bug 483584.
    • Some tests still failing, probably my fault
    • dholbert has some nice methods in his patch queue, which makes testing intermediate animation stages easier
    • Non-SVG attributes will not be supported at this point
    • Some tests to consider: printing, selection, foreignObject, plugins, events
    • Bug 510110 Extend MozAfterPaint using a paint request API - could perhaps be relevant for testing animation and smoothness
    • no dedicated server to host reftests yet for community members to use

JavaScript Animation API Harness and SMIL Testing by Martijn Wagers

  • Notes and Questions
    • SMIL - in reftests we can set the time and take snapshots; a "matrix" to compare with the animation at intermediate steps.
    • moving forward:
      • CSS transitions - parsing has landed but that's it
      • unimplemented SMIL parts
      • printing with SMIL.
      • plugins, foreign objects
      • maybe a test plugin to mimic playing a flash video, etc.

Mozmill Build Integration by Clint Talbert

  • Notes and Questions
    • Mozmill can be added to Buildbot easily (a sketch of a possible Buildbot step follows this list).
    • Couple issues:
      • Log output was a problem (a conflict with Tinderbox); taking the solution from Thunderbird. Mikeal: we could just report to brasstacks instead of Tinderbox and add the report URL to the Tinderbox log.
      • When the application crashes, Mozmill just hangs
    • No Makefile integration
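
A minimal sketch of what a Buildbot step for Mozmill could look like, using Buildbot's standard ShellCommand; the binary path, test directory, and report URL are placeholders, and reporting to brasstacks is the option Mikeal suggested rather than something already wired up.

from buildbot.process.factory import BuildFactory
from buildbot.steps.shell import ShellCommand

factory = BuildFactory()

# Run the Mozmill suite against the freshly built binary; paths and the
# report URL below are placeholders.
factory.addStep(ShellCommand(
    name="mozmill",
    description="running mozmill",
    command=[
        "mozmill",
        "-b", "dist/bin/firefox",                         # placeholder binary path
        "-t", "mozmill-tests/firefox",                    # placeholder test directory
        "--report=http://brasstacks.example.com/mozmill", # placeholder report endpoint
    ],
    timeout=3600,          # guard against the hang-on-crash issue
    haltOnFailure=False,   # let the rest of the build continue on test failures
))

Setting a generous timeout and haltOnFailure=False is one way to keep the hang-on-crash issue from wedging the rest of the build.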

Brasstacks Future

  • Two use cases right now - we have a results server and end-user focused tools.
    • Should we use brasstacks completely for results?
    • Should we use testing.mozilla.org for end-user testing tools?
    • We will take it to the list (general consensus)