Mobile/Fennec Unittests
Overview
Fennec unittests have evolved over time. Originally we ported the tests to run entirely [on device] by installing Python on the phone (a Nokia N810) and running the tests there.
More recently we have found that running everything on the device is not always possible, or the best idea, so we have started running the tests through [Remote Testing].
The last piece of this puzzle is to make [Reporting] useful by getting the failure count down to zero.
General
Fennec will run all the unittests that come with Firefox, with a few exceptions. We are interested in:
- Mochitest
- Mochitest-Chrome
- Mochitest-Browser-Chrome (used for fennec specific tests)
- Reftest
- Crashtest
- XPCShell
- [NSPR]
In general there will be some tests that are unique to Fennec and will not run on Firefox, for example tests that automate the tab strip, the bookmark manager, or panning and zooming. We will store all the Fennec-specific unittests in [mobile-browser/chrome/tests].
On the flip side, there are tests specific to Firefox that we will want to exclude from Fennec. For example:
- rss feeds (not supported for Fennec at the moment)
- browser-chrome and some chrome tests (chrome elements are different)
- private browsing (not supported for Fennec at the moment)
These excluded tests are tracked in bug 464081. Ideally they will not be included in 'make package-tests'. This will be done by moving the tests inside #ifdef's that exclude them: whatever mechanism removes a feature's source (RSS feeds, for example) also removes its tests. In general, all the tests we don't want should live in browser/, since we should have no problem sharing the rest of the code.
On Device
Originally we battled to get the tests running outside of a build tree and to work around the device's limited resources. [On Device] testing is done on the Maemo platform (the N810 and now the N900), and our automated Tinderbox runs use this method.
These techniques also remain the most reliable way to run performance tests on a device.
Remote Testing
This approach was designed as a solution for Windows Mobile, where we had no reliable way to run Python or a local web server, and not even enough resources to try.
This method requires a small, lightweight agent which runs on the device, while all the tests run on a host machine (such as your desktop or an objdir). The test harnesses have options to run through a proxy (devicemanager.py) which talks to the agent on the device. The tests themselves have also been adjusted to work with an arbitrary web server (your desktop, set up via the test harness scripts).
[Here are the details] of requirements, setup, usage, and remaining work items.
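To make the flow concrete, here is a rough host-side sketch of what such a proxy does. This is illustrative only: the class, method names, and one-line wire protocol below are invented for this example and are not the real devicemanager.py API.

    import socket

    # Illustrative stand-in for the host-side proxy; the real devicemanager.py
    # speaks the actual agent protocol, which differs from this simplification.
    class AgentProxy:
        def __init__(self, host, port):
            self.addr = (host, port)

        def _command(self, line):
            # One TCP round trip per command: send a line, read the reply
            # (a single recv is good enough for a sketch).
            with socket.create_connection(self.addr) as sock:
                sock.sendall(line.encode() + b"\n")
                return sock.recv(65536).decode()

        def push_file(self, remote_path, data):
            # A real agent would stream the file bytes after the command;
            # that part is elided here.
            return self._command("push %s %d" % (remote_path, len(data)))

        def exec_process(self, cmd):
            return self._command("exec %s" % cmd)

A harness built on something like this would push the build and test files, launch Fennec pointed at the web server running on the desktop, and then pull the log file back for reporting.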
Desktop
This is the easiest setup to start working with, and things should work well on desktop before moving to a device.
If you are on Windows, you need the Mozilla build tools found on the [Windows Prerequisites] page, available as a [.exe installer].
To get started with unittests, download a Fennec build and a tests build. These can be found on the [ftp server]:
- [Windows Desktop Fennec Build]
- [win32 tests.tar.bz2]
- NOTE: this is the regular desktop tests package; we don't build win32 tests specifically for Fennec, and there are very few differences between the test packages
Next, unpack these to a local directory such as c:\tests, so that you end up with a c:\tests\fennec, c:\tests\mochitest, c:\tests\reftest, c:\tests\xpcshell, etc. directory structure.
Now, in your shell (use c:\mozilla-build\start-msvc*.bat to get mingw32 on Windows), cd to c:\tests and start running tests; a convenience script that runs all four suites follows the list:
- mochitest
- python mochitest/runtests.py --appname=fennec/fennec.exe --xre-path=fennec/xulrunner --certificate-path=certs --utility-path=bin --autorun --logfile mochitest.log --close-when-done
- reftest
- python reftest/runreftest.py --appname=fennec/fennec.exe reftest/tests/layout/reftests/reftest.list
- crashtest
- python reftest/runreftest.py --appname=fennec/fennec.exe reftest/tests/testing/crashtest/crashtests.list
- xpcshell
- python xpcshell/runxpcshelltests.py --manifest=xpcshell/tests/all-test-dirs.list fennec/xulrunner/xpcshell.exe
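If you run these suites repeatedly, a small driver script saves retyping. This is an unofficial convenience sketch, not part of the harnesses; it assumes the c:\tests layout above and that you launch it from c:\tests.

    import subprocess

    # The commands below are exactly the ones listed above.
    SUITES = {
        "mochitest": [
            "python", "mochitest/runtests.py", "--appname=fennec/fennec.exe",
            "--xre-path=fennec/xulrunner", "--certificate-path=certs",
            "--utility-path=bin", "--autorun", "--logfile", "mochitest.log",
            "--close-when-done",
        ],
        "reftest": [
            "python", "reftest/runreftest.py", "--appname=fennec/fennec.exe",
            "reftest/tests/layout/reftests/reftest.list",
        ],
        "crashtest": [
            "python", "reftest/runreftest.py", "--appname=fennec/fennec.exe",
            "reftest/tests/testing/crashtest/crashtests.list",
        ],
        "xpcshell": [
            "python", "xpcshell/runxpcshelltests.py",
            "--manifest=xpcshell/tests/all-test-dirs.list",
            "fennec/xulrunner/xpcshell.exe",
        ],
    }

    for name, cmd in SUITES.items():
        print("=== running %s ===" % name)
        # Keep going even if one suite fails so the others still run.
        subprocess.call(cmd)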
Desktop (Build Tree)
TODO
Maemo
Automation on Maemo is fairly straightforward. Two major changes were needed to get it working:
- tests running outside of the source tree - bug 421611 - resolved
- splitting tests into smaller chunks - resolved by maemkit (see the sketch below)
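The chunking idea itself is simple: split the full set of test directories into fixed-size chunks and drive each chunk as a separate harness invocation, so a single crash or hang only loses a fraction of the run. A minimal sketch of the idea (the function and chunk size are illustrative, not maemkit's actual code):

    def chunk(test_dirs, chunk_size=20):
        """Yield fixed-size chunks of a list of test directories."""
        for i in range(0, len(test_dirs), chunk_size):
            yield test_dirs[i:i + chunk_size]

    # Each chunk is then handed to its own harness invocation, so a hang in
    # one chunk does not take down the whole multi-hour run.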
Other issues that have surfaced include the need for fonts (Hebrew fonts are required for reftest, bug 471711) and the need for multiple devices to run the tests faster (one device takes 26 hours in debug, 12 hours in release).
- Automation [timeline table] and reference
- Notes on how to run automation on Fennec:
- [Mochitest]
- [Chrome]
- [Reftest / Crashtest]
- [XPCShell]
- Tracking bugs for [failures] on Fennec:
- Mochitest - bug 473558
- Chrome - bug 473562
- Reftest - bug 473564
- Bug to track issues while getting tests running in tinderbox - bug 495164
What is left:
- Stabilizing the results that are generated from tinderbox.
- Fixing the existing bugs.
- Investigating all unknown failures.
- Providing an out-of-band toolset to diff results between runs (see the sketch below)
- it is difficult to compare a run of roughly 75000 tests against another by hand
- we need to establish a baseline (which will keep moving)
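As a starting point for such a toolset, a diff can be as simple as comparing the sets of failing test names extracted from two log files. A hypothetical sketch, assuming logs that use the harnesses' TEST-UNEXPECTED-FAIL marker:

    import sys

    def failures(logfile):
        """Collect the names of tests marked as unexpected failures."""
        failed = set()
        with open(logfile) as f:
            for line in f:
                if "TEST-UNEXPECTED-FAIL" in line:
                    # Lines look like: TEST-UNEXPECTED-FAIL | <test> | <message>
                    parts = line.split("|")
                    if len(parts) > 1:
                        failed.add(parts[1].strip())
        return failed

    old, new = failures(sys.argv[1]), failures(sys.argv[2])
    print("newly failing:", sorted(new - old))
    print("newly passing:", sorted(old - new))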
Windows Mobile
This is no longer under active development, and there are no working builds of Windows Mobile Fennec. We are leaving this section here in case we decide to pick Windows Mobile back up if the OS changes.
We went down a couple of paths here, but ended up with the Remote Testing solution. It is fully implemented for Windows Mobile, with a fully functioning agent and working tests.
Android
Under initial development, for both the browser and the tests.
Reporting
Currently there are thousands of test failures when running the Firefox tests on Fennec. With so many failures it is next to impossible to detect whether a checkin caused a new test to fail; as a result, nobody really looks at the results of the automation.
A few months ago we sat down and decided to fix all the issues with [reftest, crashtest, and xpcshell]. We have made great progress on this, but still have a long way to go.
Once that is done, we need to revisit the mochitest, chrome, and browser-chrome tests. In the short term, the technique we will use is [filtering out failing tests] and only running the ones that pass.
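A minimal sketch of that filtering step, assuming a plain-text manifest with one test per line and a hand-maintained known-failures file (both file names here are hypothetical):

    def filter_manifest(manifest_path, known_failures_path, out_path):
        """Copy a manifest, dropping any test listed in the known-failures file."""
        with open(known_failures_path) as f:
            skip = set(line.strip() for line in f if line.strip())
        with open(manifest_path) as src, open(out_path, "w") as dst:
            for line in src:
                if line.strip() not in skip:
                    dst.write(line)

    # e.g. filter_manifest("all-test-dirs.list", "known-failures.list",
    #                      "filtered-test-dirs.list")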