Apps/QA/Top Apps Analysis Strategy
Note: The following document is a draft.
Overview
The top apps analysis during preparation for Mobile World Congress provided high-value insight not only into the different app experiences themselves, but into whether the desktop and mobile experiences allow the apps to be used effectively. It also surfaced underlying platform capabilities that our app platform needs to support, and whether it currently supports them. Going forward after Mobile World Congress, client quality assurance shall include exploratory testing of the apps themselves to develop ratings and rationale, tracking of bugs applicable to top apps, manual sanity test cases covering platform and top app experience scenarios, and dogfooding.
Exploratory Testing
Exploratory testing of the top apps allows us to capture platform issues that existing test cases may miss, discover new platform test cases, and evaluate the overall apps experience with real customer apps. Exploratory testing of top apps is typically done with tier 1 apps, and with other apps as requested by Ron. To conduct an exploratory test run of top apps, use the following approach:
- Select an app from the list of top apps (typically a tier 1 app)
- Do the following with different desktop (e.g. Win 7, Mac OS X, Win XP) and Android operating systems (e.g. 2.3, 3.2, 4.0):
- Install a build of Firefox with web apps enabled
- Install the app online here
- Note: If the app does not exist at the above link, then the app needs to be faked out by setting a URL as an app
- For Soup, you'll need to add a fake app manifest here
- For desktop this is currently not supported
- Launch and play with the app
- If the app has account management, log in to the app using one of our sample accounts here
- Navigate around the app through clicking links and pressing the back button
- If the app has audio or video, play audio and video
- Try to navigate outside of the scope of the app within the app to see what happens
- If the app allows you to upload files, try uploading a file
- Evaluate the app's quality experience using the following template here on phone, tablet, and native/web desktop.
- Log bugs for platform issues discovered while using the app that affect the desktop or mobile experience
- Notify the developers and Ron of this report, indicating what the developers and the partner each need to do to improve app quality
- Document any new platform test cases discovered that are not covered in existing platform app test cases
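When an app has to be faked out, a manifest for it must be supplied by hand. The sketch below shows what such a fake manifest might look like, loosely following the Open Web Apps manifest format; the app name, URL, and icon path are illustrative placeholders, not values from any real top app:

```python
import json

# Build a fake app manifest that wraps an existing site as an app,
# roughly following the Open Web Apps manifest format. All field
# values here are placeholders for illustration.
def make_fake_manifest(name, site_url):
    """Return a manifest dict suitable for faking a site out as an app."""
    return {
        "name": name,
        "description": "Fake manifest for exploratory testing of %s" % name,
        "launch_path": "/",
        "icons": {"128": "/img/icon-128.png"},  # placeholder icon path
        "developer": {"name": "QA (fake entry)", "url": site_url},
        "installs_allowed_from": ["*"],
    }

manifest = make_fake_manifest("Example App", "https://example.com")
print(json.dumps(manifest, indent=2))
```

The resulting JSON can then be hosted wherever the fake manifests for Soup are added.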
Bug Tracking
All ongoing tracking for top apps will be done using whiteboard entries in Bugzilla:
- topapps: Indicates that a top app will experience this bug
- gecko: In combination with the topapps entry, this indicates that the top app has a bug in the gecko mobile platform
- website-compatibility: Indicates that there is a difference in the app rendering on webkit vs gecko
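The whiteboard field in Bugzilla is free-form text, so queries over these keywords depend on a consistent convention. Assuming the common square-bracketed style (e.g. "[topapps][gecko]", an assumption rather than anything mandated above), a small sketch of filtering bugs by these entries:

```python
import re

# The Bugzilla status whiteboard is free-form text; this sketch assumes
# the convention of square-bracketed tags, e.g. "[topapps][gecko]".
def whiteboard_tags(whiteboard):
    """Extract bracketed tags from a whiteboard string, lowercased."""
    return [t.lower() for t in re.findall(r"\[([^\]]+)\]", whiteboard)]

def is_gecko_topapp_bug(whiteboard):
    """True if the bug is a top-apps bug on the gecko mobile platform."""
    tags = whiteboard_tags(whiteboard)
    return "topapps" in tags and "gecko" in tags

print(whiteboard_tags("[topapps][gecko]"))  # → ['topapps', 'gecko']
```

A tool pulling bugs from Bugzilla could apply these helpers to each bug's whiteboard field to build per-keyword reports.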
For gecko-related issues, tracking will occur in the following areas in Bugzilla:
- Product - Core: For layout issues regarding gecko that need improvement for the apps experience
- Product - Fennec Native: For issues specific to gecko on mobile. These are split up by two components:
- General: For mobile layout issues regarding gecko on mobile
- Evangelism: For problems with apps on gecko that the app developer needs to fix
For tracking more general app-related issues not related to gecko, one option under consideration is tracking evangelism issues in the Web Apps product in Bugzilla. This needs more discussion to determine whether that is the right location.
Platform Test Cases
Platform test cases will be used to analyze the underlying core of the native app experience and capture whether a web app runs correctly in our environment. In the past, this area has not received much analysis. However, during Mobile World Congress preparation testing we discovered many problems while using the apps, such as certain links failing to load, HTML dumps appearing on screen, and files failing to upload. Going forward, apps testing shall include these platform test cases, captured in a document and used during test runs. Avenues for generating platform test cases include exploratory testing of top apps and developing a better understanding of the HTML, CSS, and JavaScript platform our developers are using. From these avenues, a set of minimized platform test cases needs to be generated as apps, following the ideas captured, to allow testing in both the desktop and native environments. As a result, an app directory page will be needed to host these platform test cases for testing purposes. The app directory shall be constructed by hosting an app directory on the staging environment and hosting test cases as subdomains of the staging environment (e.g. testcase1.staging.com, testcase2.staging.com). The apps in this app directory will link directly to the test cases in the platform test cases document being tracked.
Scenario Testing
Scenario tests will be used with the top apps during desktop and mobile testing to simulate real use of the application. Examples being considered for scenario tests follow the model shown at the bottom of the MWC Pod Demo Script. To create a scenario from a top app, use the following process:
- Install the latest build of Soup and ensure you have a build of Firefox allowing for desktop apps testing
- Install the app you wish to generate a scenario for, on desktop and mobile respectively
- If the app is not available through any avenue, such as being hosted on an app directory here, then contact Ron to find out if the app developer has a manifest hosted for the app
- If there is no app manifest hosted, contact Ron about working with the developer to get one. Faking websites as apps is being considered as a long-term solution
- Exploratory test the application to determine the use cases that different stakeholders exercise
- Note: Collaborate with Jen Arguello on this to determine these use cases
- Add each captured use case as a test case in the list of scenario test cases
Upon completing development of a scenario, it will be added to the list of scenario test cases for top apps. These test cases will be used in conjunction with manual test cases on the system to determine the quality state of the desktop and mobile apps experience when a test run is required. If any of these scenarios fail, an investigation must take place to determine whether the problem lies with our system or with the app itself. If the problem exists in our system, bugs should be logged accordingly using the topapps whiteboard keyword. If the problem exists in the app, write up detailed feedback about the underlying app issues and send that feedback to Ron, who will then follow up with the developer. As questions come back from the developers on the feedback, more feedback may need to be provided.
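The use cases captured in the steps above could be recorded in a lightweight structured form so scenario test cases stay comparable across apps. The field names below are illustrative assumptions, not an agreed schema:

```python
from dataclasses import dataclass, field

# Illustrative record for one scenario test case built from a captured
# use case; the field names are assumptions, not a fixed schema.
@dataclass
class ScenarioTestCase:
    app: str                                       # top app the scenario exercises
    use_case: str                                  # real-world use captured in exploratory testing
    steps: list = field(default_factory=list)      # ordered manual steps
    platforms: list = field(default_factory=list)  # e.g. ["desktop", "mobile"]

scenario = ScenarioTestCase(
    app="Example App",  # placeholder name
    use_case="Log in and play a video",
    steps=["Install the app", "Log in with a sample account", "Play a video"],
    platforms=["desktop", "mobile"],
)
print(scenario.use_case)
```

A list of such records would serve as the scenario test case list referenced above, with one entry per captured use case.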
Going into the future, we'll want to track evangelism issues with apps that require communication with developers. The mechanism for tracking this is still being discussed.
Dogfooding
Dogfooding needs to be emphasized throughout the company and community to increase testing coverage, user experience analysis, and other evaluation of the apps infrastructure, so we can better understand how to put these apps into practical use. To establish this practice, we'll need to work with users in both the company and the community to understand what apps they use and in what ways. Then, when the product reaches a stable enough level, we can let those users put the apps to practical use to see what works well and what does not in our Apps infrastructure. With an understanding of the apps' practical use in our infrastructure, we'll better understand how to get users to actively use web apps in a practical sense.