Testing apps for mobile devices has always been difficult to tackle, and much of the space has no real industry standard. One option is to hire a third-party company; these often charge thousands of dollars a month while each claims to be the best mobile testing provider out there. Different companies use different platforms, often building in-house tools or a custom framework. Nevertheless, the end goal is the same: verify a mobile application quickly and effectively for functionality and security across a multitude of environments.
The problem is compounded when we consider testing across devices, operating systems, newly released firmware, and multiple languages and locales. The QA team has been investigating mobile automation with the goal of providing scalable automated testing for all high-priority applications.
In the past, everything was done manually. Installing an application meant typing the link to the build into each device, often for ten or more devices one at a time. Once the app was downloaded over an intermittent wifi signal and installed (don’t forget to uninstall older versions first), most apps required a user login or customized location data before a core feature could even be tested.
The scene above may be familiar to other test developers, since it’s also one of the largest bottlenecks in the mobile QA process. The setup overhead is painfully slow, repeating it each week is tiresome, and the probability of human error is high. Things get even worse when a developer comes running over, waving their hands, saying there’s a newer build to test.
So how can this be improved? Automating the mobile device is a critical component of improving quality assurance procedures. It lets a computer perform the repetitive tasks while manual testers focus on other key areas. Automated scripts also scale extremely well: a well-designed setup can be very robust across test preparation, verification, security, load, and performance testing, particularly with handheld mobile devices.
The world is going mobile, and remaining stuck in the dark ages of testing everything manually will eventually drag the development process down with inefficiency and delay. This is why we have defined two very powerful programmable methods of performing automation setup and test execution for Android and iOS, the two most popular platforms in the market today.
Robotium for Android and SuperInstall
With the help of one of our previous co-ops, Harold Treen, we spent a few weeks getting Robotium working alongside adb (Android Debug Bridge). Installing the latest Eclipse IDE for Java EE Developers (we use the Juno platform) along with the latest Android SDK ensures that you have all the necessary binaries (this also keeps setup easier for the dev teams).
We chose Robotium due to its ability to automate without requiring root access to the device. Test cases are written in Java using the Robotium jar libraries, which allows for intelligent scripts that are robust and capable of handling dynamic runtime delays or unexpected popup alerts. Best of all, app installation and test execution can both be invoked from the terminal using adb commands.
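As an illustration of that terminal workflow (not our exact scripts; the APK names and test package here are hypothetical, and `ADB` is overridable so the commands can be previewed without a device attached):

```shell
#!/bin/bash
# Sketch of the adb-driven install-and-test workflow.
# APK and package names are illustrative; override ADB (e.g. ADB=echo)
# to preview the commands without a connected device.
ADB="${ADB:-adb}"

install_and_test() {
  local app_apk="$1" test_apk="$2" test_pkg="$3"
  # -r reinstalls the app in place, keeping its data
  $ADB install -r "$app_apk"
  $ADB install -r "$test_apk"
  # -w waits for the instrumentation run to finish and prints the results
  $ADB shell am instrument -w "$test_pkg/android.test.InstrumentationTestRunner"
}
```

Calling `install_and_test Sofit.apk SofitTest.apk com.example.sofit.test` then covers both halves of the job from a single terminal.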
One issue we had to overcome was setting up Eclipse to build the test project with all the right dependencies. The AndroidManifest.xml file needs to be modified to reflect the correct test package name and targetPackage name, as in the sample below for our Sofit application test case:
<?xml version="1.0" encoding="utf-8"?>
<uses-sdk android:minSdkVersion="10" />
<uses-library android:name="android.test.runner" />
<!-- the instrumentation element names the app under test; the package shown is illustrative -->
<instrumentation android:name="android.test.InstrumentationTestRunner"
    android:targetPackage="com.example.sofit" />
We also had to set up the individual test projects so that they contained all the right Robotium libraries. Frequent visits to the Java build path eventually led to a working environment, as per the image below.
The Java code syntax is very readable and easy to maintain by writing callable functions and reusable methods. Below is a sample test function written to automate the location settings in the Cineplex application for Android:
private void confirmLocation() {
    // Verify the app is prompting for the user's location
    assertTrue(solo.searchText("Please set your location."));
    Log.v(TestTag, "Asserted that we are trying to detect the current location");

    // Wait for detection to finish, then verify the confirmation message
    solo.waitForText("Your location was successfully detected.");
    assertTrue(solo.searchText("Your location was successfully detected."));
    Log.v(TestTag, "Asserted that the location has been set");
}
However, we still faced the issue of installing our app on multiple devices, as well as figuring out how to run our automation test on each one. This was overcome with adb commands in a script named “superInstall”. From the terminal we can now sign a given application file with a debug key and install it to each connected device in parallel. Each phone is ready to test within minutes, saving valuable time and removing human error from the installation process.
superInstall eventually gained options to uninstall a specified .apk file and to run a specified Java test script on the device after installing an app. Installation and basic test runs, which once required a human, are now fully automated, saving significant time and resources in the QA process.
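A minimal sketch of the parallel-install idea behind such a script (the real superInstall also signs the APK with a debug key first; the device-list parsing below uses standard `adb devices` output):

```shell
#!/bin/bash
# Sketch of a superInstall-style parallel installer. The real script also
# signs the APK with a debug key; ADB is overridable for dry runs.
ADB="${ADB:-adb}"

# Pull device serials out of `adb devices` output: skip the header line and
# keep only rows whose state is "device" (connected and authorized).
list_serials() {
  awk 'NR > 1 && $2 == "device" { print $1 }'
}

super_install() {
  local apk="$1"
  for serial in $($ADB devices | list_serials); do
    $ADB -s "$serial" install -r "$apk" &   # install on each device in parallel
  done
  wait  # block until every background install has finished
}
```

Backgrounding each `adb -s … install` and then calling `wait` is what turns a ten-device chore into a single command.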
One key point: result output should be mandatory for any test environment. Since our tests are written in Java, we use Logcat to collect log information on each run, and our script outputs the automation results to a file uniquely identified by the device it ran on. Some tools we looked at did not provide this. What’s the point of running a test and never knowing whether it passed or failed? And if it failed, where and why? Log files are absolutely necessary for post-analysis of a test run.
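Per-device result collection can be sketched as follows (the log tag and file-naming scheme are illustrative, not our actual conventions):

```shell
#!/bin/bash
# Sketch of per-device Logcat collection; the filter tag and filename
# scheme are illustrative. ADB is overridable for dry runs.
ADB="${ADB:-adb}"

# Build a unique result filename from the device serial and a date stamp,
# so runs on different phones never clobber each other.
result_file() {
  local serial="$1" stamp="$2"
  echo "results_${serial}_${stamp}.log"
}

collect_results() {
  local serial="$1"
  # -d dumps the current log buffer and exits; -s silences everything
  # except the given tag, leaving only our test's output
  $ADB -s "$serial" logcat -d -s "SofitTest" \
    > "$(result_file "$serial" "$(date +%Y%m%d)")"
}
```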
Since the tests are written in Java, we can define intelligent scripts that verify behaviour, UI context, and expected values, and even handle unexpected dialogs using event-driven behaviour. This cannot be achieved with gesture-based recorded scripts. The procedural nature of such recorded UI tests doesn’t allow for reordering of tests, handling of unexpected delays, or interruption by popups or push alerts, so they fail needlessly and report spurious failures more often than desired.
Robotium tests written in Java go a long way toward eliminating repetitive manual testing while accommodating app changes from build to build. Creating the core test case in Eclipse takes the most time, at most a day or two. Once the script is complete, however, running it against successive build releases from any Mac terminal requires negligible effort for such a valuable form of test execution and verification.
UI Automation for iOS and SuperIOSinstall
We wanted an equivalent solution for iOS. This proved very challenging, since iOS is inherently locked down and not as open to third-party tooling as Google’s Android. This is well known across the industry, which is why many of the available third-party solutions run only on jailbroken iOS devices. One of our requirements at Xtreme Labs is to avoid jailbreaking our phones whenever possible: jailbreaking voids warranties, opens up security issues, and can even change the behaviour of the device under test. We release important mobile software to clients all across North America, and testing on rooted phones isn’t acceptable.
The final step was finding a way to install apps in parallel to all connected iOS devices, the way superInstall does. Instead of reinventing the wheel, we turned to the web and found a script called fruitstrap that did exactly that. It also supports listing iOS devices and uninstalling applications, but only for one device at a time. We simply wrapped it in a bash script that calls it in parallel for all connected iOS devices. At this time, we are still trying to get this working on iOS 6 due to changes in the way the UDID can be utilized.
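The wrapper idea looks roughly like this. Since fruitstrap targets one device per invocation, we background one call per UDID and wait; the `-i`/`-b` flags reflect the fruitstrap build we used and may differ in other forks, so treat them as an assumption:

```shell
#!/bin/bash
# Sketch of a parallel wrapper around fruitstrap. fruitstrap handles one
# device per invocation, so we launch one background call per UDID.
# The -i (device id) and -b (bundle) flags are assumptions about the
# fruitstrap fork in use; FRUITSTRAP is overridable for dry runs.
FRUITSTRAP="${FRUITSTRAP:-fruitstrap}"

install_on_all() {
  local bundle="$1"; shift   # remaining arguments are device UDIDs
  for udid in "$@"; do
    $FRUITSTRAP -i "$udid" -b "$bundle" &
  done
  wait  # block until every install has finished
}
```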
Finally, Instruments allows UI Automation test cases to be called from the terminal bash shell by specifying the desired UDID to run on. One issue that we repeatedly faced was that Instruments does not allow multiple instances to be run. We had to completely close Instruments before using the following command to run our test from the bash shell:
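The invocation takes roughly the following shape; the trace-template path, app name, and script name are illustrative, and the exact arguments vary across Xcode versions:

```shell
#!/bin/bash
# Sketch of driving UI Automation via the Instruments CLI. Template, app,
# and script names are illustrative, and arguments vary by Xcode version.
# INSTRUMENTS is overridable so the command can be previewed without Xcode.
INSTRUMENTS="${INSTRUMENTS:-instruments}"

run_ui_automation() {
  local udid="$1" app="$2" script="$3"
  # The Automation.tracetemplate location differs between Xcode releases
  # (assumption: it is reachable by this short name or a full path).
  local template="Automation.tracetemplate"
  # -w selects the device by UDID; -t names the trace template;
  # UIASCRIPT points Instruments at the JavaScript test to execute.
  $INSTRUMENTS -w "$udid" -t "$template" "$app" -e UIASCRIPT "$script"
}
```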
This also means that one drawback of this approach is that we cannot run tests in parallel the way we can for Android. Furthermore, after upgrading to Xcode 4.5, we noticed that we could successfully run test cases on iOS 6 devices from the console, but not on earlier iOS versions. Others on the web have reported the same problem, and we will be following the issue closely.
Although this is still a work in progress, we preferred this form of automation over the alternatives because it lets us script test code in an environment that can be invoked from the terminal shell, all without jailbreaking the device under test. Moving forward, we will continue to look into ways of automating the test execution itself. We are evaluating how automated device testing should link to our continuous integration servers so that automated device runs can happen without human intervention.