Adaptive Test Case technology accommodates the rapid and inevitable changes and modifications that will arise in networks and handsets.

By David Haggerty, TestQuest Inc.

Creating a robust platform upon which to deliver mobile and cellular services and applications presents a myriad of implementation challenges across the entire value chain. From a handset manufacturer's standpoint, handsets must be able to interoperate reliably with multiple carriers and service providers. Services and content providers, for their part, must be able to provide a high-quality subscriber experience across multiple networks supporting hundreds of different handset models. And carriers need to verify that each handset is fully interoperable with deployed services and serves subscribers as expected before adopting it into their networks, in order to avoid support nightmares.

Figure 1. Adaptive Test Case technology enables developers to reuse test cases effectively across different handsets without affecting the test cases themselves. Test Verbs absorb platform differences, such as different operating systems, while handset variations are absorbed by Navigation Maps and Intelligent Components. In this example, the same Test Verb has two different implementations (polymorphic) to address the differences in the application structure of the two devices.

In each of these cases, the concerned party must be able to test the reliability of handsets over live networks using actual services. This poses a logistical challenge, as it requires handsets located in different, widely separated geographic locations. For example, a domestic handset manufacturer providing devices to a foreign carrier must physically perform testing in that other country. Moreover, testing usually involves a variety of fairly exacting operating conditions, such as particular equipment interoperability or real-world quality issues, which can only be tested in specific geographic locations. In addition to the problem of getting the handset to the appropriate network location, handset availability is often a significant barrier to handset validation as well. Often, there are simply not enough handsets to accommodate all of the ecosystem players that need to perform handset validation.

Benefits of Automated Testing Technology

Increased reliability: Computers don't get tired, don't confuse "0" and "O" on a tiny screen, don't misinterpret test procedures, and won't accidentally skip a page of test cases.

Tightened testing conformance: The actual tests desired are the actual tests run. This matters because not every test case simply verifies a specific device function; some stress internal processing resources by executing certain operations in a specific order. If a test technician accidentally returns to the main menu before completing the specified sequence, the desired test is never actually completed. Automated tools cannot make such an error.

Increased accountability and reporting: If a manual test technician falls behind schedule, there may be a temptation to skip tests, with or without management approval. With automated testing, reports verify and prove that testing did indeed take place and that the results are acceptable.

Input errors eliminated: Automated test cases always type text messages correctly.

Output errors eliminated: Automated testing enables exact-match verification of output results. For example, a person may not catch an error in a 100-character text message. For audio and image output, automated testing can work within an acceptable error tolerance.

Guaranteed error catching: A human test technician can miss that an error has occurred; automated testing tools do not overlook errors.

Faster testing: By eliminating breaks, interruptions, and errors, automated testing can complete the same test bench in significantly less time than a manual tester. In addition, an automated testing system can perform certain operations, such as button pushing, significantly faster than a person. For example, a test case that sends a 100-character text message could take a person several minutes to type in and verify, whereas an automated system completes the same test in seconds.

More complex interactions: Certain tests are difficult for a single person to execute manually, such as initiating a three-way call with multiple text messages being sent between all the handsets. Automated testing makes such scenarios relatively easy to verify.

Reduced personnel: With time-to-market so important, reducing the actual time it takes to complete the test bench is critical. While it is possible to speed testing by hiring more test technicians, this increases personnel overhead, and such personnel must be maintained year round.

Continuous testing enabled: Automated testing suites can operate 24 hours a day, including weekends and holidays. Unless a company runs three shifts, this alone can cut testing time by two thirds.

Easier reproducibility of errors: Once a defect is discovered, it must be reproduced and demonstrated to a handset engineer in order to correct it. Because automated test cases replay exactly the same steps each time, defects are far easier to reproduce.

Remote connectivity technology is one of the key pieces required to overcome device availability and physical deployment issues. It makes it possible to access handsets located in the required networks easily and inexpensively, to share handsets, and to make them available around the clock. For remote connectivity to bring the greatest efficiency to the testing process, however, it must be complemented by mechanisms that streamline and automate every stage of testing, including:
• easing development of and maximizing reuse in test cases
• ensuring comprehensive and error-free test coverage
• managing test assets across innumerable handset models and configurations for easier workflow management
• collecting and aggregating test data worldwide for comprehensive analysis.

Automated Testing

Running through a complete test bench manually is an intensive and tedious endeavor. A test bench can comprise thousands of individual test cases for a single handset, particularly at the manufacturer level of testing, easily requiring several weeks to perform a single regression. The iterative, repetitive nature of manual testing is also prone to human error. Additionally, many test cases involve multiple handsets (for example, sending a message from one handset to another and verifying accurate receipt), requiring testers to manually coordinate several devices at once. These problems are further compounded when a technician must travel to distant locations to perform testing.

Automated testing overcomes the inefficiencies of manual testing while significantly reducing testing cost and accelerating time-to-market (see sidebar, Benefits of Automated Testing Technology). Remote handset connectivity is implemented using a concept called virtual handset technology, which supports automated and manual testing alike. Within the testing environment, the virtual handset may actually be a handset directly connected to the testing laptop, a remote handset connected through the network, or even a simulator when no handset is available. This enables test technicians to work with a remote handset located in another country as if it were in the same room.
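The virtual handset concept described above can be sketched as a single device interface with interchangeable backends. The class and method names below are illustrative assumptions for the sake of the sketch, not TestQuest's actual API:

```python
from abc import ABC, abstractmethod

class VirtualHandset(ABC):
    """Common interface: test cases target this, not a physical device."""
    @abstractmethod
    def press_key(self, key: str) -> None: ...
    @abstractmethod
    def read_screen(self) -> str: ...

class LocalHandset(VirtualHandset):
    """A device cabled directly to the test workstation."""
    def press_key(self, key): print(f"[usb] key {key}")
    def read_screen(self): return "MAIN MENU"

class RemoteHandset(VirtualHandset):
    """A device in another country, reached over the network."""
    def __init__(self, host): self.host = host
    def press_key(self, key): print(f"[{self.host}] key {key}")
    def read_screen(self): return "MAIN MENU"

class SimulatedHandset(VirtualHandset):
    """Stand-in when no physical unit is available."""
    def press_key(self, key): pass
    def read_screen(self): return "MAIN MENU"

def run_smoke_test(handset: VirtualHandset) -> bool:
    """The same test runs unchanged against any backing device."""
    handset.press_key("MENU")
    return handset.read_screen() == "MAIN MENU"

assert all(run_smoke_test(h) for h in
           (LocalHandset(), RemoteHandset("lab.tokyo.example"), SimulatedHandset()))
```

Because the test logic depends only on the abstract interface, swapping a local device for a remote one (or a simulator) requires no change to the test case itself.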

With Test Management technology, developers are able to set up unattended remote testing sites around the world that can be accessed from any location. For handset manufacturers, this means that engineers can download revised application code to handsets in a remote location and directly verify interoperability with any network. For service and content providers, remote testing sites can be used to verify portal functionality, such as confirming that content about to be released downloads as expected. For carriers, remote testing sites can verify individual handset conformance across a carrier's entire network. Alternatively, handsets at autonomous remote testing sites can be left in place to profile network performance, giving carriers the real-time measurements needed to determine network characteristics such as peak traffic by time of day, or to take proactive preventative measures when network traffic is abnormal.

Beyond Automation: Adaptive Test Case Technology
Ensuring that test cases are robust is a key consideration during test development. In order for handset manufacturers and carriers to leave remote testing sites unattended for days, sometimes even months, at a time, they must have confidence that the testing tools will not fail when they encounter unexpected operating conditions. Testing platforms must be able to recover from any operating situation that can occur, ranging from a change in mobile services to a complete network outage.

Unfortunately, there are numerous ways in which test cases can be "broken". For example, a simple change in a carrier portal, such as reordering menu items, can cause some testing systems to become "lost" or "stuck", requiring an on-site visit from a technician to reset the system. Such changes are beyond the control of testing technicians and can occur without notice, which makes preserving the integrity of test assets one of the major challenges in automated handset testing.

To maintain the high reliability and resiliency of test cases, Adaptive Test Case technology has been developed to accommodate the rapid and inevitable changes that arise in networks and handsets. Adaptive Test Case technology also handles the challenges of working with multiple handsets, facilitates reuse of test assets across platforms and handsets, and eases system maintenance when carriers change over to new handsets. Rather than requiring manual adjustment, test cases are constructed from Test Verbs, Navigation Maps, and Intelligent Components that abstract handset operation in a way that captures change efficiently and effectively (see Figure 1).

Test Verbs define core test functions in a way that absorbs and manages behavioral differences between handsets, such as how many screens it takes to select a function. In this way, Test Verbs abstract the specific platform implementation, making it possible to use the same test case across virtually any handset or platform. Navigation Maps likewise abstract test implementation by absorbing differences between GUIs, such as language and theme. Intelligent Components abstract test implementation further through dynamic interpretation of screen component states, such as the order in which menu items appear. Combined with test development tools based around a graphical user interface and virtual handset capabilities, these elements of Adaptive Test Case technology enable the rapid creation and modification of test logic, allowing test designers to focus on developing test logic instead of coding (see Figure 2).

Adaptive Test Case technology increases the resiliency of autonomous remote-site testing by increasing the degree of change that test cases can tolerate without manual adjustment by test technicians. Moreover, existing test assets can be carried over to new handsets with a fraction of the original effort, in some cases none at all, depending on how similar the handset is to previous versions. Compare this to the extensive effort required to redevelop test cases for new platforms and handsets on testing platforms not based on Adaptive Test Case technology.

Sharing and Managing Test Assets and Results
While automated and remote testing capabilities make testing easier and more cost-effective, test technicians still need ways to manage test assets and the results those assets generate. Once created, test cases become valuable assets given their adaptability across handset models and types. Managing these assets in a shared, centralized repository greatly facilitates collaboration and test-logic reuse among test technicians around the globe. For example, a test technician in the UK could discover a new feature introduced to a mobile portal, update the appropriate navigation map to reflect the change, upload the modified test logic to the Test Asset Repository, and then propagate the change to autonomous remote testing sites across Europe, all without involving another person. Such collaborative capabilities are a critical part of any testing platform, eliminating undesirable duplication of effort and maximizing reuse of intellectual investments.

Having a shared repository does more than just facilitate the exchange of test assets. Since test cases are managed from a single location, testing is always done with the most up-to-date test assets. Additionally, common testing errors are avoided by automatically ensuring that the right test assets are used for a particular handset under test. In other words, test technicians cannot accidentally run a test case or load a configuration file designed for a different handset.
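One way such a safeguard could work is to key every stored asset by handset model, so that a fetch for the wrong device fails immediately. This is a minimal sketch under that assumption; the class and asset names are invented for illustration:

```python
class TestAssetRepository:
    """Hypothetical central store keyed by handset model, so a test
    technician cannot load assets built for a different device."""

    def __init__(self):
        self._assets = {}  # (model, asset_name) -> content

    def upload(self, model, name, content):
        self._assets[(model, name)] = content

    def fetch(self, model, name):
        try:
            return self._assets[(model, name)]
        except KeyError:
            # Fail fast instead of silently running a mismatched asset.
            raise LookupError(f"no asset {name!r} for handset {model!r}")

repo = TestAssetRepository()
repo.upload("model_a", "portal_nav_map", {"home": ["Menu", "Portal"]})

# The right asset is served for the right handset...
assert repo.fetch("model_a", "portal_nav_map")["home"] == ["Menu", "Portal"]
# ...while a request for a different model is rejected outright.
try:
    repo.fetch("model_b", "portal_nav_map")
except LookupError:
    pass
```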

Test assets, however, are not limited to test cases and logic. Test results provide the measurements required to evaluate network performance, quality, robustness and efficiency, and these qualities are often measured over time and across a specific region of a network. By consolidating test results collected from anywhere in the world into a centralized location as well, it becomes possible to evaluate and analyze those results in extremely powerful ways, such as profiling a network under specific operating conditions (peak usage time, location, particular service, carrier, and so on), helping handset manufacturers, service providers, and carriers maximize the reliability, quality and performance of handsets in real-world networks.

Figure 2. Test development tools based around a graphical user interface (GUI) and virtual handset capabilities can leverage Adaptive Test Case technology to enable the rapid creation and modification of test logic, allowing test designers to focus on developing test logic instead of coding.
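The kind of consolidated analysis described above amounts to grouping centralized test records along any dimension. A minimal sketch, using made-up result records rather than any real data format:

```python
from collections import defaultdict

# Hypothetical consolidated results: one record per test run, collected
# from remote testing sites worldwide into a central store.
results = [
    {"region": "EU", "hour": 18, "service": "sms", "passed": True},
    {"region": "EU", "hour": 18, "service": "sms", "passed": False},
    {"region": "US", "hour": 9,  "service": "mms", "passed": True},
]

def pass_rate_by(records, key):
    """Profile results along any dimension: region, hour, service..."""
    totals = defaultdict(lambda: [0, 0])  # key value -> [passed, total]
    for r in records:
        t = totals[r[key]]
        t[0] += r["passed"]  # True counts as 1
        t[1] += 1
    return {k: p / n for k, (p, n) in totals.items()}

# Pass rate profiled by region, then by service.
assert pass_rate_by(results, "region") == {"EU": 0.5, "US": 1.0}
assert pass_rate_by(results, "service") == {"sms": 0.5, "mms": 1.0}
```

Because every record carries its own dimensions, the same centralized data answers questions by region, by time of day, or by service without re-running any tests.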

Testing Gets Done
One of the less obvious benefits of automated and remote testing is that significantly more testing actually gets done. As development schedules slip, less time is left for testing across the value chain, since every week a handset is delayed to market results in damaging market erosion and lost revenue. Yet a handset that locks up, has nonfunctioning features, or adversely affects the network produces a flood of expensive service calls and may even lead to a complete handset recall.

Automated and remote testing provides an answer. Because automated test cases can run 24 hours a day with multiple test suites running in parallel, handset testing can be significantly accelerated without loss of quality or reliability, making testing of revenue-critical services less of a factor in delaying product release. Additionally, such testing improves overall testing quality by reducing errors, leading to better test coverage, more accurate and reproducible test results and more reliable handsets.

The cost and time savings that automated and remote testing bring to handset manufacturers, service providers and carriers are very tangible. While employing more manual testers can also reduce testing time, those savings are offset by increased personnel commitments and investment, and by concerns about the reliability of the testing. Manufacturers also see reduced testing costs, increased revenue through accelerated time-to-market, and savings from improved product quality.

Testing handsets in live networks is a critical process that protects the investments of handset manufacturers, service providers and carriers. With the myriad of handsets, networks and services available, however, test technicians across the value chain need technology that helps them rein in the complexity of testing. By combining unattended remote testing capabilities with Adaptive Test Case technology, test technicians can validate handset and carrier interoperability from autonomous remote testing sites located around the world, confident that valuable test assets will remain resilient and robust as they automatically adapt to changes in handsets and network services. Additionally, Test Asset Management technology provides the final piece, enabling efficient collaboration and reuse of test assets as well as the centralized consolidation and analysis of test results. Together, these technologies provide end-to-end supervision and management of the entire testing process, removing perceived barriers to testing so that testing gets done and reliable handsets make it into the hands of users on schedule.

About the Author
David Haggerty is the chief research officer of TestQuest Inc., 1825 South Grant Street, Ste. 100, San Mateo, CA 94402; (800) 756-1877;