QA Beginner’s Guide

Identifying the right testing devices for a mobile app is one of the most difficult challenges a mobile test manager has to solve. In the article “Testing on the Right Devices”, I explained an approach to minimizing the number of test devices and provided some practical steps for selecting the test devices for a product. However, that is only half the job without grouping and prioritizing the different test devices in order to maintain professional mobile device management (MDM).

Create a List of Test Devices

To begin, the target group must be identified. Once the target group is known, a mobile test manager can gather data about them and create personas. Based on the personas’ usage habits, the specific mobile devices on which the mobile app must be tested can be identified.

Depending on the mobile app and its user base, it is possible that a mobile test manager has identified more than 30 devices that need to be covered during the development and testing phase. The mobile test manager should list all devices from highest to lowest priority and separate them into groups.
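
As a minimal sketch, this ranking-and-grouping step might look like the following in Python. The device names and usage shares are invented for illustration, and splitting the ranked list into equal thirds is just one possible grouping rule:

```python
# Illustrative sketch: rank devices by usage share among the user base,
# then split the ranked list into three priority groups.
# Device names and usage figures are made up for this example.
devices = {
    "Phone X": 0.32, "Phone Y": 0.21, "Phone Z": 0.14,
    "Tablet Q": 0.10, "Phone R": 0.08, "Phone S": 0.05,
}

# Highest usage first
ranked = sorted(devices, key=devices.get, reverse=True)

# Naive split into thirds; in practice the cut-offs would follow
# the hardware criteria described in the next section.
third = len(ranked) // 3
groups = {
    "A": ranked[:third],
    "B": ranked[third:2 * third],
    "C": ranked[2 * third:],
}
```

In a real project, the group boundaries would be driven by hardware and OS criteria rather than a fixed split, as described below.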

Create Device Groups and Prioritize Them

In the following example, the first group has the highest priority, ‘A.’ Devices in this group are the most used devices among the user base and represent the latest devices on the market. They have powerful hardware, a big screen with high resolution and density, and the latest operating system and browser version installed.

Group 1, Priority A:

  • High-End devices
  • Quad Core CPU
  • >3GB RAM
  • Display size >5”
  • Retina, Full-HD display
  • Latest OS available for the device

The second group has a medium priority, ‘B.’ Devices in this group are mid-range devices. They have average hardware, such as a slower CPU and a screen resolution and size somewhat below the devices in Group A, and the operating system version should be no older than one year.

Group 2, Priority B:

  • Mid-range devices
  • Dual Core CPU
  • 1GB RAM
  • Display size <5”
  • No Retina or Full-HD display
  • OS less than one year old

The third group has a low priority, ‘C.’ These devices have a slower CPU and a small screen with low resolution and density, and run a software version older than one year.

Group 3, Priority C:

  • Slow devices
  • Single Core CPU
  • <1GB RAM
  • Display size <4”
  • Low screen resolution
  • OS older than one year
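
The grouping criteria above can be sketched as a simple classification function. Note that the `Device` fields and the exact thresholds below are assumptions for this example, not a fixed standard:

```python
from dataclasses import dataclass

# Illustrative sketch of the grouping criteria described above.
# Field names and thresholds are assumptions, not a fixed standard.

@dataclass
class Device:
    cpu_cores: int
    ram_gb: float
    display_inches: float
    full_hd: bool          # Retina / Full-HD class display
    latest_os: bool        # latest OS available for the device installed
    os_age_years: float    # age of the installed OS version

def classify(device: Device) -> str:
    """Assign a device to priority group A, B, or C."""
    # Group A: high-end hardware with the latest OS
    if (device.cpu_cores >= 4 and device.ram_gb >= 3
            and device.display_inches > 5 and device.full_hd
            and device.latest_os):
        return "A"
    # Group B: mid-range hardware with an OS no older than one year
    if (device.cpu_cores >= 2 and device.ram_gb >= 1
            and device.os_age_years <= 1):
        return "B"
    # Group C: everything else
    return "C"
```

A function like this can keep the device matrix consistent as new devices are added to the list.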

Define Requirements for Each Group

Once the groups are in place, the mobile test manager needs to define requirements for each group. Requirements can be that the functionality, design, and usability are covered 100%. In this case, Group A devices must fulfill all three requirements 100%.

Devices in Group B still need to support the functionality and usability 100%. However, the design doesn’t need to be perfect due to smaller screen sizes.

For Group C, the design and usability don’t need to be perfect due to the old device hardware and software. However, the functionality must still be at 100%.
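
These per-group requirements can be captured as a simple lookup. The numeric thresholds for Groups B and C below are illustrative assumptions, since the text only says their design and usability need not be perfect:

```python
# Hypothetical mapping of priority group to required coverage levels.
# Functionality must always be 100%; the relaxed design/usability
# thresholds for Groups B and C are assumptions for illustration.
REQUIREMENTS = {
    "A": {"functionality": 1.0, "design": 1.0, "usability": 1.0},
    "B": {"functionality": 1.0, "design": 0.8, "usability": 1.0},
    "C": {"functionality": 1.0, "design": 0.6, "usability": 0.6},
}

def meets_requirements(group: str, results: dict) -> bool:
    """Check measured coverage results against the group's thresholds."""
    return all(results.get(aspect, 0.0) >= threshold
               for aspect, threshold in REQUIREMENTS[group].items())
```

Encoding the requirements this way makes it easy to flag, per test run, which devices fall short of their group's bar.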

Keep the Groups and Devices Up-to-Date

Once the groups are established within the mobile development lifecycle of a company, the mobile test manager must ensure they remain up-to-date. This means constantly observing the mobile market for newly available devices and monitoring the device usage of the target customers.

If a new operating system is provided by the manufacturer, the mobile test manager must check whether the new version is being used by customers and then update the device groups accordingly.
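
One lightweight way to sketch that check: filter the OS versions in the test matrix by observed customer adoption. The version names, usage shares, and the 5% threshold below are all assumptions for illustration:

```python
# Hypothetical sketch: decide which OS versions belong in the device
# groups based on observed customer usage. Version names, shares, and
# the 5% threshold are invented for this example.
os_usage = {
    "OS 17": 0.62,
    "OS 16": 0.28,
    "OS 18 (new)": 0.07,
    "OS 15": 0.03,
}

THRESHOLD = 0.05  # only test OS versions used by at least 5% of customers
to_test = [version for version, share in os_usage.items() if share >= THRESHOLD]
```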

What appears clear and simple in theory proves technically challenging in practice. Keeping track of an ever-growing number of mobile devices remains a crucial but tricky part of mobile testing. Depending on a company’s mobile development organization, user base, and apps in development, mobile device management (MDM) can be a full-time job. Many companies cannot realistically support this through their internal QA lab or testing teams, making crowdtesting a logical solution.

Daniel Knott
Mobile Testing Expert