Software test cases provide the bedrock for the quality assurance organization’s approach to digital quality. Test cases make sure every software requirement is covered and that the features and functionality of a digital product work according to everyone’s expectations — the business, the developer, the tester and the user.
In this blog, we’ll answer several key test case FAQs to help inform your digital quality approach.
What is a test case and why is it important in software development?
A test case is a set of instructions for testing a particular application feature or function. Each test case includes specific details, such as test steps, data, variables, conditions, expected results, actual results and statuses to clarify the procedure and progress of each individual test.
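As a sketch, the fields above might be modeled like this in Python; the field names are illustrative and not tied to any particular test management tool:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Field names are illustrative, not taken from any specific tool.
    test_id: str                                   # unique ID for tracking
    title: str                                     # what the test validates
    steps: list = field(default_factory=list)      # ordered tester instructions
    test_data: dict = field(default_factory=dict)  # inputs and variables
    preconditions: str = ""                        # required state before execution
    expected_result: str = ""                      # what should happen
    actual_result: str = ""                        # recorded during execution
    status: str = "Not Run"                        # e.g. Pass, Fail, Blocked

tc = TestCase(
    test_id="TC-101",
    title="Valid login redirects to dashboard",
    steps=["Open login page", "Enter valid credentials", "Select Log In"],
    test_data={"username": "demo_user"},
    expected_result="User lands on the dashboard page",
)
```

A structure like this makes it easy to see at a glance which details a tester fills in before execution (steps, data, expected result) and which are recorded afterward (actual result, status).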
Software testing is important in software development because it validates that all features perform as expected and conform to established standards, benchmarks and requirements. Test cases provide the instructions for individual tests, giving testing professionals a clear set of directions to validate the quality of the app. Without software testing, software development is bound to result in failing apps, and without test cases, it’s impossible to perform structured testing on those apps.
What are the best practices for writing and designing test cases?
Each test case should aim to be accurate, concise, clear, independent, traceable, repeatable and reusable. These test case characteristics help testers remove ambiguity from testing and maximize their test coverage. Additionally, each test case applies to a specific test and includes a test ID number for tracking purposes.
However, it’s impossible to write a test case for every possible condition of an application’s features and functionality. Testers should use QA techniques such as boundary value analysis, equivalence partitioning and more to narrow the field of potential test cases to write. When in doubt, it’s common to conduct peer reviews to help sharpen the focus and scope of a test case, ultimately improving the test design and coverage as a whole.
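To illustrate boundary value analysis, consider a hypothetical age field that accepts values from 18 to 65 inclusive. Rather than testing every possible age, the tester checks values just inside and just outside each boundary:

```python
# Hypothetical validation rule used only for this illustration.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65

# Boundary value analysis: min - 1, min, min + 1, max - 1, max, max + 1.
boundary_cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected
```

Six targeted values replace dozens of equivalent ones, which is exactly the kind of narrowing these techniques are designed to provide.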
How can test cases be automated, and what are the benefits of automation?
Some test cases are a better fit for automation, while others make sense to execute manually. The organization typically selects test cases for automation that are stable, repeatable and quicker to conduct via automation than by hand. Thus, unit tests are a great fit for automation, as they test the smallest pieces of application functionality and rarely change. A developer or test engineer typically loads automated test cases into a test automation platform, where they can schedule tests to execute either immediately or at a time that benefits the organization, such as overnight or on weekends, so as not to disrupt ongoing work.
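As a hedged sketch of the kind of unit test that suits automation, here is a hypothetical `apply_discount` function with stdlib `unittest` checks; the function and its rules are invented for illustration:

```python
import unittest

# Hypothetical function under test.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(50.0, 10), 45.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

# Run with: python -m unittest <this file>
```

Tests like these are stable, fast and deterministic, which is why a CI pipeline can run them on every check-in without human involvement.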
The benefits of test automation are numerous. Test automation saves the organization time, money and effort, increases test coverage, reduces the risk of human error, enables more comprehensive overall testing, improves team morale and collaboration, and speeds time to release. When done right, and in a way that fosters learning across the organization, test automation is an incredibly useful QA technique.
How can test cases be integrated with the software development process?
There are a number of ways that organizations can implement software testing in the development process. Development testing (DevTest) refers to the general and continuous approach of applying testing throughout the SDLC, such as developers writing unit tests. By pushing the dev and test phases closer together, the organization can test code as it is checked in, enabling faster bug detection and fostering better collaboration between the two groups.
Some additional methods of integrating some form of quality assurance or testing in software development include the following:
Static code analysis. In this debugging process, the source code is analyzed without executing the program as a whole, so the analysis can be conducted independently of the runtime environment.
Test-driven development. This iterative software development approach combines coding, testing and design into one collaborative technique by converting software requirements into test cases that specify what the code should do. Prior to writing new code, the group must write a test that fails, and the code must get the test to pass before it is committed.
Code reviews. Assessing the source code for flaws and clarity helps avoid issues later. Code reviews can be conducted in a variety of ways, including via email, over-the-shoulder or with the help of tools.
Pair programming/pair testing. Similar to code reviews, pair programming involves a second engineer making notes, asking questions and pointing out potential issues while the programmer codes. Pair testing puts extra emphasis on quality and collaboration by having a tester sit with the programmer to find bugs early and share their perspective.
Linting. Typically conducted with the help of a tool, linting checks source code for stylistic errors. While no testing is involved, linting helps reduce errors and keep code clean and readable for other programmers.
Unit testing. This technique checks individual units or components of source code for errors. Developers often write and automate unit tests as an initial check on application quality.
Integration testing. Going a step beyond unit testing, integration testing validates individual units of code when combined as a group to make sure they will work in a larger system. Again, developers typically write integration tests, and many of these can be automated as well.
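To illustrate the distinction drawn in the last two items, here is a minimal sketch with two hypothetical components, checked first in isolation (unit tests) and then combined (an integration test):

```python
# Two hypothetical components composed in a pipeline.
def parse_csv_row(row: str) -> list:
    """Split a CSV row into trimmed cells."""
    return [cell.strip() for cell in row.split(",")]

def format_as_record(cells: list) -> dict:
    """Turn a list of cells into a named record."""
    return {"name": cells[0], "email": cells[1]}

# Unit tests: each component in isolation.
assert parse_csv_row("Ada, ada@example.com") == ["Ada", "ada@example.com"]
assert format_as_record(["Ada", "ada@example.com"])["name"] == "Ada"

# Integration test: the components combined as they are in production.
record = format_as_record(parse_csv_row("Ada, ada@example.com"))
assert record == {"name": "Ada", "email": "ada@example.com"}
```

The unit tests localize a failure to one component; the integration test catches mismatches in how the components fit together, such as one producing a shape the other does not expect.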
What are the different types of test cases (e.g. unit, integration, functional, regression, etc.)?
There is more than one way to classify the different types of test cases. One way is to classify them as formal or informal test cases. With formal test cases, the inputs and data are known ahead of time, which makes it easy to predict an expected outcome. Informal test cases do not have known inputs and thus have no expected outcomes — exploratory and ad-hoc testing techniques can reveal interesting findings with these approaches.
Formal test case types include unit test cases, functionality test cases, integration test cases, UI test cases, performance test cases, security test cases, usability test cases, database test cases and user acceptance test cases. All of these have their own specific objectives and means of validating the software as an important component of software quality.
How can test cases be used to ensure the quality and reliability of software?
Test cases ensure that software requirements are sufficiently met prior to release, helping the product succeed when it reaches the hands of customers. Thoroughly documented test cases define what testers must validate prior to release, and they answer questions such as how many test cases were executed, which components were tested (and whether they are stable) and other questions about the overall quality of the software.
These test cases help the QA organization determine what fixes are required, whether they are needed immediately or can wait until a future sprint, and how well the software conforms to specifications, requirements and user expectations. In short, test cases tell the story of an application’s overall quality.
What are the criteria for determining the success or failure of a test case?
Test cases typically derive an expected result based on the logic of the test case and the expected user story — i.e. how the user will interact with the software. If the expected results are met, the test passes. If the application fails to achieve the expected results, the test fails.
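That pass/fail rule reduces to a one-line comparison, sketched here purely for illustration:

```python
# Minimal sketch of the pass/fail rule: a test passes only when the
# actual result matches the expected result.
def evaluate(expected, actual) -> str:
    return "Pass" if expected == actual else "Fail"

print(evaluate("dashboard", "dashboard"))  # the outcomes match
print(evaluate("dashboard", "error page")) # the outcomes differ
```

Everything else in a test case — steps, data, preconditions — exists to make this final comparison unambiguous.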
Acceptance criteria help define the steps of an individual user story in the planning stages of the SDLC. Acceptance criteria shed light on the expected result of a test case and the various logical dependencies that go into it. For example, acceptance criteria might outline all the logical steps needed to complete registration for a subscription service, including:
User sees new registration form field after selecting “Sign Up.”
User can enter all information in fields without disruption.
User receives email confirmation after completing all required fields and submitting.
Acceptance criteria like the example above help teams define expected results that ultimately result in passing or failing test cases.
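As a sketch, the registration criteria above could be expressed as automated checks. The `RegistrationPage` object here is hypothetical, standing in for whatever a UI automation tool would actually drive:

```python
# Hypothetical page object; a real suite would drive a browser instead.
class RegistrationPage:
    def __init__(self):
        self.form_visible = False
        self.fields = {}
        self.email_sent = False

    def select_sign_up(self):
        self.form_visible = True

    def fill(self, field, value):
        self.fields[field] = value

    def submit(self):
        # Confirmation email is sent only when all required fields are present.
        required = {"name", "email", "password"}
        if required.issubset(self.fields):
            self.email_sent = True

page = RegistrationPage()
page.select_sign_up()
assert page.form_visible                    # criterion 1: form appears

for field, value in {"name": "Ada", "email": "ada@example.com",
                     "password": "s3cret"}.items():
    page.fill(field, value)                 # criterion 2: fields accept input

page.submit()
assert page.email_sent                      # criterion 3: confirmation email
```

Each acceptance criterion maps to one assertion, so a failing assertion points directly at the unmet criterion.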
How will the test cases be maintained and updated as the software changes over time?
Developers and testers should plan, maintain, update and execute test cases throughout the life of the software. Testers write test cases during requirements gathering and must collaborate with developers to repair and update test cases in accordance with code changes.
Some ways to maintain test cases include regular review by stakeholders, prioritizing maintenance over new test case creation, deleting unneeded test cases, editing test case names and details to make sure they’re clear, and tracking changes, dependencies, audits and other activities that might change test cases. This underscores the importance of concise and clear test cases — it’s easier to update a test case when it is clearly written and documented.
Keep in mind that particularly adept software development organizations might push changes multiple times per day. In these situations, it is important to make the most of your test maintenance and execution time by automating as much as possible as part of continuous integration. If it’s impossible to maintain all test cases in one sprint, the team might opt to prioritize tests for the most important features to make the most of its time.
What tools and technologies are available for managing and executing test cases?
As with many aspects of software testing and development, a variety of open source and vendor tools and services are available to help organizations manage and execute their test cases. Each organization should choose from these myriad tools based on their specific needs, such as existing tools within their ecosystem and budget concerns. The organization might also choose to build tools that can help with specific needs that existing options cannot address.
Some test case management tools include (in no particular order):
Test case management tools (vendor): Applause Test Case Management, TestRail, TestCollab, Requirements and Test Management for Jira, TestFLO for Jira, Qase, XQual, Qucate, Testpad, JunoOne, TestMonitor, TestLodge, TestCaseLab, TestLog.
Test case management tools (open source): TestLink, QaTraq, Kiwi TCMS, Squash.
Many of these tools make it easier to manage and execute test cases, but it’s ultimately up to the organization to ensure all digital quality goals are met.
What are some common challenges faced when designing and implementing test cases and how can they be overcome?
Test case design is challenging: descriptions are supposed to be simple, yet might have to cover a wide range of possibilities, and test cases can become quite lengthy without careful attention. Additionally, test cases written by other members of the team might not be clear to the tester executing them later. To mitigate these challenges, limit the number of steps in an individual test case, and provide clear documentation and preconditions to avoid vague instructions.
A lack of collaboration between testers and other members of the organization can also cause challenges. Developers and testers, in particular, must work side by side to fully grasp what the code does and to secure enough time and resources to test it. Likewise, project managers or the product team might make sudden changes to the requirements, which fundamentally alters the test cases. Testers should be given as much notice as possible to account for these changes.
Ultimately, testers face a number of challenges designing and implementing test cases. Other examples include problems with test case management tools, lack of resources/time and inconsistent testing environments. There’s not necessarily an easy answer to these test case challenges — the primary goal should be to make digital quality a priority throughout the organization and enable testers in any reasonable way possible.
What is the expected outcome of each test case?
When referring to test cases, the expected outcome or expected result informs the tester what should ideally happen when they finish executing the test case. If the expected outcome matches the actual outcome, the test passes — conversely, it typically fails if they do not match.
Let’s consider an example. An expected outcome for a test case might involve a tester switching an app to dark mode. At the conclusion of the test steps, the expected outcome might be: the app background switches to black, and all text now displays in white font.
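The dark mode example could be checked in code along these lines; the `Theme` class is hypothetical and stands in for the app’s real theming logic:

```python
# Hypothetical theme model for the dark mode example.
class Theme:
    def __init__(self):
        self.background = "white"
        self.text = "black"

    def enable_dark_mode(self):
        self.background = "black"
        self.text = "white"

theme = Theme()
theme.enable_dark_mode()

# Expected outcome from the test case, expressed as assertions.
assert theme.background == "black"
assert theme.text == "white"
```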
What is the timeline for test case development and execution?
The timeline for test case development and execution varies from one organization or business to the next. Some businesses work in long build cycles that involve many components or monolithic applications and, thus, release software much more slowly than in other industries. These situations contrast starkly with a company that pushes releases daily, or even multiple times per day. Test automation likely enables the latter to execute dozens or potentially hundreds of test cases in a short amount of time. In terms of writing test cases, testers can often write multiple test cases per hour.
While there is always a give and take between releasing quickly and releasing with high quality, the QA organization should be given sufficient time to effectively test its products. Time estimates might be dynamic depending on available resources, which makes communication and collaboration even more important to understand what is required of the various teams working on a product.
What are the objectives and requirements for test cases?
The objective of a test case describes in plain language what the test case aims to validate. A test case objective is typically one or two sentences. The goal for the objective is to clarify for the whole team — and future members of the team — the purpose of the test case. A test case objective for adding an item to a cart might be, “To check that the user can successfully add their selected item to cart.”
The requirements determine the test cases the organization will execute. Requirements essentially tell you what to test and should be the first component of understanding and writing any test case. As part of writing test cases, the organization should implement traceability to ensure full test coverage in accordance with requirements. As requirements change, so too must test cases, which makes visibility into these requirements and traceability especially important.
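One common way to implement this traceability is a requirements traceability matrix: a mapping from each requirement to the test cases that cover it, used to spot coverage gaps. A minimal sketch with invented IDs:

```python
# Requirements traceability matrix (all IDs are illustrative).
traceability = {
    "REQ-1": ["TC-101", "TC-102"],
    "REQ-2": ["TC-103"],
    "REQ-3": [],  # no coverage yet
}

# Any requirement with no linked test cases is a coverage gap.
uncovered = [req for req, cases in traceability.items() if not cases]
print(uncovered)  # flags REQ-3 for the team
```

When a requirement changes, the matrix immediately shows which test cases must be reviewed and updated alongside it.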
One note of disambiguation: Test case requirements might also refer to the setup requirements needed to execute a test case, such as the particular user path being tested or the environment in which testing will occur.
What is the process for reporting and tracking bugs and issues discovered during testing?
Software testing inevitably uncovers defects (also called bugs) in the code or design of an application. When a tester finds a defect via test cases, they must submit it to the dev team for approval and triage. Much like writing a test case, a bug report must include a number of fields for the sake of clarity and documentation, including:
Defect ID and title
Description and steps to reproduce
Expected result vs. actual result
Environment details, such as device, OS and build version
Severity and priority
Supporting attachments, such as screenshots or logs
The dev team will then typically include additional information, such as:
Assignee and resolution status
Root cause
Target fix version or sprint
Tracking issues through the defect life cycle typically involves the use of a defect management or reporting tool. Some examples of these tools include Jira, Linear, Bugzilla, Pivotal Tracker, Redmine and many others. These tools enable hybrid and disparate teams across the organization to standardize this work, including assigning tasks, visualizing work in progress and tracking defect resolution. This becomes especially important for lower-priority defects that might take several weeks or longer to resolve.
How will the results of the testing be documented and communicated to stakeholders?
Testers execute the test cases and report defects to the dev team. Now what? The organization typically compiles a test summary report (also called a test execution report or test closure report), which provides a high-level look at the testing work completed in the sprint. This report contains a variety of information, including the modules under test, the data for those modules, the number of test cases executed along with how many passes and fails occurred, and any relevant comments. This document is important to clarify work done by testers in the past, improve dev/testing practices in the present and avoid extra work in the future.
Once the test summary report is finished, the organization can communicate it to stakeholders much like any other report. Typically, the organization uploads the reports to a relevant and accessible repository, then sends the report in whatever form makes sense for stakeholders — email, direct link, printed copy, etc. The organization might also hold a meeting with stakeholders to discuss results, which can help clearly communicate findings, open the door for feedback and help foster a shared responsibility toward digital quality.