Time flies in the fast-paced world of software development. QA and engineering teams rarely get to take a deep breath and reflect on what has happened in the past few weeks, months, or even years. This is exactly what we’ll do now.
From Waterfall to Agile: A brief history of software development
Although Agile development frameworks such as Kanban were invented in the 1940s, the waterfall approach remained the most popular method for a long time. However, with new technologies starting to emerge and markets becoming more and more global — and as a result, more competitive — consumers began raising their requirements and expectations. At the same time, the average lifespan of products started decreasing. Long story short: the waterfall model slowly fell out of favor.
Agile methodologies rapidly gained popularity and are nowadays widely used for software development projects. They enable companies to adapt faster to the ever-changing external factors, and swiftly integrate learnings in their development processes. In a VUCA (volatile, uncertain, complex, ambiguous) world, being Agile became a competitive advantage.
By using Agile methodologies, companies are able to quickly react to changing requirements. Fast releases have become the new normal, and the speed factor has lost its status as a competitive advantage. So what’s next?
A brief introduction to DevOps, and how it complements Agile methodologies
Let’s not ask ourselves what’s next, but rather what’s now. And the answer is DevOps. DevOps is the ability to constantly ensure full operational and business readiness. It is a set of practices that automates and manages the full end-to-end engineering processes between software development, QA and IT operations teams.
Through this methodology, those teams can build, test, and release software continuously and more reliably. This helps teams get a more holistic view of their internal processes, and complements the external and customer requirements-oriented approach of Agile frameworks.
For organizations wanting to ensure high quality and a smooth deployment process while having multiple teams work together on the same products, DevOps represents an attractive and modern option.
Let’s automate everything!
In the past, in both waterfall and Agile methodologies, the QA department received a notification when development was over and a new build was ready for testing. QA would then run its manual regression tests, complete feature-based tests for all new components, and add some exploratory testing on top. Once QA completed testing, the build would either go back to development to be reworked, or the organization would push it live.
When scaled up to multiple teams, this approach is no longer sustainable. The need to constantly juggle different builds on different staging environments often creates confusion.
Therefore, QA teams need to adapt — and align much more. Here is how they can do so:
- Standardize environments
- Automate deployments
- Automate testing (including pre-testing and post-testing tasks)
- Automate and group test cases into efficient sets (smoke tests, regression suites) to achieve 100% code coverage
- Reliably time-box testing
To be clear: teams should automate as much of the testing process as possible, so that tests run automatically when needed, in an efficient and effective way. Automation reduces manual work and helps bring the QA and IT infrastructure teams closer together; specialized automation and CI/CD tools can successfully achieve this. Furthermore, a mature automation framework becomes paramount, so that teams can swiftly script and add new test cases.
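As a minimal sketch of the grouping idea above (all test names, suites, and checks here are hypothetical, not from a real project), automated cases can be tagged with the suites they belong to, so a fast smoke set and a fuller regression set can each run on demand:

```python
# Sketch: register automated test cases under named suites (smoke, regression)
# so each suite can run automatically when needed. Illustrative only.

SMOKE = "smoke"
REGRESSION = "regression"

# Registry mapping each test function to the suites it belongs to.
TEST_REGISTRY = []

def test_case(*suites):
    """Decorator that registers a test function under the given suites."""
    def register(fn):
        TEST_REGISTRY.append((fn, set(suites)))
        return fn
    return register

@test_case(SMOKE, REGRESSION)
def test_login():
    assert "user" in {"user": 1}  # placeholder check

@test_case(REGRESSION)
def test_full_checkout_flow():
    assert sum([1, 2, 3]) == 6  # placeholder check

def run_suite(suite):
    """Run every registered test tagged with `suite`; return (passed, failed)."""
    passed = failed = 0
    for fn, suites in TEST_REGISTRY:
        if suite in suites:
            try:
                fn()
                passed += 1
            except AssertionError:
                failed += 1
    return passed, failed
```

In a CI/CD pipeline, the fast smoke suite would typically gate every build, while the full regression suite runs on a schedule or before a release.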
Can the whole testing process really be automated?
Unfortunately, even if 100% code coverage is achieved, you can never be 100% certain that there will be zero bugs or problems. Therefore, the answer to the above question is ‘no’, for three reasons:
1. You can’t know everything
Automated test cases have one big potential flaw: they are written by humans. Now, that may sound mean, but let’s face it, an individual can only think of a limited number of scenarios. And even if the full code is covered, processing it with different variables, test data, or test environments may still break it. Furthermore, after a while, your extensive experience will backfire in the form of operational blindness: you may no longer expect any problems in a certain area because nothing ever happened there. The problem is, you don’t know what you don’t know.
2. You can’t do everything
Now, let’s imagine you are an exception and you do in fact know everything! (Please note that I am jealous at this point.) Yet, there are technological limitations, which means that even if you wanted to automate certain cases, you couldn’t. One concrete example: a script that requires the phone to be turned off and on again. Too bad!
3. You won’t do everything
Oh! So you are a smart one, and suggested building a robot that takes care of the physical interactions, such as pressing the power button again and again or using the fingerprint sensor. This could work, but typically everyone has to deal with scarce resources: money and time. On the one hand, purchasing all the devices to run tests on would simply be too expensive. On the other hand, building such a setup would take ages and, for that reason, block the process.
You will therefore most likely go for a prioritized list of devices to cover and a selected fraction of the test case variations that could be run.
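One pragmatic way to build such a prioritized device list (the device names and usage shares below are made up purely for illustration) is to rank devices by their share of your user base and cut the list off once a coverage target is reached:

```python
# Sketch: pick the smallest set of devices covering a target share of users.
# Device names and usage shares are illustrative, not real data.

def prioritize_devices(usage_share, target=0.80):
    """Return devices in descending usage order until `target` coverage is met."""
    selected, covered = [], 0.0
    for device, share in sorted(usage_share.items(), key=lambda kv: -kv[1]):
        selected.append(device)
        covered += share
        if covered >= target:
            break
    return selected, covered

fleet = {
    "Phone A": 0.35,
    "Phone B": 0.25,
    "Phone C": 0.15,
    "Phone D": 0.10,
    "Phone E": 0.05,
}
devices, coverage = prioritize_devices(fleet, target=0.70)
# devices covers ~75% of users with only three of the five models
```

The same cut-off logic applies to test case variations: cover the highest-impact scenarios first and accept that the long tail stays untested.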
Furthermore, testing in the real world does not consist only of covering functionality. There is much more to cover, such as usability, payment integrations, and localization, much of which lies beyond the regular scope of DevOps.
The key to successfully testing in DevOps: Find the right testing mix
The world is never black and white, and the happy path isn’t always enough. In DevOps, automation is key and moving in this direction is vital for all companies that seek a competitive advantage in the digital field.
Yet, for most businesses, automating everything that can be automated simply won’t work. The risk of failure would be too high in many areas, such as:
- in non-functional areas
- when there’s a broad customer base with different systems
- for scenarios that weren’t considered when scripting the test cases
For that reason, manual testing — especially exploratory testing with real users and real devices — is a great addition to ensure that all areas and external factors are covered.
How to integrate manual testing into DevOps?
The fact that DevOps puts emphasis on automating processes makes it look like there is no space left for manual testing. This is not true, and as the points above illustrate, manual testing remains a very important part of the testing mix for most companies.
One way of finding the right balance between manual testing and automation is the use of feature flags. Feature flags enable, disable, or hide a feature in production. Thanks to them, the code can be shipped to production and run through all the automated testing and deployment processes, guaranteeing a certain level of quality.
Then, once in production (or on staging, etc.), the feature flags can be turned on for a certain percentage of the user base or for the QA team, so that manual testing can take place. This way, additional insights can be gained and fed back to the development team. The duration of such tests also does not impact the DevOps processes, which makes feature flags a great addition.
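A common way to implement the percentage rollout described above (this is an illustrative sketch; production systems usually rely on a dedicated feature-flag service, and the flag and user names here are hypothetical) is to hash each user ID so the same user always gets the same on/off decision:

```python
import hashlib

def flag_enabled(flag_name, user_id, rollout_percent):
    """Deterministically decide whether `flag_name` is on for `user_id`.

    Hashing the flag name together with the user ID places each user in a
    stable bucket from 0 to 99; users in buckets below `rollout_percent`
    see the feature.
    """
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# Because the bucket is derived from a hash rather than random sampling,
# raising the percentage only ever adds users — nobody flips back and
# forth between seeing and not seeing the feature.
```

At 0% the flag is off for everyone; at 100% it is on for everyone; anywhere in between, the QA team or a slice of real users can exercise the feature manually while the rest of the user base is unaffected.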
Read more about our Integrated Functional Testing solution to learn how to combine manual testing and test automation in one cohesive approach.