A couple of years ago I was working with an engineering group at an established startup that developed and maintained an online e-commerce system enabling U.S.-based retailers to extend their services internationally.
Prior to adopting agile development and testing, the team was using a waterfall-like process involving months-long releases, upfront design stages and lengthy and painful testing phases. The team did this despite having considerable automated testing coverage at the system level.
One of the first steps when moving to agile was to move from three separate functional teams—application, server and testing—into two feature teams with matching application/server and testing capabilities.
The department decided to start with two-week sprints. Each sprint was to include two days of code freeze and regression testing, followed by a deployment to a staging environment where stakeholders could preview the changes. One week after deployment to staging, the sprint deliverable would be deployed to production.
The development team was using one Kanban board in AgileZen (a lot like a Trello board, or a whiteboard with paper sticky notes) to track its workflow. And yet, when the team moved from its waterfall process to a more agile development and testing process, organization became an issue.
Testing The Agile Way
One aspect of the process that the team spent considerable time on was how to do software testing in an agile environment effectively. How does the agile method mesh with the Kanban board?
Initially, completed projects were handed off to testing once development was done. At this stage, quality assurance was a separate lane on the Kanban board that followed the demo stage. The outcome was that sprints bottlenecked when developers dumped entire projects onto a single, overworked tester. Unsurprisingly, this proved difficult in short sprints.
As a natural evolution, the testers started to do some testing on the story while it was in the development phase. By working closely with the developers as part of the same team, the barriers dropped and natural collaboration emerged. Still, the formal process was an obstacle. Representing this earlier collaboration on the Kanban board was challenging. Testers were worried that their real work wasn’t being represented. The solution was to include multiple tasks under each project (like development and testing checkboxes).
Because this practice was generally smart and pragmatic, the team renamed the Kanban board from “Dev” to “Dev&Test.” A separate lane for “Advanced Testing” was created for leftover and final testing.
Driving From Acceptance Tests
To deliver a solid-quality project by the end of the development and testing phase (and to leave some time for exploratory testing), the team needed to align developers and testers more closely on what each story should look like and which tests had to pass before it could be considered done.
To get to the point where advanced testing was no longer the bottleneck, the majority of the testing effort needed to shift left into the development and testing phase and be replaced by automation. The team experimented with ATDD (Acceptance Test-Driven Development), in which testers would spend some time early in each story’s life cycle figuring out with developers what to test and what to automate. This practice created a stronger common understanding of the story and how it should work, leading to fewer obvious defects.
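To make the ATDD flow concrete, here is a minimal sketch of acceptance tests a tester and developer might agree on before coding starts. The story, the `convert_price` function and all of its behavior are hypothetical illustrations, not the team’s actual system:

```python
# Hypothetical story: "As a U.K. shopper, I see prices in my local
# currency, with an international service markup where applicable."

def convert_price(amount_usd, rate, markup=0.0):
    """Convert a U.S. dollar price to a local currency, applying an
    optional international markup (hypothetical domain logic)."""
    return round(amount_usd * rate * (1 + markup), 2)

# Acceptance tests agreed between tester and developer up front.
# They state the expected behavior before any production code exists.
def test_base_conversion():
    # $100.00 at a 0.80 exchange rate shows as 80.00 in local currency.
    assert convert_price(100.00, 0.80) == 80.00

def test_markup_applied():
    # International orders carry a 5% service markup.
    assert convert_price(100.00, 0.80, markup=0.05) == 84.00

if __name__ == "__main__":
    test_base_conversion()
    test_markup_applied()
    print("acceptance tests passed")
```

The point is not the arithmetic but the conversation: a handful of executable expectations like these, written before development, give both sides the same definition of “working.”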
This process did not happen overnight. Since testers were required to define acceptance tests upfront using this technique, they now became a bottleneck before development even started. This happened for two reasons:
- Testers still had testing work to do that the developers were not helping with, so they could not free up time to work ahead of the developers. Over time, offloading some of the automation work to developers solved this issue.
- Testers were new to acceptance tests and they took too long to define them. They were used to defining complete test cases and were not sure how comprehensive the acceptance tests needed to be.
These two root causes manifested in one of two ways: 1) slack time where developers didn’t have stories they could start working on; 2) developers pulling stories that did not have acceptance tests already written.
The workload was smoothed out by changing to a more lightweight definition of acceptance tests. One trick that worked especially well was writing a few key acceptance tests on a note during the story elaboration meeting, then sharing these notes with the developers right after the meeting so that they would at least have an initial glimpse of the test plan.
How Much Quality Is Enough?
A general concern expressed about testing in agile, scrum and Kanban is that testing is something you can never have enough of. Testers are often perfectionists, held accountable both for minimizing the number of escaped defects and for the number of defects they find.
In classic phase-driven approaches—and to some extent also in timebox-driven approaches like scrum—there is a limit to how much time is available for testing, and usually it is not enough. The dynamics force some risk-taking on the part of the testing effort. In the classic approach, testing also has to be more risk-averse, since the quality of the product handed off to testers is often lower: developers rely on the testers to find the issues.
Since teams in Kanban work in pull mode—without a push-driven deadline for each phase of the work, and usually without a committed date for stories to be completed—there is little to box in the testing effort. Testing can start to take up more time than is warranted by the quality built into the software throughout the development process.
In this team’s case, ATDD and advanced testing lessened the bottleneck. There was then a temptation to aim for perfection: taking more time to write acceptance tests and to plan for exploratory testing. It was clear that, without intervention, many of the accomplished improvements in organizational flow could be lost to relaxed tension in the development and testing chain. It’s a classic conundrum: how do you increase efficiency while preventing complacency?
The team experimented with a number of techniques to counteract this behavior. These techniques included:
- Work in Process (WIP) limits provided some positive pressure to deliver and prevented testing from becoming a bottleneck. At a minimum, once a bottleneck was identified, the WIP limit provided an opportunity to stop the line and discuss, at the team level, what to do.
- Scrumban sprints or cadences provided another form of positive pressure to deliver. While the commitment was less specific, the fact that there was a delivery date, and that you wanted your stories to be part of the next delivery, energized teams and kept them focused.
- The vision and purpose provided by the product owner for features that the team was working on helped raise the sense of urgency.
- Real external commitments, like delivering features for a specific client project, made for much better motivation than abstract, virtual commitments.
- Measuring velocity and cycle time and then presenting it to the team—and agreeing to try and stabilize and improve it—provided another form of motivation. The message was that we are trying to become more fit, not trying to “make” the sprint.
- Measuring defects that escaped into the wild showed that, despite accelerating velocity, the team still delivered great quality. An increase in escaped defects was treated as a “stop the line” event: figure out what was going on and slow down to enable more test coverage.
- A cap was put on the amount of time spent on exploratory testing per story, as well as on other similar open-ended activities.
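Two of these mechanisms, WIP limits and cycle-time measurement, are concrete enough to sketch in code. This is a minimal illustration of the ideas, not the team’s actual tooling; the lane name, limit and dates below are all made up:

```python
from datetime import date

class Lane:
    """A Kanban lane with a Work in Process (WIP) limit (hypothetical sketch)."""

    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.stories = []

    def pull(self, story):
        # Pull mode: refuse new work once the lane is at its limit.
        # This surfaces the bottleneck and triggers a "stop the line"
        # discussion instead of letting work pile up in the lane.
        if len(self.stories) >= self.wip_limit:
            raise RuntimeError(
                f"WIP limit {self.wip_limit} reached in {self.name}")
        self.stories.append(story)

def cycle_time_days(pulled, finished):
    """Days from the moment a story was pulled until it was finished."""
    return (finished - pulled).days

dev_and_test = Lane("Dev&Test", wip_limit=2)
dev_and_test.pull("intl-pricing story")
dev_and_test.pull("currency-display story")
# A third pull would raise an error: the team stops and talks instead.

# Tracking cycle time per story lets the team try to stabilize and
# improve it, rather than chasing a sprint commitment.
print(cycle_time_days(date(2014, 3, 3), date(2014, 3, 10)))  # 7
```

The design choice worth noting is that the WIP limit fails loudly rather than queueing silently; the whole value of the limit is the conversation it forces when someone tries to exceed it.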