‘A Good Puzzle’: Reproducing a Hard-to-Find Bug

How many times have you seen a bug closed as “not reproducible”? I’d venture nearly everyone has seen it at least once.

Does this mean the bug doesn’t exist? No. Chances are it’s there, but when it’s not easily reproducible, development teams have to weigh the bug’s potential impact against the time it will take to determine what actually happened.

More often than not, the effort is judged not worth the cost and the bug is closed. But what happens when users start reporting the issue in production? Taking the time to find the pattern behind an issue early in an application’s life cycle can reduce future costs, in terms of both actual dollars and brand reputation.

In my current role as an Applause Test Architect, I help form testing teams for Applause’s customers by selecting testers from our uTest community, of which I used to be a member. uTest teams specialize in identifying hard-to-find bugs and patterns, which can help companies dramatically reduce those future costs.

I want to share with you an example from my days as a tester, when I reproduced a hard-to-find bug for a customer and gave the customer the details so they could fix it. Hopefully this will show you that testing is not just a series of pressing buttons until one works — it’s a series of investigations that requires training to execute.

Finding a bug = a good puzzle

Many years ago, a “WebZine” company was getting multiple complaints from their readers about HTML code showing up on certain stories, blocking out the content. The readers sent in screenshots, and there were enough complaints to warrant investigation.

However, the company could not reproduce the issue seen in the screenshots. The company was perplexed — the evidence showed the issue was indeed happening, but the pattern for producing the issue was a mystery. The company reached out to engage uTesters to help find the pattern(s).

I love a good puzzle, so the opportunity to jump in and figure out what was going on was an easy choice. I joined a team of 10 to 20 other testers, and the race was on.

Working through the options

The first thing I did was to look at the existing evidence, the description of the issue, and — more importantly — the screenshots the readers had provided.

While the majority of the screenshots showed the issue on various articles, a handful of them were from the same article. Starting with this information, I searched the site for that particular article and loaded up the page. As expected, the issue did not present itself.

The reports and screenshots didn’t give any clue as to which OS or browser these readers were using, a common theme in user-reported issues. So the next step was to pull up the same article in every browser available at the time, on every OS I had access to. Once again, the issue did not present itself. I was starting to see why the client had reached out for help with finding the pattern. By now I was very invested in solving this riddle.

I started looking at the page elements themselves to see if there were any variables in play. There were. Every page served advertisements. Refreshing the page served new ads, so I kept refreshing, making note of the ads that were served. I noticed that the same ads would reappear about once every 10 or so page loads, but still the issue refused to show itself. The next step was to run through the ad catalog in the other browsers. My run through Firefox did not produce the issue, and neither did Chrome.
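As a side note for anyone who wants to try something similar today, here is a minimal sketch of how that refresh-and-note-the-ads pass could be scripted with Playwright. The article URL, the ad iframe selector, and the check for leaked form markup are illustrative assumptions, not details from the original engagement.

```typescript
// Hypothetical sketch only: URL, selector, and the "leaked markup" check are
// assumptions for illustration, not the original setup.
import { chromium } from 'playwright';

async function logAdsAcrossRefreshes(articleUrl: string, loads: number): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(articleUrl);

  for (let i = 1; i <= loads; i++) {
    await page.reload({ waitUntil: 'networkidle' });

    // Note which ad creatives were served on this load.
    const adSources = await page.$$eval('iframe[src*="ads"]', (frames) =>
      frames.map((f) => (f as HTMLIFrameElement).src)
    );

    // Flag loads where raw form markup shows up as visible text on the page.
    const leakedMarkup = await page.evaluate(() =>
      document.body.innerText.includes('<form')
    );

    console.log(`load ${i}`, { adSources, leakedMarkup });
  }

  await browser.close();
}

logAdsAcrossRefreshes('https://example.com/some-article', 30).catch(console.error);
```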

When I ran the ad catalog through Internet Explorer (IE), I was pleased to see a page full of code blocking out the content. I had found it. I took some screenshots and refreshed the page once again.

I knew the issue was real now, but could the development team reproduce it in order to push a fix? I needed more information. I refreshed the page again: no issue. A few more refreshes. No issue. A few more, and there it was again. I noted that the ad being served was the same ad as in my prior evidence. I was getting closer. I was pretty certain this ad was the culprit, but what was it doing that was causing the issue?

Narrowing in on the problem

Going back to the other browsers, I proved that this ad was not affecting them, so I went back to IE and chose different articles. The issue would present itself on every article where this ad was served. I could have left it there and let the developers figure out why, but I was on a mission now.

Since there was actual HTML code being presented on the page, I looked at it more closely. The code was for a submission form. I viewed the page’s source and found the exact same code there; the source also revealed that there were no other forms present. Because that form code was identical on the pages where the issue did not appear, I surmised that the offending ad was submitting its own form, most likely in a nefarious attempt to drop a cookie, and that its form used the exact same input value as the client’s form code, causing the client’s code to break.
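To make that surmise concrete, here is a minimal sketch of the kind of input-name collision I suspected, written as modern DOM code in TypeScript. The markup, the selector, and the submit handler are all hypothetical, since I never could see the ad’s actual code.

```typescript
// Hypothetical reconstruction of the collision; the markup and script below
// are illustrative, not the WebZine's or the ad network's actual code.

// The site's own markup might include:
//   <form class="story-signup"><input name="email"></form>
// while the offending ad creative injects its own form reusing the same
// input name:
//   <form><input name="email" type="hidden"></form>

// querySelector returns the first match in document order, so if the ad's
// form is parsed ahead of the article's, the site's script now grabs the
// ad's input (and its parent form) instead of its own.
const input = document.querySelector<HTMLInputElement>('input[name="email"]');
const form = input?.form ?? null;

if (form && input) {
  // Whatever templating or submit handling the site runs here now operates
  // on the wrong element. In an older IE engine, that kind of mix-up could
  // plausibly leave the site's own <form> markup rendered as literal text,
  // which is what the readers' screenshots showed.
  form.addEventListener('submit', (event) => {
    event.preventDefault();
    console.log('Submitting:', input.value);
  });
}

// The fix described below maps to namespacing the site's own input, e.g.
// name="webzine-email", so an injected ad form can no longer collide with it.
```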

Since I couldn’t see the ad’s code, I couldn’t be 100% certain, but I was confident enough that I had found the pattern and provided enough evidence for the WebZine’s development team to be able to fix the issue. After about an hour of work, I logged my bug report.

The client was more than pleased, and a quick change to the input value of their form rendered the issue legitimately unreproducible in subsequent test runs.

The cool thing about working for Applause is that our testers have the technical expertise, experience and know-how to reproduce these kinds of bugs on a regular basis. And because testers compete against one another to reproduce them first, that competitive spirit quickly delivers actionable results for organizations.

You can read Part 2 of my series on bug reports here.
