The right testing environment goes a long way, but it alone can’t prevent builds from being sent back to the developer. Many failures found in testing can be avoided sooner: before handing builds off to the testing team, developers can run a few simple smoke tests to catch issues early. In fact, smoke testing uncovered one of my favorite bugs ever.
One of my previous employers developed an in-store information kiosk to help customers find products. The customer would key an item into the search functionality, and the kiosk would present the item’s in-store location. The customer could also print the information via an on-board thermal printer. If the item was out of stock, the customer could place a special order for it, and the printer would produce a receipt.
Excited to roll these kiosks out to our stores, the development team informed my QA manager they were ready for testing. I was assigned to test these kiosks.
I headed to our QA lab, where a sleek-looking, company-branded kiosk waited. It resembled an old arcade video game cabinet, only a bit more stylish. The kiosk had a screen, keyboard, trackball and thermal receipt printer built into it. It truly looked beautiful, and I thought it would make a nice addition to our stores.
But, first, it had to pass my tests.
I started with a simple product search. The kiosk reported that the item was in stock and gave me the shelf location. I checked this information against the lab store database, and the data matched. So far so good.
Thinking like a customer, I next hit the Print button so I could carry the location information around the store. What happened next scared the heck out of me.
Once I selected Print, the kiosk started up the thermal printer, which fired off a 5-foot-long printout, traveling at what seemed like 100 miles per hour. The printout shot through the air, inches from my neck, only to land on the floor about 10 feet behind where I had been standing. Luckily, my fight-or-flight response kicked in quickly — a well-timed drop to the floor saved me from a devastating paper cut.
Once the shock of this incident wore off, there was only one thing left to do: see if the issue was reproducible. Wonderful.
So, standing as far away from the printer’s target zone as possible, I ran my test again. Sure enough, the kiosk fired its salvo across the room. To its credit, after I picked up and examined the monstrous printout, I found that the printer did manage to print all the expected data. Score one for the printer.
I called my manager and told him to come see this bug I had found. Being a busy man, he told me to explain it to him. This, however, was a bug he simply had to experience for himself. By the time he arrived at the lab, I had marked the spot on the floor where he should stand. I asked him to run the same test while I watched from afar. He hit the Print button and, like I had, found himself lying on the floor in self-defense a split second later.
He got up and composed himself a bit. Then, he looked at me with a wry smile. “We have to show this to Mark,” he said.
Mark, the company’s CIO, came down to the lab to see this issue. Not wanting to expose a company executive to any danger, we decided that I would run the test while he watched from a safe distance. Even from a distance, Mark flinched when the paper projectile shot across the room. Mark gave my manager a deadpan look and said, “That is not going into my stores.”
In the end, the case of the killer kiosk came down to a simple mistake that could easily have been caught before the kiosk was ever sent for testing. The development team had installed the wrong printer driver when they assembled the kiosk, but they never took the time to print anything to make sure it worked correctly. The print projectiles made for a great story, but the error cost the company more time and money to correct than it should have.
At Applause, we have seen this shortcut countless times. Our clients are on tight deadlines and so eager for our teams to test their builds that they neglect simple checks. Without these quick assessments, you can’t be sure a build is actually testable. In some cases, when issues are clearly evident, our teams will catch them during a dry run and inform our clients so they don’t waste time on an untestable product.
But there are times when the problem isn’t immediately detectable, such as a build pointing to the wrong environment. When that happens, every expected result in a regression run can fail, which costs our clients time and money. Take a few minutes to run a simple smoke test before sending a build for testing. It’s the best way to ensure a digital product is ready for testing, and it can save hours of rework.
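A smoke test doesn’t have to be elaborate. As a rough illustration, here is a minimal pre-handoff check in Python; the config file name, the "api_base_url" key, the /health endpoint and the URLs are all hypothetical stand-ins for whatever your own build actually exposes.

```python
"""Minimal pre-handoff smoke test (a sketch, not a full suite).

Assumptions (all hypothetical; adjust for your product):
  - the build ships a config.json containing an "api_base_url" key
  - the backend exposes a /health endpoint that returns HTTP 200
  - EXPECTED_ENV_URL is the environment QA expects to test against
"""
import json
import sys

import requests

EXPECTED_ENV_URL = "https://qa.example.com"  # environment QA expects
CONFIG_PATH = "config.json"                  # hypothetical build config
TIMEOUT_SECONDS = 5


def check_environment() -> None:
    """Fail fast if the build points at the wrong environment."""
    with open(CONFIG_PATH) as f:
        config = json.load(f)
    actual = config.get("api_base_url", "")
    assert actual.startswith(EXPECTED_ENV_URL), (
        f"Build points at {actual!r}, expected {EXPECTED_ENV_URL!r}"
    )


def check_backend_reachable() -> None:
    """Confirm the core dependency answers before QA invests any time."""
    response = requests.get(f"{EXPECTED_ENV_URL}/health", timeout=TIMEOUT_SECONDS)
    assert response.status_code == 200, (
        f"Health check returned {response.status_code}"
    )


if __name__ == "__main__":
    for check in (check_environment, check_backend_reachable):
        try:
            check()
            print(f"PASS: {check.__name__}")
        except Exception as exc:  # any failure means the build isn't ready for QA
            print(f"FAIL: {check.__name__}: {exc}")
            sys.exit(1)
```

Even a handful of checks like these, run before every handoff, would have caught a wrong driver or a misconfigured environment in minutes rather than after a full regression cycle.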