
Better testing could solve most tech problems, so why aren’t companies doing it?

Why too many tech companies under-invest in QA testing and what can be done about it.



The headlines are packed with stories of high-tech gadgets, apps, and video games failing to meet consumer expectations due to performance failures or otherwise disappointing features. Samsung smartphones have exploded, Nintendo Joy-Con controllers develop stick drift over time, and countless apps and software programs have launched riddled with bugs.

Obviously, these problems span many different companies, so they can't be blamed on any single company's culture. You could argue that these examples were disproportionately highlighted by the media, making us believe tech products are more problematic than they actually are. Or you could argue that these types of product failures are inevitable, and not that big of a deal.

However, it seems obvious that most of these problems should have been detected—and dealt with—long before the products went to market. There’s an abundance of automated testing tools for companies to use when testing their products, and even a cursory evaluation could have revealed many of the hardware problems that now seem rampant in major tech releases.

So if better testing could solve most of these high-profile tech problems, why aren’t companies doing it?

The problem with the “race to market”

Much of the problem is a byproduct of short consumer attention spans. We tend to best remember the brand that gets to market first, rather than the brand that accomplishes a goal best. Because of this, tech companies are constantly racing to get their latest and greatest products to market before their competitors have a chance. Accordingly, companies cut their timelines to accelerate the process. Sometimes, that means deliberately sacrificing quality. Other times, it means hiring more people or outsourcing to other companies. Most commonly, it means cutting corners, such as shortening or scaling back the testing process.

On some level, this strategy makes sense, especially for software products. Getting to market early, even with a few bugs, could conceivably grant you a business advantage that exceeds the reputational costs of a product failure. This is a complicated equation that doesn’t usually have predictable results. Without testing, it’s impossible to tell exactly which bugs exist or how severe they are, so it’s necessarily a gamble to forgo or limit testing. Companies also sometimes use the product launch as a kind of test phase in itself, ready to deploy changes based on whatever bugs or problems first-generation users encounter. Agile software development almost encourages this, allowing developers to adapt as new information comes in.

But this philosophy doesn’t necessarily hold water. Plenty of tech companies found success not by being first in a given market, but by being the best. Google, for example, emerged in an era with many competing search engines, but it succeeded because it handled searches better. Taking the extra time to test thoroughly leaves companies with the disadvantage of being late to the party, but the great advantage of superior quality. If they tested consistently, they’d eventually develop a reputation for making quality products, and gain market share based on that reputation.

The minimum viable product philosophy


Another complicating factor is the “minimum viable product” philosophy. Typically used by startups and companies launching brand-new products, the basic idea here is to focus on launching a version of your product that perhaps isn’t as robust as you’d like, but is functional enough to attract customers. Launching with a minimum viable product plays into the race to market, as it encourages you to launch as soon as possible. However, it has a slightly different and equally problematic underlying philosophy.

Creating a minimum viable product means limiting your front-end costs until you can start generating revenue. Without revenue to counterbalance your costs, your operation gets expensive fast, and could implode if you miss your internal deadlines. Accordingly, launching quickly limits your sunk costs and gets you to revenue faster.

The problem with this view is that it prioritizes revenue over customer impressions and satisfaction, at least in the short term. And it often works: most of the time, emergent problems only become a big deal because so many customers have already paid for the product. Repair costs and (potentially) legal action can be expensive, but these expenses may not outweigh the benefits of immediate income.

How much is enough?

Companies may also struggle with determining how much testing is truly “enough” to consider their product safe and functional. There’s no standard handbook for this. Instead, different types of testers (such as QA or QC) intentionally look for problems or try to break the product in different ways. In most cases, the scenarios are fabricated: testers evaluate the product under controlled conditions rather than in a context that directly mirrors how it would be used in real life.
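To make the “fabricated scenario” idea concrete, here is a minimal sketch of what an automated test often looks like in practice. The discount function, its name, and the chosen values are hypothetical, invented purely for illustration; the point is that a tester hand-picks the inputs they think matter, which may or may not resemble how real customers behave.

import unittest

def apply_discount(price: float, percent: float) -> float:
    # Hypothetical product code: apply a percentage discount to a price.
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    # Each test is a fabricated scenario chosen by the tester.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_invalid_discount_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()

Even a suite like this only covers the handful of cases someone thought to write down; it says nothing about the messier ways a feature gets used once it ships.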

How many hours of testing is enough to determine that a product is suitable? What should those hours consist of? These aren’t easy questions to answer.

Toward a better solution

The complications above help explain why testing appears to be insufficient in the tech industry, but there are other, more speculative factors that enter the equation as well. For example, how much of a role does planned obsolescence play in companies’ decisions? Are tech companies deliberately sabotaging or limiting the lifespan of their own products to drive customers toward purchasing newer versions?

So what would be better? Obviously, more robust testing standards for tech companies would result in products less susceptible to basic and infuriatingly preventable technical issues. But should this be mandated? If so, how would you enforce such a policy? And who would be responsible for determining what counts as “sufficient” testing?

A better first step could be providing more transparency to consumers, with publicized details on testing standards and processes for various new tech products. In other words, how was this product tested, and how many iterations did it go through? What were the results? This way, if consumers choose to buy a poorly tested product, they at least know the risks going in—and companies that do spend the extra money and take the extra time for more thorough testing get rewarded with higher customer appeal, even if they’re late to market.
