What is the right amount of testing?

#career

Over the years, my answer to this question has changed drastically: from fully embracing integration testing and requiring code coverage numbers on pull requests (PRs), to writing only unit tests covering the critical paths of the application and leaving much of the code untested.

Of course, there’s no definitive answer for the right amount of testing. If there were, everyone would be doing it. Instead, I’d like to share what I consider a good default amount of testing. But before diving into that, let’s discuss the problem we’re trying to solve.

The primary goal of testing is to prevent bugs from reaching production. Too little testing leads to a brittle application. As more features get pushed out, more fires need to be fought. And eventually you’re worried that any change might cause something unrelated to break.

In response, we swing the pendulum to the other side. We write extensive suites of integration tests that run on every PR. Code coverage numbers need to be met before a PR can be approved. We aim to test every possible path, leaving no room for bugs to slip through. Unfortunately, our feedback loops begin to slow down. And meaningful features take longer to release.
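
To make the coverage gate concrete, here’s a sketch of what that kind of rule often looks like in a Jest config. The exact thresholds are hypothetical, and Jest is just one example of a test runner that supports them.

```ts
// jest.config.ts — an illustrative sketch, not a recommendation.
// With coverageThreshold set, the test run (and any PR check built on it)
// fails whenever coverage dips below these numbers.
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 90,
      functions: 90,
      lines: 90,
      statements: 90,
    },
  },
};

export default config;
```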

Both of these are extreme ends of the spectrum. Too little testing leads to brittle applications. Exhaustive integration tests slow down feedback loops and delay feature releases. So what is a happy middle ground? Here’s what I think we should default to.

While there’s no one-size-fits-all solution for the right amount of testing, I’ve found that these defaults have worked well in the past. They strike a balance between maintaining healthy developer feedback loops and minimizing the game-breaking bugs that make it to production.
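
To make “unit tests on the critical paths” concrete, here’s a minimal Jest-style sketch. The calculateTotal function is a hypothetical stand-in for whatever your application can’t afford to get wrong; the point is that the money math earns a test while cosmetic details don’t.

```ts
// checkout.test.ts — an illustrative sketch; calculateTotal is hypothetical.
type LineItem = { price: number; quantity: number }; // prices in cents

// The critical path under test: getting totals wrong costs real money.
function calculateTotal(items: LineItem[], discountPercent: number): number {
  const subtotal = items.reduce((sum, item) => sum + item.price * item.quantity, 0);
  return Math.round(subtotal * (1 - discountPercent / 100));
}

describe('calculateTotal', () => {
  it('sums line items and applies the discount', () => {
    const items = [
      { price: 1000, quantity: 2 },
      { price: 500, quantity: 1 },
    ];
    expect(calculateTotal(items, 10)).toBe(2250); // (2000 + 500) * 0.9
  });

  it('returns 0 for an empty cart', () => {
    expect(calculateTotal([], 10)).toBe(0);
  });
});
```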
