Posted on August 5, 2015 by Chris Harrington

6 Reasons Why You Should Be Unit Testing Your Code

Most of the stuff I write here is code-heavy. I typically set out with a specific goal that I want the reader to accomplish by reading my articles, and most of the time, the posts I write take on a “how to” sort of format. I’m going to change things up a little today and focus more on the big picture with respect to unit testing. I think most coders out there have been preached to about the benefits of unit testing and how important it is, and I suppose this article is just another version of that.

So what is unit testing, exactly? Unit testing is writing code to test your code. It involves writing (or receiving) a test case, described in plain language, that specifies the input and the expected output for a small portion of a codebase. Once the test case is well understood, the coder can then build the test around it. Tests are run using what's called a "test runner", unsurprisingly, which indicates the result of a test or group of tests (almost always "pass" or "fail"). A failed test will typically include a reason for the failure (for example, "expected 4, but got 5") along with a stack trace. There are many, many different ways of writing unit tests, with many different libraries to make it easier, but the reasons for doing it are the same. Here are six reasons why you should be unit testing your code, in no particular order.
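To make that concrete, here's a minimal sketch using Python's built-in unittest module. The add function and the test names are invented for illustration; the shape is the same in any language or framework:

```python
import unittest

def add(a, b):
    """The unit under test: a deliberately tiny function."""
    return a + b

class AddTests(unittest.TestCase):
    # Each test method pairs an input with its expected output,
    # exactly as the test case describes.
    def test_adds_two_positive_numbers(self):
        self.assertEqual(add(2, 2), 4)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-1, -4), -5)
```

A test runner such as `python -m unittest` reports each test as a pass or a fail, and a failing assertion prints the expected and actual values along with a stack trace.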

Confidence

Writing unit tests that cover any code you’ve written will increase the level of confidence you and others have in that code. I might think I’m infallible, but others aren’t so sure, so it’s really, really important to give high visibility to your tests. This confidence is important for anyone who you’re expecting to use your code, be it your coworkers, customers or even yourself in a few years. It’s doubly important for relatively unknown open source software, because relatively few people have used said software and bugs are likely to be present.

Often, confidence in code is boiled down to a percentage known as "code coverage": the percentage of your code that's executed when all your tests are run. Having high code coverage doesn't necessarily mean you have good tests that cover all the angles, but having low code coverage is indicative of the opposite. 85% coverage is usually going to be better than 20% coverage, but not always, as the 85% case could contain very, very poor tests, while the 20% group could be well written. It's not a perfect metric, but it's a nice goal to set for unit testing. It's also great to show management, as more often than not, a manager won't know the ins and outs of the codebase, but can understand a simple coverage number. For what it's worth, aiming for 100% code coverage is almost always a fool's quest: it takes roughly the same amount of effort to go from 0% to 60% as it does to go from 60% to 80%, and then again from 80% to 90%, and so on, so the extra work required climbs steeply as you approach 100%. In some cases, it's almost impossible to properly test some code, because, for example, built-in libraries don't expose an interface you can test against (looking at you, Microsoft).
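As a rough illustration of how coverage is measured, the sketch below uses Python's standard-library trace module to record which lines execute while a tiny test suite runs. The half function and its single test are invented for this example; a real project would reach for a dedicated tool like coverage.py instead:

```python
import io
import trace
import unittest

def half(n):
    if n % 2 == 0:
        return n // 2
    else:
        raise ValueError("odd input")

class TestHalf(unittest.TestCase):
    def test_even(self):
        self.assertEqual(half(8), 4)
    # No test exercises the odd branch, so the raise line
    # never executes and half() sits below 100% line coverage.

# Record per-line execution counts while the suite runs.
tracer = trace.Trace(count=True, trace=False)
suite = unittest.TestLoader().loadTestsFromTestCase(TestHalf)
result = tracer.runfunc(unittest.TextTestRunner(stream=io.StringIO()).run, suite)

# counts maps (filename, line number) -> how many times that line ran;
# a coverage tool aggregates this into the familiar percentage.
counts = tracer.results().counts
```

Note that every test passes here even though a whole branch goes unexecuted, which is exactly why a green suite and a high coverage number are two different claims.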

Code changes cause tests to fail elsewhere

This is an important concept to understand. If I’ve written an app from the ground up, chances are I know that fiddling with the code here will require changes over there. That’s definitely not true for anyone else who starts working on that app, as they’re missing the familiarity that comes with working with a large codebase for a long period of time. This is especially true of relatively new developers who haven’t quite grasped the impact of any changes they’re making. Bugs caused by this sort of error can be tricky to track down, as the problem often appears to have arisen at random.

In a scenario without unit tests, these changes could cause failures that go unnoticed because nobody thinks to re-test an area that has always worked. In the event a QA department catches this, it’s because they’re aware that changes in one portion of the app routinely affect other areas, which leads me to my next point…

Lower QA effort

If you’ve got unit tests covering a large portion of your codebase, your QA department can focus on the new stuff as opposed to regression testing all of the old stuff whenever new code is committed. That’s not to say QA shouldn’t still look at the big picture once each portion has been tested, but having tested code inspires confidence that most of the angles have been covered. As an example, QA could share test cases for a story as it gets assigned to a developer, and the dev can then write unit tests around those test cases. This will reduce the amount of back and forth a task has between QA and the developer for bugs and missed features, freeing up your QA team for other features.

Testable code is better written code

When I first started coding, I’d write huge classes with multiple entry points and dozens of private methods. This is not easily testable code. The term “cyclomatic complexity” describes the number of independent paths through a function. For example, a function with a single if statement has a cyclomatic complexity of two, as the code can take two possible paths to get to the end. As a rule of thumb, you’ll need at least one test for each branch in your code: one for each of the two paths in the if statement example above. If your code is overly complex, the number of tests required will be astronomically high. A better approach is to split up this code into separate classes that are responsible for one thing and one thing only (the Single Responsibility Principle). Code that follows this principle is inherently more testable because the classes and objects will be smaller and far more manageable, with a much lower cyclomatic complexity. The number of tests you have to write won’t change, but the effort expended in writing those tests will be greatly diminished because you won’t have to include a bunch of setup for the stuff you’re not testing.
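For instance, a single if/else gives a cyclomatic complexity of two, so two tests, one per branch, are the sensible minimum. The shipping_cost function and its prices below are made up for illustration:

```python
import unittest

def shipping_cost(weight_kg, express):
    # One if/else means a cyclomatic complexity of two,
    # so we want at least one test per branch.
    if express:
        return 10.0 + 2.5 * weight_kg
    else:
        return 5.0 + 1.0 * weight_kg

class ShippingCostTests(unittest.TestCase):
    def test_express_branch(self):
        self.assertEqual(shipping_cost(2, express=True), 15.0)

    def test_standard_branch(self):
        self.assertEqual(shipping_cost(2, express=False), 7.0)
```

Nest a second if inside each branch and the path count doubles again, which is exactly why small, single-purpose functions are so much cheaper to test.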

Test descriptions work as documentation

If you follow Behavior-driven development, your tests serve as a de facto documentation scheme for your code. The whole idea behind BDD is that behavior is what drives the generation of your test cases, which in turn is what drives the creation of your tests. It typically looks like this: “Given …, when …, then …” For example, take an Add method. “Given that I want to add four and six together, when calling add, then I should get 10.” This describes what the add method does in a specific circumstance and what the result should be. I’m not suggesting this replace any documentation you’ve already written for a project, but it’s better than nothing.
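Here's that add example written as a plain unittest test whose name and comments follow the given/when/then structure (the class and method names are my own invention; BDD frameworks formalize this, but the idea survives in any framework):

```python
import unittest

def add(a, b):
    return a + b

class AddBehaviour(unittest.TestCase):
    def test_given_four_and_six_when_calling_add_then_result_is_ten(self):
        # Given two numbers I want to add together
        a, b = 4, 6
        # When calling add
        result = add(a, b)
        # Then I should get 10
        self.assertEqual(result, 10)
```

Read out the test method names in a runner's output and you get a behavioral summary of the code, which is the documentation effect this section is describing.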

A large effort now saves time later

Unit testing will take more time than not unit testing. At least, at first. If you skip unit tests, at some point down the road you’re going to encounter bugs that could have easily been avoided by writing a test to wrap that particular functionality. In the worst case scenario, a bug could go back and forth between “fixed” and “failed testing” a number of times due to poor coding or a failure to properly understand test cases, both of which could be easily solved had a unit test been in place. This also applies to the maintainability of your codebase: changing code that is well tested won’t result in unexpected bugs because your tests will catch them before they make it out of the gate.

A corollary to this point revolves around when a bug is first discovered. Once the bug is well understood, a test case can be written that encapsulates the faulty functionality. The test case can be handed off to a developer, who then writes a test that describes the test case. Finally, the bug can be fixed. That bug will never bother you again, as the unit test covers that eventuality.
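A sketch of that workflow in Python, with an invented slugify function standing in for the buggy code:

```python
import unittest

# Hypothetical bug report: slugify("Hello World") came back as
# "Hello World" instead of "hello-world".
# Step 1: capture the report as a test case (below).
# Step 2: apply the fix; before it, this function only stripped whitespace.

def slugify(text):
    return text.strip().lower().replace(" ", "-")

class SlugifyRegressionTests(unittest.TestCase):
    def test_spaces_become_hyphens_and_case_is_lowered(self):
        # This test fails against the buggy version and passes after
        # the fix, so it now guards against the bug ever coming back.
        self.assertEqual(slugify("Hello World"), "hello-world")
```

Writing the test first also proves it can fail: watch it go red against the old code, then green after the fix.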
