Automated testing is moving into the mainstream, adopted as a “best practice” by more companies each year. But why? Here are my reasons, originally intended as bullet points in a presentation on how to write tests.
Tests verify that the program behaves as expected
Let's get one thing out of the way up front: tests can find bugs, but they can't prove that no bugs exist. Or, as my friend Drew puts it: “tests can only show that your incorrect assumptions are internally consistent.”
However, as you increase test coverage, using well-designed tests, you gain confidence that the program will do what you want it to. In other words, that there aren't any obvious bugs. And unless you're writing code for the space shuttle, that's probably good enough.
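To make that concrete, here is a minimal sketch of what "verifying expected behavior" looks like in practice. The PriceCalculator class and its tax rule are hypothetical, invented purely for illustration; the point is that each test pins down one behavior the program is expected to have.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {

    // Hypothetical code under test, inlined here so the example is self-contained.
    static class PriceCalculator {
        private final double taxRate;
        PriceCalculator(double taxRate) { this.taxRate = taxRate; }
        double priceWithTax(double price) { return price * (1.0 + taxRate); }
    }

    @Test
    public void testAppliesStandardTaxRate() {
        PriceCalculator calc = new PriceCalculator(0.08);   // 8% tax
        assertEquals(10.80, calc.priceWithTax(10.00), 0.001);
    }

    @Test
    public void testZeroPriceRemainsZero() {
        PriceCalculator calc = new PriceCalculator(0.08);
        assertEquals(0.00, calc.priceWithTax(0.00), 0.001);
    }
}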
Tests verify that the program continues to behave as expected when changed
The major portion of a program's development happens after it's released (80% is commonly quoted, but I couldn't find an authoritative reference). The bugs that got through testing will be found by end-users. Requirements will change, ranging from a simple UI facelift, through the addition of new business rules, to the deep structural changes needed to support increased load.
And when you change code, you risk breaking it. Usually in a place that you didn't think would be affected. Even in well-written code, there may be hidden side-effects. A test suite can protect you from the unintended consequences of change, provided again that it has complete coverage and well-designed tests. In my opinion, this is how automated tests provide the most value to the organization.
Of course, a test suite can also become part of a change. If your business rules change, then your tests have to change as well. This should be less of an issue at the level of “unit” tests, but it still happens. Unfortunately, many organizations treat such changes as an undesired cost. Instead, they should view them as a warning that the code may contain hidden dependencies on the old behavior, and budget extra time for the release.
Tests serve as documentation
The idea of test-as-specification has long been part of Agile orthodoxy, although in practice it can take a lot of work to make that happen with mainstream testing tools. I know that I've written more than my share of test methods with names like testOperation(). But if you have the discipline, a method named testFailureWhenArgumentsWouldCauseIntegerOverflow() is far more useful.
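As a sketch of what that discipline looks like, here is a hypothetical test whose name carries the specification. Math.addExact() stands in for whatever operation the production code actually performs; the point is that the method name, not a comment, records the rule being verified.

import org.junit.Test;

public class OverflowSpecificationTest {

    // The name states the expected behavior; the body just exercises it.
    @Test(expected = ArithmeticException.class)
    public void testFailureWhenArgumentsWouldCauseIntegerOverflow() {
        Math.addExact(Integer.MAX_VALUE, 1);
    }
}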
Tests give you a chance to think about your design
To me, this has always been the main benefit of testing: “if it's hard to test, it will be hard to use.” Of course, you can take this to an extreme: I have actually been asked by a traditional QA developer to store an application's internal data in comma-delimited format so that they could validate it (in that case, the binary format already took over 1GB, and was heavily optimized for access speed). While actively harming your design in the name of testability is foolish, it's not the common case.
More realistic is some code that I recently refactored: a single class that created a listener for external data, applied some business logic to the messages received, and sent messages based on that logic. As written, this code was impossible to test without instantiating the entire messaging framework. After refactoring, the business logic was in its own class, with separate listener and sender objects that could be mocked for testing. And that core business logic could now be tested in the form of a specification, with test names like testIgnoresThirdAndSubsequentDuplicateMessages().
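Here is a minimal sketch of that shape, not the original code. The interface, the class names, and the "forward the original plus two duplicates" rule are my own illustrative assumptions; what matters is that the business logic no longer touches the messaging framework, so a hand-rolled mock sender is enough to test it.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DuplicateFilterTest {

    interface MessageSender {
        void send(String message);
    }

    // The extracted business logic: it depends only on the MessageSender
    // interface, not on any messaging framework.
    static class DuplicateFilter {
        private final MessageSender sender;
        private final Map<String, Integer> copiesSeen = new HashMap<>();

        DuplicateFilter(MessageSender sender) { this.sender = sender; }

        // Called by the (now separate) listener for each inbound message.
        void onMessage(String message) {
            int copies = copiesSeen.merge(message, 1, Integer::sum);
            // Assumed rule, chosen to match the test name from the post:
            // forward the original and the first two duplicates, ignore
            // the third and subsequent duplicates.
            if (copies <= 3) {
                sender.send(message);
            }
        }
    }

    // A trivial mock sender that just records what was sent.
    static class RecordingSender implements MessageSender {
        final List<String> sent = new ArrayList<>();
        public void send(String message) { sent.add(message); }
    }

    @Test
    public void testIgnoresThirdAndSubsequentDuplicateMessages() {
        RecordingSender sender = new RecordingSender();
        DuplicateFilter filter = new DuplicateFilter(sender);

        filter.onMessage("ORDER-42");   // original: forwarded
        filter.onMessage("ORDER-42");   // first duplicate: forwarded
        filter.onMessage("ORDER-42");   // second duplicate: forwarded
        filter.onMessage("ORDER-42");   // third duplicate: ignored
        filter.onMessage("ORDER-42");   // fourth duplicate: ignored

        assertEquals(3, sender.sent.size());
    }
}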