In my article on code coverage I was careful to stress that a coverage tool can only tell you what you haven't tested; it can't tell you if you're missing mainline code. That made a recent bugfix to one of my open-source libraries even more embarrassing: the class had 100% coverage but was missing seven characters of mainline code. And those seven characters were the difference between correct behavior and a bug that would manifest as non-deterministic behavior in consuming code.
When I have a bug, my first task is to write a unit test that exposes it (and demonstrates that I've fixed it). Then I look through all similar code for the same bug (and in this case I found it). Finally, I think about how the bug came into existence and, more importantly, why I didn't uncover it when I was first implementing the class and writing its tests.
In this case, the answer was simple: the bug was in the single-byte read method of an InputStream implementation, and since I rarely use that method, I got sloppy when writing tests for it. I wrote “happy path” tests that validated correct behavior, but did not try to trick the class into exhibiting incorrect behavior.
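To make this concrete, here's a sketch of the classic form this bug takes in a single-byte read() (the class and field names below are illustrative, not the actual library code). InputStream.read() must return the byte as an unsigned value from 0 to 255, or -1 at end of stream; return the signed byte directly and any byte with the high bit set looks like end of stream.

    import java.io.IOException;
    import java.io.InputStream;

    // illustrative stream that serves bytes from an in-memory buffer
    public class BufferBackedInputStream extends InputStream {
        private final byte[] buffer;
        private int pos;

        public BufferBackedInputStream(byte[] buffer) {
            this.buffer = buffer;
        }

        @Override
        public int read() throws IOException {
            if (pos >= buffer.length) {
                return -1;              // end of stream
            }
            // BUG: return buffer[pos++];
            //   returns the byte as a signed value, so 0xFF comes back as -1,
            //   which callers interpret as end of stream
            // FIX: mask to an unsigned value in the range 0..255
            return buffer[pos++] & 0xFF;
        }
    }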
This experience has pushed me to think, again, about the role of unit tests. And it reminded me that there's a big difference between the way a software developer thinks and the way a good QA tester thinks: the former wants the code to work; the latter wants it to fail. Both mindsets are needed, but I'm not sure that any one person can hold both at the same time. And that's what's needed for unit tests to truly exercise code.
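For the stream sketched above, the two mindsets produce very different tests (again, a JUnit sketch with illustrative names). The happy-path test passes whether or not the mask is there; only the adversarial test, which feeds the stream a byte with the high bit set, exposes the bug.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class BufferBackedInputStreamTest {
        @Test
        public void testHappyPath() throws Exception {
            // the kind of test that misses the bug: every byte is below 0x80
            BufferBackedInputStream in = new BufferBackedInputStream(new byte[] { 0x01, 0x02 });
            assertEquals(0x01, in.read());
            assertEquals(0x02, in.read());
            assertEquals(-1, in.read());
        }

        @Test
        public void testHighBitByteIsNotMistakenForEndOfStream() throws Exception {
            // the QA-minded test: a byte with the high bit set must come back
            // as 255, not as -1 (which would look like end of stream)
            BufferBackedInputStream in = new BufferBackedInputStream(new byte[] { (byte)0xFF });
            assertEquals(0xFF, in.read());
            assertEquals(-1, in.read());
        }
    }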
And I also wonder how such tests fit into the mindset of test-driven design. In my experience, TDD is very much focused on the happy path: each test describes a small piece of the overall functionality, a “specification in code.” Can detailed edge-case tests contribute to that specification without cluttering it? And if so, how does a developer switch mindsets to come up with them?