Sunday, February 1, 2009

TDD Isn't Just QA

I enjoy reading Joel On Software, and have started listening to the podcasts that he does with Jeff Atwood. It's a great distraction from my commute, particularly when Joel says something that's way over Jeff's head and Jeff goes off on a completely different topic. In general I agree with Joel's positions, but in the past few weeks he's been slamming test-driven development and programming to interfaces.

In last week's podcast, about 40 minutes in, Joel gives an example of what he would consider excessive testing: a feature that will change the compression level used by the Fog Creek Copilot application. His argument was that testing that feature would require actually comparing images under the different compression schemes, which would be an enormous amount of work simply to verify that you can click a button.

At this point I realized that one of us doesn't understand test-driven development. And I don't think that it's me.

I will give Joel credit for one thing: he gave a good example of how not to write a unit test.

First, because what he's describing is an integration test. And I'll agree, there's usually little return from writing a test to demonstrate that clicking a single button changes the overall flow of a program in a predetermined way. Particularly if your approach to such a test involves validating the output of a library that you do not control.

But why does he need to do an end-to-end test to verify this one button?

I suspect it's because he doesn't partition the program's logic across a series of interfaces. Instead, clicking that button sets off a chain of explicit function calls that ultimately reaches the other end of a TCP connection. And this is where the discipline of real unit tests comes in.

One of the underlying themes of test-driven development: if it's hard to test, it will be hard to use and maintain. Maybe not today, maybe not in the next revision, but at some point all of those strands of untestable code will wrap themselves into a tangled mess. And one way to make code easy to test is to create boundary layers in your application: the two sides of a boundary are easier to test than the whole.

For Joel's example, my first thought is that such a boundary could be created by introducing an object to hold configuration data. On one side of this boundary, the GUI sets values in the configuration object; on the other, the internals use those values to configure the actual libraries. Introducing that single object will improve testability, even without unit tests.
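Here's a minimal sketch of what I mean, in Java. The names (ConnectionConfig, compression level, color depth) are ones I made up for illustration, not anything from Copilot's actual code:

    public class ConnectionConfig {
        private int compressionLevel = 5;
        private int colorDepth = 24;

        // the GUI side of the boundary calls the setters
        public void setCompressionLevel(int level) {
            this.compressionLevel = level;
        }

        public void setColorDepth(int depth) {
            this.colorDepth = depth;
        }

        // the internals call the getters when configuring the actual libraries
        public int getCompressionLevel() {
            return compressionLevel;
        }

        public int getColorDepth() {
            return colorDepth;
        }
    }

The GUI knows nothing about compression libraries, and the compression code knows nothing about buttons; each side can be exercised on its own.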

How is the application tested today? I'm guessing that the tester sets a few configuration options, makes a connection, and visually inspects the result. This accomplishes two things: verification that the GUI controls are actually hooked up to something, and verification that the controls have some effect. But why should the former be tied to the latter?

Once you have a configuration object, GUI testing is simply a matter of adding logging statements to that object. You click a button, you should see the appropriate log message. While you could automate those tests, there's no reason to do so. Without the overhead of actually making a connection, any reasonable number of controls can be manually validated in a matter of minutes.
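Sticking with the hypothetical ConnectionConfig from above, it's a one-liner per setter:

    import java.util.logging.Logger;

    public class ConnectionConfig {
        private static final Logger logger =
                Logger.getLogger(ConnectionConfig.class.getName());

        private int compressionLevel = 5;

        public void setCompressionLevel(int level) {
            // this message is all the tester needs to see after clicking the button
            logger.info("compression level set to " + level);
            this.compressionLevel = level;
        }
    }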

The other side of the boundary is a different matter: rather than force your tester to manually inspect the results of each configuration change, write automated tests that do it for you. Different compression ratios and color depths can be verified using a single test image, with assertions written in code (does a higher compression ratio result in a smaller output? good, the test passed). There's no need to do low-level validation of the library, unless you don't believe that its authors did enough testing (in which case you have other issues).
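Here's a sketch of what one such test might look like with JUnit. The Compressor class is a hypothetical wrapper around whatever compression library Copilot actually uses; substitute your own:

    import static org.junit.Assert.assertTrue;

    import java.awt.Color;
    import java.awt.GradientPaint;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    import org.junit.Test;

    public class CompressionSettingsTest {
        // one synthetic test image, shared by every compression test
        private static BufferedImage createTestImage() {
            BufferedImage img = new BufferedImage(640, 480, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = img.createGraphics();
            g.setPaint(new GradientPaint(0, 0, Color.RED, 640, 480, Color.BLUE));
            g.fillRect(0, 0, 640, 480);
            g.dispose();
            return img;
        }

        @Test
        public void higherCompressionProducesSmallerOutput() throws Exception {
            BufferedImage testImage = createTestImage();
            byte[] light = new Compressor(1).compress(testImage);  // minimal compression
            byte[] heavy = new Compressor(9).compress(testImage);  // maximal compression
            assertTrue("higher compression should produce smaller output",
                       heavy.length < light.length);
        }
    }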

But wait, what if that compression code is part of the communications stack?!? OK, introduce another boundary layer, one that separates data processing from data transmission. And this brings me to my biggest complaint with Jeff and Joel: they seem to feel that testing is only important when you're writing an API.
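For illustration, that second boundary might be nothing more than a small interface (again, the names are mine): the data-processing code writes to a Transport, and tests substitute an implementation that captures bytes instead of opening a socket.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;

    public interface Transport {
        void send(byte[] data) throws IOException;
    }

    // test double: records whatever the data-processing code produces,
    // so compression tests never touch the network
    class CapturingTransport implements Transport {
        private final ByteArrayOutputStream captured = new ByteArrayOutputStream();

        public void send(byte[] data) {
            captured.write(data, 0, data.length);
        }

        public byte[] getCaptured() {
            return captured.toByteArray();
        }
    }

And that interface is itself an API, defined for the benefit of your own code.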

In my opinion, if you're doing object-oriented programming correctly, you are writing an API. Objects interact with each other through their interfaces: you give them certain inputs, and they produce defined outputs. This doesn't mean that every object needs to have MyIntf and MyImpl classes. It does mean that you need to define a contract that describes how those inputs and outputs are related.

By writing unit tests — true unit tests, not integration tests in disguise — you force yourself to define that contract. Your goal is not to write tests for their own sake, or even for the sake of future quality assurance, but to tell you when the interface meets your requirements.
