So here I am, with a SkipList implementation developed test-first. I have 100% test coverage (actually, more than 100% coverage), and didn't have to break encapsulation or pollute the class' interface to get it. Along the way, I wrote down some observations.
First: I remain convinced that test-first is the way to develop code, if only to ensure that you're doing what you think you are. In the case of the SkipList, this point was driven home early, when my tests caught a bug in the tier assignment method. It was a subtle bug: I knew that the size of the list determined the maximum tier for an element, so I wrote the tier-assignment code to pass the current size into
Random.nextInt(). My test, however, kept failing. When I looked at the actual counts I discovered they were skewed: the bottom tier had about half the elements it should, and the rest were distributed to higher tiers. Without the tests, I never would have discovered the problem, nor been able to experiment with a solution (which was to pass a power of two).
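To make those "skewed counts" concrete: a skip list's tier assignment should be geometric, with each element landing on the bottom tier with probability 1/2, one tier up with probability 1/4, and so on. The sketch below is not my original code (the class and method names are invented for illustration); it shows a coin-flip version of the assignment and the kind of count my failing test was checking:

```java
import java.util.Random;

// Invented names: a sketch of the distribution at stake, not the original code.
public class TierAssignment {

    // Geometric tier assignment: flip a fair coin, promoting while it comes up
    // heads. P(tier = t) = 2^-(t+1) for tiers below the cap, so about half of
    // all elements stay on the bottom tier.
    public static int geometricTier(Random rnd, int maxTier) {
        int tier = 0;
        while (tier < maxTier && rnd.nextInt(2) == 0) {
            tier++;
        }
        return tier;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        int trials = 100_000;
        int bottom = 0;
        for (int i = 0; i < trials; i++) {
            if (geometricTier(rnd, 8) == 0) bottom++;
        }
        // Should print a fraction close to 0.5.
        System.out.println("bottom-tier fraction: " + (double) bottom / trials);
    }
}
```

A uniform draw, by contrast, spreads elements evenly across the tiers, which is exactly the kind of skew a simple counting test exposes immediately.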
And that leads to my second observation: it's really hard to avoid preconceived design. Although I wrote the test first, I already knew (or thought I knew) how I was going to implement the method. A pure TDD approach would have been to write separate tests for a variety of list sizes, and perhaps that would have led me directly to the “needs to be a power of two” solution.
Third: TDD is best accomplished in a single session, at least at the scale of a single method or group of methods. This one surprised me, because I thought that the tests would read like a story, leading me back to the point I had stopped. In reality, I put the half-finished code aside while attending the TDD training, and when I picked it up again I puzzled over some of the decisions that I'd made along the way.
On reflection, the storybook metaphor is actually the right one: you can't expect to put a novel down for a week and jump right back into it (at least, I can't). I think this may be one of the reasons for the XP practice of deleting uncommitted code before going home for the night. It's faster to start from scratch.
However, this observation concerns me, because it has direct bearing on the “code as specification” benefit of TDD. I find that the purist approach of small, limited test cases actually hinders comprehension, because the knowledge of how an object works is spread over many tests. I tend to write larger tests, such as
testElementCreation(), that explore multiple facets of a single behavior. Regardless of test size, this observation highlights the need for test clarity.
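To show what I mean by a larger test: the element class and its API below are hypothetical (my real one isn't shown here), and I've used plain assertions rather than a test framework to keep the sketch self-contained, but the shape is the point. One test probes value storage, tier bookkeeping, initial linkage, and argument validation in a single sitting:

```java
// Hypothetical element class: just enough API to illustrate the test style.
class SkipListElement {
    final int value;
    final SkipListElement[] next;   // one forward pointer per tier, zero-based

    SkipListElement(int value, int tier) {
        if (tier < 0) throw new IllegalArgumentException("negative tier");
        this.value = value;
        this.next = new SkipListElement[tier + 1];
    }

    int tier() { return next.length - 1; }
}

public class ElementCreationTest {

    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    // One larger test that explores several facets of creation at once,
    // rather than a separate micro-test per assertion.
    static void testElementCreation() {
        SkipListElement e = new SkipListElement(42, 3);
        check(e.value == 42, "stores its value");
        check(e.tier() == 3, "reports the tier it was created with");
        check(e.next.length == 4, "allocates one forward pointer per tier");
        for (SkipListElement p : e.next) {
            check(p == null, "new elements start unlinked");
        }
        boolean threw = false;
        try {
            new SkipListElement(1, -1);
        } catch (IllegalArgumentException expected) {
            threw = true;
        }
        check(threw, "rejects negative tiers");
    }

    public static void main(String[] args) {
        testElementCreation();
        System.out.println("testElementCreation passed");
    }
}
```

Reading that one method tells you most of what an element is, which is the comprehension benefit I'm after.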
Fourth observation: implementation tends to be a quantum behavior. There are points at which you make a dramatic step in functionality that's impossible to break into smaller pieces. Either it works as expected, or it doesn't work at all. The
SkipListManager.insert() method is an example: although I have a plethora of test scenarios, the code was pretty much complete (if slightly buggy) after the first. The only way to avoid this is to test something other than a SkipList.
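To illustrate why insert is all-or-nothing, here's a generic textbook skip list, not my SkipListManager (the names and the int-only API are invented for the sketch). The method has two halves, the top-down search and the per-tier splice, and until both exist in full, no test scenario passes at all:

```java
import java.util.Random;

// A generic textbook skip list, not the SkipListManager from this post.
public class SkipList {
    static final int MAX_TIER = 8;

    static class Node {
        final int value;
        final Node[] next;          // one forward pointer per tier
        Node(int value, int tier) {
            this.value = value;
            this.next = new Node[tier + 1];
        }
    }

    final Node head = new Node(Integer.MIN_VALUE, MAX_TIER);
    final Random rnd = new Random();

    int randomTier() {
        int tier = 0;
        while (tier < MAX_TIER && rnd.nextInt(2) == 0) tier++;
        return tier;
    }

    public void insert(int value) {
        // Half one, the top-down search: record, on every tier, the last
        // node whose successor is >= value.
        Node[] update = new Node[MAX_TIER + 1];
        Node cur = head;
        for (int t = MAX_TIER; t >= 0; t--) {
            while (cur.next[t] != null && cur.next[t].value < value) {
                cur = cur.next[t];
            }
            update[t] = cur;
        }
        // Half two, the splice: link the new node into every tier it occupies.
        Node node = new Node(value, randomTier());
        for (int t = 0; t < node.next.length; t++) {
            node.next[t] = update[t].next[t];
            update[t].next[t] = node;
        }
    }

    public boolean contains(int value) {
        Node cur = head;
        for (int t = MAX_TIER; t >= 0; t--) {
            while (cur.next[t] != null && cur.next[t].value < value) {
                cur = cur.next[t];
            }
        }
        Node candidate = cur.next[0];
        return candidate != null && candidate.value == value;
    }

    public static void main(String[] args) {
        SkipList list = new SkipList();
        for (int v : new int[] {5, 1, 4, 2, 3}) list.insert(v);
        System.out.println(list.contains(3));   // true
        System.out.println(list.contains(9));   // false
    }
}
```

Delete either loop and even the single-element scenario fails, which is exactly the quantum step I'm describing: additional tests exercise more scenarios, but they don't let you build the method incrementally.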
And this brings me to a point that Uncle Bob made during the training: “TDD won't give you quicksort.” That statement seems to invalidate the entire approach: if a minimal failing test won't get you closer to your goal, why do it? The answer, I think, is that dogma inculcates discipline, and the TDD purists recognize that the real route to better code is disciplined programmers. Test-driven development mandates discipline: you have to write the tests, and this will (hopefully) get you thinking about the ways your code is used.
But this leads to my last observation: there were too many cases where the tests told me that I was fine, but my intuition said otherwise. I've developed that intuition over more than 30 years of writing code; many TDD proponents have been at it longer. An inexperienced programmer won't have that intuition, and could easily find themselves at the end of a road with nowhere to turn.