Thursday, June 11, 2009

Reinforcement

My postings on SkipList finish with observations and concerns, but offer no conclusions and no resolution. Actually, when I originally posted the article it had a concluding paragraph, but I decided that it deserved its own posting.

So, the concern: test-first development, or even test-driven design, is a great thing for experienced programmers. It provides a sandbox in which to explore ideas and designs, with the safety (the fearlessness, in the words of TDD proponents) to change code without breaking it. In the hands of an inexperienced programmer, however, I believe it can turn into a blind alley, a process in which small rational decisions lead to an irrational result. I can easily imagine the dead mother counter arising from TDD.

So what's the solution? Well, one simple approach is to have an experienced programmer look at the tests. Code reviews are an accepted part of many development processes, but they usually focus on the mainline code. While that's certainly important, it seems less so when you have high test coverage (as you will in a pure TDD process). The tests, on the other hand, not only tell you whether the requirements were met, but also give insight into the mind of the programmer. If you see a series of tests that don't make sense, it's a flag to look at the mainline code: there's probably something in it that doesn't make sense either.
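To make that concrete, here's a hypothetical JUnit 4 sketch; the CustomerRegistry class and its counter are invented for this posting, not taken from the SkipList articles. Both tests pass, but the second one should make a reviewer pause:

    import static org.junit.Assert.*;
    import org.junit.Test;

    import java.util.HashSet;
    import java.util.Set;

    // hypothetical class under test: a registry that, oddly, counts attempts
    // to remove customers it never held -- invented purely for illustration
    class CustomerRegistry {
        private final Set<String> names = new HashSet<String>();
        private int missingRemovals = 0;

        public void add(String name)    { names.add(name); }
        public int size()               { return names.size(); }
        public int missingRemovals()    { return missingRemovals; }

        public void remove(String name) {
            if (!names.remove(name)) {
                missingRemovals++;   // this example's version of the dead mother counter
            }
        }
    }

    public class CustomerRegistryTest {

        @Test
        public void removeDeletesAnExistingCustomer() {
            CustomerRegistry registry = new CustomerRegistry();
            registry.add("Abigail");
            registry.remove("Abigail");
            assertEquals(0, registry.size());
        }

        @Test
        public void removingAMissingCustomerBumpsACounter() {
            // green, but why would a registry track failed removals at all?
            // a test like this is the flag to go read the mainline code
            CustomerRegistry registry = new CustomerRegistry();
            registry.remove("Beatrice");
            assertEquals(1, registry.missingRemovals());
        }
    }

The second test is green and the coverage report is happy, yet all it documents is that somebody decided failed removals were worth counting. That's exactly the kind of decision a test review surfaces.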

Another benefit of code review for tests is that it can happen early and often. Mainline code reviews generally happen after the code is done, because that's when you can see the logic flow. Test reviews, however, can happen at any time: even if your mainline method/class is half-written, you should still have a bunch of tests for it.
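As another invented sketch, assume an Interval class whose merge() method is still a stub. The tests below don't pass yet, but they already spell out the intended contract, and a reviewer can read them today:

    import static org.junit.Assert.*;
    import org.junit.Test;

    // hypothetical work in progress: merge() hasn't been implemented yet
    class Interval {
        private final int start;
        private final int end;

        public Interval(int start, int end) { this.start = start; this.end = end; }
        public int start() { return start; }
        public int end()   { return end; }

        public Interval merge(Interval other) {
            throw new UnsupportedOperationException("not yet implemented");
        }
    }

    public class IntervalTest {

        @Test
        public void mergeOfOverlappingIntervalsSpansBoth() {
            Interval merged = new Interval(1, 5).merge(new Interval(3, 9));
            assertEquals(1, merged.start());
            assertEquals(9, merged.end());
        }

        @Test
        public void mergeOfDisjointIntervalsIsRejected() {
            try {
                new Interval(1, 2).merge(new Interval(5, 6));
                fail("expected IllegalArgumentException");
            } catch (IllegalArgumentException expected) {
                // the intended contract: disjoint intervals don't merge
            }
        }
    }

Nothing here compiles into a working feature yet, but the design intent is already on display and open to discussion.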

Taken to the extreme (pun intended), you'll have a person sitting by your side and reviewing the tests as you write them: pair programming. I imagine that most readers are cringing after that last sentence. Programmers, because who wants someone looking over your shoulder, and managers, because pair programming is a great way to take two people and get the output of one.

But is it really? If your shop has code reviews as part of its process, have you ever considered how many person-hours go into one? As a case study from my own career, one shop scheduled 2 to 3 reviews a week, each with the presenter and 3 other developers. Each of those four people spent about an hour of preparation and an hour in the meeting, giving a total of 8 person-hours per review. And those reviews covered only the “important” code: perhaps 10% of the codebase.

So that's 16 to 24 person-hours, or 2-3 developer-days, per week spent on code reviews, with more time spent by the reviewed developer to clean up any issues. Of course, you can do less rigorous reviews, but where's the value in that? It's too easy to end up with a review “process” that's nothing more than a formality: a rubber stamp, as overworked developers take at best a cursory look at the code under review. And if you're in that situation, the time is well and truly wasted.

Pair programming is a practice espoused by many Agile processes, in particular Extreme Programming (XP). Now, to be fair, I've never worked in a shop that adopted XP, so you might reasonably ignore what follows. However, I have followed many of the XP practices (including limited pair programming) at other shops, so am not completely talking through my hat.

And looking at those practices, and also at shops that claim to practice Agile but never seem to reap any benefits, I've come to the conclusion that the real value of XP is that its practices reinforce each other. OK, this isn't a great insight: it's right there on the front page of extremeprogramming.org: “The rules and practices must support each other.”

But that message seems to get lost. Indeed, if you click the “How do I start” link on that page, you'll see a suggestion that you can add a few practices at a time. This is not what I remember when I was first exposed to XP: at the time, the prevailing opinion was that you should start with all dozen or so practices, and eliminate practices only once your team had enough experience to do so with reason.

Reinforcement is important. It's too easy, when you see a deadline looming, to say “there's no need to write tests for this code, it's simple.” And it's surprising how often that “simple” code ends up harboring a bug that doesn't turn up until you hit production. It's a lot harder to skip that test when the person sitting next to you raises an eyebrow and asks why.

Which brings me to what should be the strongest reinforcement of all: not disappointing your coworkers. But that's another posting.
