I'm going to interrupt the postings on SkipList, because the TDD class is over. Did I learn anything?
I came into class thinking that I had a fairly good understanding of TDD, albeit skeptical that it could directly drive design. I left thinking the same thing. I'm sure there were some pieces of information that I picked up without realizing it — it's impossible for that not to happen if you have a good instructor. And I was exposed to Fitnesse, a tool that will probably be more useful in a future career stop. But I came in test-infected, left test-infected, and didn't have a revelation in the middle. My thoughts may not have changed, but the class pushed me to think deeply about the subject again, and test those thoughts.
Where I thought the class was tremendously valuable was in interacting with Uncle Bob. He's a lot more pragmatic in person, admitting that there are places where test-driven design falls down (e.g., TDD will give you bubble sort, not quicksort). I can understand this difference in persona: written words have to stand on their own for all time, presentations are marketing (of ideas, if not of self), while the classroom is a place for discussion.
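To make that bubble-sort point concrete, here's a minimal sketch (the SortTest and Sorter names are mine, not anything from the class): a JUnit-style specification for a sort routine, and the simplest implementation that turns it green. The tests say what sorted output looks like; nothing in them pushes you toward quicksort.

    import org.junit.Test;
    import static org.junit.Assert.assertArrayEquals;

    public class SortTest {
        // The tests specify the *what*: output is ordered, duplicates and
        // empty input survive. They say nothing about the *how*.
        @Test
        public void sortsAnUnorderedArray() {
            assertArrayEquals(new int[] {1, 2, 3, 5}, Sorter.sort(new int[] {3, 1, 5, 2}));
        }

        @Test
        public void handlesDuplicatesAndEmptyInput() {
            assertArrayEquals(new int[] {2, 2, 7}, Sorter.sort(new int[] {7, 2, 2}));
            assertArrayEquals(new int[] {}, Sorter.sort(new int[] {}));
        }
    }

    // The simplest thing that passes is a bubble sort; choosing quicksort
    // instead is a design judgment the tests can't make for you.
    class Sorter {
        static int[] sort(int[] input) {
            int[] a = input.clone();
            for (int i = 0; i < a.length; i++) {
                for (int j = 0; j < a.length - 1 - i; j++) {
                    if (a[j] > a[j + 1]) {
                        int tmp = a[j];
                        a[j] = a[j + 1];
                        a[j + 1] = tmp;
                    }
                }
            }
            return a;
        }
    }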
The biggest revelation, however, came from observing my fellow students. At any given time, about half of them were working on other things: responding to email, reviewing defects, writing code for their current projects, and generally ignoring what was happening around them. And to be honest, during the last example of the first day I joined them: a minor crisis had appeared in my inbox over lunch, and I decided that it was more important to respond to that crisis than to work through the example — after all, I already knew this stuff, right?
Uncle Bob finished the class with a talk on what he saw as the future of TDD. He used an interesting example: in the 1970s there was a lot of argument about the value of structured programming, yet today every mainstream language uses structured constructs — “goto” is no longer considered harmful, because it no longer exists. His opinion is that 30 years from now we won't be discussing TDD either, because it will be standard practice.
I think there's a flaw in this reasoning: computer languages are created by language designers, made whole by compiler writers, and are in effect handed down from on high for the rest of us. Even Fortran and Cobol have evolved: while there may be dusty decks filled with gotos, code written yesterday (and there is some) uses structured constructs. TDD won't follow that path, unless something like Eiffel, with contracts built into the language itself, becomes mainstream, because TDD demands an ongoing commitment from individual programmers rather than from language designers.
Anything that requires commitment also has costs: what do I have to invest to travel this path (direct cost), and what do I have to give up (opportunity cost)? In an economics classroom, choices are easy: you add the direct and opportunity costs, and take the low-cost option. In real life, it's often hard to identify those costs — most people consider staying in the same place to have zero cost. And given that, there's no reason to change.
I can't remember when I became test-infected: I know that I was writing test programs to isolate defects some 20 years ago. As I read the testing literature that started to appear around 1999, a lightbulb went on in my head, and I tried writing my mainline code with tests. And learned that no, it didn't take me any longer, and yes, I found problems very early. In other words, that TDD was in fact the low-cost option.
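If you've never tried it, the cycle is simple enough to show in a toy sketch (RomanNumeral here is my own example, nothing from the class or from my projects): write a failing test that pins down the behavior you want, then write just enough code to make it pass, and let the next failing test drive the next increment.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Step 1: the test exists before the production code does.
    public class RomanNumeralTest {
        @Test
        public void convertsSmallNumbers() {
            assertEquals("I", RomanNumeral.from(1));
            assertEquals("IV", RomanNumeral.from(4));
            assertEquals("IX", RomanNumeral.from(9));
        }
    }

    // Step 2: just enough code to turn the test green.
    class RomanNumeral {
        static String from(int n) {
            String[] symbols = {"X", "IX", "V", "IV", "I"};
            int[] values     = {10,  9,    5,   4,    1};
            StringBuilder out = new StringBuilder();
            for (int i = 0; i < values.length; i++) {
                while (n >= values[i]) {
                    out.append(symbols[i]);
                    n -= values[i];
                }
            }
            return out.toString();
        }
    }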