I've never liked the term “computer science.” Science, and the scientific method, is all about increasing our knowledge of the unknown. Computer programs are deterministic, moving from state to state according to specific rules; there's nothing unknown about them. Well, at least until you introduce a bug into your program. And then the scientific method can be extremely useful, helping you to discover the real rules that your program is actually following.
For those who slept through 8th grade, here's a summary of the scientific method:
- Gather data.
- Come up with a hypothesis to explain how that data could exist.
- Design experiments to prove that hypothesis wrong (this is the hard part).
- Once you've run out of experiments that could prove it wrong, call your hypothesis a theory (8th grade science teachers are now upset).
That's right: science doesn't grow by proving hypotheses correct; it grows by proving them wrong. It's impossible to prove a hypothesis about the natural world correct, because someone might someday find data that disproves it. For example, Newtonian mechanics sufficed for the 17th and 18th centuries, but by the end of the 19th century scientists had gathered evidence showing that it didn't always apply. Einstein replaced Newtonian mechanics with relativity in the early 20th century, and that hypothesis remains with us today. But it might be overturned tomorrow, next year, or a million years from now.
So what does that have to do with debugging?
It's a way to break free from your assumptions. When following the scientific method, your goal is not to prove your assumptions correct, but to prove them wrong. And you devote all of your effort to that proof. Only when you've exhausted all attempts to prove your assumptions wrong are you allowed to form new assumptions.
The first assumption that you need to prove wrong, of course, is that your code is correct. Which is where the Small, Self-Contained Example Program comes in.
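Here's a minimal sketch of that mindset in code, assuming Python; the `dedupe()` helper, its deliberate bug, and every name below are hypothetical, invented for illustration. The hypothesis is “this function preserves order,” and the program's only job is to run experiments that try to find a counterexample:

```python
# A small, self-contained program that tries to disprove the assumption
# "my dedupe() helper preserves order." The helper, its bug, and all
# names here are hypothetical.
from itertools import permutations

def dedupe(items):
    """Remove duplicates. Hypothesis under test: order is preserved."""
    return list(set(items))  # bug: sets make no ordering promise

def dedupe_oracle(items):
    """Reference implementation: keep the first occurrence of each item."""
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def find_counterexample():
    """Run experiments designed to prove the hypothesis wrong."""
    for perm in permutations([3, 1, 4, 1, 5]):
        items = list(perm)
        if dedupe(items) != dedupe_oracle(items):
            return items  # the hypothesis is dead; here's the evidence
    return None  # every experiment failed to disprove it (for now)

if __name__ == "__main__":
    print(find_counterexample())
```

Note the asymmetry: a returned counterexample kills the hypothesis outright, while a `None` result earns only provisional confidence, exactly as in the scientific method above.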