Wednesday, August 10, 2011

Defining Done

It seems that a lot of Agile teams have a problem defining when a task or story is “done.” I've seen it on the teams I've worked with, occasionally leading to heated arguments at the end of a sprint.

For stories, the definition of done is simple: a story is done when the product owner says it is. That may horrify people who live and die by acceptance criteria, but the simple fact is that acceptance criteria are fluid. New criteria are often discovered after the story is committed, and although most of them should spur the creation of a new story, it's more likely that they simply get added to the existing one. And sometimes (not often enough), the product owner decides that the story is “good enough.”

At the task level, however, there are no acceptance criteria. Most teams that I've seen have worked out some measure of doneness involving test coverage and code reviews. But the problem with such criteria is that they don't actually speak to what the task is trying to accomplish. The code for a task could be 100% covered and peer reviewed, yet contribute nothing to the product. I think this is especially likely on teams where individual members go off and work on their own tasks, because peer reviews in that situation tend to be technical rather than holistic: as long as the code looks right, it gets a pass.
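
To make that concrete, here's a minimal sketch in Java with JUnit 4 (the DiscountCalculator class and its business rule are hypothetical, not from any particular project): the test executes every line of the production code, so a coverage tool reports 100%, yet it never verifies the behavior the task was supposed to deliver.

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class DiscountCalculatorTest {

        // Hypothetical production code: suppose the task was
        // "10% off orders over $50", but this implementation
        // never applies a discount at all.
        static class DiscountCalculator {
            double discountFor(double orderTotal) {
                return 0.0;
            }
        }

        @Test
        public void discountIsComputed() {
            double discount = new DiscountCalculator().discountFor(100.0);
            // Exercises 100% of the code above, so the coverage
            // metric is satisfied, but the assertion is too weak
            // to catch the missing business rule: it passes whether
            // or not a discount is ever applied.
            assertTrue(discount >= 0.0);
        }
    }

A purely technical review would likely approve this: the code compiles, the test passes, the coverage report is green. Only a reviewer asking what the task was meant to accomplish would notice that nothing pins down the discount rule itself.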

In my experience, the last chance for a holistic review is the sprint planning meeting. Unfortunately, by the time the team gets to tasking, it's often late in the day and everyone wants to go home. But I've found that simply asking “how do you plan to test this?” makes the task descriptions more exact and, not surprisingly, pushes the time estimates up.
