
Thursday, July 26, 2012

A better way to predict the project release date!

Many people commented on my previous post about Story Points, suggesting there are much better ways to estimate Agile projects. In this post I'll concentrate on one project where counting the number of stories/items completed per iteration was a better predictor of the release date and the scope delivered than Story Points. As I went around Europe presenting my talk, Story Points Considered Harmful, I always got very interesting questions. The most common:

But aren't Story Points much better at predicting the release date and scope delivered than the method you propose?

Despite all of the other reasons not to use Story Points, I decided to tackle this question specifically. And the results are in! Story Points are less accurate at predicting the release date and scope delivered than simply counting the number of stories (or items) delivered per iteration! This seems counter-intuitive, because we have less "detail" when we merely count the number of stories delivered. Many asked me:

But if you don't know the size of the work how can you predict when it is going to be done?

In God we trust, all others must bring data

Before speculating, let's look at the data! The case I want to present is a long project (24 iterations) for which we collected both Story Points and the number of items completed per iteration. I had one question, with two sub-questions, in mind:

Which metric (Story Points or # of items) was a more accurate predictor of the output of the whole project?
a) When we calculated based on the averages for the first 3 iterations
b) When we calculated based on the averages for the first 5(!) iterations

The reason this question is important is that, if we can predict the output of a project with high accuracy based on the first 3-5 sprints, we have a good case for stopping up-front estimation altogether! After all, investing 3-10 weeks in actual development delivers much more information about the product than spending 2-4 weeks in Requirements/Architecture/Design discussions (not to mention that those bore people out of their minds!)
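The extrapolation described above can be sketched in a few lines of code. This is my own illustration of the idea, not the author's exact calculation, and the throughput numbers below are purely hypothetical:

```python
def forecast_total(per_iteration, sample_size, total_iterations):
    """Project total output by extrapolating the average of the
    first `sample_size` iterations over the whole project."""
    sample = per_iteration[:sample_size]
    return sum(sample) / len(sample) * total_iterations

def forecast_error(per_iteration, sample_size):
    """Signed forecast error versus the actual total, as a fraction.
    Positive means the forecast overestimated; negative, underestimated."""
    actual = sum(per_iteration)
    predicted = forecast_total(per_iteration, sample_size, len(per_iteration))
    return (predicted - actual) / actual

# Hypothetical items-completed-per-iteration data for a 24-iteration
# project (illustrative numbers only, not the project in this post):
items = [5, 6, 5, 7, 6, 6, 5, 7, 6, 6, 7, 5,
         6, 7, 6, 5, 6, 7, 6, 6, 5, 7, 6, 6]

print(f"3-iteration forecast error: {forecast_error(items, 3):+.1%}")
print(f"5-iteration forecast error: {forecast_error(items, 5):+.1%}")
```

The same two functions work for a Story Points series: feed in points completed per iteration instead of item counts, and compare the two errors against the actual totals.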

So what were the results? First of all, a disclaimer: this is data from one single project; we do need more data to make a better case for not estimating at all! See below for how to contribute data to this project. The results are in: counting the number of items is a better predictor than Story Points-based estimation!

When we assessed the release date and amount of scope delivered based on only the first 3 iterations, Story Points overestimated the output by 20% (!) in this particular project, while counting the number of stories/items delivered underestimated the output by 4% (yes, four percent).

How about if we increase the sample and take into account the first 5 sprints? In this case the Story Points-based prediction was more accurate, but it still overestimated the delivered scope by 13%, while counting the number of stories/items still underestimated the output by 4% (yes, four percent).

In this project, the answer to the question "which metric is more accurate when compared to the actual output of the project?" is: counting the number of stories/items delivered at the end of each iteration is a better predictor of a project's output than estimating based on Story Points delivered!

Final note: how to contribute data to this study

The case I presented above is based on one single project. We currently have data for more than 20 projects and 14 different teams, but we need more data to investigate the claims I make here and in the previous post.

I call upon the community to share the data they have. I have made my contribution by sharing the data I have collected over the last years in a world-accessible spreadsheet that you can view and download here.

Please share the data for your projects in a Google Doc or similar world-accessible spreadsheet and leave a comment below with a link to the data. To learn more about how to better predict project outcomes, we need to be able to look at a large data set. Only then will we be able to either verify or refute the claim that Story Points are useful for our projects. Thank you all in advance for your contributions!

Photo credit: NASA's Marshall Space Flight Center @ flickr
