Tired of useless, boring planning meetings? Read on, I've got a solution for you.
In a meeting today I was graphically reminded of how some of the basic concepts in software development still escape many of us. Case in point: the meaning of capacity.
In many people's minds capacity still means "how many man-hours we have available for real work". This is plain wrong.
Let's decompose this assumption to see how wrong it is.
- First, implicit in this assumption is the idea that we can estimate exactly how many man-hours we have available for "real work". The theory goes like this: I have 3 people, the sprint is 2 weeks/10 days, so the effort available, and therefore the capacity, is 30 man-days. This is plain wrong! How? Let's see (a short sketch after this list puts numbers on it):
- Not all three people will be doing the same work. So, even if you have a theoretical maximum of 30 man-days available, not everyone can do every kind of work. If, for example, one person is an analyst, another a programmer, and the third a tester, that leaves us with effectively 10 man-days each of analysis, programming, and testing effort. Quite different from 30 man-days!
- Then there's the issue that no one can use 100% of their available time for work. There are meetings about the next sprint's content, there are interruptions, time to go to the toilet... You get the picture. In fact, it is impossible to predict how much time each person will actually spend on "real" work.
- Then there are those pesky things we call "dependencies". Sometimes someone on the team is idle because they depend on someone else (in or outside the team) and can't complete their feature. This leads to unpredictable delays, and therefore to ineffective use of the effort available in a Sprint.
- Finally (although other reasons could be listed), there's the implicit assumption that even if we knew the available effort perfectly, we could also know exactly how much effort a piece of work takes from beginning to end. This assumption is implicit in how we use the effort numbers: we schedule features against that available effort. The fact is that we humans are very bad at estimating work we have not done before, which in software is the case most of the time.
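To put some numbers on the first two points, here is a minimal sketch in Python. The figures are made up, and the 20% overhead fraction in particular is a pure assumption: in reality that slice is unpredictable, which is precisely the problem.

```python
# Naive capacity: people x sprint days
people = 3
sprint_days = 10
naive_capacity = people * sprint_days  # 30 "man-days"... supposedly

# But the roles are not interchangeable: 1 analyst, 1 programmer, 1 tester,
# so only a third of those man-days are programming effort.
programming_days = 1 * sprint_days  # 10 man-days of programming

# And overhead (meetings, interruptions, ...) eats an unknown slice;
# the 20% below is an assumption for illustration only.
overhead_fraction = 0.20
effective_programming_days = programming_days * (1 - overhead_fraction)

print(naive_capacity)              # 30
print(effective_programming_days)  # 8.0 -- a long way from 30
```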
Implications of this definition of capacity
There are some important implications to the argument above. If we recognize that capacity is closer to the traditional definition of Throughput (the amount of work actually completed per unit of time), then we understand that what we need to estimate is not just the size of a task plus the effort available. No, it's much more complex than that! We need to estimate the impact of dependencies, errors, meetings, etc. on the utilization of the effort available.

Let me illustrate how complex this problem is. If you want to empty a 10-liter tank attached to a pipe, you will probably want to know how much water can flow through the pipe in 1 minute (or some similar length of time) and then calculate how long it takes to empty the tank completely. Example: if 1 liter flows through the pipe per minute, it will take 10 minutes to empty a 10-liter tank. Easy, no?
Well, what if you now had to guess the time to empty the same tank, but instead of being told that 1 liter of water flows through the pipe each minute, you were given:
- The diameter of the pipe
- The material the pipe is made of
- The viscosity of the liquid in the tank
- The probability of obstacles in the pipe that could impede the flow of the liquid
- Turbulence equations that allow you to calculate the flow in the presence of an obstacle
Get the point? In software we are in the second situation! We are expected to calculate capacity (which is actually throughput) from a huge list of variables! How silly is that?!
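For contrast, the first situation takes two lines to compute. A minimal sketch in Python of the throughput-based version:

```python
tank_liters = 10
flow_liters_per_minute = 1  # the one measured number you actually need

minutes_to_empty = tank_liters / flow_liters_per_minute
print(minutes_to_empty)  # 10.0
```

One measured throughput number replaces the whole list of variables above, and velocity plays exactly that role in software.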
For a better planning and estimating framework
The fact is that the solution to the capacity (and therefore planning) problem in software is much, much easier! Here's a blow-by-blow description (with a short code sketch after the list):
- Collect a list of features for your product (not all, just the ones you really want to work on)
- With the whole team, assess the features to make sure that none of them is "huge" (i.e., the team is clueless about what it is or how to implement it). If a feature is too large, split it in half (literally). Try to get all features to fit into a sprint (without spending a huge effort on this step).
- Spend about 3 sprints working on that backlog (it pays off to have shorter sprints!)
- After 3 sprints, look at your velocity (the number of features completed in each sprint) and calculate an average
- Use the average velocity to tell the Product Owner how long it will take to develop the product they want, based on the number of features available in the backlog
- Update the average expected velocity after each sprint
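Here is a minimal sketch of the whole calculation in Python, with made-up sprint results and an assumed backlog size:

```python
# Made-up history: features completed in each of the first 3 sprints
completed_per_sprint = [4, 6, 5]

average_velocity = sum(completed_per_sprint) / len(completed_per_sprint)  # 5.0

backlog_size = 40  # features remaining in the backlog (assumed)

sprints_remaining = backlog_size / average_velocity
print(f"Average velocity: {average_velocity:.1f} features/sprint")
print(f"Forecast: about {sprints_remaining:.0f} more sprints")
```

After each sprint, append the new count to the history and recompute the average; that is all the last step amounts to.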
The theory behind it
There are a couple of principles behind this strategy. First, it's clear that if you don't change the team, the technology, or any other relevant environmental variable, the team will perform at a similar level (with occasional spikes or dips) over time. Therefore you can use historical velocity information to project a long-term velocity into the future!

Second, you still have to estimate, but you do that at the level of one Sprint. The reason is that even if we have the average velocity, an average does not apply to a single sprint but rather to a set of sprints. Therefore you still need to plan your sprint: identify possible bottlenecks, coordinate work with other people, etc.
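To illustrate the second principle with made-up numbers: the average holds across the set of sprints, yet no individual sprint matched it exactly.

```python
completed_per_sprint = [4, 6, 3, 7]  # made-up history; average is exactly 5.0
average_velocity = sum(completed_per_sprint) / len(completed_per_sprint)

# The long-run forecast is solid (20 features over 4 sprints),
# but no single sprint actually delivered 5 features --
# which is why each sprint still needs its own plan.
deviations = [v - average_velocity for v in completed_per_sprint]
print(average_velocity)  # 5.0
print(deviations)        # [-1.0, 1.0, -2.0, 2.0]
```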
The benefits
Finally, the benefits. The most important one is that you don't depend on estimates built on unknown assumptions to make your long-term plans. You can rely on data. Sure, sometimes the data will be wrong, but compare that with the alternative: when was the last time you saw a plan hold up? Data is your friend!

Another benefit is that you don't need to twist anyone's arm to produce the metrics needed (velocity and number of features in the backlog), because those metrics are generated automatically by the simple act of using Scrum.
All in all, not a bad proposition, and much simpler than working with unavoidably incorrect estimates that leave everybody screaming to be saved by the bell at the end of the planning meeting!