This blog has moved. Go to SoftwareDevelopmentToday.com for the latest posts.

Friday, April 30, 2010

Tired of useless boring planning meetings? Read on, I've got a solution for you


In a meeting today I was vividly reminded that some of the basic concepts in software development still escape many of us. Case in point: the meaning of capacity.

In many people's minds, capacity still means "how many man-hours we have available for real work". This is plain wrong.

Let's decompose this assumption to see how wrong it is.


  1. First, this assumption implies that we can estimate exactly how many man-hours we have available for "real work". The theory goes like this: I have 3 people and the sprint is 2 weeks (10 working days), so the effort available, and therefore the capacity, is 30 man-days. This is plain wrong! How? Let's see:

    1. Not all three people do the same work. So even though you have a theoretical maximum of 30 man-days available, not everyone can do every kind of work. If, for example, one person is an analyst, another a programmer and the third a tester, that leaves us with effectively 10 man-days each of analysis, programming and testing effort. Quite different from 30 interchangeable man-days!
    2. Then there's the issue that not 100% of each person's time can actually be used for work. There are meetings about the next sprint's content, interruptions, trips to the toilet... You get the picture. In fact it is impossible to predict how many hours each person will spend on "real" work.

  2. Then there are those pesky things we call "dependencies". Sometimes someone in the team is idle because they depend on someone else (in or out of the team) and can't complete their feature. This leads to unpredictable delays, and therefore ineffective use of the effort available in a Sprint.
  3. Finally (although other reasons can be found), there's the implicit assumption that even if we knew the available effort perfectly, we could also know exactly how long a piece of work takes from beginning to end. This is implicit in how we use the effort numbers: we schedule features against that available effort. The fact is that we humans are very bad at estimating work we have not done before, which in software is most of the time.
The main message here is: effort available (e.g. man-hours) is not the same as capacity. Capacity is the metric that tells us how many features a team or a group of teams can deliver in a Sprint, not the available effort!
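To make the distinction concrete, here is a minimal sketch (all numbers hypothetical) contrasting the naive man-day arithmetic with capacity measured as throughput, i.e. features actually delivered per sprint:

```python
# Naive "capacity": raw effort available, people x working days.
people = 3
sprint_days = 10
effort_available = people * sprint_days  # 30 man-days -- not capacity!

# Capacity as throughput: features actually completed per sprint
# (hypothetical history from past sprints).
features_completed = [4, 3, 5]
capacity = sum(features_completed) / len(features_completed)

print(effort_available)  # 30 -- says nothing about what gets delivered
print(capacity)          # 4.0 features per sprint -- the useful number
```

The first number is an input (effort); only the second is an observation of what the team actually delivers.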

Implications of this definition of capacity

There are some important implications of the above statement. If we recognize that capacity is closer to the traditional definition of throughput, then what we need to estimate is not just the size of a task plus the effort available. No, it's much more complex than that: we need to estimate the impact of dependencies, errors, meetings, etc. on the utilization of the effort available.

Let me illustrate how complex this problem is. If you want to empty a tank of 10 liters attached to a pipe, you will probably want to know how much water can flow through the pipe in 1 minute (or some similar length of time) and then calculate how long it takes to completely empty the tank. Example: if 1 liter flows through the pipe in 1 minute then it will take 10 minutes to empty a 10 liter tank. Easy, no?
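The tank arithmetic above, in code, to show how trivial the forecast is once you measure the flow directly:

```python
# Measure the flow once, then the forecast is a single division.
tank_litres = 10.0
flow_litres_per_min = 1.0  # measured directly at the pipe
minutes_to_empty = tank_litres / flow_litres_per_min
print(minutes_to_empty)  # 10.0 minutes
```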

Well, what if you now try to guess the time to empty the same tank, but instead of being told that 1 liter of water flows through the pipe each minute, you are given:

  • Diameter of the pipe
  • Material the pipe is made of
  • Viscosity of the liquid in the tank
  • Probability of obstacles in the pipe that could impede the flow of the liquid
  • Turbulence equations that let you calculate the flow in the presence of an obstacle

Get the point? In software we are in the second situation! We are expected to calculate capacity (which is actually throughput) given a huge list of variables! How silly is that?!

For a better planning and estimating framework

The fact is that the solution for the capacity (and therefore planning) problem in software is much, much easier!

Here's a blow-by-blow description:

  • Collect a list of features for your product (not all, just the ones you really want to work on)
  • With the whole team, assess the features to make sure that none of them is "huge" (i.e. the team is clueless about what it is or how to implement it). If a feature is too large, split it in half (literally). Try to get all features to fit into a sprint (without spending huge effort on this step).
  • Spend about 3 sprints working on that backlog (it pays off to have shorter sprints!)
  • After 3 sprints look at your velocity (number of features completed in each sprint) and calculate an average
  • Use the average velocity to tell the Product Owner how long it will take to develop the product they want based on the number of Features available in the backlog
  • Update the average expected velocity after each sprint
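The steps above can be sketched in a few lines of code. The numbers are hypothetical; the point is that the forecast uses only two observed quantities, velocity and backlog size:

```python
import math

# Features completed in the first three sprints (hypothetical history).
completed_per_sprint = [5, 7, 6]
avg_velocity = sum(completed_per_sprint) / len(completed_per_sprint)  # 6.0

# Forecast for the Product Owner: sprints needed for the remaining backlog.
backlog_features = 42
sprints_needed = math.ceil(backlog_features / avg_velocity)  # 7 sprints

# After each sprint, append the new count and recompute the average.
completed_per_sprint.append(4)
avg_velocity = sum(completed_per_sprint) / len(completed_per_sprint)  # 5.5
```

Rounding up with `math.ceil` is deliberate: a backlog that needs "6.3 sprints" needs 7 sprints on the calendar.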
Does it sound simple? It is. But most importantly, I have run many experiments based on this simple idea, and I have yet to find a project where it would not apply. Small or big, it has worked on every project I've been involved in.

The theory behind

There are a couple of principles behind this strategy. First, if you don't change the team, the technology or any other relevant environmental variable, the team will perform at a similar level (with occasional spikes or dips) over time. Therefore you can use historical velocity to project long-term velocity into the future!
Second, you still have to estimate, but you do it at the level of a single Sprint. The reason is that even if we have the average velocity, an average applies to a set of sprints, not to any single sprint. Therefore you still need to plan your sprint: identify possible bottlenecks, coordinate work with other people, etc.

The benefits

Finally, the benefits. The most important one is that you don't depend on estimates built on unknown assumptions to make your long-term plans. You can rely on data. Sure, sometimes the data will be wrong, but compare that with the alternative: when was the last time you saw a plan hold up? Data is your friend!
Another benefit is that you don't need to twist anyone's arm to produce the metrics needed (velocity and the number of features in the backlog), because they fall out automatically from the simple act of using Scrum.

All in all, not a bad proposition, and much simpler than working with unavoidably incorrect estimates that leave everybody screaming to be saved by the bell at the end of the planning meeting!

Photo credits:
uggboy @ flickr
johnmcga @ flickr


Wednesday, April 28, 2010

Why do we keep on giving up any control over our project? It would be so easy to keep it...


I am often shocked by the comments I hear from supposedly very smart people. Today was no exception. I heard the following comment:
There is content we don't want to timebox, therefore there's no need to link it to any timeline...

-- Quote from someone that has direct responsibility over scope in a project (my emphasis)

This quote betrays a complete misunderstanding of the dynamics of software development, and a complete (albeit unintentional) ignorance of the market forces we need to deal with.

Here's my point: by scope-boxing a particular Feature, what we are doing is effectively giving up control of its size. Once the team is given the Feature, they will work on it until it is "perfect", which means we don't have a clue when it will be done, and therefore the schedule for that feature is completely unpredictable! Yes, sure, we will stop development on it at some point, but will that happen before it is too late?

The advantage of timeboxing the content for a Feature (the Feature must fit in a Sprint) is that we have a clear deadline at which point we evaluate whether the feature is ready to go to market! Without this constraint the team is left "alone" to decide when the feature is ready. But the team is the wrong actor to make that decision! The product owner should be doing that work, based on market intelligence, user knowledge, etc.

By scope boxing (as opposed to timeboxing) our features we are effectively giving up control of our projects!

Why is it so hard to understand this point? Where is my reasoning missing clarity?

Can you help?

Photo credit: danielygo @ flickr


Tuesday, April 20, 2010

Setting targets actually decreases your performance. Don't believe me? See this video...

Many readers of this blog have probably faced this situation in the past. When I was adopting Scrum in a local company, I was faced with the tyranny of setting targets. Targets and bonuses were part of that company's HR policies. "This is how we motivate people", as the saying goes.

Now picture this. When we started adopting Scrum we were, as expected, also adopting timeboxed software development (a key part of Scrum). But the target was that we should always be on time (±10%).

So, get this: you get more money if you are on time (to motivate you to be on time, I guess), but your process is timeboxed! Easy money, as they say...

The moral of this story is that targets are useless! If you want to improve the system (R&D, for example) you need to manage the system, not set targets!

By adopting Scrum (effectively changing the system) we were able to be 100% on time! And that had nothing to do with the target setting, the system design (using Scrum) was the reason!

This is an insight that many managers lack today: as a manager you must design and manage the system. Setting arbitrary targets without providing a method (a system) is useless!

The video below, a conversation with John Seddon, is a very good explanation of how targets are useless and even counterproductive. I especially loved his points:
1. There's no reliable method for setting a target.
2. When you use targets, or any other arbitrary measure, you drive waste into your system.
3. Once you learn to use measures derived from the work, and you involve the people who do the work, you achieve a level of performance you would never have set as a target.



Monday, April 05, 2010

We all want more value. Fine! But what do you mean by value? A discussion on the meaning of "value"


In the agile community there has lately been a great deal of talk about "value" and why it is more important than "process".

I also believe that we should try to optimize for value in our software environment, be it a small ISV or a big software corporation. But what does value mean to you?

The post-agilists (people who believe they know better, and they may...) have started touting a new goal, a new Holy Grail for the software industry: "value". But very little is clear about what is now called "value".

I started looking around for definitions that may be useful in a discussion of "value" in the context of the software development industry. Here's what I found.

The TPS way


In the early days of TPS (the Toyota Production System), Taiichi Ohno defined value very simply: value is those things that the customer, observing our actions, would be willing to pay for. Example: if you write software for consumers but spend a considerable amount of time filling in paperwork that does not directly or indirectly improve the product, that is waste and therefore "non-value-add" work.
On the other hand, if you were developing software for the medical industry and spent a considerable amount of time filling in forms that would ultimately lead to the successful certification of your product by the authorities, that would be "value-add" work.

Ohno's definition is simple and useful, but it requires immense knowledge of your particular industry and company before it can be applied successfully. In Ohno's practice, managers would be asked to "stand in the circle" (a circle he would draw on the factory floor) for hours and hours. Ohno would then come back and ask the manager, "What did you observe?". If the manager did not have a satisfactory answer, Ohno would scold them and tell them to keep observing until they had found a way to improve the operation or the process so as to add more value to the product with less effort and waste.

This story illustrates a principle at the core of the TPS system: Deep Understanding. Inherent in this principle is another interesting concept: it is impossible to create a "formula" for value that applies to all situations. In other words, only you know what value is in the context of your work and company.

Now, that's quite different from what is being discussed back and forth in the agile community right now.

What others say



Let's see how other people, linked to the software industry, define value:

Some of the loudest proponents of the focus on "value" are Kai and Tom Gilb. Both have a lot of credibility, and Tom in particular has a long history of contributions to the improvement of our industry, so it is interesting to see what their approach describes as "value".

Kai and Tom, on their site, write a lot about "value"; in fact they talk about many different types of value. Here are some:

  • value requirements
  • value decisions
  • value delivery (as a verb/action)
  • product values
  • value product owner certification (yes, certification)
  • value management certification (again: yes, certification)


About requirements, they write that these should be "Clear, Meaningful, Quantified, Measurable and Testable".

These Value Requirements seem to be designed to structure and formalize the specification of "values".

Interestingly, they clearly assign Scrum a role that has (at least in their depiction) very little to do with "value management". See for yourself.

The emphasis is on "scales" or quantification of value:
Stakeholder Values are the scalar improvements a Stakeholder need or desire.
Product Qualities are the scalar attributes of a Product.


These are important additions to the effort to define value, but they are not really conclusive, nor do they contribute a clear definition of what value is. Ohno's definition was clear, but impossible to measure or understand without deep knowledge of one's business.


In VersionOne's blog Mike Cottmeyer argues that:
Lean tends to take a broader look at value delivery across the entire value stream... across the enterprise... Scrum by its very nature tends to look only at the delivery team.

(...) the Lean folks say that Scrum focuses on velocity and Lean focuses on value... "


This in itself is not really helpful in defining value, but it helps us make a distinction, if we agree with it: delivering more features may, or may not, deliver more value. They are not necessarily the same.

This argument is easy to make when you look at the market and see that many products with fewer features deliver the same (if not more) value to their customers. (Examples include the irreverent 37signals crew, as well as Nintendo and Apple.)

In this non-exhaustive research I came across another interesting definition of Value. This one by Chris Matts:
Value is profit


Although this may seem logical, I doubt that your customers would agree that the more profit you make, the better they feel. In fact, I would argue otherwise: if profit is your only measure of value, you will be driven to make decisions that effectively reduce the value delivered to your customer, even while your profits increase.
Now, it is important to understand that profit has to be part of the equation, but it is also important to understand that, from your customer's perspective, your profit is only valuable insofar as it allows you to continue delivering value to them, not as an absolute.

Conclusion


What can we conclude from this superficial evaluation of the discussion out there?

Well, for starters we can easily say that the discussion about "value" in software development is just about to get started.

We have some properties that help us understand and define value:

  1. Value is what the customer, observing our actions, would be willing to pay for (Ohno's definition).
  2. Gilb says that value must be translated into some form (Value Requirements), and that the key characteristic of these Value Requirements is that they are Quantifiable and Measurable.
  3. Mike Cottmeyer, on the other hand, states that Features (the stuff that gets developed, as described in requirements) are different from Value. This means that even if we had Value Requirements, they would not necessarily add value from the customer's point of view.


Mike's and Gilb's positions seem, on the face of it, to be contradictory, so we are finally left with a definition (Ohno's) that is useful but very hard to apply.

So, for the time being, value is something that we cannot easily define a priori, and defining it requires a deep understanding of the business we are in (so that we can "guess" what the customer would pay for).

I'd say that we are still very far away from a viable (mass-usable) definition of Value for our industry.

What do you think? Have you defined value in the context of your project/company? What was that and how did you reach that definition?

Photo credit: Will Lion @ flickr


 
(c) All rights reserved