This blog has moved. Go to SoftwareDevelopmentToday.com for the latest posts.

Tuesday, October 14, 2014

5 No Estimates Decision-Making Strategies


One of the questions that I and other #NoEstimates proponents hear quite often is: How can we make decisions on what projects we should do next, without considering the estimated time it takes to deliver a set of functionality?

Although this is a valid question, I know there are many alternatives to the assumptions implicit in this question. These alternatives - which I cover in this post - have the side benefit of helping us focus on the most important work to achieve our business goals.

Below I list 5 different decision-making strategies (aka decision-making models) that can be applied to our software projects without requiring a long-winded, and error-prone, estimation process up front.

What do you mean by decision-making strategy?

A decision-making strategy is a model or approach that helps you make allocation decisions (where to put more effort, or where to spend more time and/or money). However, I would add one more characteristic: a decision-making strategy that helps you choose which software project to start must also help you achieve the business goals that you define for your business. More specifically, a decision-making strategy is an approach to making decisions that follows your existing business strategy.

Some possible goals for business strategies might be:

  • Growth: growing the number of customers or users, growing revenues, growing the number of markets served, etc.
  • Market segment focus/entry: entering a new market or increasing your market share in an existing market segment.
  • Profitability: improving or maintaining profitability.
  • Diversification: creating new revenue streams, entering new markets, adding products to the portfolio, etc.

Other types of business goals are possible, and it is also possible to mix several goals in one business strategy.

Different decision-making strategies should be considered for different business goals. The 5 different decision-making strategies listed below include examples of business goals they could help you achieve. But before going further, we must consider one key aspect of decision making: Risk Management.

The two questions that I will consider when defining a decision-making strategy are:

  1. How well does this decision proposal help us reach our business goals?
  2. Does the risk profile resulting from this decision fit our acceptable risk profile?
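As an illustration only (not from the original post), this two-question filter can be sketched in a few lines of code: proposals whose risk profile does not fit are ruled out first, and the remainder are ranked by how well they advance the business goals. All proposal names and scores below are invented.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    name: str
    goal_fit: int    # 0-5: how well the proposal advances our business goals
    risk_fit: bool   # does the resulting risk profile stay acceptable?

def shortlist(proposals):
    """Drop proposals with an unacceptable risk profile, then rank the rest
    by how well they help us reach our business goals."""
    viable = [p for p in proposals if p.risk_fit]
    return sorted(viable, key=lambda p: p.goal_fit, reverse=True)

candidates = [
    Proposal("Migrate product to new platform", goal_fit=2, risk_fit=True),
    Proposal("Enter new market segment", goal_fit=5, risk_fit=True),
    Proposal("Rewrite billing in shiny tech", goal_fit=4, risk_fit=False),
]
for p in shortlist(candidates):
    print(p.name)
```

Note that no time estimate appears anywhere: the ranking is driven by goal fit and risk fit alone.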

Are you taking into account the risks inherent in the decisions made with those frameworks?

All decisions have inherent risks, and we must consider risks before elaborating on the different possible decision-making strategies. If you decide to invest in a new and shiny technology for your product, how will that affect your risk profile?

A different risk profile requires different decisions

Each decision we make has an impact on the following risk dimensions:

  • Failing to meet the market needs (the risk of what).
  • Increasing your technical risks (the risk of how).
  • Contracting or committing to work which you are not able to staff or assign the necessary skills (the risk of who).
  • Deviating from the business goals and strategy of your organization (the risk of why).

The categorization above is not the only one possible. However, it is very practical and maps well to decisions regarding which projects to invest in.

There may be good reasons to accept an increased risk exposure in one or more of these categories, as long as that increased exposure does not go beyond your acceptable risk profile. For example, you may accept a larger exposure to technical risks (the risk of how) if you believe that the project has a very low risk of missing market needs (the risk of what).

An example would be migrating an existing product to a new technology: you understand the market (the product has been meeting market needs), but you will take a risk with the technology with the aim to meet some other business need.

Aligning decisions with business goals: decision-making strategies

When making decisions regarding what project or work to undertake, we must consider the implications of that work for our business or strategic goals; therefore we must decide on the right decision-making strategy for our company at any given time.

Decision-making Strategy 1: Do the most important strategic work first

If you are starting to implement a new strategy, you should allocate enough teams and resources to the work that helps you validate and fine-tune the selected strategy. This might take the form of prioritizing work that helps you enter a new segment, or find a more valuable niche in your current segment, etc. The focus in this decision-making approach is: validating the new strategy. Note that the goal is not "implement the new strategy", but rather "validate the new strategy". The difference is fundamental: when trying to validate a strategy you will want to create short-term experiments that are designed to validate your decision, instead of planning and executing a large project from start to end. The best way to run your strategy validation work is to run those short-term experiments and re-prioritize your backlog of experiments based on the results of each one.

Decision-making Strategy 2: Do the highest technical risk work first

When you want to transition to a new architecture or adopt a new technology, you may want to start by doing the work that validates that technical decision. For example, if you are adopting a new technology to help you increase scalability of your platform, you can start by implementing the bottleneck functionality of your platform with the new technology. Then test if the gains in scalability are in line with your needs and/or expectations. Once you prove that the new technology fulfills your scalability needs, you should start to migrate all functionality to the new technology step by step in order of importance. This should be done using short-term implementation cycles that you can easily validate by releasing or testing the new implementation.

Decision-making Strategy 3: Do the easiest work first

Suppose you just expanded your team and want to make sure the members get to know each other and learn to work together. This may be due to a strategic decision to start a new site in a new location. Selecting the easiest work first will give the new teams an opportunity to get to know each other and establish the processes they need to be effective, while still delivering concrete, valuable working software in a safe way.

Decision-making Strategy 4: Do the legal requirements first

In medical software there are regulations that must be met. Those regulations affect certain parts of the work/architecture. By delivering those parts first you can start the legal certification for your product before the product is fully implemented, and later - if needed - certify the changes you may still need to make to the original implementation. This allows you to significantly improve the time to market for your product. A medical organization that successfully adopted agile used this project decision-making strategy to considerable business advantage: they were able to start selling their product many months ahead of the scheduled release. They were able to go to market earlier because they successfully isolated and completed the work necessary to certify the key functionality of their product. Rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market - gaining a significant advantage over their direct competitors.

Decision-making Strategy 5: Liability driven investment model

This approach is borrowed from an investment strategy (liability-driven investment, used notably by pension funds) that aims to tackle a problem similar to what every bootstrapped business faces: what work should we do now, so that we can fund the business in the near future? In this approach we make decisions with the aim of generating the cash flows needed to fund future liabilities.
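As a hypothetical sketch (all figures invented, not from the original post), the liability-driven model boils down to one check: does the expected cash flow from the selected work cover each known liability as it falls due?

```python
# Known upcoming liabilities: (month due, amount) - e.g. salaries, rent.
liabilities = [(1, 40_000), (2, 40_000), (3, 40_000)]

# Candidate work items: (name, first month it pays, expected monthly inflow).
candidates = [
    ("Support contract renewal", 1, 30_000),
    ("Small feature for existing customer", 1, 15_000),
    ("Speculative new product", 3, 60_000),
]

def covers_liabilities(selected):
    """Check that cumulative cash inflow meets each liability when it falls due."""
    cash = 0
    for month, amount in liabilities:
        cash += sum(inflow for _, start, inflow in selected if start <= month)
        cash -= amount
        if cash < 0:
            return False
    return True

print(covers_liabilities(candidates[:2]))   # True: near-term work funds the liabilities
print(covers_liabilities([candidates[2]]))  # False: speculative work pays off too late
```

The point of the model is the selection criterion: work is chosen for when its cash flows arrive, not for how long it is estimated to take.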

These are just 5 possible investment or decision-making strategies that can help you make project decisions, or even business decisions, without having to invest in estimation upfront.

None of these decision-making strategies guarantees success, but then again nothing does except hard work, perseverance and safe experiments!

In the upcoming workshops (Helsinki on Oct 23rd, Stockholm on Oct 30th) that Woody Zuill and I are hosting, we will discuss these and other decision-making strategies that you can take away and start applying immediately. We will also discuss how these decision-making models are as applicable to day-to-day decisions as they are to strategic decisions.

If you want to know more about what we will cover in our world-premiere #NoEstimates workshops don't hesitate to get in touch!

Your ideas about decision-making strategies that do not require estimation

You may have used other decision-making strategies that are not covered here. Please share your stories and experiences below so that we can start collecting ideas on how to make good decisions without the need to invest time and money into a wasteful process like estimation.


Tuesday, July 08, 2014

What is an Estimate?

If you don’t know what an estimate is, you can’t avoid using them. So here’s my attempt to define what an estimate is.
The "estimates" that I'm concerned about are those that can easily (by omission, incompetence or malice) be turned into commitments. I believe Software Development is better seen as a discovery process (even simple software projects). In this context, commitments remove options and force the teams/companies to follow a plan instead of responding to change.

Here's my definition: "Estimates, in the context of #NoEstimates, are all estimates that can be (on purpose, or by accident) turned into a commitment regarding project work that is not being worked on at the moment when the estimate is made."

The principles behind this definition of an estimate

In this definition I have the following principles in mind:
  • Delay commitment, create options: When we commit to a particular solution up front we forego possible other solutions and, as a consequence we will make the plan harder to change. Each solution comes with explicit and implicit assumptions about what we will tackle in the future, therefore I prefer to commit only to what is needed in order to validate or deliver value to the customer now. This way, I keep my options open regarding the future.
  • Responding to change over following a plan: Following a plan is easy and comforting, especially when plans are detailed and very clear: good plans. That’s why we create plans in the first place! But being easy does not make it right. Sooner or later we are surprised by events we could not predict, events that are no longer compatible with the plan we created upfront. Estimation up front makes it harder for us to change the plan because, as we define the plan in detail, we commit ourselves to following it, mentally and emotionally.
  • Collaboration over contract negotiation: Perhaps one of the most important Agile principles. Even when you invest time in creating a “perfect” contract there will be situations you cannot foresee. What do you do then? Hopefully by then you’ve established a relationship of trust with the other party. In that case, a simple conversation to review options and choose the best path will be enough. Estimation locks us down and tends to put people on the defensive when things don’t happen as planned. Leaving the estimation open and relying on incremental development, with a constant focus on validating the value delivered, will help both parties come to an agreement when things don’t go as planned - focusing on collaboration instead of justifying why an estimated release date was missed.
Here are some examples that fit the definition of estimates I outlined above:
  • An estimate of time/duration for work that is several days, weeks or months in the future.
  • An estimate of value that is not part of an experiment (the longer the time-frame the more of a problem it is).
  • A long term estimate of time and/or value that we can only validate after that long term is over.

How do you define Estimates in your work? Are you still doing estimates that fit the definition above? What is making it hard to stop doing such estimates? Share below in the comments what you think of this definition and how you would improve it.

This definition of an estimate was sent to the #NoEstimates mailing list a few weeks ago. If you want to receive exclusive content about #NoEstimates just sign up below. You will receive a regular email on the topics of #NoEstimates and Agile Software Development.


Picture credit: Pascal @ Flickr


Friday, June 27, 2014

Coming out of the closet - the life and adventure of a traditional project manager turned Agilist


I’m coming out of the closet today. No, not that closet. Another closet, the taboo closet in the Agile community. Yes, I was (and to a point still am) a control freak, traditional, command-and-control project manager. Yes, that’s right, you read it correctly. Here’s why this is important: in 2003, when I first started to consider Agile in any shape or form, I was a strong believer in the Church of Order. I did all the rites of passage: I did my Gantt charts, my PERT charts, my EVM charts and, of course, my certification.

I was a certified Project Manager by IPMA, the European cousin of the PMI.

I too was a control freak, order junkie, command and control project manager. And I've been clean for 9 years and 154 days.

Why did I turn to Agile? No, it wasn’t because I was a failed project manager, just ask anyone who worked with me then. It was the opposite reason. I was a very successful project manager, and that success made me believe I was right. That I had the recipe. After all, I had been successful for many years already at that point.

I was so convinced I was right, that I decided to run our first Agile project. A pilot project that was designed to test Agile - to show how Agile fails miserably (I thought, at that time). So I decided to do the project by the book. I read the book and went to work.

I was so convinced I was right that I wanted to prove Agile was wrong. Turned out, I was wrong.

The project was a success... I swear, I did not see that coming! After that project I could never look back. I found - NO! - I experienced a better way to develop software that spoiled me forever. I could no longer look back to my past as a traditional project manager and continue to believe the things I believed then. I saw a new land, and I knew I was meant to continue my journey in that land. Agile was my new land.

Many of you have probably experienced a similar journey. Maybe it was with Test-Driven Development, or maybe it was with Acceptance Testing, or even Lean Startup. All these methods have one thing in common: they represent a change in context for software development. This means: they fundamentally change the assumptions on which the previous methods were based. They were, in our little software development world a paradigm shift.

Test-driven development, acceptance testing, lean startup are methods that fundamentally change the assumptions on which the previous software development methods were based.

NoEstimates is just another approach that challenges basic assumptions of how we work in software development. It wasn’t the first, it will not be the last, but it is a paradigm shift. I know this because I’ve used traditional, Agile with estimation, and Agile with #NoEstimates approaches to project management and software delivery.

A world premiere?

That’s why Woody Zuill and I will be hosting the first ever (unless someone jumps the gun ;) #NoEstimates public workshop in the world. It will happen in Finland, of course, because that’s the country most likely to change the world of software development. A country of only five million people, yet with a huge track record of innovation: the first ever mobile phone throwing world championship was created in Finland. The first ever wife-carrying world championship was created in Finland. The first ever swamp football championship was created in Finland. And my favourite: the Air Guitar World Championship is hosted in Finland.

#NoEstimates being such an exotic approach to software development, it must, of course, have its world-premiere workshop in Finland as well! Woody Zuill (his blog) and I will host a workshop on #NoEstimates in the week of October 20th in Helsinki. So whether you love it or hate it, you can meet us both in Helsinki!

In this workshop we will cover topics such as:

  • Decision making frameworks for projects that do not require estimates.
  • Investment models for software projects that do not require estimates.
  • Project management (risk management, scope management, progress reporting, etc.) approaches that do not require estimates.
  • We will give you the tools and arguments you need to prove the value of #NoEstimates to your boss, and how to get started applying it right away.
  • We will discuss where we see #NoEstimates going and what are the likely changes to software development that will come next. This is the future delivered to you!

Which of these topics interests you the most? What topics would you like us to cover in the workshop? Tell us now and you have a chance to influence the topics we will cover.

Contact us at vasco.duarte@oikosofy.com and tell us. We will reply to all emails, even flame bombs! :)

You can receive exclusive content (not available on the blog) on the topic of #NoEstimates - just subscribe to the #NoEstimates mailing list below. As a bonus you will get my #NoEstimates whitepaper, where I review the background and reasons for using #NoEstimates.


Picture credit: John Hammink, follow him on twitter


Friday, May 02, 2014

Real stories of how estimates destroy value in Software Development


A friend shared with me a few stories about how Estimates are destroying value in his organization. He was kind enough to allow me to share these stories anonymously. Enjoy the reading, I know I did! :)

The story of the customer that wanted software, but got the wrong estimates instead

Once, one of our customers asked for a small feature and, according to this customer, it was quite clear from the beginning what he wanted. Alas, in my experience things often don’t go as planned, and therefore I added a good buffer on top of the needed estimates. Just like all developers I know do. Every day.

However, the story was about to get more interesting. A bit later, the sales team said those numbers were too high. "The customer will never accept those numbers!", they said. "And we really want this case."

After much negotiation we reduced our estimates. After all, we had added some buffer just in case. So, we reduced the estimates on the account, and I heard myself say (I should have known better): "Well, I guess that if everything goes well we could do it in that time."

My real surprise was that, a few days after the estimates were given to the sales team and the customer, I heard from the project manager that the company had agreed a Fixed Price Project with the customer. "That is madness", I said to the sales team. I felt betrayed as a developer! I had been asked for an estimate for the project, but I was tricked into accepting a Fixed Price Project through the estimation process! During the estimation I was not told that this would be a Fixed Price project.

The result was that the project was delivered late; we exceeded the estimates we gave. However, I did learn a valuable lesson: if you are forced to create estimates and the customer requires a fixed price, then never obey the wishes of the sales team: their targets are different. Or, alternatively, just do away with the estimation process altogether: just ask the sales team what number they want to hear ;-).

History repeated: never trust the sales team to handle estimates

A few months ago a customer called in and requested a small feature for their existing product. They were asking for an estimate. Part of the work was very clear and would be very easy to implement. But there was a part that wasn’t that clear. After some pre-analysis and discussions with another development team I knew roughly what would be needed to finish the job. Unlike the story above, this time I knew the customer had asked for a Fixed Price Project and therefore added some buffer – just to be on the safe side.

After carefully estimating the work with the team we sent the total estimate to the sales team. This estimate included everything that was needed: analysis, implementation, testing, deployment. I did not split the estimate into its component parts because I didn’t want the customer to think that testing would be optional (yes, it has happened before!).

However, the sales team split my estimate into the different roles. Big mistake! Magically, the estimate that the customer received was half a day shorter than the one we provided. Not a big difference, but I learned my lesson!

Surprise number two was about to hit me: The customer said they hadn't used some hours from a previous contract and that they should get a discount. Long story short, when the project was started we had to reduce our "cost" to 50% of the estimation I had originally given to the sales team. I learned that I should never trust the sales team with mathematical calculations!

We did manage to deliver the feature to our customer, thanks to some very aggressive scope management. This meant that we delivered the functionality that was requested, but it did not perform as well as it could have if we had been allowed to work on the feature from the start, without the estimation back-and-forth.

I did learn my lesson: if your sales team doesn’t know how to sell software projects, then before you give them any estimates, add even more buffer on top of the lessons learned from the previous story! ;-)

Estimation as an organizational smell

Certainly there are many organizations in the world that do not go through stories like these. However, many still do! For those organizations that are affected by stories like these I suggest that we look at different ways to manage software projects. #NoEstimates provides a clear alternative that has proved very successful in the past. For example, in the paper below I mention a story where a team would have given an estimate within 4% of the actual time it took the project to complete, if they had used #NoEstimates!
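The kind of forecast alluded to here is typically built from historical throughput: count the stories the team actually finished in past iterations and project forward from that, instead of estimating each story up front. A minimal sketch, with all numbers invented for illustration:

```python
# Historical throughput: stories actually completed in each past iteration.
completed_per_iteration = [6, 4, 7, 5, 6]
remaining_stories = 30

average = sum(completed_per_iteration) / len(completed_per_iteration)  # 5.6
worst = min(completed_per_iteration)                                   # 4

iterations_expected = remaining_stories / average     # ~5.4 iterations
iterations_pessimistic = remaining_stories / worst    # 7.5 iterations

print(f"Expected finish: ~{iterations_expected:.1f} iterations from now")
print(f"Pessimistic finish: ~{iterations_pessimistic:.1f} iterations from now")
```

Because the inputs are measurements of delivered work rather than guesses, the forecast improves automatically as more iterations complete.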

The Silver Lining

The story continues, however. Here's what my friend added at the end of his email:

Well, after all we have successfully changed the mindset of many in the company and in our team. We are already doing quite well.

Now I do things differently. The first thing I always discuss with the customer is how much money they would like to spend on the work they want to have done. This already gives me a good picture of whether our understandings of the amount of work necessary are even close, or more in the direction of “insane”.

Today I don't negotiate traditional contracts with my customers. Every job is now built on trust, communication and transparency.


Wednesday, September 18, 2013

The #NoEstimates questions, Part II


Last week we did not cover all the questions on #NoEstimates. Twitter gave us a lot more questions than could be listed last week, and even more interesting ones. Read on for the questions and my take on their meaning.

    How do I choose between options or opportunities (without having estimates regarding the cost)?
    • Choices are hard to make. When you are deep into the development process you inevitably come up with many ideas. But how should we choose between all those ideas without a detailed estimate? I believe this question implies that we have no way to assess the opportunity cost of selecting one of the options. I propose we challenge this assumption in a later post.
    How to win a government contract without estimates?
    • A very common question, also present implicitly in last week’s post when a question asked: How to practice #NoEstimates in companies where estimates are "mandatory"? When the legal framework (or contract) requires you to provide an estimate of the length of the work, you just have to do it, of course. But a more interesting question to tackle is: how do we use #NoEstimates to our own (and the customer’s) benefit even when we have to provide estimates before we can win the contract?
    What visual indicators do you have in #NoEstimates to show if you deviate from the "optimum"?
    • Are we going in the right direction? Are we progressing towards the goal? Do we have the right features? Will we be on time? There are many implicit questions behind this one. While exploring the answers to these questions, I hope we can generate “heuristics” that are relevant and actionable for people interested in the “end game” for a project or a delivery. Including heuristics based on visualization.
    Will #NoEstimates give me better predictability even when not using estimates?
    • The holy Grail: predictability. In my view, the temptation is to say: if you have time-boxes you don’t need more predictability, you already have it. However, I think this question goes beyond “delivering on time”. In fact, I believe that this question is about predicting “everything” (scope, quality, cost, length, …) We have to explore what #NoEstimates can help with, and what it can’t help with when it comes to looking into the future.
    If I just "play the game" with project managers and give estimates that I know aren't right, am I already doing #NoEstimates?
    • This question is perhaps related to the one above where we tackle how to use #NoEstimates in an environment where we are asked to provide estimates. It must be said that lying (providing information you know not to be true) is not right, deceiving is not right - no matter what method you use to manage your projects. I hope to clarify how my practice of #NoEstimates can help you tackle the estimation requests you get, but still benefit from the added information #NoEstimates can provide.
    Is there some analogy between SQL/NoSQL and Estimates/#NoEstimates?
    • Many people comment on the choice of tag (#NoEstimates) for this wider discussion on estimating and managing projects. It is true that if you look at the hash tag used and the previous SQL/NoSQL discussions there is a similarity in the surface. However, I think there are further similarities between these two movements. I’ll explore those in a later blog post.
    Does #NoEstimates aim to change existing social constructs (e.g. capitalism, budgets) to remedy the need to estimate?
    • As far as I understand this question explores the possible impact and consequences of #NoEstimates, including on the budgeting cycle and the business models. I don’t think #NoEstimates tries to tackle business models at all, in fact I think it aims to enable more business models. On the other hand, #NoEstimates does try to change the way we look at budgets - project budgets at least - with the aim to make those more transparent and reliable. This topic promises to be very interesting as it has generated a lot of discussion on twitter.
    Is it accurate to say #NoEstimates is for projects in Cynefin's complex domain?
    • For those of you that haven’t followed the Cynefin model or Complex Systems discussion, let me just clarify that Complex Systems are such that causality can only be attributed in retrospect, i.e. we cannot anticipate how the system will react to a certain stimulus, therefore we must experiment (Probe-Sense) and Respond to the results of those experiments. While I believe that #NoEstimates is a perfect fit for Complex environments, I don’t think it applies only in that domain. There are benefits to be gained in other domains: Cynefin/Complicated, Cynefin/Chaotic and even Cynefin/Simple.
    If we elect not to take on the risk of predicting an uncertain future, who do we think should, and why?
    • I believe this question is trying to answer: Who should take the risk for the (future) results of our actions? Obviously I think we - the people in the project - should take the responsibility for tackling the project in all its characteristics and risk. I don’t think #NoEstimates leads to a shift of the risk carried in any project, and in fact I believe #NoEstimates approaches can be much more effective at uncovering and tackling emergent project threats or risks. More on this later.
    Why would we choose not to judge likely outcomes, when worthwhile human endeavors necessarily carry risk?
    • My interpretation of this question is that it asks: Should we even look into the future at all? Equating #NoEstimates with #NotLookingIntoTheFuture is a red-herring in my view. I’ll try to explain in a later post how, in my view, #NoEstimates approaches can deliver a much better model of “the future” for any project.
    What is and isn't 'estimating' when people form and act on expectations of an uncertain future every day?
    • If I remember correctly somebody tweeted: “how can you say “don’t estimate!” when even the act of getting up in the morning requires estimating?”. Perhaps this comment comes from a misunderstanding of the domain we are talking about when we use the term #NoEstimates. It is important to state that we are using #NoEstimates in the context of estimating large software endeavors (i.e. projects), not simple tasks or even fixed/variable-costs calculations (e.g. servers, bandwidth, person-hours, etc.).
This is the list of questions that I’ve collected in the last few weeks on twitter. I hope to answer them here on the blog with your help. Please contribute to the prioritization of these (and last week’s) questions by leaving a comment below with the questions you’d like to tackle first.

Once prioritized, I’ll start answering those questions and we can have a longer discussion about each of them here. After all, Twitter is a great conversation starter, but we need blogs to develop the conversations further ;)

Photo Credit: Brad Hoc @ flickr


Wednesday, January 25, 2012

Story Points Considered Harmful - Or why the future of estimation is really in our past...

This article is the companion to a talk that myself and @josephpelrine gave at OOP 2012.


We have a lot to learn from our ancestors. One that I want to focus on for this post is Galileo.
Galileo was what we would call today a techie. He loved all things tech and was presented with an interesting technology that he could not put down. Through that work he developed optical technology to build first a telescope and later a microscope.
Through the use of the telescope and other approaches he came to realize and defend the Heliocentric view of the universe: the Earth was not the center of the Universe, but rather moved around the Sun.
This discovery caused no controversy until Galileo wrote it down and thereby discredited the view held by the Church at that time. The Church believed and defended that the Universe was neatly organized around the Earth and that everything moved around our planet.
We now know that Galileo was right and that the Church was - as it often tends to be with uncritical beliefs - wrong. We now say obviously the Earth is round and moves around the Sun. Or do we...

The Flat Earth Society

Actually, there are still many people around (curious word, isn't it?) the planet that do not even believe that the Earth is round! Don't believe me? Then check The Flat Earth Society.

The fact is that even today many people hold uncritical beliefs about how our world really works. Or about our projects, in the case of this post...

Estimation soup

We've all been exposed to various estimation techniques, in an Agile or traditional project. Here are some that quickly come to mind: Expert Estimation, Consensus Estimation, Function Point Analysis, etc. Then we have cost (as opposed to only time) estimation techniques: COCOMO, SDM, etc. And of course, the topic of this post: Story Point Estimation.
What do all of these techniques have in common? They all look towards the future!
Why is this characteristic important?

The Human condition

This characteristic is important because looking at the future is always difficult! We humans are very good at anticipating immediate events in the physical world, but in the software world what we estimate is neither immediate, nor does it follow any physical laws that we intuitively understand!
Take the example of a goal-keeper in a football (aka soccer) match. She can easily predict how a simple kick will propel the ball towards the goal, and she can do that with quite high accuracy (as proven by the typically low scores in today's football games). But even in football, if you face a player like Maradona, Beckham, or Cristiano Ronaldo it is very difficult to predict the trajectory of the ball. Some physicists have spent a considerable amount of time analyzing the trajectory of Beckham's amazing free kicks to try to understand how the ball moves and why. Obviously a goal-keeper does not have the computers or the time to analyze the trajectory of Beckham's free kicks, so Beckham ends up scoring quite a few goals that way. Even in football, where well-known physical laws always apply, it is sometimes hard to predict the immediate future!
The undisputed fact is that we humans are very bad at predicting the future.
But that is not all!

This is when things get Complex


Lately, and especially in the Agile community, we have been drawing on a new field of study: the Complexity Sciences.
This field tries to identify rules that help us navigate a world where even causality (cause and effect) is challenged.
One example you may have heard of is the Butterfly Effect: "where a small change at one place in a nonlinear system can result in large differences to a later state".
Complexity Sciences are helping us develop our own understanding of software development based on the theories developed in the last few years.
Scrum is a perfect example of a method that has used Complexity to inspire and justify its approach to many of the common problems we face in software development.
Scrum has used "self-organization" and "emergence" as concepts to explain why the Scrum approach works. But here's the catch.

Why did this just happen?


In a complex environment we don’t have discernible causality!
Sometimes this is due to delayed effects from our actions; more often, we attribute causality to events in the past when in fact no cause-effect relationship exists (Retrospective Coherence). In the field of estimation this manifests itself in a different way.
In order for us to be able to estimate we need to assume that causality exists (if I ask Tom for the code review, then Helen will be happy with my pro-activeness and give me a bonus. Or will she?) The fact is: in a Complex environment, this basic assumption of the existence of discernible Causality is not valid! Without causality, the very basic assumption that justifies estimation falls flat!

Solving the lack of internal coherence in Scrum


So, which is it? Do we have a complex environment in software development or not? If we do, then we cannot - at the same time - argue for estimation (and build a whole religion on it)! Conversely, if we are not in a complex environment, we cannot claim that Scrum - with its focus on solving problems in the complex domain - can work!
So then, the question for us is: can Story Point based estimation really be so important that it deserves to be promoted and publicized in all the Scrum literature?
Luckily we have a simple alternative that allows for the existence of a complex environment and solves the same problems that Story Points were designed to solve (but fail to).

The alternative prediction device

The alternative to Story Point estimation is simple: just count the number of Stories you have completed (as in "Done") in the previous iterations. They are the best indicator of future performance! Then use that information to project future progress. Basically, the best predictor of the future is your past performance!
Can it really be that simple? To test this approach I looked at data from different projects and tried to answer a few simple questions:

The Experiment

  • Q1: Is there sufficient difference between what Story Points and ’number of items’ measure to say that they don’t measure the same thing?
  • Q2: Which one of the two metrics is more stable? And what does that mean?
  • Q3: Are both metrics close enough so that measuring one (number of items) is equivalent to measuring the other (Story Points)?
I took data from 10 different teams in 10 different projects. I was not involved in any of the projects (I collected data from the teams directly or through requests for data in Agile-related mailing lists). Another point to highlight: this data came from companies of different sizes, as well as teams and projects of different sizes.
And here's what I found:
  • Regarding Question 1: I noticed a stable medium-to-high correlation between the Story Point estimates and the simple count of Stories completed (0.755, 0.83, 0.92, 0.51 (!), 0.88, 0.86, 0.70, 0.75, 0.88). With such high correlations it is likely that both metrics are signals of the same underlying information.
  • Regarding Question 2: The normalized data (normalized for Sprint/Iteration length) shows similar Standard Deviations, i.e. the two metrics are equally stable. This leads me to conclude that there is no significant difference in the stability of either metric, although in absolute terms the Story Point estimates vary much more between iterations than the number of completed/Done Stories.
  • Regarding Question 3: Both metrics (Story Points completed vs Number of Stories completed) seem to measure the same thing. So...
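For readers who want to run the Q1 check on their own data, here is a minimal sketch of the correlation calculation. The two per-sprint series below are invented for illustration; they are not the project data from the study above.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented example: Story Points completed vs Stories completed, per sprint.
points_per_sprint = [21, 34, 25, 30, 28, 19]
stories_per_sprint = [6, 9, 7, 8, 8, 5]
print(round(pearson(points_per_sprint, stories_per_sprint), 2))  # 0.98
```

A correlation this high for your own team would suggest, as in the study, that the two metrics carry the same information.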
At this point I was interested in analyzing the claims that justify the use of Story Points, as the data above does not seem to suggest any significant advantage of using Story Points as a metric. So I searched for the published justification for the use of Story Points and found a set of claims in Mike Cohn's book "User Stories Applied" (page 87, first edition):
  • Claim 1: The use of Story points allows us to change our mind whenever we have new information about a story
  • Claim 2: The use of Story points works for both epics and smaller stories
  • Claim 3: The use of Story points doesn’t take a lot of time
  • Claim 4: The use of Story points provides useful information about our progress and the work remaining
  • Claim 5: The use of Story points is tolerant of imprecision in the estimates
  • Claim 6: The use of Story points can be used to plan releases
Do these claims hold?

Claim 1: The use of Story points allows us to change our mind whenever we have new information about a story


Although there's no explanation of what "change our mind" means in the book, one can infer that the goal is not to have to spend too much time trying to be right. The reason is, of course, that if a story changes size slightly there's no impact on the Story Point estimate. But what if the story changes size drastically?
Well, at that point you would probably hold another estimation session, or break that story down into smaller-granularity stories to get a better picture of its actual size and impact on the project.
On the other hand, if we were to use a simple metric like the number of stories completed, we would be able to immediately assess the impact of the new items on the progress of the project.
As illustrated in the graph, if we have a certain number of stories to complete (80 in our example) and suddenly some 40 are added to our backlog (breaking down an Epic for example) we can easily see the impact of that in our project progress.
In this case, as we can see from the graph, a story changing its meaning or a large story being broken down into smaller stories has an impact on the project, and we can see that impact immediately in the progress graph.
This leads me to conclude that regarding Claim 1, Story Points offer no advantage over just simply counting the number of items left to be Done.
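The comparison in the 80-story example above can be sketched in a few lines. This is a minimal illustration with an assumed velocity; the numbers are the made-up ones from the example, not real project data.

```python
import math

def projected_sprints(stories_remaining, stories_done_per_sprint):
    # Forecast by simple story counting: backlog size / throughput, rounded up.
    return math.ceil(stories_remaining / stories_done_per_sprint)

velocity = 10  # assumed: stories Done per sprint
print(projected_sprints(80, velocity))        # 8 sprints before the Epic is split
print(projected_sprints(80 + 40, velocity))   # 12 sprints once 40 stories are added
```

The moment the 40 new stories land in the backlog, the projection moves from 8 to 12 sprints; no re-estimation session is needed to see the impact.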

Claim 2: The use of Story points works for both epics and smaller stories


Allowing for large estimates for items in the backlog (say a 100SP Epic) does help to account in some way for the uncertainty that large pieces of work represent.
However, the same uncertainty exists whatever method we use to measure progress. The fact is that we don't really know whether an Epic (say 100 SPs) is really equivalent to a similar-size aggregate of User Stories (say 100 one-SP stories). Conclusion: classifying a story in a 100 SP category adds no significant information, which means that calling something an "Epic" conveys about the same information as estimating it at 100 Story Points.

Claim 3: The use of Story points doesn’t take a lot of time

Having worked with Story Points for several years, this is not my experience. Although some progress has been made by people like Ken Power (at Cisco) with the Silent Grouping technique, the fact that we need such a technique should dispel any idea that estimating in SPs "doesn't take a lot of time". In fact, as anybody who has tried a non-trivial project knows, it can take days of work to estimate the initial backlog of a reasonably sized project.

Claim 5: The use of Story points is tolerant of imprecision in the estimates

Although you can argue that this claim holds - even if the book does not explain how - there's no data to justify the belief that Story Points do this better than merely counting the number of Stories Done. In fact, we can argue that counting the number of stories is even more tolerant of imprecision (see below for more details).

Claim 6: Story points can be used to plan releases

Fair enough. On the other hand, we can use any estimation technique to do this, so how would Story Points be better at this particular claim than any other technique? Also, as we will see when analyzing Claim 4, counting the number of Stories Done (and left to be Done) is a very effective way to plan a release (be patient, the example is coming up).

Claim 4: The use of Story points provides useful information about our progress and the work remaining

This claim holds true if, and only if, you have estimated all of the stories in the Backlog and go through the same process for each new story added to it. Even stories that will only be developed a few months or even a year later (in long projects) must be estimated! This approach is not very efficient (which in fact contradicts Claim 3).
Basing your progress assessment on the Number of Items completed in each Sprint is faster to calculate (number of items in the PBL / velocity in number of items Done per Sprint = number of Sprints left) and can be used to provide critical information about project progress. Here's a real-life example:

The real-life use of a simpler metric for project progress measurement

In a company I used to work at we had a new product coming to market. It was not a "first-mover" which meant that the barrier to entry was quite high (at least that was the belief from Product Management and Sales).
This meant that significant effort was made to come up with a coherent Product Backlog. The Backlog was reviewed by Sales and Pre-Sales (technical sales) people. All agreed, we really needed to deliver around 140 Stories (not points, Stories) to be able to compete.
As we were not the first in the market we had a tight market window. Failing to meet that window would invalidate the need to enter that market at all.
So, we started the project, and in the first Sprint we completed a single Story (maybe it was a big story - truth is I don't remember). Worse, in the same period another 20 stories were added to the Product Backlog. As expected, Product Management and Sales discovered a few more stories that were really a "must" and could not be left out of the product.
The team was gaining speed and in the second Sprint they got 8 stories to "Done". They were happy. At the same time the Product Manager and the Sales agreed to a cut-down version of the Product Backlog and removed some 20 stories from the Backlog.
After the third sprint the team had achieved velocities of 1 (first Sprint), 8 (second) and 8 (third). The fourth sprint was about to start and the pressure was high on the team and on the Product Manager. During the Sprint planning meeting the team committed to 15 new stories. This was a good number, as a velocity of 15 would make the stakeholders believe that the project could actually deliver the needed product. They would need to keep a velocity of 15 stories per sprint for 11 months. Could they make it?

The climax

As the fourth sprint started I made a bet with the Product Manager. I asked him how many items he believed that the team could complete and he said 15 (just as the team had committed to). I disagreed and said 10. How many items would you have said the team could complete?
I ask this question from the audience every time I tell this story. I get many different answers. Every audience comes up with 42 as a possible answer (to be expected given the crowds I talk to), but most say 8, 10, some may say 15 (very few), some say 2 (very few). The consensus seems to be around 8-10.
At this point I ask the audience why they would say 8-10 instead of 15 as the Product Manager for that team said. Obviously the Product Manager knew the team and the context better, right?
At the end of the fourth sprint the team completed 10 items, which, even though it was 25% more than they had done in previous sprints, was still very far from the velocity they needed to make the project a success. The management reflected on the situation and decided that the best course for the company was to cancel that product.
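The arithmetic behind that decision is trivial to reproduce. A sketch with the numbers from the story above (sprint velocities of 1, 8, 8 and 10 completed stories, against a plan that required 15 per sprint):

```python
velocities = [1, 8, 8, 10]          # stories Done in sprints 1 through 4
required = 15                       # velocity the market-window plan depended on

avg = sum(velocities) / len(velocities)   # 6.75 stories per sprint overall
recent = sum(velocities[1:]) / 3          # ~8.67, ignoring the slow first sprint
print(avg < required, recent < required)  # True True: the plan is not credible
```

Either way you slice the data, the observed throughput was far below the required one; that is what the management saw.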

Story Points Myth: Busted!

That company made that extremely hard decision based on data; not speculation from Project Managers, not some bogus estimate produced with whatever technique. Real data. They looked at the data available to them and decided to cancel the project 10 months before its originally planned release. The project had a team of about 20 people, so canceling it saved the company 200 man-months of investment in a product they had no hope of getting out the door!
We avoided a death-march project and were able to focus on other more important products for the company's future. Products that now bring in significant amount of money!

OK, I get your point, but how does that technique work?

Most people will be skeptical at this point (if you've read this far you probably are too). So let me explain how this works out.
Don't estimate the size of a story further than this: when doing Backlog Grooming or Sprint Planning just ask: can this Story be completed in a Sprint by one person? If not, break the story down!
For large projects use a further level of abstraction: Stories fit into Sprints, therefore Epics fit into meta-Sprints (for example: meta-Sprint = 4 Sprints). Ask the same question of Epics that you ask of Stories (can one team implement this Epic in half a meta-Sprint, i.e. 2 Sprints?) and break them down if needed.

By continuously harmonizing the size of the Stories/Epics you are creating a distribution of the sizes around the median:


If the size of the stories follows a roughly normal distribution, then for the purpose of long-term projections (remember: this only applies to the long term, i.e. more than 3 sprints into the future) you can treat all stories as being the same size, and therefore measure progress by counting the number of items completed per Sprint.
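A quick way to convince yourself of this is a small simulation. In the sketch below the size distribution and the sprint capacity are invented, not measured: stories of varied (but harmonized) sizes are pulled through fixed-capacity sprints, and the per-sprint item count settles into a usable long-term average.

```python
import random

random.seed(1)
# 120 stories whose sizes cluster around a median of 3 (post-splitting).
sizes = [random.choice([1, 2, 2, 3, 3, 3, 4, 4, 5]) for _ in range(120)]

capacity = 24                       # effort units one sprint can absorb
done_counts, backlog = [], list(sizes)
while backlog:
    spent, done = 0, 0
    # Complete stories until the next one no longer fits in this sprint.
    while backlog and spent + backlog[0] <= capacity:
        spent += backlog.pop(0)
        done += 1
    done_counts.append(done)

avg_items = sum(done_counts) / len(done_counts)
print(len(done_counts), round(avg_items, 1))  # sprints taken, items per sprint
```

Even though individual sprints complete different numbers of items, the average item count is a stable predictor of when the backlog empties, without any per-story size estimate.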

Final words

As with all techniques this one comes with a disclaimer: you may not see the same effects that I report in this post. That's fine. If that is the case please share the data you have with me and I'm happy to look at it.
My aim with this post is to demystify estimation in Agile projects. The fact is: the data we have available (see above) does not allow us to accept any of Mike Cohn's claims regarding the use of Story Points as a valid/useful estimation technique, therefore you are better off using a much simpler technique! Let me know if you find an even simpler one!

Oh, and by the way: stop wasting time trying to estimate a never ending Backlog. There's no evidence that that will help you predict the future any better than just counting the number of stories "Done"!

How do I apply #NoEstimates to improve estimation? Here's how...


Photo credit: write_adam @ flickr


at 11:08 | 54 comments

Friday, April 30, 2010

Tired of useless boring planning meetings? Read on, I've got a solution for you


In a meeting today I was graphically reminded of how some of the basic concepts in software development still escape many of us. Case in point, the meaning of capacity.

In many people's minds capacity still means "how many man-hours we have available for real work". This is plain wrong.

Let's decompose this assumption to see how wrong it is.


  1.  First, implicit in this assumption is the idea that we can estimate exactly how many man-hours we have available for "real work". The theory goes like this: I have 3 people and the sprint is 2 weeks/10 days, so the effort available, and therefore the capacity, is 30 man-days. This is plain wrong! How? Let's see:

    1.  Not all three people will be doing the same work. Even if you have a theoretical maximum of 30 man-days available, not everyone can do every kind of work. If, for example, one person is an analyst, another a programmer, and the third a tester, that leaves us with effectively 10 man-days each of analysis, programming, and testing effort. Quite different from 30 man-days!
    2. Then there's the issue that not 100% of each person's available time can actually be used for work. There are meetings about next sprint's content, interruptions, time to go to the toilet... You get the picture. In fact it is impossible to predict how much time each person will spend on "real" work.

  2. Then there are those pesky things we call "dependencies". Sometimes someone in the team is idle because they depend on someone else (in or out of the team) and can't complete their feature. This leads to unpredictable delays, and therefore ineffective use of the effort available for a Sprint.
  3.  Finally (although other reasons can be found) there's the implicit assumption that, even if we could know the amount of effort available perfectly, we can know exactly how long some piece of work takes from beginning to end. This is implicit in how we use the effort numbers: we schedule features against the available effort. The fact is that we (humans) are very bad at estimating something we have not done before, which is the case in software most of the time.
The main message here is: effort available (e.g. man-hours) is not the same as capacity. Capacity is the metric that tells us how many features a team or a group of teams can deliver in a Sprint, not the available effort!

Implications of this definition of capacity

There are some important implications of the above statement. If we recognize that capacity is closer to the traditional definition of Throughput, then we understand that what we would need to estimate is not just the size of a task plus the effort available. No, it's much more complex than that! We would need to estimate the impact of dependencies, errors, meetings, etc. on the utilization of the available effort.

Let me illustrate how complex this problem is. If you want to empty a tank of 10 liters attached to a pipe, you will probably want to know how much water can flow through the pipe in 1 minute (or some similar length of time) and then calculate how long it takes to completely empty the tank. Example: if 1 liter flows through the pipe in 1 minute then it will take 10 minutes to empty a 10 liter tank. Easy, no?

Well, what if you now try to guess the time to empty the same tank, but instead of being told that 1 liter of water flows through the pipe each minute, you are given:

  • Diameter of the pipe
  • Material that the pipe is built in
  • Viscosity of the liquid in the tank
  • Probability of obstacles existing in the pipe that could impede the flow of the liquid
  • Turbulence equations that allow you to calculate flow when in the presence of an obstacle

Get the point? In software we are in the second situation! We are expected to calculate capacity (which is actually throughput) given a huge list of variables! How silly is that?!

For a better planning and estimating framework

The fact is that the solution for the capacity (and therefore planning) problem in software is much, much easier!

Here's a blow-by-blow description:

  • Collect a list of features for your product (not all, just the ones you really want to work on)
  • With the whole team, assess the features to make sure that none of them is "huge" (i.e. the team is clueless about what it is or how to implement it). If a feature is too large, split it in half (literally). Try to get all features to fit into a sprint (without spending a huge effort on this step).
  • Spend about 3 sprints working on that backlog (it pays off to have shorter sprints!)
  • After 3 sprints look at your velocity (number of features completed in each sprint) and calculate an average
  • Use the average velocity to tell the Product Owner how long it will take to develop the product they want based on the number of Features available in the backlog
  • Update the average expected velocity after each sprint
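The bullet list above boils down to a few lines of code. A minimal sketch of the loop, where the backlog size and the sprint results are made-up illustration data:

```python
import math

backlog = 90                        # features currently in the backlog (assumed)
velocities = []                     # features completed per sprint so far

for done in [7, 9, 8]:              # invented results of the first three sprints
    velocities.append(done)
    backlog -= done
    avg = sum(velocities) / len(velocities)   # average velocity so far
    print(f"after sprint {len(velocities)}: ~{math.ceil(backlog / avg)} sprints left")
```

After every sprint the projection is refreshed automatically from data the team already produces; no estimation meeting required.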
Does it sound simple? It is. But most importantly, I have run many experiments based on this simple idea, and I'm yet to find a project where it would not apply. Small or big, it has worked for every project I've been involved in.

The theory behind

There are a couple of principles behind this strategy. First, it's clear that if you don't change the team, the technology, or any other relevant environmental variable, the team will perform at a similar level (with occasional spikes or dips) over time. Therefore you can use historical velocity information to project long-term velocity into the future!
Second, you still have to estimate, but you do that at the level of one Sprint. The reason is that even if we have the average velocity, an average does not apply to a single sprint but rather to a set of sprints. Therefore you still need to plan your sprint, to identify possible bottlenecks, coordinate work with other people, etc.

The benefits

Finally the benefits. The most important benefit here is that you don't depend on estimations based on unknown assumptions to make your long term plans. You can rely on data. Sure, sometimes the data will be wrong, but compare with the alternative: when was the last time you saw a plan hold up? Data is your friend!
Another benefit is that you don't need to twist anyone's arm to produce the metrics needed (velocity and number of features in backlog), because those metrics are calculated automatically by the simple act of using Scrum.

All in all not a bad proposition and much simpler than working with unavoidably incorrect estimates that leave everybody screaming to be saved by the bell at the end of the planning meeting!

Photo credits:
uggboy @ flickr
johnmcga @ flickr


at 21:08 | 0 comments

 
(c) All rights reserved