Software Development Today

Tuesday, July 08, 2014

What is an Estimate?

If you don’t know what an estimate is, you can’t avoid using them. So here’s my attempt to define what an estimate is.
The "estimates" that I'm concerned about are those that can easily (by omission, incompetence or malice) be turned into commitments. I believe Software Development is better seen as a discovery process (even simple software projects). In this context, commitments remove options and force the teams/companies to follow a plan instead of responding to change.

Here's my definition: "Estimates, in the context of #NoEstimates, are all estimates that can be (on purpose, or by accident) turned into a commitment regarding project work that is not being worked on at the moment when the estimate is made."

The principles behind this definition of an estimate

In this definition I have the following principles in mind:
  • Delay commitment, create options: When we commit to a particular solution up front we forgo other possible solutions and, as a consequence, make the plan harder to change. Each solution comes with explicit and implicit assumptions about what we will tackle in the future, therefore I prefer to commit only to what is needed in order to validate or deliver value to the customer now. This way, I keep my options open regarding the future.
  • Responding to change over following a plan: Following a plan is easy and comforting, especially when plans are detailed and very clear: good plans. That’s why we create plans in the first place! But being easy does not make it right. Sooner or later we are surprised by events we could not predict, events that are no longer compatible with the plan we created up front. Estimating up front makes it harder for us to change the plan because, as we define the plan in detail, we commit ourselves to following it, mentally and emotionally.
  • Collaboration over contract negotiation: Perhaps one of the most important Agile principles. Even when you invest time in creating a “perfect” contract there will be situations you cannot foresee. What do you do then? Hopefully by then you’ve established a relationship of trust with the other party. In that case, a simple conversation to review options and choose the best path will be enough. Estimation locks us down and tends to put people on the defensive when things don’t happen as planned. Leaving the estimation open and relying on incremental development, with a constant focus on validating the value delivered, will help both parties come to an agreement when things don’t go as planned, focusing on collaboration instead of justifying why an estimated release date was missed.
Here are some examples that fit the definition of Estimates that I outlined above:
  • An estimate of time/duration for work that is several days, weeks or months in the future.
  • An estimate of value that is not part of an experiment (the longer the time-frame the more of a problem it is).
  • A long term estimate of time and/or value that we can only validate after that long term is over.

How do you define Estimates in your work? Are you still doing estimates that fit the definition above? What is making it hard to stop doing such estimates? Share below in the comments what you think of this definition and how you would improve it.

This definition of an estimate was sent to the #NoEstimates mailing list a few weeks ago. If you want to receive exclusive content about #NoEstimates just sign up below. You will receive a regular email on the topics of #NoEstimates and Agile Software Development.


Picture credit: Pascal @ Flickr


at 06:00 | 1 comments links to this post
RSS link

Bookmark and Share

Tuesday, July 01, 2014

What is Capacity in software development? - The #NoEstimates journey


I hear this a lot in the #NoEstimates discussion: you must estimate to know what you can deliver for a certain price, time or effort.

Actually, you don’t. There’s a different way to look at your organization and your project. Organizations and projects have an inherent capacity; that capacity is the result of many different variables, not all of which can be predicted. Although you can add more people to a team, you don’t actually know what the impact of that addition will be until you have some data. Estimating the impact is not going to help you, if we are to believe the track record of the software industry.

So, for me the recipe to avoid estimates is very simple: Just do it, measure it and react. Inspect and adapt - not a very new idea, but still not applied enough.

Let’s make it practical. How many of these stories or features is my team or project going to deliver in the next month? Before you can answer that question, you must find out how many stories or features your team or project has delivered in the past.

Look at this example.

How many stories is this team going to deliver in the next 10 sprints? The answer to this question is the concept of capacity (aka Process Capability). Every team, project or organization has an inherent capacity. Your job is to learn what that capacity is and limit the work to capacity! (Credit to Mary Poppendieck (PDF, slide 15) for this quote).

Why is limiting work to capacity important? That’s a topic for another post, but suffice it to say that adding more work than the available capacity causes many stressful moments and sleepless nights, while having less work than capacity might get you and a few more people fired.

My advice is this: learn what the capacity of your project or team is. Only then will you be able to deliver, reliably and with quality, the software you are expected to deliver.

How to determine capacity?

Determining the capacity or capability of a team, organization or project is relatively simple. Here's how:

  • 1. Collect the data you already have:
    • If using timeboxes, collect the stories or features delivered (*) in each timebox.
    • If using Kanban/flow, collect the stories or features delivered (*) in each week, or each period of 2 weeks, depending on the length of the release/project.
  • 2. Plot a graph of the number of stories delivered in each of the past N iterations, to determine whether your System of Development (slideshare) is stable.
  • 3. Determine the process capability by calculating the upper (average + 1*sigma) and lower (average - 1*sigma) limits of variability.
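The steps above fit in a few lines of code. Here is a minimal sketch using Python's statistics module; the delivery counts are hypothetical numbers, so plug in your own data:

```python
from statistics import mean, pstdev

# Stories delivered in each of the last 10 sprints (hypothetical numbers)
delivered = [8, 12, 9, 11, 7, 10, 13, 9, 8, 11]

avg = mean(delivered)
sigma = pstdev(delivered)  # population standard deviation

# Process capability: the band the next sprints are likely to fall into
lower = avg - sigma
upper = avg + sigma

print(f"average: {avg:.1f} stories/sprint")
print(f"capacity band: {lower:.1f} to {upper:.1f} stories/sprint")
```

With a stable band like this, the answer to "how many stories in the next 10 sprints?" becomes a range (roughly 10 × lower to 10 × upper) instead of a single-point guess.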

At this point you know what your team, organization or process is likely to deliver in the future. However, the capacity can change over time. This means you should regularly review the data you have and determine (see slideshare above) if you should update the capacity limits as in step 3 above.

(*): by "delivered" I mean something similar to what Scrum calls "Done". Something that is ready to go into production, even if the actual production release is done later. In my language delivered means: it has been tested and accepted in a production-like environment.

Note for the statisticians in the audience: Yes, I know that I am assuming a normal distribution of delivered items per unit of time. And yes, I know that the Weibull distribution is a more likely candidate. That's ok, this is an approximation that has value, i.e. gives us enough information to make decisions.

You can receive exclusive content (not available on the blog) on the topic of #NoEstimates, just subscribe to the #NoEstimates mailing list below. As a bonus you will get my #NoEstimates whitepaper, where I review the background and reasons for using #NoEstimates


Picture credit: John Hammink, follow him on twitter


Friday, June 27, 2014

Coming out of the closet - the life and adventure of a traditional project manager turned Agilist


I’m coming out of the closet today. No, not that closet. Another closet, the taboo closet in the Agile community. Yes, I was (and to a point still am) a control-freak, traditional, command-and-control project manager. Yes, that’s right, you read it correctly. Here’s why this is important: in 2003, when I first started to consider Agile in any shape or form, I was a strong believer in the Church of Order. I did all the rites of passage: I did my Gantt charts, my PERT charts, my EVM charts and, of course, my certification.

I was a certified Project Manager with IPMA, the European cousin of PMI.

I too was a control freak, order junkie, command and control project manager. And I've been clean for 9 years and 154 days.

Why did I turn to Agile? No, it wasn’t because I was a failed project manager, just ask anyone who worked with me then. It was the opposite reason. I was a very successful project manager, and that success made me believe I was right. That I had the recipe. After all, I had been successful for many years already at that point.

I was so convinced I was right, that I decided to run our first Agile project. A pilot project that was designed to test Agile - to show how Agile fails miserably (I thought, at that time). So I decided to do the project by the book. I read the book and went to work.

I was so convinced I was right that I wanted to prove Agile was wrong. Turned out, I was wrong.

The project was a success... I swear, I did not see that coming! After that project I could never look back. I found - NO! - I experienced a better way to develop software that spoiled me forever. I could no longer look back to my past as a traditional project manager and continue to believe the things I believed then. I saw a new land, and I knew I was meant to continue my journey in that land. Agile was my new land.

Many of you have probably experienced a similar journey. Maybe it was with Test-Driven Development, or maybe it was with Acceptance Testing, or even Lean Startup. All these methods have one thing in common: they represent a change in context for software development. This means: they fundamentally change the assumptions on which the previous methods were based. They were, in our little software development world a paradigm shift.

Test-driven development, acceptance testing, lean startup are methods that fundamentally change the assumptions on which the previous software development methods were based.

#NoEstimates is just another approach that challenges basic assumptions of how we work in software development. It wasn’t the first, it will not be the last, but it is a paradigm shift. I know this because I’ve used traditional, Agile with estimation, and Agile with #NoEstimates approaches to project management and software delivery.

A world premiere?

That’s why Woody Zuill and I will be hosting the first ever (unless someone jumps the gun ;) #NoEstimates public workshop in the world. It will happen in Finland, of course, because that’s the country most likely to change the world of software development. A country of only five million people, yet with a huge track record of innovation: the first ever mobile phone throwing world championship was created in Finland. The first ever wife-carrying world championship was created in Finland. The first ever swamp football championship was created in Finland. And my favourite: the Air Guitar World Championship is hosted in Finland.

#NoEstimates being such an exotic approach to software development, it must, of course, have its world-premiere workshop in Finland as well! Woody Zuill (his blog) and I will host a workshop on #NoEstimates in the week of October 20th in Helsinki. So whether you love it or hate it, you can meet us both in Helsinki!

In this workshop we will cover topics such as:

  • Decision making frameworks for projects that do not require estimates.
  • Investment models for software projects that do not require estimates.
  • Project management (risk management, scope management, progress reporting, etc.) approaches that do not require estimates.
  • We will give you the tools and arguments you need to prove the value of #NoEstimates to your boss, and how to get started applying it right away.
  • We will discuss where we see #NoEstimates going and what changes to software development are likely to come next. This is the future delivered to you!

Which of these topics interests you the most? What topics would you like us to cover in the workshop? Tell us now and you have a chance to affect the topics we will cover.

Contact us at vasco.duarte@oikosofy.com and tell us. We will reply to all emails, even flame bombs! :)

You can receive exclusive content (not available on the blog) on the topic of #NoEstimates, just subscribe to the #NoEstimates mailing list below. As a bonus you will get my #NoEstimates whitepaper, where I review the background and reasons for using #NoEstimates


Picture credit: John Hammink, follow him on twitter


Tuesday, June 24, 2014

Humans suck at statistics - how agile velocity leads managers astray

Humans are highly optimized for quick decision making: the so-called System 1 that Kahneman describes in his book "Thinking, Fast and Slow". One specific area of weakness for the average human is understanding statistics. A very simple exercise to see this is the coin-toss simulation.

Humans are highly optimized for quick decision making.

Get two people to run this experiment (or one computer and one person if you are low on humans :). One person throws a coin in the air and notes down the results. For each "heads" the person adds one to the total; for each "tails" the person subtracts one from the total. Then she graphs the total as it evolves with each throw.

The second person simulates the coin-toss by writing down "heads" or "tails" and adding/subtracting to the totals. Leave the room while the two players run their exercise and then come back after they have completed 100 throws.

Look at the graphs that each person produced. Can you detect which one was created by the real coin and which was "imagined"? Test your knowledge by looking at the graph below (don't peek at the solution at the end of the post). Which of these lines was generated by a human, and which by a pseudo-random process (computer simulation)?

One common characteristic of this exercise is that the real random walk, produced by actually throwing a coin in the air, contains longer streaks than the one simulated by the player. For example, the coin may generate a sequence of several consecutive heads or tails. No human (except you, after reading this) would write that down, because it would not "feel" random. We humans are bad at creating randomness and at understanding the consequences of randomness, because we are trained to see meaning and a theory behind everything.
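If you are short on volunteers, the coin-throwing half of the exercise can also be scripted. Here is a minimal sketch of the random walk; the seed and the 100-throw count are arbitrary choices, and the streak counter shows how long the runs of a (pseudo-)random process can get:

```python
import random

def random_walk(n_throws, seed=None):
    """Running total of a coin-toss game: +1 for heads, -1 for tails."""
    rng = random.Random(seed)
    totals, total = [], 0
    for _ in range(n_throws):
        total += 1 if rng.random() < 0.5 else -1
        totals.append(total)
    return totals

def longest_streak(totals):
    """Length of the longest run of identical throws behind a walk."""
    # Recover the individual +1/-1 throws from the running totals
    throws = [totals[0]] + [b - a for a, b in zip(totals, totals[1:])]
    best = run = 1
    for prev, cur in zip(throws, throws[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

walk = random_walk(100, seed=7)
print("final total after 100 throws:", walk[-1])
print("longest streak of identical throws:", longest_streak(walk))
```

Run it a few times with different seeds: streaks of five or more identical throws are common, even though few humans simulating the coin would dare to write one down.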

Take the velocity of the team. Did it go up in the latest sprint? Surely they are getting better! Or, it's the new person that joined the team, they are already having an effect! In the worst case, if the velocity goes down in one sprint, we are running around like crazy trying to solve a "problem" that prevented the team from delivering more.

The fact is that a team's velocity is affected by many variables, and its variation is not predictable. However, and this is the most important point, velocity will reliably vary over time. Or, in other words, it is predictable that the velocity will vary up and down with time.

The velocity of a team will vary over time, but around a set of values that are the actual "throughput capability" of that team or project. For us as managers it is more important to understand what that throughput capability is, rather than to guess frantically at what might have caused a "dip" or a "peak" in the project's delivery rate.

The velocity of a team will vary over time, but around a set of values that are the actual "throughput capability" of that team or project.

When you look at a graph of a team's velocity don't ask "what made the velocity dip/peak?", ask rather: "based on this data, what is the capability of the team?". This second question will help you understand what your team is capable of delivering over a long period of time and will help you manage the scope and release date for your project.

The important question for your project is not, "how can we improve velocity?" The important question is: "is the velocity of the team reliable?"
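One rough way to make "is the velocity reliable?" concrete is to check how tightly the sprint-by-sprint numbers cluster around their average. The velocities below are hypothetical, and the average ± 1 sigma band mirrors the process-capability idea described above; this is just one possible sketch of such a check:

```python
from statistics import mean, pstdev

# Velocity (story points) for the last 12 sprints -- hypothetical numbers
velocities = [21, 25, 19, 23, 27, 22, 20, 24, 26, 21, 23, 25]

avg = mean(velocities)
sigma = pstdev(velocities)
lower, upper = avg - sigma, avg + sigma

inside = sum(lower <= v <= upper for v in velocities)
print(f"throughput capability: {lower:.1f} to {upper:.1f} points/sprint")
print(f"{inside} of {len(velocities)} sprints fall inside that band")

# Low relative variation is one sign of a reliable (stable) system
cv = sigma / avg
print(f"coefficient of variation: {cv:.0%}")
```

If most sprints fall inside the band and the relative variation is modest, the system is stable enough to forecast from, and chasing the individual dips and peaks adds little.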

Picture credit: John Hammink, follow him on twitter

Solution to the question above: the black line is the one generated by a pseudo-random simulation in a computer. The human-generated line is more "regular", because humans expect random processes to "average out". Indeed, that's the theory. But not the reality. Humans are notoriously bad at distinguishing real randomness from what we believe is random, but isn't.

As you know I've been writing about #NoEstimates regularly on this blog. But I also send more information about #NoEstimates and how I use it in practice to my list. If you want to know more about how I use #NoEstimates, sign up to my #NoEstimates list. As a bonus you will get my #NoEstimates whitepaper, where I review the background and reasons for using #NoEstimates



Thursday, June 12, 2014

Creating options by slicing features - #NoEstimates technique


Each feature (or story) in a product backlog contains many undiscovered options. By taking features as they are without slicing them into thin slices of functionality we implicitly commit to an implementation strategy. However, when we slice features we create options that allow us to pro-actively manage the scope of a project.

Let’s return to the IT Support Ticketing System project we discussed in a previous post. A feature like the one below will not allow us to manage the scope actively.

  • As an employee I want to be able to submit issues to IT so that I can fix a particular problem that prevents me from working.

The feature above is what I would call a “binary” feature. Either the employee is able to submit an issue to IT or not. This simple feature can have large implications in terms of the amount of work required to implement it. Taking the feature above and breaking it down into several smaller features or stories will allow us to make decisions regarding the implementation order, or delaying certain parts of the implementation. Let’s look at an example:

  • As an employee I want to be able to email an IT issue to the IT department so that I can have a fix for a problem that prevents me from working.
  • As an IT helpdesk employee I want to have a queue of issues to handle so that I know what items I should be working on at any given time.

By slicing the original feature in this particular way we unpacked the functionality under the term “submit issues” in the original feature into two different features: Email (replaces submit) and Queue of issues (replaces the receiving end of the submission process). We’ve potentially reduced the scope of the initial feature (no need to have a system to enter IT tickets, just send an email), and we’ve given ourselves the option to implement a solution based on standard tools. The two features we created allow for a solution based on email and a spreadsheet program with shared editing, like Google Docs.

These two stories could still be implemented with a full-fledged IT issue tracking system, but that is an option. Not a mandatory outcome of the initial feature. Slicing features into separate functional parts helps us actively manage the scope by creating different implementation options that are often implicit and non-negotiable when we have larger features in the backlog.

Picture credit: John Hammink, follow him on twitter


Tuesday, May 27, 2014

Dealing with Complexity in Software Projects - The theory that explains why Agile Project Management works


Why do projects fail?

This is a question that haunts all project managers. Good and bad, experienced and beginners, Agile or non-Agile. The reason for that is simple: we believe that if we crack the code of why projects fail, we will be able to avoid failure in our own projects. After all who does not want to be in a successful project?
We believe that if we crack the code of why projects fail, we will be able to avoid failure in our own projects.
But before we can answer such a difficult question, we need to understand the factors that influence project failure. Many will immediately say that lack of planning or bad planning are major causes of project failure. We've all been told that more planning is the solution. And we have been told that the "right" planning is the solution. Sure, we all know that, but what does that "right kind of planning" look like in practice?

Enter Agile...

We know that "the right planning" is the solution, but we need to have a functional definition of what that "right planning" looks like. Agile has contributed greatly to further our understanding of software projects. For example: thanks to Agile we now can confidently state that individuals and their interactions are a key enabler for successful projects, not just "the right kind of planning". But there is more...

We are also starting to understand that there are some fundamental failures in the existing Theory of Project Management. Thanks to researchers like Koskela and Howell[1] we even have a framework to analyze what is wrong - very specifically - with traditional Project Management. Most importantly, that framework also helps us understand what needs to be different for projects to succeed in the new world of Knowledge Work.

In the article below (sign-up required) I explore the differences between the existing theory of project management and what Agile methods (such as Scrum) define as the new way to look at project management.

This paper explains some of the ideas that are part of the "Chaos Theory in Software Projects" workshop. In that workshop we review the lessons learned from Complexity and Chaos theory and how they apply to Project Management.

The goal of the workshop is to give project managers a new idea of what is wrong in the current view of project management and how that can be changed to adapt project management to the world of Knowledge work. To know more about the workshop don't hesitate to get in touch: vasco.duarte@oikosofy.com.

[1] Koskela and Howell, The Theory of Project Management: Explanation to Novel Methods, retrieved on April 2014
Picture credit: John Hammink, follow him on twitter


Friday, May 02, 2014

Real stories of how estimates destroy value in Software Development


A friend shared with me a few stories about how Estimates are destroying value in his organization. He was kind enough to allow me to share these stories anonymously. Enjoy the reading, I know I did! :)

The story of the customer that wanted software, but got the wrong estimates instead

Once, one of our customers asked for a small feature and, according to this customer, it was quite clear from the beginning what he wanted. Alas, in my experience things often don’t go as planned, so I added a good buffer on top of the needed estimates. Just like all developers I know do. Every day.

However, the story was about to get more interesting. A bit later, the sales team said those numbers were too high. "The customer will never accept those numbers!", they said. "And we really want this case."

After much negotiation we reduced our estimates. After all, we had added some buffer just in case. So, we reduced the estimates on the account, and I heard myself say (I should have known better): "Well, I guess that if everything goes well we could do it in that time."

My real surprise came a few days after the estimates were given to the sales team and the customer: I heard from the project manager that the company had agreed to a Fixed Price Project with the customer. "That is madness", I said to the sales team. I felt betrayed as a developer! I had been asked for an estimate for the project, but I was tricked into accepting a Fixed Price Project through the estimation process! During the estimation I was not told that this would be a Fixed Price project.

The result was that the project was delivered late; we exceeded the estimates we gave. However, I did learn a valuable lesson: if you are forced to create estimates and the customer requires a fixed price, then never obey the wishes of the sales team: their target is different. Or, alternatively, do away with the estimation process altogether: just ask the sales team what number they want to hear ;-).

History repeated: never trust the sales team to handle estimates

A few months ago a customer called in and requested a small feature for their existing product. They were asking for an estimate. Part of the work was very clear and would be very easy to implement, but there was a part that wasn’t that clear. After some pre-analysis and discussions with the other development team I knew roughly what would be needed to finish the job. Unlike in the story above, this time I knew the customer had asked for a Fixed Price Project and therefore added some buffer, just to be on the safe side.

After carefully estimating the work with the team we sent the total estimate to the sales team. This estimate included everything that was needed: analysis, implementation, testing and deployment. I did not split the estimate into its component parts because I didn’t want the customer to think that testing would be optional (yes, it has happened before!).

However, the sales team split my estimate into the different roles. Big mistake! Magically, the estimate that the customer received was half a day shorter than the one we provided. Not a big difference, but I learned my lesson!

Surprise number two was about to hit me: the customer said they hadn't used some hours from a previous contract and that they should get a discount. Long story short, when the project started we had to reduce our "cost" to 50% of the estimate I had originally given to the sales team. I learned that I should never trust the sales team with mathematical calculations!

We did manage to deliver the feature to our customer, thanks to some very aggressive scope management. This meant we delivered the functionality that was requested, but it did not perform as well as it could have if we had been allowed to work on the feature from the start, without the estimation back-and-forth.

I did learn my lesson: if your sales team doesn’t know how to sell software projects, then before you give them any estimates, add even more buffer on top of the lessons learnt from the previous story! ;-)

Estimation as an organizational smell

Certainly there are many organizations in the world that do not go through similar stories as these. However, many still do! For those organizations that are affected by stories like these I suggest that we look at different ways to manage software projects. #NoEstimates provides a clear alternative that has proved very successful in the past. For example, in the paper below I mention a story where a team would have given an estimate within 4% of the actual time it took the project to complete, if they had used #NoEstimates!

The Silver Lining

The story continues, however. Here's what my friend added at the end of his email:

Well, after all we have successfully changed the mindset of many in the company and in our team. We are already doing quite well.

Now I do things differently. The first thing I always discuss with the customer is how much money they would like to spend on the work they want to have done. This already gives me a good picture of whether we are even close in our understanding of the amount of work that is necessary, or whether we are more in the direction of “insane”.

Today I don't negotiate traditional contracts with my customers. Every job is now built on trust, communication and transparency.


 
(c) All rights reserved