
Tuesday, October 14, 2014

5 No Estimates Decision-Making Strategies


One of the questions that I and other #NoEstimates proponents hear quite often is: How can we make decisions on what projects we should do next, without considering the estimated time it takes to deliver a set of functionality?

Although this is a valid question, there are many alternatives to the assumptions implicit in it. These alternatives - which I cover in this post - have the side benefit of helping us focus on the most important work to achieve our business goals.

Below I list 5 different decision-making strategies (aka decision-making models) that can be applied to our software projects without requiring a long-winded, and error-prone, estimation process up front.

What do you mean by decision-making strategy?

A decision-making strategy is a model or approach that helps you make allocation decisions (where to put more effort, or spend more time and/or money). However, I would add one more characteristic: a decision-making strategy that helps you choose which software project to start must also help you achieve the business goals you have defined. More specifically, a decision-making strategy is an approach to making decisions that follows your existing business strategy.

Some possible goals for business strategies might be:

  • Growth: growing the number of customers or users, growing revenues, growing the number of markets served, etc.
  • Market segment focus/entry: entering a new market or increasing your market share in an existing market segment.
  • Profitability: improving or maintaining profitability.
  • Diversification: creating new revenue streams, entering new markets, adding products to the portfolio, etc.

Other types of business goals are possible, and it is also possible to mix several goals in one business strategy.

Different decision-making strategies should be considered for different business goals. The 5 different decision-making strategies listed below include examples of business goals they could help you achieve. But before going further, we must consider one key aspect of decision making: Risk Management.

The two questions that I will consider when defining a decision-making strategy are:

  1. How well does this decision proposal help us reach our business goals?
  2. Does the risk profile resulting from this decision fit our acceptable risk profile?

Are you taking into account the risks inherent in the decisions made with those frameworks?

All decisions have inherent risks, and we must consider risks before elaborating on the different possible decision-making strategies. If you decide to invest in a new and shiny technology for your product, how will that affect your risk profile?

A different risk profile requires different decisions

Each decision we make has an impact on the following risk dimensions:

  • Failing to meet the market needs (the risk of what).
  • Increasing your technical risks (the risk of how).
  • Contracting or committing to work which you are not able to staff or assign the necessary skills (the risk of who).
  • Deviating from the business goals and strategy of your organization (the risk of why).

The categorization above is not the only one possible. However, it is very practical and maps well to decisions regarding which projects to invest in.

There may be good reasons to accept an increase in your risk exposure in one or more of these categories, as long as that increase does not take you beyond your acceptable risk profile. For example, you may accept a larger exposure to technical risks (the risk of how) if you believe that the project has a very low risk of missing market needs (the risk of what).

An example would be migrating an existing product to a new technology: you understand the market (the product has been meeting market needs), but you take a risk with the technology with the aim of meeting some other business need.

Aligning decisions with business goals: decision-making strategies

When making decisions regarding which project or work to undertake, we must consider the implications of that work for our business and strategic goals. Therefore, we must choose the right decision-making strategy for our company at any given time.

Decision-making Strategy 1: Do the most important strategic work first

If you are starting to implement a new strategy, you should allocate enough teams and resources to the work that helps you validate and fine-tune the selected strategy. This might take the form of prioritizing work that helps you enter a new segment, or find a more valuable niche in your current segment, etc. The focus in this decision-making approach is validating the new strategy. Note that the goal is not "implement the new strategy", but rather "validate the new strategy". The difference is fundamental: when trying to validate a strategy you will want to create short-term experiments that are designed to validate your decision, instead of planning and executing a large project from start to end. The best way to run your strategy validation work is to run short-term experiments and re-prioritize your backlog of experiments based on the results of each experiment, as in the sketch below.
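To make this concrete, here is a minimal Python sketch of such an experiment backlog. Everything in it - the Experiment fields, the learning-per-week scoring rule, the sample data - is my own illustrative assumption, not a prescribed #NoEstimates implementation:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Experiment:
        hypothesis: str           # what the new strategy implies, stated testably
        expected_learning: float  # how much validation this buys us (0 to 1)
        cost_weeks: float         # experiments are kept deliberately short
        result: Optional[str] = None

    def next_experiment(backlog):
        """Pick the pending experiment with the best learning-per-week ratio."""
        pending = [e for e in backlog if e.result is None]
        return max(pending, key=lambda e: e.expected_learning / e.cost_weeks)

    backlog = [
        Experiment("Niche X will pay for feature A", 0.8, 2.0),
        Experiment("Segment Y converts via channel Z", 0.5, 1.0),
    ]
    experiment = next_experiment(backlog)
    experiment.result = "validated"  # record the outcome...
    # ...then re-score the remaining experiments with what was just learned.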

Decision-making Strategy 2: Do the highest technical risk work first

When you want to transition to a new architecture or adopt a new technology, you may want to start by doing the work that validates that technical decision. For example, if you are adopting a new technology to help you increase the scalability of your platform, you can start by implementing the bottleneck functionality of your platform with the new technology. Then test whether the gains in scalability are in line with your needs and expectations. Once you prove that the new technology fulfills your scalability needs, you can start to migrate all functionality to the new technology step by step, in order of importance. This should be done using short-term implementation cycles that you can easily validate by releasing or testing the new implementation.
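As an illustration, here is a minimal benchmark sketch of that first validation step. The two handler functions are made-up stand-ins for the bottleneck functionality on the current and candidate technology; a real validation would measure the production bottleneck under realistic load:

    import time

    def benchmark(fn, payloads, repeats=5):
        """Average wall-clock time for one pass over all payloads."""
        start = time.perf_counter()
        for _ in range(repeats):
            for p in payloads:
                fn(p)
        return (time.perf_counter() - start) / repeats

    def old_handler(n):  # stand-in: bottleneck on the current technology
        return sum(range(n))

    def new_handler(n):  # stand-in: same bottleneck on the candidate technology
        return n * (n - 1) // 2

    payloads = [10_000] * 100
    print("current tech  :", benchmark(old_handler, payloads))
    print("candidate tech:", benchmark(new_handler, payloads))
    # Only if the candidate meets the scalability target do we migrate the
    # rest, one short implementation cycle at a time.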

Decision-making Strategy 3: Do the easiest work first

Suppose you have just expanded your team and want to make sure the members get to know each other and learn to work together. This may be due to a strategic decision to start a new site in a new location. Selecting the easiest work first gives the new teams an opportunity to get to know each other and establish the processes they need to be effective, while still delivering concrete, valuable working software in a safe way.

Decision-making Strategy 4: Do the legal requirements first

In medical software there are regulations that must be met. Those regulations affect certain parts of the work/architecture. By delivering those parts first you can start the legal certification of your product before the product is fully implemented, and later - if needed - certify the changes you still have to make to the original implementation. This can significantly improve the time-to-market for your product. A medical organization that successfully adopted agile used this decision-making strategy to considerable business advantage: they were able to start selling their product many months ahead of the scheduled release. They could go to market earlier because they successfully isolated and completed the work necessary to certify the key functionality of their product. Rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market - gaining a significant advantage over their direct competitors.

Decision-making Strategy 5: Liability driven investment model

This approach is borrowed from an investment strategy that aims to tackle a problem similar to the one every bootstrapped business faces: what work should we do now so that we can fund the business in the near future? In this approach we make decisions with the aim of generating the cash flows needed to fund future liabilities.
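A toy sketch of the idea follows, with made-up numbers and a deliberately naive selection rule: pick the work whose expected cash can plausibly arrive before the liability falls due. Real decisions would of course also weigh the uncertainty in those cash estimates:

    from dataclasses import dataclass

    @dataclass
    class WorkItem:
        name: str
        expected_cash: float  # revenue this work is expected to generate
        weeks_to_cash: int    # how long until that cash actually arrives

    def fund_liability(candidates, amount_due, weeks_until_due):
        """Select work that can plausibly cover the liability before it is due."""
        in_time = sorted(
            (w for w in candidates if w.weeks_to_cash <= weeks_until_due),
            key=lambda w: w.weeks_to_cash,  # earliest cash first
        )
        chosen, covered = [], 0.0
        for w in in_time:
            if covered >= amount_due:
                break
            chosen.append(w)
            covered += w.expected_cash
        return chosen

    plan = fund_liability(
        [WorkItem("Customization for client A", 15_000, 4),
         WorkItem("Self-service signup feature", 12_000, 6),
         WorkItem("Big platform rewrite", 80_000, 30)],
        amount_due=20_000,   # e.g. payroll due in 8 weeks
        weeks_until_due=8)
    print([w.name for w in plan])  # the rewrite is out: its cash arrives too late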

These are just 5 possible investment or decision-making strategies that can help you make project decisions, or even business decisions, without having to invest in estimation upfront.

None of these decision-making strategies guarantees success, but then again nothing does except hard work, perseverance and safe experiments!

In the upcoming workshops (Helsinki on Oct 23rd, Stockholm on Oct 30th) that Woody Zuill and I are hosting, we will discuss these and other decision-making strategies that you can start applying immediately. We will also discuss how these decision-making models apply to day-to-day decisions as much as to strategic decisions.

If you want to know more about what we will cover in our world-premiere #NoEstimates workshops don't hesitate to get in touch!

Your ideas about decision-making strategies that do not require estimation

You may have used other decision-making strategies that are not covered here. Please share your stories and experiences below so that we can start collecting ideas on how to make good decisions without the need to invest time and money into a wasteful process like estimation.


Wednesday, October 08, 2014

Lean Change Management: A Truly Agile Change Management approach


"I've been working in this company for a long time, we've tried everything. We've tried involving the teams, we've tried training senior management, but nothing sticks! We say we want to be agile, but..."

Many people in organizations that try to adopt agile will have said this at some point. Not every company fails to adopt agile, but many do.

Why does this happen, what prevents us from successfully adopting agile practices?

Learning from our mistakes

Actually, this section should be called learning from our experiments. Why? Because every change in an organization is an experiment. It may work, it may not work - but for sure it will help you learn more about the organization you work for.

I learned this approach from reading Jason Little's Lean Change Management, probably the most important book about Agile adoption to be published this year. I liked his approach to how change can be implemented in an organization.

He describes a framework for change that is cyclical (just like agile methods):

  • Generate or gain insights: in this step we - the people involved in the change - run small experiments (for example, asking questions) to generate insights into how the organization works, and into what we could use to help people embrace the next steps of the change.
  • Define options: in this step we list the options we have: what experiments could we run that would help us move towards our Vision for the change?
  • Select and run experiments: each option, after being selected, is transformed into an experiment. Each experiment will have a set of actions, people to involve, expected outcomes, etc.
  • Review, learn and...: after the experiments are concluded (and sometimes right after starting them) we gain even more insights that we can feed right back into what Jason calls the Lean Change Management Cycle.

The Mojito method of change

The overall cycle for Lean Change Management is then complemented with concrete practices that Jason used, and the book explains how to apply them. Jason uses the story of The Commission to describe how he applied the different practices. For example, in Chapter 8 he goes into detail about how he used the Change Canvas to create alignment around a major change in a large (and slow-moving) organization.

Jason also reviews several change frameworks (Kotter's 8 steps, McKinsey's 7S, OCAI, ADKAR, etc.) and how he took the best out of each framework to help him walk through the Lean Change Management cycle.

The most important book about Agile adoption right now

After having worked on this book for almost a year together with Jason, I can say that I am very proud to be part of what I think is a critical knowledge area for any Agile Coach out there. Jason's book describes a very practical approach to changing any organization - which is what Agile adoption is all about.

For this reason I'd say that every Agile Coach should read this book and learn the practices and methods that Jason describes. They will be key tools for anyone wanting to change their organization and adopt Agile in the process.

Here's where you can find more details about what the book includes.


Friday, April 30, 2010

Tired of useless boring planning meetings? Read on, I've got a solution for you


In a meeting today I was graphically reminded of how some of the basic concepts in software development still escape many of us. Case in point: the meaning of capacity.

In many people's minds capacity still means "how many man-hours we have available for real work". This is plain wrong.

Let's decompose this assumption to see how wrong it is.


  1. First, implicit in this assumption is the idea that we can estimate exactly how many man-hours we have available for "real work". The theory goes like this: I have 3 people, the sprint is 2 weeks/10 days, so the effort available, and therefore the capacity, is 30 man-days. This is plain wrong! Let's see why:

    1. Not all three people will be doing the same work. So even if you have a theoretical maximum of 30 man-days available, not everyone can do every kind of work. If, for example, one person is an analyst, another a programmer and the third a tester, that leaves us with effectively 10 man-days each of analysis, programming and testing effort. Quite different from 30 man-days!
    2. Then there's the issue that not 100% of each person's time can actually be used for work. There are meetings about the next sprint's content, there are interruptions, time to go to the toilet... You get the picture. In fact it is impossible to predict how many hours each person will spend on "real" work.

  2. Then there are those pesky things we call "dependencies". Sometimes someone in the team is idle because they depend on someone else (in or out of the team) and can't complete their feature. This leads to unpredictable delays, and therefore to ineffective use of the effort available in a Sprint.
  3. Finally (although other reasons can be found) there's the implicit assumption that, even if we could know the amount of effort available perfectly, we could know exactly how long some piece of work takes from beginning to end. This is implicit in how we use the effort numbers: by scheduling features against that available effort. The fact is that we (humans) are very bad at estimating something we have not done before, which in software is the case most of the time.

The main message here is: effort available (e.g. man-hours) is not the same as capacity. Capacity is the metric that tells us how many features a team or a group of teams can deliver in a Sprint, not the available effort!

Implications of this definition of capacity

There are some important implications of the above statement. If we recognize that capacity is closer to the traditional definition of throughput, then we understand that what we would need to estimate is not just the size of a task plus the effort available. No, it's much more complex than that: we would need to estimate the impact of dependencies, errors, meetings, etc. on the utilization of the effort available.

Let me illustrate how complex this problem is. If you want to empty a tank of 10 liters attached to a pipe, you will probably want to know how much water can flow through the pipe in 1 minute (or some similar length of time) and then calculate how long it takes to completely empty the tank. Example: if 1 liter flows through the pipe in 1 minute then it will take 10 minutes to empty a 10 liter tank. Easy, no?

Well, what if you now try to guess the time to empty the same tank, but instead of being given the metric that 1 liter of water flows through the pipe each minute, you are given:

  • The diameter of the pipe
  • The material the pipe is made of
  • The viscosity of the liquid in the tank
  • The probability of obstacles existing in the pipe that could impede the flow of the liquid
  • Turbulence equations that allow you to calculate flow in the presence of an obstacle

Get the point? In software we are in the second situation! We are expected to calculate capacity (which is actually throughput) given a huge list of variables! How silly is that?!

For a better planning and estimating framework

The fact is that the solution for the capacity (and therefore planning) problem in software is much, much easier!

Here's a blow by blow description:

  • Collect a list of features for your product (not all of them, just the ones you really want to work on)
  • With the whole team, assess the features to make sure that none of them is "huge" (i.e. the team is clueless about what it is or how to implement it). If they are too large, split them in half (literally). Try to get all features to fit into a sprint (without spending a huge effort on this step).
  • Spend about 3 sprints working on that backlog (it pays off to have shorter sprints!)
  • After 3 sprints look at your velocity (the number of features completed in each sprint) and calculate an average
  • Use the average velocity to tell the Product Owner how long it will take to develop the product they want, based on the number of features in the backlog
  • Update the average expected velocity after each sprint

Does it sound simple? It is - the sketch below shows the whole calculation. But most importantly, I have run many experiments based on this simple idea, and I have yet to find a project where it would not apply. Small or big, it worked for all projects I've been involved in.
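Here is the whole calculation as a minimal sketch. The numbers are made up for illustration; in practice only your own sprint data belongs here:

    velocities = [7, 9, 8]  # features completed in each of the last 3 sprints
    backlog_size = 40       # features currently in the backlog

    average_velocity = sum(velocities) / len(velocities)
    sprints_left = backlog_size / average_velocity

    print(f"average velocity: {average_velocity:.1f} features per sprint")
    print(f"forecast: about {sprints_left:.1f} sprints to finish the backlog")

    # After every sprint: append the new velocity and recompute, so the
    # forecast always rests on data rather than up-front estimates.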

The theory behind

There are a couple of principles behind this strategy. First, it's clear that if you don't change the team, the technology or any other relevant environmental variable, the team will perform at a similar level (with occasional spikes or dips) all the time. Therefore you can use historical velocity information to project velocity into the future!

Second, you still have to estimate, but you do that at the level of one Sprint. The reason is that even if we have the average velocity, an average does not apply to a single sprint but rather to a set of sprints. Therefore you still need to plan your sprint: to identify possible bottlenecks, coordinate work with other people, etc.

The benefits

Finally, the benefits. The most important benefit is that you don't depend on estimates based on unknown assumptions to make your long-term plans. You can rely on data. Sure, sometimes the data will be wrong, but compare that with the alternative: when was the last time you saw a plan hold up? Data is your friend!

Another benefit is that you don't need to twist anyone's arm to produce the metrics needed (velocity and the number of features in the backlog), because those metrics are produced automatically by the simple act of using Scrum.

All in all not a bad proposition and much simpler than working with unavoidably incorrect estimates that leave everybody screaming to be saved by the bell at the end of the planning meeting!



Saturday, March 07, 2009

Perfection in software is cheaper, not more expensive...

The fallacy that perfection = expensive has many forms, two of which I just ran into in the blogosphere. What is this fallacy? Simple: you say to yourself "perfection in software development is way too expensive, therefore I should not even try to achieve it". But that's a false problem, because perfection is only expensive if you continue to work the way you do today. To take the next step towards perfection you can change the way you work (process, guidelines, tools, etc.), and in the process you may end up with a cheaper way to do what you do today and be closer to perfection!

In fact, there is not a single piece of evidence corroborating the idea that being better (i.e. trying to achieve perfection) is more expensive than not trying to achieve it.

But let's get back to the examples that I just ran into on the net:

The first fallacy: the cost of Zero-Defects



In a post by psabilla on shmula we are faced with the idea that Zero Defects would be too expensive to achieve and that therefore, the author suggests, it should not even be attempted.

The author even presents a graph of the theoretical cost of achieving Zero Defects. It is worth noticing that this graph is completely hypothetical - it has no connection to actual data collected from a real project trying to achieve Zero Defects.

Further along, the author reveals the prejudice that blocks a clearer view of why Zero Defects is not only possible but, if achieved, will yield a much faster AND cheaper software development process.

The author writes:
As defects are identified and eliminated, there will be theoretically few defects. But, this means that identifying defects will require more effort and will become more and more difficult, thus increasing the costs of this activity, along with the subsequent costs to fix the defects identified: The costs to inspect and test increases as there are fewer and fewer defects.


In the paragraph above, take note of the causality that unmasks the prejudice. The author starts with a seemingly obvious statement (removing defects leads to fewer defects being present) and then exposes the prejudice behind the article: fewer defects means that it will be more expensive to identify the remaining ones.

Think about that phrase for a minute: "if we have fewer defects it will be more expensive to find the remaining defects".

Here are the problems with that phrase:

  • It assumes that the role of testing is to find and remove defects. This is wrong, very wrong: if a defect is added to the code base and only found much later, that is indeed a much more expensive process than removing the defect closer to the source - ideally immediately after it has been added. I've written about this here. In other words: the role of testing is to prevent defects from being added to the source in the first place!
  • The second (wrong) assumption is that if we have fewer (presumably a lot fewer) defects, we will then spend more money finding the few that are left. Well, I don't know what projects the author has worked on, but as of today I don't know of any software project that achieved zero defects upon release. This means that at some point in the development and release process the project team says "alright, we have spent enough time looking for defects and not finding them, it is time to release". In other words, project teams are smart enough to know when the cost of searching for a defect is higher than the benefit it brings.


Assuming the arguments above are sound, we have to acknowledge that the basis for the author's argument is false: Zero Defects is not only possible but also (given people's pragmatism) does not represent any additional cost to projects.

The second fallacy: You can't get rid of defects without Inspections



Jurgen Appelo uses some of the data and arguments from the previously cited post to go on to say that Zero Inspections is an impossible and too costly goal. Note that Jurgen's post touches on other issues; I will not comment on those and will restrict myself to the claimed impossibility or prohibitive cost of Zero Inspections.

The author builds on the (already shown to be wrong) argument that Zero Defects is too expensive to also say that we cannot live without Inspections.

The author confuses Inspections with normal ways to prevent errors from going too far. I would agree that we need some inspections, but not in the way that Jurgen, or indeed Tom Gilb, has suggested.

There is some data already available (albeit disputed) suggesting that Inspections provide little or no value, and mostly focus on the format of the artifact (templates, coding conventions, grammatical defects, wrong color, wrong use of terms, etc.) and on technicalities (curly braces where none are needed, too-short method names, etc.) instead of the fundamental defect magnets that logical flaws and thread-safety problems represent. This is also my experience.

Taking the example of code reviews: these are often more useful for knowledge sharing than for defect identification (not to mention that it is quite expensive to review code - although you should still do it in many situations :).

My point is this: Inspections have been around for decades, so why does the software industry consistently ignore them? Because they don't deliver enough value! At least not Inspections in the sense of "let's get together in an additional meeting and review this artifact to make sure it has good quality, then follow that with a formal process to change/update the artifact and get it approved" (the artifact can be code).

Ad-hoc inspections are often much more convenient, cheaper and value-adding, but this is not what Jurgen or Tom Gilb suggest should be done! Hence my problem with Jurgen's article.

Zero Inspections is not only possible (been there, done that, even with Large - big L - projects) but it is also a catalyst for teams to prepare so that they don't need those Inspections. Imagine this: if you have an Inspection coming up for the last module you changed, how motivated are you going to be to do your job in the best way possible and aim for Zero Defects? The answer is: not motivated! In fact, my experience is that people will just aim to avoid being embarrassed or humiliated during the Inspection meeting.

If you have the practices that support it and aim for Zero Defects (automated tests, early Acceptance Test definition, etc.) you are going to be much more motivated (and supported) to achieve a very low (approaching zero) defect count!

Oh, and by the way it follows that if you can achieve (proven) Zero Defects why do you need the Inspections?

Conclusion



There's a hidden (ok, not very hidden) argument here. If you aim for (and approach) Zero Defects, then your process will be faster (less rework due to defects, fewer inspections) and cheaper (less technical support needed, fewer reimbursements to annoyed customers).

But there's a catch! The catch is: you cannot achieve Zero Defects or Zero Inspections unless you invest in changing the way you work. It takes effort, dedication and constant experimenting and learning. However, if you don't do it, your competitors will - sooner or later. And when they do, you will find yourself with a more expensive product that annoys your customers. What do you do then?


Saturday, February 28, 2009

What makes a great company? Recognizing you are not one yet...

The statement in the title is true even for companies that are commonly recognized as great; as they say, "here today, gone tomorrow". If you really want to be great you should not shout it out loud at every opportunity. You should recognize what your strengths are, play to them, and always, always be on the lookout for what you can learn to do better.

Follow the example of Toyota: get back to basics, visit the Gemba, and be faithful to the only thing that can help you succeed: Learning.

The Demise of modern management



Modern managers think they are good and infallible because, after all, they were promoted or head-hunted to manage. Well, reality is a bit more complex than that. Managers can only do their job properly if they visit the Gemba often. You need to understand the problems at hand deeply before you can make good decisions. And this is true for all levels of management, but especially for the highest levels.

Modern managers, especially top-of-the-pyramid ones need to be more "workers" and less managers (which nowadays translates to bean-counters with fat wallets).


Friday, January 30, 2009

Bas, and the world are not ready...

My good friend Bas Vodde just published a review of the pamphlet Scrumban.

I see that his review is yet another manifestation of what I titled "the world is not ready".

I don't think his ideas are wrong - on the contrary. They simply show that a lot of ground still has to be covered before we understand the nature of software and why certain techniques (like Scrumban) work, despite their counter-intuitive properties.

One example is when Bas states:
Another interesting thought that made me uncomfortable is the idea that there is such a thing as "in-control production" and special/common causes within that. To me, it feels like a manufacturing assumption within product development. Though Corey does cover these assumptions and makes great points about how to control variability, I'm not sure if it is a good thing to do.


Not believing that a team or organization can produce software at a regular pace, in terms of number of stories, is a common thing - most common among people who strongly believe that you need to spend a lot of time estimating stories.

That is not my experience. I've worked on many projects (in many roles), and one thing common to all of the agile projects I've been on is that the team's throughput (the number of stories they get DONE in a sprint) is very stable - "in control". This, to me, suggests that the throughput of a team can indeed be "in control" (statistical process control (SPC) is the actual term).

I recently wrote a post detailing how you can use knowledge from SPC to help a team improve. The point I make in that post is that you can look at a team's velocity and, based on it, analyze whether the variation comes from a special cause (the velocity is all over the place) or from common causes (the variation is predictable, i.e. it goes down or up in a predictable fashion). The sketch below shows the basic mechanics.
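As an illustration, here is a minimal XmR-style control-chart sketch. The velocity numbers are invented and 2.66 is the standard XmR chart constant; the idea is simply to derive limits from a stable baseline and flag the sprints that fall outside them:

    from statistics import mean

    baseline = [8, 9, 7, 8, 8, 9]  # velocities from a stable run of sprints
    moving_ranges = [abs(a - b) for a, b in zip(baseline, baseline[1:])]

    centre = mean(baseline)
    limit = 2.66 * mean(moving_ranges)  # standard XmR control-limit factor
    upper, lower = centre + limit, centre - limit

    for sprint, v in enumerate([15, 7, 8], start=len(baseline) + 1):
        if not lower <= v <= upper:
            # Outside the limits: likely a special cause - dig into it with
            # a deep retrospective and error-proof the root cause.
            print(f"sprint {sprint}: velocity {v} outside [{lower:.1f}, {upper:.1f}]")
        # Inside the limits: common-cause variation - improving it means
        # changing the process itself, usually with management support.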

This distinction is very important. Special causes need to be eliminated one by one, by instituting error-proofing, for example. Common causes are much harder to eliminate because, as Wikipedia states:
Common-cause variation is characterised by:

  • Phenomena constantly active within the system;
  • Variation predictable probabilistically;
  • Irregular variation within an historical experience base; and
  • Lack of significance in individual high or low values.



In practice this means that a team whose velocity variation comes from special causes needs help doing deep retrospectives, with PDCA cycles to tackle the root causes of that variation.

However, a team whose velocity varies due to common causes will need support from management, because they most likely need to change their process, or the process of neighboring teams, to get any improvement in their throughput.

My experience is that teams of software developers can handle the special causes most of the time on their own, but need support and coaching to be able to overcome the common causes for their lack of constant improvement.

Not recognizing this lesson from manufacturing and applying it to software development is, in my opinion, to lose sight of the goal: the point is not to fall in love with Agile, but to fall in love with improving our industry.


Saturday, September 20, 2008

Learn or else...

Through Elisabeth Hendrickson I got to Brett Pettichord's post about the role of feedback in Agile.

Read it if you haven't yet.

Here are the pearls that made me post this:

“If you don’t have meaningful feedback then you’re not agile. You’re just in a new form of chaos.”


and

“Agile practices build a technical and organizational infrastructure to facilitate getting and acting on feedback. If you aren’t going to adapt to feedback, then this infrastructure is waste that will only slow you down.”


This is very much in line with (or even the same as) one of the tenets of TPS (the Toyota/Thinking Production System): the PDCA cycle. In order to improve you need to learn, and feedback is the fuel for the process of learning. No feedback, no learning.

If you are not learning, you are not Agile. No matter what you are doing!


Friday, April 11, 2008

Don't blame, reward people that surface their own mistakes!

In Lean thinking, errors or mistakes are seen as opportunities for improvement and growth. But in Lean, mistakes don't come alone: they also provide an opportunity for the team to create a poka-yoke tool or device. Poka-yoke stands for "mistake proofing", i.e. making sure the same mistake does not happen again. This can only be done by changing the process or the tools.

All of these things stem from the most important value in Lean Thinking: Respect for people.

When Toyota started exporting their ideas outside Japan they found out that in some countries the culture was to "blame", not to reward the honesty of admitting a mistake.

There's this story (thanks for the link, Jukka!) of an American engineer (let's call him Mike) working in Japan. Mike was working on the line and, while assembling a part in a car, he scratched the paint. Influenced by his culture he was afraid of admitting the mistake, so he thought twice before pulling the andon cord (the device that notifies others when there is a problem). After a few seconds of struggling with the dilemma he pulled the cord and waited. The team leader immediately came to the workstation and was able to fix the defect quickly. The line did not stop.

At the end of the day, while the daily meeting was going on, the team had a brief exchange in Japanese that Mike did not understand; his fear was that they were criticizing him in Japanese so that he would not feel so bad. He was wrong. Soon after, the team started applauding and looking at Mike. He was confused. When asked, the supervisor clarified: "the team was proud of you admitting the mistake and wanted to express that!".

This is why mistakes should be "admitted" by those who make them, not just "blamed on" the people that made them. Admitting the mistake and wanting to improve based on the learning is the most important part of surfacing the mistakes and "stopping and fixing". A culture that only assigns blame for mistakes will only create a need to hide those mistakes.

Let people admit their mistakes and create a welcoming environment where people will actually be proud of surfacing and fixing their mistakes. Don't just blame: blame kills improvement.

Update: Updated to add link to the story about Toyota.



Wednesday, March 19, 2008

Admitting mistakes is the first step to learning, not just for you, but also for your team and company

Here is an excellent piece from the blog Evolving Excellence about how a worker at Toyota battled his fear of admitting a mistake and was rewarded by his peers and supervisor for not hiding, but rather disclosing, the mistake he had made.

Admitting you committed a mistake is a very important part of continuous improvement. The andon cord (a sort of error alarm) should be pulled as soon as a mistake/error/defect is created or found.

Finding mistakes is not a blame game in Lean thinking; it is a key part of finding ways to avoid mistakes altogether through poka-yoke, or mistake-proofing, our work methods!

Behind this willingness to show and learn from the mistakes we make are some concepts in the Toyota Production System (TPS):
  1. If the student has not learned, the professor has not taught.
  2. Most mistakes are caused by the situation or the system, not by people's incompetence or unwillingness to do their best.
  3. Respect for people (one of the key pillars of TPS).
These concepts, together with other key concepts in TPS, allow people to concentrate and focus on continuous improvement instead of playing the very inefficient and unproductive blame game that mostly impedes learning.


Updated: with a link to the Respect for people principle in Toyota's website.


 