
Wednesday, October 08, 2014

Lean Change Management: A Truly Agile Change Management approach


"I've been working in this company for a long time, we've tried everything. We've tried involving the teams, we've tried training senior management, but nothing sticks! We say we want to be agile, but..."

Many people in organizations that try to adopt agile will have said this at some point. Not every company fails to adopt agile, but many do.

Why does this happen, what prevents us from successfully adopting agile practices?

Learning from our mistakes

Actually, this section should be called learning from our experiments. Why? Because every change in an organization is an experiment. It may work, or it may not - but it will certainly help you learn more about the organization you work for.

I learned this approach from reading Jason Little's Lean Change Management. Probably the most important book about Agile adoption to be published this year. I liked his approach to how change can be implemented in an organization.

He describes a framework for change that is cyclical (just like agile methods):

  • Generate or gain insights: in this step we - who are involved in the change - run small experiments (for example, asking questions) to generate insights into how the organization works, and what we could use to help people embrace the next steps of the change.
  • Define options: in this step we list the options we have: what experiments could we run that would help us move towards our Vision for the change?
  • Select and run experiments: each option, after being selected, is transformed into an experiment. Each experiment will have a set of actions, people to involve, expected outcomes, etc.
  • Review, learn and...: after the experiments are concluded (and sometimes right after starting them) we gain even more insights that we can feed right back into what Jason calls the Lean Change Management Cycle.

The Mojito method of change

The overall Lean Change Management cycle is complemented in the book with concrete practices that Jason used, and he explains how to apply them. Jason uses the story of The Commission to describe how he applied the different practices. For example, in Chapter 8 he goes into detail about how he used the Change Canvas to create alignment during a major change in a large (and slow-moving) organization.

Jason also reviews several change frameworks (Kotter's 8 steps, McKinsey's 7S, OCAI, ADKAR, etc.) and how he took the best out of each framework to help him walk through the Lean Change Management cycle.

The most important book about Agile adoption right now

After having worked on this book for almost a year together with Jason, I can say that I am very proud to be part of what I think is a critical knowledge area for any Agile Coach out there. Jason's book describes a very practical approach to changing any organization - which is what Agile adoption is all about.

For this reason I'd say that any Agile Coach out there should read the book and learn the practices and methods that Jason describes. These practices and ideas will be key tools for anyone wanting to change their organization and adopt Agile in the process.

Here's where you can find more details about what the book includes.


Tuesday, September 23, 2014

The No Estimates principle: The importance of knowing when you are wrong


You started the project. You spent hours - no, days! - estimating it. The project starts and your confidence in its success is high.

Everything goes well at the start, but at some point you find the project is late. What happened? How can you be wrong about estimates?

This story is very common in software projects. So common, in fact, that I bet you have lived through it many times in your life. I know I have!

Let’s get over it. We’re always wrong about estimation. Sometimes more, sometimes less, and very, very rarely are we wrong in a way that makes us happy: we overestimated something and can deliver the project ahead of (the inflated?) schedule.

We’re always wrong about estimation.

Being wrong about estimates is the status quo. Get over it. Now let’s take advantage of being wrong! You can save the project by being wrong. Here’s why...

The art of being wrong about software estimates

Knowing you are wrong about your estimates is not difficult after the fact, when you compare estimates to actuals. The difficult part is to make a prediction in a way that can be tested regularly, and very early on - when you still have time to change the project.

Software project estimates, as they are usually done, delay the feedback on “on time” performance to a point in time when there’s very little we can do about it. Goldratt grasped this problem and made a radical suggestion: cut all estimates in half, and use the freed-up time as a project buffer. Pretty crazy, huh? Well, it worked, because it forced projects to face their failures much earlier than they otherwise would. Failing to meet a deadline early in the life-cycle of the project gave them a very powerful tool in project management: time to react!

The #NoEstimates approach to being wrong...and learning from it

In this video I explain briefly how I make predictions about a possible release date for the project based on available data. Once I make a release date prediction, I validate it as soon as possible, typically every week. This approach allows me to learn early enough when I’m wrong and then adjust the project as needed.

We’re always wrong, the important thing is to find out how wrong, as early as possible

After each delivery (whether it is a feature or a timebox like a sprint), I update my prediction for the release date of the project based on the lead time or throughput rate so far. After updating the release date projection, I can see whether it has changed enough to require a reaction by the project team. I can make this update to the project schedule without gathering the whole team (or "the chosen ones") into a room for an ungodly long estimation meeting.

If the date has not changed outside the original interval, or if the delivery rate is stable (see the video), then I don’t need to react.

When the release date projection moves to a time outside the original interval, or the throughput rate has become unstable (did you see the video?), then you need to react: first to investigate the situation, and later to adjust the parameters of your project if needed.
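
To make this concrete, here is a minimal sketch in Python of one way such a weekly projection could be computed from throughput data. The numbers and variable names are hypothetical, and this is an illustration of the idea rather than the exact method shown in the video.

  # Illustrative sketch: project a release window from observed throughput.
  # All data and names below are hypothetical.
  from datetime import date, timedelta
  from statistics import mean, stdev

  weekly_throughput = [4, 6, 5, 3, 7, 5]   # stories delivered per week so far
  remaining_stories = 42                    # stories left in the release scope
  today = date(2014, 9, 23)

  avg = mean(weekly_throughput)
  sigma = stdev(weekly_throughput)

  # Optimistic and pessimistic weekly rates: average +/- one sigma.
  fast_rate = avg + sigma
  slow_rate = max(avg - sigma, 0.1)  # avoid dividing by zero or a negative rate

  earliest = today + timedelta(weeks=remaining_stories / fast_rate)
  latest = today + timedelta(weeks=remaining_stories / slow_rate)

  print(f"Projected release window: {earliest} to {latest}")
  # Re-run this every week with the latest data. If the window drifts outside
  # the interval you originally communicated, that is the signal to react.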

Conclusion

The #NoEstimates approach I advocate will allow you to know when the project has changed enough to warrant a reaction. I make a prediction, and (at least) every week I review that prediction and take action.

Estimates, done the traditional way, also give you this information, but too late. This happens because of the big-batch thinking that a reliance on estimates enables (larger work items seem ok if you estimate them), and because of the delayed dependency integration it enables (estimated projects typically allow teams that depend on each other to work separately, because of the agreed plan).

The #NoEstimates approach I advocate has one goal: reduce the feedback cycle. These short feedback cycles will allow you to recognise early enough how wrong you were about your predictions, so that you can make the necessary adjustments!

Picture credit: John Hammink, follow him on twitter


Monday, September 15, 2014

The Release Paradox: releasing less often makes your teams slower and decreases quality


Herman is a typical agile coach. He works with teams to help them learn how to deliver high-quality software quickly.

Many teams want to focus on design, architecture, or (sometimes) even on business value. But they are usually not in a hurry to release quickly.

Recently Herman shared a story with me that illustrates how releasing quickly can help teams deliver high-quality software much faster than if they focused only on quality. This is the case of a team that was working on a long-overdue project. They had used a traditional, linear process in the past and had been able to release software only very recently, after more than 12 months of work on the latest release.

Not surprisingly, they were having trouble releasing regularly. The software was not stable; once it was live it had many problems that needed to be fixed quickly, and worst of all: all of this was having a direct impact on the company’s business.

The teams were extremely busy fixing the problems they had introduced into the product over the last year and could not focus on solving the root causes of those problems.

They were in full-fledged firefighting mode. They worked hard every day to fix yet another problem and release yet another hot fix.

This lasted for a few weeks, but once the firefighting mode was over, Herman worked with the teams to improve their release frequency. During their work with Herman, those teams went from a full year without any release to a regular release every two weeks.

At first the releases were not always possible, but with time they improved their processes, removed the obstacles preventing them from releasing every two weeks, and started releasing regularly.

What happened next was surprising for the teams. The list of problems after each release did not grow - as they expected - but instead shrank.

When customers reported problems after a 2-week release, the teams were much faster to fix them and quicker to release a fix if one was required. When the fix was not critical, they waited for the following release, which was, after all, only 2 weeks away.

By focusing on releasing every two weeks, Herman’s teams were able to focus on small, incremental changes to their product. That, in turn, enabled them to fine-tune their development and release processes.

Here are some of the key changes the teams implemented:
  1. They started with a 4 week release cycle, and fine-tuned their daily builds and release testing process to enable a release every 2 weeks.
  2. They invested time and energy to improve their test automation strategy and automated the critical tests to enable them to run “enough” tests to be confident that the quality was at release level.
  3. They had some teams on maintenance duty in the first few iterations to make sure that any problem found after release could quickly be fixed, and released to customers if necessary.
  4. They changed their source code management strategy to enable some teams to work on longer term changes while others worked on the next release.
  5. They involved all teams necessary to complete a release in their iterations. This especially affected the production/operations, localization, documentation and marketing teams, as well as other teams when needed.
This list of changes was the result of the drive to complete each release and of learning from the failures of the previous one. Some changes were harder to implement than others; in particular, the testing strategy needed to support 2-week release cycles had to be changed and adjusted several times.

One of the key problems the teams had to solve was the lack of coordination with departments that directly contributed to the release but had not previously been involved in their day-to-day work.

This process lasted several months, and would not have been possible without a clear Vision set forth by the teams in cooperation with Herman, who helped them discover the right way to reach that Vision within their context.

Herman’s work as a coach was that of a catalyst for management and the teams in that organization. He was able to create in their minds a clear picture of what was possible. Once that was clear, the teams and the management took ownership of the process and achieved a step-change in their ability to fulfill market demands and customer needs.

Customers have no reason to change provider as they have an ever-improving experience when using this company’s services.

Today, this organization releases a new version of their product every two weeks. Without even noticing it, their customers receive regular improvements to the product they use, and have no reason to change provider as they have an ever-improving experience when using this company’s services.

Picture credit: John Hammink, follow him on twitter


Tuesday, August 19, 2014

How to choose the right project? Decision making frameworks for software organizations


Frameworks to choose the best projects in organizations are a dime a dozen.

We have our NPV (net present value), we have our customized Criteria Matrix, we have Strategic alignment, we have Risk/Value scoring, and the list goes on and on.

In every organization there will be a preference for one of these or similar methods to choose where to invest people’s precious time and money.

Are all these frameworks good? No, but they aren’t bad either. They all have some potential positive impact, at least when it comes to reflection. They help executive teams reflect on where they want to take their organizations, and how each potential project will help (or hinder) those objectives.

So far, so good.

“Everybody’s got a plan, until they get punched in the face” ~Tyson

Surviving wrong decisions made with perfect data

However, reality is seldom as structured and predictable as the plans make it out to be. Despite the obvious value that the frameworks above have for decision making, they can’t be perfect because they lack one crucial aspect of reality: feedback.

Models lack one critical property of reality: feedback.

As soon as we start executing a particular project, we have chosen a path and have allocated people’s time and money. That, in turn, sets in motion a series of other decisions: we may hire some people, we may subcontract part of the project, etc.

All of these subsequent decisions will have further impacts as the projects go on, and they may lead to even more decisions being made. Each of these decisions will also have an impact on the outcome of the chosen projects, as well as on other sub-decisions for each project. Perhaps the simplest example is the conflict that arises when certain tasks from different projects must be executed by the same people (shared skills or knowledge).

And at this point we have to ask: even assuming that we had perfect data when we chose the project based on one of the frameworks above, how do we make sure that we are still working on the most important and valuable projects for our organization?

Independently from the decisions made in the past, how do we ensure we are working on the most important work today?

The feedback bytes back

This illustrates one of the most common problems with decision making frameworks: their static nature. They are about making decisions "now", not "continuously". Decision making frameworks are great at the time when you need to make a decision, but once the wheels are in motion, you will need to adapt. You will need to understand and harness the feedback of your decisions and change what is needed to make sure you are still focusing on the most valuable work for your organization.

All decision frameworks have one critical shortcoming: they are static by design.

How do we improve decision making after the fact?

First, we must understand that any work that is “in flight” (aka in progress) in IT projects has a value of zero, i.e., in IT projects no work has value until it is in use by someone, somewhere. And at that point it has both value (the benefit) and cost (how much we spend maintaining that functionality).

This dynamic means that even if you have chosen the right project to start with, you have to make sure that you can stop any project, at any time. Otherwise you will keep committing more time and more money (by making irreversible “big bang” decisions) to projects that may prove to be much less valuable than you expected when you started them. This phenomenon of continuing to invest beyond the project’s benefit/cost trade-off point is known as the Sunk Cost Fallacy, and it is a very common problem in software organizations: reversing a decision made using a trusted process is very difficult, both practically (stopping the project = losing all its value) and bureaucratically (how do we prove that the decision to stop is better than the decision to start the project?).

Can we treat the Sunk Cost Fallacy syndrome?

While using the decision frameworks listed above (or others), don’t forget that the most important decision you can make is to keep your options open in a way that allows you to stop work on projects that prove less valuable than expected, and to invest more in projects that prove more valuable than expected.

In my own practice this is one of the reasons why I focus on one of the #NoEstimates rules: Always know what is the most valuable thing to work on, and work only on that.

So my suggestion is: even when you score projects and make decisions on those scores, always keep in mind that you may be wrong. So, invest in small increments into the projects you believe are valuable, but be ready to reassess and stop investing if those projects prove less valuable than other projects that will become relevant later on.

The #NoEstimates approach I use allows me to do this at three levels:

  • a) Portfolio level: by constantly reviewing progress in each project and assessing the value delivered, as well as constantly preparing to stop each project by releasing regularly to a production-like environment. Portfolio flexibility.
  • b) Project level: by separating each piece of value (User Story or Feature) into an independent work package that can be delivered independently from all other project work. Scope flexibility.
  • c) User Story / Feature level: by keeping User Stories and Features as small as possible (1 day for User Stories, 1-2 weeks for Features), and releasing them independently at fixed time intervals. Work item flexibility.

Do you want to know more about adaptive decision frameworks? Woody Zuill and I will be hosting a workshop in Helsinki to present our #NoEstimates ideas and to discuss decision-making frameworks for software projects that build on our #NoEstimates work.

You can sign up here. But before you do, email me and get a special discount code.

If you manage software organizations and projects, there will be other interesting workshops for you on the same days. For example, the #MobProgramming workshop, where Woody Zuill shows you how he has been able to help his teams significantly improve their well-being and performance. #MobProgramming may well be a breakthrough in Agile management.

Picture credit: John Hammink, follow him on twitter


Tuesday, August 12, 2014

Hierarchies remove scaling properties in Agile Software projects


There is a lot of interest in scaling Agile Software Development. And that is a good thing. Software projects of all sizes benefit from what we have learned over the years about Agile Software Development.

Many frameworks have been developed to help us implement Agile at scale. We have: SAFe, DAD, Large-scale Scrum, etc. I am also aware of other models for scaled Agile development in specific industries, and those efforts go beyond what the frameworks above discuss or tackle.

However, scaling as a problem is neither a software nor an Agile topic. Humanity has been scaling its activities for millennia, and very successfully at that: the Pyramids in Egypt, the Panama Canal in Central America, the immense railways all over the world, the Airbus A380, etc.

All of these scaling efforts share some commonalities with software and among each other, but they are also very different. I'd like to focus on one particular aspect of scaling that has a huge impact on software development: communication.

The key to scaling software development

We've all heard countless accounts of projects gone wrong because of lacking (inadequate, or just plain bad) communication. And typically, these problems grow with the size of the team. Communication is a major challenge in scaling any human endeavor, and especially one - like software - that depends so heavily on successful communication patterns.

In my own work in scaling software development I've focused on communication networks. In fact, I believe that scaling software development is first an exercise in understanding communication networks. Without understanding the existing and necessary communication networks in large projects, we will not be able to help those projects adapt. In many projects a different approach is used: hierarchical management with strict (and non-adaptable) communication paths. This approach effectively reduces the adaptability and resilience of software projects.

Scaling software development is first and foremost an exercise in understanding communication networks.

Even if hierarchies can successfully scale projects where communication needs are known in advance (like building a railway network for example), hierarchies are very ineffective at handling adaptive communication needs. Hierarchies slow communication down to a manageable speed (manageable for those at the top), and reduce the amount of information transferred upwards (managers filter what is important - according to their own view).

In a software project those properties of hierarchy-bound communication networks restrict valuable information from reaching stakeholders. As a consequence one can say that hierarchies remove scaling properties from software development. Hierarchical communication networks restrict information reach without concern for those who would benefit from that information because the goal is to "streamline" communication so that it adheres to the hierarchy.

In software development, one must constantly map, develop and re-invent the communication networks to allow for the right information to reach the relevant stakeholders at all times. Hence, the role of project management in scaled agile projects is to curate communication networks: map, intervene, document, and experiment with communication networks by involving the stakeholders.

Scaling agile software development is - in its essential form - a work of developing and evolving communication networks.

A special thank you note to Esko Kilpi and Clay Shirky for the inspiration for this post through their writings on organizational patterns and value networks in organizations.

Picture credit: John Hammink, follow him on twitter

Labels: , , , , , , , , ,

at 07:00 | 4 comments
RSS link

Bookmark and Share

Tuesday, July 01, 2014

What is Capacity in software development? - The #NoEstimates journey


I hear this a lot in the #NoEstimates discussion: you must estimate to know what you can deliver for a certain price, time or effort.

Actually, you don’t. There’s a different way to look at your organization and your project. Organizations and projects have an inherent capacity, and that capacity is the result of many different variables - not all of which can be predicted. Although you can add more people to a team, you don’t actually know what the impact of that addition will be until you have some data. Estimating the impact is not going to help you, if we are to believe the track record of the software industry.

So, for me the recipe to avoid estimates is very simple: Just do it, measure it and react. Inspect and adapt - not a very new idea, but still not applied enough.

Let’s make it practical. How many of these stories or features is my team or project going to deliver in the next month? Before you can answer that question, you must find out how many stories or features your team or project has delivered in the past.

Look at this example.

How many stories is this team going to deliver in the next 10 sprints? The answer to this question lies in the concept of capacity (aka Process Capability). Every team, project or organization has an inherent capacity. Your job is to learn what that capacity is and limit the work to capacity! (Credit to Mary Poppendieck (PDF, slide 15) for this quote).

Why is limiting work to capacity important? That’s a topic for another post, but suffice it to say that taking on more work than the available capacity causes many stressful moments and sleepless nights; while having less work than capacity might get you and a few more people fired.

My advice is this: learn what the capacity of your project or team is. Only then will you be able to deliver, reliably and with quality, the software you are expected to deliver.

How to determine capacity?

Determining the capacity or capability of a team, organization or project is relatively simple. Here's how:

  • 1- Collect the data you have already:
    • If using timeboxes, collect the stories or features delivered(*) in each timebox
    • If using Kanban/flow, collect the stories or features delivered(*) in each week or period of 2 weeks depending on the length of the release/project
  • 2- Plot a graph with the number of stories delivered for the past N iterations, to determine if your System of Development (slideshare) is stable
  • 3- Determine the process capability by calculating the upper (average + 1*sigma) and lower (average - 1*sigma) limits of variability (see the sketch below)
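
As an illustration of step 3, here is a minimal sketch in Python that computes the capacity band from per-iteration delivery counts. The data is hypothetical; the calculation is simply the average plus/minus one sigma, as described above.

  # Illustrative sketch of step 3: compute the capacity (process capability)
  # band from the number of stories delivered per iteration. Hypothetical data.
  from statistics import mean, stdev

  delivered_per_iteration = [8, 11, 9, 7, 10, 9, 12, 8]

  avg = mean(delivered_per_iteration)
  sigma = stdev(delivered_per_iteration)

  lower_limit = avg - sigma
  upper_limit = avg + sigma

  print(f"Average: {avg:.1f} stories per iteration")
  print(f"Capacity band: {lower_limit:.1f} to {upper_limit:.1f}")
  # If future iterations consistently fall outside this band, the system of
  # development has changed and the limits should be recalculated.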

At this point you know what your team, organization or process is likely to deliver in the future. However, the capacity can change over time. This means you should regularly review the data you have and determine (see slideshare above) if you should update the capacity limits as in step 3 above.

(*): by "delivered" I mean something similar to what Scrum calls "Done". Something that is ready to go into production, even if the actual production release is done later. In my language delivered means: it has been tested and accepted in a production-like environment.

Note for the statisticians in the audience: Yes, I know that I am assuming a normal distribution of delivered items per unit of time. And yes, I know that the Weibull distribution is a more likely candidate. That's ok, this is an approximation that has value, i.e. gives us enough information to make decisions.

You can receive exclusive content (not available on the blog) on the topic of #NoEstimates - just subscribe to the #NoEstimates mailing list. As a bonus you will get my #NoEstimates whitepaper, where I review the background and reasons for using #NoEstimates.


Picture credit: John Hammink, follow him on twitter


Tuesday, June 24, 2014

Humans suck at statistics - how agile velocity leads managers astray

Humans are highly optimized for quick decision making - the so-called System 1 that Kahneman refers to in his book "Thinking, Fast and Slow". One specific area of weakness for the average human is understanding statistics. A very simple exercise to illustrate this is the coin-toss simulation.

Humans are highly optimized for quick decision making.

Get two people to run this experiment (or one computer and one person if you are low on humans :). One person throws a coin in the air and notes down the results. For each "heads" the person adds one to the total; for each "tails" the person subtracts one from the total. Then she graphs the total as it evolves with each throw.

The second person simulates the coin-toss by writing down "heads" or "tails" and adding/subtracting to the totals. Leave the room while the two players run their exercise and then come back after they have completed 100 throws.

Look at the graph that each person produced: can you detect which one was created by the real coin, and which was "imagined"? Test your knowledge by looking at the graph below (don't peek at the solution at the end of the post). Which of these lines was generated by a human, and which by a pseudo-random process (computer simulation)?

One common characteristic in this exercise is that the real random walk - the one produced by actually throwing a coin in the air - often contains longer streaks than the one simulated by the player. For example, the coin may generate a sequence of several consecutive heads or tails throws. No human (except you, after reading this) would write that down, because it would not "feel" random. We humans are bad at creating randomness and at understanding the consequences of randomness, because we are trained to see meaning and a theory behind everything.
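
If you want to run the "real coin" half of this exercise on a computer, here is a small Python sketch that generates the pseudo-random walk; it is just one way to do it, and the 100-throw length matches the exercise above.

  # Simulate the coin-toss random walk: +1 for heads, -1 for tails,
  # tracking the running total over 100 throws.
  import random

  total = 0
  walk = []
  for _ in range(100):
      total += 1 if random.random() < 0.5 else -1
      walk.append(total)

  print(walk)
  # Now write down your own "imagined" heads/tails sequence, build the same
  # running total, and compare the two: the real walk usually contains longer
  # streaks in one direction than the hand-made one.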

Take the velocity of the team. Did it go up in the latest sprint? Surely they are getting better! Or, it's the new person that joined the team, they are already having an effect! In the worst case, if the velocity goes down in one sprint, we are running around like crazy trying to solve a "problem" that prevented the team from delivering more.

The fact is that a team's velocity is affected by many variables, and its variation is not predictable. However - and this is the most important part - velocity will reliably vary over time. Or, in other words, it is predictable that the velocity will vary up and down with time.

The velocity of a team will vary over time, but around a set of values that are the actual "throughput capability" of that team or project. For us as managers it is more important to understand what that throughput capability is, rather than to guess frantically at what might have caused a "dip" or a "peak" in the project's delivery rate.

The velocity of a team will vary over time, but around a set of values that are the actual "throughput capability" of that team or project.

When you look at a graph of a team's velocity don't ask "what made the velocity dip/peak?", ask rather: "based on this data, what is the capability of the team?". This second question will help you understand what your team is capable of delivering over a long period of time and will help you manage the scope and release date for your project.

The important question for your project is not, "how can we improve velocity?" The important question is: "is the velocity of the team reliable?"

Picture credit: John Hammink, follow him on twitter

Solution to the question above: the black line is the one generated by a pseudo-random simulation on a computer. The human-generated line is more "regular", because humans expect random processes to "average out". Indeed, that's the theory - but not the reality. Humans are notoriously bad at distinguishing real randomness from what we believe is random, but isn't.

As you know I've been writing about #NoEstimates regularly on this blog. But I also send more information about #NoEstimates and how I use it in practice to my list. If you want to know more about how I use #NoEstimates, sign up to my #NoEstimates list. As a bonus you will get my #NoEstimates whitepaper, where I review the background and reasons for using #NoEstimates



Friday, November 25, 2011

Kanban vs Scrum, the ultimate fight? Don't think so, here's why:...



Wow, what a week! A BIG post on Kanban by Scrum evangelist @jcoplien literally put the blogosphere (and twittersphere) on fire!

It is good to have these family fights in the Agile family once in a while. As a life-philosopher once said: "These things gotta happen every five years or so, ten years. Helps to get rid of the bad blood".

But what are the differences between Kanban and Scrum, really?

Here's my take on the differences:

Kanban innovation


Kanban is, from its origin, a more systematic approach to measuring, visualizing and following up work in a "system" (one team, many teams, a whole company - you name it). Thanks to the work of the Poppendiecks, David Anderson and Don Reinertsen (and others, I'm sure), a pretty interesting and innovative economic framework has been put into the software development process lingo: Cost of Delay, queues, optimizing the whole, etc.
This economic framework is, in my view, the major innovation that Kanban proponents bring to the table. This was to be expected as many of the early adopters of Kanban were using Scrum before and felt the need to quantify and analyze some aspects of software development that Scrum did not tackle.


Scrum innovation


Scrum brought to the forefront of process discussions issues that had never made it to our attention before: self-organization, people/team dynamics, problem solving within a short cycle with feedback loops in place (the Sprint), etc.
The major innovation of Scrum, in my view, was the introduction of the socio-technical system of software development, as Liker describes it in The Toyota Way. Other similar software development processes took some of the ideas that Scrum also took, but shied away from self-organization at the team level and the blocker-removal focus of roles such as the Scrum Master. Those methods did not last long, because the teams felt like puppets instead of adults with the possibility (and responsibility) to produce the best product they could.

Wrap-up


Both Kanban and Scrum brought significant innovations to the software development world. Those are just two types of innovations. There is more coming from the community and instead of focusing only on these two methods (which all of you should experiment with!) we should also look at what else is missing in the software development ecosystem.
My next interest area is "Complex Systems" (Complexity + Systems Thinking) and Management as a profession. I'll be exploring those subjects more and more in the future as a way to complement my use of Kanban *AND* Scrum. I suggest you do the same and find what else is missing in your environment and look for what else is around that could help you complement what you are already using. Here's a suggestion: start with @jurgenappelo's book on Management!

Happy reading, happy learning!


Wednesday, May 18, 2011

Agile is about customer delight

Today on twitter I got into an interesting conversation with @jeffpatton. The discussion was about whether Agile, as a family of methodologies and a value system, is (or isn't) directed at customer delight.

@JeffPatton's point was that it is not. Here's one of his replies to me on this subject:


Later on during the day I had a face-to-face conversation with @mvonweiss and tried to crystallize in my mind why I disagreed with @jeffpatton. And the point is this: I believe that one of the core values in Agile is exactly about customer delight. The third value of Agile reads that Customer Collaboration is a preferred way to work in a software project. That customer collaboration is there for a reason.



The idea is that only through dialogue with the customer can we understand their needs. That is achieved by having close and continuous dialogue with the customer and through the delivery of software early and often (see principles 1, 3 and 4, for example). This continuous delivery of a working product builds the feedback loops we need. These feedback loops are directed specifically at providing an opportunity for the customer to interface with the team and help us understand what needs to be changed in order to provide, you guessed it: Customer Delight!

Real customer involvement patterns


Of course, there are many different types of projects that we work on: custom software, integration projects, shrink-wrapped software, websites, etc. All of these projects have different "distances" to the customer (see figure below). But those distances can be bridged with many techniques so that the Agile values and principles can still be applied and we can still get feedback early and often.



I've worked on many projects of the shrink-wrapped kind where we have teams working in an organization away from the customers (consumers). In these cases, interacting with "all" customers is impossible. But we already have many techniques that help us understand our customers better even when we can't be in direct contact! For example: customer surveys, usability testing, requirement exploration techniques, user persona development, etc.

Some people in the Agile community have been pushing us to consider these methods, David Hussman (@davidhussman) is but one example, but there are many more (including @jeffpatton, of course).

If @jeffpatton's point is to emphasize the need to consider the customer more fully in Agile projects, then I am in total agreement. But one thing is for sure, Agile methods are much more directed at Customer Delight than many of the other methods available today.


Monday, May 16, 2011

You cannot transition to Agile. Stop and just embrace it!



I am writing this blog post to explore a concept. So bear with me, I'll probably ask more questions than I'll answer.

Why do most Agile transitions fail in our companies (or government organizations, for that matter)? My view is that we cannot actually transition from a command-and-control management paradigm to an Agile / Complex management paradigm. The reasons are not fully clear to me, but I believe it has something to do with the fact that we typically try to use a pre-determined way to make those transitions happen.

Case in point: when we try to move from Waterfall software development to Agile software development, we will typically draw up a plan for the transition with "steps" or "phases". Those "phases" or "steps" will typically be "stable points" in the evolution of our system (the company or organization). However, the Agile / Complex management paradigm assumes, at its core, that software work is complex, and therefore that there is no predictable causality. The consequence is that the "steps" or "phases" between the command-and-control paradigm and the Agile paradigm cannot themselves be "stable" in the sense that predictability can be recognized.

By following the argument above I'd state that: transitions fail because we try to move from a command and control paradigm to an Agile / Complex paradigm by applying command and control models. It is impossible to 'move orderly to a complex environment'.

What does it mean in practice for us? Well, for starters we cannot "plan" the transition in the same way we tried to plan our waterfall projects in the past. We can, and should have a goal or an idea of where we want to be. But after that we must embrace the new paradigm, or "Adopt the new Philosophy" as Deming put it. There are no intermediate steps between the "old command and control mindset" and the new "complex / agile mindset".

As this is an idea I'm still developing, I'll probably return to this subject and write some more, but in the meanwhile: what do you think? Does this make sense? What did you get from the above?

Photo credit: Marc Soller @ flickr


Monday, April 05, 2010

We all want more value. Fine! But what do you mean by value? A discussion on the meaning of "value"


In the agile community there has lately been a great deal of talk about "value" and why it is more important than "process".

I also believe that we should try to optimize for value in our software environment, be it a small ISV or a big SW corporation. But what does value mean for you?

The post-agilists (people who believe they know better, and they may...) have started touting a new goal, a new Holy Grail for the software industry: "value". But very little is clear about what is now called "value".

I started looking around for definitions that may be useful in a discussion of "value" in the context of the software development industry. Here's what I found.

The TPS way


In the early days of TPS (the Toyota Production System), Taiichi Ohno defined value very simply: value is those things that the customer, if observing our actions, would be willing to pay for. Example: if you write software for consumers, but spend a considerable amount of time filling in paperwork that does not directly or indirectly improve the product, that is waste and therefore "non-value add" work.
On the other hand, if you were developing software for the medical industry and spent a considerable amount of time filling in forms that would ultimately lead to the successful certification of your product by the authorities, then that would be "value add" work.

Ohno's definition is simple and useful, but it requires immense knowledge of your particular industry and company before it can be applied successfully. In Ohno's practice, managers would be asked to "stand in the circle" (a circle he would draw on the factory floor) for hours and hours. Ohno would then come back and ask the manager: "What did you observe?" If the manager did not have a satisfactory answer, Ohno would scold them and tell them to continue observing until they had found a way to improve the operation or the process in order to add more value to the product with less effort or waste.

This story illustrates a principle that is at the core of TPS: deep understanding. Inherent to this principle is another interesting concept: it is impossible to create a "formula" for value that applies to all situations. In other words: only you know what value is in the context of your work and company.

Now, that's quite different than what is being discussed back and forth in the agile community now.

What others say



Let's see how other people, linked to the software industry, define value:

Some of the loudest proponents of the focus on "value" are Kai and Tom Gilb. Both have a lot of credibility, and Tom, in particular, has a long history of contributions to the improvement of our industry, so it is interesting to see what is described as "value" in their approach.

Kai and Tom, on their site, write a lot about "value"; in fact they talk about many different types of value. Here are some:

  • value requirements
  • value decisions
  • value delivery (as a verb/action)
  • product values
  • value product owner certification (yes, certification)
  • value management certification (again: yes, certification)


About requirements they write that they should be:
"Clear, Meaningful, Quantified, Measurable and Testable".

These Value Requirements seem to be designed to structure and formalize the specification of "values".

Interestingly, they clearly assign Scrum a role that has (at least in their depiction) very little to do with "value management". See for yourself.

The emphasis is on "scales" or quantification of value:
Stakeholder Values are the scalar improvements a Stakeholder need or desire.
Product Qualities are the scalar attributes of a Product.


These are important additions to the effort to define value, but not really conclusive or contributing a clear definition of what value is. Ohno's definition was clear, but impossible to measure/understand without a deep understanding of one's business.


In VersionOne's blog Mike Cottmeyer argues that:
Lean tends to take a broader look at value delivery across the entire value stream... across the enterprise... Scrum by it's very nature tends to look only at the delivery team.

(...) the Lean folks say that Scrum focuses on velocity and Lean focuses on value... "


This in itself is not really helpful in defining value, but it helps us make a distinction, if we agree. The distinction is that delivering more features may, or may not, deliver more value. They are not necessarily the same.

This argument is easy to make when you look at the market out there and see that many products with "less" features tend to deliver the same (if not more) value to their customers. (examples include the irreverent 37signals crew as well as Nintendo or Apple).

In this non-exhaustive research I came across another interesting definition of Value. This one by Chris Matts:
Value is profit


Although this may seem logical, I doubt that your customers would agree that the more profit you make, the better "they" feel. In fact, I would argue otherwise: if profit is your only measure of value, you will be driven to make decisions that effectively reduce the value delivered to your customer, even while your profits increase.
Now, it is important to understand that profit has to be part of the equation, but it is also important to understand that, from your customer's perspective, your profit is only valuable insofar as it allows you to continue delivering value to them - not as an absolute.

Conclusion


What can we conclude from this superficial evaluation of the discussion out there?

Well, for starters we can easily say that the discussion about "value" in software development is just about to get started.

We have some properties that help us understand and define value:

  1. Value is what the customer, if observing our actions, would be willing to pay for. (Ohno's definition)
  2. Gilb says that value must be translated in some form (Value Requirements) and that the key characteristic is that these Value Requirements are Quantifiable and Measurable
  3. Mike Cottmeyer, on the other hand, states that Features (the stuff that gets developed as described in requirements) are different from Value. This means that even if we had Value Requirements, they would not necessarily add value from the customer's point of view.


Mike's and Gilb's views seem, on the face of it, to be contradictory, so we are left with a definition (Ohno's) that is useful but very hard to apply.

So, for the time being, Value is something that we cannot easily define "a priori"; defining it requires a deep understanding of the business we are in (to be able to "guess" what the customer would pay for).

I'd say that we are still very far away from a viable (mass-usable) definition of Value for our industry.

What do you think? Have you defined value in the context of your project/company? What was that and how did you reach that definition?

Photo credit: Will Lion @ flickr


Saturday, March 13, 2010

We continue to miss the point. It is not Kanban vs. Scrum, it is "people over process"!


The Scrum vs. Kanban debates rage on in the blogosphere but I can't help but feel that our Agile community is missing the point.

Where is the "Individuals and interactions over processes and tools" that is part of the core values?

I commented on Rachel Davies's post about what she calls W-Agile (waterfall disguised as Agile, I guess). In that post she identifies very correctly a typical anti-pattern of agile adoption (read the post, it is worth it).

However in the comments she continues one thread with which I disagree, and I think the evidence for my argument is easily found around us.

Rachel states:
Vasco, You say "I disagree with the statement that Kanban can do anything to help the W-agile teams." I find this an odd thing for you to say and wonder if this is a gut reaction or based on experience?

I have seen Kanban help make the end-to-end workflow visible as a first step to improve those invisible parts. I don't see doing that as incompatible with Scrum.


My assertion is that Kanban *alone* cannot help where other methods have failed, unless *people* change their way of thinking by adopting the Kanban (or any other) ideas. Sure, people can change, and many of us have changed our mindset when adopting iterative software development, then XP, then Scrum and recently Kanban. That's a fine argument to make, i.e.: Kanban can bring things to people's attention that other methods have failed to *and* change the way people behave. But that is totally different from saying "Kanban brings success"!

Please note that the key ingredient here is not Kanban (or Scrum, or XP), but the fact that people *change* their views, prejudices, etc.

From this quickly follows the following proposition / hypothesis:
A new method can make a team succeed if and only if the person (or people) helping the team adopt the method succeeds in changing the team's behavior.

Proving this hypothesis is rather simple: just look around you.

  1. Have you seen teams succeed with some method and other teams fail with the same method? I have. Many. Generalizing this observation proves that *a* method alone cannot make a team succeed (no matter the method).

  2. Have you seen teams succeed while adopting method X with the help of person A, after previously having failed to adopt the same method when helped by person B? I have. I have been both person B and person A myself! -> this proves that a person (mentor/coach) can have more impact than the method itself.


Given that a mentor/coach can have a larger influence on the team's adoption of a method than the method itself, and that the same method can lead to success or to failure (Jurgen's argument as well), it follows that the method alone cannot be a pre-condition for success or for failure.

At the end of the day it is about getting the right help (or changing the help if it is not working) to adopt a method that fulfills your business goals (whatever that method may be).

Agility is about providing business value, not about methods.

Photo credit: David Kingham @ flickr


Tuesday, March 09, 2010

The Kanban vs Scrum argument stinks! But, can we learn anything from it?

There were some interesting conversations on twitter last evening, so interesting that they are worth some comment.

There was an interesting back-and-forth between @jurgenappelo and @agilemanager about whether it is possible to do Scrum in a "pull" mode or not, as well as other issues. But this was just the tip of the iceberg of the Kanban vs Scrum discussion. Here's one comment that caught my eye:



Here @agilemanager is trying to prove that Scrum cannot be done "right" because of inherent problems, specifically that velocity is so unstable that it cannot be used for planning. That argument, however, is easily proven wrong.
Here is a graph of a team implementing Scrum, using velocity to plan for success, and with a rather stable velocity. (Technically the velocity is "under control" as defined in statistical process control.)



This tells us that (contrary to what @agilemanager states) Scrum planning based on velocity is possible. In fact, that's the main long-term planning metric that I've used in the past, with success.
The fact that we have a (statistically) controlled velocity also allows us to do other things, like identify common causes and special causes of variation, which require different actions/interventions with the team in order to improve their overall result.
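
For readers who want to see what "under control" means here, a minimal sketch in Python with hypothetical velocity data: compute control limits from the mean and standard deviation, and flag sprints outside the limits as candidate special causes.

  # Illustrative sketch: check whether a team's velocity is "under control" in
  # the statistical-process-control sense. The velocity data is hypothetical.
  from statistics import mean, stdev

  velocities = [23, 27, 25, 22, 26, 24, 25, 28, 23, 26]

  avg = mean(velocities)
  sigma = stdev(velocities)
  lower, upper = avg - 3 * sigma, avg + 3 * sigma  # classic 3-sigma limits

  special_causes = [(sprint, v)
                    for sprint, v in enumerate(velocities, start=1)
                    if v < lower or v > upper]

  print(f"Mean velocity {avg:.1f}, control limits {lower:.1f} to {upper:.1f}")
  print("Sprints to investigate (special causes):", special_causes)
  # Points inside the limits are common-cause variation: reacting to each of
  # them individually is chasing noise.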

Then @jgoodsen pitched in with this comment:



Here @jgoodsen is disagreeing with @jurgenappelo's statement that as Kanban gets wider adoption it too will be misapplied and lead to failure. This type of extreme position-taking is an example of why the discussion between Scrum fundamentalists and Kanbanistas is quite useless for the rest of us who are interested in learning more.

The point is: Kanban, just like Scrum, is initially being practiced by early adopters - people who tend to read and study things more, and earlier. These are also typically people who are given a license to experiment, to try out new things.
In the end the effect is that, typically, early adopters are better at adopting new methods because they have done it more often (they are early adopters, after all). Kanban is just another method/framework/whatever; it will succeed and fail as much as Scrum does. @jurgenappelo is right: any method fails when applied by a large enough population.

The takeaway from this discussion for us should be that methods, ultimately, are irrelevant. It is the learning achieved by experimenting that matters.

We should be sharing our learning, not our method allegiance!


Monday, February 22, 2010

We need Proof!: Talk at Agile Saturday in Tallinn

Last week I had the honor to present the keynote at Agile Estonia's first public event. Agile Saturday was full of people from that community and it was great to learn how that community is being developed.

The organizers also filmed the event, so here is the link to my talk video and slides.

Enjoy!

Agile Saturday #1 Keynote by Vasco Duarte from Anton Arhipov on Vimeo.




Friday, August 14, 2009

On how PMBOK Change Management creates variability and reduces predictability


In the last post I tried to point out how the analytical approach of any standard (and specifically PMBOK's) will create problems for those actually having to implement those standards.
Change management in PMBOK is a particularly large problem in this respect. Scope Control (PMBOK's process for change management in scope) comes in at 19 different activities. This will leave even the most experienced project manager gasping for air when it comes to implementing these activities (they are not implemented in a vacuum, obviously, but that does not make it easier...)

Contrast that with the approach that Scrum uses: Re-plan every sprint based on the improved knowledge you have of the product and the market.

Now, that's simple! Simple, but not easy.

For Scrum's change management to work properly, the people in charge need to understand the stakeholders' needs, the market needs and the current state of development. None of these is easy to achieve, but - and this is the point - they are easy to explain.

In Scrum, every Sprint will have a small number of ceremonies:

  • The Sprint planning: where the previous sprint's result as well as the changes in environment (stakeholders, market, etc.) are input for the planning process
  • The Daily meeting: where the progress is reviewed and plans quickly adjusted to meet the Sprint goals
  • The Sprint review + demonstration: where the status of development is analyzed as well as the reasons for possible problems
  • The Retrospective: where we analyze what went well/wrong and take actions based on that to improve for the next Sprint, i.e. change our process.


That's it. That's how Scrum addresses change in scope: by being prepared for it at the very core of the process. Every sprint change is reviewed and handled. Plans are adjusted.

Interestingly, this has an important (yet often forgotten) side-effect. Because the stakeholders know that their needs will be taken into account in the next Sprint at the latest, they don't feel the need to disturb the teams during the Sprint. This allows the team to focus on the ongoing work and meet their Sprint goals, while at the same time not avoiding change but embracing it!

Implementing a process based on PMBOK will not take this aspect into account. It is my experience that, in practice, a PMBOK-based approach leads to a separate change management process (often through Change Management Boards), which will regularly, but at unpredictable intervals, submit changes to the teams. Teams then need to react "immediately" to those changes: reviewing, commenting and sometimes even accepting them. Anyone familiar with Queuing Theory recognizes this problem immediately: adding more work to a team will make them late and reduce predictability, because of the added variability inherent in the "change"-related tasks.

This is why PMBOK fails when it separates processes like change management into their own "process" within the larger software development process.

I should however emphasize: there is value in PMBOK. Read it if you can, but you should not follow PMBOK when defining your processes. Rather you should look at Scrum, Kanban or other processes for inspiration on how to run your software development processes. This is because PMBOK is useful, but not enough!

Photo credit: Will Lion @ Flickr


Monday, August 10, 2009

PMI vs. Agile, what is different, and why should we care

The PMI people don't seem to stop trying to "guess" what Agile is. Guessing is the right term, because anyone with more than a few hours experience in Agile software development can see their cluelessness from afar!

Take this article by Lynda Bourne DMP, PMP (no, I'm not making those TLAs up, she uses them). She says that Agile is different from waterfall because of:

  • The need for robust change management and configuration management to track the evolution of the Agile project
  • The critical importance of developing the correct strategy and architecture at the beginning of the Agile project


Someone who says that in Agile project management you need "robust change management and configuration management" probably does not even understand what those are, let alone Agile. Hear me out: Change Management is not needed in Agile; Agile *IS* change management. Take Scrum, for example: the whole idea of Scrum is to provide a framework for accepting and managing changes. Scrum *IS* change management. To say that Agile project management "needs" strong change management is to miss one of the most elementary points of Agile Software Development.

Then comes the killer: we Agile project managers (supposedly) need to focus on "developing the correct strategy and architecture at the beginning of the Agile project"; missing this, Lynda writes, will lead to failure. Only one word: WTF? C'mon Lynda, that is probably the largest misconception of Agile Project Management that I've seen in my (admittedly) short life!

There are thousands of posts about why Agile people focus on "growing" architectures rather than "getting them right up-front" (aka BDUF). Please read up: just google for Agile Architecture and you will find many articles that explain how we look at architecture development in Agile (here's an example link).

There seems to be a lot of discussion happening in PMI circles about Agile, but PMI people need to understand that the practices they've developed for building things, acquiring companies, etc. don't all apply to software. PMI people should first learn about software, and then about Agile. Trying to bypass software and going straight for an Agile take-over will only set us (the software industry) back another 10 years and lose much of the evolution we gained with the Agile movement.

Labels: , , , , , ,

at 10:09 | 35 comments
RSS link

Bookmark and Share

Friday, August 07, 2009

To all PMBOK, PMP and PMI people: you are missing 1 million points! Stop trying to explain something you don't understand!

The traditionalists are starting to be quite dangerous in their hostile "takeover" of Agile and Scrum (Kanban can't be far behind).

In this post, Glen tries to tell us that it is OK to call what we do in a Scrum (for example) development effort "project management".

Now, that would not be so serious if he did not go and quote the list of activities in the PMBOK and explain that, because you are doing those, you are (by syllogism, one supposes) doing Project Management. Well, it's not that simple, as I try to explain in my comment on the article:


It's not that simple. One of the biggest changes that Agile brings to the development of software is that, for most so-called projects, Scrum (or other methods) changes the approach to a more continuous way of working, i.e. the work is always ongoing; only the input (requirements) is managed within a time frame (a release).

So, to answer your question (are you doing project management in Scrum?): in many software projects you DON'T have project management, you have WORK management. Nevertheless, you have all of the activities you list (except project initiation), which also shows how useless the list is, because you always have those activities in any endeavour, project or not.

To sum it up: I think that PMBOK is a good read for newbies and people starting in WORK or PROJECT management, just like many other books are, but PMBOK is dangerous in one aspect: it singles out, as separate processes, aspects that should/must be part of everyday work. Take the example of Risk Management: in Scrum, all activities in the process are risk management activities (planning, daily meeting, demo, retrospective...). If you follow PMBOK you get the idea that Risk Management is actually a separate activity that may even involve different actors/stakeholders. This is BS, pure and simple.

In my opinion PMBOK and the prevailing approaches to Project Management today are simply dangerous and destroy customer/shareholder value.

We need a new paradigm in project/work management and Scrum is a good start!

Labels: , , , ,

at 09:32 | 3 comments
RSS link

Bookmark and Share

Thursday, August 06, 2009

The CMMI folks are out to get you! Seriously, they are starting with Scrum, what's next?

The folks talking about CMMI and Scrum are dangerous! Not least because they miss some of the most basic points about software development! You would think they would know about software, but you would be wrong!

Check this out:

In an article about CMMI and Scrum, the site ExecutiveBrief (go there at your own risk) states something like:

"There is a tradeoff between cost and quality"


The person who wrote that line cannot have written one single line of production code in his/her life! What a f*?€%# idiot (excuse my French, I'm sounding like @bellware).

The point is that quality software is cheaper, not more expensive, than crappy software. For everybody: not just the customer, but also the vendor/developer.

When people try to sell you the idea that CMMI and Scrum are complementary, they are only disguising Scrum in clothes that are easier to swallow for the gray-suited execs who don't understand software at all and should have been fired a long time ago!

Labels: , , , ,

at 09:06 | 0 comments
RSS link

Bookmark and Share

Wednesday, July 29, 2009

Stop committing to iteration scope!


This may seem like heresy in the world of "Scrum requires commitments from the team", but think about it.

If your iteration is about two weeks (give or take) and your team is pretty much fixed for that period of time (you can't really "change" a team that fast), then what else is left? Yes, precisely: only scope is left.

In a typical iteration your time is fixed (it had better be if you are doing Scrum) and your capacity/team is fixed, so you are left with only one decision to make: choose to respect either the scope or the Definition of Done.

Definition of Done is probably the single most important practice, as it is the practice that makes sure you have a future, so your choice should be pretty obvious: don't commit to scope!

It's not that simple, mister!



Of course, the devil is in the details. How can you commit to "anything" for an iteration? Herein lies the crucial point of this post: User Stories are not fixed scope! If you have read anything about User Stories you know of the INVEST properties that each story should respect. The second property is N - Negotiable. This means that even if you commit to deliver the value made explicit in the User Story, the actual way you deliver that value is to be negotiated with the Product Owner/Customer. This negotiation is the key dynamic for teams that want to succeed at meeting their iteration commitments.

At the start of an iteration you discuss which User Stories are key to deliver. Later on, when you find out that you are over budget, take clear actions, together with the Product Owner, to identify the key functionality in each of the remaining stories that should still be delivered within the ongoing iteration. And drop the rest!

There's no point in doing overtime and delivering something you will have to throw away later (because of quality problems or otherwise). Sit down with the product owner and decide what to drop and what to keep, in order to deliver the maximum amount of customer value without breaking the most important commitments: Schedule and Quality.
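
As a rough illustration of that conversation, here is a minimal sketch (mine, not a practice prescribed in this post); the story names, the value and effort numbers, and the value-per-effort rule are all assumptions made up for the example.

```python
# Illustrative sketch only: one way a team and Product Owner might re-scope
# mid-iteration. Values, effort estimates and the greedy rule are assumptions.
from dataclasses import dataclass

@dataclass
class Story:
    name: str
    value: int    # relative business value, agreed with the Product Owner
    effort: int   # remaining effort estimate (e.g. points or ideal hours)

def rescope(remaining_stories, remaining_capacity):
    """Split remaining stories into (keep, drop) so that 'keep' fits capacity."""
    keep, drop = [], []
    # Favor stories that deliver the most value per unit of effort.
    for story in sorted(remaining_stories,
                        key=lambda s: s.value / s.effort, reverse=True):
        if story.effort <= remaining_capacity:
            keep.append(story)
            remaining_capacity -= story.effort
        else:
            drop.append(story)  # candidate to drop or slim down with the PO
    return keep, drop

stories = [Story("Export report", 8, 5), Story("Login audit log", 5, 3),
           Story("Fancy animations", 2, 4)]
keep, drop = rescope(stories, remaining_capacity=8)
print("keep:", [s.name for s in keep])
print("drop or renegotiate:", [s.name for s in drop])
```

In reality the "value" is a negotiation with the Product Owner, not a number in a script, but the shape of the decision is the same: keep what fits, drop or slim down what doesn't.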

Nurture your relationship with the product owner, as that is the most important relationship for an Agile team.

Photo credit: d_oracle @ Flickr

Labels: , , ,

at 10:39 | 0 comments
RSS link

Bookmark and Share

Monday, July 27, 2009

Can Scrum and Kanban be used for non-software development work?

Can Scrum or Kanban be used to manage any other work than software development? Do the same concepts apply?

This past weekend there's been some chatter on Twitter about using Kanban or Scrum for managing work outside the software development realm.

People seem (pleasantly) surprised that it can be done. But can it really? The answer is YES! But why do methods that were clearly developed with software development in mind extend to managing other types of work?

Well, we have to look at the roots of these methods to understand that. Both Kanban and Scrum base their core practices around a list of "stuff" to do. In software development, features/stories fill these lists, or backlogs, but in an advertising agency, for example, the "stuff" will be something like drawing proposals for clients, brainstorming meetings, research work, etc.

Through these lists, Kanban and Scrum establish an order of work (what to do now?) and allow what is normally a disconnected mass of work to be organized in a way that a team can use, not only to execute, but also to follow up on their work (e.g. through a burndown/burnup chart).
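
A burndown, after all, is nothing more than the remaining effort in that list tracked over time. Here is a minimal sketch (item names and numbers are made up, illustration only) that prints a text burndown for an advertising-agency style backlog.

```python
# Illustrative sketch only (not from the post): a tiny text burndown chart
# built from a backlog. Item names and effort numbers are made-up examples.
backlog = {"draft proposals": 8, "client brainstorm": 3, "market research": 5,
           "layout sketches": 6, "final pitch deck": 8}

total = sum(backlog.values())        # total effort at the start of the period
completed_per_day = [0, 5, 3, 8, 4]  # effort finished on each day so far

remaining = total
for day, done in enumerate(completed_per_day, start=1):
    remaining -= done
    print(f"day {day:>2}: remaining {remaining:>3}  " + "#" * remaining)
```

The same chart works whether the "stuff" is user stories or poster proposals; only the backlog contents change.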

Indeed, at their root both Kanban and Scrum are just formalized work management methods that were created to overcome the chaos that is still so normal in software development organizations. The hidden truth, however, is that the same chaos is present elsewhere, and those companies can benefit as much from Scrum/Kanban as the software industry is benefiting now.

The corollary of this is that Scrum/Kanban can be used for almost anything, from building a spaceship to planting potatoes. I have even used them for personal, non-software work, and I have helped others establish some of the ideas behind Scrum as their own "working method".

There's a catch, though!



Getting back to the backlogs: both these methodologies need a steady stream of work to be analyzed, prioritized and ultimately picked up to be worked on. Because the constraint (i.e. the part of the system that is always busy) in the process is normally the project team, it is imperative that special attention be given to the backlog, where work is listed and described. It is often a good practice to regularly spend time analyzing the work in the backlog to make sure that it fits the Vision for the product/project and that it is sufficiently well defined, so that the team will not waste their time trying to figure out what the work item really means.
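
As a sketch of what "sufficiently well defined" could mean in practice, here is a simple check (my illustration, not a rule from Scrum or Kanban); the fields and criteria are assumptions chosen for the example.

```python
# Illustrative sketch only: a simple "is this backlog item ready?" check a
# product owner might run before the team pulls work in. The fields and
# rules here are assumptions for the example, not a standard from the post.
def is_ready(item):
    """Return (ready, reasons) for a backlog item represented as a dict."""
    reasons = []
    if not item.get("description"):
        reasons.append("missing a description of the customer need")
    if not item.get("acceptance_criteria"):
        reasons.append("no acceptance criteria agreed with the team")
    if item.get("estimate") is None:
        reasons.append("not yet estimated by the team")
    if not item.get("fits_vision", False):
        reasons.append("does not clearly fit the product Vision")
    return (not reasons), reasons

item = {"description": "Draft three poster proposals for client X",
        "acceptance_criteria": ["3 distinct concepts", "print-ready PDFs"],
        "estimate": 5, "fits_vision": True}
ready, reasons = is_ready(item)
if ready:
    print("ready to be pulled into the sprint")
else:
    print("needs refinement:", reasons)
```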

For this we normally have a manager (called the product owner in Scrum) whose time is spent analyzing the project/product and translating customer needs into a format that the team can understand.

This work is crucial, and it is the real reason why teams need "managers" (the product owner in Scrum).

Manage your work for better performance



The key message here is this: Your work performance will only be as good as you are at managing the backlogs.

Because Scrum/Kanban are work management methods, it follows that the team's performance will depend heavily on how the backlogs are managed by the team and their manager. Therefore, the hidden pearl in Scrum and Kanban is that you should focus a lot (but not all) of your energy on defining and improving how you manage your work. And this is valid for any type of work, be it software or creative projects in an advertising agency.

Labels: , , , , , ,

at 13:33 | 6 comments
RSS link

Bookmark and Share

 
(c) All rights reserved