This blog has moved. Go to SoftwareDevelopmentToday.com for the latest posts.

Monday, August 24, 2009

About when I stopped worrying and embraced Engineering!


Product Managers often forget that they can make or break a project. If your team is using Scrum that is even more true, because Product Managers are (or should be) involved regularly in planning the work with the team, either as Product Manager or in the role of Product Owner.

It is therefore very important to carefully craft the messages that the Product Manager gives the team. It took me a while to understand this simple message, and this is my story.

As a Product Manager I was eager to give the team direction and clarity on the goals for the product. I communicated regularly with the team with more information on the market, the product, the customers and generic feedback on the direction they were taking.

Eager to achieve the goals we had set for ourselves, the team was churning out new features at a very good pace. Then the bomb hit. Every time we were supposed to go to production we faced major hurdles. The Retrospectives did not point to anything that could cause the quality problems we had, so I went on an investigation. Why was deployment to production so hard for a team that was delivering features at such a good pace (uncommon for other teams at that time)?

As a Product Manager I had to forget about what I wanted and had to concentrate on finding the root-cause for the problem. I interviewed the developers but nothing I found would explain the situation. The team was testing regularly in their environments and they were practicing Scrum "by the book".

It was then that it hit me. Having a conversation about the development process with one of the developers, I understood that they were neglecting their unit and integration tests, which in turn left them with a long feedback cycle for integration (only a few days, but still too long).

After I heard that, it was fairly easy to trace the deployment problems to the lack of automated, fast-cycle integration testing. In their eagerness to deliver more features the developers would be developing up to the last day and would not have time to do the integration testing for those last changes. In turn, that led to many problems when it came to deploy.

Through that conversation I realized that the team was being implicitly rewarded for delivering more features: every time they delivered a new feature, I praised them. However, they were not being rewarded for building the unit and integration tests that would prevent the quality problems.

The end result was that quality-sustaining practices were being neglected. Having understood that, I changed my communication with the team. From that time on I started asking them if they had the integration and unit tests for every feature they delivered and started giving them praise for delivering tested features, not just any feature.
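For concreteness, here is a minimal sketch of the kind of fast, automated unit and integration checks the team was missing. Everything here is invented for illustration (the function names, the discount rule, the tests); the point is only that both levels run in seconds, so the integration feedback cycle shrinks from days to minutes.

```python
# Hypothetical example: fast checks at two levels. All names and the
# pricing rule are invented; only the structure matters.

def apply_discount(price: float, percent: float) -> float:
    """Unit under test: discount a single price."""
    return round(price * (1 - percent / 100), 2)

def order_total(prices: list[float], percent: float) -> float:
    """Integration point: the discount rule combined with order summing."""
    return round(sum(apply_discount(p, percent) for p in prices), 2)

def test_apply_discount():
    # Unit test: runs in milliseconds, catches regressions at the source.
    assert apply_discount(100.0, 10) == 90.0

def test_order_total():
    # Integration test: verifies the pieces work together, still fast.
    assert order_total([100.0, 50.0], 10) == 135.0

if __name__ == "__main__":
    test_apply_discount()
    test_order_total()
    print("all checks passed")
```

Run on every commit (for example with a runner such as pytest), a suite like this gives the team the short feedback loop this story is about, instead of discovering problems at deployment time.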

Before this happened to me I was not aware of how strong an influence the Product Manager's message can have on team behavior. "They are the engineers" - I thought - "they should know what they need to do". It's not that simple!

Photo credit: pcalcado @ Flickr


Thursday, August 06, 2009

The CMMI folks are out to get you! Seriously, they are starting with Scrum, what's next?

The folks talking about CMMI and Scrum are dangerous! Not least because they miss some of the most basic points about software development! You would think they would know about software, but you would be wrong!

Check this out:

In an article about CMMI and Scrum, the site ExecutiveBrief (go there at your own risk) states something like:
There is a tradeoff between cost and quality


The person who wrote that line cannot have written a single line of production code in his/her life! What a f*?€%# idiot (excuse my French, I'm sounding like @bellware).

The point is that quality software is cheaper, not more expensive than crappy software. For everybody: not just the customer, but also the vendor/developer.

When people try to sell you the idea that CMMI and Scrum are complementary they are only disguising Scrum in clothes that are easier to swallow for the gray-suited execs that don't understand software at all and should have been fired a long time ago!


Sunday, March 08, 2009

The Software quality hash tag #swquality

If you do Twitter you probably know exactly what I'm referring to; if you don't, here's an article you should read.

Together with @most_alive and @Mendelt, I just coined a hash tag to aggregate discussions about the cost of quality in software development. The hash tag is #swquality; just add it to every tweet you write about the subject.

Join the conversation!


Saturday, March 07, 2009

Perfection in software is cheaper! Not more expensive...

The fallacy of perfection = expensive has many forms, two of which I just ran into in the blogosphere. What is this fallacy? Simple: you say to yourself, "perfection in software development is way too expensive, therefore I should not even try to achieve it!" But that's a false problem, because perfection is only expensive if you continue to work the way you do today. In order to take the next step towards perfection you can change the way you work (process, guidelines, tools, etc.), and in the process you may end up with a cheaper way to do what you do today and be closer to perfection!

In fact, there is not a single piece of evidence that would corroborate the idea that being better (i.e. trying to achieve perfection) is more expensive than not trying to achieve it.

But let's get back to the examples that I just ran into on the net:

The first fallacy: the cost of Zero-Defects



In a post by psabilla on shmula we are faced with the idea that Zero Defects would be too expensive to achieve and that therefore, the author suggests, it should not even be tried.

Here the author even presents a graph of the theoretical cost of achieving Zero Defects. It is worth noting that this graph is completely hypothetical, with no connection to actual data collected from a real project really trying to achieve Zero Defects.

Further along, the author unmasks the prejudice that does not allow a clearer view into why Zero Defects is not only possible but, if achieved, will yield a much faster _AND_ cheaper software development process.

The author writes:
As defects are identified and eliminated, there will be theoretically few defects. But, this means that identifying defects will require more effort and will become more and more difficult, thus increasing the costs of this activity, along with the subsequent costs to fix the defects identified: The costs to inspect and test increases as there are fewer and fewer defects.


In the paragraph above, take note of the causality that unmasks the prejudice. The author starts with a seemingly obvious phrase (removing defects leads to fewer defects being present) and then exposes the prejudice behind the article: fewer defects means that it will be more expensive to identify the remaining defects.

Think about that phrase for a minute: "if we have fewer defects it will be more expensive to find other defects".

Here are the problems with that phrase:

  • It assumes that the role of testing is to find+remove defects: this is wrong, very wrong, because if a defect is added to the code base and only found much later, that is a much more expensive process than if we remove the defect closer to the source, ideally immediately after it has been added. I've written about this here. In other words: the role of testing is to prevent defects from being added to the source in the first place!
  • The second (wrong) assumption is that if we have fewer (presumably a lot fewer) defects we are then going to spend more money on finding the few that are left. Well, I don't know what projects the author has worked on, but as of today I don't know of any software project that has achieved zero defects upon release. This actually means that at some point in the software development+release process the project team will say "alright, we have spent enough time looking for defects and not finding them, it is time to release". This suggests that project teams are smart enough to know when the cost of searching for a defect is higher than the benefit it brings.
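To make the first point concrete, here is a minimal test-first sketch of "preventing defects from being added": the check exists before (or together with) the code, so a defect fails immediately instead of surviving to a later find-and-remove phase. The parser and its expected behavior are invented for illustration.

```python
# Invented example: the test is written first and runs on every change,
# so a defect is rejected at the source rather than hunted for later.

def parse_version(text: str) -> tuple[int, int, int]:
    """Split a 'major.minor.patch' string into three integers."""
    major, minor, patch = text.split(".")
    return int(major), int(minor), int(patch)

def test_parse_version():
    # This check predates any refactoring of parse_version: change the
    # function, run the test, and a defect never enters the code base.
    assert parse_version("1.2.3") == (1, 2, 3)
    assert parse_version("10.0.1") == (10, 0, 1)

if __name__ == "__main__":
    test_parse_version()
    print("ok")
```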


Assuming the arguments above are sound, we have to acknowledge that the basis for the author's argument is false, and indeed Zero Defects is not only possible but also (given people's pragmatism) does not represent any additional cost to projects.

The second fallacy: You can't get rid of defects without Inspections



Jurgen Appelo uses some of the data/arguments from the previously cited post to go on to say that Zero Inspections is an impossible and too costly goal. Note that Jurgen's post touches on other issues; I will not comment on those and will restrict myself to the issue of the impossibility or prohibitive cost of Zero Inspections.

The author builds on the (already demonstrated to be wrong) argument that Zero Defects is too expensive to also say that we cannot live without Inspections.

The author confuses Inspections with normal ways to prevent errors from going too far. I would agree that we need some inspections, but not in the way that Jurgen, or indeed Tom Gilb, has suggested.

There is already some data available (albeit disputed) showing that Inspections provide little or no value and mostly focus on inspecting the format of the artifact (templates, coding conventions, grammatical defects, wrong color, wrong use of terms, etc.) and technicalities (curly braces where none are needed, too-short method names, etc.) instead of the fundamental defect magnets that logical flaws and thread-safety issues represent. This is also my experience.

Taking the example of code reviews, these are often more useful for knowledge sharing than for defect identification (not to mention that reviewing code is quite expensive -- although you should still do it in many situations :).

My point is this: Inspections have been around for tens of years, so why does the software industry consistently ignore them? Because they don't deliver enough value! At least Inspections in the sense of "let's get together in an additional meeting and review this artifact to make sure it has good quality, and then follow that with a formal process to change/update the artifact and get it approved" (code can be the artifact).

Ad-hoc inspections are often much more convenient, cheaper and value-adding, but this is not what Jurgen or Tom Gilb suggest should be done! Hence my problem with Jurgen's article.

Zero Inspections is not only possible (been there, done that, even with Large - big L - projects) but it is also a catalyst for teams to prepare not to need those Inspections. Imagine this: if you have an Inspection coming up for that last module you changed, how motivated are you going to be to do your job in the best way possible and aim for Zero Defects? The answer is: not motivated! In fact, my experience is that people will just aim to "avoid being embarrassed/humiliated during the Inspection meeting".

If you have the practices that support it and aim for Zero Defects (automated tests, early Acceptance Test definition, etc.) you are going to be much more motivated (and supported) to achieve a very low (approaching Zero) defect count!

Oh, and by the way it follows that if you can achieve (proven) Zero Defects why do you need the Inspections?

Conclusion



There's a hidden (OK, not too hidden) argument here. If you aim for (and approximate) Zero Defects then your process will be faster (less rework due to defects, fewer inspections) and cheaper (less technical support needed, fewer reimbursements to annoyed customers).

But there's a catch! The catch is: you cannot achieve Zero Defects or Zero Inspections unless you invest in and change the way you work! It takes effort, dedication and constant experimenting and learning. However, if you don't do it your competitors will. Sooner or later. And when they do you will find yourself with a more expensive product that annoys your customers. What do you do then?


Saturday, December 13, 2008

The "it's not my bug" anti-pattern

A friend and I were discussing the organization of feature teams at a local company.

They were previously organized in component teams, which led to a lot of inefficiencies and disconnected work. This happened because one team would only work on one component, and when one feature required work in many components, that feature may not have been finalized at the end of the sprint, as one of the teams may have been busy with other work.

This highlights one of the common problems with component teams: they do not allow for an efficient allocation of work, as dependencies increase between independent teams. You may still want to organize around component teams, but not if your goal is "efficiency".

But the anti-pattern we talked about actually appears when you have feature teams. If you have feature teams you will be able to assign one backlog item completely to one team, thereby reducing the dependencies between teams and potentially increasing the throughput of your program/group/unit/company.

However, there's a catch (there always is). You need to be explicitly clear about who fixes the bugs when nobody else wants them. The problem is this: when a bug is discovered during one iteration, it is likely that many teams may have touched the component that is the reason for the bug (assuming you can trace it). So, who takes it?

The anti-pattern tells us that the teams will *all* say that another team's code is causing the problem, nothing gets investigated, and many bugs cross the iteration line -- which should never happen.

How to solve this problem: you should state explicitly which team takes *all* the bugs under investigation. That way if the teams cannot agree on who takes the ball (in a Scrum of Scrums, say) there will be a clear owner for the investigation and potentially the fix.

Normally this would be the team that is assigned to "maintenance work" during that sprint, but you can decide otherwise if the maintenance team is too busy.


Saturday, November 29, 2008

The Skill issue, the industry shame

I was reading Jason's blog when I came across
this post. I could not agree more.

Way too many times I bump into problems that can be traced directly to the idea that you can hire just anyone, with any skill level and they will perform to the expected level of professionalism. This is pure bulls#%&!

Some time ago (just before the bubble burst in 1999) a company I was familiar with was hiring QA people just because they knew how to boot Windows. Yeah right! Way to go!

We also see the same with people putting together teams that behave like a set of individuals all pushing in different directions, because "anyone" can be a manager! Stop believing in magic. You don't get a high-performing team if you don't have a proper leader in the team (the leadership can be shared, BTW; no need for a "hero", in fact that's sometimes worse). Start coaching the team and then help them understand how to work together!

If we put these two things together: hiring coders and testers who have no skill, and promoting people to leadership positions who have no leadership skills, what do you get? You got it: our software industry!

Can you believe it! This type of behavior and belief is rampant in our industry. Small wonder that we will be seeing lots of people being laid off in the near future...

PS: if you are smart and really good at what you do (testing or coding), you are better off these days starting your own consulting company, charging bucketloads of money and getting out once you are fed up with the local incompetence!


Tuesday, September 23, 2008

The cost of un-fixed bugs or Good Code on top of Bad Code

While talking with a colleague, we started discussing the costs related to not fixing a bug immediately (yes, this does mean that you have a way to immediately figure out whether you introduced a bug).

The reason for fixing bugs immediately is not only the cost of context switching later on when you have to come back; it is even more the cumulative cost of fixing all the code you have built on top of the bug you introduced.

Think about it this way. Bug = Bad code. If you have a bug and continue development you are developing working code (good code) on top of that bug (bad code).

When you later on go back to fix the bad code, you will then have to fix all of the good code that is now broken because of the bug fix you introduced.

Software is built in stacks; the lower levels directly affect the upper levels (many of them). If you have a bug lower in the stack and then fix it, you will have to change all of the upper levels! And that is where the real cost is.

So, if you find a bug, fix it now. If you did not find a bug, think of ways you could find it faster -- before you build good code on top of bad code.
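Here is a contrived illustration of good code on top of bad code: a low-level helper returns cents instead of euros (the bug), and a caller silently compensates. All names and numbers are invented; the point is that fixing the helper later forces you to find and fix every compensating caller as well.

```python
# Invented example of "good code on top of bad code".

def item_price(quantity: int) -> float:
    # BUG in the lower layer: returns cents (should be quantity * 2.50 euros).
    return quantity * 250

def invoice_total(quantities: list[int]) -> float:
    # "Good code" built on top of the bug: it divides by 100 to compensate.
    # Once item_price is fixed, this layer silently becomes wrong too.
    return sum(item_price(q) for q in quantities) / 100

if __name__ == "__main__":
    print(invoice_total([1, 2]))  # prints 7.5
```

The earlier the bug in `item_price` is caught, the fewer layers like `invoice_total` exist to be broken by the eventual fix.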


Monday, September 22, 2008

Testers are developers! Stop breaking them apart

First, one clarification. Testing is needed, required, critical and very much a MUST. With that out of the way, let's get to the post.

Many people have mentioned the need to include testers in Scrum or agile, and planning your processes with a testing step somewhere (normally close to the end).

All of these ideas are OK. There's nothing wrong with testing; the problem is that testing is done too late. Would you buy a car whose engine was "fixed" after it came off the line and didn't start? -- I certainly would not!

In software, like in cars, problems should be found early, very early. In fact problems should be found so early that they never reach the (needed) testing phase at the end (of the day preferably). This is why we need to stop thinking about testing as something separate from development or even coding. Testing must be part of every step of the process.

I don't agree with the proposal in this article. The tester should not be the one developing the test cases in isolation, and certainly not after the team has planned! The reason is that the tester will find faulty assumptions and possible design problems while thinking about the test cases. That information is crucial for the team to avoid building problems into the software.

Testers should be involved from the start. In fact, for each story that the team is planning in the sprint planning meeting, they should at the same time plan the test cases, review the initial assumptions and only after that should they actually plan the tasks to accomplish that story.

The best way to develop quality software is not to test the bugs out, it is to develop quality in -- this is why testers should be involved in every step of the process (Scrum in this example).

As Deming said:
Build quality in, don't inspect quality in.


Oh, and by the way, if the test cases are (really) written, why would the coder wait for the tester to verify that her code change works? -- Just run the tests immediately! It seems to me that the coder not running the tests herself is just a way to avoid feedback, which is key to Agile!


Sunday, August 10, 2008

Why Apple should watch out or lose its newly acquired customers

Apple had a considerable amount of credibility when they started their iPod "offensive" some years ago. So much credibility that people were willing to overlook critical customer back-stabbing such as iTunes being DRM-ridden, the iTV (oops, Apple TV) being more expensive in Europe even though there's no content for it at all in most countries (seriously!), or even the latest MobileMe quality problems, not to mention Apple's less than honest statement about the "push" feature in MobileMe.

Now, they've stooped to a new low. They have started outright lying (or "hiding the details" if you listen to PR).

Apple, come on! We love your products, but there's only so much back-stabbing we can take! Get your act together and start honoring your promises of creating great products for those of us that have a "digital life". Seriously, our patience is running out...


Monday, April 21, 2008

Respect for people, the translator's edition

In the spirit of Lean, my colleague and friend Mika Pehkonen writes about how they are able to respect people and get them to do what they are best at and most motivated to do. I'd say that's a win-win-win situation!

we pay our translators by the hour, not by word count. This means that the translator gets fair pay for their work, they do not need to spend time on proofing computer propagated translation matches that are by default out of context and they get to concentrate on their key expertise, translating concepts from one language and culture to the other. This, combined with assisting scripts and tools, allows the translator more ownership over their own work in ways that are more meaningful than just reviewing and translating words in a software.


That's a message that is often missed in the frenzy of Agile or Lean adoption. An example is testing work, especially regression testing, which is mostly done manually and where "how many test cases can you execute an hour?" is the most asked question. That kind of approach clearly leads to what Mika has managed to avoid: spending most of your money on low-value-added work that does not motivate and by its very nature reduces the quality of the output. I usually compare this to the person in an old-school factory whose job is to make sure that Coke bottles do not have too much coke in them, reviewing hundreds of bottles against a white screen...


Monday, March 17, 2008

Testing to script is waste, creative testing is extremely valuable

Testing is a hard job. Imagine this: you have to make sure an application of more than 2 million lines of code is ready to ship. It all depends on you and your team of 2 testers.

How do you do it? Well, one way is to make sure you cover all the possible use cases and test for those. But those can go into the thousands; there's no hope you can test all of them in a short period of time (let's say 3 months or so...). Well, now we just made it even more difficult: we have to release every 4 weeks. Oh, and did we tell you that we are changing the architecture as we go? Incrementally, of course, but nevertheless.

How would you cope with a situation like this? Yes, the answer is you would not (just setting up my answer... wait for it). The answer is that you must make sure you are never in this position!

How do you avoid being in a position where you have to test a large piece of code, with large code changes ongoing, and still release every 4 weeks? Test automation. All tests that can be automated should be, at all levels: unit, integration, system, performance, reliability, you name it.
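One possible shape for "automate at all levels" is to tag each check with its level and choose which levels run when: the fast suite on every change, the slower ones nightly. The registry, the level names, and the sample checks below are invented for illustration.

```python
# Invented sketch: a tiny registry that tags tests by level.

TESTS = []  # (level, test function) pairs

def level(name):
    """Decorator that registers a test function under a level tag."""
    def register(fn):
        TESTS.append((name, fn))
        return fn
    return register

@level("unit")
def test_sum_of_ints():
    assert sum(map(int, "1 2 3".split())) == 6

@level("integration")
def test_joined_report():
    rows = [("a", 1), ("b", 2)]
    assert ", ".join(f"{k}={v}" for k, v in rows) == "a=1, b=2"

def run(levels):
    """Run every registered test whose level is selected; return the count."""
    count = 0
    for lvl, fn in TESTS:
        if lvl in levels:
            fn()
            count += 1
    return count

if __name__ == "__main__":
    print(run({"unit"}))                 # fast suite: every commit
    print(run({"unit", "integration"}))  # full suite: nightly build
```

In practice a real test runner (pytest markers, for instance) does this selection for you; the point is that the repetitive checks run automatically while the testers do the creative breaking.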

The point is this: testers' brain power is wasted if all they can do (or are allowed to do) is test against a static specification with a few tests added every 4 weeks. That's not the best way to get the most out of the smart people you have in your company. If you are not a tester, just imagine yourself having to go over the same 40-50 pages of tests every single iteration, month-in, month-out. How long would it take you to quit? I suspect not too long...

Additionally, if you consider the effect of familiarity (reading the same test cases 2-3 times a month for several months) on the quality of the testing, you quickly realize that manual testing against a script over and over again is the best way to let problems escape even the most dedicated tester's eyes.

So, what next? Well, test automation is one solution. The next step is to train your testers to be expert "breakers": their goal should be to find more and more ways to break your software, specifically ways you have not thought about!

The message is: testers are way too valuable and way too smart to have them spend their work-hours going over a brainless set of tests-to-spec. You will get a lot more from your test team if you automate the repetitive tasks and let them loose on your code.

This is, BTW, what Richard Feynman advocated when he reviewed the Challenger disaster in the '80s:
"(...) take an adversary attitude to the software development group, and tests and verifies the software as if it were a customer of the delivered product."


 
(c) All rights reserved