The Quick & Dirty Myth

There is no such thing as “Quick and Dirty”. All my experience tells me “Quick and Dirty” actually means “Slow and Dirty” or “Slow, Dirty and everyone is unhappy.”

It came up again last week. I was asked: “When there is pressure to deliver something and you have the choice between getting it done, doing it right or getting it done and right, where do you come down?” It was a fancy way of dressing up the question: “Will you do quick and dirty?”

Or rather, since we were talking about the management of software development, the question was: “Will you make your developers do it quick and dirty?”

This question, in its various forms, is used as a kind of test for us. The person asking the question wants to know if you have a similar set of values to them. In order to answer the question “right” you need to look into the soul of the person sitting opposite you and determine what they value.

All my experience tells me that when someone chooses a “quick and dirty” path a) it takes far longer than expected, b) the quality is so low we go around the loop a few times, making things slower, c) everyone involved will feel bad, even “dirty”, about the solution and d) it causes more problems very soon.

Perhaps (c) and (d) are where the name comes from: “A choice of actions which makes you feel dirty and quickly causes more problems.”

The “quick and dirty” or “slow and right” dilemma is only a problem because of the way we approach the question. There is an assumption that there are two, and only two, options. One will take a while and one can be done quickly, one is “good” and one is “dirty.” These two options are then juxtaposed and connected by the “tyranny of the OR.”

Everything about the question smells of winner and loser, or zero-sum game to give it a fancy name. “Quick and dirty” means “customer wins, developer loses”. And the alternative is “customer loses, developer wins.”

As someone who manages software development teams I find the question particularly naive. When I’m managing the team I can’t actually make anyone do anything. I may tell a developer to “do it quick and dirty” but I have no way of knowing what they will actually do. Indeed, asking developers to repeatedly do things “quick and dirty” not only damages the code base but means I’ll probably p*** off my developers. Before long they will be heading for the exit.

My solution: don’t accept this framing of the problem. Get more options on the table. (If you can stomach the tired old cliche: find a win-win solution.)

Define what you mean by the “quick and dirty” solution. What will it involve doing? Why do we think it is “dirty”? Why do we think it is “quick”? What will be the outcome? Will it meet our needs? Indeed, what is the real need? Is it what we think it is?

Repeat the exercise for your “slow and clean” option. Now, if these two are opposite ends of the spectrum, what are all the things in between? What other options are available? What questions do you have which you should clarify first? Often there are questions over what is actually needed; when these are clarified the problem and solution may look very different.

Now of the options in front of you can you combine any of them?

Usually when I do this exercise more options open up. There are now “clean and quick”, “minimal” and other answers available. Sometimes “do nothing” becomes an option.

Unfortunately you will also find that sometimes your options are very limited. Everything looks “dirty”. Sometimes the “right” solution is not even on the table, so there never was such an option in the first place. Doing it this way also makes people feel better: if you do have to compromise somewhere then at least everyone involved has had a chance to have their say and look at the options.

If this all sounds like it will take too long then try it and see. Taking longer in the “what are we doing” phase usually saves time in the “do it” phase.

This is not to say there are never times when you should take fast action. Sometimes the right course of action is to “damn the torpedoes,” push on, take the risk and get something done. However those occasions are about getting stuff done, not about making choices.

So, next time someone asks you to do “quick and dirty” remember: there is no such thing as “quick and dirty,” only “dirty and slow.” Then get some options on the table.

What is Advanced Agile?

Suggestions please.

A few weeks ago I received a request to create and deliver an “Advanced Agile” training course for a Scandinavian client. My first reaction was: that is more of a consulting or coaching assignment. But the more I thought it through, and considered the kind of material I cover in my own “Agile Foundations” course and the introductory courses I know from other people, the more I realized how much more there was I could cover.

I’m now most of the way through creating the course and I’ll deliver it in a couple of weeks. I’ll report back on what I put in and what people think.

In the meantime, I was wondering: what do readers of this blog consider Advanced Agile?

Suggestions please, on this blog or e-mail me and I’ll summarize them.

Defining Agile in e-Technology Management

I have a piece in the latest issue of e-Technology Management entitled “Agile agile everywhere – not a definition to speak of”. Believe it or not, for all the stuff I write in this blog about “agile” I’ve been trying to define what “Agile” is for a long time. When I deliver a training course I always want to say “Agile is…” but there is no simple, short definition I can come up with.

In the e-Technology Management article I distinguish between Agile methods, the Agile toolkit and Agility or, to put it another way, “the state of being Agile”. The term “Agile” is really a label for all of these ideas. It’s shorthand, it’s useful in conversation, but if you want to really understand it you have to go beyond the label.

One of the problems with “Agile” is that it is often defined as “not the waterfall.” Someone says “What is Agile?” and the answer starts out “Well, you know, traditionally we did requirements, then analysis, then development, then…”

Yes, it’s laziness; yes, I’m guilty; but I’m not the only one.

When we define Agile by what it is not then the scope is unlimited. In my experience, very little development work happens the way it is supposed to in the “waterfall”. So most development is “not waterfall”. Thus, if you define Agile as what it is not (i.e. not the waterfall), and most development is not waterfall (at the point the work is done), then most development is Agile.

Anyway, I’m sure this is a theme I’ll return to.

97 Things Every Programmer Should Know

Behind the scenes Kevlin Henney has been busy these last few months soliciting and editing contributions to a new O’Reilly web publication entitled 97 Things Every Programmer Should Know.

I’m proud to say a couple of my contributions have made it into the list:


Other contributions are from names which will be familiar to ACCU’ers: Michael Feathers, “Uncle” Bob Martin, Peter Sommerlad, Pete Goodliffe, Klaus Marquardt, Giovanni Asproni, Jon Jagger and of course Kevlin himself. Plus there are contributions from many others.

Just don’t ask me why it is 97 and not 98, 99 or 96 things. 97 it is.

Not quite the doom of Agile

Just in the nick of time I resolved my babysitting dilemma and made it to Tom Gilb’s SPA London talk, only missing the first 10 minutes. Here are a few notes I took – I don’t think I missed anything radical in the first few minutes, but if I did I’m sure someone will correct me.

As always Tom was provocative and seeded a lively debate with the audience. As always Tom pushed for quantification and measurement of what software work is trying to achieve and how successful it is. And, as always, many in the audience pushed back on this.

Without going into too much detail: Tom says you can quantify just about anything, and for software development you should. Many other people question whether you can quantify everything, whether it is necessary to quantify, and whether it is cost-effective to quantify.

I see both sides of the debate. I tend towards Tom’s point of view but I understand the reservations and share many of them. I also think there are some things that are extremely hard to quantify. Quantification is a skill Tom has. Many others don’t, and one of the results is a lot of bad quantification, bad metrics and consequently a lot of damage done.

Agile is not doomed the way the talk synopsis made out. Indeed the conclusion is much more that Tom’s methods – Evo and value management – are complementary to Agile, and specifically to Scrum.

He didn’t address the “XP is already dead” quote until he was questioned at the end. This assertion seems to be based on the assumption that XP was mainly hype and now the hype has gone XP has gone.

Where Tom sees Agile methods going wrong is in the lack of understanding about what needs doing, for whom it needs doing, and a focus on “features” over “performance.” I agree with him here; I’ve blogged and written elsewhere about this problem myself (e.g. Requirements: the next challenge). I even gave a talk at the BBC a few years ago entitled “What’s wrong with Agile”.

Tom does believe teams need managing. Although he didn’t say as much, that implicitly rejects one of the pillars of Scrum: the self-organizing team. I don’t think Tom wants to see a return to command and control management – far from it – but he does see a need for active management. And that brings us to the next point.

Stakeholders and Product Owners
One of Tom’s key points is the role of stakeholders. That is, the need for all software development work to identify the stakeholders in the work and what they value from it. Different stakeholders have different values and want different things, and these things change over time.

Tom asserted that stakeholder identification and management is not covered by Agile methods. Last night I thought “I’m sure it is in…” but now that I look through my collection of Agile books I see it’s pretty thin on the ground. That said, there aren’t many references in Tom’s own Competitive Engineering book either.

As far as I am concerned, stakeholder identification, communication and management falls firmly within the Product Owner role. This is part of what Product Owners need to do when they are off the Agile stage and is part of the reason this role is so important.

At one point Tom stated that the Product Owner role dooms the project. I challenged him on this and it turns out we are in agreement. The doom occurs when the Product Owner is not managing the stakeholders.

Value over stories
Stakeholders have needs. Agile teams often interpret those needs as features. Tom suggests that User Stories feed this assumption. I’ve blogged about this before: Requirements not functionality.

Tom proposes that by understanding who the stakeholders are and what they value we can start to deliver value rather than features. Now, I believe User Stories can be used to communicate this if they are written to communicate this. Tom’s own alternative to User Stories – Planguage – emphasizes quantification, and thus value, more than User Stories do. That said, Planguage is more difficult to pick up and use than User Stories.

Value driven planning
When Tom puts all this together you get Value Driven Planning. Here the idea is that critical stakeholders determine the value of software work.

Naturally there is conflict here as stakeholders argue about which work has the greatest value. The important thing is: conflict is natural. The conflict itself is not the problem; leaving it unresolved is. Once conflict is identified it can be resolved.

Potential v. Delivered value
Another key point to understand is the difference between potential value and delivered value. Too many projects have potential value which is not delivered because the software is not used, the users don’t understand how to use it to the full, or something else gets in the way.

It is not enough for the development team to measure the value they think they have delivered – e.g. functionality that can be used. They need to measure what value is created.

When there is a difference between these two numbers then it is time to act. If potential value is 1024 and actual delivered value is 500 then over half the potential value is being lost.
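As a sketch of that arithmetic, using the illustrative numbers above (the `value_gap` helper is my own construction, not anything from Tom’s Evo or Planguage):

```python
def value_gap(potential, delivered):
    """Return the absolute value gap and the fraction of potential value realized."""
    gap = potential - delivered
    realized = delivered / potential
    return gap, realized

# Illustrative numbers from the text: potential 1024, delivered 500.
gap, realized = value_gap(1024, 500)
print(gap)                # 524 units of potential value lost
print(f"{realized:.0%}")  # 49% — under half the potential value realized
```

The point is not the helper itself but that both numbers have to be measured before the gap, and the need to act, becomes visible.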

For me a lot of this debate comes down to scaling. In small teams, with few stakeholders and small(ish) budgets, doing a lot of stakeholder analysis, quantification and measurement may well be more expensive than just doing the work. As someone pointed out, the trial-and-error approach of iteration works well, and cheaply.

When there is a big team, many stakeholders and thus a big budget, then not only is it worth making these investments, it is necessary: to ensure the team is doing the right thing, that everyone agrees on what is being done and understands why, and that the budget is justified.

Perhaps that is my biggest “take away” lesson from last night.

In the end…
As you might tell, and as is often the case with Tom, he hits you with hundreds of ideas, many of which are left hanging there for you to follow up or think about later.

The conclusion to my previous blog entries this week (The Doom of Agile, Thursday and Tuesday) is: Tom doesn’t really disagree with Agile; he is highlighting an issue others have seen and wants to shift the focus.

If you are interested Tom’s slides can be downloaded (be warned, this is a big presentation, 23Mb in PPTX format). Mark Stringer at AgileLab has some comments on the presentation too.

Postscript: The doom of Agile

My last blog entry (“SPA London, Tom Gilb and the doom of Agile”) generated some comments and a couple of private e-mails, so here is a postscript.

I think Mark is right when he commented: “they are ‘fads’, the hope is they will just become nameless best practices”. Many of the Agile practices were already best practices in places, but until they were named, publicized and, yes, hyped, few people knew of them.

Have a look at EPISODES from 1995, that’s 4 years before “XP” and 6 years before “Agile”. It’s documenting best practices. Or one of the earliest attempts to document Scrum, The Scrum Pattern Language, in 1998.

Both of these are patterns, which means they document what is being done, not what is merely suggested. They are documented because people think these are good practices.

One day the Agile “hype” will die down and what we are left with is the practices. That’s what I was trying to say when I wrote “If XP is dead then it died so Agile might live.” The hype around XP is less now; much of XP has already become best practice – think of TDD, think of daily stand-up meetings. Obsessing about XP is wrong but doing the XP things is right.

That’s why I also agree with Rob Bowley’s comment when he wrote: “XP is here to stay (as much as Agile is) because through time it has proven to be effective.”

We may dislike hype but it’s part of the way the world works. Object oriented programming, UML, ISO 9000 and CMM have all been hyped and left their mark. SOA is hyped today; it means a lot to some but technically it’s just a continuation of modularisation (first there were static libraries, then objects, then components and now services). Without hype people wouldn’t get to hear of these things. That’s why Simula and many other technologies have fallen by the wayside.

We might argue over which Agile method is best: XP, Scrum, Kanban?
We might disagree over which practices are applicable: TDD, BDD, stand-ups?
Or how to implement these ideas in unusual environments: distributed teams?

But I don’t see anyone seriously arguing for a return to “waterfall”.

I still hope to make it to Tom’s presentation on Wednesday but we seem to have a babysitting clash so I might be grounded. Won’t know until the last minute, unfortunately.