Blog

3 Styles of Agile (part 2)

In my last entry I set out what I call “3 Styles of Agile – Iterative, Incremental and Evolutionary.” In this entry I’d like to discuss the model as a whole.

Lest anyone start “my style is better than your style” I am at pains to point out that each style has merits. Regardless of whether any of us regards one style as superior, or one style as “wrong”, the model describes what I find in companies and teams. I also believe you can use Scrum, XP or Kanban in any of the three styles.

That said, I confess, I see evolutionary as “superior” to iterative and I probably don’t do a very good job of hiding my preference. Rationally I must accept that each style has its own advantages and disadvantages depending on your context.

So how should one use the model? Or rather what does the model show?

Firstly (#1) the model is useful in naming the way a team is working and resolving conflicts. I frequently find that one group of people in the organization thinks Agile development means one style (e.g. Evolutionary) while another group thinks it means another (e.g. Iterative). The result is conflict, with each group thinking the other is stupid and doesn’t understand.

This extends to much of the writing in books, blogs and elsewhere about agile. Sometimes the writer is implicitly assuming one style but their reader may be thinking of another. In the extreme, subscribers to one style will see the other as “Idealistic” or “Purist”, or contrariwise: thinking someone “wants to do Waterfall” or “lacks flexibility.” One person’s “Pragmatic Agile” is another’s “Flawed Agile.”

Simply recognising the model and talking about it exposes assumptions and beliefs. It can also help to recognise that different teams within the same organization will work to different models.

The second (#2) use of the model is as a change model. Typically a team with a dysfunctional (or sub-functional) process and wanting one particular style, say, evolutionary, can enter at iterative and advance up in steps. It is fairly easy to see how a team working in iterative style could become incremental. (Of course a chaotic team finding itself in a controlling environment might make the opposite transition, from evolutionary to iterative.)

Thirdly (#3) – and already implied in the last paragraph: organisations might rationally choose to adopt one style in preference to the others. For example, for an organization which follows a strict budgeting process, operates in a slow paced market and has no intention of changing its management approach (e.g. a large insurance company) it is probably right to adopt an Iterative approach. Adopting evolutionary would just upset too much of the organization and possibly destabilise it.

One of the things I would like to understand – and something I should devote more time to – is the forces which lead an organization to adopt one style over another. There are rational reasons for attempting to freeze out change (while achieving efficiency benefits) and there are rational reasons for embracing change (and sacrificing efficiency).

Now, while it sounds rational to say “A company can choose its Agile style” I don’t believe that is how it happens. Rather the style a company operates in has more to do with where they came from – what economists call path dependency.

Companies and teams seldom rationally choose the style they want. Rather their existing mental models and assumptions lead them to understand “agile” in one of the styles listed. As a result, when they encounter engineers, consultants and books which assume a different style they a) ignore them, b) get confused or c) dismiss them as “idealistic” or “not in the real world.”

The net result is that it is more important than ever to name these things. Just naming them (#1) and recognising the styles for what they are is a great step forward. What you do next is a far deeper issue tied up with the wider organization.

3 Styles: Iterative, Incremental and Evolutionary Agile (part 1)

When I’m teaching training courses (as I was this week at Skills Matter) or advising clients on the requirements side of software development (which I’m doing a lot of just now) I talk about a model I call “3 Styles of Agile”. Incredibly I’ve never blogged about this – although the model is hidden inside a couple of articles over the years.

So now the day has come… I don’t claim the “3 Styles model” is the way it should be, I only claim it is the way I find the world.

While “doing agile” on the code side of software development always comes back to the same things (stand-up meetings, test/behaviour driven development, code review/pair programming, stories, boards, etc.) the requirements side is very, very variable. The advice that is given varies widely, and the degree to which that advice is compatible with corporate structures and ways of working varies too.

However I find three recurring styles in which the requirements side operates and interfaces to development. I call these styles Iterative, Incremental and Evolutionary, and I usually draw this diagram:

[Diagram: the 3 Styles of Agile – Iterative, Incremental and Evolutionary]
I say style because I’m looking for a neutral word. I think you can use Scrum, XP and Kanban (or any other method) in any of the three styles. That said, I believe Kanban is a better fit for evolutionary while Scrum/XP are a better fit for Iterative and Incremental.

I try not to be judgemental. I know a lot of agile folk will see Evolutionary as superior – they may even consider Evolutionary to be the only True Agile – but actually I don’t think that is always the case. There are times when the other styles are “right.”

Let me describe the three styles:


Iterative

In this style the development team are doing lots of good stuff like: stand up meetings, planning meetings, short iterations or Kanban flow, test driven development, code review, refactoring, continuous integration and so on. I say they are doing it; it might be better to say “I hope they are doing it” because quite often some bit or other is missing. That’s not important for this model. The key thing is the dev team are doing it!

In this model requirements arrive in a requirements document en masse. In fact, the rest of the organization carries on as if nothing has changed; indeed this may be what the organization wants. In this model you hear people say things like “Agile is a delivery mechanism” and “Agile is for developers”.

The requirement document may even have been written by a consultant or analyst who is now gone. The document is “thrown over the fence” to another analyst or project manager who is expected to deliver everything (scope, features) within some fixed time frame for some budget. Delivery is most likely one “big bang” at the end of the project (when the team may be dissolved.)

In order to do this they use a bacon slicer. I’ve written about this before and called it Salami Agile. The requirements document exists and the job of the “Product Owner” is to slice off small pieces for the team to do every iteration.

The development team is insulated from the rest of the organization. There is probably still a change review board and any increase in scope is seen as a problem.

I call this iterative because the team is iterating but that’s about it. This is the natural style of large corporations, companies with annual budgets, senior managers who don’t understand IT and, in particular, banks.


Incremental

This style is mostly the same as Iterative; it looks similar to start with. The team are still (hopefully) doing good stuff and iterating. There is still a big requirements document, the organization still expects it all delivered and it is still being salami sliced.

However in this model the team are delivering the software to customers. At the very least they are demonstrating the software and listening to feedback. More likely they are deploying the software and (potential) users can start using it today.

As a result the customers/users give feedback about what they want in the software. Sometimes this is extra features and functionality (scope creep!) and sometimes it is about removing things that were requested (scope retreat!). The “project” is still done in the traditional sense that everything in the document is “done”, but now some things are crossed out rather than ticked. Plus some additional stuff might be done over and above the requirements document.

I call this incremental because the customers/users/stakeholders are seeing the thing grow in increments – and hopefully early value is being delivered.

I actually believe this is the most common style of software development – whether that work is called agile, waterfall or anything else. However in some environments this is seen as wrong: wrong because the upfront requirements are “wrong”, or because multiple deliveries need to be made, or because the team aren’t delivering everything they were originally asked to deliver.


Evolutionary

Here again the development team are iterating much as before. However this time there is no requirements document. Work has begun with just an idea. Ideally I would want to see a goal, an objective, an aim, which will guide work and help inform what should be done – and this goal should be stated in a single sentence, a paragraph at most. But sometimes even this is missing, for better or worse.

In this model the requirements guy and developers both start at the beginning. They brainstorm some ideas and select something to do. While Mr Requirements runs off to talk to customers and stakeholders about what the problem is and what is needed, the tech team (maybe just one person) get started on the best idea so far.

Sometime soon (2 weeks tops) they get back together. Mr Requirements talks about what he has found and the developers demonstrate what they have built. They talk some more and decide what to do next.

With that done the developers get on with building and Mr Requirements gets on his bike again: he shows what has been built and talks to people – some people again and some new people. As soon as possible the team start to push software out to users and customers to use. This delivers value and provides feedback.

And so it goes. It finishes, if it finishes, when the goal is met or the organization decides to use its resources somewhere else.

Evolutionary style is most at home in Palo Alto, Mountain View and anywhere else that start-ups are the norm. Evolutionary is actually a lot more common than is recognised, but it is called maintenance or “bug fixing” and seen as something that shouldn’t exist.

Having set out the three styles I’ll leave discussion of how to use the model and why you might use each style to another entry. If you want to know more about each model and how I see Agile as a spectrum have a look at my 2011 “The Agile Spectrum” from ACCU Overload or the recently revised (expanded but unfinished) version by the same title: “Agile Spectrum” (the 2013 version I suppose, online only).

Events! – Speaking and training

With the impromptu Quality mini-series out of the way normal blog-service will now be resumed!
First up a bit of self-publicity! I’ve got a busy few months coming up and I thought I’d post a list of the events and public training courses I’m giving…

  • Skills Matter In the Brain, “Xanpan – not a methodology, a personal take on Agile”, London, 16 September 2013; this is a free evening event (6.30pm) – registration required at the Skills Matter site – THAT’S THIS MONDAY
  • Skills Matter, Essential Agile for Business Analysts, a 3-day course at Skills Matter in London, 16-18 September (booking essential – with Skills Matter). It’s called “for Business Analysts” but anyone concerned with the requirements side will benefit. I’m also thinking of renaming it and tweaking it to “Agile for Business Analysts and Project Managers”
  • IIBA Business Analysis Conference, 23-25 September 2013, “Patterns and Pattern Thinking for Analysis and Innovation” – when I’m not preaching about agile, or wishing I were an Economist, I’m a patterns guy; it’s my dirty little secret
  • Agile Cambridge conference: “Requirements: Whose job are they anyway?” – 25-27 September 2013
  • Program Utvikling, Kanban Software Development 101, Oslo, 21-22 October 2013: a semi-new 2-day course, some tried and tested material bundled into a new format and focused on Kanban
  • Agile Tour London, “Xanpan (What do you get if you cross XP with Kanban?)”, London mini-conference, 1 November 2013, a chance to hear me outline Xanpan (and save on reading the book)

Looking further ahead, I’m taking on a challenge by speaking to the BCS PROMS-G group (that’s the project management group) on “The End Of Projects – and what happens next” (5 February 2014); free, register with the BCS.

Still banging on about Quality

The last couple of blog entries have been a public investigation – a personal think – about software quality: see What is Software Quality? and Software quality? Thanks to all those who have provided feedback and comments; they’ve all contributed to my thought process.

I think I am moving towards some kind of conclusion. I’ll write up that conclusion for Xanpan and post it elsewhere too. Right now I’d like to add some more thoughts and hint at the conclusion.

I’m starting to think it is easier to talk about how to achieve high quality, and about the benefits of quality, than it is to actually define quality itself. Yet without actually defining what quality is, or what the benefits of quality are, talk of how to achieve it is unlikely to be more than opinion.

I think that defining quality is one thing, and we also need to recognise that there may be multiple ways of achieving that end point. I might not agree with all those ways, I might even think that some are misguided but I will recognise that mine is not the only way.

So far my definition of “Software Quality” has focused on two aspects: defects and maintainability (and particularly extendability).

That is a small view; Tom Gilb and Jerry Weinberg’s suggestions are much wider. Tom would – I think – say my two aspects are but two “qualities” among many. For example: speed of performance, ease of use and conformance to specification might all be part of quality.

I’m now thinking of quality like an onion. Defects and maintainability are at the core of the onion, with defects probably the innermost layer.

[Figure: the basic quality onion – defects at the core, maintainability around them]

You, your organization, your team, your customers might choose to add additional qualities to this definition, which you are quite at liberty to do. For example, this might be your onion:

[Figure: an enhanced quality onion – additional, locally chosen qualities as outer layers]

I believe this model is entirely compatible with Gilb’s concept of quality as qualities and Weinberg’s “Quality is meeting some person’s requirements.” (Thanks to Mark Nilsen for this).
Points here:

  • Quality is multifaceted and finding a single definition to satisfy all situations is tough. (Even if we had such a definition it might be too abstract or academic to be meaningful in everyday work.)
  • Each organization/team/project/product could benefit from defining what they take to be quality. (And I would suggest that few actually do this, and that the resulting difference in individuals’ opinions on “what is quality” is the cause of much friction and misaligned aims.)

Quality is itself a statement of values – not necessarily financial value, it’s as much about personal values. That said, financial value should neither be ignored nor belittled – after all, most software is developed for commercial reasons; profit is not a dirty word.

Thus let me make another statement I believe is true:

Quality results in business benefit – money saved, money made, happier customers, business aims realised

Once we accept this, statements like “Quality is free”, “High quality saves money” and “Quality sells” become axiomatic. The aim of “high quality” is to produce one of these business benefits.

Consequently statements like “We can’t afford this much quality” and beliefs like “Lower quality is acceptable to customers, is faster and saves money” are the result of a mismatch in different individuals’ understanding of what quality is.

For example: one person believes customers are happy with buggy software as long as it is available soon, while another believes that customers want bug-free software later. When such mismatches occur, using the word “Quality” is itself a problem because it means different things to the different parties.

That might all be too circular for some, sorry about that, I wish it wasn’t but I suspect that is the nature of the beast.

Now back to my definition and the inner layers of my onion. I think defects and maintainability are at the heart of the onion and need to be present in every definition of software quality because…

Defects: the research I have seen (Jones and elsewhere) and personal experience (as a developer, manager and customer) tell me that defects (bugs) are not a good thing – whether absolute defects or common defects. My experience and intuition tell me Jones is right when he says: “projects with low defect potentials and high defect removal efficiency also have the shortest schedules, lowest costs, and best customer satisfaction levels.”


Maintainability, changeability, and its half-brother extendability, is life. Software which can’t change can’t live.

At a basic level if you have maintainability you can fix defects and you can add any other quality you like. If your software is maintainable but not performant you can change it to have the performance you want. If your software is hard to use but maintainable you can change it to be easy to use.

If your software lacks maintainability, changeability, you will find it hard – expensive but perhaps not impossible – to make any of these changes. When software lacks maintainability it is not soft.

Let’s agree to call software which is not (easily) changeable (maintainable) sclero-ware.


Definition: Sclero-ware – software which is not practically changeable.
(Where practically is context dependent.)

“But,” you might ask, “are there not some examples of software which does not need to be maintained?” “Surely,” you say, “there is some software which does its job and is done. Defects might be the core of the onion but maintainability is just another quality you might, or might not, want.”

While I can follow the logic here, I am lacking an example. Do you know any? Indeed I suspect there are no such examples.

Because: successful software is used, and because it is used it needs to change.

Software needs to change because the world changes: Windows 8 replaced Windows 7 which replaced Vista which… the Euro replaced the Deutschmark which replaced the Reichsmark, the 747 replaced the 707 which replaced the Queen Mary and Queen Elizabeth, telephones became mobile, telephones replaced iPods which replaced the Discman which replaced the Walkman, and along the way iPods replaced Hi-fi systems, banks replaced building societies, debit cards displaced cash…

Must I continue? – the world changes.

Software which is not used does not need to change.

Software which is not used is the past. It is dead. Look here, SourceForge has plenty of dead software projects you can download, try one (here’s a page of examples.)

Successful software lives, it changes, it moves on.

So yes, software which did its job and is no longer used doesn’t need to change. You don’t need to change if you are dead.

Also, “maintenance” starts a lot sooner than many people recognise. Maintenance starts the day after coding, not the day after the “project” ended. If you need to go back and fix a bug – at any time! – you are maintaining. If your software lacks maintainability you are going to have a hard time getting any bugs out (and I suspect if it is not maintainable you probably have a lot of defects).

To summarise: I consider software quality to be a function of high maintainability and a low number of defects. I focus on these two attributes because I believe they are invariant. I am happy to add more qualities in a given context.

I accept there are different views on the best way of achieving these qualities. For example:

  • The approach I was taught in the 1980s and 1990s involved detailing what was required, perhaps writing a logical specification, engaging in a design process intended to minimise defects and maximise maintainability, then coding and finally testing the software. Some people call this “Waterfall”, I prefer “Traditional”.
  • The approach I advocate today involves setting a goal, identifying small pieces which can help build towards the goal, setting tests, engineering to those tests and repeating the process again and again and again. Some people call this “Agile”, I believe there are actually three styles better called “Iterative”, “Incremental” and “Evolutionary”.

And I believe that if you lack quality – you have a high number of defects and sclerotic code – sclero-ware – you will fail every time.

Software quality?

Since publishing my stream of consciousness about software quality on Friday (What is a software quality?) I’ve continued to think about the subject. In part putting the thoughts down started to structure my own thinking, in part a few comments made on Twitter caused me to think more, and simply having written something caused me to reflect.

There are four main points I’d add to Friday’s entry.

First, what I wanted, what I needed for Xanpan, was (is) a good enough working definition of software quality; I don’t need a perfect definition. But even this turns out to be surprisingly difficult. And as Steve Smith pointed out, this is not confined to software quality: defining “quality” at all tends to lead you in the direction of Zen and the Art of Motorcycle Maintenance.

Next, what I really need are two things: a definition of software quality and a definition of a software defect. I think I’m starting to approach workable definitions of both. But definitions (by definition, almost) should be free of assumptions about how you achieve – or don’t achieve – that aim. In other words, the definitions should not imply or assume an Agile, Waterfall or any other approach.

Third, one of the things I was concerned about was gold plating, or over-engineering. Really this is part of “how do you achieve your aim.” A definition of software quality – and low defects – need not rule out (or embrace) any particular approach. I need to separate this out. Similarly the knowledge issue, while important, needs to be put to one side here.

Finally, I’m coming to the conclusion that we need two different terms for software defects. The clue was in that Tom Gilb quote really.

  • An Absolute Defect is, as Gilb says, free from personal opinion or taste. It is objective. For example, a function called “Add Two Numbers” always returns 5 – whether you give it 2 and 2, 2 and 3, or 100 and 200. (See the sketch below this list.)
  • A Common Defect (and I’m prepared to find a better name and I don’t imply any link to Deming) is anything which someone at some time decided to report – or classify – as a defect.
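
To make the absolute case concrete, here is the “Add Two Numbers” example as a minimal Python sketch (hypothetical code, invented purely for illustration):

    def add_two_numbers(a, b):
        """Supposed to return a + b."""
        return 5  # absolute defect: objectively wrong, no taste involved

    # Amusingly the bug is invisible for 2 and 3, but every other case exposes it.
    for a, b in [(2, 2), (2, 3), (100, 200)]:
        got, want = add_two_numbers(a, b), a + b
        status = "OK" if got == want else "DEFECT"
        print(f"add_two_numbers({a}, {b}) = {got}, expected {want}: {status}")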

Absolute defects are a proper subset of common defects, i.e. all absolute defects are also common defects but only some common defects are absolute defects.

I think it’s important to make this distinction because so much time and energy is spent by people arguing over what is and what is not a “defect”. If we define separate terms then we can at least reason about the situation, and about the fact that individuals, teams and organizations often have incentives for classifying something as a defect whether it is or is not.

Jon Jagger pointed out in his comment on the previous blog that Jerry Weinberg’s definition of quality is “value to someone” or “value to some persons”. A common defect most definitely fits that description. Someone, somewhere, considers it valuable to call this thing a defect.

But I’m not sure I go along with that statement as a definition of quality as a whole. It is too open and not specific enough for me – or perhaps for the uses I want to put it to!

Some observations about defects to end:

  • All defects cost time and probably money: even logging a report of a potential defect costs time (and thus money), and action taken as a result (investigation and potential fixing) costs time. Defects themselves – even when not reported – may well cost money, e.g. overpayment or lost customers. (This also implies an element of the unexpected to defects.)
  • Absolute defects cost time and money; is the same true for common defects? Well, maybe my opinion of something does result in a different financial outcome than someone else’s, but does that make it a defect? Even then it can be surprisingly hard for some organizations to tell what makes, and what loses, money. I’d like this to be the dividing line but I don’t think it is.
  • Common defects need to be administered, managed and possibly fixed just the same as absolute defects. In my opinion, if someone reports a defect it should be considered against other defects – common or not – and action (fixing) taken or not. The debate between whether a particular defect is an absolute defect or a common defect isn’t very useful, it is the software equivalent of “my Dad is bigger than your Dad.”
  • If there exists some pre-code definition (specification, requirement, formula, etc.) of what the code should do it might be easier to classify something as an absolute defect but this isn’t enough reason to create that definition.
  • High quality software has few defects of either sort and is perceived as having few defects.
  • Software which has a lot of reported defects doesn’t qualify as high quality.

What is a software quality?

If any of you have heard me speak in a training session or conference you’ll know I am fond of quoting Philip Crosby: “Quality is free!”. Crosby was talking from a background in missile production but the message was picked up by the car industry and silicon chip industry (“The Anderson Bombshell” in 1980 explained how Japanese RAM manufacturers were cheaper than the Americans and better quality). I am quite fond of applying the argument to software.

I like to cite Capers Jones: “projects with low defect potentials and high defect removal efficiency also have the shortest schedules, lowest costs, and best customer satisfaction levels.” (Capers Jones, Applied Software Measurement, 2008)

He’s not alone in this, Tom Gilb says: “The reduction in defects … saves ‘rework’, which otherwise is about half of all effort in software projects.” (Tom Gilb, Competitive Engineering, 2005).

Jon Jagger pulled me up on some sloppy discussion of software quality, defects and rework in Xanpan. So I’m trying to come up with a (concise) definition of what I mean by software quality, and it turns out that this is more difficult than you might think.

To my disappointment Jones doesn’t give a definition.

Gilb offers “Defect: a failure to observe a formal, written, required rule. It is not a personal opinion or personal taste. It is a failure to observe a group norm, or required best practice.”

That sounds good until you take the statement to pieces:

  • What if rules are informal? Well I suppose we can allow informal, tacit, rules, because they are “group norms”.
  • “written” assumes there is something written, which isn’t an assumption I can accept. Jones points out that documentation is the second most expensive activity after fixing defects, so I’d hate to eliminate defects at the expense of increased writing costs.
  • “personal opinion or taste” seems fair enough but putting this into practice can be incredibly difficult. I know plenty of times when I would call a defect a matter of personal taste but the person raising the issue wouldn’t.
  • “group norm” is particularly difficult when you are developing products which will change group norms
  • And “best practice”… who says it is best practice? Who says it can’t be bettered?

I like Gilb’s definition but I don’t think it is enough. Crucially, even in saying “not a personal opinion” it does nothing to avoid the “one man’s bug is another man’s feature” problem.

What can we say about software quality and defects?

  • Software quality is inversely proportional to the number of defects in the system: high quality implies few defects and vice versa
  • Defects have undesirable consequences
  • Defects incur costs, in all likelihood financial costs, but there are others – time in particular. Even if defects are not fixed they will incur costs, e.g. overpayments from a financial system or people ringing the helpline to report a spelling mistake
  • Removing defects requires rework and rework costs time and money

This is probably the start of a longer list; what I am describing are the attributes – or qualities – I attribute to “high quality software”.

The list is also self-fulfilling: everything I have said so far implies that low quality, lots of defects, will increase costs, so the quotes from Crosby, Jones and Gilb all become self-fulfilling. Perhaps this isn’t a problem; perhaps the quality attribute we want from our software is that costs are kept down.

But there is another quality I would like from high quality software which is insidious. High quality software should be changeable – actually all software is changeable (it’s soft!) but some code is easier to change than other code.

High quality software is easy to change

Let’s leave to one side a definition of easy; I agree it should be quantified, but not right now.

What do I mean when I say “change”? I think the spirit is captured by an old John Vlissides quote I’ve long been fond of:

“A hallmark – if not the hallmark – of good object oriented design is that you can modify and extend a system by adding code rather than hacking it…. In short, change is additive, not invasive. Additive change is potentially easier, more localized, less error-prone, and ultimately more maintainable than invasive change.” John Vlissides, The C++ Report, February 1998

I’m prepared to generalise this to all software, not just OO software. I might even go as far as focusing on the “rather than hacking it” – although one then needs to define “hacking”. Good software needs to allow for change rather than having change forced into it.
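
To illustrate additive versus invasive change, here is a toy Python sketch – my own hypothetical example, not Vlissides’:

    # Invasive change: supporting a new export format means editing this
    # function's body and re-testing every existing branch.
    def export_invasive(text, fmt):
        if fmt == "txt":
            return text
        elif fmt == "html":
            return "<p>" + text + "</p>"
        raise ValueError("unknown format: " + fmt)

    # Additive change: a new format is a new registration; the existing code
    # is untouched, so the change is localized and less error-prone.
    EXPORTERS = {
        "txt": lambda text: text,
        "html": lambda text: "<p>" + text + "</p>",
    }

    def export_additive(text, fmt):
        return EXPORTERS[fmt](text)

    # Later, Markdown support is added without opening the code above:
    EXPORTERS["md"] = lambda text: "*" + text + "*"
    print(export_additive("hello", "md"))    # *hello*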

Actually this quote also provides the attributes we need to define “easy to change”:

  • Change is localized
  • Change is less error-prone – perhaps better stated as “change does not inject defects” (somewhere in Jones’ writing he suggests 7% of defect fixes inject new defects, so high quality software would have a bad fix injection rate of less than 7%)
  • Change is more maintainable, i.e. changing software does not detract from the changeability of the software

If software has these attributes (qualities if you prefer) then software quality is high and as a result change is cheap(er). The relationship between quality and costs appears again.

But the way I describe the quality-cost link is the reverse of the way many people perceive it: the stereotypical Project Manager views quality as an attribute that can be reduced in order to accelerate development and reduce costs. I have to say I have difficulty in actually understanding this point of view, but perhaps it’s because of the way I am defining quality.

Apart from that, there is another danger approaching: over-engineering.

Given all we have said so far you could make an argument for spending a lot of time designing your software to exhibit all these attributes. You could seek to build, design, software which would not require the design itself to be revisited. That, after all, is rework, isn’t it?

Now I’ve long believed there is rework and there is rework:

  • Rework to fix bugs, defects, is bad and wasteful because you shouldn’t have put the bug in there in the first place.
  • Rework to change software for new requirements, even if that means reworking (refactoring) the design, is good, or at least acceptable, because you couldn’t know about this up front; therefore any effort to cater for this requirement in advance might be misplaced and could actually end up complicating the design. In other words it is self-defeating.

As I see it there is a question of knowledge here: you need to engineer within your knowledge. If you know, or could easily find out, some piece of information which would cause you to work differently then you should work with it. But if there is information you don’t know, and which would be time consuming/costly, or even impossible, to find out, then it is acceptable to defer knowing and accept that rework will be required later.

So the question starts to become one of knowledge acquisition. One way of acquiring more knowledge is through feedback, when feedback is rapid, timely and cheap to get we can rapidly expand the knowledge we are working with.

High quality software should be as free as possible from deficiencies given the current knowledge of what is required, but open to change when new knowledge becomes available which necessitates a change.

It’s tempting to write this as:

Quality(T) = Changeability(T) / Known defects(T)

Where T represents some point in time; as time progresses onwards changeability may well decrease while known defects may go up or down.
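
Plugging purely illustrative numbers into the equation shows the dynamic (a Python sketch; the 0-to-1 scoring of changeability is entirely invented):

    def quality(changeability, known_defects):
        # Quality(T) = Changeability(T) / Known defects(T)
        # (needs care when the known defect count is zero)
        return changeability / known_defects

    # Invented numbers at three points in time:
    print(quality(0.8, 4))     # early in life: 0.2
    print(quality(0.5, 4))     # same defects, but the code has stiffened: 0.125
    print(quality(0.5, 10))    # stiffer code and more known defects: 0.05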

Notice I’ve said: “Known defects” not “Known changes”. For a piece of living, successful, software there will be a list of changes people would like made to the software. The existence of this list actually demonstrates another attribute of quality software: people use it and value changes. (Low quality software on the other hand may be so buggy that people avoid using it and thus don’t request changes.)

Excluding non-defect changes like this does leave open the problem of whether a defect report (a bug) is actually a defect report or a request for change. In some organisations such debates are heated but usually they are pointless. Sometimes they are really a Cap Ex v. Op Ex discussion, sometimes they are a “Who will do it?” or a “When will it be done?” discussion, sometimes they are a “Who will pay?” discussion. All these, and more, problems get in the way of this measurement.

While I would like to throw the door open and say: “It’s all work to be done, one backlog”, to do so would be to blow the equation and argument out of the water because, as I just said, high quality software may well have a longer list of change requests than low quality software.

So now we have to consider the argument about internal v. external quality… but this blog entry is already too long.

Does any of this make sense?

Does any of this help?

Am I any closer to defining software quality?

Perhaps but I don’t think I’ve answered my own question yet!

Writing this entry has helped me. I think I’ve found a possible definition of quality, although I still need a definition of defect. I think we need to consider the attributes of both quality and defects. I think there is a temporal issue here related to knowledge (but I don’t know how to model or define that). I’m even more confused than ever about the relationship between cost and quality because it appears circular.

Anyone got any better ideas? – or just comments?

Scaling Agile Heuristics – SAFe or not!

If I’m being honest, I feel as if I don’t know much about scaling agile. But when I think about it I think the issue is really: What do you mean when you say “scaling agile?”


It seems to me you might mean one of three things:

  • How do you make agile work with a big team? Not just 7 people but 27 people, or 270 people
  • How do you make agile work with multiple teams within an organisation? i.e. if you have one team working agile how do you make another 2, or 20 or another 200?
  • How do you make a large organisation work agile? – it’s not enough to have a team, even a large team, working agile if the governance and budgeting systems they work within are decidedly old-school

When I rephrase the question like that I think I do have some experience and something to say about each of these. Maybe I’ll elaborate.

But first an aside: why have I been thinking about this question?

Well, David Anderson pushed out a blog about the Scaled Agile Framework (SAFe), which prompted Schalk Cronje to ask what I thought – and at first I didn’t have anything to say! But then Schalk pointed out that Ken Schwaber has blogged about SAFe too. It’s not often that David and Ken find themselves on the same side of an argument… well, actually… they probably do, but too many people are willing to see Kanban and Scrum as sworn rivals. I digress; back to SAFe and scaling.

I still don’t have anything original to say about SAFe, I simply don’t know much about it. However the points David and Ken make would worry me too.

I’m not about to rush out and read SAFe. I find I’m more likely to be told by my clients: “I can see how agile works in big companies, but we are a small company, we can’t devote people like that.” And while I do have, and have had, some larger clients I think that statement says a lot.

I have over the years built up some rules-of-thumb, heuristics if you prefer a fancy term, for “scaling agile”. I’ve never set them down so completely before so here you go:

  1. Inside every large work effort there is a small one struggling to get out: find it, work with it
  2. Make agile work in the small before you attempt to make it work at scale; if you can’t get a team of 5 to work then you won’t find it any easier to get a team of 10 or 100 working. Get a small team working, learn from it, and grow it…
  3. Use piecemeal growth wherever possible: start with something small and grow it
  4. Don’t expect multiple teams to work the same way – one size does not fit all! A new project team might be perfectly suited to Scrum, a maintenance team to Kanban and a BAU+Project team to Xanpan. Forcing one process, one method, one approach on different teams is a sure way to fail.
  5. Manage teams as atomic units, aim for team stability, minimise moves between teams
  6. Split big teams into multiple small, independent, teams with their own dedicated work streams, product focuses and even code bases        
  7. Trust in the teams, delegate as far down as you can; give teams as much independence as possible; staff teams with all the skills they need – vertical teams, include testers, requirements people and anyone else
  8. Minimise dependencies between teams; decouple deliveries, decouple teams and again, vertical teams, staff the team so they don’t need to depend on other teams
  9. Use technical practices to automate as much of the dependencies between teams as possible, e.g. a good TDD suite and frequent CI will by themselves allow two related teams to work much closer together (see the sketch after this list)
  10. Overstaff in some roles to reduce dependencies – overstaffing will pay back in terms of reduced dependency management work        
  11. Learn to see Supply and Demand (a future blog entry is in the works on this): supply is limited and is hard to increase, you need to work on the demand side too
  12. Recognise Conway’s Law: work with it or set out to use it; again piecemeal growth and start as small as possible will help
  13. Use Portfolio Management to assess teams, measure them by value added against cost. Put in place a governance structure that expects failure and use portfolio management to fail fast, fail cheap, fail often
  14. Ultimately embrace Beyond Budgeting and change your financial practices
  15. Big projects, big teams are high risk and likely to fail whatever you do; some things are too big to try. Some big projects just shouldn’t be done

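As an illustration of heuristic 9 – a hypothetical Python sketch, not a prescription – a consuming team can encode its expectations of a providing team’s component as a contract test run on every CI build, so breakage between the teams surfaces in minutes rather than in meetings:

    import unittest

    # Hypothetical stand-in for a component owned by another team; in real
    # life this would be imported across the team boundary as a package/API.
    def price_with_discount(base, customer_type):
        rates = {"standard": 0.0, "gold": 0.10}
        return base * (1 - rates.get(customer_type, 0.0))

    class PricingContract(unittest.TestCase):
        """The consuming team's expectations, run on every CI build."""

        def test_standard_customers_pay_full_price(self):
            self.assertAlmostEqual(price_with_discount(100.0, "standard"), 100.0)

        def test_gold_customers_get_ten_percent_off(self):
            self.assertAlmostEqual(price_with_discount(100.0, "gold"), 90.0)

    if __name__ == "__main__":
        unittest.main()
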
There is no method, framework, tool out there that will be your silver bullet, but if you think for yourself, and if your people are allowed to think and act then you might just be able to create something for yourself.

Going back to the three questions I opened with:

  • How do you make agile work with a big team? When the team gets too big split it along product lines or product subsystems; if you can’t then don’t do anything that big
  • How do you make agile work with multiple teams within an organisation? Use multiple independent teams, minimise dependencies, decouple and use technical practices
  • How do you make a large organisation work agile? Portfolio management and beyond budgeting

And remember: Don’t do big projects, do small ones and grow success. If that is not an option for you then brace.

Coach or Consultant? Agile or not? What am I?

I am: a Software Development Consultant who specialises in Agile techniques

Or maybe: I am an Agile Consultant who specialises in Software Development.

There was a fascinating thread on Twitter this morning started by Marcin Floryan when he asked: “What are your views on a difference between coaching and mentoring?”. Tweets were coming thick and fast from John McFadyen, Rachel Davies and myself, with George Dinwiddie and Andy Longshaw bringing up the rear.

For me the difference between Coaching and Mentoring is the difference between non-directive coaching and directive coaching, i.e. a true coach is non-directive. Using this definition an awful lot of what passes for Coaching – both in software teams and sports – is actually closer to Mentoring.

Of course things aren’t always that clear and as John pointed out the Coaching industry doesn’t agree on these definitions itself. And sometimes mentoring takes on coaching dimensions. And in fairness sports coaches might not recognise that distinction and it might be that what some people call “Agile coaching” is actually based on sport coaching rather than business coaching.

Still it got me thinking about why I increasingly shun the title “Agile Coach” and bill myself as an Agile Consultant. Hence the statement above. I used to call myself a coach but in the last year I’ve slowly backed away from that term.

I say consultant not coach because, while I have studied coaching a little and read excellent books by Whitmore and Downey on coaching, I’m conscious of what I don’t know about coaching. I’m conscious of how much time and effort real coaches put into becoming coaches. And indeed, being an opinionated sod, I don’t think I can practice true non-directive coaching.

That’s not to say I don’t use coaching techniques – I do – but that’s not all I do. I give advice, I’m directive – particularly when I first engage with software teams.

Anyway, I know lots and lots about software development – both technical and management – and I can’t help feeling that my clients don’t get the value of my knowledge if they employ me as a non-directive coach. Besides, very few clients I meet want me to be non-directive: they want my knowledge.

So perhaps I am: A Software Development Consultant who specialises in Agile techniques and uses some coaching techniques. (And is prepared to go off-piste and work with non-software teams if you ask nicely.)

But then the word Consultant is pretty nebulous and – in my opinion – abused a lot. The dictionary on my Mac defines Consultant as “a person who provides expert advice professionally.” That’s a definition I can associate with. The thesaurus gives synonyms as “adviser, guide, counsellor; expert, specialist, authority, pundit; informal ace, whizz, wizard, hotshot.” I’m happy with them too.

And to complicate things the word Agile is even more difficult to pin down. The name “Agile” now has a lot of problems. I openly use the term when I’m marketing myself but as soon as I’m inside a company I often find myself distancing myself from “Agile”. Defining just what is agile – and more importantly what is not agile – is increasingly difficult. Hidden away on my website is an unfinished piece, “What is Agile?”, that says more.

Actually I’d like to have nothing to do with Agile; what we refer to as Agile is really just the best way we know to develop software. It’s not about Agile, it’s about Software Development. But try Googling for that; introduce the word Agile and it narrows the field considerably.

Then there is Lean and Kanban. I am – because I paid for it – a Kanban Coaching Professional. You have to respect the way Kanban University has sidestepped the issue of what coaching is by designating people not as a coach but as a coaching professional.

But is Kanban Agile? A few Kanban people shun the word Agile and the Agile community. Does that mean I should say “Agile and Kanban” or “Agile and Lean”? (Actually I think the “Kanban is not Agile” tendency has declined recently; it was a growth phase Kanban went through, and I think most people accept that while Kanban is not Scrum it falls under the broad Agile umbrella.)

Which would make me: A Software Development Consultant who specialises in Agile and Kanban techniques and uses some coaching techniques.

A bit of a mouthful.

So what am I?

I know a few things about software development. Some people agree with my views and a few people even pay me money to coach/consult/mentor/train/advise them on the subject.

I think many of the things I know about processes in the context of software development can be extended to related domains in technology and elsewhere. But what is the limit? I don’t know.

Xanpan – now available

A bit over two years ago I started using the term Xanpan to describe the style of agile I advocate and help teams implement. If it isn’t obvious Xanpan – pronounced “Zan-pan” – is a cross between Kent Beck’s Extreme Programming (XP) and David Anderson’s Kanban.

At first I used the term Xanpan to myself, then I started using it in public, then people started telling me they wanted to know more. While I’ve mentioned Xanpan in this blog a few times I’ve never really described it as a whole. That has now changed.

A few days ago I made “Xanpan – reflections on Agile and Software Development” available on LeanPub. This is a short e-book which describes Xanpan. In the best traditions of LeanPub publishing, the book is going to evolve and change.

Right now I’m reading it end-to-end and fixing a few small things. After this I’d like to write a section on requirements/need/product ownership and another on management. Plus – and this is one of the advantages of LeanPub – I want to get feedback on what I’ve written.

A few people have already seen drafts and given me some feedback but I’m hoping to get more. I’m also hoping for a special type of feedback which is very meaningful: money.

On LeanPub the book is available for a small sum of money. If people are prepared to pay then I know it’s an endeavour worth continuing.

In addition, I’ve added a Xanpan page to my own website which provides some other links about Xanpan – mainly pieces in this blog.

Please buy Xanpan today! That will give me immediate feedback in dollars, then send me your comments – feedback tomorrow.

Commitment considered harmful

Some Agile evangelists are very keen on the idea of “Commitment”, i.e. the development team “commit” to doing an amount of work within a time-box (normally an iteration). The team do this work come hell or high water. They do what it takes. Once they’ve said they’ll do it, they do it.

I believe the idea of Commitment was baked into Scrum – see the Scrum Primer by Larman, Benefield and Deemer for example. And I’ve heard Jeff Sutherland proclaim the power of Commitment in person. But it seems Scrum is less keen on it these days. The October 2011 “Scrum Guide” by Schwaber and Sutherland does not contain the words Commit or Commitment, so I guess “commitment” is no longer part of Scrum. Who knew?

I’ve long harboured my doubts about Commitment (e.g. from last year see “Unspoken Cultural differences in Agile & Scrum” and “My warped, crazy, wrong version of Agile”, and from 2010 “Two ways to fill an iteration”) but now I’m going to go further and say the Commitment protocol for filling an iteration is actively damaging for software development teams in anything other than the very short run. This reassessment has been triggered by a) watching the #NoEstimates discussion on Twitter and b) visiting clients where teams attempt to follow the Commitment protocol.

1) Commitment can lead to gaming by developers who have an incentive to under commit (my argument in “Two ways to fill an iteration”).

2) Commitment can lead to gaming by the need/business side who have an incentive to make the team over commit

3) Commitment combined with velocity measurement and task estimation leads to confused and opaque ways of scheduling work into a sprint since, per #1 and #2, developers over-estimate stories and tasks and the business representatives apply pressure to reduce estimates. This prevents points and velocity from floating free; instead they become a battleground. (Points are a fiat currency: if you don’t allow them to float someone, somewhere, has to provide currency support – overtime from developers or, more likely, inaccurate measurement.)

4) At one client commitment led to busy developers for the first half of the sprint (with testers underworked) and then, as the sprint came to a close, very busy testers with developers taking things a little easier. Except developers were also delivering bugs to Testers; the nice pile of bugs kept developers busy but meant that a sprint wasn’t really closed because each sprint contained bugs. On paper, on the day the sprint closed, the sprint was done, but it soon required rework. There was also a suspicion that as the end of sprint approached Testers lowered their acceptance threshold, and both Developers and Testers asked fewer probing, potentially disruptive, questions later in the sprint.

5) Developers under pressure – even self imposed – to deliver may choose to cut corners, i.e. let bugs through. Bugs are expensive and disruptive.

6) Developers asked to Commit ask for more and more detail before the sprint starts. A “cover your ass” attitude takes hold and stories start to resemble the functional requirements of old, with all the old problems they brought.

7) Developers become defensive, pointing to User Stories and Acceptance Criteria and saying “Your bug isn’t included in the criteria so I didn’t do it” (the other end of a “cover your ass” attitude).

8) Developers who have not totally mastered Test Driven Development will be tempted – even incentivised – to skip this practice in order to go faster. They may even go faster in the short run – building up “technical debt” – but in the long run will go far far slower.

9) Business/Customers conversely have no motivation to support development adoption of TDD or to invest in automated acceptance test (ATDD, BDD, SbE etc) of their own because, after all, the developers are committed.

Maybe I should say that I currently believe Estimates can work. I have sympathy with the #NoEstimates argument but I have clients where Estimates do work; one manager claims “to be able to bring a project in to the day” using estimates. So I have trouble reconciling #NoEstimates with experience.

Part of the #NoEstimates argument is that “estimates” are too easily mistaken for “commitments”; teams cannot be expected to honour estimates as commitments, yet some people expect exactly that. Obviously if you remove commitment then the transmission mechanism is removed and estimates might still be useful.

While I’ve suspected much of the above for some time, it’s taken me a while to come to these conclusions. In part this is because I don’t see that many teams that actually do Commitment. Most of the teams I see are in the UK and I’ve always thought Commitment was a very American idea – it always creates images of American Football teams in my mind – “win one for the Gipper”.

Actually most teams I see are teams I have taught, so they don’t do it. (They do some variation on Xanpan, if I’m being honest.) While I talk about Commitment I teach the Velocity protocol, and it is estimation and velocity that are baked into Xanpan. (I hope to be able to push out my notes on Xanpan very soon so join the list.)
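
For clarity, here is a minimal sketch of what I mean by the Velocity protocol – illustrative Python with invented numbers, not a prescription. The next iteration is filled up to the average of recent actual velocities (“yesterday’s weather”); no promise is extracted from the team:

    def velocity_fill(backlog_points, recent_velocities, window=3):
        """Velocity protocol: fill the next iteration up to the average of
        the last few actual velocities - "yesterday's weather"."""
        recent = recent_velocities[-window:]
        capacity = sum(recent) / len(recent)
        taken, total = [], 0
        for points in backlog_points:        # backlog in priority order
            if total + points > capacity:
                break                        # stop at capacity, no heroics
            taken.append(points)
            total += points
        return taken, capacity

    # Invented numbers: the last three iterations delivered 18, 22 and 20 points.
    stories = [8, 5, 5, 3, 2, 1]
    taken, capacity = velocity_fill(stories, [18, 22, 20])
    print(capacity, taken)                   # 20.0 [8, 5, 5]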