Blog

Commitment considered harmful

Some Agile evangelists are very keen on the idea of “Commitment”, i.e. the development team “commit” to doing an amount of work within a time-box (normally an iteration). The team do this work come hell or high water. They do what it takes. Once they’ve said they’ll do it, they do it.

I believe the idea of Commitment was baked into Scrum – see the Scrum Primer by Larman, Benefield and Deemer for example. And I’ve heard Jeff Sutherland proclaim the power of Commitment in person. But it seems Scrum is less keen on it these days. The October 2011 “Scrum Guide” by Schwaber and Sutherland does not contain the words Commit or Commitment, so I guess “commitment” is no longer part of Scrum. Who knew?

I’ve long harboured my doubts about Commitment (e.g. from last year see “Unspoken Cultural differences in Agile & Scrum” and “My warped, crazy, wrong version of Agile”, and from 2010 “Two ways to fill an iteration”) but now I’m going to go further and say the Commitment protocol for filling an iteration is actively damaging for software development teams in anything other than the very short run. This reassessment has been triggered by a) watching the #NoEstimates discussion on Twitter and b) visiting clients where teams attempt to follow the Commitment protocol.

1) Commitment can lead to gaming by developers, who have an incentive to under-commit (my argument in “Two ways to fill an iteration”).

2) Commitment can lead to gaming by the need/business side, who have an incentive to make the team over-commit.

3) Commitment combined with velocity measurement and task estimation leads to confused and opaque ways of scheduling work into a sprint, since (per #1 and #2) developers over-estimate stories and tasks while the business representatives apply pressure to reduce estimates. This prevents points and velocity from floating free; instead they become a battleground. (Points are a fiat currency: if you don’t allow it to float someone, somewhere, has to provide currency support – overtime from developers or, more likely, inaccurate measurement.)

4) At one client commitment led to busy developers for the first half of the sprint (with testers under-worked) and then, as the sprint came to a close, very busy testers with developers taking things a little easier. Except developers were also delivering bugs to Testers; the nice pile of bugs kept developers busy but meant that a sprint wasn’t really closed because each sprint contained bugs. On paper, on the day the sprint closed, the sprint was done, but it soon required rework. There was also a suspicion that as the end of the sprint approached Testers lowered their acceptance threshold and both Developers and Testers asked fewer probing, potentially disruptive, questions later in the sprint.

5) Developers under pressure – even self-imposed – to deliver may choose to cut corners, i.e. let bugs through. Bugs are expensive and disruptive.

6) Developers asked to Commit ask for more and more detail before the sprint starts. A “cover your ass” attitude takes hold and stories start to resemble the functional requirements of old, with all the old problems that brought.

7) Developers become defensive, pointing to User Stories and Acceptance Criteria and saying “Your bug isn’t included in the criteria so I didn’t do it” (the other end of a “cover your ass” attitude).

8) Developers who have not totally mastered Test Driven Development will be tempted – even incentivised – to skip this practice in order to go faster. They may even go faster in the short run – building up “technical debt” – but in the long run they will go far, far slower.

9) Business/Customers conversely have no motivation to support development adoption of TDD or to invest in automated acceptance tests (ATDD, BDD, SbE, etc.) of their own because, after all, the developers are committed.

Maybe I should say that I currently believe Estimates can work. I have sympathy with the #NoEstimates argument but I have clients where Estimates do work; one manager claims “to be able to bring a project in to the day” using estimates. So I have trouble reconciling #NoEstimates with experience.

Part of the #NoEstimates argument is that “estimates” are too easily mistaken for “commitments”; teams cannot reasonably be expected to honour them, but some people expect exactly that. Obviously if you remove commitment then the transmission mechanism is removed and estimates might still be useful.

While I’ve suspected much of the above for some time, it’s taken me a while to come to these conclusions. In part this is because I don’t see that many teams that actually do Commitment. Most of the teams I see are in the UK and I’ve always thought Commitment was a very American idea – it always creates images of American Football teams in my mind – “win one for the Gipper”.

Actually most teams I see are teams I have taught so they don’t do it (they do some variation on Xanpan if I’m being honest). While I talk about Commitment I teach the Velocity protocol, and it is estimation and velocity that are baked into Xanpan. (I hope to be able to push out my notes on Xanpan very soon so join the list.)

Three backlogs?

Now I know some people dislike backlogs – queues, wait states, work we want done – and I buy the argument. But the Scrum Sprint (Iteration) and Product Backlog model actually fits a lot of organizations.

Maybe it’s a temporary state before moving to continuous flow and continuous value, but it might also be a sensible state for many organizations. The Product Backlog is stuff they would like to do but haven’t gotten around to yet.

While I generally accept and teach the Product/Sprint backlog model (all the stuff we might do sometime in the future / the stuff we are focused on for the next two weeks) I keep thinking it’s wrong.

There should be Three Backlogs

1. Opportunity backlog: all the ideas which have been suggested ever and have been considered worthwhile for recording. Recording such ideas does not in any way commit anybody to actually undertaking them.

2. Validated backlog: items from the opportunity backlog which have been examined, researched and discussed enough to be considered valid candidates for future development.

3. Iteration/Sprint backlog: the work that will be attempted in the current iteration.

While the iteration/sprint backlog plays the same role as it ever did – setting the agenda for the next iteration – splitting the product backlog allows for a clear separation of “good ideas” and “validated ideas.” Moving from the former to the latter involves checking ideas with stakeholders, measuring them against the over-arching goal, considering the benefits in the market or to the organization and perhaps conducting experiments to measure benefits.

This three backlog model naturally maps to the three planning horizons I described a while back, and to the commonly used Epic, Story, Task work breakdown used by teams:
  • Opportunity backlog contains big items with little breakdown, Epics. These may happen sometime in the longer term, over future quarters and years. They may appear on a roadmap or they may be more speculative.
  • Validated backlog items should be at the story size – small enough to be deliverable soon but demonstrating real business value. These items may be developed sometime in the next quarter so they appear on the quarterly plan.
  • Iteration backlog items are here and now, they are task sized and are in the current iteration.

There is no point in doing more work on any item until it moves to the next backlog, into the next planning horizon. At each stage some items will disappear, upon closer examination they will not be judged worthwhile.

Epics need not be broken down in their entirety before any work is undertaken. Ideally the first stories broken out of any epic would be experiments which could test technology options and, more importantly, market and client reaction.

For example the first stories for an epic entitled “Launch French version” might describe a series of data gathering experiments to assess the size of the market and opportunities. Translation, payment and such can wait, they might never need doing.

As for estimation:
  • Items in the Iteration backlog may well have detailed effort estimates arrived at from work breakdown if needed
  • Items in the Validated backlog probably have ballpark estimates and, very importantly, value estimates (how much is this item worth?)
  • Items in the Opportunity backlog might have effort scores taken from historical averages (why spend time estimating something that might not happen?) and can only move to the Validated backlog when a value estimate has been made and the item has been judged as worth doing relative to everything else (see the sketch below)
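
To make the model concrete, here is a minimal sketch of the three backlogs and that promotion rule; the names, fields and rules are my illustration, not a prescribed schema.

```python
# A sketch of the three-backlog model as data; names and fields are
# illustrative, not a prescribed schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Item:
    title: str
    effort: Optional[float] = None  # points; a historical average for Epics
    value: Optional[float] = None   # value estimate, required for promotion

opportunity = [Item("Launch French version", effort=13)]  # Epic-sized ideas
validated: list[Item] = []                                # story-sized items
iteration: list[Item] = []                                # task-sized items

def promote_to_validated(item: Item) -> None:
    # An item only moves on once a value estimate exists and it has been
    # judged worth doing relative to everything else.
    if item.value is None:
        raise ValueError("no value estimate: stays in the opportunity backlog")
    opportunity.remove(item)
    validated.append(item)
```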

Three backlogs. What do you think? Good idea or silly?

Do it right, then Do the Right thing online

I was at the Norwegian Developers Conference (NDC) in Oslo last week (well, me and about 1700 other people!). I took a risk: I spoke to the title “Do things right and then do the right thing” (now on SlideShare). I was deliberately challenging accepted management practice.

In many ways this was an experiment itself; the presentation is based on a feeling I’ve had for a few years now – dating back to the Alignment Trap – but never really sat down to argue in full. In part I was looking for people in Oslo to shoot down my argument – or support it! I’m glad to say that so far most of the comments I’ve had have been broadly supportive.

Basically my argument is this:

  • It is incredibly difficult to know “the right thing”
  • Therefore it is better to try something, get feedback (data etc.) and adjust one’s aim; and repeat
  • However, in order to do this one needs an effective delivery machine
  • In other words: you need to be able to iterate.
Thus organizations need to “do things right” (be able to iterate rapidly) and then they can home in on the ultimate “right thing.”

I used the analogy of trying to hit a target with a gun. First you shoot, then you use what you learned to adjust your aim and shoot again. Again you use the feedback to refine your aim.

You might choose to use a machine gun, rapid fire, rapid feedback. Cover the target in approximate shots.

Alternatively you might use a sniper rifle. One perfectly aimed shot and bang! Hit first time.

While I’m sure many organizations would like to think they are using a sniper’s rifle (one bullet is cheaper than many) I’d suggest that many are actually using something with significantly poorer performance. An older weapon, one without sharp-shooter sights, one which requires manual reloading, one which is prone to breaking.

As a result they try to make every shot count but they just aren’t very good at it.

While one might like to think that organizations make a rational choice between these different approaches I think it is more a question of history – or path dependency to use the economic term. You use the weapon you have used before. You use the weapon which your tools provide for.

I’m not saying this argument is universally true but I think it increasingly looks like the logical conclusion of Agile, Scrum and Lean Start-Up. I am also saying you need to try many times.

In technology our tools have changed: when teams worked with Cobol on OS/360 making every shot from your 1880 vintage gun count was important. But for teams working with newer technologies – especially the likes of Ruby, PHP and such on the web – then a trial and error approach might well be the best way.

Possibly the right answer is nothing to do with your organisation but rather a question of your competitors. You want to choose a weapon which will allow you to out-compete them, perhaps through asymmetric competition.

Of course the key to doing this is to work fast, and fail fast, and fail cheap. If you take a long time to fail, or if it is expensive then this technique isn’t going to work.

After the presentation Henrik Ebbeskog sent me this blog post via Twitter which is a beautiful example of exactly what I’m talking about: You Are Solving The Wrong Problem.

Then I went to the PAM Summit conference in Krakow where James Archer included a wonderful quote in his presentation:

“Faced with the choice between changing one’s mind and proving that there is no need to do so, almost everyone gets busy on the proof.” – economist J.K. Galbraith.

This adds another dimension to my hypothesis: when we invest a lot of time, energy and/or money in determining what the “right thing” to do is, it becomes more difficult to change our mind.

For example, suppose we invest a lot in building a new feature on our website, and suppose that in the week after launch the new feature performs poorly, or at least less well than other recent new features. Those who have invested a lot in arguing for the feature, those who feel closely associated with it, may be more prone to say “Give it another week, let’s get more data” while those who are more distant might be prepared to review what is happening sooner.

In order to accept “failure” we cannot invest too much of ourselves in any feature, shot or attempt.

I’m still not completely convinced by my own argument. The management doctrine “Do the right thing, then do it right” is so strong in me – and so logical – that although I can build a reasonable argument for “Do it right, then do the right thing” I’m still uncertain that I believe it.

What do you think?

Truly, I’d love to have more comments on this.

A role for project managers and business analysts in Agile?

I was in Krakow last week at the PAM Summit of the Project Management Institute (PMI) and International Institute of Business Analysts (IIBA). I delivered a presentation entitled “Is there a role for Project Managers and Business Analysts in Agile?” – now online via SlideShare. Those of you in the London area can do even better: I’ll be repeating the presentation at Skills Matter next week. (It’s free, it’s 6.30pm on 24 June; sign up on the Skills Matter website.)

For those who can’t I’ll talk through the question a little…

It is helpful to reference the “Iron Triangle” or “Triangle of Constraints” or “Project Constraints Triangle” – call it what you will, here it is again:

Many readers may have seen a slightly different version before. My version has neither cost nor quality because I don’t believe these represent trade-offs:

  • Costs for a software development team are overwhelmingly wages: the more people you have, and the longer you have them for, the more money you spend. So Cost = People × Time. And since people and time are both on this triangle you can calculate cost from them, or vice versa.
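
As a quick worked example (the figures are invented purely for illustration):

```python
# A worked example of Cost = People x Time; all figures are invented.
people = 5             # team size
weeks = 12             # how long we have them for
weekly_rate = 2_000    # assumed fully-loaded cost per person per week

cost = people * weeks * weekly_rate
print(f"Cost: {cost:,}")  # Cost: 120,000
```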

So my preferred version looks like this:

Which leaves Features/Functionality/“the what” as the only place where we negotiate.

Which makes answering the original question very easy for BAs. Business Analysts have skills around exactly this question. There are a number of ways a BA can help: perhaps as a proxy customer, perhaps as a Tactical Product Owner, perhaps as a detail guy, or perhaps working with testers. Every team should have one (almost).

For project managers things are decidedly more complex. Much of their traditional work around “when” is redundant, and since we are aiming for stable teams and sizing work to the time, work around “who” is also gone. I can imagine, indeed I have seen, small teams dispense with project managers entirely. You can be successful without a project manager.

However for larger teams there is probably a role that needs filling. At a very basic level there is administration and reporting, there might be co-ordination tasks too, they might work with the BA/Product Owner around stakeholders, and when there is a client/supplier relationship both sides will probably want some managers “managing” the relationship.

But while there is management work to do I do not see a role for “projects.”

Successful software lives: it changes, it requires enhancements and adaptations. Only dead – unused – software doesn’t change. Developing a software product is like building a company: if people stop asking for things you are out of business.

Which conflicts with the PRINCE2 definition of a project: “A temporary organization that is needed to produce a unique and predefined outcome or result at a pre-specified time using predetermined resources.”

Successful software does not have a pre-specified end date, indeed it can be incredibly hard to determine when many projects actually began!

Successful software isn’t temporary and the organizations which support/service it shouldn’t be. They may grow or shrink with time but we should aim for stability.

And since Agile embraces change the outcome isn’t pre-defined either. Indeed since all successful software changes in ways which are difficult for the originators to see there are only short term outcomes.

To me it is obvious that software development does not, and never really has, conformed to the project metaphor. Indeed I think using the project metaphor seriously damages software:
  • It leads to endless, pointless, discussions about “when will it be done”
  • It leads to governance processes that attempt to finish things which are not finished
  • It leads to short term thinking over things like quality
  • It leads companies to disband successful, performing teams – a condition I have termed Corporate Psychopathy.

BAU – business as usual – isn’t a dirty word, it is the norm. Supporting software, adding features, fixing bugs, enhancing products is Business As Usual, and we should be proud of that.

Then if we specifically look at Agile ways of working many of the traditional assumptions of project management are invalidated:
  • Teams are encouraged to self-manage: determine the details of the work to be done and decide how best to tackle it themselves
  • Agile teams are inclusive and non-hierarchical
  • Agile teams communicate peer-to-peer rather than through some centralised communications hub (i.e. a manager)

In short Agile assumes a “Theory Y” way of working, not the “Theory X” which is implicit in too many project management texts.

And if you think I am radical then let me tell you I am a moderate, there are those who will go further. Look at my Scrum-Scrum-Scrum post last year and the discussion which followed, or watch Christin Gorman’s video “Adding a project manager to a Scrum team is like making cake with mayonnaise.”

The net upshot of all this is simple: Project Managers need to reinvent their role. And the reinvented role probably doesn’t include the word “project”.

For any software development team – especially one that wishes to be considered agile – the default choice is probably: no project manager. The onus is on the role holder to demonstrate how they add value to the team and to the wider organisation.

The real lessons of Lego (for software)

How many out there have heard this: “What we want is software which is like Lego, that way we can snap together the pieces we need to build whatever it is we want.”

Yes? Heard that?

Let’s leave aside the fact that software is a damn sight more complex than small plastic bricks, let’s leave aside too the fact that Lego makes a fine kids’ toy but on the whole we don’t use it to build anything we use at work (like a car, bridge or house), and let’s pretend these people have a point….

We start with the standard Lego brick:

Of course there are multiple colours:

And a few variations:

Which now allows us to snap together a useful wall:

Walls are good but to build anything more interesting we need some more pieces, maybe some flat pieces:

Or some thinner pieces, or some bigger pieces:

It might also help to have some angled pieces, you know for roofs and things, and remember the slant can go either way, up or down:

I think we’re heading for a house so we will need some doors and windows:

Personally I like wheels, I like things to move, and so do my kids. So we need some wheels – different size of course, and some means of attaching them to the other Lego blocks:

If we are building a car we need to be able to see out….

Umm… my imagination is running away, we need to steer, and how about a helicopter, and a ramp to load things onto the car/boat/plane/.…

Still, something missing…. people!

Lego is not homogeneous: when you say “Lego brick software” people are probably thinking of the first 2×8 block I showed. But to make anything really interesting you need lots of other bits. Some of those bits have very specific uses.

I’ve not even started on Lego Space/Star Wars, Harry Potter Lego or whatever this year’s theme is. Things get really super interesting when you get to Technical Lego.

But there are some things that every Lego enthusiast knows:
  • You regularly feel the need for some special part which doesn’t exist
  • You never have enough of some parts
  • You always make compromises in your design to work with the parts you have

The components themselves are less important than the interface. The interface stays consistent even as the blocks change. And over the years the interface has changed: sure, the 2×4 brick is the same, but some interfaces (e.g. wheels with metal pieces) have been removed (or deprecated?) while others have been added. There is an element of commonality but the interface has been allowed to evolve.
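
To translate that point into software terms, here is a toy sketch of my own (not from Lego, obviously): parts can vary wildly so long as they honour a stable connecting interface.

```python
# A toy illustration: parts may vary but all honour the same "stud"
# interface, which is what lets them snap together.
from abc import ABC, abstractmethod

class Brick(ABC):
    """The stable interface: anything exposing studs can be built upon."""
    @abstractmethod
    def studs(self) -> int: ...

class ClassicBrick(Brick):
    def __init__(self, width: int, length: int) -> None:
        self.width, self.length = width, length
    def studs(self) -> int:
        return self.width * self.length

class WheelBase(Brick):
    """A specialised part with a very specific use, same interface."""
    def studs(self) -> int:
        return 4

def can_stack(top: Brick, bottom: Brick) -> bool:
    # A crude rule: the bottom part must offer at least as many studs
    # as the top part needs.
    return bottom.studs() >= top.studs()

print(can_stack(ClassicBrick(2, 4), ClassicBrick(2, 8)))  # True
print(can_stack(ClassicBrick(2, 8), WheelBase()))         # False
```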

So the next time someone says: “We need software like Lego bricks” remind them of these points.

Testing triangles, pyramids and circles, and UAT

A few months ago Markus Gärtner introduced me to the Testing Triangle, or Testing Pyramid. It looks like this:

If you Google you will find a few slightly different versions, some going by the name of the Testing Pyramid.

Now a question: where did this come from? Who should I credit with the original? Markus thinks it might be Mike Cohn but he’s not sure.

This triangle is actually pretty similar to a diagram I’ve been drawing for a while when I do Agile training:

But it occurs to me the triangle should be pushed over onto its side, and when you do that you can add some axes which carry more information:

At the base, the Unit Tests, there are lots and lots of tests and they typically execute in milliseconds. I typically hear people say there is between two and four times as much test code (for unit tests) as production code.
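
As a minimal illustration of the kind of test that lives at the base of the pyramid (the function and tests here are my invention):

```python
# A sketch of a base-of-pyramid unit test: small, isolated, and fast
# enough to run thousands of times a day.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after a percentage discount, rounded to pennies."""
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

if __name__ == "__main__":
    unittest.main()  # the whole suite runs in milliseconds
```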

As you rise up there are fewer tests, tests take longer to execute and therefore tend to be run less often. Also as you rise up it becomes more difficult to automate the tests: you can automate, but it requires more effort and more co-ordination. Therefore manual testing continues to exist.

True, manual testing can never be eliminated entirely for various reasons (e.g. exploratory testing) but you can get very high levels of automated acceptance or system tests.

Cost is proportional to time – and manual testing is an order of magnitude more expensive than automated tests over anything other than the very short run – and therefore as tests take longer to run costs go up.

Now a word on UAT or Beta testing.

As far as I am concerned User Acceptance Testing is the same as proper Beta Testing. Both mean: showing a potentially finished product to real life users and getting their response.

The difference is: UAT tends to happen in corporate environments where users work for the company and will need to use the software. Beta testing tends to happen in software vendor environments and it means showing the software to, or even giving it to, potential users outside the company.

Thus: UAT and Beta testing can only truly be performed by USERS (yes I am shouting). If you have professional testers performing it then it is in effect a form of System Testing.

This also means UAT/Beta cannot be automated because it is about getting real life users to use the software and getting their feedback. If users delegate the task to a machine then it is some other form of testing.

And having users play with software means they are not doing their actual job so UAT is very expensive: expensive because it is manual and expensive because something else is not being done. Given this cost it is sensible to minimise UAT whenever possible.

In my experience most UAT phases (although not beta phases) are in effect an additional round of System Test and are frequently performed by Professional Testers. The fact that these professional testers are doing UAT is the giveaway. Professional Testers are not Users, they are Professional Testers.

(The other giveaway is that Professional Testers doing UAT are usually paid for by some group other than the IT department, i.e. IT test is not trusted, perhaps for good reason.)

More than once I have seen System Test / Acceptance Test cycles which are either completely absent or very poorly done. This necessitates a second round, which is called UAT (possibly to hide the actual problem?).

I also see situations where Unit Testing is poorly done or completely omitted.

If the low levels of this triangle are done well – and quality built in – then UAT should be reduced to a) real users, b) as near as possible a formality.

UAT is a very expensive way to find bugs, and it also shows that something was missing in the development process – either in coding or, before that, in understanding the users. It also, perhaps more worryingly, shows a failing in the earlier test steps.

Dialogue Sheets – update & new planning sheet

Last month InfoQ carried an update on the use of retrospective dialogue sheets. The use of these sheets continues to grow and I continue to receive good feedback.

If you’ve tried the sheets and haven’t sent me some feedback then please e-mail me and let me know about your experiences.

And for those of you who’ve not tried a dialogue sheet retrospective, what’s stopping you? The Dialogue Sheet PDFs are free to download. For those who don’t have a large A1 printer/plotter there is a print-on-demand service for all the dialogue sheets.

I have also added a new sheet, this one designed for Iteration Planning Meetings. I think one of the things teams new to Agile struggle with is the initial planning meetings. So I’ve created a new A1 sheet to guide teams through the planning meeting. This one has a bit more of a board game feel because there are activities to do and steps to repeat.

One thing I realised when creating this sheet is: planning meetings are not simple, and there are lots of variations in them. So I set about writing a Guide to Iteration Planning meetings. This turned out to be a little longer than I expected but this just goes to show what is involved. If you intend to use the sheet then I recommend you read the guide too.

Like the other sheets the planning sheet is free to download although I ask people to register – the guide is free without registration. Again, the printed version is available from the print-on-demand service too.

When I run training courses I always give teams one or two retrospective dialogue sheets to use for their first retrospectives. I’m hoping that the planning sheet will fill a similar role at the start of the iteration.

I should say, I’m not completely sure this iteration planning sheet qualifies as a dialogue sheet in the strictest sense because it is much more guided and less about discussion. That said, I don’t think there are any hard and fast rules on what is, and what is not, a true dialogue sheet.

Agile Clinic: Dear Allan, we have a little problem with Agile…

Consider this blog an Agile Clinic. On Friday an e-mail dropped into my mailbox asking if I could help. The sender has graciously agreed to let me share the mail and my advice with you, all anonymously of course…

The sender is new to the team, new to the company, they are developing a custom web app for a client, i.e. they are an ESP or consultancy.

“the Developers work in sprints, estimating tasks in JIRA as they go. Sprints last three weeks, including planning, development and testing. I have been tasked to produce burndowns to keep track of how the Dev cells are doing.”

OK, sprints and estimates are good. I’m no fan of Jira, or any other electronic tool, but most teams use one so nothing odd so far. But then:

“three week sprints”: these are usually a sign of a problem themselves.

I’ll rant about 3-week Sprints some other time but right now the main points are:

3 weeks are not a natural rhythm, there is no natural cycle I know of which takes 3 weeks; 1 week yes, 2 weeks yes, 4 weeks (a month) yes, but 3? No.

In my experience teams do 3 weeks because they don’t feel they are up to shorter iterations. But the point of short iterations is to make you good at working in short cycles, so extending the period means you’ve ducked the first challenge.

“planning, development and testing”: Good, but I immediately have two lines of enquiry to pursue. Planning should take an afternoon; if it needs to be called out as an activity I wonder if it is occupying a lot of time.

Second: testing should come before development, well, at least “test scripts”, and it should be automated so if it comes after development it is trivial.

Still, many teams are in this position so it might be nothing. Or again, it could be a sign the team are not challenging themselves.

Also, you are doing the burndowns but by the sounds of it you are not in the dev teams? And you have Jira? I would expect that either Jira produces them automatically or each dev “cell” produces their own. Again, more investigation needed.

Continuing the story:

“The problem I’m encountering is this: we work to a fixed timetable, so it isn’t really Agile.”

No, Agile works best in fixed deadline environments. See Myth number 9 in my recent Agile Connection piece “12 Myths of Agile” – itself based on an earlier blog “11 Agile Myths and 2 Truths”.

“We have three weeks to deliver xyz and if it gets to the end of the sprint and it isn’t done, people work late or over the weekend to get it done.”

(Flashback: I worked on Railtrack privatisation in 1996/7, then too we worked weekends, death march.)

Right now the problem is becoming clear, or rather two problems.

Problem 1: It isn’t done until it is done and accepted by the next stage (testers, users, proxy users, etc.). If it isn’t done then carry it over. Don’t close it and raise bugs, just don’t close it.

Problem 2: The wrong solution is being applied when the problem is encountered, namely: Overtime.

As a one-off, Overtime might fix the problem but it isn’t a long term fix. Only the symptoms of the problem are being fixed, not the underlying problem, which explains why it is recurring. (At least it sounds like the problem is recurring.)

Overtime, and specifically weekend working, is also a particularly nasty medicine to administer because it detracts from the team’s ability to deliver next time. If you keep taking this medicine you might stave off the disease but the medicine will kill you in the end.

The old “40 hour work week” or “sustainable pace” seems to be ignored here – but then to be fair, an awful lot of Scrum writing ignores these XP ideas.

Lines of enquiry here:

  • What is the definition of done?
  • Testing again: do the devs know the end conditions? Is it not done because it hasn’t finished dev or test?
  • What is the estimation process like? Sounds like too much is being taken into the sprint
  • Are the devs practicing automated test first unit testing? aka TDD
  • Who’s paying for the Overtime? What other side effects is it having?

“This means burndowns don’t make any sense, right? Because there’s no point tracking ‘time remaining’ when that is immaterial to the completion of the task.”

Absolutely Right.

In fact it is worse than that because either you are including Overtime in your burn-downs, in which case your sprints should be longer, or you are not, in which case you are ignoring evidence you have in hand.

The fact that burndowns are pointless is itself a sign of a problem.

Now we don’t know here: What type of burn-downs are these?

There are (at least) two types of burn downs:

  • Intra-sprint burn-down which track progress through a sprint and are often done in hours
  • Extra-sprint burn-down which tracks progress against goal over multiple sprints; you have a total amount of work to do and you burn a bit down each sprint.
I’ve never found much use for intra-sprint burn-downs, though some people do. I simply look at the board and see how many cards are in done and how many in to-do.

And measuring progress by hours worked is simply flawed. (Some of my logic on this is in last year’s “Story points series” but I should blog more on it.)

Extra-sprint burn-downs on the other hand I find very useful because they show the overall state of work.
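
For instance, here is a minimal sketch of an extra-sprint burn-down (all numbers invented): burn completed units of work against the total each sprint.

```python
# A sketch of an extra-sprint burn-down: burn completed units of work
# against the total each sprint. All numbers here are invented.
total_stories = 60
completed_per_sprint = [7, 9, 6, 8]  # stories accepted as done, per sprint

remaining = total_stories
for sprint, done in enumerate(completed_per_sprint, start=1):
    remaining -= done
    print(f"Sprint {sprint}: {done} done, {remaining} remaining")

average = sum(completed_per_sprint) / len(completed_per_sprint)
print(f"At ~{average:.1f} per sprint, about {remaining / average:.0f} sprints remain")
```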

From what is said here it sounds like hours-based intra-sprint burn-downs are in use. Either the data in them is bad or the message they are telling is being ignored. Perhaps both.

“I was hoping you might be able to suggest a better way to do it? I feel like we should be tracking project completion, instead, i.e. we have xyz to do, and we have only done x&y. My main question is: Is there a useful way to use estimates when working to a fixed deadline by which everything needs to be completed by?”

Well Yes and Yes.

But, the solution is more than just changing the burn-down charts and requires a lot of time – or words – to go into. I suspect your estimating process has problems so without fixing that you don’t have good data.

Fortunately I’ve just been writing about a big part of this: Planning meetings.

And I’ve just posted a Guide to Planning Meetings on the Software Strategy website. It is designed to accompany a new dialogue sheet style exercise. More details soon. I should say both the guide and sheet fall under my “Xanpan” approach but I expect they are close enough to XP and Scrum to work for most teams.

This quote also mentions deadlines again. I have another suspicion I should really delve into, another line of enquiry.

Could it be that the Product Owners are not sufficiently flexible in what they are asking for and are therefore setting the team up to fail each sprint? By fail I mean asking them to take on too much – which, if the burn-downs and velocity measurements aren’t useful, could well be the case.

We’re back to the Project Manager’s old friend “The Iron Triangle.”

Now as it happens I’ve written about this before. A while ago in my ACCU Overload piece “Triangle of Constraints” and again more recently (I’ve been busy of late) in Principles of Software Development (which is a work in progress but available for download).

This is where the first mail ended, but I asked the sender a question or two and I got more information:

“let’s say the Scrum planners plan x hours work for the sprint. Those x hours have to be complete by the end – there’s no room for anything moving into later sprints.”

Yikes!

Scrum Planners? – I need to know more about that

Planning in hours – there is a big part of your problem.

No room to move work to later sprints – erh… I need to find out more about this but my immediate interpretation is that someone has planned out future sprints rather rigidly. If this is the case you aren’t doing Agile, you aren’t doing Scrum, and we really need to talk.

I’m all for thinking about future work – I call them quarterly plans these days – but they need to be flexible. See Three Plans for Agile from a couple of years back (the long version is better; the short version was in RQNG).

“Inevitably (with critical bugs and change requests that [deemed] necessary to complete in this sprint (often)) the work increases during the sprint, too.”

Work will increase, new work will appear, and that’s why you should keep the sprints FLEXIBLE. You’ve shot yourself in the foot by the sounds of it. I could be wrong, I might be missing something here.

Right now:

  • Bugs: I’m worried about your technical practices – what is your test coverage? How are the developers at TDD? You shouldn’t be getting enough bugs to worry about
  • Change requests are cool if you are not working to a fixed amount of work and if you haven’t locked your sprints down in advance.
You can have flexibility (space for bugs and change requests) or predictability (forward scheduling) but you can’t have both. And I can prove that mathematically.

You can approach predictability with flexibility if you work statistically – something I explore in Xanpan – but you can only do this with good data. And I think we established before that your data is shot through.
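
To give a flavour of what working statistically can look like (this is my sketch, not a method from the correspondent’s mail, and the velocity history is invented), you can sample past sprint outcomes and forecast a range rather than a single date:

```python
# A sketch of statistical forecasting: sample past sprint velocities to
# estimate how many sprints the remaining backlog might take.
import random

velocity_history = [11, 8, 13, 9, 12, 10]  # points completed in past sprints
backlog_points = 70

def sprints_to_finish() -> int:
    remaining, sprints = backlog_points, 0
    while remaining > 0:
        remaining -= random.choice(velocity_history)  # replay a past sprint
        sprints += 1
    return sprints

runs = sorted(sprints_to_finish() for _ in range(10_000))
print("50% confidence:", runs[len(runs) // 2], "sprints")
print("90% confidence:", runs[int(len(runs) * 0.9)], "sprints")
```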

“This leads to people ‘crunching’ or working late/weekends at the end of the sprint to get it all done. It is my understanding that this isn’t how Agile is supposed to work.”

Yes. You have a problem.

So how should you fix this?

Well obviously the first thing to do is to hire me as your consultant, I have very reasonable rates! So go up your management chain until you find someone who sees you have a problem and would like it fixed; if they don’t have the money then carry on up the chain.

Then I will say, first at an individual level:

  • The intra-sprint, hours-based burn-downs are meaningless. Replace them with extra-sprint charts: count your delivery units, e.g. User Stories, Use Cases, Cukes, Functional spec items, whatever the unit of work is you give to developers and get paid to deliver; count them and burn the completed units each sprint
  • Track bugs which escape the sprint; this should be zero but in most cases is higher, and if it’s in double figures you have serious problems. The more bugs you have the longer your schedule will be and the higher your costs will be.
  • See if you can switch to a cumulative flow diagram showing: work delivered (bottom), work done (developed) but not delivered, work to do (and how change requests are increasing it), and bugs to do – see the sketch after this list
  • Alternatively produce a layered burn-down chart (total work to do on the bottom), new work (change requests) and outstanding bugs (top)
  • Track the overtime and find out who is paying for it; they have pain, so find out what problem they see
None of these charts is going to fix your problems but they should give you something more meaningful to track than what you have now.
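
As promised above, here is a sketch of the cumulative flow idea; the data is invented and I’m assuming matplotlib is available.

```python
# A sketch of a cumulative flow diagram (invented data). Bands stack:
# delivered at the bottom, then done-but-not-delivered, then work still
# to do (which grows as change requests arrive).
import matplotlib.pyplot as plt

sprints = list(range(1, 7))
delivered = [0, 5, 9, 15, 22, 30]       # cumulative units delivered
done = [4, 9, 15, 22, 28, 36]           # cumulative units developed
total_scope = [40, 42, 45, 45, 49, 52]  # total to do, growing with CRs

plt.fill_between(sprints, 0, delivered, label="Delivered")
plt.fill_between(sprints, delivered, done, label="Done, not delivered")
plt.fill_between(sprints, done, total_scope, label="To do")
plt.xlabel("Sprint")
plt.ylabel("Units of work (e.g. stories)")
plt.legend()
plt.show()
```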

Really you need to fix the project. For this I suspect you need:

  • Overhaul the planning process: my guess is your estimation system is not fit for purpose and using dice would be more accurate right now
  • Reduce sprints to 1 week, with a 4 weekly release
  • Push Jira to one side and start working with a physical board (none was mentioned so I assume there is none)
  • Ban overtime
We should also look at your technical practices and testing regime.

These are educated guesses based on the information I have; I’d like to have more but I’d really need to see it.

OK, that was fun, that’s why I’ve done it at the weekend!

Anyone else got a question?

To estimate or not to estimate, that is the question

Disparaging those who provide software estimates seems to be a growing sport. At conferences, in blogs and the twitter-verse it seems open season for anyone who dares to suggest a software team should estimate. And heaven help anyone who says that an estimate might be accurate!

Denigrating estimation seems to be the new testosterone-charged must-have badge for any Agile trainer or coach. (I’ve given up on the term Agile Coach and call myself an Agile Consultant these days!)

Some of those who believe in estimation are hitting back. But perhaps more surprisingly I’ve heard people who I normally associate with the Scrum-Planning Poker-Burndown school of estimation decry estimation and join the race to no-estimation land.

This is all very, very sad and misses the real questions:

  • When is it useful to estimate? And when is it a waste of time and effort?
  • In what circumstances are estimates accurate? And how can we bring those circumstances about?
These are the questions we should be asking. This is what we should be debating. Rather than lobbing pot-shots at one another the community should be asking: “How can we produce meaningful estimates?”

In the early days of my programming career I was a paid-up member of the “it will be ready when it’s ready” school of development. I still strongly believe that, but I also now believe there are ways of controlling “it” (to make it smaller/shorter) and there are times when you can accurately estimate how long it will take.

David Anderson and Kanban may have fired the opening shots in the Estimation Wars but it was Vasco Duarte who went nuclear with his “Story Points considered harmful” post. I responded at the time to that post with five posts of my own (Story points considered harmful? – Journey’s start, Duarte’s arguments, Story points An example, Breakdown and Conclusions and Hypothesis) and there is more in my Notes on Estimation and Retrospective Estimation essay and Humans Can’t Estimate blog post – so I’ve not been exactly silent on this subject myself.

Today I believe there are circumstances where it is possible to produce accurate estimates which will not cost ridiculous amounts of time and money to make. One of my clients in the Cornish Software Mines commented “These aren’t estimates, that is Mystic Meg stuff, I can bring a project in to the day.”

I also believe that for many, perhaps most, organisations, these conditions don’t hold and estimation is little more than a placebo used to placate some manager somewhere.

So what are these circumstances? What follows is a list of conditions I think help teams make good estimates. This is not an exhaustive list, I’ve probably missed some, and it may be possible to obtain accuracy with some conditions absent. Still, here goes….

  • The team contains at least two dedicated people
  • Stable team: teams which add and lose members regularly will not be able to produce repeatable results and will not be able to estimate accurately. (And there is an absence of Corporate Psychopathy.)
  • Stable technology and code base: even when the team is stable, if you ask them to tackle different technologies and code bases on a regular basis their estimates will lose accuracy
  • Track record of working together and measuring progress, i.e. velocity: accuracy can only be obtained over the medium to long run by benchmarking the team against their own results
  • Track the estimates, work the numbers and learn lessons. Both high level “Ball Park” and detailed estimates need to be tracked and analysed for lessons. Then, and only then, can delivery dates be forecast
  • All work is tracked: if the team have to undertake work on another project it is estimated (possibly retrospectively) in much the same manner as the main stream of work and fed into the forecasts
  • Own currency: each team is different and needs to be scored in its own currency which is valued by what they have done before, i.e. measure teams in Abstract Points, Story Points, Nebulous Units of Time, or some other currency unit; this unit measures effort and the value of the unit is determined by past performance. In Economists’ lingo this is a Fiat Currency
  • Own estimates: the team own the estimates and can change them if need be, others outside the team cannot
  • Team estimates: the team who will do the work collectively estimate the work. Beware influencers: in making estimates the team needs to avoid anchoring; take a “Wisdom of crowds” approach – take multiple independent estimates and treat experts and anyone in authority like anyone else.
  • (Planning Poker is a pretty good way of addressing some of these points, I still teach planning poker although there may be better ways out there)
  • Beware The Planning Fallacy – some of the points above are intended to help offset this
  • Beware Goodhart’s Law, avoid targeting: if the estimates (points) in any way become targets you will devalue your own currency; when this happens you will see inflation and accuracy will be lost
  • Don’t sign contracts based on points; this violates Goodhart’s Law
  • Overtime is not practiced; if it is then it is stable and paid for
  • Traditional time tracking systems are ignored for forecasting and estimating purposes
  • Quality: teams pay attention to quality and strive to improve it. (Quality here equates to rework.)
  • The team aim for overall accuracy in estimates not individual estimates; for any given single piece of work “approximately right is better than precisely wrong”
  • Dependencies & vertical teams: teams are not significantly dependent on other groups or teams; they possess the skills and authority to do the majority of the work they need to
  • The team are able to flex “the what”, the thing they are building, through negotiations and discussions. (Deadlines can be fixed, team members should be fixed, “the what” should be flexible.)
  • The team work to a series of intermediate deadlines
  • It helps if the team are co-located and use a common visual tracking system, e.g. a white board with cards
  • Caveat: even with all the above I wouldn’t guarantee any forecasts beyond the next 3 months; too much of the above relies on stability and beyond 3 months, certainly beyond 6, that can’t be guaranteed
My guess – and it is only a guess – is that when these conditions don’t hold you will get the random results that Duarte described. Sure you might be able to get predictable results with a subset of these factors but I’m not sure which subset.

The more of these factors are absent the more likely your velocity figures will be random and your estimates and forecasts waste. When that happens you are almost certainly better off dumping estimation – at best it is a placebo.

Looking at this list now I can see how some would say: “There are too many conditions here to be realistic, we can’t do it.” For some teams I’d have to agree with you. Still, I think many of these forces can be addressed; I know at least one team that can do this. For others the prognosis is poor: for these companies estimation is worse than waste because the forecasts it produces mislead. You need to look for other solutions – either other estimation techniques or managing without.

I’d like to think we can draw the estimation war to an end and focus on the real question: How do we produce meaningful estimates and when is it worth doing so?

Requirements and Specifications

As I was saying in my last blog, I’m preparing for a talk at Skills Matter entitled “Business Analyst, Product Manager, Product Owner, Spy!” – which I should just have entitled “Requirements: Whose job are they anyway?” – and so I’ve been giving a lot of thought to requirements.

I finished the last blog entry noting that I was concerned by the way I saw Behaviour Driven Development (BDD) going, and I worried that it was becoming a land-grab by developers on the “need side” of development. (Bear with me, I’ll come back to this point at the end.)

Something I didn’t mention in the last blog was that I thought: if I’m doing a talk about “need” I’d better clearly distinguish Requirements from Specifications. So I turned to my bookshelves….

The first book I picked up was Mike Cohn’s User Stories Applied, the nearest thing the Agile-set has to a definitive text on requirements. I turned to the index and…. nothing. There is no mention of Specifications or of Requirements. The nearest he comes is a reference to “Requirements Engineering” efforts. Arh.

Next up Alistair Cockburn’s Writing Effective Use Cases, the shortest and best reference I know to Use Cases. No mention of Specifications here either, and although there are some mentions of Requirements there isn’t a definition of what Requirements are.

So now I turned to a standard textbook on requirements: Discovering Requirements: How to Specify Products and Services by Alexander and Beus-Dukic. A good start, the words Requirements and Specify are in the title. Specifications gets a mention on page 393, and that’s it. And even there there isn’t much to say. True, Requirements runs throughout the book but it doesn’t help me compare and contrast.

Now I have a lot of respect for Gojko Adzic so I picked up his Specification by Example with great hope. This has Specifications running through it like the words in a stick of seaside rock, and there are half a dozen mentions of requirements in the index. But….

When Gojko does talk about Requirements he doesn’t clearly differentiate between Requirements and Specifications. This seems sloppy to me, unusual for Gojko, but actually I think there is an important point here.

In everyday, colloquial, usage the words Requirements and Specifications are pretty much interchangeable. In general teams, and Developers in particular, don’t differentiate. There is usually one or the other, or neither, and they are both about “what the software should do.” On the occasions where there are both they are overkill and form voluminous documentation (and neither gets read).

The fact that so many prominent books duck the question of requirements and specification makes me think this is a fairly common issue. (It also makes me feel less guilty about any fuzziness in my own mind.)

To solve the issue I turned to Tom Gilb’s Competitive Engineering and true to form Tom helpfully provided definitions of both:

  • “A requirement is a stakeholder-desired, or needed, target or constraint” (page 418)
  • “A ‘specification’ communicates one or more system ideas and/or descriptions to an intended audience. A specification is usually a formal, written means for communicating information.” (page 400)
This is getting somewhere – thanks Tom. Requirements come from stakeholders, Specifications go to some audience. And the Specification is more formal.

Still it’s not quite what I’m after, and in the back of my mind I knew Michael Jackson had a take on this so I went in search of his writing.

Deriving Specifications from Requirements: An Example (Jackson & Zave, ACM press 1995) opens with exactly what I was looking for:

  • “A requirement is a desired relationship among phenomena of the environment of a system, to be brought about by the hardware/software machine that will be constructed and installed in the environment.
  • A specification describes machine behaviour sufficient to achieve the requirement. A specification is a restricted kind of requirement: all the environment phenomena mentioned in a specification are shared with the machine; the phenomena constrained by the specification are controlled by the machine; and the specified constraints can be determined without reference to the future. Specifications are derived from requirements by reasoning about the environment, using properties that hold independently of the behaviour of the machine.”
There we have it, and it fits with Tom’s description. Let me summarise:

  • A requirement is a thing the business wants the system to bring about
  • A specification is a restricted, more exact, statement derived from the requirement. I think it’s safe to assume there can be multiple specifications flowing from one requirement.
From this I think we can make a number of statements in the Agile context:

  • Agile work efforts should see requirements as goals
  • A User Story, or plain Story, may be a Requirement itself, or it might be a Requirement or Specification which follows from a previous Requirement.
  • At the start of a development iteration the requirement should be clear but the specification may be worked out during the iteration by developers, testers, analysts or others.
  • Over-analysis and refinement of specifications will restrict the team’s ability to make trade-offs and will also prove expensive as requirements change during the development effort.
  • Therefore, while requirements should be known at least before the start of the iteration, specifications should only be finalised during the iteration.
In discussing this on Twitter David Edwards suggested the example of a business requirement to provide a login screen. Presumably the business requirement would be something like “All users should be validated (by means of a login system).” From this would flow the need to be able to create a user, delete a user, administer a user, etc. etc. These could be thought of as requirements themselves or as specifications. Certainly what would be a specification would be something like “Ensure all passwords contain at least 8 characters and 1 numeric.”

Which brings us back to BDD.

Having worked through this I conclude that BDD is an excellent specification tool. After all BDD is an implementation of Specification by Example.
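
To make that concrete with David’s login example, here is a minimal sketch of specification by example written as a plain test; validate_password and the scenarios are my invention, not from any particular BDD tool.

```python
# A sketch of specification by example for the password rule above:
# "at least 8 characters and 1 numeric". The function is hypothetical.
import re

def validate_password(password: str) -> bool:
    """At least 8 characters and at least 1 numeric character."""
    return len(password) >= 8 and bool(re.search(r"\d", password))

def test_password_meeting_the_rule_is_accepted():
    # Given a password of 8 characters including a digit
    # When it is validated
    # Then it is accepted
    assert validate_password("s3cretpw")

def test_short_password_is_rejected():
    assert not validate_password("s3cret")

def test_password_without_a_digit_is_rejected():
    assert not validate_password("secretpassword")
```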

And while fleshing out specifications may lead to the discovery of new requirements, or the reconsideration of existing requirements, BDD is not primarily a requirements mechanism and probably shouldn’t be used as one.

Requirements need to be established by some other mechanism, deriving specifications from those requirements may well be done using BDD or another SbE technique.

Now, while BDD and SbE may well give Developers first class specification tools these tools should not be mistaken for requirements tools and shouldn’t be used as such.

Pheww, does that all make sense?

I need to ponder on this; I suspect there is a rich seam of insight in being clear about specifications and requirements.