
The real lessons of Lego (for software)

How many out there have heard this: “What we want is software which is like Lego, that way we can snap together the pieces we need to build whatever it is we want.”

Yes? Heard that?

Let’s leave aside the fact that software is a damn sight more complex than small plastic bricks; let’s leave aside too the fact that Lego makes a fine kids’ toy but on the whole we don’t use it to build anything we use at work (like a car, bridge or house); and let’s pretend these people have a point….

We start with the standard Lego brick:

Of course there are multiple colours:

And a few variations:

Which now allows us to snap together a useful wall:

Walls are good but to build anything more interesting we need some more pieces, maybe some flat pieces:

Or some thinner pieces, or some bigger pieces:

It might also help to have some angled pieces, you know for roofs and things, and remember the slant can go either way, up or down:

I think we’re heading for a house so we will need some doors and windows:

Personally I like wheels, I like things to move, and so do my kids. So we need some wheels – different size of course, and some means of attaching them to the other Lego blocks:

If we are building a car we need to be able to see out….

Umm… my imagination is running away, we need to steer, and how about a helicopter, and a ramp to load things onto the car/boat/plane/.…

Still, something missing…. people!

Lego is not homogeneous. When you say “Lego brick software” people are probably thinking of the first 2×8 block I showed. But to make anything really interesting you need lots of other bits. Some of those bits have very specific uses.

I’ve not even started on Lego Space/StarWars, Harry Potter Lego or whatever this year’s theme is. Things get really super interesting when you get to Technical Lego.

But there are some things that every Lego enthusiast knows:
  • You regularly feel the need for some special part which doesn’t exist
  • You never have enough of some parts
  • You always make compromises in your design to work with the parts you have

The components themselves are less important than the interface. The interface stays consistent even as the blocks change. And over the years the interface has changed: sure, the 2×4 brick is the same, but some interfaces (e.g. wheels with metal pieces) have been removed (or deprecated?) while others have been added. There is an element of commonality but the interface has been allowed to evolve.
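The point translates directly into code. A minimal sketch (the `Brick` classes here are invented for illustration, not any real library) of components that vary freely while the connection interface stays fixed:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch: the interface (how bricks connect) is fixed,
# while the bricks themselves vary freely.
class Brick(ABC):
    @abstractmethod
    def studs(self) -> int:
        """Number of studs on top -- the 'interface' every brick shares."""

    def snaps_onto(self, other: "Brick") -> bool:
        # Any brick can sit on any other brick with at least one stud.
        return other.studs() >= 1

class StandardBrick(Brick):   # the classic 2x4
    def studs(self) -> int:
        return 8

class AngledBrick(Brick):     # roof piece: fewer studs, same interface
    def studs(self) -> int:
        return 2

wall = [StandardBrick(), AngledBrick()]
print(all(b.snaps_onto(wall[0]) for b in wall))  # True
```

New brick shapes can be added for years without breaking old ones, so long as `studs()` and `snaps_onto()` keep their meaning.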

So the next time someone says: “We need software like Lego bricks” remind them of these points.

Testing triangles, pyramids and circles, and UAT

A few months ago Markus Gartner introduced me to the Testing Triangle, or Testing Pyramid. It looks like this:

If you Google you will find a few slightly different versions, and some go by the name of Testing Pyramid.

Now a question: where did this come from? Who should I credit with the original? Markus thinks it might be Mike Cohn but he’s not sure.

This triangle is actually pretty similar to a diagram I’ve been drawing for a while when I do Agile training:

But it occurs to me the triangle should be pushed onto its side, and when you do that you can add some axes which add more information:

At the base, the Unit Tests, there are lots and lots of tests and they typically execute in milliseconds. I typically hear people say there is between two and four times as much test code (for unit tests) as production code.

As you rise up there are fewer tests, tests take longer to execute and therefore tend to be run less often. Also as you rise up it becomes more difficult to automate the tests, you can automate but it requires more effort and more co-ordination. Therefore manual testing continues to exist.

True, manual testing can never be eliminated entirely for various reasons (e.g. exploratory testing) but you can get very high levels of automated acceptance or system tests.
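To illustrate why unit tests sit at the base, a quick sketch (the `apply_discount` function is invented for the example): tests with no I/O and no setup can run by the thousand in well under a second, which is what makes running them on every build cheap.

```python
import time

# Hypothetical production code under test
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

# Unit test: no I/O, no environment setup -- it exercises pure logic,
# so it runs in microseconds.
def test_apply_discount():
    assert apply_discount(100.0, 25.0) == 75.0
    assert apply_discount(9.99, 0.0) == 9.99

start = time.perf_counter()
for _ in range(1000):
    test_apply_discount()
elapsed = time.perf_counter() - start
print(f"1000 unit test runs in {elapsed:.3f}s")  # typically well under a second
```

A system or acceptance test of the same rule would need a deployed application, test data and probably a UI or API driver, which is why those tests are fewer, slower and run less often.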

Cost is proportional to time – and manual testing is an order of magnitude more expensive than automated tests over anything other than the very short run – and therefore as tests take longer to run costs go up.

Now a word on UAT or Beta testing.

As far as I am concerned User Acceptance Testing is the same as proper Beta Testing. Both mean: showing a potentially finished product to real life users and getting their response.

The difference is: UAT tends to happen in corporate environments where users work for the company and will need to use the software. Beta testing tends to happen in software vendor environments and it means showing the software, even giving it to, potential users outside the company.

Thus: UAT and Beta testing can only truly be performed by USERS (yes I am shouting). If you have professional testers performing it then it is in effect a form of System Testing.

This also means UAT/Beta cannot be automated because it is about getting real-life users to use the software and getting their feedback. If users delegate the task to a machine then it is some other form of testing.

And having users play with software means they are not doing their actual job so UAT is very expensive: expensive because it is manual and expensive because something else is not being done. Given this cost it is sensible to minimise UAT whenever possible.

In my experience most UAT phases (although not beta phases) are in effect an additional round of System Test and are frequently performed by Professional Testers. The fact that these professional testers are doing UAT is the giveaway. Professional Testers are not Users, they are Professional Testers.

(The other giveaway is that Professional Testers doing UAT are usually paid for by some group other than the IT department, i.e. IT test is not trusted, perhaps for good reason.)

More than once I have seen System Test / Acceptance Test cycles which are either completely absent or very poorly done. This necessitates a second round, which is called UAT (possibly to hide the actual problem?)

I also see situations where Unit Test is poorly done or completely omitted.

If the low levels of this triangle are done well – and quality built in – then UAT should be reduced to a) real users, b) as near as possible a formality.

UAT is a very expensive way to find bugs, it also shows that something was missing in the development process. Either in coding or before that in understanding the users. It also, perhaps more worryingly, shows a failing in the earlier test steps.

Dialogue Sheets – update & new planning sheet

Last month InfoQ carried an update on the use of retrospective dialogue sheets. The use of these sheets continues to grow and I continue to receive good feedback.

If you’ve tried the sheets and haven’t sent me some feedback then please e-mail me and let me know about your experiences.

And for those of you who’ve not tried a dialogue sheet retrospective, what’s stopping you? The Dialogue Sheet PDFs are free to download. For those who don’t have a large A1 Printer/Plotter there is a print on demand service for all the dialogue sheets.

I have also added a new sheet, this one designed for Iteration Planning Meetings. I think one of the things teams new to Agile struggle with is the initial planning meetings. So I’ve created a new A1 sheet to guide teams through the planning meeting. This one has a bit more of a board game feel because there are activities to do and steps to repeat.

One thing I realised when creating this sheet is: planning meetings are not simple, and there are lots of variations in them. So I set about writing a Guide to Iteration Planning meetings. This turned out to be a little longer than I expected but this just goes to show what is involved. If you intend to use the sheet then I recommend you read the guide too.

Like the other sheets the planning sheet is free to download although I ask people to register – the guide is free without registration. Again, the printed version is available from the print on demand service too.

When I do training courses I always give teams one or two retrospective dialogue sheets for them to use for their first retrospectives. I’m hoping that the planning sheet will fill a similar role at the start of the iteration.

I should say, I’m not completely sure this iteration planning sheet qualifies as a dialogue sheet in the strictest sense because it is much more guided and less about discussion. That said, I don’t think there are any hard and fast rules on what is, and what is not, a true dialogue sheet.

Agile Clinic: Dear Allan, we have a little problem with Agile…

Consider this blog an Agile Clinic. On Friday an e-mail dropped into my mailbox asking if I could help. The sender has graciously agreed to let me share the mail and my advice with you, all anonymously of course…

The sender is new to the team, new to the company, they are developing a custom web app for a client, i.e. they are an ESP or consultancy.

“the Developers work in sprints, estimating tasks in JIRA as they go. Sprints last three weeks, including planning, development and testing. I have been tasked to produce burndowns to keep track of how the Dev cells are doing.”

OK, sprints and estimates are good. I’m no fan of Jira, or any other electronic tool, but most teams use one so nothing odd so far. But then:

“three week sprints”: these are usually a sign of a problem themselves.

I’ll rant about 3-week Sprints some other time but right now the main points are:

3 weeks are not a natural rhythm, there is no natural cycle I know of which takes 3 weeks; 1 week yes, 2 weeks yes, 4 weeks (a month) yes, but 3? No.

In my experience teams do 3 weeks because they don’t feel they are up to shorter iterations. But the point of short iterations is to make you good at working in short cycles, so extending the period means you’ve ducked the first challenge.

“planning, development and testing” Good, but I immediately have two lines of enquiry to pursue. Planning should take an afternoon; if it needs to be called out as an activity I wonder if it is occupying a lot of time.

Second: testing should come before development, well, at least “test scripts”, and it should be automated so if it comes after development it is trivial.

Still, many teams are in this position so it might be nothing. Or again, it could be a sign the team are not challenging themselves.

Also, you are producing the burndowns but by the sounds of it you are not in the dev teams? And you have Jira? I would expect that either Jira produces them automatically or each dev “cell” produces their own. Again, more investigation needed.

Continuing the story:

“The problem I’m encountering is this: we work to a fixed timetable, so it isn’t really Agile.”

No, Agile works best in fixed deadline environments. See Myth number 9 in my recent Agile Connection “12 Myths of Agile” – itself based on an earlier blog “11 Agile Myths and 2 Truths”.

“We have three weeks to deliver xyz and if it gets to the end of the sprint and it isn’t done, people work late or over the weekend to get it done.”

(Flashback: I worked on Railtrack privatisation in 1996/7, then too we worked weekends, death march.)

Right now the problem is becoming clear, or rather two problems.

Problem 1: It isn’t done until it is done and accepted by the next stage (testers, users, proxy users, etc.). If it isn’t done then carry it over. Don’t close it and raise bugs, just don’t close it.

Problem 2: The wrong solution is being applied when the problem is encountered, namely: Overtime.

As a one-off, Overtime might fix the problem but it isn’t a long-term fix. Only the symptoms of the problem are being fixed, not the underlying problem, which explains why it is recurring. (At least it sounds like the problem is recurring.)

Overtime, and specifically weekend working, is also a particularly nasty medicine to administer because it detracts from the team’s ability to deliver next time. If you keep taking this medicine you might stave off the disease but the medicine will kill you in the end.

The old “40 hour work week” or “sustainable pace” seems to be ignored here – but then to be fair, an awful lot of Scrum writing ignores these XP ideas.

Lines of enquiry here:

  • What is the definition of done?
  • Testing again: do the devs know the end conditions? Is it not done because it hasn’t finished dev or test?
  • What is the estimation process like? Sounds like too much is being taken into the sprint
  • Are the devs practicing automated test first unit testing? aka TDD
  • Who’s paying for the Overtime? What other side effects is it having?

“This means burndowns don’t make any sense, right? Because there’s no point tracking ‘time remaining’ when that is immaterial to the completion of the task.”

Absolutely Right.

In fact it is worse than that because: either you are including Overtime in your burn-downs, in which case your sprints should be longer, or you are not, in which case you are ignoring evidence you have in hand.

The fact that burndowns are pointless is itself a sign of a problem.

Now we don’t know here: What type of burn-downs are these?

There are (at least) two types of burn downs:

  • Intra-sprint burn-downs, which track progress through a sprint and are often done in hours
  • Extra-sprint burn-downs, which track progress against a goal over multiple sprints; you have a total amount of work to do and you burn a bit down each sprint.
I’ve never found much use for intra-sprint burn-downs, some people do. I simply look at the board and see how many cards are in done and how many in to-do.

And measuring progress by hours worked is simply flawed. (Some of my logic on this is in last year’s “Story points series” but I should blog more on it.)

Extra-sprint burn-downs on the other hand I find very useful because they show the overall state of work.
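A sketch of what I mean, counting completed stories rather than hours (all figures invented for illustration):

```python
# Extra-sprint burn-down: burn completed delivery units (stories),
# not hours, against the total scope.
total_stories = 60
completed_per_sprint = [7, 9, 6, 8]   # invented historical data

remaining = total_stories
burndown = [remaining]
for done in completed_per_sprint:
    remaining -= done
    burndown.append(remaining)

print(burndown)   # [60, 53, 44, 38, 30]

# Naive projection from average throughput so far
avg = sum(completed_per_sprint) / len(completed_per_sprint)
sprints_left = remaining / avg
print(f"~{sprints_left:.1f} sprints of work remaining")
```

The chart this produces shows the overall state of work at a glance, and the projection improves as more sprints of real data accumulate.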

From what is said here it sounds like hours based intra-sprint burn-downs are in use. Either the data in them is bad or the message they are telling is being ignored. Perhaps both.

“I was hoping you might be able to suggest a better way to do it? I feel like we should be tracking project completion, instead, i.e. we have xyz to do, and we have only done x&y. My main question is: Is there a useful way to use estimates when working to a fixed deadline by which everything needs to be completed by?”

Well Yes and Yes.

But, the solution is more than just changing the burn-down charts and requires a lot of time – or words – to go into. I suspect your estimating process has problems so without fixing that you don’t have good data.

Fortunately I’ve just been writing about a big part of this: Planning meetings.

And I’ve just posted a Guide to Planning Meetings on the Software Strategy website. It is designed to accompany a new dialogue sheet style exercise. More details soon. I should say both the guide and sheet fall under my “Xanpan” approach but I expect they are close enough to XP and Scrum to work for most teams.

This quote also mentions deadlines again. I have another suspicion I should really delve into, another line of enquiry.

Could it be that the Product Owners are not sufficiently flexible in what they are asking for and are therefore setting the team up to fail each sprint? By fail I mean asking them to take on too much, which, if the burn-downs and velocity measurements aren’t useful, could well be the case.

We’re back to the Project Manager’s old friend “The Iron Triangle.”

Now as it happens I’ve written about this before. A while ago in my ACCU Overload piece “Triangle of Constraints” and again more recently (I’ve been busy of late) in Principles of Software Development (which is a work in progress but available for download.)

This is where the first mail ended, but I asked the sender a question or two and I got more information:

“let’s say the Scrum planners plan x hours work for the sprint. Those x hours have to be complete by the end – there’s no room for anything moving into later sprints.”

Yikes!

Scrum Planners? – I need to know more about that

Plan for hours – there is a big part of your problems.

No room to move work to later sprints – erh… I need to find out more about this but my immediate interpretation is that someone has planned out future sprints rather rigidly. If this is the case you aren’t doing Agile, you aren’t doing Scrum, and we really need to talk.

I’m all for thinking about future work, I call them quarterly plans these days, but they need to be flexible. See Three Plans for Agile from a couple of years back (long version is better, short version was in RQNG).

“Inevitably (with critical bugs and change requests that [deemed] necessary to complete in this sprint (often)) the work increases during the sprint, too.”

Work will increase, new work will appear, and that’s why you should keep the sprints FLEXIBLE. You’ve shot yourself in the foot by the sounds of it. I could be wrong, I might be missing something here.

Right now:

  • Bugs: I’m worried about your technical practices, what is your test coverage? How are the developers at TDD? You shouldn’t be getting enough bugs to worry about
  • Change requests are cool if you are not working to a fixed amount of work and if you haven’t locked your sprints down in advance.
You can have flexibility (space for bugs and change requests) or predictability (forward scheduling) but you can’t have both. And I can prove that mathematically.

You can approach predictability with flexibility if you work statistically – something I expound in Xanpan – but you can only do this with good data. And I think we established before that your data is shot through.
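To make “working statistically” concrete, here is a rough sketch (the velocity history is invented, and this is an illustration, not a Xanpan prescription): resample past velocities, Monte Carlo style, to get a range of likely outcomes rather than a single promised date.

```python
import random

random.seed(42)  # reproducible for the example

past_velocities = [8, 11, 7, 9, 12, 6]   # invented points-per-sprint history
backlog = 40                              # points remaining

def sprints_to_finish() -> int:
    """Simulate one possible future by drawing sprint velocities at random."""
    remaining, sprints = backlog, 0
    while remaining > 0:
        remaining -= random.choice(past_velocities)
        sprints += 1
    return sprints

runs = sorted(sprints_to_finish() for _ in range(1000))
p50, p90 = runs[500], runs[900]
print(f"50% chance within {p50} sprints, 90% within {p90}")
```

With flexible scope you can quote the distribution (“90% confident by sprint N”); with rigidly pre-planned sprints there is nothing for the statistics to flex against.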

“This leads to people ‘crunching’ or working late/weekends at the end of the sprint to get it all done. It is my understanding that this isn’t how Agile is supposed to work.”

Yes. You have a problem.

So how should you fix this?

Well obviously the first thing to do is to hire me as your consultant, I have very reasonable rates! So go up your management chain until you find someone who sees you have a problem and would like it fixed, if they don’t have the money then carry on up the chain.

Then I will say, first at an individual level:

  • The intra-sprint hours-based burn-downs are meaningless. Replace them with an extra-sprint chart: count your delivery units, e.g. User Stories, Use Cases, Cukes, Functional spec items, whatever the unit of work is you give to developers and get paid to deliver; count them and burn the completed units each sprint
  • Track bugs which escape the sprint; this should be zero but in most cases is higher, and if it’s in double figures you have serious problems. The more bugs you have the longer your schedule will be and the higher your costs will be.
  • See if you can switch to a cumulative flow diagram to show: work delivered (bottom), work done (developed) but not delivered, work to do (and how it is increasing with change requests), and bugs to do
  • Alternatively produce a layered burn-down chart (total work to do on the bottom), new work (change requests) and outstanding bugs (top)
  • Track the overtime, find who is paying for this, they have pain, find out what the problem is they see
None of these charts is going to fix your problems but they should give you something more meaningful to track than what you have now.
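To make the cumulative flow idea concrete, a hypothetical data sketch (all numbers invented) of the layers such a chart stacks each sprint:

```python
# Per-sprint counts in each state (invented data).
# In a healthy project the 'delivered' band grows steadily;
# here the total keeps growing as change requests and bugs arrive.
sprints = [
    {"delivered": 0,  "done_not_delivered": 5, "to_do": 40, "bugs": 2},
    {"delivered": 8,  "done_not_delivered": 6, "to_do": 34, "bugs": 4},
    {"delivered": 15, "done_not_delivered": 9, "to_do": 30, "bugs": 7},
]

for i, s in enumerate(sprints, start=1):
    # Stack the layers bottom-up, as they would appear on the chart
    cumulative, row = 0, []
    for state in ("delivered", "done_not_delivered", "to_do", "bugs"):
        cumulative += s[state]
        row.append(f"{state}<={cumulative}")
    print(f"sprint {i}: " + ", ".join(row))
```

Plotted as stacked areas, a widening top band (bugs, new work) or a flat bottom band (nothing delivered) is visible at a glance, which is exactly the conversation starter these teams need.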

Really you need to fix the project. For this I suspect you need:

  • Overhaul the planning process, my guess is your estimation system is not fit for purpose and using dice would be more accurate right now
  • Reduce sprints to 1 week, with a 4 weekly release
  • Push Jira to one side and start working with a physical board (none was mentioned so I assume there is none.)
  • Ban overtime
We should also look at your technical practices and testing regime.

These are educated-guesses based on the information I have, I’d like to have more but I’d really need to see it.

OK, that was fun, that’s why I’ve done it at the weekend!

Anyone else got a question?

To estimate or not to estimate, that is the question

Disparaging those who provide software estimates seems to be a growing sport. At conferences, in blogs and the twitter-verse it seems open season for anyone who dares to suggest a software team should estimate. And heaven help anyone who says that an estimate might be accurate!

Denigrating estimation seems to be the new testosterone-charged must-have badge for any Agile trainer or coach. (I’ve given up on the term Agile Coach and call myself an Agile Consultant these days!)

Some of those who believe in estimation are hitting back. But perhaps more surprisingly I’ve heard people who I normally associate with the Scrum-Planning Poker-Burndown school of estimation decry estimation and join in the race to no-estimation land.

This is all very very sad and misses the real questions:

  • When is it useful to estimate? And when is it a waste of time and effort?
  • In what circumstances are estimates accurate? And how can we bring those circumstances about?
These are the questions we should be asking. This is what we should be debating. Rather than lobbing pot-shots at one another the community should be asking: “How can we produce meaningful estimates?”

In the early days of my programming career I was a paid-up member of the “it will be ready when it’s ready” school of development. I still strongly believe that but I also now believe there are ways of controlling “it” (to make it smaller/shorter) and there are times when you can accurately estimate how long it will take.

David Anderson and Kanban may have fired the opening shots in the Estimation Wars but it was Vasco Duarte who went nuclear with his “Story Points considered harmful” post. I responded at the time to that post with five posts of my own (Story points considered harmful? – Journey’s start, Duarte’s arguments, Story points An example, Breakdown and Conclusions and Hypothesis) and there is more in my Notes on Estimation and Retrospective Estimation essay and Human’s Can’t Estimate blog post – so I’ve not been exactly silent on this subject myself.

Today I believe there are circumstances where it is possible to produce accurate estimates which will not cost ridiculous amounts of time and money to make. One of my clients in the Cornish Software Mines commented “These aren’t estimates, that is Mystic Meg stuff, I can bring a project in to the day.”

I also believe that for many, perhaps most, organisations, these conditions don’t hold and estimation is little more than a placebo used to placate some manager somewhere.

So what are these circumstances? What follows is a list of conditions I think help teams make good estimates. This is not an exhaustive list, I’ve probably missed some, and it may be possible to obtain accuracy with some conditions absent. Still, here goes….

  • The team contains at least two dedicated people
  • Stable team: teams which add and lose members regularly will not be able to produce repeatable results and will not be able to estimate accurately. (And there is an absence of Corporate Psychopathy.)
  • Stable technology and code base: even when the team is stable, if you ask them to tackle different technologies and code bases on a regular basis their estimates will lose accuracy
  • Track record of working together and measuring progress, i.e. velocity: accuracy can only be obtained over the medium to long run by benchmarking the team against their own results
  • Track the estimates, work the numbers and learn lessons. Both high level “Ball Park” and detailed estimates need to be tracked and analysed for lessons. Then, and only then, can delivery dates be forecast
  • All work is tracked: if the team has to undertake work on another project it is estimated (possibly retrospectively) in much the same manner as the main stream of work and fed into the forecasts
  • Own currency: each team is different and needs to be scored in its own currency which is valued by what they have done before. i.e. measure teams in Abstract Points, Story Points, Nebulous Units of Time, or some other currency unit; this unit measures effort, the value of the unit is determined by past performance. In Economists’ lingo this is a Fiat Currency
  • Own estimates: the team own the estimates and can change them if need be, others outside the team cannot
  • Team estimates: the team who will do the work collectively estimate the work. Beware influencers: in making estimates the team needs to avoid anchoring; take a “Wisdom of crowds” approach – take multiple independent estimates and treat experts and anyone in authority like anyone else.
  • (Planning Poker is a pretty good way of addressing some of these points, I still teach planning poker although there may be better ways out there)
  • Beware The Planning Fallacy – some of the points above are intended to help offset this
  • Beware Goodhart’s Law, avoid targeting: if the estimates (points) in any way become targets you will devalue your own currency; when this happens you will see inflation and accuracy will be lost
  • Don’t sign contracts based on points, this violates Goodhart’s Law
  • Overtime is not practiced; if it is then it is stable and paid for
  • Traditional time tracking systems are ignored for forecasting and estimating purposes
  • Quality: teams pay attention to quality and strive to improve it. (Quality here equates to rework.)
  • The team aim for overall accuracy in estimates not individual estimates; for any given single piece of work “approximately right is better than precisely wrong”
  • Dependencies & vertical teams: teams are not significantly dependent on other groups or teams; they possess the skills and authority to do the majority of the work they need to
  • The team are able to flex “the what” – the thing they are building – through negotiations and discussions. (Deadlines can be fixed, team members should be fixed, “the what” should be flexible.)
  • The team work to a series of intermediate deadlines
  • It helps if the team are co-located and use a common visual tracking system, e.g. a white board with cards
  • Caveat: even if all the above hold I wouldn’t guarantee any forecasts beyond the next 3 months; too much of the above relies on stability and beyond 3 months, certainly beyond 6, that can’t be guaranteed
My guess – and it is only a guess – is that when these conditions don’t hold you will get the random results that Duarte described. Sure you might be able to get predictable results with a subset of these factors but I’m not sure which subset.

The more of these factors are absent, the more likely your velocity figures will be random and your estimates and forecasts a waste. When that happens you almost certainly are better off dumping estimation – at best it is a placebo.
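One rough way to check whether your own velocity figures are “random” in this sense: compare their spread to their mean. The 0.3 threshold below is an invented rule of thumb for illustration, not an established standard:

```python
import statistics

def velocity_is_usable(velocities: list[float], max_cv: float = 0.3) -> bool:
    """Crude stability check: a coefficient of variation below a
    (hypothetical) threshold suggests forecasts may be meaningful."""
    mean = statistics.mean(velocities)
    cv = statistics.stdev(velocities) / mean
    return cv <= max_cv

print(velocity_is_usable([9, 10, 8, 11, 9]))    # steady team -> True
print(velocity_is_usable([2, 19, 5, 30, 1]))    # noisy figures -> False
```

If the numbers fail even a crude test like this, any burn-down or forecast built on them is decoration.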

Looking at this list now I can see how some would say: “There are too many conditions here to be realistic, we can’t do it.” For some teams I’d have to agree with you. Still I think many of these forces can be addressed, I know at least one team that can do this. For others the prognosis is poor, for these companies estimation is worse than waste because the forecasts it produces mislead. You need to look for other solutions – either to other estimation techniques or to managing without.

I’d like to think we can draw the estimation war to an end and focus on the real question: How do we produce meaningful estimates and when is it worth doing so?

Requirements and Specifications

As I was saying in my last blog, I’m preparing for a talk at Skills Matter entitled: “Business Analyst, Product Manager, Product Owner, Spy!” which I should just have entitled “Requirements: Whose job are they anyway?” and so I’ve been giving a lot of thought to requirements.

I finished the last blog entry noting that I was concerned by the way I saw Behaviour Driven Development (BDD) going and worried that it was becoming a land-grab by developers on the “need side” of development. (Bear with me, I’ll come back to this point at the end.)

Something I didn’t mention in the last blog was that I thought: if I’m doing a talk about “need” I’d better clearly differentiate Requirements from Specifications. So I turned to my bookshelves….

The first book I picked up was Mike Cohn’s User Stories Applied, the nearest thing the Agile-set has to a definitive text on requirements. I turned to the index and…. nothing. There is no mention of Specifications or of Requirements. The nearest he comes is a reference to “Requirements Engineering” efforts. Arh.

Next up, Alistair Cockburn’s Writing Effective Use Cases, the shortest and best reference I know to Use Cases. No mention of Specifications here either, and although there are some mentions of Requirements there isn’t a definition of what Requirements are.

So now I turned to a standard textbook on requirements: Discovering Requirements: How to Specify Products and Services by Alexander and Beus-Dukic. A good start, the words Requirements and Specify are in the title. Specifications gets a mention on page 393, that’s it. And even there, there isn’t much said. True, Requirements runs throughout the book but it doesn’t help me compare and contrast.

Now I have a lot of respect for Gojko Adzic so I picked up his Specification by Example with great hope. This has Specifications running through it like the words in seaside-rock, and there are half a dozen mentions of requirements in the index. But….

When Gojko does talk about Requirements he doesn’t clearly differentiate between Requirements and Specifications. This seems sloppy to me, unusual for Gojko, but actually I think there is an important point here.

In everyday, colloquial, usage the words Requirements and Specifications are pretty interchangeable. In general teams, and Developers in particular, don’t differentiate. There is usually one or the other, or neither, and they are both about “what the software should do.” On the occasions where there are both they are overkill and form voluminous documentation (and neither gets read.)

The fact that so many prominent books duck the question of requirements and specification makes me think this is a fairly common issue. (It also makes me feel less guilty about any fuzziness in my own mind.)

To solve the issue I turned to Tom Gilb’s Competitive Engineering and true to form Tom helpfully provided definitions of both:

  • “A requirement is a stakeholder-desired, or needed, target or constraint” (page 418)
  • “A ‘specification’ communicates one or more system ideas and/or descriptions to an intended audience. A specification is usually a formal, written means for communicating information.” (page 400)
This is getting somewhere – thanks Tom. Requirements come from stakeholders, Specifications go to some audience. And the Specification is more formal.

Still it’s not quite what I’m after, and in the back of my mind I knew Michael Jackson had a take on this so I went in search of his writing.

Deriving Specifications from Requirements: An Example (Jackson & Zave, ACM press 1995) opens with exactly what I was looking for:

  • “A requirement is a desired relationship among phenomena of the environment of a system, to be brought about by the hardware/software machine that will be constructed and installed in the environment.
  • A specification describes machine behaviour sufficient to achieve the requirement. A specification is a restricted kind of requirement: all the environment phenomena mentioned in a specification are shared with the machine; the phenomena constrained by the specification are controlled by the machine; and the specified constraints can be determined without reference to the future. Specifications are derived from requirements by reasoning about the environment, using properties that hold independently of the behaviour of the machine.”
There we have it, and it fits with Tom’s description. Let me summarise:

  • A requirement is a thing the business wants the system to bring about
  • A specification is a restricted, more exact, statement derived from the requirement. I think it’s safe to assume there can be multiple specifications flowing from one requirement.
From this I think we can make a number of statements in the Agile context:

  • Agile work efforts should see requirements as goals
  • A User Story, or plain Story, may be a Requirement itself, or it might be a Requirement or Specification which follows from a previous Requirement.
  • At the start of a development iteration the requirement should be clear but the specification may be worked out during the iteration by developers, testers, analysts or others.
  • Over-analysis and refinement of specifications will restrict the team’s ability to make trade-offs and will also prove expensive as requirements change during the development effort.
  • Therefore while requirements should be known before the start of the iteration, specifications should only be finalised during the iteration.
In discussing this on Twitter David Edwards suggested the example of a business requirement to provide a login screen. Presumably the business requirement would be something like “All users should be validated (by means of a login system).” From this would flow the need to be able to create a user, delete a user, administer a user, etc. These could be thought of as requirements themselves or as specifications. Certainly something like “Ensure all passwords contain at least 8 characters and 1 numeric” would be a specification.
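To make the requirement/specification split concrete, the hypothetical password rule above can be expressed as Specification by Example: concrete inputs paired with expected outcomes that an automated check can verify. This is my own illustrative sketch in Python – the function name and example values are assumptions, not part of the Twitter discussion:

```python
# A sketch of Specification by Example for the hypothetical password rule:
# "Ensure all passwords contain at least 8 characters and 1 numeric."

def password_is_valid(password: str) -> bool:
    """Check the specification: at least 8 characters and at least 1 digit."""
    return len(password) >= 8 and any(ch.isdigit() for ch in password)

# The specification is captured as concrete examples, each pairing an
# input with its expected outcome -- the essence of SbE/BDD.
examples = [
    ("s3curepwd", True),    # 9 characters, contains a digit
    ("short1", False),      # contains a digit but fewer than 8 characters
    ("longenough", False),  # 10 characters but no digit
    ("abcdefg1", True),     # exactly 8 characters, with a digit
]

for password, expected in examples:
    assert password_is_valid(password) == expected
```

In a full BDD tool the examples would be written as Given/When/Then scenarios or example tables, but the principle is the same: the specification *is* the set of examples.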

Which brings us back to BDD.

Having worked through this I conclude that BDD is an excellent specification tool. After all BDD is an implementation of Specification by Example.

And while fleshing out specifications may lead to the discovery of new requirements, or the reconsideration of existing requirements, BDD is not primarily a requirements mechanism and probably shouldn’t be used as one.

Requirements need to be established by some other mechanism, deriving specifications from those requirements may well be done using BDD or another SbE technique.

Now, while BDD and SbE may well give Developers first class specification tools these tools should not be mistaken for requirements tools and shouldn’t be used as such.

Phew, does that all make sense?

I need to ponder on this; I suspect there is a rich seam of insight in being clear about specifications and requirements.

Requirements whose job are they anyway?

Later this week I’m giving a talk at Skills Matter entitled “Business Analyst, Product Manager, Product Owner, Spy!” The talk title is a reference to the John le Carré book “Tinker Tailor Soldier Spy”; it’s probably too clever by half and I should just have entitled it “Requirements: Whose job are they anyway?”

The talk idea was born out of what I see as confusion and land-grabbing in the requirements space, or as I prefer to think of it “the need side” i.e. the side of development which tries to understand what is needed.

I think there are a number of problems on this side of the business…

First, all too often this side is neglected: companies believe that Developers will somehow comprehend what is needed from a simple statement. In the worst cases this is a condition I refer to as “Requirements by Project Title”. Just because Developers understand the technology doesn’t mean they understand what is needed.

Unfortunately Agile tends to make this problem worse because a) developers think they get to decide what is needed, and b) businesses see Agile as a cure-all.

The second problem is the exact opposite of the first: Developer exclusion from requirements. In this case a Requirements Engineering type is the one who is tasked with understanding need, probably producing excessive documentation, and probably giving it to developers who are then expected to create something. In the extreme this means developers never get to meet, talk to or understand the people and businesses that will be using the product.

However, it was another problem that was on my mind more when I thought up the talk: the confusion of roles between Business Analysts and Product Managers, made worse by the appearance of the Scrum Product Owner title.

In the UK it seems to me that too many companies think requirements are done by Business Analysts. This is often the case when development groups are developing software for the company’s own use or for a specific client. But when the product is being developed for multiple external customers, when it will be sold as a product, then requirements are the job of a Product Manager. I’ve written about this role before, several times, so I won’t repeat myself right now – see Inbound & Outbound marketing or Product Management in the UK.

Part of the problem is that in the UK – I can’t really talk for other countries but I think most of Europe is the same – software companies appoint Business Analysts to do what is essentially a Product Manager role. I base this statement partly on the fact that when I deliver my Agile for Business Analysts course (at Skills Matter later this week and another version later this month at Developer Focus) I find people on the course who I would regard as Product Managers but they – and their employers – often have never heard of the Product Manager role.

Finally, I’ve also become concerned in recent months that Behaviour Driven Development is being used by Developers in an attempt to occupy the requirements space – a land grab!

On the one hand this needn’t be a problem: if BDD allows Developers to better understand the problem they are trying to solve then I would expect development to go more smoothly.

On the other hand there are three reasons why I’m concerned about this trend:

  • The “need side” is a fuzzy, messy, ambiguous area and I sometimes wonder if the rational engineering mindset is the right tool here.
  • I wonder if Developers really have the skills to understand the need side. Undoubtedly some do but I’m far from convinced they all do. Indeed those who do might be better off moving entirely from development into the BA or Product Manager role.
  • Time: perhaps this is the main concern.
I have long believed that to really understand the need side, and to get to the bottom of what is needed, requires time. If Developers are trying to tackle the need side while still coding then I question whether they have the time to do both. If they do not then I believe that opinion and assumptions will substitute for fact finding and requirements validation.

(I should say that on a small work effort, with a suitable Developer(s), having one person do the needs assessment and coding can be ideal.)

I’ve only really stated the problem here not the solution. I’m still working all this out in my own head, Wednesday’s talk should move the discussion forward and I’ve already started to sketch another blog entry on this subject – coming up soon “Requirements & Specifications.”

People or the system?

“the two view-points are always tenable. The one, how can you improve human nature until you have changed the system? The other, what is the use of changing the system before you have improved human nature?”

George Orwell, “Charles Dickens” essay in Shooting an Elephant and Other Essays, Penguin Books

I am sure I am not alone in exhibiting another Orwellian trait: doublethink.

For several years I have been guilty of agreeing with two contradictory points of view:

  • The Deming idea that the system governs how people will perform and act
  • That no matter what system (e.g. Scrum, XP, Prince2, DSDM, etc.) you use to develop software the motivation and quality of the people is the overriding factor
It wasn’t until John Harvey tweeted about the latter recently that I realised I was doublethinking; my positions were not compatible with one another.

I should have noticed the cognitive dissonance sooner. I’ve been known to say, and still believe: “The true test of any software development method (e.g. Kanban, Scrum, etc. etc.) is whether it can help mediocre people perform, i.e. deliver, be more productive”

In that one statement I epitomise the contradiction. I’m hoping that writing this blog will allow me to resolve this!

This isn’t an academic problem: which side of the argument you come down on will determine how you go about improving things. It also explains why some people in the Agile movement emphasise trust, communication and other inter-personal issues while others emphasise method, technical practices, etc.

I’m thinking of my experiences:

  • I have worked in environments where the system defeated the people: during the Railtrack privatisation Sema had an ISO-9000 waterfall process and 120 people. Of course among 120 people some were good and some not so good.
  • I have worked in places where (good) people have overcome the system: two years after leaving Railtrack I returned, a much smaller team were in maintenance mode. The same process was in place but we circumvented it, we faked the documentation and so on.
  • And I have worked places where everyone was treading water and complying enough with the system
I have seen different things happen within the same organisation, even within what the company regarded as “one team.” I saw Reuters destroy its own processes in an effort to impose a CMMI level 2 (going to 3) process. (Managers were a bit like the American general in Vietnam who supposedly said “we have to destroy this village to save it”. Like the Americans, Reuters ultimately lost.)

I have worked places where the concept of “system” would have been laughed at; things might be more accurately characterised as “Chaos.” But even in this chaos there was a system of sorts.

At Dodge Group it was hard to define any system or process, but there was somewhere a process. The company was trying to sell, sell, sell so they could IPO and the software team just had to keep up as best they could.

The good people at Dodge won out for a while. I instigated a simple process we could now recognise as a form of Agile but ultimately the company, the system, killed the success. Partly by hiring bad people, partly by not recognising the good people and partly by letting the system run its course.

In my “Agile will never work in investment banks” posting a while back I made a similar point: Good people in banks can install a good process which can deliver. But in time the bank – the systems, culture and contradictions – will kill the system and dispose of the good people.

Then there is the question of whether “good people” are themselves the result of the system. Can you pick up a “good person” from company A, put them into company B and let them do their good stuff?

An article in the MIT Sloan Management Review a few years back suggested that when “star players” move to a new team they don’t necessarily, or even normally, keep their “star player” performance. (When ‘Stars’ Migrate, Do They Still Perform Like Stars?, Fall 2008 issue.)

So where does all this lead us?

I think there is truth in both the System-first and People-first points of view. Things are not so black-and-white: sometimes the System-first view is right and sometimes the People-first view is right. Different forces are at work in different contexts.

Some of these forces are:

1) The strength of the system: the stronger the system the more likely it is to win out.

The strength of the system derives from multiple sources including: the effectiveness of the system, the degree to which it is enforced and the degree to which people see their interests being served by the system.

Reuters’ CMMI initiative also put a lot of effort into policing. At Dodge the system was weak; day-to-day, developers got on with it and made good stuff.

At Sema Railtrack (first time) a lot of effort was put into policing the system; the system delivered ISO-9000 which helped the company win contracts. As a programmer working in the system it didn’t seem particularly effective, but I guess it was effective at generating money.

2) The System affects the Good People

At Reuters the system encouraged people to leave, I was one. At Dodge the system eventually recruited the wrong people, forced others out and wore down the good people who submitted.

3) Good people can make a poor system work

Primarily through ignoring it, compensating or changing it. Which leads to…

4) Good people can change the system

I’ve seen it happen. In fact you need at least one good person to cause the system to change or it will not.

Again a strong system is harder to change.

5) How “good” are the people?

Yes, “good people” can change things, but how “good” do you need to be? Is it really a question of having super-stars? Most people aren’t super-stars. Surely we should be more concerned with the mediocre.

And before anyone rushes to say “Good developers are 10 times (or 28 times) better than average” please read Laurent Bossavit’s book “The Leprechauns of Software Engineering” where he shows the evidence for this claim doesn’t hold up.

I guess these conclusions are going in the Deming direction.

Yet good people can make a bad system work; but if they do not succeed in actually changing the system it will eventually impose itself.

Can I suggest it comes down to a question of time scales: In the short-run good people can overcome a poor system, but in the long-run, unless they change the system for something better the system will overwhelm them.

And the mediocre people? – How about we say this:

Over the short-run, the better your people are the more likely they are to overcome a poorer system, but in the long-run the harder it will be for them to sustain the effort if they do not change the system.

A good system will help everyone work at their best. It will allow star performers to shine and mediocre people to contribute and feel good; I’d even like to think a good system will improve the mediocre people.

Conway's Law v. Software Architecture

I’ve written about Conway’s Law before (Return to Conway’s Law (2006) and a Focus Group I ran at EuroPLoP, “What do we think of Conway’s Law Now?”) but I make no apologies for revisiting it again because I still think it holds true.

There are times when I wonder if there is any point to the discipline called “Software Architecture.” Sure design can make a difference in the small but when it comes to the big then, for my money, Conway’s Law is a far more powerful force than the plans, designs and mandates of an enterprise architect.

Conway’s Law predates Agile and Waterfall but it applies to both:

“Any organization that designs a system (defined more broadly here than just information systems) will inevitably produce a design whose structure is a copy of the organization’s communication structure.” Conway, 1968

The original paper “How do committees invent?” is available from Conway’s own website. The original publication was in Datamation, April 1968.

While Conway’s Law can be seen at the macro level, the company level, it is also observed in the small, at the micro level. The law is often restated in the small as:

“If you have four developers writing a compiler you will get a four pass compiler”

Or

“If you have three developers writing a UI you will get three ways of doing everything (mouse click, menu item, short-cut key)”

For example, if a development team is set up with a SQL database specialist, a JavaScript/CSS developer and a C# developer then they will produce a system with three tiers: a database with stored procedures, a business middle tier and a UI tier. This design will be applied whether it is the best or not. Indeed, the system might not even need a relational database or a graphical interface – a flat file might be quite adequate for data storage and a command line interface perfectly acceptable.

It is a little like a children’s cartoon where the superheroes need to rescue somebody: Superman says “I can use my super strength to lift the ice boulders out of the way”, while Fireman says “I can use my fire breath to melt the ice” and Drillerman says “I use my drill hands to make a rescue tunnel”. Once added to the picture each specialist finds a way to utilise his or her special powers. For a software team this can lead to poor architecture, over complicated designs and competing modules.

In setting up a team architectural decisions are being made even if no architecture diagrams are being drawn. Just deciding to staff the team with four developers rather than three will influence the architecture because work now needs to be found for the fourth team member.

These decisions are made at the time when the least information is known – right at the beginning. They may be made by planners – project managers – who have little knowledge of the technologies, or by people who will not actually be involved in the development effort, e.g. enterprise architects.

To avoid this my advice is to start a team as small as possible. Try to avoid making architectural decisions in staffing the team by understaffing it. Keeping the team hungry will reduce the possibility of building more than is needed or over architecting it.

There are those who would quickly point out the risk of under-architecting a system: not looking to the future and not building a system that can change and grow. But the risks of over-architecting are if anything worse. Too much architecture can equally prevent a system changing and growing, and too much architecture means more code, which is time consuming and expensive to cut. Then there is the risk of not shipping at all: too long spent producing the “right” design may result in a system too late to be viable.

Second, staff the team with people who are more generalist than specialist, people who have more than one skill. Yes, have a Java developer on the team, but have one who knows a bit of SQL and isn’t scared of a little UI work. In time you might need to add database and UI specialists but delay this until it is clear they are needed.

These suggestions echo Conway’s own conclusion at the end of his paper:

“Ways must be found to reward design managers for keeping their organizations lean and flexible. There is need for a philosophy of system design management which is not based on the assumption that adding manpower simply adds to productivity.”

It is worth remembering that Conway was writing over 20 years before Womack and Jones coined the term Lean to describe Toyota.

Reverse Conway’s Law & Homomorphic force

Since Conway wrote this a lot more systems have been developed. Many of these systems are now referred to as “Legacy Systems.” In some cases these systems force certain structure on the team who maintain the systems and even on the wider company.

For example, take the 3-tier database, business, GUI system from above. Years go by, the original developers leave the company and new staff are brought in. Perhaps several generations of staff have been and gone over the years. The net effect is a system so complex in all three tiers that each requires a specialist. The database is so rich in stored procedures, triggers and constraints that only a SQL expert can understand it. The GUI is so crammed full of JavaScript, CSS and HTML that only someone dedicated to interfaces can keep it all working. And the middle tier isn’t any different either.

Given this situation the company has no option but to staff the team with three experts. Conway’s Law is now working in reverse: the system is imposing structure on the organization.

Again this happens not just at the micro-level but at the macro-level. Entire companies are constrained by the systems they have. Economists might call this path-dependency: you are where you are because of how you got here not because of any current, rational, forces.

Underlying both Conway’s Law and Reverse Conway’s Law is the Homomorphism, or as I prefer to think of it: the Homomorphic Force.

This force acts to preserve the structure of a system even as the system itself moves from one technology to another, from one platform to another.

Both forms of Conway’s Law and the Homomorphic Force pose a dilemma for any organization. Should they work with the force or try to break it?

I can’t answer this question generically; there is too much context to consider. However I tend towards saying: work with Conway’s Law, not against it – like woodworking, work with the grain not across it. Be aware of Conway’s Law and learn to play it to your advantage.

Conway’s Law does contain a get-out clause: the system that will be created will be a copy of an existing system. If you can create a new system, a system not pre-loaded with assumptions and well-trodden communication paths, then maybe you can create something new and of a better design.

Thus I sometimes view myself as an architect. Now I don’t think I’d accept membership of the Architects-Club even if it was offered, and probably my software design skills are not up to the job but it doesn’t stop me dreaming.

But I architect by trying to bring about a good organizational architecture which will in turn bring about a better system architecture via Conway’s Law.

Links! – 2 conferences, 1 week

I’ve been to two conferences this week!

The first was Agile Dev Practices in Potsdam, outside Berlin. There I presented my Retrospective Dialogue Sheets (www.dialoguesheets.com) – well, I say presented: it was 10 minutes of introduction, 60 minutes of attendees doing Dialogue Sheets and 20 minutes of wrap up.

Anyone who has attended one of my Dialogue Sheet sessions will recognise the format. Which also means there aren’t a lot of slides for download. There are a few slides, essentially the same as Agile Cambridge last September, so if you grab those slides you’ll be fine – Dialogue Sheets presentation as PDF download or Dialogue Sheets presentation on SlideShare, take your pick.

The second conference was DevWeek 2013 here in London. This was a completely new presentation: 10 Tips to make your Agile Adoption more successful – so far I only have this up on SlideShare. This presentation was based on an article I wrote for InfoQ last year My 10 things for making your Agile adoption successful.

If you check out the Allan Kelly InfoQ page you will find more on Dialogue Sheets too – I say this because I’m expecting InfoQ to publish another piece on Dialogue Sheets before long.