What we forget about the Scientific Method

I get fed up hearing other Agile evangelists champion The Scientific Method. I don’t disagree with them, and I use it myself, but I think they are sometimes guilty of overlooking how the scientific method is actually practiced by scientists and researchers. Too often the scientific approach is made to sound simple; it isn’t.

First, let’s define the scientific method. Perhaps rather than call it “scientific method” it is better called “experimentation.” What the Agile Evangelists of my experience are often advocating is running an experiment – perhaps several experiments in parallel but more likely in sequence. The steps are something like this:

  1. Propose a hypothesis, e.g. undertaking monthly software releases instead of bi-monthly will result in a lower percentage of release problems
  2. Examine the current position: e.g. find the current figures for release frequency and problems, record these
  3. Decide how long you want to run the experiment for, e.g. 6 months
  4. Introduce the change and reserve any judgement until the end of the experiment period
  5. Examine the results: recalculate the figures and compare these with the original figures
  6. Draw a conclusion based on observation and data
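The steps above can be sketched in code. This is a minimal illustration of the release-frequency hypothesis; all the figures below are invented for the example:

```python
# Minimal sketch of the experiment above (illustrative figures only).

# Step 2: record the current position, e.g. problems per release.
baseline_releases = 6        # bi-monthly releases over the last 12 months
baseline_problems = 9

# Steps 3-4: run monthly releases for the 6-month experiment period.
trial_releases = 6
trial_problems = 5

# Step 5: recalculate the figures and compare with the originals.
baseline_rate = baseline_problems / baseline_releases
trial_rate = trial_problems / trial_releases

# Step 6: draw a conclusion based on observation and data.
print(f"baseline: {baseline_rate:.2f} problems per release")
print(f"trial:    {trial_rate:.2f} problems per release")
print("hypothesis supported" if trial_rate < baseline_rate else "hypothesis not supported")
```

Real experiments would need more than a raw comparison (sample sizes this small prove little), which is exactly the measurement problem discussed below.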

I agree with all of this, I think it’s great, but…

Let’s leave aside problems of measurement, problems of formulating the hypothesis, problems of making changes and propagation problems (i.e. the time it takes for changes to work through the system). These are not minor problems, and they do make me wonder about applying the scientific method in the messy world of software and business, but let’s leave them to one side for the moment.

Let’s also leave aside the so-called Hawthorne Effect – the tendency for people to change and improve their behaviour because they know they are in an experiment. Although the original Hawthorne experiments were shown to be flawed some time ago the effect might still be real. And the flaws found in the Hawthorne experiments should remind us that there may be other factors at work which we have not considered.

Even with all these caveats I’m still prepared to accept an experimental approach to work has value. Sometimes the only way to know whether A or B is the best answer is to actually do A and do B and compare the results.

But this is where my objections start… There are two important elements missing from the way Agile Evangelists talk about the scientific method. When real scientists – and I include social scientists here – do an experiment there is more to the scientific method than the experiment, and so there should be at work too.

#1: Literature review – standing on the shoulders of others

Before any experiment is planned scientists start by reviewing what has gone before. They go to the library: sometimes to books, but journals are probably more up to date and often benefit from stricter peer review. They read what others have found and what others have done before, the experiments run and the theories devised to explain the results.

True your business, your environment, your team are all unique and what other teams find might not apply to you. And true you might be able to find flaws in their research and their experiments. But that does not mean you should automatically discount what has gone before.

If other teams have found consistent results with an approach then it is possible yours will too. The more examples of something working the more likely it will work for you. Why run an experiment if others have already found the result?

Now I’m happy to agree that the state of research on software development is pitiful. Many of those who should be helping the industry here, “Computer Science” and “Software Engineering” departments in Universities don’t produce what the industry needs. (Ian Sommerville’s recent critique on this subject is well worth reading “The (ir)relevance of academic software engineering research”). But there is research out there. Some from University departments and some from industry.

Plus there is a lot of research that is relevant but is outside the computing and software departments. For example I have dug up a lot of relevant research in business literature, and specifically on time estimation in psychology journals (see my Notes on Estimation and Retrospective Estimation and More notes on Estimation Research.)

As people used to dealing with binary, software people might demand a simple “Yes this works” or “No it doesn’t”, and those suffering from physics envy may demand rigorous experimental research, but little research of this type exists in software engineering. Much of software engineering is closer to psychology: you can’t conduct the experiments that would give these answers. You have to use statistics and other techniques and look at probabilities. (Notice I’ve separated computer science from software engineering here. Much of computer science theory (e.g. sort algorithm efficiency, P and NP problems, etc.) can stand up with physics theory but does not address many of the problems practicing software engineers face.)

#2: Clean up the lab

I’m sure most of my readers did some science at school. Think back to those experiments, particularly the chemistry experiments. Part of the preparation was to check the equipment, clean any that might be contaminated with the remains of the last experiment, ensure the workspace was clear and so on.

I’m one of those people who doesn’t (usually) start cooking until they have tidied the kitchen. I need space to chop vegetables, I need to be able to see what I’m doing and I don’t want messy plates getting in the way. There is a term for this: mise en place, a French expression which according to Wikipedia means:

“is a French phrase which means “putting in place”, as in set up. It is used in professional kitchens to refer to organizing and arranging the ingredients (e.g., cuts of meat, relishes, sauces, par-cooked items, spices, freshly chopped vegetables, and other components) that a cook will require for the menu items that are expected to be prepared during a shift.”

(Many thanks to Ed Sykes for telling me a great term.)

And when you are done with the chemistry experiment, or cooking, you need to tidy up. Experiments need to include set-up and clean-up time. If you leave the lab a mess after every experiment you will make it more difficult for yourself and others next time.

I see the same thing when I visit software companies. There is no point in doing experiments if the work environment is a mess – both physically and metaphorically. And if people leave a mess around when they have finished their work then things will only get harder over time. There are many experiments you simply can’t run until you have done the correct preparation.

An awful lot of the initial advice I give to companies is simply about cleaning up the work environment and getting them into a state where they can do experiments. Much of that is informed by reference to past literature and experiments. For example:

  • Putting effective source code control and build systems in place
  • Operating in two week iterations: planning out two weeks of work, reviewing what was done and repeating
  • Putting up a team board and using it as a shared to-do list
  • Creating basic measurement tools, whether they be burn-down charts, cumulative flow diagrams or even more basic measurements

You get the idea?

Simply tidying up the work environment and putting a basic process in place – one based on past experience, one which better matches the way work actually happens – can alone bring a lot of benefit to organizations. Some organizations don’t need to get into experiments, they just need to tidy up.

And, perhaps unfortunately, that is where it stops for some teams. Simply doing the basics better, simply tidying up, removes a lot of the problems they had. It might be a shame that these teams don’t go further and try more, but that might be good enough for them.

Imagine a restaurant that is just breaking even, the food is poor, customers sometimes refuse to pay, the service shoddy so tips are small, staff don’t stay long which makes the whole problem worse, a vicious circle develops. In an effort to cut costs managers keep staffing low so food arrives late and cold.

Finally one of the customers is poisoned and the local health inspector comes in. The restaurant has to do something. They were staggering on with the old ways until now but a crisis means something must be done.

They clean the kitchen, they buy some new equipment, they let the chef buy the ingredients he wants rather than the cheapest, they rewrite the menu to simplify their offering. They don’t have to do much and suddenly the customers are happier: the food is warmer and better, the staff are happier, a virtuous circle replaces a vicious circle.

How far the restaurant owners want to push this is up to them. If they want a Michelin star they will go a long way, but if this is the local greasy spoon cafe what is the point? – It is their decision.

They don’t need experiments, they only need the opening of the scientific method, the bit that is too often overlooked. Some might call it “Brilliant Basics” but you don’t need to be brilliant, just “Good Basics.” (Remember my In Search of Mediocracy post?).

I think the scientific method is sometimes, rightly or wrongly, used as a backdoor in an attempt to introduce change. To lower resistance and get individuals and teams to try something new: “Let’s try X for a month and then decide if it works.” That can be a legitimate approach. But dressing it up in the language of science feels dishonest.

Let’s have less talk about “The Scientific Method” and more talk about “Tidying up the Kitchen” – or is it better in French? Mise en place… come to think of it, doesn’t the Lean community have a Japanese word for this? Pika pika.

Programmers without TDD will be unemployable by 2022 (a prediction)

New year is traditionally the time of predictions, and several of the blogs I read have been engaging in predictions (e.g. Ian Sommerville “Software Engineering looking forward 20 years.”). This is not a tradition I usually engage in myself but for once I’d like to make one. (I’ll get back to software economics next time, I need to make some conclusions.)

Actually, this is not a new prediction, it is a prediction I’ve been making verbally for a couple of years but I’ve never put it on the record so here goes:

By 2022 it will not be possible to get a professional programming job if you do not practice TDD routinely.

I started making this prediction a couple of years ago when I said: “In ten years time”, sometimes when I’ve repeated the prediction I’ve stuck to 10-years, other times I’ve compensated and said 9-years or 8-years. I might be out slightly – if anything I think it will happen sooner rather than later, 2022 might be conservative.

By TDD I mean Test Driven Development – also called Test First (or Design Driven) Development. This might be Classic/Chicago-TDD, London-School-TDD or Dan North style Behaviour Driven Development. Broadly speaking the same skills and similar tools are involved although there are significant differences, i.e. if you don’t have the ability to do TDD you can’t do BDD, but there is more to BDD than to TDD.

The characteristics I am concerned with are:

  • Developer-written automated unit tests, e.g. if you write Java code you write unit tests in Java… or Ruby, or some other computer language
  • The automated unit tests are executed routinely, at least every day
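As an illustration of those two characteristics, a developer-written automated unit test might look like the following. This is only a sketch using Python’s built-in unittest module; apply_discount is a made-up stand-in for production code, not anything from the post:

```python
import unittest

def apply_discount(price, percent):
    """Production code under test: reduce a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class ApplyDiscountTest(unittest.TestCase):
    # In TDD these tests are written before the code above and are
    # executed routinely - at least every day, ideally on every build.
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    # exit=False so the run doesn't terminate the interpreter.
    unittest.main(argv=["discount-tests"], exit=False)
```

The same shape applies whatever the language: JUnit for Java, Test::Unit for Ruby, and so on.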

This probably means refactoring, although as I’ve heard Jason Gorman point out: interest in refactoring training is far less than that in TDD training.

I’d like to think that TDD as standard – especially London School – also implies more delayed design decisions but I’m not sure this will follow through. In part that is because there is a cadre of “designers” (senior developers, older developers, often with the title “architect”) who are happy to talk about, and possibly do, “design” but would not demean themselves by writing code. Until we fix our career model big up front design is here to stay. (Another blog entry I must write one day…)

I’m not making any predictions about the quality of the TDD undertaken. Like programming in general I expect the best will be truly excellent, while the bulk will be at best mediocre.

What I am claiming is:

  • It will not be acceptable to question TDD in an interview. It will be so accepted that anyone who doesn’t know what TDD is, who can’t use TDD in an exercise, or who claims “I don’t do TDD because it’s a waste of time” or “TDD is unproven” will not get the job. (I already know companies where this is the case; I expect it to be universal by 2022.)
  • Programmers will once again be expected to write unit tests for their work. (Before the home computer revolution I believe most professional programmers actually did this. My generation didn’t.)
  • Unit testing will be overwhelmingly automated. Manual testing is a sin. Manual unit testing doubly so.

And I believe, in general, software will be better (fewer bugs, more maintainable) as a result of these changes, and as a result programmer productivity will be generally higher (even if they write less code they will have fewer bugs to fix.)

Why do I feel confident in making this prediction?

Exactly because of those last points: with any form of TDD in place the number of code bugs is reduced, maintainability is enhanced and productivity is increased. These are benefits both programmers and businesses want.

The timescale I suggest is purely intuition, this might happen before 2022 or it might happen after. I’m one of the worst people to ask because, through my work, I overwhelmingly see companies that don’t do this but would benefit from doing it – and if they listen to the advice they are paying me for, they start doing it.

However I believe we are rapidly approaching “the tipping point”. Once TDD as standard reaches a certain critical mass it will become the norm, even those companies that don’t actively choose to do it will find that their programmers start doing it as simple professionalism.

A more interesting question to ask is: What does this mean? What are the implications?

Right now I think the industry is undergoing a major skills overhaul as all the programmers out there who don’t know how to do TDD learn how to do it. As TDD is a testable skill it is very easy to tell who has done it/can do it, and who just decided to “sex up” their CV/Resume. (This is unlike Agile in general where it is very difficult to tell who actually understands it and who has just read a book or two.)

In the next few years I think there will be plenty of work for those offering TDD training and coaching – I regularly get enquiries about C++ TDD, less so about other languages where TDD and TDD training are already more widespread. The work won’t dry up but it will change from “Introduction to TDD” to “Improving TDD” and “Advanced TDD” style courses.

A bigger hit is going to be on Universities and other Colleges which claim to teach programming. Almost all the recent graduates I meet have not been taught TDD at all. If TDD has even been mentioned then they are ahead of the game. I do meet a few who have been taught to programme this way but they are few and far between.

Simply: if Colleges don’t teach TDD as part of programming courses their graduates aren’t going to be employable, and that will make the colleges less attractive to good students.

Unfortunately I also predict that it won’t be until colleges see their students can’t get jobs that colleges sit up and take notice.

If you are a potential student looking to study Computer Science/Software Engineering at College I recommend you ignore any college that does not teach programming with TDD.

If you are a college looking to produce employable programmers from your IT course I recommend you embrace TDD as fast as possible – it will give you an advantage in recruiting students now, and give your students an advantage finding work.

(If you are a University or College that claims to run an “Agile” module then make sure you teach TDD – yes, I’m thinking of one in particular, it’s kind of embarrassing, Ric.)

And if you are a University which still believes that your Computer Science students don’t really need to programme – because they are scientists, logisticians, mathematicians and shouldn’t be programming at all then make sure you write this in big red letters on your prospectus.

In business simply doing TDD, especially done well, will over time fix a lot of the day-to-day issues software companies and corporate IT have, the supply side will be improved. However unless companies address the demand side they won’t actually see much of this benefit; if anything things will get worse (read my software demand curve analysis or wait for the next posts on software economics.)

Finally, debuggers are going to be less important. Good use of TDD removes most of the need for a debugger (that’s where the time comes from), which means IDEs will be less important, which means the developer tools market is going to change.

(Postscript: sorry about the formatting problems with the first upload.)

Software supply & demand – this time it’s Agile

Carrying on from my previous posts applying the economist’s tools to thinking about software development (Supply & Demand in software development and Software supply over time), in this post I want to see what happens when we apply Agile…

We start with our supply and demand curves as before:

[Figure: SoftwareMarket – supply and demand curves]

First we need to define Agile, not a trivial matter. Let’s simplify this by saying “Agile practices”, specifically:

  • Short iterations and regular releases (or at least demos), e.g. every two weeks
  • Planning meetings, planning poker estimation (or similar, or #NoEstimates), velocity (or commitment if you prefer)
  • User Stories, Epics, task breakdown (if you do this sort of thing)
  • Stand-up meetings
  • Team retrospectives and team planning boards or electronic equivalent
  • Test driven development and continuous integration
  • Pair programming, code review and shared code-ownership
  • Acceptance Test Driven Development (ATDD), Behaviour Driven Development (BDD), Specification by Example (SbE), and other automated testing mechanisms

The first thing to note about all these techniques – and indeed the majority of what we tend to call “Agile” – is that these techniques work on the supply side. They operate on the supply curve, the net effect is to increase supply.

Possibly, if used in the right fashion, BDD and SbE might affect the demand side. But since most of the efforts to use BDD I encounter are little more than super-charged TDD we’ll leave it there for now – see Dan North’s blog for a better discussion of BDD v. TDD.

Projects – specifically projects but other work initiatives too – are often still judged on the basis of: on schedule, on feature and on budget. Agile in this fashion is what I have called Iterative Agile, or at best Incremental. The team is still seeking to deliver a fixed quantity of software. Thus the demand curve is unchanged. (We will return to the demand curve in another post and discuss how Agile techniques might affect it.)

While Agile tools promise increased performance this is only in the long run. In the short run we can expect teams to slow down as they adopt these tools. We can also expect costs to go up as companies to spend money to buy in this expertise.

Maybe they hire the likes of myself to supply training and consulting in adoption (yes! I’m for hire, see my company website Software Strategy). Even if they don’t pay the likes of me, costs will go up because engineers need to learn these skills. Companies’ rationale for hiring me is that I will help reduce the period of time for which they are slower and reduce the risk. The deal is: hire me and I will help you get to increased productivity faster (and with less risk).

Putting this in terms of curves:

[Figure: SupplyAgAdop – supply curve shifts during Agile adoption]

A team starting on Ss can expect to move left (1) to Sasr (Agile short run) before moving right (2) to Salr (Agile long run) – reduced supply before greater supply. In the first instance the price per unit increases while the quantity delivered falls, but after the second move the price per unit falls and the team produces more software.

Leave consultants like me to one side for a moment and simplify it. Imagine a team which “just does it”. Assume for a moment that the team members stay the same and the amount they are paid each week stays the same. When the team start doing something new they slow down: they produce less software each week, so the average price per unit increases. As they get better the same team better their original performance, earn the same money, produce more software, and as a result the average price per unit falls.
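Putting invented numbers on that (same team, same weekly pay, only output changes):

```python
# Illustrative figures only: the team's weekly cost stays fixed
# while the amount of software produced per week changes.
weekly_cost = 10_000

units_before = 20   # units of software per week, old process
units_during = 15   # slower while learning the new practices
units_after = 25    # faster once the practices bed in

for label, units in [("before", units_before),
                     ("during adoption", units_during),
                     ("after", units_after)]:
    # Average price per unit = weekly cost / weekly output.
    print(f"{label}: {weekly_cost / units:.0f} per unit")
```

Output falls then rises, so the price per unit rises from 500 to about 667 during adoption, then falls to 400 once the gains arrive.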

Hopefully that all makes sense and isn’t controversial.

But I think something else happens. This is a hypothesis, at the moment I can’t measure it so please tell me – in the comments on this blog – whether you agree or think I’m wrong.

Staying with the supply curve, consider again the tools we were looking at:

  • Short iterations and regular releases make it cheaper to make small changes. Therefore small quantities of software can be supplied at a lower cost. And since the delivery – or at least a demo – is earlier, the whole “project” can be cancelled if things don’t work out. Thus the bottom of the curve moves left over time.
  • Automated testing (TDD, ATDD, etc.) reduces the quantity of faulty software produced and increases the amount of useful software delivered at any point, thereby raising all points on the curve. With less “technical debt” teams can deliver more functionality more cheaply.
  • The same techniques – TDD, ATDD, etc. – and shared code ownership also make it easier for new engineers to become productive because a) there are more code examples of how the system works and b) they place a safety net under any work the newcomers do, allowing them to experiment and learn faster, and get their changes to production sooner.

The net consequence of these changes is to flatten the curve – not entirely, but certainly to reduce its angle. In economics terms we increase the elasticity. Changes in price – adding or removing a developer – cause a bigger effect on the quantity supplied than before.

[Figure: SupplyAvlr – supply in the Agile very long run]

Supply moves from the initial Ss curve to the Savlr (Supply Agile Very Long Run). As the chosen name implies, this takes some time to occur.

Now there is a catch. Consider the total amount of software bought and the total price paid (price x quantity). To do this look at the next two diagrams.

[Figure: SupplyAvlrGreen – total spend on the original supply curve (green area)]

The green area represents the total paid for the software. You can’t see it here but on my graphics tool I can see price is 10 high and quantity 7 wide, so 70 (whatever) units.

[Figure: SupplyAvlrBlue – total spend on the new supply curve (blue area)]

In this diagram, on the new supply curve, the blue area represents the total spend. Again you can’t see it, but in OmniGraffle it is 5 high and slightly more than 9 wide, about 45 units of total spend.

Now if you are producing software for your own company – either to resell (e.g. Microsoft) or to consume yourself (e.g. a bank) – this model is good news. You get more software for a lower overall spend.

But, if you are an outsourced provider of software selling custom software to a client this is bad news. You now deliver more software for less money, about 36% less in total. If profit is tied to revenue (time and materials) then this is bad.
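Reading the two areas off the diagrams, the arithmetic behind that 36% is:

```python
# Total spend = price x quantity, using the areas read off the diagrams.
old_price, old_quantity = 10, 7   # green area: original supply curve
new_price, new_quantity = 5, 9    # blue area: new supply curve

old_spend = old_price * old_quantity   # 70 units
new_spend = new_price * new_quantity   # 45 units

reduction = (old_spend - new_spend) / old_spend
print(f"spend falls from {old_spend} to {new_spend}: about {reduction:.0%} less")
```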

No wonder outsourced providers can be reluctant to embrace Agile or to do it badly. If you are a consumer of outsourced software development services you might want to think about that.

Next time I want to turn my attention to the software demand curve.

(As a footnote, using the same analysis techniques it is possible to show that reducing supply of software – moving the supply curve left, either because the software ages or because Agile adoption is slowing things down – will result in a greater overall spend for less software. This might be good for outsourcer suppliers but isn’t so good for in-house development.)

Pessimistic about Agile in retail banks

Almost exactly two years ago I wrote a strident blog post entitled “Agile will never work in Investment banks”. It’s a post that keeps getting rediscovered by those who agree or disagree with my position, and just such an event has occurred on Twitter with Kevin Burns @martinburnsuk and several others @kev_austin @sandromancuso @gordonmcmahon

During the intervening two years I have had many conversations which support my point of view. Most of these conversations have been with people who have more, and more recent, experience in banks than I do. In particular I found Peter Pilgrim’s blog closely aligned with my own views.

Conversations with the likes of Andy and Chris (you know who you are!) and others have led me to believe that while retail banking might be more hopeful it isn’t massively better there. In part that is due to the colonisation of retail banking by investment banking that has occurred over the last twenty years.

Let’s be clear: although investment banking and retail banking share the word “banking” in their title they have nothing else in common. I’ll leave it to John Kay to say something about why they are in conflict. (Broken link).

The move by many investment banks to occupy retail space has been purely opportunistic, they have sought cheap funds and to embed themselves in the economic fabric so they become “too big to fail” and “systematically important institutions”, thereby guaranteeing Government bail out, thereby reducing their funding costs and increasing banking bonuses. (See The Bankers New Clothes for a discussion of the many problems with current banks and solutions.)

If you wanted to summarise my argument in one line I’d say: Too big to fail means too big to manage.

Now there are two caveats on my original post which I want to make clear before expanding on the discussion:

  1. Agile is interpreted by many in banking to mean Iterative Development – see my recent “3 Styles of Agile” post. This can work to some degree. Simply adding the technical practices, improving software craftsmanship, can improve IT. However if Agile ever becomes more than iterative – incremental or evolutionary – then conflicts will be created within the management regime, which may lead to its own destruction.
  2. Incremental and Evolutionary Agile can exist in banking, and have existed in many banks I know of, but only in the short to medium term. This kind of Agile requires a guardian angel. Again this will create conflicts; while the guardian angel is strong enough the team can be protected. However given time either the angel is successful and will be presented with a better opportunity (probably at another bank, in which case he might have the team follow him) or he is not successful and will be pushed to one side and his creation will be dismantled.

Turning specifically to retail banks, and thereby extending my argument. On the one hand I don’t believe retail banks suffer with quite the same rent-seeking culture found in investment banks so I’m more hopeful of them. But on the other hand retail banks have an additional set of problems:

Legacy retail banks are dependent on Cobol mainframe systems. This brings several problems. Firstly, while you can do some Agile-like things in a Cobol environment, there are many things you can’t.

Specifically TDD: as far as I can tell several attempts have been made to create a CobolUnit (this CobolUnit may or may not be the same one) and they are all pretty much dead. No action since 2009 or 2010 on those links.

We can’t expect the OpenSource community to come to the rescue here. Few people write Cobol at home and the generation of people who write Cobol are different to the OpenSourcers who created JUnit, nUnit, etc.

Unless IBM or MicroFocus get interested in a CobolUnit I don’t expect much to happen – although it seems IBM might be moving in this direction. But generally I don’t see the incentive for either unless they can see money.

Second these Cobol systems are ageing. They have patch upon patch applied to them and by some accounts are reaching the end of their lives. Much of the technology they are built with just isn’t as flexible or modifiable as the current generation. There is only so much you can do with a hierarchical IMS database or a mainframe batch processing system. (If you could do these things, and you could do them as easily as modern technology then we wouldn’t have needed to invent modern technology.)

Since retail bankers were early adopters of computers some of these systems are among the oldest systems in use. Many of them have been patched together through mergers and acquisitions making things even more difficult.

True, I’m a great proponent of never rewriting and I think it would be a mistake for banks to rewrite these systems. Indeed, given the chronically poor management skills in these banks, authorising a rewrite would be a waste of money. Which means you are damned if you do, damned if you don’t.

Third, compounding the first two: the people who wrote these systems are nearing retirement age. Not only do they understand the technology but they understand the business too. When they are gone these systems are lost.

Banks could have alleviated this problem by training replacements but they don’t generally do this. Actually they have made things worse in the last 20 years by outsourcing and offshoring work, thereby discarding their own experience and exacerbating the generation change. Some of the knowledge might now reside with the outsourced and offshore providers but getting it back into the bank will prove difficult and expensive.

The solution for the banks is to buy off-the-shelf systems. But this is incredibly difficult so…

Fourth, buying an off the shelf product means not only process changes but potentially product changes. That lands the retail bank with change problems. They will need to customise off the shelf software and that isn’t cheap either.

Fifth, new “challenger” banks like Metro bank are already buying off the shelf. As the costs of maintaining legacy systems or migrating to new systems escalate these banks will have a greater and greater advantage over the legacy banks.

And it’s not just challenger banks: Paypal, Zopa, Funding Circle and a host of other online payment and loans providers are applying technology to reinvent banking. In doing so they are eating the retail banks’ market. (And the banks can’t respond because of the reasons listed here.) In other words: the banks are suffering from disruptive innovation.

Sixth – something I just hinted at – the quality of IT management in banks is absolutely appalling. Again investment banks have made retail banks worse because the big money was in investment, anyone who was any good and wanted more money had an incentive to move from the retail to the investment side. Retail IT has been robbed of its best staff.

Management quality is not entirely down to investment banks. Banks are about money management after all. Promotion to the senior positions comes from the banking side, not the IT side. Almost by definition the senior managers in banks don’t understand IT. It also means that IT managers sooner or later face a ceiling on progression.

Many of these problems have been made worse because investment bankers controlled retail banks. Investment banking is essentially a rent extracting activity, and they have managed retail IT just like that. The existing IT has been milked without adequate investment. Hence problems like the well documented RBS outages and Lloyds payment problems.

To make matters worse, banks are where the money is, so they have been preyed on by major systems companies and tool vendors. These companies sell technology fixes which may – or may not – address a problem but do little if anything to build capacity. Put it this way: banks are major buyers of automated testing tools which aren’t used.

You see my argument?

I’m sure Agile could help with some of these problems – not all but some – but the way I see it the problems facing the banks mean Agile will have a very hard time.

Finally, as others have pointed out, many of my arguments can be applied to other types of large corporations. I agree; I confine my arguments to banks because I think the case is clearer there: banks are all very similar, banks exhibit more systematic failures than most, and banks are something we should be worried about – as customers, as tax payers, as citizens in economies that depend on banks.

Scaling Agile Heuristics – SAFe or not!

If I’m being honest, I feel as if I don’t know much about scaling agile. But when I think about it, the issue is really: what do you mean when you say “scaling agile?”


It seems to me you might mean one of three things:

  • How do you make agile work with a big team? Not just 7 people but 27 people, or 270 people
  • How do you make agile work with multiple teams within an organisation? i.e. if you have one team working agile how do you make another 2, or 20 or another 200?
  • How do you make a large organisation work agile? – it’s not enough to have a team, even a large team, working agile if the governance and budgeting systems they work within are decidedly old-school

When I rephrase the question like that I think I do have some experience and something to say about each of these. Maybe I’ll elaborate.

But first an aside: why have I been thinking about this question?

Well, David Anderson pushed out a blog about the Scaled Agile Framework (SAFe) which prompted Schalk Cronje to ask what I thought – and at first I didn’t have anything to say! But then Schalk pointed out that Ken Schwaber has blogged about SAFe too. It’s not often that David and Ken find themselves on the same side of an argument… well, actually… they probably do, but too many people are willing to see Kanban and Scrum as sworn rivals. I digress, back to SAFe and scaling.

I still don’t have anything original to say about SAFe, I simply don’t know much about it. However the points David and Ken make would worry me too.

I’m not about to rush out and read SAFe. I find I’m more likely to be told by my clients: “I can see how agile works in big companies, but we are a small company, we can’t devote people like that.” And while I do have, and have had, some larger clients I think that statement says a lot.

I have over the years built up some rules-of-thumb, heuristics if you prefer a fancy term, for “scaling agile”. I’ve never set them down so completely before so here you go:

  1. Inside every large work effort there is a small one struggling to get out: find it, work with it
  2. Make agile work in the small before you attempt to make it work at scale; if you can’t get a team of 5 to work then you won’t find it any easier to get a team of 10 or 100 working. Get a small team working, learn from it, and grow it…
  3. Use piecemeal growth wherever possible: start with something small and grow it
  4. Don’t expect multiple teams to work the same way – one size does not fit all! A new project team might be perfectly suited to Scrum, a maintenance team to Kanban and a BAU+Project team to Xanpan. Forcing one process, one method, one approach on different teams is a sure way to fail.
  5. Manage teams as atomic units, aim for team stability, minimise moves between teams
  6. Split big teams into multiple small, independent, teams with their own dedicated work streams, product focuses and even code bases        
  7. Trust in the teams, delegate as far down as you can; give teams as much independence as possible; staff teams with all the skills they need – vertical teams, include testers, requirements people and anyone else
  8. Minimise dependencies between teams; decouple deliveries, decouple teams and again, vertical teams, staff the team so they don’t need to depend on other teams
  9. Use technical practices to automate as much of the dependencies between teams as possible, e.g. a good TDD suite and frequent CI will by themselves allow two related teams to work much more closely together
  10. Overstaff in some roles to reduce dependencies – overstaffing will pay back in terms of reduced dependency management work        
  11. Learn to see Supply and Demand (a future blog entry is in the works on this): supply is limited and is hard to increase, you need to work on the demand side too
  12. Recognise Conway’s Law: work with it or set out to use it; again piecemeal growth and start as small as possible will help
  13. Use Portfolio Management to assess teams, measure them by value added against cost. Put in place a governance structure that expects failure and use portfolio management to fail fast, fail cheap, fail often
  14. Ultimately embrace Beyond Budgeting and change your financial practices
  15. Big projects, big teams are high risk and likely to fail whatever you do; some things are too big to try. Some big projects just shouldn’t be done
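
Heuristic 9 deserves a concrete sketch. Below is the sort of contract test a downstream team can run in its CI build against an upstream team’s code; everything here – the `quote()` function, the “pricing” and “billing” teams – is hypothetical, invented purely for illustration:

```python
import unittest


# Stand-in for the "pricing" team's module. In reality this would live in
# that team's code base and be pulled into the consuming team's CI build.
def quote(items: list, discount: float = 0.0) -> float:
    """Return the total price after applying a fractional discount."""
    if not 0.0 <= discount <= 1.0:
        raise ValueError("discount must be between 0 and 1")
    return round(sum(items) * (1.0 - discount), 2)


class QuoteContractTest(unittest.TestCase):
    """Contract tests the consuming "billing" team runs on every build.

    If the pricing team changes quote()'s behaviour, these tests fail in
    CI immediately - no cross-team meeting is needed to catch the break.
    """

    def test_no_discount_is_plain_sum(self):
        self.assertEqual(quote([10.0, 5.0]), 15.0)

    def test_discount_is_applied(self):
        self.assertEqual(quote([100.0], discount=0.25), 75.0)

    def test_invalid_discount_rejected(self):
        with self.assertRaises(ValueError):
            quote([10.0], discount=1.5)
```

Run it with `python -m unittest` in the billing team’s pipeline. The point is not the arithmetic: it is that the test suite, run automatically and frequently, replaces a human dependency-management conversation between the two teams.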

There is no method, framework, tool out there that will be your silver bullet, but if you think for yourself, and if your people are allowed to think and act then you might just be able to create something for yourself.

Going back to the three questions I opened with:

  • How do you make agile work with a big team? When the team gets too big split it along product lines or product subsystems; if you can’t then don’t do anything that big
  • How do you make agile work with multiple teams within an organisation? Use multiple independent teams, minimise dependencies, decouple and use technical practices
  • How do you make a large organisation work agile? Portfolio management and beyond budgeting

And remember: don’t do big projects, do small ones and grow success. If that is not an option for you then brace yourself.

Scrum + Technical practices = XP ?

My last blog post (Scrum has 3 Advantages Over XP) attracted a couple of interesting comments – certainly getting comments from Uncle Bob Martin and Tom Gilb shows people are listening. And it’s Tom’s comment I want to address.

But before I do, lest this come across as Scrum bashing, I should say that I think Scrum has done a lot of good in the software industry. Yes, I had my tongue in my cheek when I wrote “Scrum has 3 advantages over XP” but if Scrum hadn’t made “Agile” acceptable to management it is unlikely that XP ever would have taken Agile this far.

I suppose I’m a little like Uncle Bob; it’s a bit of a surprise to me that Scrum Certification has come to mean so much to the industry when really it means so little. I’m also worried that it brands poor people “certified” and good people “uncertified”, and stifles innovation.

Second, the ideas of the Scrum originators are good. And let’s not forget that while Schwaber and Sutherland have become celebrities in the community there were five authors of the original “Scrum: A Pattern Language for Hyperproductive Software Development”. Beedle also co-authored one of the early Scrum books, Devos still works as a Scrum Trainer, while Sharon seems to be Scrum’s Fifth Beatle – where are they now?

Getting back to the point. Tom Gilb commented: “Sutherland did say in a speech that Scrum works best with XP”. Chris Oldwood also says he’s heard Jeff say this.

In fact, I think I’ve heard Jeff Sutherland say something similar, heck, not similar, the same and reported it in this blog. And Jeff’s not alone, I’ve heard the same thing from other Scrum advocates, trainers, and cheer-leaders.

But this view is not universal. There are Scrum trainers who disagree. Specifically, there are those who train Scrum and explicitly say XP practices like TDD and on-site customer should not be used. And as I sarcastically commented previously, part of the attraction of Scrum to management types is that it doesn’t delve into the messy world of coding.

Let’s just note: there are multiple interpretations of what is, and is not, Scrum. Nothing wrong with that, it just complicates matters.

Back to XP though….

There are, broadly speaking, two parts to Extreme Programming (XP): the technical practices (Test Driven Development, Continuous Integration, Refactoring, Pair Programming, Simple Design, etc.) and the process management side (Iterations, Boards, Planning Meetings, etc.).

It is these technical practices that Jeff, and others, are saying Scrum software development teams should use. Fine, I agree.

Look at the other side of XP, the process side. This is mostly the same as Scrum. Do a little translation – Iteration to Sprint and so on – and it is the same.

(OK, there are a few small differences. I just checked my copy of Extreme Programming (old testament) and to my surprise stand-up meetings aren’t there. I think many XP teams do stand-ups but they weren’t originally part of XP. The book does include “40 Hour Work Week” which isn’t present in The Scrum Pattern Language or “Agile Project Management with SCRUM”, but I’ve heard Scrum advocates talk about sustainable pace. I’m still not sure how you marry this with commitment but I’ve written about that before. However these points are nit-picking.)

Essentially Project Management XP style is Scrum.

This shouldn’t be a surprise. In the mid-90’s when Scrum and XP were being formed and codified many of these people knew one another. If nothing else they saw each other’s papers at PLoP conferences. The Scrum Pattern language was presented at PLoP 1998, Episodes (the basis for XP) was at PLoP 1995, and both draw on the work of Jim Coplien.

I have every reason to believe that XP and Scrum didn’t appear in isolation. There was cross-fertilisation. I have also been told there was competition.

If you accept that Scrum and XP project management are close enough to be the same thing, and you accept Jeff Sutherland’s recommendation (namely do XP technical practices with Scrum) then what do you have?

  • XP = Technical Practices + Scrum Style Project Management
  • Jeff Sutherland says effective development teams = Scrum + Technical Practices from XP

Think about that. Even talk about it, but don’t say it too loudly. For all the reasons I outlined before, XP isn’t acceptable in the corporation. Scrum is. Therefore, we must all keep this little secret to ourselves.

Architects who aren’t

Having cleared up some preliminaries, i.e. What is architecture?, we are getting closer to the big question: what do architects do? But I’ll continue to take this piecemeal. In this blog entry I’d like to dismiss two groups of people who carry the Architect title but are not Architects.

The first group are “Architects by Seniority.” Some years ago I held a post with the title “Senior Software Engineer.” At first I thought this might mean I was “the senior software engineer” but quickly realised I was one of many “senior software engineers.” The company conferred this title on anyone who had more than a few, about five, years of experience working as a software engineer. Or as I used to joke: “anyone over 30.”

Some Architects get their titles the same way. My guess is this is more common on the services side of the industry where engineers are sold by the hour to clients and Architects have a higher billing rate.

A few months ago I was told it is common in the Indian outsourcing industry to confer the title Architect on engineers with 3 years’ experience. This is one data point, I don’t know how common it really is. Anyone out there know?

Unfortunately, some of the people who are given the title Architect simply because they have been around a while let it go to their heads. Which brings us to the second group who are Architects in title but not in practice: “Divorced Architects” or, as I think Joel Spolsky christened them, “Architecture Astronauts.”

These are Architects who sit around thinking big thoughts about “the system” but aren’t connected with what is actually happening. Just because you have the title “Architect” does not give you the knowledge or right to tell people what to do without doing it yourself. As Jim Coplien and Neil Harrison put it “Architect also Implements.”

If you are lucky these architects are pretty much harmless: they cost the company money, the developers tip their flat-cap to them in the morning but ignore them when they do the work. If you’re unlucky their crazy ideas result in a messed up system and their egos get in the way.

Years ago I worked on rail privatisation, the Railtrack A-Plan timetabling system to be exact. It was on Windows NT with Sybase (yes, that’s how long ago it was) in C and C++ with a little Visual Basic. 120 people worked on the system at the peak, of which four were architects and about 12 were coders; OK, maybe 16 if you include the SQL and VB guys.

But the architects came from a mainframe Cobol background so they designed a batch processing system, and set down constraints and ways of working which just didn’t make sense for a client-server system. The company had an ISO-9000 system in place with lots of management, so the result of this architecture was a lot of problems. Once they got into the code the developers just did what they wanted; the architects would never know because a) they wouldn’t get their hands dirty with code and b) they didn’t really know C, let alone C++.

The project wound down and went into maintenance mode so I left. A couple of years later I found myself back on a much reduced project to redevelop parts of the system. Now we had about five developers, one architect part-time and a couple of dozen people tops.

We mostly ignored the architect, he saw one system and we saw another. ISO-9000 was nominally in place but widely ignored. The process worked a lot better. Occasionally we wrote a formal document to keep the formal process and architect happy but the real documentation was contained in “Rough Guides” which didn’t formally exist.

Moral of the story: just because you are called an architect, and just because you go to meetings, doesn’t mean you are an architect.

Gartner talks up Pattern Based Strategy

For those who don’t know, Gartner Group are a business/technology research outfit who produce reports on things technical and business related. They have a pretty good reputation and they have a couple of sidelines in consulting and event organizing (or is it the other way round?). Sometimes they are leading opinion and sometimes following. They sell their reports, their conferences are expensive and their consulting must be more so.

Anyway, the reason for saying this is that Gartner have decided “Pattern Based Strategy(tm)” is the next big thing: “Companies Must Implement a Pattern-Based Strategy to Increase Their Competitive Advantage.”

(Notice the “(tm)”. Somebody, somewhere owns the trademark “Pattern Based Strategy.” Hillside own the trademark “PLoP” but not “pattern”. Could the owner of this one be…?)

Now as many of my regular readers will know, I’ve been talking about Business Strategy Patterns for over five years – and you can read my Business Strategy Pattern papers for free. Since posting up the 2009 EuroPLoP paper I’ve spent some time on the whole collection; I now have a 230-page draft of a book and I’m starting to approach publishers. There is still a lot to do but I hope to have something out by the end of 2010.

Getting back to Gartner. I’d love to tell you how their patterns and mine link up but I can’t actually read the Gartner report. Call me penny-pinching if you will but $500 seems pretty steep. Actually, they have a bunch more papers on strategy patterns but again, it’s $500 a throw.

I asked Gartner for a copy of their research and they said: “Unfortunately, our research is only available to our clients, but I can provide you with the press releases we have done on the topic which you may find helpful.”

So there you go, a closed shop. They aren’t willing for other people to review their work – definitely not the culture we in the patterns community have.

It also means I can’t see their sources. I’m naturally suspicious of organizations that won’t disclose their sources. For all I know they could be building on sand – they could even be referencing me left, right and center. Never mind me, I’d like to see some third party research to support their ideas.

From the press release (the first link I gave) and some other downloads from the site it doesn’t look like they are talking about Patterns in the same sense that anyone familiar with Christopher Alexander’s work would understand it. And by extension, they are not linking their patterns work up with work from the software engineering community – Design Patterns (GoF, POSA, etc.). Neither are they talking about patterns as knowledge management from the likes of Mary Lynn-Manns and Daniel Mays, or the work on Business Strategy Patterns I and others have been doing.

At a basic level there are similarities between the patterns they see and the underlying ideas of Alexander. For both Gartner and Alexander it is about a recurring sequence of events in some setting – Alexander would say place, software folks would say context and Gartner say market.

I think Gartner’s work is more related to data mining and business intelligence than it is to Alexander. As such it may well have more to do with bird spotting. Some years ago Harvard Business Review ran a piece on how spotting patterns in birds could help with business strategy – see Spotting Patterns on the Fly from HBR, November 2002.

Although we are all dealing with the same things – recurring events, sequences, constructions and the forces that bring them about – I consider Gartner’s patterns to be “patterns in the small”: recurrence without the repetition, analysis and explanation that make up “patterns in the large”. It’s the same word but used differently.

Gartner also discuss “weak signals” and scenario planning. This is interesting because I’ve always believed pattern thinking has a lot in common with scenario planning. It is about detecting the forces at work and seeing how they play out.

Gartner are mixing a number of different themes into their “Pattern Based Strategy(tm)”: IT systems, event monitoring and data mining. They talk about finding “new patterns” but how can you find new patterns if you don’t know the old ones? It would be nice to think Gartner have some patterns they might share but I suspect they don’t.

OK, let’s give it 10 years and see if anyone remembers Gartner’s “Pattern Based Strategy™.” Alexander and Design Patterns will still be around – timeless.

Finally, I would imagine that Gartner are one of those companies with a reputation management system so someone in Gartner will shortly be alerted to this blog. I’d like to hear your comments.

More Business Strategy patterns for software companies

As you might guess, I’m lightly loaded this week so I’m catching up with a number of writing projects.

The latest set of patterns in my Business Strategy Patterns for Software Companies series are now online.

This set were reviewed at EuroPLoP 2009 and describe a Pattern Language for Product Distribution. The patterns are:

  • Branded Shops
  • Named Sales People
  • Internet Store
  • Independent Retail
  • Local Guide
  • White Label
  • Wholesaler

And while we are talking about patterns, did you know: there is a Google Pattern Search engine? Thanks Gregor.