I don’t really know what piloting a plane is like. I’m not a pilot. I have only ever been in a cockpit at museums (sitting in an SR-71 Blackbird was amazing). But whenever I hear of software teams who need to work together – perhaps because they deliver different parts of the same product, perhaps because one supplies the other, or just because they all work for the same company – I always imagine it’s like synchronised flying.
In my mind I look at software teams and see the Red Arrows or Blue Angels. Now you could argue that software teams are nothing like an aerobatic team because those teams perform the same routines over and over again, and because those teams plan their routines in advance and practice, practice, practice.
But equally, while the routine may be planned in depth each plane has to be piloted by someone. That individual may be following a script but they are making hundreds of decisions a minute. Each plane is its own machine with its own variations, each plane encounters turbulence differently, each pilot has a different view through their window. And if any one pilot miscalculates…
As for the practice, one has to ask: why don’t software teams practice? In many other disciplines practice and rehearsal are a fundamental part of doing the work. That’s why I’ve long aimed to make my own training workshops a form of rehearsal.
Software teams don’t perform the same routines again and again, but in fact software teams synchronise in common recurring ways: through APIs, at release times, at deadlines, at planning sessions. What the teams do in between differs but coordination happens in recurring forms.
While aerobatic teams may be an extreme example of co-ordination, the same pilots don’t spend their entire lives flying stunts. Fighter pilots need to synchronise with other fighter pilots in battle situations.
OK, I’m breaking my own rule here – using a metaphor from a domain I know little of – but, at the same time I watch these displays and this image is what pops into my head.
Anyone got a better metaphor?
Or anyone know about flying and care to shoot down my metaphor?
My old ACCU friend Derek Jones has been beavering away at his Evidence Based Software Engineering book for a few years now. Derek takes an almost uniquely hard-nosed, evidence-driven view of software engineering. He works with data. This can make the book hard going in places – and I admit I’ve only scratched the surface. Fortunately Derek also blogs, so I pick up many a good lead there.
At first Derek’s finding – that most software has a short life – worried me: so much of what I’ve been preaching about software living for a long time is potentially rubbish. But then I remembered: what I actually say, when I have time, when I’m using all the words, is “Successful software lives” – or survives, even is permanent. (Yes, it’s “temporary” at some level, but so are we; as Keynes said, “In the long run we are all dead”.)
My argument is: software which is successful lives for a long time. Unsuccessful software dies.
Successful software is software which is used, software which delivers benefit, software which fills a genuine need and continues filling that need; and, most importantly, software which delivers more benefit than it costs to keep alive. Such software survives. If it is used it will change, and that means people will work on it.
So actually, Derek’s observation and mine are almost the same thing. Derek’s finding is almost a corollary to my thesis: Most software isn’t successful and therefore dies. Software which isn’t used or doesn’t generate enough benefit is abandoned, modifications cease and it dies.
Actually, I think we can break Derek’s observation into two parts, a micro and a macro argument.
At the micro level are lines of code and functions. I read Derek’s analysis as saying: at the function level code changes a lot at certain times. An awful lot of that change happens at the start of the code’s life when it is first written, refactored, tested, fixed, refactored, and so on. Related parts of the wider system are in flux at the same time – being written and changed – and any given function will be impacted by those changes.
While many lines and functions come and go during the early life of software, eventually some code reaches a stable state. One might almost say Darwinian selection is at work here. There is a parallel with our own lives there: during our first 5 years we change a lot, we start school, things slow down but still, until about the age of 21 our lives change a lot, after 30 things slow down again. As we get older life becomes more stable.
Assuming software survives and reaches a stable state it can “rest” until such time as something changes and that part of the system needs rethinking. This is Kevlin Henney’s “Stable Intermediate Forms” pattern again (also in ACCU Overload).
At a macro level Derek’s observation applies to entire systems: some are written, used a few times and thrown away – think of a data migration tool. Derek’s data has little to say about whether software lifetimes correspond to expected lifetimes; that would be an interesting avenue to pursue but not today.
There is a question of cause and effect here: does software die young because we set it up to die young or because it is not fit enough to survive? Undoubtedly both cases happen, but let me suggest that a lot of software dies early because it is created under the project model, and once the project ends there is no way for the software to grow and adapt. Thus it stops changing, its usefulness declines and it is abandoned.
The other question to ponder is: what are the implications of Derek’s finding?
The first implication I see is simply: the software you are working on today probably won’t live very long. Sure you may want it to live for ever but statistically it is unlikely.
Which leads to the question: what practices help software live longer?
Or should we acknowledge that software doesn’t live long and dispense with practices intended to help it live a long time?
Following our engineering handbook one should create a sound architecture, document the architecture, comment the code, reduce coupling, increase cohesion, and apply other good engineering practices. After all, we don’t want the software to fall down.
But does software die because it fails technically? Does software stop being used because programmers can no longer understand the code? I don’t think so. The prevalence of the “big ball of mud” suggests poor-quality software is common.
When I was still coding I worked on lots of really crummy software that didn’t deserve to live, but it did because people found it useful. If software died because it wasn’t written for old age then one wouldn’t hear programmers complaining about “technical debt” (or technical liabilities, as I prefer).
Let me suggest: software dies because people no longer use it.
Thus, it doesn’t matter how many comments or architecture documents one writes, if software is useful it will survive, and people will demand changes irrespective of how well designed the code is. Sure it might be more expensive to maintain because that thinking wasn’t put in but…
For every system that survives to old age many more systems die young. Some of those systems are designed and documented “properly”.
I see adverse selection at work: systems which are built “properly” take longer and cost more, and in the early years of life those additional costs are a hindrance. Maybe engineering “properly” makes the system more likely to die early. Conversely, systems which forego those extra costs stand a better chance of demonstrating their usefulness early and breaking even in terms of cost-benefit.
Something like this happened with Multics and Unix. Multics was an ambitious effort to deliver a novel OS but failed commercially. Unix was less ambitious and was successful in ways nobody ever expected. (The CPL, BCPL, C story is similar.)
Finally, what about tests – is it worth investing in automated tests?
Arguably, writing tests so software will be easier to work on in future is waste because the chances are your software will not live. However, at the unit test level, and even at the acceptance test level, that is not the primary aim of such tests. At this level tests are written so programmers create the correct result faster. Once someone is proficient, writing test-first unit tests is faster than debug-later coding.
To be clear: the primary driver for writing automated unit tests in a test first fashion is not a long term gain to test faster, it is delivering working code faster in the short term.
However, writing regression tests probably doesn’t make sense because the software is unlikely to be around long enough for them to pay back. Fortunately, if you write solid unit and acceptance tests these double as regression tests.
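To make that concrete, here is a minimal sketch in Python – the function and its behaviour are hypothetical, invented for illustration – showing how a test written first drives the programmer to a correct result today, while the very same test then doubles as a regression test on every future release:

```python
# A hypothetical example: the test below is written before the function,
# tells the programmer when the code is correct, and then stays in the
# suite guarding against regressions at no extra cost.

def apply_discount(price, code):
    """Return price after applying a known discount code (illustrative)."""
    discounts = {"SAVE10": 0.10, "SAVE15": 0.15}
    return round(price * (1 - discounts.get(code, 0.0)), 2)

# Written first, these assertions define "correct"...
assert apply_discount(100.0, "SAVE15") == 85.0
# ...including the edge case of an unknown code.
assert apply_discount(100.0, "BOGUS") == 100.0
```

The point is the short-term feedback loop: the test pays for itself before the software has to live long at all.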
The code 15USERSTORY9YP should get you 15% off on the Edinburgh Agile website and there are some early bird offers too.
These are all half-day workshops which run online with Zoom. As well as the online class, attendees receive one of my books to accompany the course, the workshop slides and a recording of the workshop, and have the option of taking an online exam to receive a certificate.
These workshops are also available for private delivery with your teams. We ran our first client specific course last month and have another two booked before Christmas.
We are also working on a User Stories Masterclass 2 which should be available in the new year.
From time to time I come across software platform teams – also called infrastructure teams. Such teams provide software which is used by other teams rather than end customers; as such they are one step, or even more, removed from customers.
Now I will admit part of me doesn’t want these teams to exist at all but let’s save that conversation for another day. I acknowledge that in creating these teams organisations act with the best intentions and there is a logic to the creation of such teams.
It is what happens with the Product Owners that concerns me today.
Frequently these teams struggle with product owners.
Sometimes the teams don’t have product owners at all: after all these teams don’t have normal customers, they exist to do work which will enhance the common elements and therefore benefit other teams who will benefit customers. So, the thinking goes, coders should just do what they think is right because they know the technology best.
Sometimes an architect is given the power of product ownership: again the thinking is that, as the team is delivering technology to technologists, someone who understands the technology is the best person to decide what will add value.
And sometimes a product owner exists but they are a developer, they may even still have development responsibilities and have to split their time between the two roles. Such people obtain their role not because of their marketing skills, their knowledge of customers or because they are good at analysing user needs. Again it is assumed that they will know what is needed because they know the technology.
In my book all three positions are wrong, very wrong.
A platform team absolutely needs a customer-focused product owner: a product owner who can appreciate that the team has two tiers of customers – first other technology teams, and beyond them actual paying customers. This means understanding the benefit to be delivered is more difficult, but that is not a reason to duck the issue; it is a reason to work harder.
If the platform team are to deliver product enhancements that allow other teams to deliver benefit to customers then it is not a case of “doing what the technology needs.” It is, more than ever, a case of doing things that will deliver customer benefit.
Therefore, platform teams need the strongest and best product owners who have the keenest sense of customer understanding and the best stakeholder management skills because understanding and prioritising the work of the platform team is a) more difficult and b) more important.
A platform team that is not delivering what other teams need does more damage to more teams and customers – in terms of benefit not delivered – than a regular team that just delivers to customers. Sure the PO will need to understand the technology and the platform but that is always the case.
So, to summarise and to be as clear as possible: Platform teams need the best Product Owners you have available; making a technical team member, one without marketing and/or product ownership experience, the product owner is a mistake.
“I’m frankly amazed at how far the #NoProjects throwaway Twitter comment travelled. But even today, in the bank where I work, the same problems caused by project-oriented approach to software are manifest as the problems I saw at xxxx xxx years ago.” Joshua Arnold
Once upon a time, 2 or 3 years back, #NoProjects was a hot topic – so hot it was frequently in flames on Twitter. For many of the #NoProjects critics it was little different from #NoEstimates. It sometimes felt that to mention either on Twitter was like pulling the pin and tossing a hand grenade into a room.
I never blocked anyone but I did mentally tune out several of those critics and ignore their messages. However, I should say thank you to them: in the early days they did help flesh out the argument, and in the later days they were a great source of publicity. If we wanted to publicise an event one only had to add #NoProjects to a tweet and stand back.
The hashtag still gets used but far less often, the critics have fallen back and rarely give battle, and as I’ve said before, #NoProjects won. But, as a recent conversation on the old #NoProject Slack channel asked: why do we still have projects? Why does nobody actively say they do #NoProjects?
In part that is because No doesn’t tell you what to do, it tells you what not to do, so what do you do?
In retrospect we didn’t have the language to express what we were trying to say. Over time, with the idea floating around, we found that language: Outcome oriented, Teams over Projects, Products over projects, Product centric, Stable teams – these and similar expressions all convey the same idea: it’s not about doing a project, it’s not even about doing agile, it is about creating sustainable outcomes and business advantage.
The same thinking is embedded in AgendaShift, “The Spotify Model”, SAFe and other frameworks. These are continuity models rather than the stop-go project model. One might call all these ideas and models post-project thinking.
In many ways the hashtag died because we found better, and less confrontational, language to express ourselves.
There was a growing, if implicit, understanding that this is digital not IT, it is about digital business, and that means continuity. The project model of IT is dead.
Which begs the question: why aren’t these approaches more widespread?
The thinking is there, the argument has been made against projects and for alternative models, and you would be hard pressed to find a significant advocate of agile who would argue differently but companies are still, overwhelmingly, project oriented.
When I’m being cynical I’d say, like agile, it is a generational thing. The current generation of leaders – or at least those in positions of management authority – built their success on delivering IT projects. Only as this generation relinquishes leadership will things change.
Optimistically I remember what science fiction author William Gibson once said:
“The future is here, it’s just unevenly spread around”
For digital start-ups this isn’t an issue: they are born post-project, they create digital products, the business and technology are inseparable. The project model is counter to their DNA.
Some legacy companies have consciously gone post-project and are recognising the benefits: the capitalist model suggests these early movers and risk takers will gain the most. Other legacy companies have adopted parts of the continuous model but cling to the project model too; some will make the full jump, some – most? – will fall back.
Unfortunately Covid, the hangover of bail-outs from the 2007-8 financial crash and the failure to break up monopolies (Google, Facebook, Amazon specifically) mean capitalism is not exerting its usual Darwinian force.
Projects will exist for a long time yet, #NoProjects will continue small scale disruption but in the long term the post-project organizations will win out. Hopefully I’ll be alive to see it but I have no illusion, the rest of my career will be spent undoing the damage the project model does.
“Much of the writing I’ve seen assumes that software can be shipped directly into the hands of customers to create value (hence the “smaller packages, more often” approach). My experience has been that especially with new launches or major releases, there needs to be a threshold of minimum functionality that needs to be in place.”
Check your phone. Is it set to auto-update apps? Is your desktop OS set to auto-update? Or do you manually choose when to update?
Look at the update notes on phone apps from the likes of Uber, Slack, SkyScanner, the BBC and others. They say little more than “we update our apps regularly.”
Today people are used to technology auto-changing on them. They may not like it, but would they like one big change any better?
My guess is that most people don’t even notice those updates. When you batch up software releases users see lots of changes at once, when you release them as a regular stream of small updates then most go unnoticed.
Still, users will see some updates change things, and they will not like some of these. But how long do you want to hide these updates from your users?
The question that needs asking is: what is the cost of an update? The vast majority of updates are quick, easy, cheap and painless.
Of course people don’t like updates which introduce a new UI, a new payment model or which demand you uninstall an earlier app but when updates are easy and bring benefits – even benefits you don’t see – they happily accept them.
And remember, the alternative to 100 small updates is one big update where people are more likely to see changes.
If your updates are generally good why hold them back? And if your updates are going in the wrong direction shouldn’t you change something? If you run scared of giving your users changes then something is wrong.
Nor is it just apps. Most people (in Europe at least) use telco supplied handsets and when the telco calls up and says “Would you like a new handset at no additional cost?” people usually say Yes. That is how telcos keep their customers.
The question continues,
“there needs to be coordination across the company (e.g. training people from marketing, sales, channel partners, customer/ internal support, and so on). There is also the human element – the capacity to absorb these changes. As a user of tech, I’m not sure I could work (well) with a product where features were changing, new ones being added frequently (weekly or even monthly), etc.”
If every software update introduced a big change then these would be problems. But most updates don’t: most introduce teeny-tiny changes.
Of course sometimes things need to change. The companies which do this best invest time and energy in making these painless. For example, Google often offers a “try our new beta version” for months before an update. And for months afterwards they provide a “use the old interface option.”
The best companies invest in user experience design too. This can go a long way to removing the need for training.
Just because a new feature is released doesn’t mean people have to use it. For starters new changes can be released but disabled. Feature toggles are not only a way of managing source code branches but they also allow new features to be released silently and switched on when everyone is ready. This allows for releases to be de-risked without the customer seeing.
And when they are switched on they can be switched on for a few users at a time. Feedback can be gathered and improvements made before the next release.
That can be co-ordinated with training: make the feature toggle user switchable, everyone gets the new software and as they complete the training they can choose to switch it on.
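A sketch of how that might look in code – the class and feature names here are illustrative, my own invention rather than any particular toggle library:

```python
# A minimal feature-toggle sketch: new code is released "dark" and only
# becomes visible when the toggle is switched on, per user.

class FeatureToggles:
    def __init__(self):
        self._enabled = {}  # feature name -> set of user ids

    def enable(self, feature, user):
        self._enabled.setdefault(feature, set()).add(user)

    def is_on(self, feature, user):
        return user in self._enabled.get(feature, set())

toggles = FeatureToggles()

def render_dashboard(user):
    # The new dashboard code is deployed, but hidden behind the toggle.
    if toggles.is_on("new-dashboard", user):
        return "new dashboard"
    return "old dashboard"

assert render_dashboard("alice") == "old dashboard"  # released, but dark
toggles.enable("new-dashboard", "alice")             # e.g. after training
assert render_dashboard("alice") == "new dashboard"  # switched on per user
assert render_dashboard("bob") == "old dashboard"    # others unaffected
```

In practice real products use a toggle service or config store rather than an in-memory map, but the principle is the same: release and enable become two separate decisions.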
Now marketing… yes, marketeers do like the big bang release – “look at us, we have something shiny and new!”
You could leave lots of features switched off until your marketeers are ready to make a big bang. That also reduces the problem of marketers needing to know what will be ready when, so they know when to launch a campaign.
Or you could release updates without any fuss and market when you have the critical mass.
Or you could change your marketing approach: market a stream of constant improvements rather than occasional big changes.
Best of all market the capabilities of your thing without mentioning features: market what the app or system allows you to do.
For years I’ve been hearing “business people” bemoan developers who “talk technical”, but I see exactly the same thing with marketeers. Look at Sony televisions: what is the “picture processor X1”? And why should I care? I can’t remember when I last changed the contrast on my television, so the “Backlight master drive” (whatever that is) means nothing to me.
Or, look at Samsung mobile phones, 5G, 5G, 5G – what do I care about 5G? What does 5G allow me to do that I can’t with my current phone?
Drill down, look at the Samsung Galaxy lineup: CPU speed, CPU type, screen size, resolution, RAM, ROM – what do I care? How does any of that help me? – Stop throwing technical details at me!
Don’t market features, market solutions. Tell me what “job to be done” the product addresses; tell me how my life will be improved. Marketing a solution rather than features decouples marketing from the upgrade cycle.
So sure, people don’t like technology change – I’ll tell you a story in my next blog. But when technology change brings benefits are they still resistant?
Now, with modern technology, with agile and continuous delivery, technology can change faster than business functions like training and marketing. We can either choose to slow technology down or we can change those functions to work differently – not necessarily faster but differently in a way that is compatible with agile technology change.
These kinds of tensions are common in businesses which move across to agile-style working. A lot of companies think agile applies to the “software engine room” and the rest of the business can carry on as before. Unfortunately they have released the Agile Virus – agile working has introduced a set of tensions into the organization which must either be embraced or killed.
Once again technology is disruptive.
Perhaps, if the marketing or training department is insisting on big-bang releases, it is they who should be changing. Maybe, just maybe, they need to rethink their approach; maybe they could learn a thing or two about agile and work differently with technology teams.
“If you’re not embarrassed by the product when you launch, you’ve launched too late.” Reid Hoffman, founder of LinkedIn
Years ago I worked for a software company supplying Vodafone, Verizon, Nokia, etc. The last thing those companies wanted was to update the software on their engineers’ PCs every month, let alone every week!
I was remembering this episode when I was drafting what will be my next post (“But our users don’t want to change”) and thought it was worth saying something about how regular releases change the risk-reward equation.
When you only release occasionally there is a big incentive to “get it right” – to do everything that might be needed and to remove every defect whether you think those changes are needed or not. When you release occasionally second chances don’t happen for weeks or months. So you err on the side of caution and that caution costs.
Regular releases change that equation. Now second chances come around often; additions and fixes are easy. Now you can err on the side of less, and that allows you to save time and money.
The ability to deliver regularly – every two weeks as a baseline, every day for high performing teams – is more important than the actual deliveries. Releasable is more important than released. The actual question of whether to release or not is ultimately a question for business representatives to decide.
But being releasable on a very regular basis is an indicator of the team’s technical ability and the innate quality of the thing being built. Teams which are always asking for “more time” may well have a low-quality product (lots of bugs to fix) or have something to hide.
The fact that a team can, and hopefully do, release (to live) massively reduces the risk involved. When software is only released at the end – and usually only tested before that end – then risk is tail loaded. Having releasable – and especially released – software reduces risk. The risk is spread across the work.
Actually releasing early further reduces risk because every step in the process is exercised. There are no hidden deployment problems.
That offsets sunk costs and combats commitment escalation. Because at any time the business stakeholders can say “game over” and walk away with a working product, they are no longer held captive by the fear of sunk costs, suppliers and career-threatening failures.
It is also a nice side effect that releasing new functionality early – or just fixing bugs – increases the return on investment because benefits are delivered earlier and therefore start earning a return sooner.
Just because new functionality is completed, and even released early, does not mean users need to see it. Feature toggles allow features and changes to be hidden from users – or enabled only for specified users. Releasing changed software with no apparent change may look pointless, but it actually reduces risk because the changes are out there.
That also means testing is simplified. Rather than running tests against software with many changes, tests are run against software with few changes, which makes testing more efficient even if the users don’t see it. And it removes the “we can’t roll back one fix” problem when one of ten changes doesn’t pass.
Back to the Vodafone engineers who didn’t want their laptops updated: that was then, in the days of CD installs. Today the cloud changes that – there is only one install to do, so it isn’t such an inconvenience. They could have the updates but with disruptive changes hidden, while still receiving non-disruptive changes, e.g. bug fixes.
In a few cases regular deliveries may not be the right answer. The key thing though is to change the default answer from “we only deliver occasionally (or at the end)” to “we deliver regularly (unless otherwise requested).”
The story should deliver business value: it should be meaningful to some customer, user, stakeholder. In some way the story should make their lives better.
The story should be small enough to be delivered soon: some people say “within 2 days”, but I’m more generous – after all, I used to be a C++ programmer. I’m happy as long as the story can be delivered within two weeks, i.e. the standard size of a sprint.
Now these two rules are in conflict, the need for value – and preferably more value! – pushes stories to be bigger while the second rule demands they are small. That is just the way things are, there is no magic solution, that is the tension we must manage.
Those two rules also help us differentiate between stories and epics – and tasks if you are using them:
Epics honour rule #1: epics are very valuable, but they are not small; by definition they are large, thus epics are unlikely to be delivered soon
Tasks honour rule #2, they are small, very small, say a day of work. But they do not deliver value to stakeholders – or if they do it is not a big deal
Tasks are the things you do to build stories. And stories are the things you do to deliver epics. If you find you can complete a story without doing one of the planned tasks then great, and similarly not all stories need to be completed for an epic to be considered done.
In an ideal world you would not need tasks, every story would be small enough to stand alone. Nor would you need epics, because stories would justify themselves. We can work towards that world, but until then most teams in my experience use two of these three levels – stories and tasks, or epics and stories. A few even use all three.
Using more than three is an administration problem. There is always a fourth level above these, the project or product that is the reason they exist in the first place. But really, three levels is more than enough to model just about anything: really small, small, and damn big.
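The two rules and three levels can be sketched as a simple model – the field names and the sprint-length check are my own illustration, not any standard tooling:

```python
# An illustrative model of the three levels. Rule #1: deliver value.
# Rule #2: small enough to deliver soon (within one sprint).

from dataclasses import dataclass, field

@dataclass
class Task:        # honours rule #2 only: small, little standalone value
    name: str
    days: float    # roughly a day each

@dataclass
class Story:       # must honour BOTH rules
    name: str
    value: str     # who benefits and how (rule #1)
    days: float    # estimated size (rule #2)
    tasks: list = field(default_factory=list)

    def fits_a_sprint(self, sprint_days=10):
        # "small enough to be delivered soon": within a two-week sprint
        return self.days <= sprint_days

@dataclass
class Epic:        # honours rule #1 only: valuable but large
    name: str
    stories: list = field(default_factory=list)

story = Story("Pay by card", "shoppers can complete checkout", days=8,
              tasks=[Task("integrate payment API", 1),
                     Task("update checkout UI", 1)])
assert story.fits_a_sprint()        # small enough: it is a story
assert not Story("Rebuild checkout", "all shoppers", days=30).fits_a_sprint()
```

The tension between the two rules shows up directly: push `value` up and `days` tends to grow past the sprint check, at which point the story is really an epic waiting to be split.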
And every story is a potential epic until proven guilty.
I’m on a mission to popularise the term Agile Guide. A few weeks ago Woody Zuill (father of Mob Programming and the force behind #NoEstimates) and I recorded a podcast with Tom Cagley – another in his SPaMCAST series – on the Agile Guide role.