Software Myth busting – final ACCU notes

I had intended to draw a line under my comments on the ACCU conference last week. I’ve got most of it out of my system but I had to return to Peter Sommerlad’s presentation – Only the code tells the truth.

Peter’s starting point was that, well, only the code tells you the truth about the system. That is the code – the stuff that is compiled and executed – not the documentation, not the specification, certainly not the UML and not even the comments, but the actual lines of computer code.

There is nothing new in this assertion really. One of the things I had drummed into me by Derek Andrews when I was an undergraduate was: code is an executable specification, and as such it is the only completely accurate specification because, well, it’s the one that executes!

(And thanks to Hubert Matthews for a very interesting keynote at ACCU, and for reminding me that Derek is still out there practising software!)

Peter’s argument continues: if only the code is the truth what are we doing with all this other stuff? In fact, maybe a lot of what we think of as “good” software practice is actually mythology that has grown up around how we do it.

At one extreme, some of this myth may be rooted in good intentions: a practice never did us any good, but we thought it might, and somehow we came to believe it. (I’d put the original ISO9000 in this category.)

Some of the other myths may have been true at some point in the past but we should really ask “Is this still good for us?” – perhaps we need to abandon some practices that were good once upon a time but don’t really add value any more.

One of Peter’s suggestions was that comments in code may actually detract from its maintainability. The argument goes like this: some comments are useful, many are useless (a comment saying “increment i” before ++i, for example), others are confusing, and some are downright wrong. If the code is expressive enough then perhaps we don’t need comments in there at all.

This is pretty contentious stuff and unfortunately the group allowed itself to get sidetracked down this avenue for 20 minutes or so, when really it was just a sideshow.

Peter’s other suggestions included: strong typing may reduce understandability, design before coding can be detrimental, testing after we code is wrong, “everything is an object”, and a few more besides. Basically Peter is challenging us to look long and hard at some of the practices we engage in and ask: are they still useful? And so the session became a very interesting debate.

This opens a real Pandora’s box but it’s about time the software development industry engaged in some myth busting.

Here is a suggestion of my own for something we need to look long and hard at:

Myth: accountancy correctly values knowledge based companies and their assets.
Is developing software an expense or an investment?

Most of the time when we develop software it is classed as a cost. But in fact the result is an asset – software code – in which case we are increasing the assets the company owns. If this were a car or a factory it would be included on the balance sheet and classed as an investment. So £1 spent on development becomes £1 of assets owned by the company.

Take this a step further; what about our people? – remember “our people are our greatest asset.”

In which case, sending someone on a training course isn’t a cost, it’s an investment, and should be included on the balance sheet under assets. Companies regularly include “goodwill” on the balance sheet, so why not people? And why not source code?

Once you accept both these points the incentive for companies to invest in people and software increases drastically. No longer are these expenditures reducing profits; they are increasing assets. However, current accountancy doesn’t recognise this.

More notes on the ACCU conference

At the risk of boring you, my dear reader, I’d like to make a few more comments on presentations at the ACCU conference. Those of you who read this blog for the musings on change and product management might like to skip this entry – it’s for the guys who design programs.

Nico Josuttis is an ACCU conference regular; he is probably best known for his book The C++ Standard Library but (incredibly) that was over six years ago now. Like the rest of us he’s moved on a bit. These days he’s more of a software architect, so it was fitting that his main presentation was on Service Oriented Architecture – SOA.

SOA is in fashion this year – along with last season’s MDA. The easy way to think about SOA is that it is “doing it with SOAP”, but there is more to it than that – for a start you need to add in a service bus. As with any good software architecture fashion it will increase flexibility, reduce development time, increase software reuse, reduce coupling, increase business value, reduce development costs and generally just fix the whole problem with software.

Well, Nico discussed all these issues and more. He exposed it for what it is: a good idea with some advantages, but not a cure-all. You still have dependency problems, reuse isn’t automatic (far from it) and you can still make the same big mess you made without SOA.

What I really liked about Nico was that he was talking from experience: he had a hard case study to draw lessons from and stories to tell. The audience often guessed the way a story would turn out before he got to the end – because, as I said, SOA suffers from the same problems as many other architecture fashions.

The slot before Nico was Klaus Marquardt, who presented some of his software-as-medicine patterns. I’ve been familiar with Klaus’ ideas for a few years and I’d already read several of the patterns he presented, but this was the first time I felt I really understood the point he’s trying to make.

Klaus introduced several ideas about software illnesses – many of which were actually organizational illnesses. The two that stick with me most are palliative care for software and proactive wait. Let me explain.

Palliative care exists in medicine to deal with disease management and pain control when we can’t actually cure the disease. Klaus suggested that sometimes a similar situation exists in software – and I have to say I agree with him. Sometimes software gets into a mess and fixing it completely isn’t really an option – takes too long, costs too much, nobody really understands it, etc. – so we should consider what we can do to make it more manageable and reduce the pain we get from it.

The second idea that will stick with me is “proactive wait”. In other words, sometimes you can’t do anything now, or perhaps the patient doesn’t want you to do anything, so you wait. You know you’re waiting; you may take some actions to prepare yourself while you wait, but you don’t do anything until the patient is ready for you.

I think this is good advice and I intend to follow it where I can. I see two places where I can use the idea. The first is myself: sometimes I should wait. Rather than rushing in to fix something it might be better to wait until a better time, or even until the pain is higher and the patient will accept the intervention.

To be honest I already do this at times but I had no term to describe it. Sometimes saying to someone “you don’t want to do that” is the wrong thing. They will perceive you as a busybody, a control freak, a micromanager, or just a pain. Sometimes it’s better to let someone do it their way even if you know a better way; often they will learn a lesson. They may learn it the hard way, but they will learn it better than they would from you telling them.

(Of course if they are about to put their hand in the fire this is a bad time to apply this idea – only do it if nobody will be hurt.)

Second – and in some ways this is an extension of what I’ve just said – it is better to wait for someone to ask for help than to give it unasked. It is better for someone to realise they need some help and ask for it than for you to start helping out, no matter how well intentioned you are.

Conclusion: timing can be important.

That’s almost it for my ACCU notes, normal service will be resumed soon.

Changing your organization – presentation and notes

As promised I’ve uploaded my presentation from ACCU 2006, you can download it here. Also here are the notes I made on the discussion surrounding the presentation – download here.

You might ask: What motivated me to give this presentation?

The way I see it, the main problem facing people and organizations today is not in tools or techniques – we have them – but in moving from the way we develop software today to the way we need to develop software in the future.

For example, I am a great believer in the ideas and practices coming out of the Lean and Agile software development communities, but I think the main problem faced by organizations is how to move from where we are today to the new practices. That is, the problem is change.

Ultimately, you can change if you want to – but you do need to want to.

In the meantime all we can do is open people’s eyes and say: just because you developed software like that yesterday doesn’t mean you have to do it like that today. There are other, better ways.

Two of the technical presentations at ACCU – and the A, B, and C of speakers

The ACCU is a technical organization – people join because it has a technical focus. But actually the secret of the ACCU is that it is really about developing people.

We help those technical people develop their technical skills – but on the quiet we also help them develop their less technical skills; we challenge them to think about less technical issues. So, while you see Scott Meyers, Herb Sutter, Guido van Rossum and Michael Feathers headlining the conference, you also find people like Helen Sharp challenging them to think about development as a social activity.

One of the ways we help people develop themselves is by giving them opportunities. As I told many people at the ACCU conference last week there are three kinds of people giving presentations.

First we have the established players, the ‘A’ list if you like: people like Scott Meyers and Herb Sutter – they don’t need an introduction, people will come to hear them alone.

Second we have the ‘B’ list: I’ll put myself on this list if you don’t mind, and I’ll also add people like Klaus Marquardt. We are people who have been around a bit, published some papers, done some presentations – we’re the conference filler, if you like. Some of the ‘B’ list are challenging for the ‘A’ list; someone like Kevlin Henney might still be on the ‘B’ list, but he has probably already moved to the ‘A’ list.

And then we have the ‘C’ list. These are the most important speakers. These are the people the conference is really about. These folks may never have spoken before, or they may have done the odd presentation. These are the people the ACCU is developing, the future talent.

Not long ago I was ‘C’ list, I’d never done a presentation, I’d written a little. By doing presentations, by writing more and by thinking I’m now ‘B’ list. Will I ever make ‘A’ list? I don’t know, I’d like to but there are other things in life too.

Actually, I usually prefer the content of the ‘B’ and ‘C’ list presentations. There are more war stories, there is more discussion and more experimentation – both in presentation style and material.

I want to say a couple of words about two speakers who stood out. These guys are making the transition from ‘C’ to ‘B’ list in my book.

Jez Higgins (who is also the new ACCU chair) did a great presentation about XSLT2, XPath2 and XQuery. He was in a difficult slot – first session of the first day but for me he set the quality bar for the rest of the conference.

I thought I knew a lot of this stuff before the presentation but Jez had plenty of surprises for me. A Sudoku solver in XSLT2? The idea really made me think again. Things have changed in the XML support technologies and Jez did a great job of bringing me up to date. I guess I knew these technologies were capable of a lot more than transforming XML, but it had been a while since I’d checked them out.

Day 2 brought Thomas Witt and “Shared Libraries in C++”. Watching Thomas I saw a younger version of myself. Five years ago this was my subject, I knew far too much about shared libraries in C++ – .dll’s in Windows, .so’s on Unix, dynamic linking, static linking, memory models, etc. etc. (Check out my pieces on porting if you don’t believe me.)

Well, things haven’t changed that much in the shared library world over the last five years, but Thomas has gone further and deeper than I ever did. He is truly an expert on this subject: Windows, the various Unixes, Macs, object models – you name it, he knows it.

So I’m looking forward to next year. We’ll have our ‘A’ list speakers to pull in the crowds and make the conference economically viable, but the real action will be with the ‘B’ and ‘C’ list speakers – they are experts every bit as much as the ‘A’ names.

Best of all, this is the ACCU developing talent; this is the ACCU giving people space and time to stretch themselves, and it’s why I’ll stick by the organization as I move further and further away from programming.

Return from ACCU conference

I spent most of last week at the ACCU conference in Oxford. Although ACCU has its roots in the C and C++ community the organization has been trying to move beyond this for some time. This year I really felt we’re succeeding.

Yes, I went to some C++ talks, and yes, a lot of people talked C++, but a lot of other subjects were covered too. People are just as happy to talk about Agile development, products, dynamic languages and just about anything else to do with software development.

Before you ask, yes, my presentation was well received and I’ve got some good notes from the discussion. I’ll try and get these written up soon and post up both the slides and the comments in the next couple of days.

Overall I’m left with a lot of new information and ideas, plus things to think about and understand – I’ve got to update my mental maps.

For example, Peter Sommerlad did a session called “Only the code tells the truth” in which he set out to challenge our assumptions about what is “good programming practice”: a few slides, then a goldfish-bowl discussion that really opened up some dark corners and asked “is accepted wisdom right?”

One of Peter’s questions was “Do comments help code maintainability?” – this got a lively reception. I must admit I’m tending towards the view that comments in the code don’t help. I’ll leave a full discussion of that till later.

The conference was full of interesting people presenting or just attending. During the last hour or so I got talking to Gunter Obiltschnig of Applied Informatics in Austria. They have one of those libraries (C++ portable components) that sounds really useful; I just don’t have any need for it right now. One day, maybe.

Over the next week or so I’ll try and blog some more about of the interesting people and ideas at ACCU.

And, after being given some good feedback I’ll try and keep my blog entries short – or at least shorter than usual!

Software testing: the new value add?

I’m at the ACCU conference in Oxford this week. As always the conference is lively and giving me a lot to think about. I’ll blog some more about this next week.

But, before then: Monday’s FT had an interview with Larry Ellison of Oracle. He discussed the possibility of Oracle producing its own distribution of Linux. One of the drivers for this is to allow Oracle to compete with Microsoft in the OS market but it wasn’t the only one.

According to Ellison some of Oracle’s customers want a single supplier. They want one company to take responsibility for a system’s performance. In other words, they don’t want the ERP vendor saying it is a database problem and the database vendor saying it is an OS problem.

So, if Oracle had its own distribution of Linux it would test that the OS worked with the database and the database worked with the ERP. What I find interesting here is that Oracle would be selling “peace of mind”, “proven software”, or simply “tested applications”.

The value add would not be in the products themselves but in the fact that they all worked together and were shown to work together. In other words, producing the software isn’t the difficult bit; testing that it all works together is. Hence it is the testing that adds the value, not the development.

To my mind this is either a great step forward or a sign of a truly crazy world. I’ve not worked out which yet.

Path dependency

I’ve been wanting to write about path dependency for a while. It’s a subject that fascinates me, but I’m not quite sure I understand what it means and how it affects us.

The idea comes from economics, and at its simplest path dependency says “history matters” – how we got here affects the way things are. Things are the way they are because of all the decisions that have been made up to this point. Sometimes this can explain why our theories don’t explain the real world – yes, this touches on chaos theory too.

There is an interesting paper by Margolis and Liebowitz entitled “Look, I understand too little too late” that makes a good starting point. One of the classic examples of path dependency is supposed to be the Qwerty keyboard, but the authors demonstrate that this isn’t really the case. They go on to question the whole idea of path dependency as an economic force.

So, maybe in economics the theory doesn’t apply. But I think it does apply to software.

All too often when you look at a piece of software it just doesn’t work as you might expect it to. It is programmed in C++ when Python would be a better language, or it is procedural when OO would be better, or maybe it implements its own data structures when it should use a library. It seems every programmer confronted with code he didn’t write says “this is awful… maybe we should rewrite?”

It doesn’t just happen at the code level; look at the whole product: are all the features still needed? Perhaps our product is difficult to use because we needed so many features two years ago but now they get in the way.

And the same thing goes at the process level: we write requirements documents because that’s the way we’ve always done it; we throw code over the wall to testers because that’s what we did at the last place. We are slaves to the past and ignore better ways of doing things.

I think a large part of this is path dependency at work. Not entirely: sometimes it is because we now understand better what we are trying to do, and partly it is because we have more options available now, but largely it is because of how we got here.

Every individual decision made sense at the time. However, the cumulative set of decisions isn’t the optimal one now. Yet it is difficult to justify changing it now.

So often our theories, whether in economics, software development or business, assume that what should be simply is: theory says supply equals demand, so it does; theory says code should be unit tested, so it is; theory says businesses should concentrate on core competencies, so they do. But, unlike physics, these things don’t happen instantaneously. We might need time to let the system adjust, or we might need to give it a helping hand or remove some blockages.

Path dependency may be a grandiose term, but it makes an important point: often how we got here explains the current situation better than any account of how things should be.

I think much of what we call “design” is about breaking path dependency. Rather than accepting how things are, rather than doing things the way they always have been done, design is about finding a better way and encouraging people to do it that way instead.

That goes at the code level, the product level, the process level and at the business strategy level. We should respect the way things are – they are that way for good reason (even if we don’t understand it all) – but that doesn’t mean we can’t change and improve.

I’m still working out what path dependency means to me and my world view. I think it’s important and can explain a lot. As I work it out I’ll update you.

Two books – one recommendation

It has been a busy week but somewhere along the line I found time to finish both my current novel and my serious book.

The novel was Newton’s Wake by Ken MacLeod. I really liked his early works (The Cassini Division, The Stone Canal) but his last couple have been a bit of a disappointment. In truth I struggled to finish Newton’s Wake; there were some good ideas in the book but they just didn’t hang together.

On the other hand I really enjoyed Effective Coaching by Myles Downey. I got interested in the subject of coaching a couple of years back and everything I’ve learned about it since makes me think it is a good thing.

Downey’s book is a good introduction to coaching and makes some good points. Little of it was completely new to me, but that is because I read John Whitmore’s book Coaching for Performance a couple of years back. That is another good book on the subject and also worth reading.

I’m not a professional coach and I don’t think I’ll ever be one, but the techniques are useful when you’re trying to manage people, or just trying to help them in daily work. So, if you’ve not encountered coaching before I recommend reading either of these books. The two books are complementary really; reading them back to back probably isn’t worth it, but reading one and then the other a year or two later as a refresher is.

One quick lesson from Downey:

Performance = Potential – Interference

Couldn’t be more true. Once I get focused on something I can really perform; when I’m only partly interested, or not sure what I should be doing, things go slower. Sometimes you need to go slow, but sometimes you just need it done. Part of the coach’s role is to remove the interference.

Bringing it back to software development, I think one of the things the Agile methods do for developers is focus them on the task in hand. You can see this when a team is working with a card-and-board system: the card is the piece of work to be done; you focus on it, you get it done. The same is true of pair programming. It’s all about removing interference.

Many of the Agile methodologies have a project coach role – XP has an explicit coach role and Scrum has the Scrum Master role. As far as I know those behind these methods didn’t draw on the work of Whitmore, Downey and others, but I think they should; there is a lot to learn from these guys. Unfortunately, once again, the software profession seems to prefer reinventing the wheel.