That picture above is Mimas, one of the moons of Saturn – yes, I’m still a bit of a “geek”. Mimas is the name I chose for the Agile on the Beach submission system I started developing about four years ago.
I'm not entirely surprised that four years later we are still using it.
The code has lots of failings – doesn't every system? – but I'm immensely proud of it, more than I'm embarrassed by it. I'm particularly proud of the way the system held up to 300 submissions and 55 reviewers this year. This was the most use it has ever had.
As of last month the system is also OpenSource – MIT License. You can browse the code – and tell me about all my failings: Mimas on GitHub.
The system itself runs on the Google App Engine here. Anyone can log in and create their own conference. This being the post-review period, and me having some time, I'm taking the opportunity to make some improvements, so if the system is down it won't be down for long. Unfortunately Google is forcing a database upgrade (NDB to Cloud NDB) as part of the move from Python 2.7 to 3.x (which I should have done a while back).
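For anyone facing the same upgrade, the shape of the change looks roughly like this. A minimal sketch, assuming a hypothetical Submission model rather than Mimas's actual schema:

```python
# Legacy App Engine NDB (Python 2.7) imported the library like this:
#   from google.appengine.ext import ndb
# Cloud NDB (Python 3) keeps the model API largely intact, but every
# datastore operation must now run inside an explicit client context.
from google.cloud import ndb

client = ndb.Client()

class Submission(ndb.Model):  # hypothetical model, for illustration only
    title = ndb.StringProperty(required=True)
    track = ndb.StringProperty()
    created = ndb.DateTimeProperty(auto_now_add=True)

def titles_in_track(track):
    # Queries that previously ran "bare" now need the context manager.
    with client.context():
        return [s.title for s in Submission.query(Submission.track == track)]
```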
Once upon a time I thought that other people might like to use the system for their conferences but nobody ever has. The offer is still there.
I'd love it if anyone else wants to help with development on the system – but I'm realistic enough to know this is unlikely. (If you want to help, there are a lot of places in the UI where a little JS or CSS could improve things. In the backend I know a few places that could do with changes too.)
For the record this is what Mimas does:
Conference organisers create a conference and accept submissions.
Speaker details are retained for future conferences as are talk details.
Submissions are accepted into tracks; they can have co-speakers, and formats and durations are configurable.
Two rounds of reviewing are supported, and these are configurable. AOTB uses scoring for round 1 and ranking for round 2.
Reviewers can volunteer and be assigned (random) submissions to review.
Conference permissions allow for different users to have different capabilities.
E-mail acknowledgements, acceptances and declines are handled via SendGrid – you can also define custom messages via templates (see the sketch after this list).
Half a dozen or more report types, plus export to Excel.
OAuth login currently supporting e-mail, Google and Facebook via Firebase.
Share reviewer feedback and comments with submitters.
Some conference branding.
(There is an open speaker directory and tagging mechanism; I should either improve this or scrap it.)
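On the e-mail point above: the mechanics with SendGrid's v3 Python client look roughly like this. A minimal sketch – the template wording, addresses and field names are illustrative, not Mimas's own:

```python
# A sketch of templated acceptance mail via SendGrid's v3 Python client.
# Template wording, addresses and field names here are placeholders.
import os
from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail

ACCEPT_TEMPLATE = (
    "Congratulations {speaker}, your session '{title}' has been accepted. "
    "Please confirm you can speak by clicking: {confirm_link}"
)

def send_acceptance(to_email, speaker, title, confirm_link):
    message = Mail(
        from_email="submissions@example.org",  # placeholder sender address
        to_emails=to_email,
        subject="Your conference submission",
        plain_text_content=ACCEPT_TEMPLATE.format(
            speaker=speaker, title=title, confirm_link=confirm_link),
    )
    # The API key comes from the environment rather than being hard-coded.
    SendGridAPIClient(os.environ["SENDGRID_API_KEY"]).send(message)
```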
I'm currently building out a scheduler so the accepted submissions can be organised in the system too. It's half done.
There are a bunch of ways I could develop it further: it could support paper review rather than conference review, it could support shepherding, it could be used for recurring events, and a whole lot more.
Finally, it is surprisingly cheap to run on the Google App Engine.
Anyway, it's OpenSource, it's live – let me know what you think.
Anyone who has submitted to Agile on the Beach in the last few years will have used our submission system: Mimas. Like so many other conferences we, or rather I, created our own system.
“Why does AOTB have its own submission system?”
Flippant but true answer: because nobody else (yet) has decided to share our system. I'm more than happy to host other conferences.
Perhaps the actual question should be:
“Why create your own system when there are others out there?”
Yes, good question. Well, two (plus one) reasons really.
One: About five years ago, when I got fed up with the Heath-Robinson combination of Excel sheets, Google sheets, HTML and a little Python that I was using, nothing fitted. If I recall correctly Papercall had only just started. While many commercial conference management systems had a synopsis module, most of these didn't do a public call for papers and they cost money. I thought about using CyberChair/EasyChair but these are quite off-putting and needed hosting.
Things have changed a bit now so I might not make the same decision today.
Two: I was really very keen to give submitters feedback on their submissions, and this was missing from most systems. This is an agile conference and agile is all about feedback, so we should give feedback, shouldn't we?
Hillside’s Pasture can do this but is quite a niche system and I’m not sure it could handle the load we get. Similarly the Agile Alliance have a system for their conferences which gives feedback but having submitted I wasn’t impressed.
So that was then. What about today?
Having our own system has allowed us to do things we wanted: like a public review with over 50 reviewers this year. Or changing the voting system.
Of course that comes at a cost: my time to change the system, my stress in keeping the system up (a lot lately). Oh, there are some charges for the Google Cloud but these are trivial – less than $1 a year, and most of them are down to one report I run repeatedly during the review process.
Those might be reasons for keeping Mimas but really my overhead should encourage me to kill it. Of course, I've grown to love my code, and while I admit it stinks in places (the look of the UI) I'm proud of it.
But the overwhelming reason right now for not moving to Papercall (or similar) is: Speakers.
There are more speakers and potential speakers than conferences. Possibly the money to be made from a conference submission system is not from the conference organisers but from the speakers who want to submit to conferences. A bit like advertising: the service could be provided free to conference organisers if submitters paid a subscription.
And I have met speakers who tell me they go to Papercall (or whatever) and submit to conference X, then look down the list of other conferences they fancy and make the same submission multiple times. That is valuable to potential speakers.
But, that ease of submission is a problem for organizers. Particularly organizers who want to give feedback.
Easy submission for speakers means more work for reviewers.
The interests of speakers and organizers are not aligned.
One way of solving this problem would be for conferences to share reviews. So if a speaker submits, say, to AOTB and Agile Cambridge and Agile Alliance then each conference can see what others thought of the submission. I could imagine that working.
I can also see obstacles. First of all: data privacy.
More troublesome I wonder if that would actually make it more difficult for new speakers to break in. The same old hands, with the same good reviews, would come to dominate the conference circuit and new ideas wouldn’t get in.
So there you go. AOTB has its own submissions system and I’ll tell you more about that next time.
More notes on Agile on the Beach – this is going to go on all week; sorry, but there is a lot to get on the record. Maybe only conference-geeks and those thinking of submitting to AOTB will find this useful, but I'd like to get it on the record and this is my blog 🙂
First off, the Agile on the Beach call for submissions is pretty much the same every year. Unfortunately it has grown over the years as we try to share more information with those submitting. I expect most conferences have the same problem.
Problem #1: the call for submissions is too big. Problem #2: people don't seem to read it. Actually, it is not just that people don't read it: some people who submit don't seem to know much about the conference.
The wrong track, Gromit
For example: lots of submissions are put against the wrong track. Many people seem to just submit to whatever track the system defaults to (and this year it looks like Agile Business was the default.)
While we can, and do, move sessions between tracks we don’t do this methodically. With nearly 300 submissions it would be too time consuming to review every session and decide which track it should be in.
Plus, some people deliberately want their submission in a particular track. Last year we had a talk on technical debt submitted to the business track. Before I moved it I happened to ask the submitter why he had put it there rather than in the software track. His reply: "We deliberately did this because we want to raise the issue with managers."
Some reviewers will mark sessions down because they are in the wrong track. That is a little unfair but understandable.
Where is Falmouth again?
One problem that seems to be growing is people not knowing where the conference is. To be clear: Agile on the Beach is in Falmouth, UK – look up Falmouth on Google Maps.
Falmouth is five hours by train from London. Seven hours from Manchester. Nine from Newcastle Upon Tyne.
You can fly into Newquay airport from several UK airports but you are still nearly an hour by taxi from Falmouth.
Both last year and this year we've accepted people who then, when they look at how long it takes to get to Falmouth, pull out.
What is your conference about?
But that is not the worst.
Every year we get a few submissions, mostly from the USA, which are totally inappropriate – something like "Calisthenics for a younger look." OK, I guess calisthenics helps make your body agile, but did they stop reading at the conference name?
Mostly there is some hint that the person who filled in the form is not the speaker. I suspect the semi-famous person has a PA who just submits to everything they can find.
We don’t pay
Similarly we get a clutch of submissions – again mostly from the USA – where in the synopsis the speaker says: "My fee is $1,500 plus travel expenses."
AOTB only pays speakers a travel allowance, and we say that in the call for submissions.
OK, we do actually pay keynotes. But we choose the keynotes, usually in advance of the call for submissions. Don’t call us, we’ll call you.
During the year we get a few people – again largely Americans – who e-mail us to say: "I can keynote your conference blah blah blah…. my fee is blah blah."
I can’t blame them, they are only trying to make a living. One day someone who we find interesting might even contact us!
(Ummm… maybe I should do it myself rather than waiting to be asked to keynote…)
I am …
A new one this year: round 1 was blind – no biography details. The hope was this would help new speakers and increase our diversity.
Some speakers chose to self-identify: they gave their names in the synopsis ("In this talk Adam Jones will be talking about…") or even a mini-bio in the synopsis ("Sally Smith will draw on her long experience working as an Agile Coach with companies such as blah blah blah").
True, we didn't tell people not to do this. Blind reviewing was an experiment so we didn't really have any rules. But some reviewers took a dislike to this – I can see comments which say "Would vote higher but speaker gave their name".
Please please please answer me when I call
Finally, AOTB has a two stage acceptance process.
If you get accepted you get a mail from me saying:
"Congratulations… here are the boring admin details, now confirm you can speak by clicking on this link…."
We know that a) some people can't get to Falmouth, b) schedules change between submitting in December and acceptance in February, c) people forget they submitted, d) something in the admin details isn't to their liking, e) some other reason.
Result: some people who submit can’t make it. That is why we ask for confirmation.
If you are accepted I need your reply. And the quicker I get it the better.
If I don’t get a reply after a few days I chase: repeat e-mails, Twitter, LinkedIn, SMS and if I need to I will pick up the phone and call you.
This year one speaker was impossible to contact, no phone number, no Twitter, no LinkedIn. My e-mail may have been going to spam. With a heavy heart I switched them from accept to decline.
Apart from this taking up my time it also delays the work of the rest of the committee, the website, the publicity, ticket sales and everything else. Until we have the programme in place a lot is on hold.
Your submission is important to us, please hold
Plus, we don't say "Sorry, your submission was not accepted" until we have our programme in place. When we lose a speaker we make an immediate substitution. It doesn't seem right to tell a lot of people (about 240) that they are not speaking at AOTB and then a few days later say to one or two "Oh sorry, can we have you after all?"
We only send the sorries once we have all our speakers confirmed. So, if someone doesn't reply and we can't get hold of them, we have no choice but to wait and try again. Which means everyone else has to wait, which is not fair either.
Overall our system works. It is not perfect but I don’t think any system is perfect.
This year, as usual, we had nearly 300 submissions to speak at Agile on the Beach. There are only 40 speaking slots – the exact number varies a little because we make minor changes to the schedule.
We have two review rounds. In round one reviewers score submissions from 1 to 5, with 5 being the best. From this we create a shortlist for each track. The rule of thumb is that the shortlist is twice the number of available slots (5 slots for Thursday tracks, 4 for Friday and 9 for two-day tracks, so shortlists of 10, 8 and 18).
The first change for 2020: round 1 was done blind. That is, anonymously – speaker biography details were not shown to reviewers.
Now on the one hand this is fairer: it gives more chance to new speakers, it stops the same old names dominating the conference and hopefully promotes diversity.
On the other hand: reviewers have often seen speakers previously and know who is good and who is not so good. There is also a commercial hit, because a programme with more known names is likely to be more attractive to ticket buyers. So 2020 was an experiment.
I think blind reviewing worked, although it does mean a few regulars did not make it, and I feel bad about that – they are my friends. I also think we were right to look at speaker bios in round 2.
While we set no rules about speaker self-identification (i.e. some speakers used the synopsis to give their name or even mini-bios) a few reviewers didn’t appreciate this and in their comments noted they scored such submissions lower.
Back to shortlisting… when shortlisting we look at the mean score in reviews and the median. Usually the first few, highest scoring, sessions are easy to pick. It is the borderline ones which are hard to decide.
In round two reviewers are asked to rank the shortlist. There can be only one number 1. The lowest score depends on how long the shortlist is, which varies by track. (So, oddly, round 2 reverses the scoring: 1 is top.)
In both rounds reviewers are encouraged – but not compelled – to write an explanation of their score. This is used by us as reviewers when deciding borderline submissions and provided to the submitter as feedback.
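To make the arithmetic concrete, here is a sketch of the two rounds as just described. The details are my assumptions for illustration, not Mimas's actual code:

```python
# A sketch of the selection arithmetic (illustrative, not Mimas's code).
# Round 1: mean of the 1-5 scores, median as tie-break, shortlist ~2x slots.
# Round 2: mean rank where 1 is best, so lower is better.
from statistics import mean, median

def shortlist(scores, slots):
    """scores: {submission_id: [round-1 scores, 1-5, 5 best]}."""
    ordered = sorted(scores.items(),
                     key=lambda kv: (mean(kv[1]), median(kv[1])),
                     reverse=True)          # higher scores are better
    return [sub for sub, _ in ordered[:2 * slots]]

def select(ranks, slots):
    """ranks: {submission_id: [round-2 ranks, 1 is top]}."""
    ordered = sorted(ranks.items(),
                     key=lambda kv: (mean(kv[1]), median(kv[1])))  # lower wins
    return [sub for sub, _ in ordered[:slots]]
```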
In the past this process has needed reviewers to review 70 or more submissions. A few reviewers look at everything, all 300. I stopped doing this myself about two years back. Our submission system (Mimas) tries to ensure that submissions get an equal number of reviews.
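A sketch of one way to do that balancing – an assumption about the approach, not Mimas's actual algorithm:

```python
# A sketch of balanced random review assignment (illustrative only).
# Each submission appears reviews_per_submission times in a shuffled pool,
# which is then dealt round-robin to the reviewers.
import random

def assign_reviews(submissions, reviewers, reviews_per_submission):
    pool = submissions * reviews_per_submission
    random.shuffle(pool)
    assignments = {reviewer: [] for reviewer in reviewers}
    for i, submission in enumerate(pool):
        reviewer = reviewers[i % len(reviewers)]
        # Naively skip duplicates; a real implementation would re-balance
        # so no review is silently dropped.
        if submission not in assignments[reviewer]:
            assignments[reviewer].append(submission)
    return assignments

# Example: 300 submissions, 54 reviewers, 5 reviews per submission
# gives roughly 28 reviews per reviewer.
```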
Because reviewers were reviewing so many sessions, feedback declined; I suspect attention to submissions declined too, but I don't know for sure.
The other big change for 2020: we recruited more reviewers. Actually, we invited everyone who had already bought a ticket, and everyone who attended in 2019, to review for round 1. 54 people volunteered and were allocated sessions to review. About a dozen people didn't do their reviews but we still had plenty of scores.
This created more work for me than expected. While Mimas was updated to cope with this, it wasn't completely smooth. What's more, some reviewers left it to the last minute to review, which created problems and meant I had to chase people.
Round 2 reviewing was more limited: only about a dozen reviewers were picked, including some AOTB committee members.
Over the years, and especially since I wrote Mimas, AOTB selection has become more score dependent. This has its good and bad sides but it does mean we get it done faster.
Once round 2 is done we look at the average (mean and median again) ranks and count off from the top to fill the track. Then we apply some judgement: people who submit in two tracks normally only speak in one. We would like a male/female balance but we don't have any hard and fast rules. Plus, we don't know if we are being fair on other diversity criteria – are dyslexics fairly represented? Ethnic minorities? And so on.
We also have a financial consideration. AOTB pays a travel allowance dependent on how the speaker travels. This year when we completed selection and looked at the costs we were in budget. That doesn’t always happen, some years we have to cut back on long-haul speakers. Some years we argue and get more money.
Finally, we send acceptances. We ask everyone who is accepted to confirm they will attend and speak. Hopefully everyone says yes, and says so quickly. However we often lose people for one reason or another. Hence there are quick substitutions – that is why we hold off sending declines. Once we have the lineup settled we send the declines.
Every year we decline some amazing sessions that we would really like to host. We only have so much space: every session we accept means another we can't. We have enough good material for two conferences but only have one.
Whether you have been declined or accepted for 2020, whether you are thinking of submitting in 2021 or just trying to find out how conferences operate I hope that helps.
Today I'm giving away my crown jewels. Once you have read this post I can't run my favourite exercise with you. There is a bit of experiential learning I can't give you. So if you'd rather have the experience, stop reading and go and book yourself on my May workshop.
I'm describing an exercise that models what happens in "the real world(tm)". Plenty of the people who have done the exercise recognise it as a real-life problem. The insights are many, and some a little disturbing.
Dozens of teams have done it and the answers are always the same. I live in dread that someone will guess and ruin the exercise, but it never happens. Now I'm telling the world, that might change.
On screen I put a story something like:
As a widget maker I want an online store to sell my widgets so that I can make money
I separate the room into teams. Each team represents a technology supplier – an agency, an outsourcer, whatever. I want each team to competitively bid on a piece of work. Each team gets to think about the problem and estimate the work. At the end I want each team to be ready to name their price, how long it will take and how many people they need. They may add any more details they like, e.g. staging, design, technology, etc. (and most do).
The teams on the right get a story which says:
As an international widget maker I want to sell direct to customers so that I can cut out distributors. I anticipate $10million turnover within 3 years and have budgeted $1.2m for this project.
15 minutes later the teams on the right read out their bids. They always want a million give or take. They want months, if not years. They want teams of half a dozen or more engineers, testers, UXD, analysts and project managers. They may propose an ongoing maintenance contract too.
Very disconcerting for these teams is that while they are answering and taking questions the other teams, those on the left, often burst out laughing – literally – when they hear these proposals.
What neither side knows is that they have different stories. The teams on the left get a story saying:
As an artisan widget maker I want to sell my widgets to customers so that I can give up my day job. When I make $100,000 a year in sales I can live my dream. My accountant tells me a WordPress website will cost $5,000.
These teams want a week or two, an engineer or two and perhaps $10,000 tops. In some cases they say "We can do it this afternoon, we'll set up Etsy." Even if they don't want to use Etsy or eBay they probably propose an OpenSource solution.
So what do you think?
True, it is a semi-artificial set-up but I would argue that these situations happen all the time in “the real world” work environment. However they are usually well disguised and hard to see. Maybe now you will spot them.
That aside, there are many potential lessons this exercise illustrates; almost every one is worth a discussion in its own right. To keep things brief I'll just highlight some of them:
Anchoring (cognitive bias): individuals are anchored to those numbers; bigger numbers lead them to frame their answers as bigger numbers.
Assumptions: people jump to assumptions, automatically filling in the blanks when they lack information, and the information they fill in flows from the numbers mentioned. Few questions get asked.
Different solutions: the key lesson for me. Confronted with similar problems, people (especially engineers) are capable of formulating very different solutions. Those solutions have different time and cost implications.
Problem bounding: presenting the same problem with different bounding constraints results in massively different solutions.
Effort estimates, and therefore cost estimates, flow from value: whether through anchoring, assumptions or solution design, the estimates (time and money) flow from the value available, NOT the other way around.
Prior experience often goes out the window. In one run a low-end team told me: "We did this last week. A digital consultant showed us how to install WordPress and Magento for online retail in the morning and in the afternoon we did it ourselves." While this team came up with a low cost proposal, their colleagues who were given the $1m story forgot everything they learned last week.
People don't ask questions: I answer questions while teams are creating their answers but people rarely challenge what is asked for. Maybe it's because I'm usually in some position of authority, as a consultant or workshop trainer, and my word should be followed.
Occasionally a team with the million dollar story says "We could do this with eBay/WordPress/Shopify." I keep a poker face and let them be. Inevitably, left alone for long enough, they talk themselves into a much more complex and expensive bid.
In fact, the longer I give teams the higher the estimates go. I heard a team in Australia say three times "Those estimates look low, let's double them." And they did. (Again, planning has diminishing returns.)
So far nobody has offered two solutions: you could offer up a Shopify solution and a custom build solution but nobody has.
While we are going through the exercise the minimum viable product idea often gets mentioned – usually by the teams on the right. So recently I introduced a third story. This reads the same as the international widget maker story but has an extra paragraph underneath:
MacAllan consulting has advised the company to take an iterative and agile approach to this work using a minimally viable product model.
How do you think teams respond?
Think for a minute… your answer is?
It makes no difference.
Or rather, so far I’ve not had any of the million dollar teams propose anything close to the $5,000 solution. In one case a team with the MVP story actually came in more expensive – and longer – than the million dollar team without the MVP clause.
My learning here: talking MVP makes no difference. If you want an MVP you have to impose constraints. (Hence try an MVT.)
People continue to fill in the blanks after the charade is exposed. I've heard software architects argue forcefully that these are different problems because of the money involved and therefore require different architecture. They clearly feel cheated and want to justify the proposal they have made. I suspect they feel I've made them look silly and want to undo that impression – I'm sorry if I've made anyone feel silly.
I wonder how often that happens in the work place? How many of us would back down in real life? I’d like to think I would but I would probably first try and justify my position.
The architects have a point, in many ways the stories are functionally the same but the differences lie in the non-functional requirements: load, throughput, security and so on. But all of that is inferred by people from the price tag without question.
It makes me sad that teams ask so few questions. People easily see themselves as a tailor rather than as a consultant (see my Tailor or Image Consultant post).
Then there are the questions about the bidding process and companies bidding on the work.
Imagine you are the buyer: one supplier bids really low, the others were much higher but in line with your expectations. Would you trust the low bid? Have they blown their credibility?
And as a bidder: if you know the client plans to spend $1,000,000 why bid lower? The engineers' cost estimates and designs aren't relevant. Ideally you bid just below the competition. You are the lowest price, with all the credibility and maximum revenue.
For that matter, should you be bidding on this at all?
If you work for a small e-commerce provider in rural Cornwall you may never know about, let alone bid on, a million dollar piece of work from an American multi-national. And if you did, would anyone take you seriously?
Suppose you got your big break: you walk in and offer a quick, low cost solution based on something like Shopify. Would they take you seriously? Would they want to listen?
Do corporations increase their own costs simply by being?
Conversely, if you work for a big consultancy and bid on million dollar deals every week will you be interested in bidding on a $5,000 piece of work? Of course not!
But that also means that if a corporation approaches you for a million dollar online shop, even if you could deliver it in a week running on a third-party platform, do you have any incentive to?
I don’t have answers to these questions. Indeed, there aren’t any right answers. All answers are valid, just some answers are “better” in some places than others.
Ultimately the lesson I take away from this is: we craft solutions within constraints.
More specifically: Engineers engineer within constraints, that is what engineers do.
That reinforces my belief that one needs to really understand benefit (value) and how that changes with time. From there we can work back to a solution.