Wednesday 30 May 2018

True Confession: I don't enjoy estimating

But I do it anyway...

I haven't blogged in a while. I read a book, did the crossword. It was good.

I'm going to add a new story into my NoEstimates Blogging Backlog. This is it – you're reading it now. It's a backlog, I own it, and I'm allowed to do that :)

Estimating is no fun

I defy anyone in this game - Dev or Scrum Coach or BA or Programme Manager or even business client - to admit that they enjoy estimating. It's always based on incomplete information (that's why it's an estimate not a report), and we always feel like we're sticking our finger in the air and our butt on the line, at least a little bit.

If there's anyone out there who truly enjoys estimation, please teach me your special sauce!

Why I estimate anyway

I use two kinds of estimate: for the customer and for the team.

The case for the customer should be obvious to most readers. Can I afford it? Will I get it in time?

I know the diehard NE set deny the value of this kind of estimate. They maintain that the customer is paying for software not for estimates, so let's give them valuable software and not waste time on estimating it. I also know that almost every business I've ever worked in has wanted answers to these questions before committing to a SW project. And since it's their money I respect that.

For the team, estimates are a mechanism for sustainable pace.

I facilitate estimates because I tend to coach Scrum. Estimates - generally story-point estimates - are key to Scrum's mechanism for managing sustainable pace.

I'm aware that other approaches to sustainable pace are available, most obviously Kanban (a lot of observers note a close relationship between NE and Kanban). So I could ditch team-oriented estimates by shifting to Kanban, but my experience is that teams are much more comfortable in Scrum. The Sprint cycle, which is (necessarily?) estimates-based, gives them an opportunity to come together, take stock and agree where they stand.

But not on this project!

I'm in an interesting position on my current piece of work: starting an Agile project that interfaces to a much larger Waterfall programme. We should be able to deliver results faster than the programme, which is an external dependency.

If we can do that, then the limiting factor will be the programme not the project. And with a bit of luck, in this specific luxurious circumstance, the customer will quickly consider that estimates from my team are irrelevant.

Can I eliminate estimates for the team too?

No.

"The Sprint cycle, which is (necessarily?) estimates-based..."

Necessarily?

Some years ago, Vasco Duarte tweeted a burnup chart comparing a trace of estimated stories, and one of equal-sized stories. He saw a negligible difference between the two. I found it intriguing.

There are important questions to ask about this:

  • Did the estimation process contribute to the stories being sufficiently similarly-sized for this to work?
  • Did the estimation process generate important conversations that would not have happened without attempting to estimate?
  • Were the statistical methods used robust?

I think the first important question for me is can I reproduce these results? Seems like a good time to try :)
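One way to try reproducing it is a toy simulation. The sketch below (all numbers invented, purely illustrative) builds a backlog of stories with hidden "true" efforts, gives each a noisy point estimate, runs sprints of fixed capacity, then forecasts total duration two ways: from point velocity and from plain story count.

```python
import random

random.seed(42)

# Hypothetical backlog: each story has a hidden "true" effort in dev-days.
efforts = [random.choice([1, 2, 3, 5]) for _ in range(100)]

# Noisy point estimates, loosely correlated with true effort.
points = [max(1, e + random.choice([-1, 0, 0, 1])) for e in efforts]

capacity = 20  # dev-days of capacity per sprint

# Simulate sprints: pull stories in order until capacity runs out.
done_points, done_counts = [], []
i = 0
while i < len(efforts):
    budget, p, c = capacity, 0, 0
    while i < len(efforts) and efforts[i] <= budget:
        budget -= efforts[i]
        p += points[i]
        c += 1
        i += 1
    done_points.append(p)
    done_counts.append(c)

sprints = len(done_points)

# Forecast total sprints from the first three sprints of data, two ways.
obs = 3
velocity_points = sum(done_points[:obs]) / obs
velocity_count = sum(done_counts[:obs]) / obs
forecast_by_points = sum(points) / velocity_points
forecast_by_count = len(efforts) / velocity_count

print(f"Actual sprints taken:      {sprints}")
print(f"Forecast from points:      {forecast_by_points:.1f}")
print(f"Forecast from story count: {forecast_by_count:.1f}")
```

On runs like this one, the two forecasts tend to land close together - which is Duarte's point - but note the caveat baked into the model: the stories were already fairly similarly sized, which is exactly the first question above.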

Then there's that guy who bucks the trend

There's always one! In this case, that guy who sells software projects without providing customer estimates. Seems he acquires new work based on recommendations from existing clients, and his new clients respect and accept this.

That's one hell of a hard-earned reputation! And/or the guy could sell snow to Alaska. Either way, he has my respect and not a little envy!

Proponent? Opponent? Who the hell is this guy?

I think the NE set consider me an oppositional pro-estimates traditionalist, while the traditionalists consider me a hopelessly naive, borderline NE advocate. They are two highly oppositional camps. I'm pleased to sit in neither, though it means brickbats from both.

I hope this post demonstrates a more nuanced position. Estimates do not make me feel all warm and fuzzy. They're not why I got into this game, as either a Dev or a Scrum Coach. If I see a circumstance in which they're not necessary, I'll happily jettison them. And for those particularly talented practitioners who are able to run their businesses without estimates - all strength to them!

At the same time, my clear experience and that of many others is that many businesses, most of the time, require estimates of one sort or another to invest in SW projects. I respect that and I'm not about to throw mud in their face for their financial diligence*.

(Yes, I know that financial diligence and governance can be misguided, even pathologically so. There may be a future post on this.)

An experienced practitioner has a varied toolbox. I came to NE looking for a new tool for mine. It's been tricky to find, obscured by a rancorous debate with little middle ground. Finally I've found one. The kind of middle position that doesn't generate Twitter followers and speaking engagements.

Sometimes some estimates are important, and sometimes they're not.

Friday 20 April 2018

On varieties of thought in #NoEstimates

Post 1 from my #NoEstimates blogging backlog

What differences of thought do we see in the #NoEstimates community? How deep do those differences go?

Reading the material in my bibliography and available on Twitter, I see slightly different positions taken by various #NoEstimates proponents. I see two apparent differences: the strength of their #NoEstimates position, and the actual objections to estimation.

Strength of #NoEstimates position

  • We should always eliminate estimates.
  • Can we find something better than estimates? (But if they work for you that's fine.)

I could characterise the first as Hard #NoEstimates, as it's a prescription for all practitioners, and the second as Soft #NoEstimates. I don't mean to impugn anyone as either Hard or Soft - if you dislike these terms I'll be grateful if you can help me find others :)

While there's an overt difference, I think it's a difference in personality and style rather than intent. For a Soft #NE advocate "just asking questions", if those questions are consistently about the value/validity of estimates, under the #NoEstimates banner, backed by claims of multiple years since their last estimate, I think they're pushing a position just as hard as their Hard #NE confederates.

Objections to estimation

  • Estimates are ineffective (therefore a waste at best, and misleading at worst).
  • Estimates are a sign of (and possibly a cause of) organisational dysfunction.
  • Estimation damages trust and/or team dynamics.

Ineffectiveness seems to be the core of the Soft case. The Hard case leans on dysfunction as well, hence its strong prescription to avoid estimates.

Hard proponents are also starting to make claims about team dynamics, e.g. the suggestion that requests for estimates kill trust.

Despite these different objections, I don't recall seeing a #NE advocate disagree with another. That's in sharp contrast to the Agile community as a whole (e.g. "you're doing Scrum wrong", or TDD/BDD/both/neither), or even the broader software development community (Agile vs Waterfall). In a community of practitioners exploring new ways of working, especially one whose members make different arguments in public, I would expect to see disagreement, i.e. critical appraisal of one another's thought.

The basis for these statements is not always obvious.

Conclusions

Whether you're a #NoEstimates proponent or critic, I think it's important to understand that the hashtag encompasses more than a singular opinion.

That said, regardless of specific arguments (objections to estimates) or style (strength of argument) it's also not clear to me that these really are different positions at all.

I'd be interested to hear of disagreements in the #NoEstimates community, which would indicate critical appraisal of one anothers' thought, rather than the apparent bloc approach I've seen up to now.

Sunday 15 April 2018

A #NoEstimates blogging backlog

A couple of weeks ago I got into a tweetstorm around #NoEstimates. That produced a pile of reading which, with other obligations looming, I had no time to get through. One holiday in Italy later (it was lovely, thanks for asking!) I'm all caught up :)

I originally wanted to write some kind of comprehensive analysis. But it would have been very TL;DR and I might never have finished it anyway. So in the spirit of story-slicing, here's my #NoEstimates blogging backlog:

  • On varieties of thought in #NoEstimates
  • Here be ducks - the canards of #NoEstimates
  • Some challenges for #NoEstimates
  • What is #NoEstimates really trying to solve?
  • #NoEstimates strengths and weaknesses

Being a backlog, it's full of little pieces of value, and it's likely to change before I get to the bottom.

Where do I stand on all this?

I came to #NoEstimates a couple of years back, hoping for something interesting and provocative to learn. I've agreed and disagreed on various points with both its proponents and detractors.

Over the last couple days' reading, my own thinking has evolved. There's definitely value there, including ideas to help shape my new project. That said, I haven't bought the idea that we should avoid estimates wholesale.

Bibliography

Here's that reading list. Please point me towards anything else I should be looking at.

Woody Zuill

Woody is a major #NoEstimates proponent. These are the blog entries currently on his Beyond Estimates index:

Ryan Ripley

Ryan is another proponent, who I'd not come across before.

Peter Kretzman

Peter is a critic of #NoEstimates. This is his commentary on Ryan's talk above and on the debate as a whole.

Update. Peter has pointed me to some more posts of his:

Dan North

Dan isn't particularly an advocate or an opponent, though clearly he uses estimation in his practice. He's been recognised as an Agile thought leader for as long as I've simply been trying to be a Scrum Master.

Glen Alleman

Glen has long been an outspoken critic of #NoEstimates.

Update. Glen has pointed me to an aggregation of his posts on #NoEstimates. I've certainly read some of these before, but there's a lot there and I'm afraid I've not made a comprehensive review this time around.

Glen's also clarified that there's no Part 2 to the book review. However he does have some further commentary that he'll be making available.

Wednesday 17 January 2018

The UI that broke Hawaii

Does anyone need reminding that design is more than pretty colours? Apparently they do. Here’s the web-app screen that sent an SMS to some 1 to 1.5 million Hawaiians warning that a ballistic missile was headed their way.

Emergency SMS control screen

At least bad data design didn’t kill anyone this time *. I hope.

* This is awfully reminiscent of the PowerPoint slide at NASA that should have, but didn’t, warn of the likelihood of the Space Shuttle Columbia disaster.

What’s wrong with that screen?

Let’s count the problems.

  1. It’s heavy with acronyms and jargon that make it hard to understand the links
  2. The items aren’t in any meaningful order
  3. Safety-critical items (Tsunami Warning) are mixed with convenience items (road closure notification) and tests
  4. Heavy use of capitals means the emphasis on DRILL does not stand out
  5. Inconsistent language – there are three test options, all indicated with different phrases:
    • “DRILL” (at the start)
    • “DEMO TEST” (at the end and)
    • “1. TEST Message” (the whole line)

This adds up to a screen with heavy cognitive load to perform a basically simple but safety-critical task. It is inviting an error, and it is a serious failure of the team that commissioned, accepted and manages the software, and the team that built it.

I hope lessons are learnt in the right place, and it’s not the operator who suffers.

How would I change it?

Since I’m carping, I should be clear what I would do differently here. I want to remedy a couple of those faults listed above:

  1. Ditch the acronyms and the jargon. “High Surf Warning North Shores” is perfect. PACOM should say “Incoming Missile Warning”.
  2. Order the items, in a way that makes sense to the operators. Alphabetical would be a good start.
  3. Make a crystal-clear design distinction between high-criticality links, low-criticality links and test actions.

Why haven’t I touched the issues of CAPITALS or of inconsistent language? I want to get the design fix right first (point 3):

  1. Place options for Test, Info and Emergency on different screens, or clearly marked sections on the same screen
  2. Make Test the easiest option to pick (least deliberate) and Emergency the hardest (most deliberate)

Get this right – create utter clarity between Incoming Missile Warning and Incoming Missile Warning Drill – and those other points shouldn’t matter nearly so much.
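To make point 3 concrete, here’s a minimal sketch of how the alert catalogue might be modelled so that Test, Info and Emergency can never visually mix, and the most dangerous actions demand the most deliberate confirmation. All names and the `confirmations` mechanism are my own assumptions, not the real system’s design.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Alert:
    label: str          # plain-language label, no acronyms
    category: str       # "TEST", "INFO" or "EMERGENCY"
    confirmations: int  # deliberate steps required before sending


# A few illustrative entries, ordered by category then alphabetically.
ALERTS = sorted(
    [
        Alert("Incoming Missile Warning", "EMERGENCY", confirmations=2),
        Alert("Incoming Missile Warning Drill", "TEST", confirmations=0),
        Alert("High Surf Warning North Shores", "INFO", confirmations=1),
        Alert("Road Closure Notification", "INFO", confirmations=1),
    ],
    key=lambda a: (a.category, a.label),
)


def screen_sections(alerts):
    """Group alerts by category so each renders in its own screen section."""
    sections = {}
    for a in alerts:
        sections.setdefault(a.category, []).append(a.label)
    return sections
```

The point of the structure is that the rendering code can only ever draw an alert inside its category’s section, and the send path can read `confirmations` to decide how hard the action should be to trigger.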

Excuses, excuses. This means YOU!

So you don’t work on safety-critical systems? Me neither. This still applies to both of us.

At one time in my career I’d say “But a user wouldn’t do that.” Or “A user shouldn’t do that.” Why would they? It’s stupid. It doesn’t make sense. Obviously it will break the system.

So here’s the heads-up. Sooner or later your users will do that. Why? Because they’re in a hurry. Because they’re overworked. Because their partner yelled at them this morning. Or just because they’re trying to do their job, the best they can, with a limited view of a complex system.

We, the Dev team, are the ones with the full context. We’re the ones tasked with thinking through the workflows – the exceptions as well as the happy path. We’re the ones who need to make the right thing easy and the wrong thing damn near impossible.

And it’s everyone’s responsibility – Devs, Testers, Product Owners and Scrum Masters – whether or not we have a Designer on the team.

A case study

My last product was a lead generation tool for fund managers, including the custom CMS, managing a complex relational content model. We provided content editors with a delete button on content items. What about content items with dependencies?

3 options:

  1. Leave it – the content team is responsible for content integrity
  2. Remove the delete button if there are content dependencies
  3. Make the delete button do...something else

1. is the attitude I used to take. A content editor would be daft to delete an Investor with a Mandate hanging off it. But you know it’s going to happen, the very first time they’re in a hurry to clean out an old record.

This is the attitude behind the Hawaii screen.

2. is more helpful. But it leaves users wondering why that delete button is missing. This way, bug reports lie!

We went for 3. The delete button is still there, but instead of deleting the item it opens a dialog with an explanation and a list of links to the dependencies that need to be fixed. It makes the wrong thing impossible, and the right thing as easy as possible.
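Option 3 can be sketched in a few lines. This is a toy reconstruction with hypothetical names (the real CMS wiring differed): the delete handler checks for dependencies first, and either deletes or returns a dialog listing the links the editor needs to fix.

```python
def attempt_delete(item, find_dependencies):
    """Keep the delete button, but make the wrong thing impossible.

    Returns a delete action, or a dialog describing what blocks it.
    """
    deps = find_dependencies(item)
    if deps:
        return {
            "action": "show_dialog",
            "message": f"Cannot delete '{item}' while other items depend on it.",
            "links": deps,  # links the editor can follow to fix each one
        }
    return {"action": "delete", "item": item}


# A toy dependency graph: two Mandates hang off an Investor.
DEPENDENCIES = {
    "Investor A": ["Mandate 1", "Mandate 2"],
    "Old Record": [],
}

blocked = attempt_delete("Investor A", DEPENDENCIES.get)
allowed = attempt_delete("Old Record", DEPENDENCIES.get)
```

The key design choice is that the dependency check lives behind the button itself, so the UI never has to hide the button or explain its absence.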

Coda. A fix for Hawaii

In the wake of the incident, the relevant agency has issued a software update:

Emergency SMS control screen, showing False Alarm option

There it is at the top of the list, a BMD False Alarm option! Granted we’ve seen that this is necessary, but it only adds to the shortcomings listed above:

  1. More acronyms
  2. Still not in a meaningful order
  3. A whole new SMS category mixed up with the ones already there
  4. More capital letters

And a whole new problem. There’s no way to tell from this screen which SMS warnings the False Alarm applies to. Just the missile alert? Whatever was the last message sent? What does this link do if the last message was a Test? Or was sent three months ago?

Without fixing the underlying design failures, they’ve actually made this screen worse not better.

In anticipation of the next inevitable accident,
Guy

Thursday 4 January 2018

So your Product Owner doesn't like paying off Tech Debt?

No Product Owner likes paying off tech debt. It looks suspiciously like the Devs messing around with perfection when the product is already working. The team could be building me new features dammit!

Tech debt is a pretty abstract concept to people without a coding background. We want to communicate it in a way that explains the value to the PO, in terms that are meaningful to them. Here are two approaches – one that I've used before and that worked, and another that I mean to try next time.

Tried and tested – the car service

If you drive a car, you get it serviced every year. It's painful because (a) it's expensive and (b) your car's still running. Yes you could drive it to Birmingham next week without getting it serviced. And the week after. And the week after that. But it will keep getting a bit slower and a bit more expensive to run, until one day it stops. And it won't stop gently on a day that doesn't matter – it will stop hard on the motorway when you have to get to Birmingham in a hurry. Because that's when you're stressing it hardest.

Your codebase is just the same. Sure you can put off paying off tech debt, because it's still running. But dev work that should be easy will get slower and more expensive, until one day you can't go any further.

If your PO wants to keep driving, they've got to service the car. Otherwise expect it to come to a screeching halt just when it matters the most.

Next time – revenue protection

Product Management types understand two broad categories of project:

  • Revenue generation
  • Revenue protection

They prefer revenue generation projects. Everyone does – they're sexy and pay all our bills. But they understand the need for revenue protection as well.

Paying off tech debt is revenue protection for the workstream. Or maybe velocity protection. Without it, once again work will slow down until it can't go any further.

Can we avoid this in the first place?

Of course it's better if you can avoid having to commit time to paying off tech debt. In a steady-state business-as-usual workstream with frequent releases, ideally the team refactors the code as they go to avoid getting into this situation at all.

However sometimes you have to accrue tech debt – eg there's a cost-of-delay driving an MVP release. Or you'll discover it some time later. When that happens, you'll want to convince your PO to give it appropriate priority.

Guy