Sunday, 16 January 2022

LAND and freedom

Back in 2019 a user called @agileklzkittens made a hella assertive tweet:

LAND: Large, Atomic Non-Deployable requirements.
In early 2022 they tweeted again, changing Non-Deployable to Non-Divisible. I think this probably captures their point better.

What are they on about? In the linked Atlassian Community discussion a user asks how to manage, in Jira, a User Story that's too large for his team's two-week Sprint. There are two responses, both advising him to break it into smaller Stories. In fact the user has already broken it down into sub-tasks.

So breaking down AKK's tweet:

  1. There are Large, Atomic Non-Deployable requirements
  2. These break Agile
  3. The respondents are somehow dishonest for giving their best advice

So can Agile handle LAND requirements?

Yes Agile emphatically can handle LAND requirements.
They may challenge Scrum, and our Agile-sceptical chum was naughty to conflate the two.

Let's assume that LAND requirements really do exist. Though in my experience they must be exceedingly rare.

For sure they might break Scrum. You can't fit a pint in a half-pint pot and you can't fit three weeks of development into two.

They won't break Agile. Remember the first Value in the Agile Manifesto:

"Individuals and interactions over processes and tools"

And the 4th:

"Responding to change over following a plan"

Both of these comfortably tell an Agile team "Can't be done in 2 weeks? Fine. Flex your approach."

What's the difference?

Scrum is a tool built around a simple model. All models have limits – Newton's laws break down when you get too heavy or too fast or too small. That doesn't make them wrong – they'll still get a body to the moon and back. Models have a limited range of applicability, and a LAND requirement larger than your Sprint is simply not in the range of applicability for Scrum. I think that's entirely acceptable for a rare case.

Agile by contrast is a set of value statements. They may also have a limited range of applicability. If so, it's broader than that of a specific set of instructions (Scrum) that sits under the Agile umbrella.

What advice would I give?

If you find a LAND requirement:

1. Try to break it up anyway

It's easy to assume a piece of work is LAND. And we all need help sometimes with story slicing, particularly if we're new to it. So grab someone with a bit more experience – or simply different experience – and try to break it into smaller user stories that meet the INVEST criteria. One way or another you're going to reinforce the habit of assuming it can be broken up or assuming that it can't. Assuming that it can is healthier.

2. Don't worry

If it truly is Large, Atomic and Non-Divisible, Agile has your back! Flex your method if necessary. You're running Scrum? Let the work cross a Sprint boundary, declare a double-Sprint or whatever. Agile is about what works best for your team and their needs, not about following a specific method's rules.

Happy trails!

child on bike with stabilisers, heading down a rural road

Wednesday, 30 May 2018

True Confession: I don't enjoy estimating

But I do it anyway...

I haven't blogged in a while. I read a book, did the crossword. It was good.

I'm going to add a new story into my NoEstimates Blogging Backlog. This is it – you're reading it now. It's a backlog, I own it, and I'm allowed to do that :)

Estimating is no fun

I defy anyone in this game - Dev or Scrum Coach or BA or Programme Manager or even business client - to tell me that they enjoy estimating. It's always based on incomplete information (that's why it's an estimate not a report), and we always feel like we're sticking our finger in the air and our butt on the line, at least a little bit.

If there's anyone out there who truly enjoys estimation, please teach me your special sauce!

Why I estimate anyway

I use two kinds of estimate: for the customer and for the team.

For the customer, the purpose should be obvious to most readers: Can I afford it? Will I get it in time?

I know the diehard NE set deny the value of this kind of estimate. They maintain that the customer is paying for software not for estimates, so let's give them valuable software and not waste time on estimating it. I also know that almost every business I've ever worked in has wanted answers to these questions before committing to a SW project. And since it's their money I respect that.

For the team, estimates are a mechanism for sustainable pace.

I facilitate estimates because I tend to coach Scrum. Estimates - generally story-point estimates - are key to Scrum's mechanism for managing sustainable pace.

I'm aware that other approaches to sustainable pace are available, most obviously Kanban (a lot of observers note a close relationship between NE and Kanban). So I could ditch team-oriented estimates by shifting to Kanban, but my experience is that teams are much more comfortable in Scrum. The Sprint cycle, which is (necessarily?) estimates-based, gives them an opportunity to come together, take stock and agree where they stand.
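
To make that concrete, here's a minimal sketch of the mechanism I mean: cap the next Sprint's commitment at a rolling-average velocity. The history, the three-Sprint window and the story names are all invented for illustration, not a prescription.

    def rolling_velocity(completed_points, window=3):
        """Average story points completed over the last few Sprints."""
        recent = completed_points[-window:]
        return sum(recent) / len(recent)

    def plan_sprint(backlog, capacity):
        """Take stories from the top of the backlog until capacity is reached."""
        commitment, total = [], 0
        for story, points in backlog:
            if total + points > capacity:
                break
            commitment.append(story)
            total += points
        return commitment, total

    history = [21, 26, 23, 25]   # points completed in recent Sprints (made up)
    backlog = [("checkout flow", 8), ("audit log", 5), ("search filters", 8),
               ("export to CSV", 5), ("single sign-on", 13)]

    capacity = rolling_velocity(history)
    commitment, total = plan_sprint(backlog, capacity)
    print(f"Capacity ~{capacity:.0f} points; committing {total}: {commitment}")

The arithmetic is trivial; the value is that the estimates give the team a defensible ceiling on what they take on each Sprint.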

But not on this project!

I'm in an interesting position on my current piece of work: starting an Agile project that interfaces to a much larger Waterfall programme. We should be able to deliver results faster than the programme, which is an external dependency.

If we can do that, then the limiting factor will be the programme not the project. And with a bit of luck, in this specific luxurious circumstance, the customer will quickly consider that estimates from my team are irrelevant.

Can I eliminate estimates for the team too?

No.

"The Sprint cycle, which is (necessarily?) estimates-based..."

Necessarily?

Some years ago, Vasco Duarte tweeted a burnup chart comparing a trace based on story-point estimates with one that treated every story as equal-sized. He saw a negligible difference between the two. I found it intriguing.

There are important questions to ask about this:

  • Did the estimation process contribute to the stories being sufficiently similarly-sized for this to work?
  • Did the estimation process generate important conversations that would not have happened without attempting to estimate?
  • Were the statistical methods used robust?

I think the first important question for me is can I reproduce these results? Seems like a good time to try :)
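
If I do try, a toy simulation is probably where I'd start. Here's a rough sketch of the shape of that experiment – it is not Duarte's data or method, and the effort distribution and estimation noise are pure assumptions on my part.

    import random

    random.seed(1)

    def simulate_stories(n):
        """Each story gets a true effort in days and a noisy point estimate."""
        stories = []
        for _ in range(n):
            true_days = random.lognormvariate(0.5, 0.6)   # skewed, like real work
            points = max(1, round(true_days * random.uniform(0.7, 1.3)))
            stories.append((true_days, points))
        return stories

    def forecast(upcoming, history):
        """Forecast total duration by summed points and by plain story count."""
        days_per_point = sum(d for d, _ in history) / sum(p for _, p in history)
        days_per_story = sum(d for d, _ in history) / len(history)
        by_points = sum(p for _, p in upcoming) * days_per_point
        by_count = len(upcoming) * days_per_story
        actual = sum(d for d, _ in upcoming)
        return by_points, by_count, actual

    history = simulate_stories(50)     # work already done, used for calibration
    upcoming = simulate_stories(100)   # the backlog we want to forecast
    by_points, by_count, actual = forecast(upcoming, history)
    print(f"actual {actual:.0f}d, by points {by_points:.0f}d, by count {by_count:.0f}d")

If the count-based forecast keeps landing about as close to the actual as the points-based one, that's a point in Duarte's favour.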

Then there's that guy who bucks the trend

There's always one! In this case, that guy who sells software projects without providing customer estimates. Seems he acquires new work based on recommendations from existing clients, and his new clients respect and accept this.

That's one hell of a hard-earned reputation! And/or the guy could sell snow to Alaska. Either way, he has my respect and not a little envy!

Proponent? Opponent? Who the hell is this guy?

I think the NE set consider me an oppositional pro-estimates traditionalist, while the traditionalists consider me a hopelessly naive borderline NE advocate. They are two highly polarised camps. I'm pleased to sit in neither, though it means brickbats from both.

I hope this post demonstrates a more nuanced position. Estimates do not make me feel all warm and fuzzy. They're not why I got into this game, as either a Dev or a Scrum Coach. If I see a circumstance in which they're not necessary, I'll happily jettison them. And for those particularly talented practitioners who are able to run their businesses without estimates - all strength to them!

At the same time, my clear experience and that of many others is that many businesses, most of the time, require estimates of one sort or another to invest in SW projects. I respect that and I'm not about to throw mud in their face for their financial diligence*.

(Yes, I know that financial diligence and governance can be misguided, even pathologically so. There may be a future post on this.)

An experienced practitioner has a varied toolbox. I came to NE looking for a new tool for mine. It's been tricky to find, obscured by a rancorous debate with little middle ground. Finally I've found one. The kind of middle position that doesn't generate Twitter followers and speaking engagements.

Sometimes some estimates are important, and sometimes they're not.

Friday, 20 April 2018

On varieties of thought in #NoEstimates

Post 1 from my #NoEstimates blogging backlog

What differences of thought do we see in the #NoEstimates community? How deep do those differences go?

Reading the material in my bibliography and available on Twitter, I see slightly different positions taken by various #NoEstimates proponents. I see two apparent differences: the strength of their #NoEstimates position, and the actual objections to estimation.

Strength of #NoEstimates position

  • We should always eliminate estimates.
  • Can we find something better than estimates? (But if they work for you that's fine.)

I could characterise the first as Hard #NoEstimates, as it's a prescription for all practitioners, and the second as Soft #NoEstimates. I don't mean to impugn anyone as either Hard or Soft - if you dislike these terms I'll be grateful if you can help me find others :)

While there's an overt difference, I think it's a difference in personality and style rather than intent. When a Soft #NE advocate is "just asking questions", but those questions are consistently about the value/validity of estimates, under the #NoEstimates banner, backed by claims of multiple years since their last estimate, I think they're pushing a position just as hard as their Hard #NE confederates.

Objections to estimation

  • Estimates are ineffective (therefore a waste at best, and misleading at worst).
  • Estimates are a sign of (and possibly a cause of) organisational dysfunction.
  • Estimation damages trust and/or team dynamics.

Ineffectiveness seems to be the core of the Soft case. The Hard case leans on dysfunction as well, hence its strong prescription to avoid estimates.

Hard proponents are also starting to make claims about team dynamics, eg the suggestion that requests for estimates kill trust.

Despite these different objections, I don't recall seeing a #NE advocate disagree with another. That's in sharp contrast to the Agile community as a whole (eg You're doing Scrum wrong, or TDD/BDD/both/neither), or even the broader Software Development community (Agile/Waterfall). In a community of practitioners exploring new ways of working, especially one whose members make different arguments in public, I would expect to see disagreement, ie critical appraisal of one another's thought.

The basis for these statements is not always obvious.

Conclusions

Whether you're a #NoEstimates proponent or critic, I think it's important to understand that the hashtag encompasses more than a singular opinion.

That said, regardless of specific arguments (objections to estimates) or style (strength of argument) it's also not clear to me that these really are different positions at all.

I'd be interested to hear of disagreements in the #NoEstimates community, which would indicate critical appraisal of one another's thought, rather than the apparent bloc approach I've seen up to now.

Sunday, 15 April 2018

A #NoEstimates blogging backlog

A couple of weeks ago I got into a tweetstorm around #NoEstimates. That provided a pile of reading which, with other obligations looming, I had no time to do. One holiday in Italy later (it was lovely, thanks for asking!) I'm all caught up :)

I originally wanted to write some kind of comprehensive analysis. But it would have been very TL;DR and I might never have finished it anyway. So in the spirit of story-slicing, here's my #NoEstimates blogging backlog:

  • On varieties of thought in #NoEstimates
  • Here be ducks - the canards of #NoEstimates
  • Some challenges for #NoEstimates
  • What is #NoEstimates really trying to solve?
  • #NoEstimates strengths and weaknesses

Being a backlog, it's full of little pieces of value, and it's likely to change before I get to the bottom.

Where do I stand on all this?

I came to #NoEstimates a couple of years back, hoping for something interesting and provocative to learn. I've agreed and disagreed on various points with both its proponents and detractors.

Over the last couple days' reading, my own thinking has evolved. There's definitely value there, including ideas to help shape my new project. That said, I haven't bought the idea that we should avoid estimates wholesale.

Bibliography

Here's that reading list. Please point me towards anything else I should be looking at.

Woody Zuill

Woody is a major #NoEstimates proponent. These are the blog entries currently on his Beyond Estimates index:

Ryan Ripley

Ryan is another proponent, who I'd not come across before.

Peter Kretzman

Peter is a critic of #NoEstimates. This is his commentary on Ryan's talk above and on the debate as a whole.

Update. Peter has pointed me to some more posts of his:

Dan North

Dan isn't particularly an advocate or an opponent, though clearly he uses estimation in his practice. He's been recognised as an Agile thought leader for as long as I've simply been trying to be a Scrum Master.

Glen Alleman

Glen has long been an outspoken critic of #NoEstimates.

Update. Glen has pointed me to an aggregation of his posts on #NoEstimates. I've certainly read some of these before, but there's a lot there and I'm afraid I've not made a comprehensive review this time around.

Glen's also clarified that there's no Part 2 to the book review. However he does have some further commentary that he'll be making available.

Wednesday, 17 January 2018

The UI that broke Hawaii

Does anyone need reminding that design is more than pretty colours? Apparently they do. Here’s the web-app screen that sent an SMS to some 1 to 1.5 million Hawaiians warning that a ballistic missile was headed their way.

Emergency SMS control screen

At least bad data design didn’t kill anyone this time*. I hope.

* This is awfully reminiscent of the powerpoint slide at NASA that should have, but didn’t, warn of the likelihood of the Space Shuttle Columbia disaster.

What’s wrong with that screen?

Let’s count the problems.

  1. It’s heavy with acronyms and jargon that make it hard to understand the links
  2. The items aren’t in any meaningful order
  3. Safety-critical items (Tsunami Warning) are mixed with convenience items (road closure notification) and tests
  4. Heavy use of capitals means the emphasis on DRILL does not stand out
  5. Inconsistent language – there are three test options, all indicated with different phrases:
    • “DRILL” (at the start)
    • “DEMO TEST” (at the end), and
    • “1. TEST Message” (the whole line)

This adds up to a screen with heavy cognitive load to perform a basically simple but safety-critical task. It is inviting an error, and it is a serious failure of the team that commissioned, accepted and manages the software, and the team that built it.

I hope lessons are learnt in the right place, and it’s not the operator who suffers.

How would I change it?

Since I’m carping, I should be clear what I would do differently here. I want to remedy a couple of those faults listed above:

  1. Ditch the acronyms and the jargon. “High Surf Warning North Shores” is perfect. PACOM should say “Incoming Missile Warning”.
  2. Order the items, in a way that makes sense to the operators. Alphabetical would be a good start.
  3. Make a crystal-clear design distinction between high-criticality links, low-criticality links and test actions.

Why haven’t I touched the issues of CAPITALS or of inconsistent language? I want to get the design fix right first (point 3):

  1. Place options for Test, Info and Emergency on different screens, or clearly marked sections on the same screen
  2. Make Test the easiest option to pick (least deliberate) and Emergency the hardest (most deliberate)

Get this right – create utter clarity between Incoming Missile Warning and Incoming Missile Warning Drill – and those other points shouldn’t matter nearly so much.
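
To make "least deliberate versus most deliberate" concrete, here's a hypothetical sketch – not the real HI-EMA system: a drill goes out after a single confirmation, while a live alert only goes out once the operator has re-typed its exact name. The alert names and flow are invented.

    LIVE_ALERTS = {"Incoming Missile Warning", "Tsunami Warning"}
    DRILLS = {"Incoming Missile Warning Drill"}

    def send_alert(name, confirmed, typed_name=None):
        if name in DRILLS:
            # Least deliberate path: one confirmation, clearly labelled as a drill.
            return f"DRILL sent: {name}" if confirmed else "Drill cancelled"
        if name in LIVE_ALERTS:
            # Most deliberate path: the operator must re-type the exact alert name.
            if confirmed and typed_name == name:
                return f"LIVE ALERT sent: {name}"
            return "Live alert NOT sent: confirmation text did not match"
        raise ValueError(f"Unknown alert: {name}")

    print(send_alert("Incoming Missile Warning Drill", confirmed=True))
    print(send_alert("Incoming Missile Warning", confirmed=True,
                     typed_name="Incoming Missile Warning Drill"))

Get it wrong – as in the second call – and nothing goes out.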

Excuses, excuses. This means YOU!

So you don’t work on safety-critical systems? Me neither. This still applies to both of us.

At one time in my career I’d say “But a user wouldn’t do that.” Or “A user shouldn’t do that.” Why would they? It’s stupid. It doesn’t make sense. Obviously it will break the system.

So here’s the heads-up. Sooner or later your users will do that. Why? Because they’re in a hurry. Because they’re overworked. Because their partner yelled at them this morning. Or just because they’re trying to do their job, the best they can, with a limited view of a complex system.

We, the Dev team, are the ones with the full context. We’re the ones tasked with thinking through the workflows – the exceptions as well as the happy path. We’re the ones who need to make the right thing easy and the wrong thing damn near impossible.

And it’s everyone’s responsibility – Devs, Testers, Product Owners and Scrum Masters – whether or not we have a Designer on the team.

A case study

My last product was a lead generation tool for fund managers, including a custom CMS managing a complex relational content model. We provided content editors with a delete button on content items. What about content items with dependencies?

3 options:

  1. Leave it – the content team is responsible for content integrity
  2. Remove the delete button if there are content dependencies
  3. Make the delete button do...something else

1. is the attitude I used to take. A content editor would be daft to delete an Investor with a Mandate hanging off it. But you know it’s going to happen, the very first time they’re in a hurry to clean out an old record.

This is the attitude behind the Hawaii screen.

2. is more helpful. But it leaves users wondering why that delete button is missing. That way bug reports lie!

We went for 3. The delete button is still there, but instead of deleting the item it opens a dialog with an explanation and a list of links to the dependencies that need to be fixed. It makes the wrong thing impossible, and the right thing as easy as possible.
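
For the curious, the shape of that solution is roughly this. It's a sketch with hypothetical names (Investor and Mandate are just the example above, and render_dependency_dialog is imaginary), not our production code.

    class DeleteBlocked(Exception):
        """Raised instead of deleting when dependent content still exists."""
        def __init__(self, dependencies):
            self.dependencies = dependencies
            super().__init__("Cannot delete: dependent items exist")

    def delete_content_item(item, find_dependencies):
        """Delete `item`, or report the dependencies the editor must fix first."""
        dependencies = find_dependencies(item)
        if dependencies:
            # The UI catches this and shows a dialog with a link to each dependency.
            raise DeleteBlocked(dependencies)
        item.delete()

    # Usage sketch:
    # try:
    #     delete_content_item(investor, find_dependencies=lambda i: i.mandates)
    # except DeleteBlocked as blocked:
    #     render_dependency_dialog(blocked.dependencies)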

Coda. A fix for Hawaii

In the wake of the incident, the relevant agency has issued a software update:

Emergency SMS control screen, showing False Alarm option

There it is at the top of the list, a BMD False Alarm option! Granted we’ve seen that this is necessary, but it only adds to the shortcomings listed above:

  1. More acronyms
  2. Still not in a meaningful order
  3. A whole new SMS category mixed up with the ones already there
  4. More capital letters

And a whole new problem. There’s no way to tell from this screen which SMS warnings the False Alarm applies to. Just the missile alert? Whatever was the last message sent? What does this link do if the last message was a Test? Or was sent three months ago?

Without fixing the underlying design failures, they’ve actually made this screen worse not better.

In anticipation of the next inevitable accident,
Guy

Thursday, 4 January 2018

So your Product Owner doesn't like paying off Tech Debt?

No Product Owner likes paying off tech debt. It looks suspiciously like the Devs messing around with perfection when the product is already working. The team could be building me new features dammit!

Tech debt is a pretty abstract concept to people without a coding background. We want to communicate it in a way that explains the value to the PO, in terms that are meaningful to them. Here are two approaches – one that I've used before and that worked, and another that I mean to try next time.

Tried and tested – the car service

If you drive a car, you get it serviced every year. It's painful because (a) it's expensive and (b) your car's still running. Yes you could drive it to Birmingham next week without getting it serviced. And the week after. And the week after that. But it will keep getting a bit slower and a bit more expensive to run, until one day it stops. And it won't stop gently on a day that doesn't matter – it will stop hard on the motorway when you have to get to Birmingham in a hurry. Because that's when you're stressing it hardest.

Your codebase is just the same. Sure you can put off paying off tech debt, because it's still running. But dev work that should be easy will get slower and more expensive, until one day you can't go any further.

If your PO wants to keep driving, they've got to service the car. Otherwise expect it to come to a screeching halt just when it matters the most.

Next time – revenue protection

Product Management types understand two broad categories of project:

  • Revenue generation
  • Revenue protection

They prefer revenue generation projects. Everyone does – they're sexy and pay all our bills. But they understand the need for revenue protection as well.

Paying off tech debt is revenue protection for the workstream. Or maybe velocity protection. Without it, once again work will slow down until it can't go any further.

Can we avoid this in the first place?

Of course it's better if you can avoid having to commit time to paying off tech debt. In a steady-state business-as-usual workstream with frequent releases, ideally the team refactors the code as they go to avoid getting into this situation at all.

However sometimes you have to accrue tech debt – eg there's a cost-of-delay driving an MVP release. Or you'll discover it some time later. When that happens, you'll want to convince your PO to give it appropriate priority.

Guy

Thursday, 21 December 2017

SAFe - a second opinion

I had a brief introduction to SAFe at a conference back in 2015. The session focused very much on how in SAFe 100-plus participants come together every 8 to 12 weeks to plan their next delivery commitment. I was not impressed. This to me was the antithesis of Agile.

This week I sat the Leading SAFe course. Was my second opinion any different?

Summary: SAFe does something Very Very Good, something Somewhat Less Good, and something Deeply Troubling.

But first...

What is SAFe for?

The Scaled Agile Framework (SAFe) is for large teams, typically more than 50 individuals, working on a single software solution.

eg 1. One of the course participants was working on a European national train operator's programme to replace their entire ticketing and reservations system. 300+ people, most of them developers, working on a single programme at the same time. (I probably wouldn't structure it like that, but apparently it's working for them.)

eg 2. The trainer has acted as SAFe Programme Consultant on a military system. I don't know which one, but you can easily imagine that a programme to overhaul the systems on the Eurofighter Typhoon might take a dozen software teams several years.

The Very Very Good

In my first encounter with SAFe I was deeply unimpressed with Programme Increment Planning: a 2-day conference, every 8-10 weeks, involving every member of the extended team committing to typically 4 Sprints' work.

Well, I changed my mind.

If we accept that working at scale is something that some programmes just have to do, they are inevitably going to lose a degree of Agility. Can't have Team 1 pivoting when they're working on the same solution as Team 10, or ignoring a dependency from Team 7.

PI Planning is actually a really well designed way to get the teams collaborating. Some key features indicate the genuinely as-Agile-as-possible flavour:

  • Teams plan their own work, based on their velocity
  • Face-to-face discussions between developers across the organisation, to deal with dependencies, ambiguities etc
  • Commitment to Objectives for the upcoming Sprints, not specific User Stories
  • An explicit understanding that the teams' User Stories and plans will change during the forthcoming Sprints (Sprints. Does that sound very Scrummy? More on that below...)
  • A confidence vote towards the end, with an opportunity for team members to raise doubts, risks and concerns

SAFe appears to take de-centralised decision-making, respect for and autonomy of individual team members very seriously. Bringing 100 people together for two days is expensive. But if you're going to have them working on the same overall effort, giving them the opportunity to work directly together every couple of months is a pretty damn good start!

The Somewhat Less Good

So SAFe provides a great way for multiple Agile teams to work together – 10/10. SAFe also has something to say about how those individual teams operate. Actually, SAFe has lots to say about how those individual teams operate.

  • All Scrum teams operate to the same cadence, with Sprints starting and finishing on the same day
  • Teams can operate something other than Scrum (probably Kanban), but they have to deliver to the same cadence anyway
  • Lots and lots of guidance about specific practices: WSJF prioritisation, Lean UX, Story Point estimation and plenty more (see the WSJF sketch just below)
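
Since WSJF may be unfamiliar: it's Cost of Delay divided by job size, with both scored in relative units. Here's a minimal sketch of the calculation; the backlog items and scores are invented.

    def wsjf(business_value, time_criticality, risk_opportunity, job_size):
        """Weighted Shortest Job First: Cost of Delay divided by job size."""
        cost_of_delay = business_value + time_criticality + risk_opportunity
        return cost_of_delay / job_size

    # Invented backlog items, scored in relative (Fibonacci-ish) units.
    features = {
        "payment retries": wsjf(8, 13, 3, 5),
        "new onboarding": wsjf(13, 3, 2, 13),
        "audit reporting": wsjf(5, 8, 8, 3),
    }
    for name, score in sorted(features.items(), key=lambda kv: -kv[1]):
        print(f"{name}: WSJF {score:.1f}")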

SAFe derives all of these from its 9 Principles. They're all good principles and it's all good advice – some of the best stuff from the last 2 decades of Agile exploration. But we're moving away from Individuals and interactions over processes and tools (Agile Value #1 for anyone who's forgotten), and replacing it with yet another One True Way.

Look, I understand that SAFe are selling into large enterprises. Some of them don't have any Agile implementation at all and need some guidance, and it is good guidance. (Plus large corporates like uniformity.) But yet another method from the ground up isn't what SAFe excels at and isn't what the Agile community needs. It just seems unnecessary.

The Deeply Troubling

Story Points. Are great. I. Am. Not. Going. To. Sell. Them. To. You. (not here anyway)

Critically, a team comes to their own feel for how User Stories scale for them. Except in SAFe. In SAFe, teams are encouraged to normalise their Story Points: a 3 in my team should mean the same as a 3 in yours. This breaks the cognitive basis of Story Point estimation and the trust put in the team to engage with it. And that should be all I need to say.

It gets worse.

The reason to encourage uniform story-point scaling is so that Product Managers can estimate the sizes of Epics, without consulting the people who will be doing the work, to determine prioritisation and funding. Yes, we're back to software decision-making based on management estimates.
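
To show what that enables (and why it worries me), here's a back-of-the-envelope sketch with invented numbers – not SAFe guidance verbatim:

    # With normalised points, an Epic can be "sized" without talking to the teams.
    epic_points = 2400                  # a Product Manager's guess at the Epic
    teams = 6
    velocity_per_team = 40              # normalised points per Sprint, assumed equal
    sprints = epic_points / (teams * velocity_per_team)
    print(f"{sprints:.0f} Sprints, ie roughly {sprints / 4:.1f} Programme Increments")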

And once more I understand why they've done it. SAFe scales all the way up to Enterprise Portfolio level, and they want to offer a way for senior budget holders to approve pieces of work that could consume many thousands of developer days. And sure it could be done well. But I'm willing to bet that once the SAFe Programme Consultant goes home, these Product Manager estimates rapidly become personal commitments, translated into direct pressure on all those developers or else...

This is only exacerbated by the scale of these Epics, reasonably in the range of 1000s of Story Points. A software organisation going through the detail of Programme Increment Planning might be able to come to a reasonable estimate at this scale. For a Product Manager, it can't be anything but a guess.

Conclusions

  • If you have to structure a software programme with 50+ developers, SAFe offers a great Agile way to plan and deliver at scale. Yes there's a sacrifice of agility here, but it's a cost of operating at scale.
  • SAFe also offers its own approach to Agile delivery. This may be a useful starting point for enterprises new to Agile. For experienced practitioners, it may just be overly restrictive.
  • SAFe's use of Story Points should be treated with a great deal of caution. IMO it's a reversion to a damaging pre-Agile mode.

Like any other Agile method, I recommend you take what works for you and leave the rest.

Guy