Breaking Work Into Chunks That Users Care About

When a product backlog item (PBI) is finished, there should be some product of that work that is independently useful to the business.

Let’s say the product owner wants to see a grid of all the accounts in their web application. You could break the development work down into product backlog items like so:

  • PBI 1: Add a link to the administrative navigation that points to the accounts page
  • PBI 2: Add a new action to the Accounts controller that returns a listing of accounts
  • PBI 3: Add a new view
  • PBI 4: Add a grid control to the view
  • etc...

The problem with breaking the work down this way is that there’s nothing valuable to show to an end user after any one of these PBIs is done.

I’d argue that the work should be broken down into exactly one PBI like so:

  • PBI 1: Show a listing of accounts

When this PBI is done, the product owner and any other concerned end users will know that something valuable has been added to the software.

I think some teams like the former method of breaking things down because it feels like they’re getting more “done” when they’ve completed a higher count of PBIs by the end of a sprint. The problem is that this definition of “done” is not one that matters to any end user of the software (and we work for them, right?).

The people who use your software don’t know what PBIs are and they certainly don’t care how many “points” you completed. They know you did something when they see a useful feature added to the software (in production, obviously).

Part of the spirit of Agile is getting the development team to think in the terms users think in and to value work the way users value it. When we write product backlog items as user stories, we keep each one focused on atomic business value.

You Test In Production

I always enjoy hearing about heretical ideas in software methodology. I remember the first time I read the phrase “if it hurts, do it more often” in one of Martin Fowler’s articles.

And I had a similar feeling recently when reading Pete Hodgson’s article “Hello, production”:

Deploying something useless into production, as soon as you can, is the right way to start a new project.

It seems like this is the right time—the cloud era—to talk seriously about putting “Hello, world” into production.
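
To make the idea concrete: here’s roughly the smallest thing you could push through a brand-new pipeline. It’s a minimal sketch using the JDK’s built-in com.sun.net.httpserver package; the class name and port are my own illustration, not anything from Pete’s article.

import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class HelloProduction {
    public static void main(String[] args) throws IOException {
        // The entire "application": one endpoint whose only job is to
        // prove that the deployment pipeline works end to end.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            byte[] body = "Hello, production".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}

The code is beside the point. Deploying even this forces you to answer every pipeline question (hosting, builds, credentials, DNS) while the stakes are still zero.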

Make it real

There’s nothing more motivating than seeing your work live. That feeling of “This is real now.” It’s almost like the advice you hear if you’re having trouble committing to a trip to a new place: book the flight immediately and then fill in the details of your agenda later. Booking the flight is the smallest action you can take to make the trip real.

If you think this is hard now, imagine how hard it will be later

It’s much harder to establish an automated deployment pipeline when you’ve got a bunch of existing, complex crap to deploy. It’s never easier to figure it out than when your deployment package is a single static HTML file.

As Pete writes:

It’s a lot more pleasant to work through these processes calmly at the start of development, rather than rushing to get everything squared away in front of a fast-approaching launch deadline.

In my experience, the go-live launch for systems built out this way tends to go extremely smoothly, to the point of being anti-climactic.

I can’t think of any descriptor I’d rather have associated with my production deployment than “anti-climactic”!

Why deploy without features

There’s no feature more valuable than the ability to deliver additional features easily.

Prod is the only thing that’s real

I will say that in my 15 years in the software industry I’ve never seen a UAT environment that was truly “the same” as production. Not once. Despite all claims to the contrary by the people involved. Not once have I worked on an application where there weren’t bugs that only cropped up in production. Let’s stop pretending that we don’t test things in production. Production is the only place to really test something.

Deliver Us From Estimation

It’s not ideal to think of our product as a long list of things we must do. It’s not the best idea to predict when we’ll be done, or even project when we’ll be done. The Agile Manifesto calls for Working Software. Take the next problem, solve it with working software. Really solve it, which means getting that solution in the hands of the people who need it. It’s not about planning, predicting, and projecting. It’s about choosing, building, and providing.

- Ron Jeffries

I’m not fundamentally opposed to estimation in software development. But I might say I’m opposed to taking it seriously.

The only thing that’s real in this industry is working software in the hands of real users. The map is not the territory.

My heart’s desire is that every impulse toward better estimation get channeled toward streamlining delivery. Amen.

Retro Grade

Out of all the Agile ceremonies, it seems like retrospectives tend to go to pot first (if they’re done at all).

They turn quickly into complaining sessions, with no visible signs of progress. It’s easy to schedule an hour-long meeting every couple of weeks; we can say objectively that that happened. It’s much harder to actually fix all those things people complained about.

And there’s nothing more demoralizing and demotivating than marching through a ceremonial process over and over that we know is not going to change anything.

I propose that every worthwhile retrospective must begin by discussing the progress made on the action items produced in the previous retrospective. This of course necessitates that action items from a retrospective are first-class members of the next sprint backlog, i.e., the list of work items the team commits to delivering by the end of the sprint. Without accountability and trackability, let’s just admit our retrospective is a show trial—a farce—lip service to the Agile ideal of continuous improvement.

And the thing is, once you begin this backslide, there’s no returning. The retrospectives become less and less productive, people stop speaking up, and the value of the ceremony erodes into nothing. Why is no one speaking up in the retros anymore? Because they know it’s pointless.

Unfortunately, there’s no escaping hard work in software development, no matter how many meetings we put on the calendar and what we call them.

Discipline Is a Limited Resource

Many years ago on this very blog I wrote the cheekily titled Blodgett’s First Law of Software Development:
A development process that involves any amount of tedium will eventually be done poorly or not at all.
I don’t remember what exactly triggered me to write that down in February of 2008, but I could take a good guess.
 
This may be a gross oversimplification, but I think that smart people tend to have a low tolerance for repetitive or tedious work. They can’t ignore the futility of what they’re doing.
 
A way I see this play out in software development is that all those things developers should do, but that aren’t strictly necessary to ship software, tend to fall by the wayside given enough time: testing, documentation, code reviews, tracking hours spent on tasks, and “process” in general.
 
You can usually tell when an activity is discipline-based, because you’ll notice someone who’s not a developer is constantly hounding the developers to do it. That’s a clear sign.
 
When it becomes clear that an activity is discipline-based, it’s time to ask: Is there a way we can get roughly the same result while adding less to the discipline burden of the team members?
 
Why is it that this activity feels so tedious and discipline-based? Maybe some aggressive automation is in order. Computers are damn good at doing tedious things, and you never have to hound them to do it. The necessity of discipline is a sign that something has not been automated yet.
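
As a hypothetical example, suppose someone is constantly hounding developers about missing license headers in source files. A tiny build step can take that off the team’s discipline budget entirely. This is just a sketch; the "// Copyright" convention and the src directory are made-up stand-ins for whatever your tedious check actually is.

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Stream;

public class LicenseHeaderCheck {
    public static void main(String[] args) throws IOException {
        // Fail the build when a source file lacks the (hypothetical) header,
        // so no human ever has to hound anyone about it again.
        try (Stream<Path> files = Files.walk(Paths.get("src"))) {
            List<Path> missing = files
                .filter(p -> p.toString().endsWith(".java"))
                .filter(p -> {
                    try {
                        return !Files.readString(p).startsWith("// Copyright");
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                })
                .toList();
            missing.forEach(p -> System.err.println("Missing license header: " + p));
            System.exit(missing.isEmpty() ? 0 : 1);
        }
    }
}

Wire something like this into CI and the hounding stops; the computer never gets tired of checking.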
 
Relying on discipline alone is not a sustainable strategy for accomplishing...well...basically anything, at least in the long run. Try to lean on discipline as little as possible.

How Do I Convert Story Points to Hours?

So someone told you that in “Agile” we estimate in “story points” rather than hours. Since you want to be Agile you decide that’s the right thing to do. Great.

But almost immediately a conundrum arises: How the heck do we convert these story points into an actual estimate that we care about...you know...in time? What’s the dang formula?

Sorry, but I’m going to be a jerk and answer your question with a question.

Why do you feel you need to know how many hours something will take? It’s probably because you’re trying to “forecast” when you can do a “release” of your software, right?

Here’s my next question: What if you stopped doing “releases” altogether and instead continuously delivered business value to production? Would it still matter how many hours that PBI was going to take, or would it be good enough to know that it will be done within a few days and then immediately deployed to actual users?

Now imagine how much simpler things would get if you didn’t feel the need to care so much about estimates. Luckily we’re not building bridges here; we’re building small digital artifacts that can be delivered to people instantaneously over the internet when they’re done.

If you didn’t try to deliver the whole darn thing at once, then it wouldn’t all need to be right at once. You could break something small instead of something huge. You could deliver a small broken thing in a few days rather than a huge broken thing after months of anticipation.

How do we stop caring so much about estimates? Investigating that question could change everything.

I Prefer to Read the Classics

Programs must be written for people to read, and only incidentally for machines to execute.

- Abelson & Sussman, Structure and Interpretation of Computer Programs

I like to occasionally revisit Martin Fowler’s article Mocks Aren’t Stubs whenever I’ve been thinking a lot about unit testing. Each time through, I pick up on nuances I missed before.

In the article, Fowler breaks down what he calls the Classicist vs. Mockist schools of unit testing. He describes what he sees as the advantages and disadvantages of each approach in a fairly unbiased way.

In my most recent pass through the article, I noticed that he leaves out what I see as an important facet of this divide in approaches: readability of tests.

Here’s an example of the Classicist style of unit testing that Fowler uses in the article:

public void testOrderIsFilledIfEnoughInWarehouse() {
    // (The warehouse fixture is created and stocked with 50 units of
    // TALISKER in a setUp() method, elided here.)
    Order order = new Order(TALISKER, 50);
    order.fill(warehouse);

    // State-based verification: assert on the resulting state of real objects.
    assertTrue(order.isFilled());
    assertEquals(0, warehouse.getInventory(TALISKER));
}

And here’s a functionally similar test he provides in the Mockist style of unit testing. I understand it may not be the cutting edge of Mockist style, but I think it illustrates the problem for me:

public void testFillingRemovesInventoryIfInStock() {
    // setup - data
    Order order = new Order(TALISKER, 50);
    Mock warehouseMock = new Mock(Warehouse.class);

    // setup - expectations
    warehouseMock.expects(once()).method("hasInventory")
        .with(eq(TALISKER), eq(50))
        .will(returnValue(true));
    warehouseMock.expects(once()).method("remove")
        .with(eq(TALISKER), eq(50))
        .after("hasInventory");

    // exercise
    order.fill((Warehouse) warehouseMock.proxy());

    // verify
    warehouseMock.verify();
    assertTrue(order.isFilled());
}

In my opinion, Mockist-style unit tests are much harder to read than Classicist ones. And readability is king. If you scan a test and think to yourself, “I have no idea what this test is supposed to prove,” then that’s a bad test.

And to be frank, I sometimes feel that mock-heavy tests are more a demonstration of Look what objects can do! than a practical means of gaining confidence in one’s software. Each new mocking framework tries every syntactic trick it can think of to mitigate the fact that behavior-based verification is inherently an awkward concept.
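
For what it’s worth, here’s my own rough sketch of the same test in modern Mockito syntax (mock, when, and verify are real Mockito calls; the adaptation is mine, not Fowler’s). The ergonomics are nicer, but at its core it’s still behavior-based verification:

// (Assumes static imports of org.mockito.Mockito.* and JUnit's assertTrue.)
public void testFillingRemovesInventoryIfInStock() {
    Order order = new Order(TALISKER, 50);
    Warehouse warehouse = mock(Warehouse.class);
    when(warehouse.hasInventory(TALISKER, 50)).thenReturn(true);

    order.fill(warehouse);

    // Still asserting on the conversation between objects,
    // not just the final state.
    verify(warehouse).remove(TALISKER, 50);
    assertTrue(order.isFilled());
}

Easier on the eyes, but the reader still has to mentally replay the expected interactions to figure out what the test proves.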

If the primary test of a test is readability, then the Classicists have it.