Breaking Work Into Chunks That Users Care About

When a product backlog item (PBI) is finished, there should be some product of that work that is independently useful to the business.

Let’s say the product owner wants to see a grid of all the accounts in their web application. You could break the development work down into product backlog items like so:

  • PBI 1: Add a link to the administrative navigation that points to the accounts page
  • PBI 2: Add a new action to the Accounts controller that returns a listing of accounts
  • PBI 3: Add a new view
  • PBI 4: Add a grid control to the view
  • etc...

The problem with breaking the work down this way is that there’s nothing valuable to show to an end user after any one of these PBIs is done.

I’d argue that the work should be broken down into exactly one PBI like so:

  • PBI 1: Show a listing of accounts

When this PBI is done, the product owner and any other concerned end users will know that something valuable has been added to the software.

I think some teams like the former method of breaking things down because it feels like they’re getting more “done” when they’ve completed a higher count of PBIs by the end of a sprint. The problem is that this definition of “done” is not one that matters to any end user of the software (and we work for them, right?).

The people that use your software don’t know what PBIs are and they certainly don’t care how many “points” you completed. They know you did something when they see a useful feature added to the software (in production, obviously).

Part of the spirit of Agile is getting the development team to think in the same terms users do, and to think about value the way users think about value. When we write product backlog items as user stories, we keep each one focused on atomic business value.

You Test In Production

I always enjoy hearing about heretical ideas in software methodology. I remember the first time I read the phrase “if it hurts, do it more often” in one of Martin Fowler’s articles.

And I had a similar feeling recently when reading Pete Hodgson’s article "Hello, production”:

Deploying something useless into production, as soon as you can, is the right way to start a new project.

It seems like this is the right time—the cloud era—to talk seriously about putting “Hello, world” into production.

Make it real

There’s nothing more motivating than seeing your work live. That feeling of This is real now. It’s almost like the advice you hear if you’re having trouble committing to a trip to a new place: book the flight immediately and then fill in the details of your agenda later. Booking the flight is the smallest action you can take to make the trip real.

If you think this is hard now, imagine how hard it will be later

It’s much harder to establish an automated deployment pipeline when you’ve got a bunch of existing, complex crap to deploy. It’s never easier to figure it out than when your deployment package is a single static HTML file.

As Pete writes:

It’s a lot more pleasant to work through these processes calmly at the start of development, rather than rushing to get everything squared away in front of a fast-approaching launch deadline.

In my experience, the go-live launch for systems built out this way tends to go extremely smoothly, to the point of being anti-climactic.

I can’t think of any descriptor I’d rather have associated with my production deployment than “anti-climactic”!

Why deploy without features

There’s no feature more valuable than the ability to deliver additional features easily.

Prod is the only thing that’s real

I will say that in my 15 years in the software industry I’ve never seen a UAT environment that was truly “the same” as production. Not once. Despite all claims to the contrary by the people involved. Not once have I worked on an application where there weren’t bugs that only cropped up in production. Let’s stop pretending that we don’t test things in production. Production is the only place to really test something.

Deliver Us From Estimation

It’s not ideal to think of our product as a long list of things we must do. It’s not the best idea to predict when we’ll be done, or even project when we’ll be done. The Agile Manifesto calls for Working Software. Take the next problem, solve it with working software. Really solve it, which means getting that solution in the hands of the people who need it. It’s not about planning, predicting, and projecting. It’s about choosing, building, and providing.

- Ron Jeffries

I’m not fundamentally opposed to estimation in software development. But I might say I’m opposed to taking it seriously.

The only thing that’s real in this industry is working software in the hands of real users. The map is not the territory.

My heart's desire is that every impulse toward better estimation be channeled toward streamlining delivery. Amen.

 

Retro Grade

Out of all the Agile ceremonies, it seems like retrospectives tend to go to pot first (if they’re done at all).

They turn quickly into complaining sessions with no visible signs of progress. It’s easy to schedule an hour-long meeting every couple of weeks; we can say objectively that it happened. It’s much harder to actually fix all those things people complained about.

And there’s nothing more demoralizing and demotivating than marching through a ceremonial process over and over that we know is not going to change anything.

I propose that every worthwhile retrospective must begin by discussing the progress made on the action items produced in the previous retrospective. This of course necessitates that action items from a retrospective are first-class members of the next sprint backlog, i.e., the list of work items the team commits to delivering by the end of the sprint. Without accountability and trackability, let’s just admit our retrospective is a show trial—a farce—lip service to the Agile ideal of continuous improvement.

And the thing is, once you begin this backslide, there’s no returning. The retrospectives become less and less productive, people stop speaking up, and the value of the ceremony erodes into nothing. Why is no one speaking up in the retros anymore? Because they know it’s pointless.

Unfortunately, there’s no escaping hard work in software development, no matter how many meetings we put on the calendar and what we call them.

Discipline Is a Limited Resource

Many years ago on this very blog I wrote the cheekily titled Blodgett’s First Law of Software Development:

A development process that involves any amount of tedium will eventually be done poorly or not at all.

I don’t remember what exactly triggered me to write that down in February of 2008, but I could take a good guess.
 
This may be a gross oversimplification, but I think that smart people tend to have a low tolerance for repetitive or tedious work. They can’t ignore the futility of what they’re doing.
 
One way I see this play out in software development is that the things developers should do, but that aren’t strictly necessary to ship software, tend to fall by the wayside given enough time: testing, documentation, code reviews, tracking hours spent on tasks, and “process” in general.
 
You can usually tell when an activity is discipline-based: someone who’s not a developer is constantly hounding the developers to do it. That’s a clear sign.
 
 
When it becomes clear that an activity is discipline-based, it’s time to ask: Is there a way we can get roughly the same result while adding less to the discipline burden of the team members?
 
Why is it that this activity feels so tedious and discipline-based? Maybe some aggressive automation is in order. Computers are damn good at doing tedious things, and you never have to hound them to do it. The necessity of discipline is a sign that something has not been automated yet.
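As a concrete (and entirely hypothetical) sketch of what that automation can look like: instead of relying on someone to nag developers about stray debug prints before every release, a tiny check can fail the build whenever one slips in. The class name, the directory default, and the banned pattern here are all assumptions for illustration, not a prescription.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Stream;

// Hypothetical build-time check: scan a source tree for leftover debug
// prints and fail the build if any are found. No hounding required.
class DebugPrintCheck {

    // Returns every .java file under sourceRoot that contains the
    // banned pattern (here, a hard-coded System.out.println).
    static List<Path> findOffenders(Path sourceRoot) throws IOException {
        try (Stream<Path> files = Files.walk(sourceRoot)) {
            return files
                .filter(p -> p.toString().endsWith(".java"))
                .filter(p -> {
                    try {
                        return Files.readString(p).contains("System.out.println");
                    } catch (IOException e) {
                        return false; // unreadable file: skip it
                    }
                })
                .toList();
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Path.of(args.length > 0 ? args[0] : "src");
        List<Path> offenders = findOffenders(root);
        if (!offenders.isEmpty()) {
            offenders.forEach(p -> System.err.println("Debug print found in: " + p));
            System.exit(1); // non-zero exit fails the CI build
        }
    }
}
```

Wired into CI, a check like this turns a discipline-based chore into something the computer does tirelessly on every commit.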
 
Relying on discipline alone is not a sustainable strategy to accomplish…well...basically anything, at least in the long run. Try to lean on discipline as little as possible.

How Do I Convert Story Points to Hours?

So someone told you that in “Agile” we estimate in “story points” rather than hours. Since you want to be Agile you decide that’s the right thing to do. Great.

But almost immediately a conundrum arises: How the heck do we convert these story points into an actual estimate that we care about...you know...in time? What’s the dang formula?

Sorry, but I’m going to be a jerk and answer your question with a question.

Why do you feel you need to know how many hours something will take? It’s probably because you’re trying to “forecast” when you can do a “release” of your software, right?

Here’s my next question: What if you stopped doing “releases” altogether and instead continuously delivered business value to production? Would it still matter how many hours that PBI was going to take, or would it be good enough to know that it will be done within a few days and then immediately deployed to actual users?

Now imagine how much simpler things would get if you didn’t feel the need to care so much about estimates. Luckily we’re not building bridges here, we’re building small digital artifacts that can be delivered to people instantaneously over the internet when they’re done.

If you didn’t try to deliver the whole darn thing at once, then it wouldn’t all need to be right at once. You could break something small instead of something huge. You could deliver a small broken thing in a few days rather than a huge broken thing after months of anticipation.

How do we stop caring so much about estimates? Investigating that question could change everything.

I Prefer to Read the Classics

 

Programs must be written for people to read, and only incidentally for machines to execute.

- Abelson & Sussman, Structure and Interpretation of Computer Programs

Whenever I’ve been thinking a lot about unit testing, I like to revisit Martin Fowler's article Mocks Aren’t Stubs. Each time through, I pick up on nuances I’d missed before.

In the article, Fowler breaks down what he calls the Classicist vs. Mockist schools of unit testing. He describes what he sees as the advantages and disadvantages of each approach in a fairly unbiased way.

In my most recent pass through the article, I noticed that he leaves out what I see as an important facet of this divide in approaches: readability of tests.

Here’s an example of the Classicist style of unit testing that Fowler uses in the article:

public void testOrderIsFilledIfEnoughInWarehouse() {
    Order order = new Order(TALISKER, 50);
    order.fill(warehouse);
    assertTrue(order.isFilled());
    assertEquals(0, warehouse.getInventory(TALISKER));
}

 

And here’s a functionally similar test he provides in the Mockist style of unit testing, which I understand may not be at the cutting edge of Mockist style, but I think illustrates the problem for me:

public void testFillingRemovesInventoryIfInStock() {
    //setup - data
    Order order = new Order(TALISKER, 50);
    Mock warehouseMock = new Mock(Warehouse.class);

    //setup - expectations
    warehouseMock.expects(once()).method("hasInventory")
        .with(eq(TALISKER), eq(50))
        .will(returnValue(true));
    warehouseMock.expects(once()).method("remove")
        .with(eq(TALISKER), eq(50))
        .after("hasInventory");

    //exercise
    order.fill((Warehouse) warehouseMock.proxy());

    //verify
    warehouseMock.verify();
    assertTrue(order.isFilled());
}
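
Neither snippet shows the Order and Warehouse classes under test. Purely for reference, here’s one minimal, hypothetical reconstruction, inferred only from the calls the tests make (Fowler’s article is the authority, not this sketch; TALISKER is assumed to be a product-name constant):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical domain classes, inferred from how the tests use them.
class Warehouse {
    private final Map<String, Integer> inventory = new HashMap<>();

    void add(String product, int amount) {
        inventory.merge(product, amount, Integer::sum);
    }

    boolean hasInventory(String product, int amount) {
        return getInventory(product) >= amount;
    }

    void remove(String product, int amount) {
        inventory.merge(product, -amount, Integer::sum);
    }

    int getInventory(String product) {
        return inventory.getOrDefault(product, 0);
    }
}

class Order {
    private final String product;
    private final int amount;
    private boolean filled = false;

    Order(String product, int amount) {
        this.product = product;
        this.amount = amount;
    }

    // Fill the order only if the warehouse has enough stock.
    void fill(Warehouse warehouse) {
        if (warehouse.hasInventory(product, amount)) {
            warehouse.remove(product, amount);
            filled = true;
        }
    }

    boolean isFilled() {
        return filled;
    }
}
```

With a real Warehouse like this, the Classic test needs only a setup line such as warehouse.add(TALISKER, 50) before it runs; that small amount of real state is exactly what the Mockist version replaces with expectations.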

In my opinion, Mockist-style unit tests are much harder to read than Classic tests. And readability is king. If you scan a test and think to yourself I have no idea what this test is supposed to prove, then that’s a bad test.

And to be frank, I sometimes feel that mock-heavy tests are more a demonstration of Look what objects can do! than a practical means of gaining confidence in one’s software. Each new mocking framework tries every syntactic trick it can think of to mitigate the fact that behavior-based verification is inherently an awkward concept.

If the primary test of a test is readability, then the Classicists have it.

The Psychology of Story Points

One of the evergreen topics among Agilists is the debate of story points vs. hours.

My own take on this debate is largely based on what I see as the psychology of estimating with story points versus the psychology of estimating with hours.

Experience has taught me that developers are generally terrible at predicting how many hours it will take for them to do something, and I don’t think that’s entirely their fault.

I think it may be universally true that people who have commissioned a work effort are happier when the work takes less time to complete rather than more time.

This creates a natural pressure on the people estimating that work to cheat in only one direction: down, never up.

The brilliance of story points is that when comparing the relative complexity of two chunks of work, the estimator feels no pressure to rate one chunk as less complex or more complex than another. There’s no pressure to cheat.

I fully understand that people who want something badly also want to know how much time it will take to get that thing. But is there any point in getting the answer you desire if it’s always wrong?

Quality Assistance

I’ve had a gut feeling for a while that the existence of a dedicated QA team actually leads to more bugs, not fewer, by letting developers off the hook for the quality of their software.

It was fascinating for me to learn about the approach that Atlassian takes to ensuring the quality of their Jira product. The presentation below by the head of QA for the Jira team sums up their almost heretical approach to quality assurance, which they think of more as quality assistance.

I love their idea that QA experts should function more as testing consultants that are charged with helping the developers to be better testers themselves, with the ultimate goal of the testing consultants moving on to other things once the developers are self-sufficient.

Instead of pair programming, how about pair testing—where a QA expert pairs with a developer to do exploratory testing of the developer’s work.

It’s so interesting to see how they’ve basically inverted the typical flow of activity during an iteration. The QA input comes before the development work starts, not after it’s done (well, “done”).

All in all, learning about a fresh approach like this gives me hope that as an industry we can come up with better ways to resolve the tensions between development and QA in an Agile context.

My Job Went to the Cloud

In my post Be the Automator I discussed the inevitability of automation in the software profession, and the need to embrace this automation lest you be automated.

I read a thought-provoking post by Forrest Brazeal recently called The Creeping IT Apocalypse that discusses the specter of The Cloud, in particular...

It’s not that these tools have democratized IT or software development, exactly. Rather, they’ve enabled technical work to be done by a vastly smaller absolute number of people.

Companies still need tech-savvy people, of course, just like factories need people on the floor. But instead of five backend developers and three ops people and a DBA to keep the lights on for your line-of-business app, now you maybe need two people total. Those two people make great money, they’re plenty busy, and they have lots of technical challenges to solve. But they’re not managing a database cluster or babysitting a build server or writing giant stored procedures to do some non-differentiated task, like OCR on insurance forms. The cloud provider can do that (and is adding more capabilities all the time).

As Forrest points out, this is not the “my job went to India” type of fear. Instead of someone cheaper doing your job in a foreign country, the job disappears from the human labor force, to be done by a server farm in rural Washington.

I can say in my experiences inside of big enterprise IT organizations, the number of people doing repetitive, mechanistic, tedious jobs—jobs that should not be done by anyone—is staggering.

The economy will take a small dip, or your department will get re-orged, and you will lose that job as an operations engineer on a legacy SaaS product. You’ll look around for a similar job in your area and discover that nobody is hiring people anymore whose skill set is delivering a worse version of what AWS’s engineers can do for a fraction of the cost.

Even the big, slow companies that are dragging their feet on cloud adoption will eventually figure this stuff out, because all their competitors are figuring it out. Dark clouds are on the horizon.

Job Ad Red Flags: “Unlimited Vacation”

Nothing makes my spidey sense tingle in a job ad quite like the phrase “unlimited vacation.”


It’s one of those transparently ridiculous perks. Like how free sodas in the office are an amazing perk. Or how long meetings are fun if there’s “free” pizza.

Obviously, the vacation is not unlimited. So why say that it is?

There are certainly companies out there that have the best intentions for their employees. But stating that you have an unlimited vacation policy and leaving it at that is frankly irresponsible.

“I took 3 months off from my job last year because my employer has an unlimited vacation plan!” — No One

The Myth Of The Unlimited Vacation Policy

Again, I don’t believe every company that has such a policy is nefarious, but the end result is the same. As a Hacker News commenter points out:

It's a means of weaponising ambiguity into guilt. Everyone knows it's not unlimited, because you can't take 3 months off. But because there's no boundary, the real limit has to be found by stressfully, riskily probing it.

pjc50

I think a much better policy is a minimum vacation policy, with unlimited discretionary days on top.

We require that each employee take a minimum of [high number] vacation days, and we allow unlimited additional vacation days beyond that.

Now that is a true perk.