Does this feel agile?

This is how Merriam-Webster defines agile:
  1. marked by ready ability to move with quick, easy grace
  2. having a quick, resourceful, and adaptable character

It's common for software development teams to get lost in the artifacts and ceremonies associated with "Agile" methodologies. You know--standups, backlogs, sprints, etc. They can lose sight of the very basic idea of what agility means.

I've always thought of agility as being fundamentally about change--how well you deal with change.

The Agile Manifesto talks about change explicitly:
Responding to change over following a plan
Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.

It doesn't matter how well you're following "the process" if you're not responding well to change. If you're not responding well to change, then you're not agile.

How well do we handle changing requirements, for example? Do we grumble and complain, or do we take it in stride?

If we're grumbling, why are we grumbling? Maybe it's one of these reasons:
  • We wrote very detailed user stories or acceptance criteria that we have to change now
  • The QA people have test cases they have to rewrite now
  • We're mid-sprint and we've already committed to a certain scope of work
In the spirit of responding well to change, we could examine those pain points:
  • Are the user stories or acceptance criteria written at the level of value an end user would recognize, or do they include low-level implementation details?
  • Are the QA people focused on testing business value delivered, or following a rote checklist focused on implementation details?
  • Why don't we deliver the business value that we committed to at the beginning of the sprint and capture the new requirements as an item/items for a future sprint (which is at most 2 weeks away)?


Software is soft. If we're working with a process and artifacts that we experience as hardened, ossified concretions, then we might stop and ask: what would it take to get back to our software being soft? What would it take to get to a place where it felt easy to respond to change? What would it take to be agile again?

Keep Code Reviews Focused on the Code

Hopefully we're all doing code reviews, right? Whether it's part of a pull request process, or something else, it's important to get another set of eyes on one's code.

There are a million checklists and blog posts out there on what to look for and how to give constructive feedback.



The thing I don't see discussed enough is how to appropriately scope a code review. When I'm reviewing commits, what should I leave out?

Here are some things I believe are out of scope for code reviews:

- Hypothetical business requirements

"What if the business wants X as well?" Then let them ask for it. That's what the product backlog is for.

- Broad architectural debates

"Should we change the way that we're doing X in general?" Maybe we should. Let's put an item in the backlog to find a new approach. If we adopt a new pattern for similar functionality, we can revisit this code and refactor.

- Fear of change

This is the perspective that the code under review must represent the last word on a particular topic. If the code under review is high quality, well-tested, and meets a known requirement, then it poses no danger to master. There's no shame in changing or even deleting the code in question in a future pull request if requirements shift.


As with so many aspects of software engineering, setting the appropriate scope is key. For code reviews, stay focused.

"Environmental Issue"

I really like a post I read recently from Paul Osman called Production Oriented Development.

He touches on a lot of topics near and dear to my heart, but the section on non-production environments stood out:

Environments like staging or pre-prod are a lie. When you’re starting, they make a little sense, but as you grow, changes happen more frequently and you experience drift. Also, by definition, your non-prod environments aren’t getting traffic, which makes them fundamentally different. The amount of effort required to maintain non-prod environments grows very quickly. You’ll never prioritize work on non-prod like you will on prod, because customers don’t directly touch non-prod. Eventually, you’ll be scrambling to keep this popsicle sticks and duct tape environment up and running so you can test changes in it, lying to yourself, pretending it bears any resemblance to production.

I've heard the heartbreaking phrase "environmental issue" too many times in my career. You know, like when a developer spends days investigating a bug reported by QA, only for the team to decide it was due to configuration drift between a lower environment and production.

With modern infrastructure-as-code tools like Puppet and Terraform, at least there's a chance of avoiding the pitfalls of non-production environments. Even then, only prod is prod. ¯\_(ツ)_/¯
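As an illustration (not from Osman's post), here's a minimal sketch of one way a team might surface configuration drift before it burns days of debugging: diff the settings of two environments and report every key that differs or is missing. The environment names and configuration keys below are hypothetical.

```python
def config_drift(prod: dict, staging: dict) -> dict:
    """Return keys whose values differ between environments,
    mapped to a (prod_value, staging_value) pair."""
    all_keys = prod.keys() | staging.keys()
    return {
        key: (prod.get(key, "<missing>"), staging.get(key, "<missing>"))
        for key in all_keys
        if prod.get(key) != staging.get(key)
    }

# Hypothetical settings pulled from two environments
prod_config = {"db_pool_size": 50, "feature_x_enabled": True, "cache_ttl": 300}
staging_config = {"db_pool_size": 5, "feature_x_enabled": True}

drift = config_drift(prod_config, staging_config)
# drift reports db_pool_size differing and cache_ttl missing from staging
```

Running a check like this in CI won't make staging into prod, but it at least turns "environmental issue" from a days-long investigation into a one-line report.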

Process Bikeshedding

Hopefully we’re all familiar with Parkinson’s law of triviality:

Parkinson observed that a committee whose job is to approve plans for a nuclear power plant may spend the majority of its time on relatively unimportant but easy-to-grasp issues, such as what materials to use for the staff bikeshed, while neglecting the design of the power plant itself, which is far more important but also far more difficult to criticize constructively.

One anti-pattern to watch out for in retrospectives is what I would call process bikeshedding.

Topics like these tend to get a disproportionate amount of coverage in retrospectives:

  • The length of sprints (2 weeks or 3 weeks?)
  • On which day to start the sprints (Monday or Wednesday?)
  • On which day to have the sprint planning meeting (first day of this sprint or last day of previous sprint?)

They come up over and over because they’re simple, easy to have an opinion on, and no one’s feelings will be hurt by discussing them. Unfortunately, this also means they’re trivial—i.e., not impactful to the delivery of working software to production (the purpose of a sprint).

So how does a team get beyond the trivial to discuss the real stuff? That’s the tough part. But I believe it comes down to trust and scope.

People need to trust each other in order to discuss deep concerns. There’s no magic to building trust. It just takes time.

In Parkinson’s example of the nuclear power plant, why are people so focused on the bikeshed while ignoring the design of the plant itself? I’d say it's an issue of scope. Luckily on a Scrum team we’re not approving designs for a nuclear power plant (hopefully); we’re just trying to improve the next two weeks on our software project.

In this video on Scrum retrospectives, Jeff Sutherland cuts the scope:

You only want to fix one thing at a time.

What is the one thing, that if we fixed it, would have the biggest impact on making this team go fast? And then commit to fix it.

Certainly the color of the bikeshed is not that one thing.

Documentation as Code

I remember being blown away when I first saw Cucumber roughly a decade ago. It was like writing documentation, and then executing the documentation against your actual codebase. Wow!

A decade later, I firmly believe that automated tests, even your plain old unit tests, should be thought of first as documentation that can also be executed.

We’ve all seen codebases where an initial enthusiasm for unit testing slowly erodes as the tedium of maintaining low-value tests doesn’t seem worth it anymore. Tests are commented out when they break, and eventually new tests are no longer added.

And we developers hate writing documentation! It's tedious, docs get out of sync with updates to the codebase, and nobody reads them.

Well, I like to think of automated tests as a chance to write useful documentation for once.

And that’s why I believe the #1 most important quality of a test is readability!

As Roy Osherove says in The Art of Unit Testing:

Readability is so important that, without it, the tests we write are almost meaningless.

Tests are stories we tell the next generation of programmers on a project.

So rather than piling tests onto a codebase just to see a code coverage metric go up, let’s ask: Have I made the codebase easier to understand today?
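As a sketch of what test-as-documentation can look like, here's a toy example; the domain, function, and test names are invented for illustration. The point is that the test name and body together read like a requirement, so the next programmer learns the rule without opening the implementation.

```python
def apply_discount(price: float, customer_years: int) -> float:
    """Toy function under test: loyal customers (5+ years) get 10% off."""
    return round(price * 0.9, 2) if customer_years >= 5 else price

# The test name states the behavior in business terms -- it documents
# the rule, unlike an opaque name such as test_discount_1.
def test_customers_of_five_or_more_years_receive_ten_percent_discount():
    assert apply_discount(100.00, customer_years=5) == 90.00

def test_newer_customers_pay_full_price():
    assert apply_discount(100.00, customer_years=2) == 100.00

test_customers_of_five_or_more_years_receive_ten_percent_discount()
test_newer_customers_pay_full_price()
```

A reader can learn the discount policy from the test names alone; that's the readability Osherove is talking about.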

Breaking Work Into Chunks That Users Care About

When a product backlog item (PBI) is finished, there should be some product of that work that is independently useful to the business.

Let’s say the product owner wants to see a grid of all the accounts in their web application. You could break the development work down into product backlog items like so:

  • PBI 1: Add a link to the administrative navigation that points to the accounts page
  • PBI 2: Add a new action to the Accounts controller that returns a listing of accounts
  • PBI 3: Add a new view
  • PBI 4: Add a grid control to the view
  • etc...

The problem with breaking the work down this way is that there’s nothing valuable to show to an end user after any one of these PBIs is done.

I’d argue that the work should be broken down into exactly one PBI like so:

  • PBI 1: Show a listing of accounts

When this PBI is done, the product owner and any other concerned end users will know that something valuable has been added to the software.

I think some teams like the former method of breaking things down because it feels like they’re getting more “done” when they’ve completed a higher count of PBIs by the end of a sprint. The problem is that this definition of “done” is not one that matters to any end user of the software (and we work for them, right?).

The people who use your software don’t know what PBIs are, and they certainly don’t care how many “points” you completed. They know you did something when they see a useful feature added to the software (in production, obviously).

Part of the spirit of Agile is getting the development team to think in the terms that users think and to think about value in the way the users think about value. When we write product backlog items as user stories, then we keep the focus of each one on atomic business value.

You Test In Production

I always enjoy hearing about heretical ideas in software methodology. I remember the first time I read the phrase “if it hurts, do it more often” in one of Martin Fowler’s articles.

And I had a similar feeling recently when reading Pete Hodgson’s article “Hello, production”:

Deploying something useless into production, as soon as you can, is the right way to start a new project.

It seems like this is the right time—the cloud era—to talk seriously about putting “Hello, world” into production.

Make it real

There’s nothing more motivating than seeing your work live. That feeling of This is real now. It’s almost like the advice you hear if you’re having trouble committing to a trip to a new place: book the flight immediately and then fill in the details of your agenda later. Booking the flight is the smallest action you can take to make the trip real.

If you think this is hard now, imagine how hard it will be later

It’s much harder to establish an automated deployment pipeline when you’ve got a bunch of existing, complex crap to deploy. It’s never easier to figure it out than when your deployment package is a single static HTML file.

As Pete writes:

It’s a lot more pleasant to work through these processes calmly at the start of development, rather than rushing to get everything squared away in front of a fast-approaching launch deadline.

In my experience, the go-live launch for systems built out this way tends to go extremely smoothly, to the point of being anti-climactic.

I can’t think of any descriptor I’d rather have associated with my production deployment than “anti-climactic”!

Why deploy without features

There’s no feature more valuable than the ability to deliver additional features easily.

Prod is the only thing that’s real

I will say that in my 15 years in the software industry I’ve never seen a UAT environment that was truly “the same” as production. Not once. Despite all claims to the contrary by the people involved. Not once have I worked on an application where there weren’t bugs that only cropped up in production. Let’s stop pretending that we don’t test things in production. Production is the only place to really test something.