Showing Up for Remote Work

Remote work is awesome. I've been doing it full time for over 5 years now, and I've learned the importance of "showing up" as a remote worker.

It's really easy to feel disconnected from people when you aren't in the same physical space. I've picked up a few habits that have helped me stay visible as a remote engineer.

Show Your Face

Seeing someone's face instantly humanizes them again. I've caught myself making assumptions about someone's personality and attitude when I'd only ever communicated with them via text.

Who would you rather have leaving a comment on your pull request?

  [a profile photo of a real person]  or  [a default, lifeless avatar]

By default, the various services you use at work to collaborate with people will give you a lifeless avatar. Do the bare minimum and upload a picture of yourself to your profile on each one.

The same goes for Zoom meetings. Every laptop these days comes with a webcam, or you can buy a cheap USB one and clip it to the top of your monitor. Even if your hair is messy and your home office is drab-looking, just turn on your webcam. Don't be the one person whose camera never comes on.

Speak Up

Simply hearing someone's voice is a powerful thing when you're not in the same physical space. It doesn't take much to say "Good morning" at the beginning of a Zoom call or "Thanks, see ya" at the end.

When I'm attending meetings remotely, I try to speak more than I probably would if everyone were in the same room. Even if I don't have anything particularly original or earth-shattering to contribute, it doesn't hurt to say "Yeah, that makes sense" or "I agree". Your silent nods aren't going to be noticed in a Zoom call with ten people.

Show Your Work

I try to take advantage of the various activity feeds that surround my work as a software engineer.

For example:

  • Instead of coding for days on a backlog item and then making one big commit-push to the team's Git repository, break your work into smaller logical chunks that you can push daily
  • If you did some research as part of a backlog item that won't be captured as code in the repository, write up an article in the team wiki summarizing your findings and link to it from the backlog item or post a link in the team chat
  • Instead of the briefest possible comment in the morning standup like "Still working on item #4567", give a few sentences describing the specific progress you made on that item
  • Tag specific people in the comments on backlog items, pull requests, etc. when you've done some work that would be of interest to them

Acknowledge Communications

It's easy to feel like you're writing into a black hole when working remotely. Let people know you're paying attention.

When someone writes a message in the team chat that helped you, throw a thumbs-up on it or tag them in a reply to say thanks. Same goes for nice or constructive comments on wiki articles you wrote or pull requests you submitted.

With Great Power Comes Great Responsibility

Remote work arrangements are a great benefit for engineers, and they require a great deal of trust between employer and employee to work. Since I'm not showing up in person, I make a conscious effort to show up in other ways.

Your Wiki Has Expired

At a few jobs now, I've spent my first days and weeks reading lots of internal wiki articles written by previous waves of team members.

Through these experiences I've come to realize that deleting documentation is at least as important as writing it in the first place.

We're probably all familiar with the concept of self-documenting code. Since we know that code comments tend to drift from the code they're describing, we structure and choose names for our code elements so that the code itself describes its purpose. Great!

If we're being good developers, we also write unit tests for our code that form a layer of executable documentation covering the code. We know (in most cases) that our tests have drifted from the code they cover when they start failing. Even better!
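
As a minimal sketch of what I mean (the function, the 90-day rule, and the names below are all invented for illustration): the code describes its own purpose through naming, and the tests form a layer of documentation that fails loudly the moment the behavior changes.

    from datetime import date
    import unittest


    def is_account_dormant(last_activity: date, today: date) -> bool:
        """An account is dormant after 90 or more days without activity."""
        return (today - last_activity).days >= 90


    class IsAccountDormantTests(unittest.TestCase):
        # Each test name states a fact about the behavior, so the suite reads
        # like documentation; unlike a wiki page, it fails when the code
        # drifts away from it.
        def test_account_with_recent_activity_is_not_dormant(self):
            self.assertFalse(is_account_dormant(date(2024, 1, 1), today=date(2024, 1, 15)))

        def test_account_inactive_for_ninety_days_is_dormant(self):
            self.assertTrue(is_account_dormant(date(2024, 1, 1), today=date(2024, 3, 31)))


    if __name__ == "__main__":
        unittest.main()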

Unfortunately, wiki articles don't have any built-in mechanism to stay in step with the things they describe. And just like code comments, they're out of date almost as quickly as they're written, with no automated way of finding this out.

Over time, out-of-date documentation adds negative value to a project. Going past the point of being valueless, it actually misleads the very people who need the documentation and won't know that it's become inaccurate.

I would love to see a wiki system where articles have an expiration date. Or something like a self-destructing wiki. Articles that aren't kept up to date simply disappear from the system automatically.

I'm kinda joking, but I'm kinda not. I'm truly curious if there are good solutions to this problem. Dear reader, please leave a comment if you have any.

A cursory googling makes it look to me like all of the wiki systems I've used have REST APIs available: Azure DevOps, Confluence, and GitHub. I'm imagining it should be possible to build a process that periodically checks the "last updated" date of every article in a wiki and notifies the author or a distribution list when it hasn't been updated for X days. Or maybe just deletes it!
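
To sketch what that process might look like in Python: everything below is a placeholder, including the URL, the auth scheme, and the field names, since Azure DevOps, Confluence, and GitHub each shape their wiki APIs differently.

    from datetime import datetime, timedelta, timezone

    import requests

    # Placeholders: point these at whatever wiki API you actually have.
    WIKI_API_URL = "https://wiki.example.com/api/pages"
    API_TOKEN = "..."
    MAX_AGE = timedelta(days=180)


    def find_stale_articles() -> list[dict]:
        """Return articles whose last update is older than MAX_AGE."""
        response = requests.get(
            WIKI_API_URL, headers={"Authorization": f"Bearer {API_TOKEN}"}
        )
        response.raise_for_status()
        stale = []
        for article in response.json():
            # Assumes each record carries an ISO-8601 timestamp, e.g. "2024-01-31T12:00:00+00:00".
            last_updated = datetime.fromisoformat(article["last_updated"])
            if datetime.now(timezone.utc) - last_updated > MAX_AGE:
                stale.append(article)
        return stale


    if __name__ == "__main__":
        for article in find_stale_articles():
            # Notify the author, post to the team chat, or (if you're brave) delete it.
            print(f"Expired: {article['title']} (last updated {article['last_updated']})")

Run something like this on a schedule and every article either gets a periodic touch-up or gets flagged for removal.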

I want my wiki to expire.

Goals, Not Tasks

From my recent posting history, you can probably tell I'm on a real kick about how work is split up on software projects. Well, that kick continues, because I just read a great post from Charles Miller called Decluttered Software Development. I particularly identified with the section in which he discusses assigning goals to software developers rather than tasks.

One of my longstanding pet peeves in the software industry is the way some managers treat developers as if they're highly paid instruction followers, like computers themselves, rather than intelligent human beings who can comprehend high-level business objectives.

Here's Charles:

If somebody takes ownership of a complete goal, they feel responsible for delivering it as best they can, even the bits they discover on the way that they didn’t know about when they took responsibility for it.

When you split a goal into tasks, you are redefining “achieve this goal” as “complete this set of tasks”. Developers own their individual tasks, nobody owns the gaps between them, and anything that shows up after that initial split inevitably floats all the way back up into the project-wide planning process for expensive renegotiation and rescheduling.

Often these things that fall between the gaps won’t be noticed until long after the feature is “delivered”, because nobody at the developer level is thinking about the problem as a whole any more, just about the component parts it was split into during some meeting.

There's a whole class of bugs that comes down to a developer following very specific instructions without understanding the goal. And a well-meaning manager will take that to mean "I wasn't specific enough in my instructions." No! Computers need instructions. Humans need understanding.

I hope I won't still be beating this dead horse at the end of my career, but I must again point out that user stories are essential to organizing the work on a software project. If developers are not focusing on the users' goals, they cannot make good software. Developers don't need better instructions. Developers don't need more detailed tasks. Developers need an understanding of the users' goals.

Code as Inventory

I recently read a ten-year-old blog post by one of my software heroes, favorite Twitter follows, and author of Working Effectively with Legacy Code: Michael Feathers. The post, called The Carrying-Cost of Code: Taking Lean Seriously, struck a chord with me.

In Lean Software Development, requirements are often seen as inventory.  If you spend a lot of time elaborating requirements for features you are not going to work on for a while your process isn't streamlined enough.  That's fair, but I think that the brutal reality is that we have something much more tangible that we can see as inventory: our code.

It is stuff lying around and it has substantial cost of ownership. It might do us good to consider what we can do to minimize it.

In my career I've heard people brag about teams that "cranked out code" and the like. I cringe at the idea that bringing new code into the world is inherently a good thing, and that doing it really fast is an even better thing.

On top of the cost of compiling it, running unit tests against it, cloning it from a repository, and shipping it in DLLs, there's the conceptual overhead of keeping around code we don't need. Every time a developer has to read that code or scroll past it while searching for something else is waste; in fact, every thought that ever occurs in relation to it, in perpetuity, is waste.

As Feathers concludes:

I think that the future belongs to organizations that learn how to strategically delete code.  Many companies are getting better at cutting unprofitable features in their products, but the next step is to pull those features out by the root: the code.

User Story or QA Story

"What do our QA people do at the beginning of a sprint before the developers have coded any user stories for them to test?"

- Every aspiring Agile team with QA people

Honestly, I'm not sure I've ever worked on an Agile team that did not struggle with this conundrum of keeping QA people busy early in a sprint.

I'm a fan of the maxim: "If you can lean, you can clean"—there's always something valuable to do on a software project at any given time. Here are a few ideas off the top of my head, but I'm sure a curious mind could come up with many others:

  • Automate the testing of user stories that were tested manually in previous sprints
  • Conduct exploratory testing not directly related to a particular user story
  • Training

In case none of these ideas are adequate, there's also the common advice from Agile literature that smaller user stories are better. This idea of smaller user stories is appealing to people who are struggling to keep their QAs busy, because the thinking is that smaller stories take less time for the developers to code, and therefore the QAs don't have to wait as long to begin their testing. Makes sense, right?

So well-meaning teams take an axe to their user stories in an effort to make them smaller by any means necessary. Give QA something—anything!—to test.

Work proceeds something like this:

  1. Write a specification of something.
  2. Write code that meets that specification.
  3. As quickly as possible let a QA person check if the specification has been accurately met.
  4. Forget whether the behavior specified solves any problem for a real user.

By following this path, it's amazingly common to see the goal of "keeping QA busy" become the central organizing principle of a software project. And before we know it, maximum utilization becomes the goal of our sprint, rather than delivering value to users at the end of our sprint.

After a while, it starts to look like QA is our user, not the actual user anymore. We're delivering stories for QA now, not the actual people who need our software.

Snap out of it! There is no user but the user. Value is only realized when the user has it.

It's like acquiring staggering piles of Monopoly money. That pile looks really pretty, but let's not kid ourselves about the "value" of what we've achieved.

We don't work for managers and we don't work for QAs. We work for our users.

Selling Technical Debt

I tend to see technical debt discussed in a hand-wavy, fuzzy way, in which it's taken for granted that everyone knows exactly what it means and why it's obviously bad. I'd like to put the onus on software developers to quantify the impact of technical debt and the business impact of paying it off.

"It's going to be easier to make changes later."

How would we prove that it's "hard" now? Is the average number of files changed per pull request higher than it should be? How would we prove that it's gotten "easier" later? What will the average number of files changed in a pull request be after changes are "easier"?
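
One rough way to put a number on it, sketched below in Python: walk the local git history and compute the average number of files touched per merge commit, treating merge commits as a stand-in for pull requests (an assumption that only holds if your PRs land as merge commits rather than squashes).

    import subprocess
    from statistics import mean


    def files_changed_per_merge(repo_path: str, since: str = "6 months ago") -> list[int]:
        """Count the files touched by each merge commit in the given window."""
        merges = subprocess.run(
            ["git", "-C", repo_path, "log", "--merges", f"--since={since}", "--format=%H"],
            capture_output=True, text=True, check=True,
        ).stdout.split()

        counts = []
        for commit in merges:
            # Files the merge changed relative to its first parent (the target branch).
            changed = subprocess.run(
                ["git", "-C", repo_path, "diff", "--name-only", f"{commit}^1", commit],
                capture_output=True, text=True, check=True,
            ).stdout.splitlines()
            counts.append(len(changed))
        return counts


    if __name__ == "__main__":
        counts = files_changed_per_merge(".")
        if counts:
            print(f"{len(counts)} merges, {mean(counts):.1f} files changed on average")

Run it before and after the cleanup and you at least have a concrete trend to point at.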

"We can move faster."

How would we prove that we're "slow" now? How would we prove that we've gotten "faster"? Will the number of story points completed per sprint go up? Is that a valid way of measuring efficiency or productivity?
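
Same idea here: if the claim is "velocity will go up," decide how you'll measure it before the work starts. A sketch, assuming a hypothetical CSV export of completed backlog items with "sprint" and "story_points" columns:

    import csv
    from collections import defaultdict


    def points_per_sprint(path: str) -> dict[str, int]:
        """Sum completed story points by sprint from an exported CSV."""
        totals: dict[str, int] = defaultdict(int)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                # Column names are assumptions; match them to your tracker's export.
                totals[row["sprint"]] += int(row["story_points"])
        return dict(totals)


    if __name__ == "__main__":
        for sprint, points in sorted(points_per_sprint("completed_items.csv").items()):
            print(f"{sprint}: {points} points")

Of course, computing the number is the easy part; arguing that it actually measures productivity is where it gets hard.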

If you're thinking to yourself, "jeez, it's really hard to quantify these things," then it's going to be just as hard to convince someone non-technical to fund our work on technical debt. Unfortunately, the hard truth is that as long as we're unable or unwilling to quantify technical debt, we can't really complain about the business not wanting to pay us to work on it.

We developers tend to grumble that "business people" don't understand code quality, but we have to be honest and admit that in equal measure we probably don't understand "the business" either.

As Martin Fowler points out…

Sadly, software developers usually don't do a good job of explaining this situation. Countless times I've talked to development teams who say "they (management) won't let us write good quality code because it takes too long". Developers often justify attention to quality by justifying through the need for proper professionalism. But this moralistic argument implies that this quality comes at a cost - dooming their argument. The annoying thing is that the resulting crufty code both makes developers' lives harder, and costs the customer money. When thinking about internal quality, I stress that we should only approach it as an economic argument. High internal quality reduces the cost of future features, meaning that putting the time into writing good code actually reduces cost.

We, of course, know that code quality has a real impact on users, but it's on us to dispense with emotional appeals and instead make the connection to economic outcomes clear.

User Story or Manager Story

A pet peeve of mine is when people say "user story" as a generic term for any discrete chunk of work the team needs to accomplish.

And I don't mean this in a procedural sense, as in you forgot to write the requirement in the classic "story" format:

As a [type of user], I want [some functionality], so that [some benefit].

There's something very revealing about how teams write their requirements. Some teams have a product backlog filled with "user stories" written like this:

When you click on the "Edit Account" button, show a modal dialog with the title "Account Editor".

The modal contains:

- A dark green button in the lower right corner with the label "Save Account"

- Next to the "Save Account" button there should be a light green button "Close"

- A field with the label "Account Number" is pre-filled with the account number

… 

On the surface, this work item is obviously not written in the typical user story format. So you could say, hey, we should write the requirements more like this:

As an administrator, I want to edit an account, so that the account is up to date when information changes.

That's better…maybe…but the real issue is not the format in which the requirement was written. The way a requirement is written often indicates who we're hoping to please when the requirement is met.

When we approach software development as a series of user stories, we're seeking to please a user with each chunk of work that we deliver.

All too often software development proceeds more like a series of manager stories, in which we are pleasing someone in the team's management chain with each chunk of work delivered.

Some software organizations are set up like a feature factory, in which a management function beyond the development team decides ahead of time which things to build and the specifications of their design, and then parcels these chunks out to the development team on a sprint cadence. 

Feature Factory diagram via John Cutler

The optimum state of a software organization is one in which every person involved in creating the software gets inside the user's head. Instead of thinking "what would my manager want me to build?", think "what would our users want me to build?"

Certainly many organizations get along fine as feature factories, but the real magic of the modern Agile philosophy is that we all take a user-centric approach to our work.

We call a chunk of work "done" when we all understand how it meets a user's need, not just because our boss(es) told us to do it.

A user story is not just a format for writing a software requirement, it's the atom of a user-centric software organization.