Wednesday, April 22, 2026

Who Wrote This Crap?

I remember many years ago interviewing at a software consultancy where one of the consultants in the meeting was talking about a project they had going on. He was marveling at how fast the team of junior developers on the project was "cranking...out...code." I'll never forget the way he said it, and the look of awe on his face as he slowly shook his head back and forth.

This of course happened well before the advent of LLM assistants that can produce code at an astonishing speed. I wonder if that consultant's head would have literally exploded if I had shown him GitHub Copilot or Claude Code at the time.

Hell, I remember being gobsmacked as a junior engineer myself by CodeSmith, which was an early code generator product for C#. It could automatically spit out huge amounts of boilerplate code that an engineer would typically write by hand. I heard about it from a coworker at my very first job in the software industry, and I remember how unnerved I felt, thinking about a program that could automatically write code (isn't that why I'm here?).

The next code generator that came into my life was Ruby on Rails, which I first tried a couple of years later. By that time, I had experienced writing the most tedious code in most applications: the code wiring up database tables to web forms, and all the plumbing to pipe the data through each layer. It was a mind-numbing yet essential task, and almost every application needed it. Rails came at me like a breath of fresh air. At this point, I was like, oh yeah, this rules. I hated writing all that data access CRUD, and Rails made it largely disappear. Code generation clicked for me.

In the .NET world, where I've spent most of my career, we had a series of object-relational mappers that could generate classes from your database schema, sparing engineers from writing much of the code that shuttles data from a web application to a database and back. To name a few, we had NHibernate, SubSonic, LINQ to SQL, and then Entity Framework. I welcomed these tools with open arms, but I do remember pushback at the time from more senior people at my companies who were accustomed to writing this kind of code by hand and didn't trust what the tools were doing under the covers.

The common denominator in my early experience with code generation was eliminating the manual work of writing a lot of extremely predictable, relatively dumb, utterly tedious code that was also essential. This usually took the form of data access code that moved data between the layers of a web application, from the front-end to the database, where you had SQL tables that corresponded to C# classes, which corresponded to web forms. Classic CRUD. It's highly predictable stuff, and ripe for code generation. Yes, there were times when the tooling would break down, and an engineer would have to get under the covers and debug an edge case the tool couldn't handle automatically, but in my experience this was relatively rare.
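To make concrete just how mechanical this kind of generation is, here's a minimal sketch in Python of a template-based generator in the spirit of those early tools. The schema format and the shape of the emitted class are invented for illustration; no real tool used exactly this interface.

```python
# Minimal sketch of template-based code generation, in the spirit of
# tools like CodeSmith: a table schema goes in, a boilerplate
# C#-style entity class comes out. The schema dict format and class
# shape here are hypothetical, invented for this illustration.

def generate_entity_class(table: str, columns: dict[str, str]) -> str:
    """Emit a C#-style entity class for one database table."""
    lines = [f"public class {table}", "{"]
    for name, cs_type in columns.items():
        # One auto-property per column: pure string templating.
        lines.append(f"    public {cs_type} {name} {{ get; set; }}")
    lines.append("}")
    return "\n".join(lines)

schema = {"Id": "int", "Email": "string", "CreatedAt": "DateTime"}
print(generate_entity_class("Customer", schema))
```

There is no judgment anywhere in that function, which is exactly why it could be trusted: the mapping from schema to code is the whole program.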

What I'm seeing in the industry now with AI coding assistants like Copilot feels fundamentally different to me. I've witnessed fellow engineers in recent years generating code more akin to "business logic". This is the kind of code that's specific to the domain of the company and not transferable from codebase to codebase. I've also seen fellow engineers letting AI write the code for areas of the codebase that they don't understand well. For example, they may not know how to do a certain thing with React, so they describe what they're trying to do to Copilot, which then generates code that the engineer accepts without understanding, as long as it seems to work.

What's fundamentally different between the scenarios I just described and the code generation scenarios of yore is that the pre-AI code generators were deterministic, and they were applied only to non-domain-specific logic.

What happens when a codebase is peppered with business logic that no human working at the company wrote, and hence cannot definitively explain? I have already seen first-hand cases where bugs in important processes went undiscovered until the AI-generated code had been in production for weeks. The engineer committing the code did not know that what Copilot generated did not match the logic they intended. Maybe I can write another whole blog post about this topic, but I'll say briefly here that in many scenarios, increasing the speed at which the code is produced is less important than a human understanding what it does at a granular level. In other words, speed of coding is not the bottleneck.

Another consideration is that AI code generators like Copilot are non-deterministic. As in, you can run them multiple times with the same input and they will produce different results. Going back to my earlier examples of pre-AI tools, the code generation features of CodeSmith and Entity Framework are deterministic. You can run them multiple times with the same input, and they will give you the same output every time. This is because a human software engineer wrote the code behind those tools, and the rules are directly and unambiguously traceable back to the lines of code a human wrote while designing them.
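Determinism here means something very specific: the generator is a pure function of its input, so re-running it is a no-op you can verify byte-for-byte. A toy sketch (the template below is invented, not any real tool's output):

```python
# A deterministic generator is a pure function of its input: run it
# twice on the same schema and the output is byte-identical. An LLM
# sampling tokens offers no such guarantee. The DTO template below
# is hypothetical, invented for this illustration.

def generate_dto(table: str, columns: list[str]) -> str:
    """Emit a C#-style DTO class; same input always yields same text."""
    body = "\n".join(
        f"    public object {c} {{ get; set; }}" for c in columns
    )
    return f"public class {table}Dto\n{{\n{body}\n}}"

first = generate_dto("Order", ["Id", "Total"])
second = generate_dto("Order", ["Id", "Total"])
assert first == second  # same input, same output, every time
```

That equality check is the property that made it safe to regenerate code on every build: you could diff the output in code review, or not review it at all, because the rules producing it never changed on their own.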

I can't help but wonder if, as an industry, we're hurtling toward a future where many production codebases will be littered with code that no human at the company understands or could definitively explain, not years later, but even weeks later. My personal relationship to AI-generated code as a working software engineer is that I will not commit code that I cannot explain. And when doing code reviews, I cannot accept the explanation that code included in the pull request was AI-generated and hence the submitter does not know what it does.

I also have to wonder if the AI slop era we're all in at the moment says something about the illusory nature of quality. Maybe quality was just an unintended side effect of manual coding that business leaders never really cared much about in the first place. In my multi-decade career in the software industry, the emergence of Copilot represents the first time I've ever experienced non-technical people mandating the use of a particular tool to software engineers. It seems that the idea of faster code production was so mouthwatering that quality flew out the window within seconds.

Who wrote this crap? Maybe the answer never mattered.