The Intent Paradox: When Machines Write Our Code
I think a lot about how machines can instruct themselves to do something. Now we call it by fancy names such as AI code generation or coding agents, and sometimes we give it a name like Devin or similar (nothing against it, honestly). In hindsight it's just a calculator "thinking" about what it should calculate based on something you said out loud, and then it starts to do the magic. Funny, right?
The Human Element in Coding
Historically we as engineers are taught to write code for other people. That's why we sometimes get mad at those fancy leetcode solutions that solve all of humanity's problems in just 50 characters. Of course, it's sometimes hard to write a good program that other human beings can read easily, but ultimately that's what we are doing. We are building software for the long run; we are building something we can fix later when a new problem arises.
We sometimes design the system for that exact purpose too; that's why we have terms like "premature optimization" or "technical debt." I can't remember who told me this or where, but shoutout to that person: "Zero technical debt is bad; that means the system is off. In a growing system you can't just not accumulate issues." So I came to think that dealing with technical debt starts with the ability to acknowledge it and see it.
What I am trying to outline is that we try very hard to reflect our intention in the code. When we say "write readable code," we don't just mean that someone can read the code and understand what it does; they should understand why it does it. It's the WHY.
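To make that concrete, here is a small, hypothetical sketch (the function and the 250ms value are invented for illustration): the first comment only restates what the code does, while the second captures the why that a future reader, human or LLM, actually needs.

```typescript
// A "what" comment: it restates the code and captures no intent.
// Wait 250 milliseconds before retrying.
async function retryDelayWhat(): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 250));
}

// A "why" comment: the same code, but now a reader knows which constraint
// shaped it and when it is safe to change.
// The upstream payment gateway rate-limits us to roughly 4 requests/second,
// so we back off 250ms between retries to stay under that limit.
async function retryDelayWhy(): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 250));
}
```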
The Zombie Code Problem
Now that we are introducing parts of our software that are entirely generated by machines, we are slowly losing that intention. Where does that lead? One immediate thought is zombie code: code that lives, maybe even works, and nobody knows why it was implemented that way.
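As a purely hypothetical illustration (the names and values here are invented), imagine stumbling on something like this in a codebase where the prompt and the conversation that produced it are long gone:

```typescript
// Generated months ago, in production, passing all tests.
// Why the round-trip through stringify? Why is 7 excluded?
// Nobody remembers, and nothing in the repo explains it.
export function normalizeIds(raw: string): number[] {
  const parsed = JSON.parse(JSON.stringify(JSON.parse(raw)));
  return (parsed as number[]).filter((id) => id !== 7).sort((a, b) => a - b);
}
```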
The irony is that in our quest to make development more efficient, we might be creating a new category of technical debt – one where the debt isn't poorly written code, but rather code whose purpose and design decisions are obscured from the beginning.
Do We Even Care?
Interestingly enough, I would like to push back on myself by asking: do we even care? Like, for real, do we even care if that code is a zombie? We will just run Claude Code and ask what this piece of code is doing and why it might have been implemented that way.
Perhaps the future of code comprehension isn't in the code itself but in the AI tools we use to interpret it. Maybe we're entering an era where understanding code isn't about reading comments or variable names, but about having an AI explain the reasoning behind code it or another AI wrote.
The Internal Conflict
So, generally speaking, I sometimes find it very hard to build a mental model of all this fragmentation and of how people will manage all this AI-generated code. But then the opposite view comes in, and I kind of split myself into two polarized entities fighting for a view:
Entity 1: We don't want to put the intent into the code. LLMs are good at this and will become even better. With relevant context from any potential source, they will understand the intent and we will be fine.
Entity 2: We somehow need to observe what these agents or coding tools are doing. We need oversight, transparency, and a way to ensure the "why" doesn't get lost in layers of machine-generated abstractions.
The New Balance
Maybe the solution isn't choosing one side, but finding a new balance. Perhaps we need to develop new practices where the human-machine collaboration in coding includes explicit capture of intent. Or maybe we need to embrace a new paradigm where AI doesn't just generate code but also generates the explanation of its intentions alongside it.
This capture of intent could materialize as metadata, as convention files like .gitignore, or even as some new form of JSDoc, but maybe, just maybe, this intent will be critical later. We might need standardized ways to attach the "why" to our code that both humans and machines can understand. Imagine intent files alongside your code files, or intent markers that an AI can both generate and interpret.
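One purely speculative sketch of what that could look like (none of this exists today; the IntentRecord shape, the sidecar file, and the field names are all invented for illustration): a machine-readable record of the why, living next to the code, that both an agent and a human could write and read.

```typescript
// intent.ts - a hypothetical "intent sidecar" that could live next to a module,
// written by whoever (or whatever) generated the code and read by later tools.
interface IntentRecord {
  module: string;          // which file or symbol this intent belongs to
  author: "human" | "agent";
  summary: string;         // the "why" in one or two sentences
  constraints: string[];   // decisions that must not be silently undone
  generatedBy?: string;    // model or tool that produced the code, if any
}

export const checkoutRetryIntent: IntentRecord = {
  module: "src/checkout/retry.ts",
  author: "agent",
  summary:
    "Retries are spaced out to respect the payment gateway's rate limit, " +
    "not for user experience reasons.",
  constraints: [
    "Do not lower the delay below 250ms without checking the gateway contract.",
    "Retries must stay idempotent; the gateway charges on duplicate POSTs.",
  ],
  generatedBy: "example-coding-agent",
};
```

Whether it takes this shape or another, the point is the same: the "why" gets stored somewhere durable instead of evaporating with the chat session that produced the code.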
The real paradox might be that as machines get better at writing code, humans might need to get better at articulating the "why" behind what they want the machines to create. And perhaps that articulation of intent will become the most valuable skill in a developer's toolkit.
The Future of Intent
Maybe the abstraction is high enough that we don't need to care about the underlying code: an LLM generated it, so let an LLM debug it, let an LLM refactor it, let an LLM think it through and perform the tasks. This is exactly where I see the intent paradox: we both need the intent and don't. I think the paradox exists because we simply don't know; nobody does, and we are collectively building and researching the future.
I sometimes try to draw parallels between existing abstractions and the low-level stuff underneath them: we don't actually capture the machine code, right? It is being executed somewhere, but we really don't care.
Interestingly, I want to emphasize this: we don't read that machine code, so it can be decoupled from the intent. But we DO READ AI-generated code, so we need the intent?
But will we always read the AI-generated code? Or are we moving toward a world where we only interact with the highest level of abstraction — our intentions — and let the machines handle everything below that? Maybe future developers won't be expected to read any code at all, just as today's web developers rarely need to understand assembly language.
As coding agents become more sophisticated, will we develop new languages or frameworks that make intent explicit? Will we create new visualization tools that make the reasoning behind AI-generated code transparent? Or will we simply develop a new generation of developers who are skilled not at writing code, but at directing and understanding the code that machines write?
I don't have clear answers yet, but I think this Intent Paradox is going to be one of the defining challenges of software development in the AI era. How we resolve it will shape not just how we build software, but how we understand and relate to the systems we create.
What do you think? Are you concerned about zombie code in your projects, or do you see AI code generation as the solution to making intention clearer, not more obscure? Does the intent even matter if we're moving to an era where humans may not read the code at all?