
Emilio Carrión

AI Won't Replace the Software Engineer. It Will Replace the One Who Only Wrote Code.

Dario Amodei says AI will replace engineers in 6-12 months. Jensen Huang says we shouldn't learn to code. I've been thinking about this for months. I don't have all the answers, but I do have a stance.


I've had this on my mind for months.

In January, Dario Amodei took the stage at Davos and said we're 6-12 months away from AI doing "most, maybe all" of what a software engineer does. Jensen Huang has been saying since 2024 that kids shouldn't learn to code. Zuckerberg predicts that AI will write most of Meta's code before the year is out. And last week I read a post by Sean Goedecke, a Staff Engineer, saying bluntly: "I don't know if my job will exist in ten years."

I read that post cover to cover and recognized myself in many things. But when I finished I realized my conclusion is different. Not because I'm right and he's wrong, but because I think we're looking at different things.

Sean looks at his job and sees code. Code that AI already writes better than many engineers. And he's right: if your job is turning user stories into lines of code, AI is going to eat that. It's already happening.

I look at my job and I don't see code. I see context, friction between teams, decisions nobody wants to make, principles that need to be made explicit so seven teams don't step on each other. And I don't know how you automate that. At least not yet.

I won't pretend I have the full answer. But I do have a stance, and I want to share it.

The Drafter and the Architect

I have an analogy that captures what I think. I use it in conversations with other engineers and it always sparks debate, so here it is.

For decades, architecture firms employed drafters: professionals who turned the architect's sketches into precise technical drawings. It was skilled, slow, and essential work. Every line, every dimension, every detail of a building went through their hands.

When CAD arrived in the '80s, drafters disappeared. But architects didn't. Because the architect's job was never to draw plans. It was to decide how a building should function, how it's lived in, how it ages, how it endures. The plan was the artifact. The decision was the work.

Writing code is drafting. The technical function in a company is architecture.

I coordinate seven engineering teams at Mercadona Tech. I'll tell you something that might sound exaggerated but is literal: none of the important conversations I have are about code. Not one. They're about context. Why is this team blocked? What design principle should guide this cross-team decision? How do we prevent this friction from recurring? Where's the real bottleneck — in the implementation or in the problem definition?

An agent can generate the plan. It can't answer these questions.

And someone will say: "sure, but you're a Staff Engineer, your case is different." And they're right, my case is different. But this is the direction the whole profession is moving in. What I do today is what more engineers will do tomorrow: less drafting, more deciding.

The Data: Neither Apocalyptic Nor Comforting

I don't want to cherry-pick data, so I'll lay out the ones I find most relevant and let everyone draw their own conclusions.

What's happening with juniors is concerning. A Stanford study found that employment of developers aged 22 to 25 has dropped nearly 20% from its 2022 peak. And a Harvard study on 62 million workers showed that when companies adopt generative AI, junior employment falls 9-10% over six quarters, while senior employment barely moves.

But we're not seeing mass destruction of senior employment. What we're seeing is a rebalancing. Companies are shifting budget from junior training to senior hiring. From speed to quality. From scaling people to building systems that scale.

Meanwhile, a GitHub study showed that developers with Copilot finish tasks 55% faster. And what do companies do with that gain? They launch more projects; they don't fire people. At least for now. That could change, and it would be dishonest not to say so.

On the Jevons Effect: An Argument with a Catch

Many cling to the Jevons paradox as a source of optimism. The idea: when something becomes more efficient, total demand increases instead of decreasing. It happened with coal, with cars, with data storage. If producing software becomes cheaper, we'll produce more software and need more engineers.

Historically this has held true for software. The internet, GitHub, the cloud — each productivity leap created more engineering jobs, not fewer. There are more engineers than ever.

But the Jevons paradox has a counterexample that nobody usually mentions. Agricultural productivity skyrocketed in the 20th century and agricultural employment fell from 33% to 1.3%. The difference? Demand for food has a ceiling. The question is: does demand for software have a ceiling?

Sean Goedecke argues that AI agents can maintain code as well as write it, so the plateau needed for Jevons to work probably doesn't exist. I think he's right if we're talking only about code. But software isn't just code. It's context, decisions, trade-offs, operations, people. And demand for that has no visible ceiling.

That said, I don't trust the Jevons effect as an argument for comfort. It's a historical pattern, not a guarantee.


The Abstraction Goes Up. It Always Has.

This is what gives me the most perspective. Because it's not new.

First it was ones and zeros. Then assembly. Then high-level languages. JVM, .NET, the browser. Each abstraction layer made something that was once critical irrelevant. When was the last time you hand-optimized assembly instructions?

Now AI is the next layer. As an InfoWorld article puts it: agentic programming is the compilation of spoken language into programming language. Code is becoming what machine code is today. A soup of instructions you don't read, generated by an automated process.

And this is where I think Amodei gets it wrong, or at least oversimplifies. When he says "my engineers no longer write code," he's describing a reality. But he's confusing "writing code" with "doing software engineering." They're not the same thing. As Ivan Turkovic points out in his response to Amodei's post: engineering work is moving up the abstraction stack. Architecture, security, scalability, organizational knowledge, product alignment. Those questions are becoming central to the role, not peripheral.

At Mercadona Tech we've already lived this. Discussions about whether a class should be generic, whether you should inject a use case one way or another — those conversations have practically disappeared. What remains are debates about context, system-level design decisions, friction between teams, cross-cutting principles. Code is an implementation detail. And I say this as someone who loved writing code. But it is what it is.

What Really Worries Me: The Automation Paradox

I'm not worried about AI taking my job. I'm worried that it will take our competence without us noticing.

In 1983, Lisanne Bainbridge published "Ironies of Automation". The central irony: automating a task doesn't eliminate the need for human expertise. It transforms and increases it. The operator remains responsible but loses continuous contact with the work that built their understanding. When the automation fails (and it always fails), the operator faces the most demanding moment precisely when they are least prepared.

The most tragic example is Air France 447. The plane fell into the Atlantic in 2009 after the autopilot disconnected due to a known and recoverable failure. The crew, with 20,000 combined flight hours, couldn't diagnose the problem. Not because they were bad pilots, but because automation had stolen their practice.

The parallel with our profession is direct. If you let AI write all the code and limit yourself to approving PRs, what happens when the system fails in a way AI can't diagnose? What happens when you need to understand a system deeply to make a critical decision at 3 AM?

As Addy Osmani puts it: if everyone has access to AI agents, what distinguishes great engineers is knowing when the AI is wrong. And to know that, you need the judgment that's only built by solving hard problems. Not approving what AI gives you.

In my article on invisible heuristics I talked about how seniors have an internal library of patterns they can't articulate but that let them resolve incidents in minutes. That's exactly what AI can't replicate. Because AI generates code, but it doesn't have the experience of thousands of hours of real production.

A Data Point That Puts Things in Perspective

The 2025 MIT NANDA report found that 95% of enterprise AI pilots generate no measurable returns. That looks damning for AI, until you look at the base rate: enterprise IT projects in general fail 84% of the time, and McKinsey finds that only 1 in 200 comes in on time and on budget.

AI doesn't fail more than any other technology transformation. It fails differently: companies try to drop generic AI into existing processes without adaptation. The projects that work are those that deeply integrate AI into specific workflows. The problem was never technological. It's about context, about understanding the complete system (business, people, technology) and making decisions with judgment.

Exactly what a good senior engineer does. And exactly what no model knows how to do yet.

What I Think, with All the Uncertainty That Implies

In the medium term I'm convinced that technical roles will continue to exist. With more abstraction, with more tools, but they'll exist. Thinking the role disappears because the machine writes code is like thinking designers disappear because Figma has autocomplete. Someone has to use the machine. And that someone needs judgment.

In the long term, I honestly don't know what will happen. And I say that as someone who has no problem admitting it.

What I am sure of is this: AI amplifies whatever judgment it's given. With weak judgment, it produces chaos quickly and confidently. With strong judgment, it's a force multiplier. The difference between a team that uses AI to deliver better and a team that uses AI to generate technical debt faster is the same as always: the quality of thinking behind the decisions.

Programming in the future will be programming context. What you want to happen, under what conditions, with what constraints. Not how it's implemented.
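To make that idea concrete, here is a minimal sketch of what "programming context" could look like. Everything in it is hypothetical: the `Intent` class and its fields are not a real tool or API, just an illustration of declaring the what, the conditions, and the constraints while leaving the how to whatever (agent or team) satisfies the spec.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of "programming context": you declare intent,
# conditions, and constraints; the implementation is whatever
# satisfies them. These names are illustrative, not a real framework.

@dataclass
class Intent:
    goal: str                                             # what you want to happen
    conditions: list[str] = field(default_factory=list)   # under what conditions
    constraints: list[str] = field(default_factory=list)  # with what constraints

checkout = Intent(
    goal="charge the customer and reserve stock atomically",
    conditions=["cart is non-empty", "payment method is valid"],
    constraints=["p99 latency under 300ms", "never double-charge"],
)

# The engineering judgment lives in the spec, not in the lines of code
# that eventually implement it.
print(checkout.goal)
```

The point of the sketch is only the shape of the artifact: the spec encodes the decisions, and the code that fulfills it becomes the implementation detail.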

AI won't replace the software engineer. It will replace the one who only wrote code. And if that worries you, the question isn't whether your job disappears. The question is which of the two halves you're in.

Question for you: Does your day-to-day look more like the drafter's or the architect's? I don't ask with judgment. I ask because I think the answer marks the difference between worrying and preparing.

This content was first sent to my newsletter.

About the author

Emilio Carrión

Staff Engineer at Mercadona Tech. I help engineers think about product and build systems that scale. Obsessed with evolutionary architecture and high-performance teams.