> Code Is Cheaper. Judgment Isn’t
13 April 2026 · 12 min read
The rise of vibecoding
The term "vibecoding," introduced by Andrej Karpathy, refers to building software by prompting Large Language Models (LLMs) and largely trusting the code they produce. (Andrej Karpathy, X post, Feb 2025) With it has come a wave of predictions about the future of software engineering. If you've spent even a short time in the software engineering space, you've probably been bombarded with claims that AI will dramatically reduce the need for software engineers.
- "The title software engineer may begin to disappear" — Boris Cherny
- "AI will replace mid-level software engineers in 2025" — Mark Zuckerberg
However, many may be unaware that concerns like these go back decades, long before LLMs or vibecoding existed. This pattern has repeated itself with every major shift in software engineering.
A recurring pattern
1960s–1970s
FORTRAN was created by IBM as an abstraction above assembly. It was the first widely used high-level programming language, and it dramatically simplified writing programs. That did not eliminate programmers, but it did start a familiar pattern: every time programming gets more abstract, people start asking whether programming itself is about to disappear. (Computer History Museum: History of FORTRAN)
1980s–1990s
Once high-level languages like C and FORTRAN were established, the next objective became skipping hand-written code entirely. This era produced fourth-generation languages (4GLs), which emphasised declarative programming (e.g. SQL), and Computer-Aided Software Engineering (CASE) tools, which turned diagrams into code. IBM Rational Rose is a good example of the era: a visual modeling tool built around the idea that more of software design and generation could be pushed into higher-level tooling. (IBM Rational Rose documentation)
2000s
With outsourcing and early web builders, programming started to look like something that could be commoditized. Companies began offshoring development to reduce costs, while tools like WordPress and Wix made it possible to build websites without writing much code. Code could be written cheaper elsewhere, or avoided entirely through abstraction. WordPress is still the dominant CMS today, and Wix still markets itself directly as a no-code website builder, which shows how strong that abstraction trend became. (Wix no-code website builder)
2010s
No-code and low-code platforms pushed this further. These tools let you build applications using drag-and-drop interfaces instead of code. This reinforced the idea that programming itself might become optional.
2020s
And now AI and LLMs bring the same idea back again. Which raises the question: are these the same empty predictions that have been thrown around for decades, or is this time different?
So is this time different?
Yes. But not in the way people keep saying.
AI is changing how software is developed. It is speeding things up, changing workflows, and making it easier to get from idea to implementation. But it is not replacing your fundamental knowledge and understanding of the problem at hand. It is amplifying them and applying them at scale.
That is the part many people do not seem to mention.
AI does not just amplify what you know. It also amplifies what you do not know. It scales your understanding, but it also scales your misunderstanding. If you are a developer with good coding practices and a good understanding of software architecture, you can steer AI to apply those skills at scale. However, if you do not understand the problem, do not know how to build software, or leave ambiguity, AI can help you build the wrong thing faster than ever before.
We already have evidence that this effect is uneven rather than universal. In one large field study of 5,179 customer support agents, access to generative AI increased productivity by 14% on average, with the biggest gains going to novice and lower-skilled workers, while experienced and highly skilled workers saw much smaller gains. (NBER: Generative AI at Work) In a controlled experiment with GitHub Copilot, developers completed a scoped coding task 55.8% faster. (Microsoft Research: The Impact of AI on Developer Productivity) But in a 2025 METR randomized trial on experienced open-source developers working on their own repositories, AI tools made developers 19% slower. (METR: Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity)
That is why I do not buy the "AI is replacing developers" argument at all. It is not replacing developers. It is changing the role. There may be some truth to the claim that AI is replacing how code gets typed, but the fundamental knowledge needed to review, direct, and shape software is more important than ever.
AI is no longer just a chatbot
A lot of people still think of AI coding as opening a web interface, pasting some code, and asking it to fix something. Yes, even at university, people still use AI to code this way.
That is not the best way to leverage the power of AI.
The real power comes from coding harnesses. A coding harness gives the model access to the codebase, files, terminal, tests, and tools around the code. This makes AI much more powerful. Context is everything with LLMs, and a harness can pull its own context instead of relying entirely on whatever you remembered to paste into the chat interface.
The second main advantage of coding harnesses is that they can work in a feedback loop. The model is not just producing a possible solution and confidently telling you it will work. It can inspect files, make a change, run tests, read the failure, and iterate. That loop is where a lot of the real value comes from: not just generation, but generation with feedback and iteration.
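The shape of that loop is easy to sketch. Everything below is a deliberately toy simulation (the "codebase" is a dict and `propose_patch` is a stand-in for a model call), but it shows the key mechanic: the failure output from one attempt becomes the context for the next.

```python
def fix_loop(run_tests, propose_patch, apply_patch, max_attempts=5):
    """Propose a patch, apply it, run tests, feed failures back, repeat.

    Returns the attempt number that passed, or None if it never did.
    """
    feedback = None
    for attempt in range(1, max_attempts + 1):
        patch = propose_patch(feedback)   # model call, given the last failure
        apply_patch(patch)                # edit the "codebase"
        passed, output = run_tests()
        if passed:
            return attempt
        feedback = output                 # the failure becomes new context
    return None

# Toy simulation: the "model" only finds the right fix after it has
# seen a concrete test failure once.
codebase = {"add": lambda a, b: a - b}    # seeded bug: subtracts

def run_tests():
    got = codebase["add"](2, 3)
    if got == 5:
        return True, ""
    return False, f"FAIL: add(2, 3) returned {got}, expected 5"

def propose_patch(feedback):
    # stand-in for a model call; uses the failure text to pick a fix
    if feedback and "expected 5" in feedback:
        return ("add", lambda a, b: a + b)
    return ("add", lambda a, b: a * b)    # plausible-looking wrong guess

def apply_patch(patch):
    name, fn = patch
    codebase[name] = fn
```

Calling `fix_loop(run_tests, propose_patch, apply_patch)` here succeeds on the second attempt: the first blind guess fails, the failure text flows back in, and the corrected patch passes. Without the feedback leg of the loop, the model would keep guessing.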
You can already see this reflected in how coding systems are evaluated. SWE-bench, for example, evaluates models on real GitHub issues in actual repositories, where the system has to generate a patch that resolves the problem. (SWE-bench GitHub)
This is where AI coding can really be transformative. It can iterate toward a defined outcome instead of just producing one-off guesses. Whether the AI implemented the change in the best way is a different story, and the code should still be reviewed by hand, but this is still a massive improvement over the web interface.
The coding harness I have been using is opencode. It is free and open-source, and they even have some free models available.
Skills, MCP, and changing how we use tools
Once you move into harnesses, the next thing you notice is that AI gets much stronger when it is given structure.
That is where skills come in.
Skills are simply markdown files that an AI can load when it deems them useful. They just add a block of markdown to the context. They can provide the LLM with best practices, design guidelines, and much more. Vercel has a website where you can browse through thousands of skills: skills.sh.
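As a rough illustration, a skill might look like the file below. The exact frontmatter fields vary by harness, so treat the `name`/`description` keys and the conventions themselves as assumptions, not a spec:

```markdown
---
name: api-error-handling
description: Conventions for error handling in this codebase's API layer
---

# API error handling

- Raise domain-specific exceptions; never return error strings.
- Map exceptions to HTTP status codes in one middleware, not per-route.
- Log the request ID with every error.
```

When the model judges the skill relevant, this whole block simply gets loaded into its context.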
But skills should be used carefully.
If you add too many, they can pollute context. If they are poorly written, they can push the model toward worse outputs instead of better ones. And as models improve, some skills can become worse than the model's own understanding and practices. So skills are good, but they need to be reevaluated every so often and used carefully. This is a classic example of less being more. They are useful when they add clarity. They become harmful when they add noise.
Then there is MCP.
MCP is useful because it changes what AI can interact with. The Model Context Protocol is an open standard for connecting AI tools to external data sources and systems. This opens up a variety of new applications and ways to improve the results that AI generates. (Anthropic: Introducing the Model Context Protocol)
A good example is Context7. Context7 provides up-to-date, version-specific documentation to coding agents, including through MCP integrations. That means the model can look up syntax and library usage from current docs instead of relying only on stale internal knowledge or generic web search. I have been using this recently and I have definitely noticed an improvement in accuracy. (Context7 overview, Upstash: Introducing Context7)
Another good example is the Supabase MCP. Rather than digging through dashboards to find a setting manually, the model can help interact with the system more directly. You still need to understand what that setting does and why you are changing it. But the way you interact with the tool changes, and it becomes much more efficient.
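Wiring an MCP server up is usually just configuration. The exact file location, schema, and package names below are illustrative assumptions (they differ between harnesses), but most tools accept something of this shape:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

The harness launches the listed command as a server and exposes its tools to the model over the protocol; from then on, "look up the current docs" is just another tool call.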
What tools like MCP really change is not the engineer's judgment but the way the model can operate. It can connect to tools, pull in context, and work inside the environments engineers already use.
The real problem: context rot
This is where things start to break.
LLMs run on context. Good context improves output. Too much context, or bad context, does the opposite. As tasks get bigger, details get missed, ambiguity gets introduced, and the model starts moving away from the original goal.
This is typically referred to as context rot.
The session gets longer, more files get pulled in, more patches get stacked on top of earlier patches, and eventually the model is no longer solving the actual problem very well. It is just continuing the current thread in a way that sounds plausible.
And if you compact context to make it fit, some information gets lost. Constraints disappear. Conventions disappear. Earlier decisions disappear. The model keeps going anyway.
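A crude way to see why compaction loses constraints: a naive strategy keeps the most recent messages that fit a token budget, so instructions stated early in the session are exactly what gets dropped. This is a deliberately simplified sketch (real harnesses summarise rather than truncate, and count tokens rather than words), but summaries lose detail in the same direction:

```python
def compact(messages, budget):
    """Naive compaction: keep the newest messages that fit the budget.

    Token counts are faked as word counts to keep the sketch self-contained.
    """
    kept, used = [], 0
    for msg in reversed(messages):        # walk newest-first
        cost = len(msg.split())
        if used + cost > budget:
            break                         # everything older is dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))

session = [
    "CONSTRAINT: never write directly to the production database",
    "here is the schema for the orders table ...",
    "patch 1: add an index on orders.created_at",
    "patch 2: rewrite the nightly aggregation job",
]

compacted = compact(session, budget=15)
# The oldest message -- the constraint -- is the first thing to go,
# while the two most recent patches survive.
```

The model then keeps working on patches, with no trace of the rule it is now free to violate.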
Why subagents help
One of the best ways to reduce context rot is to stop putting everything into the main context in the first place.
This is where subagents come into play.
They can explore code, look up documentation, or investigate isolated parts of a problem without bloating the primary harness. This can improve efficiency through parallelisation, but more importantly it keeps the main context cleaner.
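The pattern can be sketched in a few lines: the subagent burns through a large exploration context of its own and hands back only a short summary, so the main context grows by one line instead of by every file that was read. (`explore` here is a plain function standing in for a real model session.)

```python
def explore(question, files):
    """Stand-in subagent: reads everything, returns only a short answer.

    In a real harness this would be its own model session with its own
    context window; only its final summary reaches the parent.
    """
    relevant = [name for name, text in files.items() if question in text]
    return f"'{question}' is handled in: {', '.join(relevant)}"

files = {
    "auth.py": "def login(): ... # rate limiting applied here",
    "api.py": "def handler(): ...",
    "limits.py": "RATE = 100  # rate limiting config",
}

main_context = []
# Instead of pulling all three files into the main context,
# only the subagent's one-line summary is appended.
main_context.append(explore("rate limiting", files))
```

The main session never sees the file contents, just the answer it actually needed.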
Vibecoding works, until you stop thinking
I like vibecoding. I think it is useful.
But blind vibecoding is where things go wrong.
If you know what you are building, can review what is being generated, and can tell when something is off, vibecoding can be great. It allows you to move really fast.
But if you are accepting output just because it looks right, that is where the trouble starts. I have seen a lot of developers claim they review thousands of lines of AI-generated code per day. I am sceptical of that.
I have seen LLMs produce hacky workarounds, ugly abstractions, and code that is not reusable at all. I have seen them patch symptoms instead of solving causes. I have seen them generate code that looks fine on the surface but is poorly thought through underneath.
When I write software I always apply two principles: high cohesion and low coupling. In my experience this is exactly where LLMs struggle. I have often found that they write dirty helper functions or reach for hacky workarounds that violate both principles completely.
One way to improve this is an AGENTS.md file.
An AGENTS.md file can define coding style, conventions, and architecture expectations. That gives the model a clearer idea of what good looks like in that codebase and improves consistency a lot. It still does not guarantee great results, but it is another improvement.
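What goes into the file varies by project. A minimal sketch might look like this (the specific conventions below are illustrative, not from any particular repository):

```markdown
# AGENTS.md

## Conventions
- TypeScript strict mode; no `any`.
- Business logic lives in `src/domain/`; HTTP handlers stay thin.
- Every new module ships with unit tests next to it.

## Commands
- `npm test` must pass before a change is considered done.
```

The harness loads this at the start of a session, so the conventions are in context before the model writes a single line.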
And again, the same pattern shows up: better context in, better results out.
What this actually means
So if there is one thing to take from all of this, it is that context matters most.
Harnesses matter because they can pull context and verify outputs. Skills matter because they can add useful structure. MCP servers matter because they let models access the right tools and information. Subagents matter because they keep the main context cleaner. AGENTS.md matters because it gives the model conventions to follow.
The pattern is the same every time. Better context, better output.
And the opposite is true as well. Bad context, ambiguous goals, and weak constraints usually lead to bad results.
AI is a multiplier
AI is not replacing developers. It is amplifying their strengths and weaknesses.
If you were a good developer before AI, you will probably be a good developer with AI. In many cases you will just be faster. If you were a bad developer before AI, you will still be a bad developer. You will just be building bad software faster than ever before.
AI multiplies everything: judgment, understanding, sloppiness, taste, confusion, good habits, and bad habits.
It does not remove the need for fundamentals. It makes them more important.
If you do not understand what you are generating, you cannot trust it. I have seen LLMs produce code with absurd levels of nesting that looks fine if you only skim it. But if you know what cyclomatic complexity is, you recognise immediately that it is not clean code. It is a problem.
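A toy example of what that looks like. Both functions below compute the same (made-up) discount; the nested one is the shape LLMs often produce, and every extra branch level adds to the cyclomatic complexity. Guard clauses keep the same logic flat:

```python
def discount_nested(user, cart_total):
    """The shape that skims well but reads badly: one branch per level."""
    if user is not None:
        if user.get("active"):
            if cart_total > 0:
                if user.get("premium"):
                    return cart_total * 0.8
                else:
                    return cart_total * 0.95
            else:
                return 0.0
        else:
            return 0.0
    else:
        return 0.0

def discount_flat(user, cart_total):
    """Same behaviour: reject the invalid cases up front, then one decision."""
    if user is None or not user.get("active") or cart_total <= 0:
        return 0.0
    rate = 0.8 if user.get("premium") else 0.95
    return cart_total * rate
```

Skimming the first version, every branch looks plausible. Only when you know to count the decision points do you see that four levels of nesting were spent on what is really one guard and one choice.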
You need to know what you are looking at.
The Gell-Mann Amnesia problem
There is a useful comparison here.
Gell-Mann Amnesia, a term coined by Michael Crichton, is the idea that you can read an article about a topic you know well, immediately notice all the mistakes, then move on and trust the next article anyway because it sounds authoritative.
AI has the same effect. If you generate something related to a field that you know about, you can correct it. If you ask it about an unfamiliar concept, you might take it at face value.
The point here is that you still have to know what you are generating.
Knowledge is more valuable than ever
This is why I think the whole "your skills are becoming obsolete" reaction to AI is backwards.
Knowledge is not becoming less valuable. It is becoming more valuable.
Typing code is getting cheaper. Knowing what code should exist, how it should be structured, and how to verify it is becoming even more important.
The way we apply knowledge is changing. The value of understanding is not.
Conclusion
AI is changing software engineering. That part is obvious.
But it is not replacing developers.
You still have to learn the fundamentals, understand the systems you are working with, use the tools well, give them good context, and verify what they generate.
AI is just making it much more obvious who understands what they are doing.