2026 is the year the software industry transitions from artisan to industrial, analogous to the transitions that happened with weavers in the 1820s, typists in the 1980s, typesetters in the 1990s, and travel agents in the 2000s.
The transition for weavers took 60 years. The transition for travel agents took 15. He estimates 5-7 years for software. That seems too long to me. I expect I will be an "agent orchestrator" before the end of 2026, and I'm not some top-level software engineer working at a top-level company, so if I'm going to be doing it, probably 80% or 90% of software engineers are going to be doing it by the end of 2026. I think maybe the new rule is, whatever Andrej Karpathy is doing at the beginning of a year, I'm going to be doing by the end of the year? (And it will be mandatory.)
He says history is very clear on what happens when a craft goes from "artisan" to "industrial": quality of life is destroyed for the people who remain, because wages go down and demands go up, and quality of life is destroyed for the people who don't make it through the transition and are laid off. He calls this "the quality of life collapse", and it is what awaits software engineers. For those who make it, the "agent orchestrator" job will be lower pay, will have no perks, and will include being woken up at 3 AM because an agent hallucinated an API change.
Quality of life goes up for consumers and factory owners profit handsomely. Here, the factory owners are companies like OpenAI, Anthropic, Microsoft, etc. Artisans never make the transition to factory owners. Artisans may make it to factory supervisor, but they pretty much never make it to factory owner.
He (Pratik) then says, "The question nobody asks":
"I keep coming back to something that doesn't get discussed. Factories produce more textiles at lower cost. That's unambiguously good for consumers. But software isn't textiles. Does a tenfold increase in software quantity, with corresponding decreases in quality, security, and maintainability, actually improve anything? Or do we just get ten times more technical debt, ten times more half-broken products, and ten times harder debugging when agents hallucinate library versions and nobody notices because everyone's validating outputs they don't fully understand?"
First of all, people do discuss this, although not anybody where I work and maybe not anybody where he works. But I have seen discussion out on the internet. You can have AI agents code-review other AI agents. You can ask AI agents to do security audits. All the things you do with human engineers to make software more reliable have analogues with AI agents. All the tools like statically typed languages, static analysis tools, formal methods, and so on, can be used in the AI agent world. Some argue they work better. If the Lean proof of the correctness of a piece of code is 5 or 10 times larger than the code itself, in a world of human engineers that makes the correctness proof uneconomical -- but in a world where AI agents can produce thousands of lines of code in minutes, the tremendously greater mental effort required to prove the code correct is just more tokens, and it might be a non-issue. So provably correct software may eventually be vastly more common in a world full of AI agents than in a world full of human engineers.
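To make concrete what "the proof is larger than the code" means, here is a minimal sketch in Lean 4 (names and property chosen for illustration, not from the original article): a three-line function accompanied by machine-checked theorems about its behavior. Even for something this trivial, the specification and proof text rivals the implementation in size; for realistic code the ratio is far worse, which is exactly the cost that cheap token generation might absorb.

```lean
-- Hypothetical example: a max function over natural numbers.
def myMax (a b : Nat) : Nat :=
  if a ≤ b then b else a

-- Correctness properties, proved by case-splitting on the `if`
-- and discharging the resulting linear-arithmetic goals.
theorem myMax_ge_left (a b : Nat) : a ≤ myMax a b := by
  unfold myMax; split <;> omega

theorem myMax_ge_right (a b : Nat) : b ≤ myMax a b := by
  unfold myMax; split <;> omega

-- The result is always one of the two inputs.
theorem myMax_eq_or (a b : Nat) : myMax a b = a ∨ myMax a b = b := by
  unfold myMax; split <;> simp
```

A human engineer would rarely bother proving `myMax`; the point is that once a checker like Lean accepts the proof, correctness no longer depends on anyone reading the generated code.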
"The factory optimizes for throughput. Artisan software optimized for correctness. Those aren't the same thing, and treating them like they are might be the most expensive mistake we make. But maybe we're ready for the Ikea of software world, and hand made furniture will still exist but not everyone will be able to afford it. Or maybe artisan software will just be better verified? Because, who wants to type 10k loc when they can get it generated in few seconds."
Him saying "Artisan software optimized for correctness" made me laugh. No it doesn't! Not where I work and not in, I'm sure, the vast majority of software companies. You have a large codebase that dozens of engineers have contributed to over the years, each under tremendous time pressure to implement features. That makes the resulting codebase messy -- hopefully not too messy, but still far from "optimized for correctness".
Software that is really and truly "optimized for correctness" is the software that controls the flight control surfaces of airplanes. Software that NASA puts on spacecraft and sends to distant regions of the solar system. That software takes vastly longer to produce than commercial software, at vastly higher cost. That's what it truly means to be "optimized for correctness".
In the upcoming world of AI agent-driven software, "optimized for correctness" might eventually become a standard feature, if formal verification becomes standard practice. That might take a long time, because it is so far from the way humans develop software now, and people initially will simply translate human engineering practices into the AI agent realm.
The software factory age: Why 2026 may be the end of artisan coding
From hand-weavers to quality-control inspectors: the 200-year pattern that software is repeating faster this time
by Pratik