Technology · Apr 4, 2026 · 11 min read

AI Generates Code in 8 Minutes and Seniors Stop Teaching It: Who Will Train the Developers of the Future?

A few weeks ago, an Italian developer with ten years of professional experience published a detailed account of how his way of working has changed in the last three months. I will call him M. because the name is not the point: the point is what he narrates, which is both enlightening and disturbing, often in the same paragraph.

This developer has worked at top-tier companies, has written web applications and iOS apps, and has coordinated teams. He knows what he is talking about when he talks about code. What he says is this: from December 2025 to today, his workflow has transformed to the point of being unrecognizable. He used to write code in an editor, asking AI models for help with the repetitive parts. Now he converses with Claude Code in a terminal, discusses the architecture, has a detailed plan prepared, approves the plan, and then says "write the code". And the code arrives, functional, in five to eight minutes.

The most striking sentence in his account is this: "I have almost completely stopped looking at the code." He says it with a kind of satisfied wonder, like someone who has discovered that the airplane's autopilot works so well that looking out the window becomes unnecessary. His quality control now consists of checking the interface, testing functionality, and seeing that the result matches what he had in mind. The code itself, the lines, the structure, the low-level architectural choices, he looks at less and less.

He adds a detail that he himself acknowledges as significant: 95% of the time, the generated code works on the first try. He compares the experience to working with a capable software engineer, to whom you give a detailed brief and who then executes it without needing supervision line by line.

The Data Behind the Anecdote

The temptation is to dismiss everything as the anecdote of an enthusiast. That would be a mistake, because the data confirms that something structural has changed in software between the end of 2025 and the early months of 2026.

The METR (Model Evaluation & Threat Research) study measures the time horizon of AI systems, that is, how long they can work autonomously on real programming tasks without human intervention. That horizon has doubled roughly every seven months on average, and it has recently accelerated to around four months. The models that our developer uses (Claude Opus 4.5 before, 4.6 now) are the ones that, in the METR research, have measurably moved the frontier, autonomously completing tasks that would take a human expert several hours.
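As a back-of-the-envelope illustration of the exponential growth METR describes (the one-hour starting horizon below is a hypothetical placeholder; only the seven-month and four-month doubling times come from the figures cited above):

```python
# Illustrative sketch of exponential time-horizon growth.
# Assumption: a hypothetical starting horizon of 1 hour; the 7-month
# and 4-month doubling times are the figures cited in the text.

def horizon_hours(months_elapsed: float, start_hours: float, doubling_months: float) -> float:
    """Autonomous-task horizon after `months_elapsed` of steady doubling."""
    return start_hours * 2 ** (months_elapsed / doubling_months)

# At a 7-month doubling time, a 1-hour horizon becomes 8 hours in 21 months.
print(horizon_hours(21, start_hours=1, doubling_months=7))   # 8.0
# At the faster 4-month pace, the same jump takes only 12 months.
print(horizon_hours(12, start_hours=1, doubling_months=4))   # 8.0
```

The point of the arithmetic is only that shortening the doubling period compresses years of progress into months, which is why assessments made a quarter ago age so quickly.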

Programming is the most favorable case for AI. The output is automatically verifiable (it compiles or it does not compile, it passes tests or it does not pass them), the training corpus is vast (all the open-source code in the world), and the feedback loop is immediate. Generalizing from the experience of a single developer to "all cognitive work" would be the same mistake that Matt Shumer made in his viral post in February, which Gary Marcus referred to as "weaponized hype". Coding is the cutting edge, and our programmer is right to say that it is here we see first where AI is heading. But the cutting edge is, by definition, not the average.
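A minimal sketch of what "automatically verifiable" means in practice (the function and test cases here are invented for illustration): the generated code either satisfies its tests or it does not, and no human judgment is needed to close the loop.

```python
# Hypothetical example: a model-generated function checked entirely by tests.

def generated_slugify(title: str) -> str:
    """Pretend this body came back from a coding model."""
    return title.strip().lower().replace(" ", "-")

def passes_all_tests(fn) -> bool:
    """The feedback loop: a bare pass/fail signal, with no human in the middle."""
    cases = {"Hello World": "hello-world", "  Cutting Edge  ": "cutting-edge"}
    return all(fn(inp) == expected for inp, expected in cases.items())

print(passes_all_tests(generated_slugify))  # True
```

Most cognitive work has no equivalent of `passes_all_tests`: a legal brief or a diagnosis cannot be graded by re-running it, which is exactly why coding is the most favorable case.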

That said, the data remains impressive. And he is also right about another point that deserves attention: those who are not using frontier models (not the free ones, not the ones from six months ago, but the current ones that cost twenty dollars a month) have a perception of change that is genuinely distorted compared to reality.

The perceptual gap between those who experiment with these tools every day and those who read about them in the news is probably the largest since generative AI has existed, and it continues to widen.

The Question Not Asked

The interesting part of the story, however, is not the one that the developer narrates with pride. It is the part he does not tell, or the part he tells without realizing that it should worry him. He himself admits that in the summer of 2025, when he was "vibe coding" by accepting AI's changes without looking at them, one feature came out "half broken". He only realized it days later. That unpleasant experience led him to refine the process, introduce a shared planning phase, and be more precise in his instructions. And the refined process works much better, to the point that now 95% of the time, everything goes well.

But what does the 5% that does not work represent in production? A bug that is caught immediately? A subtle error that surfaces after weeks? An architectural problem that accumulates over time until it makes the software fragile in ways no automated test can detect? He does not say, and probably does not know, because to know he would have to look at the code, which is exactly what he has stopped doing.

There is a parallel that seems fitting to me. People who always use GPS navigation find, after a few months, that they can no longer plan a route on their own. They have not lost the cognitive ability to navigate; they have suspended it, for lack of exercise. The day the GPS fails (a dead battery, a tunnel with no signal, an unmapped area), they find themselves struggling in a way that would have been unthinkable before they delegated navigation.

Our developer is doing the same thing with understanding the code his tools produce. The muscle of critical reading, the one that after ten years of experience lets you "feel" that something is off in a function before you can even articulate what, weakens if you do not exercise it. And it weakens silently, because 95% of the time everything works and no negative feedback arrives.

Cognitive research has a name for this phenomenon: cognitive offloading. Sparrow, Liu, and Wegner documented it back in 2011, in a paper published in Science, with respect to memory: knowing that information is retrievable online reduces the likelihood that the brain will store it. METR itself published a study in 2025 that deserves more attention than it has received: developers using AI coding tools took, on average, 19% more time, not less, than colleagues without them to complete the same tasks. The result seems paradoxical and likely reflects a phase of adaptation, but it signals that the relationship between AI and coding productivity is less linear than the enthusiastic narrative suggests.

Who Trains Future Developers?

The most serious problem raised by this testimony does not concern the one who wrote it. Our programmer has ten years of experience, has written a lot of code by hand, and has developed that sense of software architecture that comes from prolonged practice. When he says to AI, "no, I do not want it that way, I want it another way," that judgment comes from a decade of work in which he has seen what works and what does not, what scales and what collapses, what is maintainable and what becomes a nightmare. AI amplifies that accumulated judgment. So far, so good.

The problem is who comes after him. A developer entering the job market today learns to program in a context where the code is written by AI and their role is to supervise an output they have never learned to produce independently. Toyota discovered something similar when it tried to fully automate certain production lines: engineers who no longer worked alongside the machines lost the ability to improve processes, because kaizen (the continuous improvement at the heart of the Toyota production system) requires an understanding of the process that develops only through direct practice.

In software, the risk is analogous. The tacit knowledge of the experienced developer, that which the philosopher Michael Polanyi described with the phrase "we know more than we can tell," forms through thousands of hours of practice: debugging, refactoring, reading others' code, failed attempts, rethinking architectures. If the new generation of developers skips this formative phase because AI handles basic tasks, the transmission chain of competence is interrupted.

We produce professionals capable of supervising AI when everything works, but unable to intervene when AI fails in ways that do not fit known patterns.

Intelligent or Doing Intelligent Things?

In his account, the developer gets heated on one point: he states that it is "absurd" to deny that these models manifest a form of intelligence, and that those who do so are denying the evidence. He then adds that he is not talking about self-awareness, rights, or ontology. But his rhetoric points in a direction that his premises exclude.

The issue deserves a nuance that in enthusiasm risks getting lost. That AI does tasks requiring intelligence is a documented fact. Whether AI "is intelligent" in the same sense as a human being is a different assertion, and the latter does not automatically follow from the former. A computer from 1960 performed tasks requiring mathematical ability greater than that of any human being, and no one thought to attribute intelligence to it. The difference in scale between a computer and an LLM is enormous, of course, and the capabilities of current models are something that twenty years ago would have seemed like science fiction. But the leap from "does intelligent things" to "is intelligent" involves something that the philosophy of mind has debated for decades without consensus, and which is not resolved with "it's a fact, period."

The distinction is important for a practical reason. If AI "is intelligent," then it is a colleague, and one relates to a colleague differently than to a tool. If AI "does intelligent things but is a tool," then the responsibility for the output remains entirely human, and critical supervision is not an optional exercise but a professional obligation requiring specific competence.

Law 132/2025, which adapted Italian law to the European AI Act, has clearly chosen the latter framework. Article 13 establishes that in intellectual professions, AI can only be used instrumentally, with a predominance of human work, and that the professional must inform the client of which AI tools have been used. Article 14 does the same for public administration, Article 15 for judicial activity, and Article 7 for healthcare. The legislator has no doubts: responsibility is human, supervision must be substantive, and those who do not exercise it expose themselves to concrete consequences.

The Real Message

The developer is right about the most important thing: change is real, it is rapid, and those who do not touch it with their hands every day struggle to understand its extent. Three months is a geological era in the current speed of AI, and the assessments made on the world of December 2025 are already partially obsolete by March 2026.

But the conclusion to draw is not the one he proposes, that is, that AI is intelligent and those who do not admit it are behind. The conclusion is that we are building a dependence on a powerful tool whose functioning we understand only partially, and that the speed at which we adopt it is surpassing the speed at which we comprehend what we are giving up in exchange for the efficiency we gain.

I say this with a certain knowledge of the facts because the story I have recounted is not just his. It is also mine, with a variant that perhaps makes it even more significant. I have never been a programmer. I do not have ten years of software development experience, I have never written an iOS app, I have never worked on a team of engineers. Yet in recent months, I have built and put into production functioning web applications, with databases, authentication, payments, deployed on real servers. I built them by conversing with AI, just like our developer: I describe what I want, discuss the architecture, approve the plan, and the code arrives. The term is "vibe coder," and I have become one "without passing go," skipping entirely the phase in which one learns to program.

If a sixty-year-old without technical training can build in a few hours what until yesterday required a team and weeks of work, the extent of the change goes beyond the community of developers. It concerns anyone who does cognitive work because it shows that the barrier to entry for technical skills is lowering at a speed that renders obsolete the categories we once thought (and still think) define the job market. And while this is exciting (I am the first to find it extraordinary), it also makes the question of what happens to deep competence, the kind that develops only through years of practice, more urgent when practice itself becomes optional.

Our developer has stopped looking at the code. I have never even started to do it, and for now, it works for both of us. The problem is that "for now" might be a window shorter than we think, and when something breaks in ways that AI does not know how to fix, we may discover that we have lost something we did not even know we needed to possess.

It is not a story of technophobia. I am not a luddite, but what concerns me is the difference between using a tool and depending on a tool, and the fact that the latter condition tends to establish itself silently, comfortably, and is very difficult to reverse once it has consolidated. This is why I think it is important to think about it now, and not when that "5% that does not work" becomes the problem that none of us knows how to solve manually.