Karpathy Names the Change: It’s No Longer Faster Programming, It’s Smarter Programming

For years, talking about code assistants meant discussing autocomplete: small, helpful suggestions, but limited in scope. In recent weeks, however, a lively exchange between Andrej Karpathy (AI researcher and communicator) and Boris Cherny (head of Claude Code at Anthropic) has crystallized an idea many engineers have been quietly noticing: programming is entering a new phase.

It’s not just about “doing the same thing faster.” The true leap — the one that changes habits — is that work is being scaled differently. Where once a person reserved time for “worthy” tasks, now projects emerge that simply wouldn’t have been attempted before. Not due to lack of vision, but due to limited hours, energy, or knowledge of peripheral areas.

From 20% to 80%… in just a few weeks

Karpathy describes a transition that sounds exaggerated until experienced: shifting from a workflow dominated by manual coding (with some autocomplete) to one in which AI agents do most of the work, leaving humans as editors, supervisors, and final decision-makers. The feeling is mixed: enormous potential… and a slight blow to the ego. “Programming in English”—giving instructions and success criteria instead of typing every line—feels strange at first, but becomes addictive once it starts working.

This transition, moreover, isn’t based on “magic.” It relies on something very practical: delegating large actions on a codebase (full refactors, cross-cutting changes, instrumentation, tests) that previously demanded sustained focus and iron patience.

Speed vs. Expansion: The Distinction That Explains Everything

The most interesting part of the debate isn’t whether time is gained (it is), but what that time becomes. Karpathy summarizes it with an idea many recognize: the main effect isn’t going faster, but expanding the map. With assistants capable of filling technical gaps, generalists dare to tackle tasks that blend disciplines: video pipelines, command-line automation, model integration, deployments, documentation, even pedagogical or product-related work.

In other words: the bottleneck is no longer always “knowing how to write code,” but rather knowing what to build, how to validate it, and what to exclude.

Models still fail, but they fail “better” (and that’s dangerous)

This is the core warning: those celebrating the “no IDE needed” or “swarm of agents” might be moving too fast. Karpathy emphasizes that models still make mistakes, but they have changed in nature: they’re no longer obvious syntax errors but subtle conceptual errors, similar to those of a hurried junior developer.

Recognizable patterns include:

  • Assuming things nobody asked them to assume.
  • Managing confusion poorly: not asking questions, not highlighting inconsistencies, not presenting trade-offs.
  • Tending to overcomplicate: more abstractions, layers, lines of code.
  • Leaving “leftovers”: dead code, unnecessary comments, collateral changes.

The conclusion isn’t “don’t use agents.” Instead, it’s to use them with rules: tests, reviews, scope limits, and a mindset of constant inspection. For critical software, humans don’t disappear; their role shifts.
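As one illustration of a “scope limit,” a pre-merge gate might refuse agent changes that touch files outside an agreed area. This is a minimal sketch; the `within_scope` helper and the paths are invented for illustration, not part of any real tool:

```python
def within_scope(changed_files, allowed_prefixes):
    """Accept an agent's change set only if every touched file
    falls under one of the agreed directory prefixes."""
    return all(f.startswith(tuple(allowed_prefixes)) for f in changed_files)

# An agent asked to work on the API layer stays in scope...
print(within_scope(["src/api/routes.py"], ["src/api/"]))                    # True
# ...but a "collateral" edit to deploy scripts trips the guardrail.
print(within_scope(["src/api/routes.py", "infra/deploy.sh"], ["src/api/"]))  # False
```

In practice the changed-file list would come from the VCS (e.g. the diff of the agent’s branch), and a failed check would block the merge rather than just print.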

“I have to remember that Claude can do this”

The most galvanizing statement comes from within Anthropic. Boris Cherny, creator and manager of Claude Code, shared that “most weeks” he finds himself re-learning the model’s capabilities. His example is revealing: facing a memory leak, his instinct was the usual (profiling, reproducing, manual inspection). A colleague, however, asked the agent for a heap dump, analyzed it, and submitted a PR “on the first try.”

Cherny even described a routine that sounds like science fiction for traditional development: periods where he doesn’t open the IDE, with dozens — and eventually hundreds — of PRs in a short time, with the agent writing the code and the human steering. Beyond the headline, the core message is another: mental habits matter. Those without a “historical memory” of worse models — new hires or recent graduates — sometimes exploit current capabilities better because they don’t carry learned limits.

The new competitive advantage: being a good editor, not just a good author

If the LLM writes, what sets the best apart? Discrimination. The ability to read critically, detect hidden assumptions, enforce tests, and simplify. The skill shifts from “generating” to “evaluating.” Karpathy even notes an uncomfortable side effect: atrophy. As less is handwritten, the muscles involved in typing solutions weaken, even if review skills improve or stay the same.

This shift reopens an old question with a new twist: what about the “10x engineer”? If tools amplify everyone, the gap could narrow… or widen, if top performers master guiding agents, defining criteria, designing architecture, and demanding cleanliness.

2026 and the “slopocalypse”: when the problem isn’t producing, but filtering

Karpathy makes an almost inevitable prediction: if generating code becomes easy, so will generating documentation, articles, repositories, papers, videos, and noise. A “slopocalypse” — a flood of mediocre content — could affect GitHub, newsletters, social media, and basically any digital surface.

This isn’t just a cultural issue; it’s an operational one. In a world of excess, the advantage will shift to curating, verifying, and maintaining standards. Software teams will adopt more demanding processes: linters, testing, observability, security reviews, dependency management, merge policies, and real change traceability.

What’s coming: less typing, more directing… and more ambition

The most insightful interpretation of this trend might not be nostalgia or euphoria but rethinking “shortages” as a roadmap: what currently falters — clarifying doubts, debating options, questioning assumptions, simplifying — is precisely what will improve in the coming months. And as it does, the change will be both quantitative and qualitative: bigger projects, smaller teams, shorter cycles.

Still, one rule remains: when software matters, someone must be responsible. AI can write a lot of code. But accountability, judgment, and strategic vision remain human. For now, the revolution isn’t replacing engineers; it’s prompting them to become conductors.

via: Karpathy puts words to the “phase shift” in coding with LLMs
