The Hidden Tax of Vibe Coding

I've spent the last few days spiraling down a rabbit hole of AI-native workflows, trying to reconcile the massive productivity gains with a nagging sense of "something is missing." Three specific pieces of content really crystallized this for me:

Finally, a recent podcast featuring Andrej Karpathy (discussing his AutoResearch repo and OpenClaw) added the final layer of "engineer psychosis" to the mix.

Shifting the Bottleneck

Christos Kritikos hit the nail on the head regarding the "drain." We often talk about AI as a time-saver, but the reality is a shift in the bottleneck. After reading his post, I found myself reflecting on how this applies to my own work as a first-year PhD student working on AI for proteins, and I ended up leaving a comment there, which prompted me to gather these broader thoughts.

The core issue is a subtle hidden tax that comes with the AI-native environment:

"Less learning and less fulfillment," driven by a thinning connection to the core knowledge that each decision leads to in actual implementation.

We've moved into a phase of "co-deciding": we judge the high-level path but offload the actual implementation to the LLM or coding agent. The problem is that "implementation itself is by nature also a decision search space." When we coded manually, that trial and error wasn't just "work"; it was a feedback loop that rewarded us with real experience.

By skipping the struggle of choosing the design pattern or the structure ourselves, we end up with "half-learning." We are reviewing code we didn't write, which is inherently more exhausting and less fulfilling than creating it.

Karpathy and "Engineer Psychosis"

This shift is even more jarring when you realize that even the "godfathers" of the craft are moving away from the keyboard. In a recent podcast, Andrej Karpathy admitted that he hasn't manually written code since December 2025. Instead, he's orchestrating agents.

Karpathy touched on a phenomenon I've certainly felt: Engineer Psychosis. It's the tempting urge to max out subscriptions and burn through tokens, and the frantic need to "stay in the loop" with every new agentic development. We are no longer just developers; we are managers of "Parallel Agents in a loop."

This drags us even further from actual code knowledge. When we treat coding as a series of parallel loops handled by agents, we move from a profession rooted in deep logic to a luck-driven trial-and-error skill. We are disconnected from the "why." If the vibe is right and the agent returns a working block, we move on. But that disconnect means our human knowledge is no longer being built on the bedrock of implementation. We are skimming the surface of our own projects.
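To make the "Parallel Agents in a loop" pattern concrete, here is a minimal sketch of what that orchestration reduces to. Everything in it is hypothetical: `fake_agent` is my own stand-in for a real coding-agent API call, and the task list is invented. The point is structural: the agents fan out in parallel, and the human's role collapses to the review step at the end.

```python
from concurrent.futures import ThreadPoolExecutor

def fake_agent(task: str) -> str:
    # Hypothetical stand-in for a real LLM/coding-agent call.
    return f"patch for: {task}"

tasks = ["refactor parser", "add retry logic", "write tests"]

# Fan the tasks out to "agents" in parallel; pool.map preserves task order.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(fake_agent, tasks))

for task, patch in zip(tasks, results):
    # The human's contribution shrinks to this review-only loop.
    print(f"review: {patch}")
```

Notice that nothing in the loop requires understanding how any patch was produced, which is exactly the disconnect described above.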


The Path Forward

So, where does this leave us? I'm definitely not an AI-doomer, and I have no interest in returning to the "stone age" of pre-AI coding. The efficiency is too high to ignore, and the potential for discovery—especially in fields like protein engineering—is too great.

However, we have to acknowledge the exhaustion of the "review-only" workflow and the decay of implementation-based learning. We need to find a new way to stay connected to the "decision search space" without being buried by it.

The big question remains:

"How do we (humans) adapt and cope with this new paradigm?"