From Code to Vibes: Andrej Karpathy on the New Era of Software

Last week at Y Combinator’s AI Startup School, Andrej Karpathy, the former Director of AI at Tesla, took the stage to deliver a compelling vision of the future—and present—of software. His talk, “Software in the Era of AI,” wasn’t just about technical details; it was a fundamental reframing of how we should think about building products in a world powered by Large Language Models (LLMs).

Karpathy argues that we are in the midst of a monumental shift, one as significant as the invention of the computer itself. For over 70 years, the core of software development remained unchanged. Now, in just the last few years, it has transformed twice.

If you’re building a startup today, understanding these shifts isn’t just an advantage—it’s essential.

Software 1.0, 2.0, and Now 3.0

Karpathy’s framework for understanding this evolution is brilliantly simple:

  • Software 1.0: This is the software we’ve known for decades. It’s code. Humans write explicit, deterministic instructions in languages like Python or C++, and the computer executes them. It’s a world built on logic and algorithms.
  • Software 2.0: With the rise of deep learning, a new paradigm emerged. Software 2.0 is defined by weights. Instead of writing instructions, we curate massive datasets and use optimizers to “train” a neural network. The program isn’t the code; it’s the millions of parameters (weights) that have learned to perform a task, like image recognition. As Karpathy put it, “Software 2.0 are basically neural networks, and in particular, the weights of a neural network.”
  • Software 3.0: And now, we’ve entered another new era. Software 3.0 is programmed with prompts. The new “computer” is an LLM, and the new “programming language” is English (or any natural language). Instead of writing complex code, we give instructions in plain text to get the LLM to perform a task. This marks a radical democratization of software creation.
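The three paradigms can be made concrete with a toy example (mine, not from the talk): the same task, sentiment classification, expressed as hand-written rules, as learned weights, and as a prompt. The weights and prompt below are illustrative stand-ins, not a real trained model or production prompt.

```python
# Software 1.0: explicit, hand-written rules -- the program IS the code.
def sentiment_1_0(text: str) -> str:
    positive = {"great", "love", "excellent"}
    negative = {"terrible", "hate", "awful"}
    words = set(text.lower().split())
    score = len(words & positive) - len(words & negative)
    return "positive" if score > 0 else "negative"

# Software 2.0: the program is a set of learned weights. These toy values
# stand in for parameters an optimizer would fit on a labeled dataset.
WEIGHTS = {"great": 1.2, "love": 1.5, "terrible": -1.4, "hate": -1.6}

def sentiment_2_0(text: str) -> str:
    score = sum(WEIGHTS.get(w, 0.0) for w in text.lower().split())
    return "positive" if score > 0 else "negative"

# Software 3.0: the program is a natural-language prompt sent to an LLM.
PROMPT_3_0 = (
    "Classify the sentiment of the following review as "
    "'positive' or 'negative'. Review: {review}"
)

print(sentiment_1_0("I love this product"))   # -> positive
print(sentiment_2_0("terrible experience"))   # -> negative
print(PROMPT_3_0.format(review="I love this product"))
```

Notice what changes between versions: in 1.0 a human edits logic, in 2.0 a human curates data and an optimizer edits the weights, and in 3.0 a human edits English.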

LLMs: The New Operating System

Karpathy suggests we should think of LLMs less as simple tools and more as a new kind of Operating System.

“It’s not just electricity or water, it’s not something that comes out of a tap as a commodity,” he explained. “These are increasingly complex software ecosystems.”

He draws fascinating parallels between today’s LLM landscape and the history of computing:

  • Like Early Computers (1960s): LLM compute is centralized and expensive. We access it through “thin clients” (our chat interfaces) and use it via time-sharing, much like the mainframe era. The personal computing revolution for LLMs hasn’t happened yet.
  • Like Fabs and Utilities: LLM labs like OpenAI and Google are like fabs, requiring immense capital (CAPEX) to build the models. They then serve intelligence like a utility, with metered access (OPEX). When a major LLM goes down, we experience an “intelligence brownout.”
  • Like Windows vs. Linux: We see a few dominant, closed-source “operating systems” (like GPT-4 and Gemini) competing with a vibrant, open-source alternative (the Llama ecosystem).

What This Means for Startups

This new era is brimming with opportunities. It’s not just about building the next LLM; it’s about building the applications, infrastructure, and user experiences that make this new OS useful.

1. Build the “Iron Man Suit,” Not the Robot

The most successful products won’t be flashy demos of full autonomy. They will be partial autonomy products—tools that augment human capabilities, not replace them.

Think of the Iron Man suit. It’s an incredible piece of technology, but Tony Stark is always in the loop, deciding when to let the suit take over and when to be in direct control. Karpathy calls this the “autonomy slider.” Your product should allow users to seamlessly move between manual control and AI-driven assistance, depending on the complexity of the task.
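One way to picture the autonomy slider in product code is as a dispatch on autonomy level: the same action can be done manually, proposed by the AI for human approval, or fully delegated. This is a hypothetical sketch; the levels, names, and `Action` type are mine, not Karpathy's.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    description: str
    execute: Callable[[], str]  # the AI-driven way to perform the action

def run_with_autonomy(action: Action,
                      level: int,
                      approve: Callable[[str], bool]) -> str:
    """level 0 = manual, 1 = AI suggests and human approves, 2 = autonomous."""
    if level == 0:
        # Manual end of the slider: the tool stays out of the way.
        return f"manual: user performs '{action.description}' themselves"
    if level == 1:
        # Middle of the slider: AI proposes, human stays in the loop.
        if approve(action.description):
            return action.execute()
        return "rejected by user"
    # Far end of the slider: the AI acts without asking.
    return action.execute()

action = Action("apply code diff", lambda: "diff applied")
print(run_with_autonomy(action, 1, approve=lambda d: True))   # -> diff applied
print(run_with_autonomy(action, 0, approve=lambda d: True))   # -> manual: ...
```

The design point is that `level` is a user-facing control, not a build-time decision: the same product should serve the cautious user at level 0 and the trusting one at level 2.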

2. Master the Human-AI Workflow

The core of modern AI applications is the generation-verification loop. The AI generates something, and the human verifies it. The goal for any startup is to make this loop as fast and frictionless as possible.

This means:

  • Packaging context so the LLM has the right information.
  • Orchestrating calls to multiple specialized models (e.g., chat models, embedding models, diff models).
  • Designing a custom GUI that makes verification intuitive. Don’t make users read and edit massive blocks of text; give them visual diffs and simple accept/reject buttons.
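The loop itself can be sketched in a few lines. Here the model call and the human's accept/reject click are both stubbed out, so the names and behavior are assumptions for illustration, not a real API.

```python
import random
from typing import Optional

def generate(prompt: str, seed: int) -> str:
    """Stand-in for an LLM call; a real app would hit a model API here."""
    rng = random.Random(seed)
    return f"draft #{seed}: {prompt} ({rng.randint(0, 99)})"

def verify(candidate: str) -> bool:
    """Stand-in for the human's accept/reject decision in the GUI."""
    return candidate.startswith("draft #3")

def generation_verification_loop(prompt: str,
                                 max_rounds: int = 5) -> Optional[str]:
    for round_num in range(1, max_rounds + 1):
        candidate = generate(prompt, seed=round_num)
        if verify(candidate):
            # Human accepts: the loop ends.
            return candidate
        # Human rejects: loop again. A real product would also feed the
        # rejection (edits, comments, extra context) back into the prompt.
    return None

result = generation_verification_loop("summarize the meeting notes")
print(result)
```

Everything in the bullet list above is in service of tightening this loop: better context packaging improves `generate`, and a better GUI makes `verify` a one-glance decision instead of a careful read.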

3. Build for Agents

The “users” of the future won’t just be humans. A new category of consumer is emerging: agents. These are other LLMs that need to interact with your software.

This requires a shift in how we design our products and documentation. A human-friendly website with buttons and menus is useless to an LLM. Instead, we need to “build for agents” by providing:

  • LLM-friendly documentation (like simple markdown files, e.g., /llms.txt).
  • API-like actions instead of vague instructions. Instead of telling a user to “click here,” provide the cURL command an agent can execute.
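As a concrete illustration, an `/llms.txt` file might look like the following. The product, domain, and endpoints here are entirely invented; the point is the shape: plain markdown, and executable commands instead of UI instructions.

```markdown
# ExampleApp — guide for LLM agents
ExampleApp manages project tasks. Agents should use the HTTP API below
rather than the human-facing web UI.

## Create a task
curl -X POST https://api.example.com/v1/tasks \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"title": "Draft release notes"}'

## List tasks
curl https://api.example.com/v1/tasks -H "Authorization: Bearer $TOKEN"
```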

By making your software highly accessible to these “people spirits,” you open up a world of possibilities.

The landscape is changing at a breathtaking pace. A huge amount of software will be re-written, not just by professional engineers but by a new generation of “vibe coders.” For startups, the message is clear: understand the psychology of these new systems, build tools that empower humans, and prepare for a future where your users aren’t just people.

Get in Touch and Let's Connect

We would love to hear about your idea and work with you to bring it to life.