AI won’t thrive in legacy interfaces
The next technological leap will happen in your command line.
A guest post by Zach Lloyd, founder & CEO of Warp
The way we build software is changing. We no longer write code by hand, line by line. Now it’s about telling a computer what you want to do and working with AI to make it happen.
As software shifts from writing code to writing prompts, we need a fundamentally different kind of interface. The old ones weren’t built for collaboration with AI agents; bolting AI onto legacy systems only gets you so far. Tools built on VS Code hint at the future, but they still operate within the constraints of a system that wasn’t designed for building software with AI.
I don’t think the shift we’re experiencing is about adding chat panels to IDEs or improving autocomplete. The bigger change is that developers are starting to delegate real work—writing code, fixing issues, setting up environments—to agents. These agents operate by prompt and do the manual work of writing code and commands for you. They need context, control, and access to the live system.
So the question becomes: where should that interaction happen?
Why the terminal still matters
When you think about building the developer interface of tomorrow, your mind probably doesn’t go to the 1970s: black screen, green text, blinking cursor. And yet, despite its age, I think the command-line terminal is the best foundation we have for what comes next.
It already sits at the center of modern engineering workflows. Developers use it to install packages, debug production, manage infrastructure, and automate scripts. It’s low-level, precise, scriptable, and language-agnostic. Unlike IDEs, which are usually tied to a specific language or project, the terminal is where developers interact with the entire system. It’s also where agents can observe, act, and respond with complete visibility across local and remote machines. That’s precisely what makes it so powerful when building with AI.
But we didn’t build traditional terminals for this kind of interaction. They don’t understand natural language. They only run commands, not agents. They don’t know when a command fails or what to do next. They expose errors but offer no guidance on how to fix them.
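To make that gap concrete, here is a minimal sketch in Python of the plumbing an agent-aware layer adds on top of plain command execution: capture the exit code and error output that a traditional terminal throws away, and bundle them as context for a model. The `ask_agent` function is a hypothetical stand-in for a call to any model provider; it’s stubbed here so the sketch stays runnable.

```python
import subprocess

def ask_agent(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; stubbed so the sketch
    # runs without any API dependency.
    return "(a model's suggested fix would appear here, based on:\n" + prompt + ")"

def run_with_context(command: list[str]) -> None:
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode == 0:
        print(result.stdout, end="")
        return
    # A plain terminal stops here, showing only the raw error.
    # An agent-aware layer packages the failure as context instead.
    context = (
        f"Command: {' '.join(command)}\n"
        f"Exit code: {result.returncode}\n"
        f"Stderr: {result.stderr.strip()}"
    )
    print(ask_agent(f"This command failed. Suggest a fix.\n{context}"))

run_with_context(["ls", "/no/such/path"])  # deliberately fails
```

Nothing in that loop is exotic; the point is that the exit status and stderr have to be captured and structured somewhere, and today that somewhere is bolted on rather than native to the terminal.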
The command line has always been a natural home for telling computers what to do. And in a world where software is increasingly built by collaborating with agents, I think it’s the best starting point for an AI-native interface. But for the terminal to play that role, it must evolve so profoundly that it becomes something else entirely.
We need a new developer environment
In the new workflow, developers don’t just execute commands; they describe outcomes and give guidance on how to achieve them. “Set up a staging environment.” “Find the root cause of this error.” “Write a migration script.” The agent interprets, acts, and reports back. Sometimes it needs approval; sometimes it runs autonomously. But the developer is always in control.
For that to work well, the interface must feel like a natural part of how developers think and build. It should understand plain language and be able to turn it into the right actions. It must stay aware of what’s happening across the system in real time, so the agent isn’t flying blind. Developers should be able to see precisely what the agent is doing, step in, make changes, or take over if needed. And it shouldn’t hide anything—no black boxes, no mystery commands. If you’re going to trust an agent to do real work, you need complete visibility into how and why it’s doing it.
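As a thought experiment, here’s what that control loop might look like in miniature, again in Python. Everything in it is illustrative rather than any product’s API: `is_low_risk` stands in for whatever approval policy such an environment would expose, and the developer can approve, edit, or skip each proposed command before it runs.

```python
import subprocess

def is_low_risk(command: list[str]) -> bool:
    # Illustrative policy, not a real product's: read-only commands run
    # autonomously; anything else waits for the developer.
    return command[0] in {"ls", "cat", "git"} and "push" not in command

def run_step(command: list[str]) -> str:
    # The proposed command is always shown in full: no black boxes.
    if not is_low_risk(command):
        answer = input(f"Agent wants to run: {' '.join(command)} [y/N/e(dit)] ")
        if answer.lower().startswith("e"):
            command = input("Edit command: ").split()
        elif answer.lower() != "y":
            return "skipped by developer"
    result = subprocess.run(command, capture_output=True, text=True)
    # Report back: the output (or the error) becomes the agent's next context.
    return result.stdout if result.returncode == 0 else result.stderr

print(run_step(["git", "status"]))       # low-risk: runs autonomously
print(run_step(["rm", "-rf", "build"]))  # high-risk: asks for approval first
```

The interesting part isn’t the policy itself but where it lives: in the interface, where the developer can see and override every step.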
I don’t think this interface lives in an IDE, and while I think it works best rooted in a command-line UX, it doesn’t live in a traditional terminal either. New command-line approaches like Claude Code, OpenAI Codex CLI, and now Gemini CLI all show that the paradigm works. But they’re limited: they operate as single CLI apps, not full environments, and each is tied to a single model provider. We need something more capable and integrated to support real AI-assisted development across the full lifecycle. We need something new.
At Warp, we call it an Agentic Development Environment (ADE). Warp just launched its own ADE, but we didn’t coin the term to name a product. The industry needs new language for a fundamental shift already underway; we’re naming something that is becoming a category of its own.
It’s still early, but there’s a real chance the future of development is rooted in something we’ve had all along. Not the terminal as it exists today, but a new kind of interface built on its foundations: agent-aware, prompt-native, and system-level by default.
If we’re going to work alongside AI to build software, we’ll need an environment designed for that partnership. My bet is the command line, reimagined, is the right place to start.