"We've got a suspicious package. Unknown origin. Could be nothing. Could level the block."
The bomb squad rolls up in their heavy truck, suited in Kevlar. They don't rush in. They don't bring it home and put it on the shelf next to their summer vacation photo albums. They establish a perimeter. They clear the area. They bring in the robot first. And when they finally approach the device, it's with every possible safety protocol engaged.
Now imagine that same bomb squad watching how we deploy MCP servers.
"Where's your safety gear? Your protection? Don't you have protocols for this? How do you know you can trust it?"
We’re running these powerful systems on our main development machines, giving them access to our entire filesystem, and handing over our SSH keys. We are handling digital explosives without a perimeter, without a protocol, and without a plan.
We’re talking about powerful systems with system-level access, and power without containment is just chaos waiting to happen. The bomb squad knows this instinctively. We've somehow forgotten it.
The casualties are already mounting
The proof is in the headlines. File a public GitHub issue laced with hidden instructions, and an MCP-connected agent starts leaking private repos (and you thought remote MCP servers were bulletproof, huh?). Submit a poisoned JIRA ticket, and suddenly secrets are walking off a developer's machine. These aren't theoretical attacks; they're happening right now, to real people, with real consequences.
But here's what gets me: we saw this coming.
The moment we started building agents with full access to our machines and our data platforms, the security-minded among us felt that familiar itch. That little voice whispering, "This is going to end badly."
That's why I started treating MCP servers like the powerful, potentially dangerous tools they actually are.
The containment protocol
Step 1: Assume it's powerful
Not evil. MCP and AI agents aren't inherently malicious. But they're powerful. And powerful things, when they go wrong, go really wrong. This isn't paranoia. It's respect.
Step 2: Remember why virtual environments exist
We already solved this problem (more on that later) with tools like Docker. Why? Because isolation prevents disasters. Because blast radius management is fundamental to safe computing.
The same principles apply to AI agents, but somehow we forgot them in our excitement.
Step 3: Contain from the start, not a moment later
You can spin up environments with:
No network access by default
Access to only specific folders (or none at all)
Pre-configured toolsets and dependencies
Instructions that recreate your local setup inside the container
The argument "but I need this to work on my local machine" doesn't hold water here. You can bring what matters about your local development environment into the container, and tools like Docker's MCP Toolkit make this step close to turnkey.
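If Docker is already on your machine, here's a minimal sketch of what such an environment can look like using the Docker SDK for Python. The base image, host path, and resource limits below are placeholders for illustration, not a recommendation:

```python
import docker

client = docker.from_env()

# A throwaway sandbox: no network, one read-only folder, hard resource caps.
sandbox = client.containers.run(
    "python:3.12-slim",               # placeholder base image; use whatever your agent needs
    command="sleep infinity",         # keep the container alive for the experiment
    network_mode="none",              # no network access by default
    volumes={
        "/home/me/projects/demo": {   # placeholder host path
            "bind": "/workspace",
            "mode": "ro",             # the agent can read this one folder, nothing else
        }
    },
    read_only=True,                   # read-only root filesystem
    tmpfs={"/tmp": "size=64m"},       # scratch space that disappears with the container
    mem_limit="512m",                 # cap the blast radius on resources too
    pids_limit=128,
    auto_remove=True,                 # nothing survives the experiment
    detach=True,
    name="mcp-sandbox",
)

print(sandbox.short_id)  # exec your MCP server inside, then tear it down when done
```

The same flags exist on the plain `docker run` CLI if you'd rather script it in your shell; the point is that every restriction is opt-out by exception, not opt-in by afterthought.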
My rule: every MCP experiment gets its own isolated environment. No exceptions. Even for "quick tests" or "simple experiments."
Especially for those.
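And when the "quick test" means wiring an MCP server into a client, the same rule applies: point the client's launch command at a container instead of a host binary. A sketch, assuming a client that reads the common `mcpServers` JSON config shape; the file name, server name, and image are made up for illustration:

```python
import json
from pathlib import Path

# Many MCP clients launch servers from a JSON config with a command and args.
# The trick is to make that command `docker run` rather than a bare executable.
config = {
    "mcpServers": {
        "filesystem-sandboxed": {
            "command": "docker",
            "args": [
                "run", "--rm", "-i",                           # throwaway container, stdio attached
                "--network", "none",                           # no network unless you opt in
                "-v", "/home/me/projects/demo:/workspace:ro",  # one folder, read-only
                "my-mcp-server-image",                         # hypothetical image name
            ],
        }
    }
}

# Hypothetical output path; write wherever your client expects its config.
Path("mcp_config.json").write_text(json.dumps(config, indent=2))
```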
We're reinventing virtual machines (and that's perfect)
Here's the funny part (to me, at least): the AI industry is rediscovering virtualization and calling it innovation.
When companies announce "ephemeral personal computers for agents," my first reaction is: "Congratulations, you've invented Docker containers. Welcome to 2013."
But then I think about it more, and actually? This is exactly right.
The hype engine is reinventing VMs, and that's perfect. Because VMs solved this exact problem decades ago. Isolation. Security. Controlled environments. Blast radius containment. The infrastructure is mature. The concepts are proven.
We just forgot to apply them to AI agents. It sounds a lot like the familiar story Julien Simon told recently.
Smart teams are already moving in this direction. They're treating AI agents like any other untrusted code: with isolation, monitoring, and healthy skepticism. They're building the future of AI tooling on the foundation of battle-tested infrastructure patterns.
With great power
If you’re reading this, you’re probably an early adopter. The one pushing the boundaries. The one figuring out what's possible with these tools.
That makes us responsible for figuring out what's safe.
The most impressive AI tools of the future will be the ones that accomplish the most with the least access. Secure by default. Respectful of boundaries. Working within constraints instead of demanding unlimited power and knowledge.
The teams that master this balance first will have a massive competitive advantage. They'll experiment aggressively while maintaining security. They'll iterate quickly without risking their infrastructure. They'll build trust with stakeholders who are rightfully skeptical about AI security.
Want to share your own blast radius horror stories or containment strategies? Hit me up. The community learns fastest when we share our failures openly.