MCP servers are the bridge between AI agents and APIs
Plus, how MCP is influencing API providers
A guest post by Gil Feig, co-founder of Merge.
At Merge, we've built hundreds of API integrations over the past few years, and along the way we've seen every flavor of API design, authentication scheme, and data format imaginable.
But over the past year, something fundamentally different has emerged in how we think about API connectivity: the Model Context Protocol (MCP).
AI agents have always been capable of making API calls, but MCP servers may represent the most significant shift in how those calls actually happen. They're not replacing APIs; instead, they create a structured bridge that makes API interactions with AI agents work better at scale.
To better understand MCP’s impact, we’ll break down the problems it addresses, how it works, and the long-term influence it’ll have on API providers.
The Problem MCP Servers Solve
Before MCP, getting AI agents to work reliably with APIs was like trying to teach someone to drive by only describing traffic patterns. The agents could technically make API calls, but the interactions were brittle, context-poor, and required hand-holding.
Consider a typical pre-MCP scenario: an AI agent needs to sync customer data from Salesforce. You'd write custom code to handle the API call, manage authentication, parse responses, and handle errors. Then you'd discover the agent doesn't understand Salesforce's field relationships. It might try to create a Contact without linking it to an Account, or it would repeatedly hit rate limits because it couldn't track request quotas. Each API integration became a custom engineering project.
The predominant issue was the lack of standardized context management between AI agents and APIs. APIs are designed for deterministic, programmatic access, but AI agents need something more conversational and adaptive. Without a bridge, you end up with agents that can technically call APIs but can't maintain context across multi-step workflows or adapt their behavior based on API responses.
The Role of MCP
MCP servers solve this by providing a standardized layer that sits between AI agents and APIs. They don't replace API calls (when an MCP server creates that Jira ticket, it's still making a call to Jira's API), but they provide the contextual scaffolding that makes these interactions reliable and scalable.
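To make that concrete, here's a minimal sketch of that Jira example built with the official MCP Python SDK's FastMCP helper. The environment variable names and the simplified payload are assumptions for illustration, and the Bearer header is a stand-in (Jira Cloud actually uses Basic auth with an email plus API token), so consult Jira's REST API docs for the exact scheme your instance requires.

```python
# A minimal sketch: an MCP tool that still calls Jira's REST API under the
# hood. Env var names are illustrative; the payload is simplified.
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("jira-bridge")

JIRA_BASE_URL = os.environ["JIRA_BASE_URL"]   # e.g. https://yourco.atlassian.net
JIRA_TOKEN = os.environ["JIRA_API_TOKEN"]     # never hardcode credentials

@mcp.tool()
def create_ticket(project_key: str, summary: str, description: str) -> str:
    """Create a Jira issue and return its key."""
    response = httpx.post(
        f"{JIRA_BASE_URL}/rest/api/2/issue",
        headers={"Authorization": f"Bearer {JIRA_TOKEN}"},
        json={
            "fields": {
                "project": {"key": project_key},
                "summary": summary,
                "description": description,
                "issuetype": {"name": "Task"},
            }
        },
    )
    response.raise_for_status()
    return response.json()["key"]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

The agent never sees the HTTP plumbing; it just sees a well-described tool it can call, while the server handles the actual API mechanics.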
Three capabilities make MCP servers transformative:
Standardized integration: MCP provides a common protocol for LLMs to connect with external tools and data sources.
Rich resource access: MCP servers can expose various resources (files, databases, APIs) that LLMs can access contextually during conversations.
Dynamic tool access: LLMs can discover and use the tools an MCP server exposes without requiring hardcoded integrations (a minimal sketch follows this list).
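Here's a sketch of the resource side using the same FastMCP helper. The crm:// URI scheme and the record fields are invented for illustration; in a real server this function would query your CRM. Clients then find tools and resources through the protocol's standard listing requests rather than hardcoded bindings.

```python
# A sketch of resource exposure: the server publishes read-only data under a
# made-up crm:// URI scheme that an LLM can read mid-conversation.
import json

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-resources")

@mcp.resource("crm://customers/{customer_id}")
def get_customer(customer_id: str) -> str:
    """Return a customer record for the model to read as context."""
    # In practice this would query your CRM's API or database.
    record = {"id": customer_id, "name": "Acme Corp", "plan": "enterprise"}
    return json.dumps(record)
```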
What makes this a bigger shift than other AI developments is scale. Fine-tuning models or improving prompt engineering might optimize specific use cases, but MCP servers unlock entire categories of AI applications that were previously impractical to build.
The possibilities are endless: AI agents can use candidate data from applicant tracking systems (ATSs) to recommend high-fit candidates, create and update support tickets on behalf of customer-facing teams, automate onboarding tasks based on the new hire's data in the integrated HR information system (HRIS), and more!
How This is Changing API Provider Strategies
API providers are starting to recognize that their traditional design assumptions don't hold in an AI-first world. The shifts I'm seeing fall into two categories: search endpoints and rich metadata.
Search endpoints: Previously, these were often just nice-to-haves for power users. Now they're essential because AI agents need to find relevant data before they can act on it. APIs that lack robust search capabilities create friction for MCP server implementations; an LLM shouldn't need to sync a full dataset just to find the relevant information.
Rich metadata: Request patterns from AI agents are wholly different from those of traditional apps. Agents make more exploratory calls and need richer metadata in responses. For example, an API provider makes agent interactions far more reliable by returning detailed, actionable messages when a request fails.
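A hedged sketch tying both points together: an MCP tool that leans on a hypothetical upstream /search endpoint rather than syncing everything, and translates HTTP failures into the kind of structured, actionable detail an agent can reason about. The endpoint URL, parameters, and error fields are all illustrative.

```python
# A sketch combining both ideas: search-first data access plus rich,
# structured error metadata instead of a bare "400 Bad Request".
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-search")

@mcp.tool()
def search_records(query: str, record_type: str = "contact", limit: int = 20) -> dict:
    """Search upstream records by keyword, type, and result limit."""
    try:
        response = httpx.get(
            "https://api.example.com/v1/search",  # hypothetical endpoint
            params={"q": query, "type": record_type, "limit": limit},
        )
        response.raise_for_status()
        return {"results": response.json().get("results", [])}
    except httpx.HTTPStatusError as exc:
        # Give the agent something to act on: status, the provider's own
        # message, and a hint about how to correct the request.
        return {
            "error": True,
            "status": exc.response.status_code,
            "message": exc.response.text,
            "hint": "Verify that record_type is a supported type and that "
                    "limit is within the provider's allowed range.",
        }
```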
But MCP also exposes gaps in current API design. Traditional pagination assumes you know roughly what you're looking for. AI agents often need to explore datasets more organically, which breaks down with cursor-based pagination schemes. Similarly, credential management becomes more complex when agents need to dynamically access different APIs based on user context. The most forward-thinking API providers are starting to design "agent-first" endpoints, where APIs are optimized for the exploratory, context-rich interactions that AI agents need.
Preparing for the MCP Era
While MCP adoption is still emerging, engineering teams can take concrete steps today to position themselves for this shift.
For teams building customer-facing API products:
Enhance your search capabilities: Audit your existing search endpoints and prioritize building robust filtering, sorting, and query functionality. As I mentioned earlier, AI agents will rely heavily on search to discover relevant data before taking action
Improve your error messaging: Review your API error responses and add more descriptive, actionable error messages. AI agents benefit from detailed context about what went wrong and how to fix it
Document your APIs thoroughly: Ensure your OpenAPI specifications are comprehensive and up-to-date. Many MCP server implementations use API documentation to automatically generate tools
Consider rate limiting strategies: Since AI agents can make more exploratory API calls, review your rate limiting to ensure it accommodates this usage pattern without blocking legitimate use (see the token-bucket sketch after this list)
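On that last point, a token bucket with a generous burst allowance is one pattern that fits agent traffic. This is a generic sketch, not a prescription; the rate and burst numbers are placeholders.

```python
# A minimal token-bucket sketch for bursty, exploratory agent traffic:
# a larger burst allowance with a steady refill rate, rather than a hard
# per-second cap. Numbers are illustrative.
import time

class TokenBucket:
    def __init__(self, rate: float = 5.0, burst: int = 60):
        self.rate = rate          # tokens refilled per second
        self.capacity = burst     # burst size agents can draw down
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if one request may proceed, consuming a token."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The burst capacity absorbs an agent's exploratory spike while the refill rate still caps sustained throughput.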
For teams integrating with external APIs:
Start small with internal experimentation: Begin by building MCP servers for internal tools and APIs where you can control both sides of the integration
Choose your integration approach: Evaluate whether to build MCP servers in-house or use existing solutions that provide pre-built, maintained integrations
Focus on authentication early: Since MCP doesn't handle authentication natively, establish clear patterns for credential management and secure token handling (a credential-loading sketch follows this list)
Plan for context management: Design your AI workflows to leverage MCP's strength in maintaining context across multi-step operations
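One hedged pattern for the authentication point: load credentials from the environment at server startup and keep them entirely server-side, so tokens never pass through the model's context. The HRIS endpoint and variable names below are hypothetical.

```python
# A credential-handling sketch: tokens live in the environment, are read once
# at startup, and never appear in model-visible inputs or outputs.
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("hris-bridge")

# Fail fast at startup if the credential is missing, not at first tool call.
HRIS_TOKEN = os.environ["HRIS_API_TOKEN"]

@mcp.tool()
def get_employee(employee_id: str) -> dict:
    """Fetch an employee record from the (hypothetical) HRIS API."""
    response = httpx.get(
        f"https://api.example-hris.com/v1/employees/{employee_id}",
        headers={"Authorization": f"Bearer {HRIS_TOKEN}"},
    )
    response.raise_for_status()
    return response.json()
```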
Regardless of your use case for MCP, it's essential to build observability into your API integrations. AI agents tend to generate dynamic and often unpredictable usage patterns that need to be monitored and understood.
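As a starting point, even a thin decorator that logs every tool invocation with its outcome and latency makes agent behavior visible. This sketch uses only the standard library; apply it beneath whatever tool-registration decorator your framework uses.

```python
# A minimal observability sketch: log each tool call's name, outcome, and
# duration so unpredictable agent usage patterns show up in your logs.
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("mcp-observability")

def observed(tool_fn):
    """Wrap a tool function with success/error logging and timing."""
    @wraps(tool_fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            result = tool_fn(*args, **kwargs)
            logger.info("tool=%s status=ok duration=%.3fs",
                        tool_fn.__name__, time.monotonic() - start)
            return result
        except Exception:
            logger.exception("tool=%s status=error duration=%.3fs",
                             tool_fn.__name__, time.monotonic() - start)
            raise
    return wrapper
```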
You should also establish strong security guardrails, such as data loss prevention policies and granular access controls, before connecting AI agents to sensitive systems. This helps mitigate risks from unintended behaviors or data exposure.
Finally, set aside time to track MCP itself. The protocol is advancing quickly, and staying aligned with its changes will keep your integrations compatible with future tooling and resilient to breaking changes in the ecosystem.
Looking Forward
The trajectory here is clear: MCP servers will become the standard interface layer for AI-API interactions. But the implications run deeper than just technical architecture.
For engineering teams, this means thinking about API design differently. Instead of optimizing APIs purely for direct application use, teams need to consider how agents will interact with their data. This affects everything from response formats to rate limiting strategies.