
What is MCP?

The Model Context Protocol (MCP) is an open standard that enables AI agents to securely connect to external tools, data sources, and services. Think of it as a universal translator between AI models and the real world.

The problem MCP solves

Before MCP, connecting AI agents to external systems required building custom integrations for each combination of agent and tool. This meant:

  • M × N integration complexity: Every AI agent needed custom code for every tool it wanted to use.
  • A fragmented ecosystem: There was no standardized way for tools to expose their capabilities to AI agents.
  • Security challenges: Each integration handled permissions and access differently.
  • More maintenance overhead: Updates to any system broke multiple custom integrations.
[Diagram: Multiple AI agents (Claude Desktop, VS Code, and Cursor) each require separate custom integrations to connect to different tools (GitHub, PostgreSQL, and the user's file system), creating a complex web of M × N connections between agents and services.]

In the diagram above, we see how each application (Claude Desktop, VS Code, Cursor) requires custom integrations to access various tools (GitHub, PostgreSQL, the user’s file system). This creates a complex web of M × N connections that is difficult to maintain and scale.

With MCP, we introduce a standardized protocol that simplifies this process.

There is now a single integration point for each tool, allowing any compatible AI agent to access it without custom code.

[Diagram: Multiple AI agents (Claude Desktop, VS Code, and Cursor) connect through MCP to multiple tools (GitHub, PostgreSQL, and the user's file system), creating a simplified architecture with M + N connections instead of M × N.]

In the diagram above, we see how MCP simplifies the architecture. Each AI agent (Claude Desktop, VS Code, Cursor) connects through a single MCP layer to access multiple tools (GitHub, PostgreSQL, the user’s file system).
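The scaling difference is easy to quantify. With the three agents and three tools shown above, a quick back-of-the-envelope calculation:

```python
agents = ["Claude Desktop", "VS Code", "Cursor"]
tools = ["GitHub", "PostgreSQL", "file system"]

# Without MCP, every agent needs a custom integration per tool (M × N);
# with MCP, each side implements the protocol once (M + N).
without_mcp = len(agents) * len(tools)
with_mcp = len(agents) + len(tools)

print(without_mcp)  # 9 integrations to build and maintain
print(with_mcp)     # 6 protocol implementations
```

The gap widens quickly: at ten agents and fifty tools, it is 500 custom integrations versus 60 protocol implementations.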

How MCP works

MCP establishes a client-server architecture made up of:

  • MCP clients: AI agents, applications, or development tools (like Claude Desktop, VS Code extensions, and Cursor)
  • MCP servers: Services that expose tools, data, or capabilities (like GitHub, databases, and APIs)
  • A standardized protocol: A common language both clients and servers understand

Instead of building custom bridges, you now have:

  • M + N simplicity: One MCP integration per tool, which works with all compatible AI agents
  • Standardized security: Consistent authentication and authorization patterns
  • Easy maintenance: Protocol updates that benefit the entire ecosystem
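Concretely, clients and servers exchange JSON-RPC 2.0 messages over a transport such as stdio or HTTP. Here is a minimal sketch of the three core message shapes; the method names follow the MCP specification, while the tool name and arguments are hypothetical placeholders:

```python
import json

# 1. The client opens the session and advertises its protocol version.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# 2. The client asks the server which tools it exposes.
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# 3. The client invokes one of the advertised tools by name.
call_tool = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "query_database",          # hypothetical tool name
        "arguments": {"sql": "SELECT 1"},  # hypothetical arguments
    },
}

for message in (initialize, list_tools, call_tool):
    print(json.dumps(message))
```

In practice the official SDKs construct and route these messages for you; the point is that every client and server agrees on the same handful of shapes.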

Key benefits of MCP

Depending on your role in the ecosystem, MCP provides several advantages.

Advantages for AI agent users

When using MCP-compatible AI agents, like Claude Desktop, VS Code extensions, or Cursor, you get:

  • Access to more tools and data sources without custom integrations
  • A consistent interaction pattern across different AI agents
  • Better security through standardized authentication and authorization

Advantages for tool providers

When you build an MCP server to expose your tools, you gain:

  • A single integration point for all compatible AI agents
  • More users for your tools without custom integration work
  • Capacity to focus on your core product instead of on AI integrations

Advantages for agent developers

When you build MCP-compatible applications or agents, you benefit from:

  • Access to a growing ecosystem of tools and services
  • Simplified development with standardized protocols
  • Increased collaboration opportunities across different AI agents

Real-world example of MCP in action

See how MCP simplifies the process of connecting AI agents to external tools and data sources.

Imagine you want to use Claude Desktop to read a GitHub issue from your company’s private repository and query your PostgreSQL database for related data.

The manual process

If this were a one-off task, you would typically need to:

  1. Copy and paste the contents of the GitHub issue into Claude Desktop.
  2. Describe the database schema to Claude.
  3. Ask Claude to write a custom query based on the pasted schema.
  4. Copy and paste the query into your database client.
  5. Copy and paste the results back into Claude Desktop.
  6. Manually combine the information from the GitHub issue and database query results.

The custom integration process

If you did this often, you might write a custom integration that connects Claude Desktop to GitHub and PostgreSQL, which would require ongoing maintenance and updates. The steps involved are cumbersome and error-prone, especially if the GitHub issue or database schema changes. This would get even more complex if Claude needed to wait for long-running queries or if you wanted to access multiple tools at once.

The MCP way

With MCP, you would:

  1. Install the GitHub MCP Server and a PostgreSQL MCP server.
  2. Ask Claude: “Fetch issue #123 from the my-org/my-repo repository and query the my_table table to find related data. Interpret the results and summarize the information.”

This process is much simpler, more secure, and less error-prone. You can also easily switch to a different AI agent that supports MCP without needing to rewrite your integrations.
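For Claude Desktop, "installing" a server usually amounts to adding an entry to its claude_desktop_config.json file. A sketch of what that might look like; the exact commands, package names, and environment variables vary by server, so treat these as placeholders and check each server's documentation:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"]
    }
  }
}
```

Once the configuration is in place, Claude Desktop launches the servers itself and discovers their tools automatically.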

The Future of MCP

It is still early days for MCP, but the community is rapidly building out the ecosystem. The specification is changing, and new features are being added to the SDKs. Here are some areas where we expect to see significant growth in Q3 of 2025:

Client-server feature parity

As the MCP Specification matures, we expect to see more features implemented in both the MCP server and MCP client SDKs. This will make it easier to build and use MCP servers, improving the overall developer experience.

The current state of MCP implementations shows some gaps between what the specification defines and what the SDKs support. As these gaps close, developers will have access to a more complete and consistent toolset across different languages and platforms.

We’re already seeing rapid iteration on the TypeScript SDK, which serves as the reference implementation. As patterns emerge and stabilize there, we expect to see similar improvements in the Python, Go, and other language SDKs.

Tool and resource discovery

The MCP Specification enables much more dynamic tool and resource discovery. We expect to see more servers implementing this feature, allowing clients to use fewer tokens when providing context to LLMs.

Dynamic discovery is already possible today, but many servers still register all their tools at initialization and never update them. As the ecosystem matures, we’ll see more sophisticated implementations that:

  • Add and remove tools based on authentication state
  • Expose different capabilities based on user permissions
  • Dynamically generate tools based on runtime configuration
  • Notify clients of changes without requiring reconnection

This will make MCP servers more efficient and responsive, reducing the overhead of maintaining unused tools in the LLM’s context.
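As a sketch of the first two patterns, here is how a server might filter its tool list by authentication state and tell clients to refresh. The notification method name comes from the MCP specification; the tools and the auth logic are hypothetical:

```python
# Plain dicts stand in for an MCP server's tool registry.
ALL_TOOLS = [
    {"name": "read_issue", "requires_auth": False},
    {"name": "merge_pull_request", "requires_auth": True},
]

def visible_tools(authenticated: bool) -> list[dict]:
    """Return only the tools the current session is allowed to see."""
    return [t for t in ALL_TOOLS if authenticated or not t["requires_auth"]]

def list_changed_notification() -> dict:
    """JSON-RPC notification telling the client to re-fetch tools/list."""
    return {"jsonrpc": "2.0", "method": "notifications/tools/list_changed"}

# Before login the client sees one tool; after login, both — and the
# server nudges it to refresh by sending the notification.
print([t["name"] for t in visible_tools(False)])  # ['read_issue']
print([t["name"] for t in visible_tools(True)])
print(list_changed_notification()["method"])
```

The key design point: the client never hardcodes the tool list, so the server is free to change it mid-session.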

Security and access control

As MCP servers become more widely used, we expect to see more focus on security and access control. The specification already includes OAuth 2.1 support, but this addresses only a small part of the security model. Better guardrails against prompt injection and other attacks will be needed as the ecosystem grows.

This is, unfortunately, a common theme in the LLM space, and perhaps one that can’t be solved completely. It isn’t unique to MCP, though.

We anticipate several security-related developments:

  • Sandboxing and containerization: More tools like Docker’s MCP Toolkit that run servers in isolated environments
  • Permission models: Granular controls over what tools can access and modify
  • Audit logging: Better tracking of tool invocations and data access
  • Security scanning: Automated tools to detect common vulnerabilities in MCP servers
  • Signature verification: Trust chains for MCP server distribution and updates

The community is already moving in this direction, but we expect to see more standardization and tooling support in the coming months.
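To make the permission-model and audit-logging ideas concrete, here is a small sketch of a policy check wrapped around tool invocation. The policy format and tool names are hypothetical, not part of the MCP specification:

```python
import datetime

# Deny by default; explicitly allow read-only tools.
POLICY = {
    "read_file": {"allowed": True},
    "write_file": {"allowed": False},
}

AUDIT_LOG: list[dict] = []

def invoke_tool(name: str, arguments: dict) -> str:
    """Check the policy, record the attempt, and run or refuse the tool."""
    allowed = POLICY.get(name, {"allowed": False})["allowed"]
    AUDIT_LOG.append({
        "tool": name,
        "arguments": arguments,
        "allowed": allowed,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    if not allowed:
        return f"denied: {name}"
    return f"ok: {name}"  # a real gateway would dispatch to the server here

print(invoke_tool("read_file", {"path": "README.md"}))   # ok: read_file
print(invoke_tool("write_file", {"path": "README.md"}))  # denied: write_file
```

A gateway like this can sit between any MCP client and server precisely because the protocol standardizes what a tool call looks like.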

Registries and marketplaces

We expect to see more MCP server registries and marketplaces where developers can share and discover MCP servers. This will make it easier to find and use existing servers, as well as to contribute to the ecosystem.

A trusted registry would also help with security, as users could verify the authenticity of the MCP servers they connect to. Think npm or PyPI, but for MCP servers: package verification, versioning, dependency management, and security scanning.

Early registries like mcp.so show the demand for discoverability, but we need more robust infrastructure to support enterprise adoption. This includes:

  • Versioned releases with semantic versioning
  • Dependency resolution for servers that depend on other servers
  • Security scanning and vulnerability alerts
  • Usage analytics and ratings
  • Official/verified badges for trusted publishers
  • Private registries for enterprise deployments

Cross-platform standardization

As MCP gains adoption across different LLM providers and platforms, we expect to see more effort toward ensuring consistent behavior across implementations. Currently, different MCP clients may handle the same server differently, leading to fragmentation.

The community will need to develop:

  • Comprehensive test suites for MCP compliance
  • Conformance testing tools for both servers and clients
  • Reference implementations for common use cases
  • Documentation of best practices and anti-patterns

Integration with existing ecosystems

MCP doesn’t exist in isolation. We expect to see deeper integration with existing developer tools and workflows:

  • IDE integrations: Better support in VS Code, JetBrains IDEs, and other editors
  • CI/CD pipelines: Testing and deployment tooling for MCP servers
  • Monitoring and observability: Integration with APM tools and logging platforms
  • Configuration management: Tools for managing MCP server configurations at scale
  • Package managers: First-class support in language-specific package ecosystems

The verdict

Whether MCP will become the de facto standard for LLM tool integration remains to be seen. It is useful now, and it is likely to become more useful in the near future. But who knows what this space will look like in a year or two? Your guess is as good as ours.

The protocol has genuine strengths - its dynamic discovery, stateful architecture, and standardized communication model solve real problems. But it also has real complexity that may not be justified for many use cases. The ecosystem’s success will depend on:

  • How quickly the tooling matures to abstract away complexity
  • Whether major LLM providers adopt MCP broadly
  • The development of security best practices and tooling
  • The emergence of a healthy marketplace of reusable servers
  • Competition from simpler alternatives

What’s clear is that the problem MCP tries to solve - standardizing how AI agents interact with tools and services - isn’t going away. Whether MCP is the solution, or whether something else emerges, the need for better agent-tool integration will only grow as AI capabilities expand.

For now, the best approach is pragmatic: Use MCP where its benefits clearly outweigh its complexity, keep an eye on the ecosystem’s evolution, and maintain flexibility to adapt as the landscape changes.

How to get started with MCP

Ready to start using MCP? Here are your next steps.
