Common Criticisms of MCP (And Why They Miss the Point)
We get it. MCP is complex, and the spec is still evolving. But let’s address some common criticisms head-on. I know this first one looks like a straw man, but we’ve seen it more than once:
“But MCP is just another API wrapper”
Yes, it’s a wrapper.
But the underlying API can be just about anything: a database, a web service, a file system, or even a custom tool. MCP servers wrap these APIs in a way that makes them predictable from the LLM client’s perspective. The point is that client code is not tied to a specific API implementation.
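To make “wrapping” concrete, here’s a minimal sketch of an MCP server that fronts a weather lookup, assuming the official `mcp` Python SDK’s FastMCP helper (the hardcoded response is a stand-in for a real API call):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    # A real server would call a weather API, a database, or a local
    # service here; the MCP client neither knows nor cares which.
    return f"Sunny in {city}"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

The same decorator-and-schema pattern applies whether the tool fronts a web service, a database query, or a local file read.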
The problem with calling it “just” another API wrapper is that you ignore the benefits of the protocol’s design:
- Standardized communication: MCP defines a consistent way for clients to interact with servers, regardless of the underlying API. This means you can swap out servers without changing client code.
- Dynamic tool discovery: When you connect Claude to a project management MCP server, it can discover available tools at runtime - “Oh, this workspace has custom fields for priority and sprint. I can filter by those.” The LLM adapts to what’s available rather than failing on hardcoded function calls.
- Bidirectional communication: MCP servers can send messages back to the LLM client, allowing it to react to events in real time. For example, a database MCP server can notify the MCP client when a new record is added (via the MCP `notifications/resources/list_changed` message), allowing the LLM client to update context without needing to poll or re-query (see the sketch after this list).
- Session management across tools: An LLM helping with data analysis can maintain an authenticated session with your database, keep a Jupyter Notebook kernel running, and preserve Matplotlib figure states - all simultaneously. This benefit is most apparent when you consider an MCP server that wraps a web browser. The MCP server can maintain the state of the browser session, allowing the LLM to interact with web pages as if it were a human user.
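To ground the bidirectional point, here’s roughly what that notification looks like on the wire. This is a minimal sketch assuming the stdio transport’s newline-delimited JSON-RPC framing; a real server would use an MCP SDK rather than emitting raw messages:

```python
import json
import sys

def send_notification(method: str, params: dict | None = None) -> None:
    """Emit a JSON-RPC 2.0 notification over the stdio transport.

    MCP's stdio transport frames each message as one line of JSON on
    stdout. Notifications carry no "id" field, so no reply is expected.
    """
    message: dict = {"jsonrpc": "2.0", "method": method}
    if params is not None:
        message["params"] = params
    sys.stdout.write(json.dumps(message) + "\n")
    sys.stdout.flush()

# Tell the connected client the resource list changed (e.g. a new
# database record appeared) so it can re-fetch instead of polling.
send_notification("notifications/resources/list_changed")
```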
If you broaden your definition of “API” beyond web services, a protocol like MCP becomes essential.
In short:
We need a protocol that provides a consistent way to interact with tools and services, regardless of their underlying implementation.
❌ The definition of API, as used by critics of MCP, is too narrow and limited to web services. APIs vary too widely to be pigeonholed into a single category.
✅ MCP, as a wrapper, provides clients with a standardized interface, dynamic tool discovery, bidirectional communication, and session management across tools.
“Why use MCP instead of REST, OpenAPI, or agents.json?”
REST excels at CRUD operations, and it saved us from the horrors of RPC, so why are we reinventing the wheel, or worse, driving in reverse?
The difference is that most of the time, agents are not doing CRUD operations. They often need to perform multistep actions on remote and local services combined, while maintaining state and context.
For example, the Playwright MCP server maintains a live browser session across tool calls, so an agent can navigate, fill in forms, and extract data as one continuous, stateful workflow rather than a series of disconnected requests.
We’ve seen the AGPL-licensed agents.json Specification take a different approach: it describes agent-facing API workflows on top of OpenAPI, but like the specifications it builds on, it remains stateless and unidirectional.
In short:
The problems LLMs solve often require stateful interactions with tools and services, not just simple CRUD operations.
| REST, OpenAPI, and agents.json | MCP |
| --- | --- |
| Stateless and unidirectional. The LLM client would need to manage state and rely on polling or re-querying to maintain context. | Stateful, dynamic, and bidirectional. MCP allows LLMs to interact with tools and services in a way that maintains context and state across multiple interactions. |
“Why not just use function calls instead of MCP?”
Native LLM function calling already enables tool use. Why add MCP?
This is a common and completely valid question. Function calling is often much simpler to implement than MCP, but here’s where the architectural differences matter:
With function calls, all integration logic lives in the LLM client and extends both ways - to the LLM (with proprietary function call interfaces) and the external services. Want to add Slack integration? The LLM client needs Slack-specific authentication, error handling, rate limiting, and response parsing. Want database access? Add database drivers, connection pooling, and SQL validation to the LLM client. Every new integration bloats the LLM client with tool-specific code.
Yes, you could abstract the detail away with a library, but we’d still end up with custom implementations for each service, LLM, and client combination. This leads to a tangled mess of code that is hard to maintain, test, and scale.
With MCP, the MCP client speaks one protocol to any number of specialized servers. Adding Slack means deploying a Slack MCP server - no client changes required. It’s a plugin architecture that separates concerns: The MCP client handles conversation, MCP servers handle tool execution, and the protocol manages communication between them.
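Here’s what that looks like from the client side: a minimal sketch using the official `mcp` Python SDK, where the only server-specific detail is the launch command (the example server package is illustrative):

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Swap this launch command for any MCP server (Slack, database,
    # browser) - the rest of the client code stays identical.
    server = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-everything"],
    )

    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # dynamic tool discovery
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```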
| Function calls | MCP |
| --- | --- |
| All integration logic lives in the LLM client. Each new service requires client-side implementation. | The MCP client speaks one protocol to multiple specialized servers. New integrations require no client changes. |
| Each LLM provider has its own function call interface, leading to vendor lock-in and limited portability. | The MCP protocol is vendor-agnostic and portable. Any LLM client that supports MCP can connect to any MCP server. |
This isn’t about capability - it’s about managing complexity at scale. Function calls hide the complexity until it’s too late. MCP acknowledges and contains it from the start.
Consider the classic “What’s the weather like in Paris?” example:
With function calling, the LLM client would need to handle each LLM provider’s function call interface, for example (the snippets below are illustrative sketches; exact model names and SDK parameters vary by version):
Using Anthropic’s Python SDK:
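```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model name
    max_tokens=1024,
    tools=[
        {
            # Anthropic puts the name at the top level and calls the
            # JSON Schema "input_schema".
            "name": "get_weather",
            "description": "Get the current weather for a given city.",
            "input_schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name, e.g. Paris"},
                },
                "required": ["city"],
            },
        }
    ],
    messages=[{"role": "user", "content": "What's the weather like in Paris?"}],
)
```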
With OpenAI’s Python SDK:
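```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    tools=[
        {
            # OpenAI wraps the definition in a "function" object and
            # calls the JSON Schema "parameters".
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a given city.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string", "description": "City name, e.g. Paris"},
                    },
                    "required": ["city"],
                },
            },
        }
    ],
    messages=[{"role": "user", "content": "What's the weather like in Paris?"}],
)
```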
Note the subtle differences in how each provider defines the function. The complexity of function calling grows with each new provider, and the LLM client becomes tightly coupled to specific implementations. This makes it hard to switch providers or reuse code across different LLMs.
If you’re only using one LLM provider in your application, and you don’t expect to add more, then function calls are a perfectly valid choice. But if you want to build a portable, reusable client that can work with any LLM provider, MCP is the way to go.
“MCP is just a bad, terrible, no-good excuse for a protocol”
Yes, we’ve seen the hot takes. MCP is “over-engineered,” “unnecessarily complex,” and “solving problems that don’t exist.” One particularly spicy commenter called it “a kitchen sink of anti-patterns” destined to be forgotten within a year.
Let’s address the elephant in the room: MCP is complex. It uses JSON-RPC 2.0 over `stdio`, SSE, or “Streamable HTTP”. It has its own transport negotiation, capability advertisement, and session management. If you squint, it looks like someone reinvented the Language Server Protocol (LSP) but made it worse.
Could MCP have used REST with webhooks? Sure, if all servers were hosted publicly. Could it have been “HTTP” instead of the much-derided `stdio`? Absolutely. But `stdio` is so well-supported and easy to implement that it makes complete sense for local servers.
Another common complaint is the omission of WebSocket support. Yes, MCP doesn’t use WebSocket, but its Streamable HTTP transport allows streaming responses over plain HTTP while maintaining the request-response model. This is a design choice, not an oversight.
Using WebSocket brings its own complexities, and getting stuck on the “WebSocket vs HTTP” debate misses the point. MCP is about enabling dynamic, stateful interactions between LLMs and tools, not about picking your favorite transport layer.
We’re not saying MCP is perfect. The specification is under active development, the tooling is immature, and yes, the initial learning curve is steep. But dismissing it as “over-engineered” often comes from people who haven’t grappled with the actual problems it solves. Once the SDKs mature and best practices emerge, much of this complexity will be abstracted away - just like nobody complains about TCP’s three-way handshake anymore.
“The S in MCP is for security”
Ah yes, security - MCP’s favorite talking point. By now, you’ve probably seen the breathless blog posts about “Tool Poisoning Attacks” and “Full-Schema Poisoning” complete with scary diagrams and proof-of-concept exploits.
MCP servers are code you run on your machine. Shocking revelation - if you run malicious code, bad things happen. This isn’t a protocol vulnerability, it’s basic hygiene.
The discovered “vulnerabilities” essentially boil down to:
- If you connect to a malicious MCP server, it can trick your LLM into doing bad things.
- If an MCP server changes its behavior after you’ve approved it (a “rug pull”), your LLM might leak sensitive data.
- If you paste untrusted data into tool descriptions, the LLM might follow those instructions.
In other news, if you install a malicious npm package, it can delete your files. If you `pip install` from a sketchy source, it might mine cryptocurrency. If you `curl | bash` from the internet… well, you get the idea.
Yes, MCP has unique attack vectors because LLMs process tool descriptions as instructions. That’s concerning and worth addressing. But the “MCP is fundamentally insecure” takes miss the forest for the trees. The real security model is the same as any code execution environment: Don’t run code you don’t trust.
The practical mitigations are straightforward:
- Review MCP servers before connecting to them (just like you’d review any dependency).
- Use signed and versioned MCP servers from trusted sources.
- Run MCP servers in containers or sandboxed environments (see the sketch after this list).
- Implement proper access controls and least-privilege principles.
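As a sketch of the container approach, an MCP client can launch a server inside a locked-down Docker container instead of directly on the host. The image name is hypothetical and the flags are illustrative, reusing the `mcp` Python SDK’s `StdioServerParameters` from earlier:

```python
from mcp import StdioServerParameters

# Run the server in a container with no network access, a read-only
# root filesystem, and no Linux capabilities. It can still speak MCP
# over stdio (-i keeps stdin open) but can't touch the host.
sandboxed_server = StdioServerParameters(
    command="docker",
    args=[
        "run", "-i", "--rm",
        "--network=none",
        "--read-only",
        "--cap-drop=ALL",
        "example/mcp-server",  # hypothetical server image
    ],
)
```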
The MCP ecosystem is already moving in this direction. Docker’s MCP Toolkit runs servers in isolated containers, and the specification now includes tool annotations like `readOnlyHint` and `destructiveHint` so clients can treat risky tools with more care. (I still have my reservations about the annotations, but that’s a topic for another day.)
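For reference, those annotations travel with the tool definition in a `tools/list` response. A sketch of the shape, with illustrative values:

```python
# A tool definition carrying safety annotations, as it would appear in
# a tools/list response (values are illustrative).
tool_definition = {
    "name": "query_database",
    "description": "Run a SQL query against the analytics database.",
    "inputSchema": {
        "type": "object",
        "properties": {"sql": {"type": "string"}},
        "required": ["sql"],
    },
    "annotations": {
        "readOnlyHint": True,      # the tool does not modify its environment
        "destructiveHint": False,  # no irreversible updates
    },
}
```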
Perfect security is the enemy of adoption, and MCP chose pragmatism over paranoia. The security concerns are real, but they’re solvable with the same approaches we use everywhere else in software: Trust but verify, defense in depth, and maybe don’t give your AI assistant access to your SSH keys without thinking it through first.