AI & MCP
Announcing Speakeasy MCP Platform: build, secure, and observe tools for all your agents
Sagar Batchu
March 10, 2026 - 5 min read

From Gram to Speakeasy MCP Platform
We started Gram as the open source platform for building MCP servers. The thesis was straightforward: to power AI applications, teams needed a fast path to a deployed MCP server, and we could provide it.
Gram OSS Repository
Check out the GitHub repository to see how it works under the hood, contribute improvements, or adapt it for your own use cases. Give us a star!
View on GitHub

We were right about the importance of MCP servers, but building them turned out not to be the hard problem. After months of design partner work and dozens of customer conversations, we learned that the real blockers to MCP in production were auth, observability, and distribution (deep dive on our learnings).
Today we’re launching Speakeasy MCP Platform: the control plane for building, securing, distributing, and observing MCP servers at enterprise scale.
Your MCP control plane
We believe that every employee should be using Claude, or their preferred AI app, as their single interface for all work. No more bouncing between dashboards and SaaS apps.
We’ve built Speakeasy MCP Platform to make that transformation possible. It’s a control plane that connects every employee in your organization to your MCP servers with the security, governance, and observability you need to do it safely at scale.
Here’s what that looks like.
Build — generate, import, deploy, distribute
Most teams have dozens of APIs and SaaS tools that need to be accessible to AI, and building MCP servers for each one by hand doesn’t scale. Speakeasy MCP Platform gives you multiple paths to a production server: generate one from an API spec, import an existing server, or deploy a pre-built integration. However you build, we’ll handle deployment and distribution from there.

Build servers fast. Generate from an existing API spec, import an existing TypeScript server to get platform capabilities without rewriting anything, or deploy a pre-built integration like Stripe, GitHub, or Slack from the SaaS catalog in minutes.
Dynamic servers for token efficiency. Servers built on Speakeasy MCP Platform support progressive disclosure by surfacing just two tools, search_tools and execute_tool, instead of loading every tool definition into the agent’s context. Same functionality, better token efficiency.
One-click rollout across your org. The gap between “server exists” and “people use it” has been one of the biggest unspoken problems in the MCP ecosystem. Speakeasy MCP Platform closes it: deploy a server and make it available to your entire organization in a single step. Servers can be provisioned as Claude plugins automatically, and they work with Claude Web, Claude Desktop, Cursor, Windsurf, and any other MCP-compatible client.
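As an illustration, the search_tools / execute_tool pattern can be sketched in a few lines of TypeScript. Everything below is hypothetical (the catalog, tool names like create_invoice, and the handler shapes are invented for the example, not Speakeasy’s implementation):

```typescript
// Sketch of a dynamic MCP server: the full tool catalog stays server-side,
// and the model sees only two tools, search_tools and execute_tool.

type Tool = {
  name: string;
  description: string;
  handler: (args: Record<string, unknown>) => unknown;
};

// Hypothetical catalog; a real server might hold hundreds of tools here.
const catalog: Tool[] = [
  {
    name: "create_invoice",
    description: "Create an invoice for a customer",
    handler: (args) => ({ id: "inv_1", ...args }),
  },
  {
    name: "list_customers",
    description: "List customers in the billing system",
    handler: () => ["acme", "globex"],
  },
];

// Tool 1: search_tools returns only the matching definitions, so the agent
// spends context tokens on relevant tools instead of the whole catalog.
function searchTools(query: string): Array<{ name: string; description: string }> {
  const q = query.toLowerCase();
  return catalog
    .filter(
      (t) =>
        t.name.toLowerCase().includes(q) ||
        t.description.toLowerCase().includes(q),
    )
    .map(({ name, description }) => ({ name, description }));
}

// Tool 2: execute_tool dispatches a call to the named catalog entry.
function executeTool(name: string, args: Record<string, unknown>): unknown {
  const tool = catalog.find((t) => t.name === name);
  if (!tool) throw new Error(`unknown tool: ${name}`);
  return tool.handler(args);
}
```

An agent would first call search_tools("invoice"), get back one small definition, then call execute_tool("create_invoice", …), rather than holding every tool schema in context.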
“Speakeasy made it trivial to turn our database platform API into a production MCP server. Developers interact with branches, schemas, and deploy requests through AI agents instead of dashboards.”
— Mike Coutermarsh, PlanetScale
Secure — unified auth for the whole team
Security was the number one blocker we heard from every customer segment. Teams were building MCP servers and watching them sit unapproved for months while auth got sorted out. Speakeasy MCP Platform makes auth a platform feature, not a configuration exercise. Connect your SSO to our OAuth proxy, scope access per team, and skip the months of auth plumbing.
Unified auth layer. Every MCP server in your org gets the same auth experience regardless of how the underlying service connects. OAuth, API keys, custom tokens: Speakeasy abstracts them all behind a single OAuth 2.1 layer with dynamic client registration (DCR) and PKCE. Plug in your SSO once, and every server is covered.
Scoped access. Role-based permissions at the server, toolset, and individual tool level. Provision sub-catalogs so every team sees exactly what they need.
Client hooks. Get visibility across your entire org’s AI usage by intercepting MCP traffic at the client level. Identify shadow servers, enforce policies, and ensure every MCP connection flows through your control plane.
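For context on the PKCE piece of that OAuth 2.1 layer, here is a minimal, self-contained TypeScript sketch of the verifier/challenge exchange from RFC 7636. It illustrates the protocol step only; it is not Speakeasy’s proxy code:

```typescript
import { createHash, randomBytes } from "node:crypto";

// PKCE (RFC 7636) in three steps: the client generates a random verifier,
// sends its SHA-256 challenge with the authorization request, and later
// proves possession by presenting the raw verifier at the token endpoint.

function base64url(buf: Buffer): string {
  return buf
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

// Step 1: random code_verifier (32 random bytes -> 43-char base64url string).
function makeVerifier(): string {
  return base64url(randomBytes(32));
}

// Step 2: code_challenge = BASE64URL(SHA256(code_verifier)), method "S256".
function makeChallenge(verifier: string): string {
  return base64url(createHash("sha256").update(verifier).digest());
}

// Step 3 (server side): recompute the challenge from the presented verifier
// and compare it against the challenge stored at authorization time.
function verify(verifier: string, storedChallenge: string): boolean {
  return makeChallenge(verifier) === storedChallenge;
}
```

Because only the hash travels with the authorization request, an attacker who intercepts the authorization code still cannot redeem it without the original verifier.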
“I was able to equip the entire solutions engineering org with MCP servers in a single afternoon. Speakeasy handled auth and permissions, so we didn’t have to choose between speed and security.”
— Eli Davis, Fivetran
Observe — see what’s happening in production
Teams told us they were flying blind once servers hit production. They could see that tool calls were happening. They couldn’t see why calls were failing, which server version handled a request, or whether agents were using tools the way they’d intended.

Real-time telemetry and distributed tracing. Live visibility into every tool call, every session, every server across your organization. Trace requests across servers and sessions to understand the full path of an agent interaction, not just individual tool calls in isolation.
Performance metrics per tool. Latency, error rates, and usage patterns broken down by individual tool. Understand adoption across your organization and identify bottlenecks before they become incidents.
Actionable failure insights. Not just dashboards that tell you something broke — context that helps you understand why and fix it. Error categorization, version correlation, and root cause signals.
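To make the per-tool metrics concrete, here is a small TypeScript sketch that rolls raw call records up into per-tool error rates and p95 latency. The record and stats shapes are assumptions for illustration, not Speakeasy’s telemetry schema:

```typescript
// Aggregate per-tool telemetry: group call records by tool, then compute
// call count, error rate, and nearest-rank p95 latency for each tool.

type CallRecord = { tool: string; latencyMs: number; ok: boolean };

type ToolStats = { calls: number; errorRate: number; p95LatencyMs: number };

function aggregate(records: CallRecord[]): Map<string, ToolStats> {
  // Group records by tool name.
  const byTool = new Map<string, CallRecord[]>();
  for (const r of records) {
    const bucket = byTool.get(r.tool) ?? [];
    bucket.push(r);
    byTool.set(r.tool, bucket);
  }

  const stats = new Map<string, ToolStats>();
  for (const [tool, calls] of byTool) {
    const latencies = calls.map((c) => c.latencyMs).sort((a, b) => a - b);
    // Nearest-rank percentile: index ceil(0.95 * n) - 1 into sorted latencies.
    const p95 = latencies[Math.ceil(0.95 * latencies.length) - 1];
    const errors = calls.filter((c) => !c.ok).length;
    stats.set(tool, {
      calls: calls.length,
      errorRate: errors / calls.length,
      p95LatencyMs: p95,
    });
  }
  return stats;
}
```

A dashboard built on this kind of rollup can surface a tool whose error rate spikes after a new server version ships, which is the version-correlation signal described above.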
“We needed to understand how AI was being adopted across the company and where people were getting stuck. Speakeasy gave us that visibility from day one.”
— Shreyas Kumar, Co-founder, Fermat
Get started
Speakeasy MCP Platform is live today. If you’re building MCP servers — or trying to run them in production — this is the control plane that makes it work.
Get started at speakeasy.com/mcp
For the full backstory on how we got here, read “We were wrong about the hard problem.”
Speakeasy MCP Platform is the control plane for building, securing, distributing, and observing AI agent tools — with production governance included from the start. speakeasy.com/mcp