AI agents are becoming a standard part of software interaction. From Claude Desktop assisting with research to custom agents automating workflows, these tools need a reliable way to interact with APIs.
This is where the Model Context Protocol (MCP) comes in. Introduced by Anthropic in November 2024, MCP provides a universal standard for connecting AI agents to APIs and data sources. Think of it as the bridge between natural language and your API endpoints.
If you already have an API documented with OpenAPI, you’re in luck. You can automatically generate a fully functional MCP server from your existing OpenAPI document. In this guide, we’ll look at four tools that make this possible: Speakeasy’s SDK generation, the Gram platform, FastMCP for Python developers, and the open-source openapi-mcp-generator.
What is MCP and why does it matter?
The Model Context Protocol is an open standard that enables AI agents to interact with APIs in a consistent way. Instead of building custom integrations for each AI platform, you create one MCP server that works across compatible AI tools like Claude Desktop, Cursor, and others.
MCP follows a client-server architecture where AI applications act as clients, and your MCP server exposes API operations as “tools” that agents can discover and use. These tools are structured descriptions of what your API can do - complete with parameters, expected inputs, and return types. When an AI agent needs to accomplish a task that requires your API, it queries the MCP server for available tools, selects the appropriate one, and executes it with the necessary parameters.
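To make this concrete, here is a sketch of what one such tool description looks like. The shape (a name, a description, and a JSON Schema `inputSchema`) follows the MCP tool-listing format; the `create_task` tool and its fields are invented for illustration.

```python
import json

# A hypothetical "create_task" tool, shaped the way an MCP server
# advertises tools to clients: a name, a natural-language description,
# and a JSON Schema describing the tool's expected arguments.
create_task_tool = {
    "name": "create_task",
    "description": "Create a new task with a title and optional priority.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "title": {"type": "string", "description": "Task title"},
            "priority": {"type": "string", "enum": ["low", "medium", "high"]},
        },
        "required": ["title"],
    },
}

print(json.dumps(create_task_tool, indent=2))
```

An agent reads the description to decide when the tool applies, and the schema to build a valid call.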
From OpenAPI to MCP: Where generators fit in
Now, why does this matter if you already have an OpenAPI document? Because OpenAPI specs already contain everything needed to create MCP tools:
Endpoint paths and methods define what operations are available
Parameters and request schemas specify what inputs each operation needs
Response schemas describe what data comes back
Operation descriptions explain the purpose of each endpoint
An MCP generator transforms your OpenAPI specification into a functioning MCP server that exposes your API endpoints as tools AI agents can use:
Your OpenAPI document serves as the source of truth. An MCP generator reads it and automatically creates tool definitions for each endpoint. The generated MCP server then acts as a bridge, translating AI agent requests into proper API calls. AI agents can discover and use your API's capabilities through the MCP server without needing to understand its implementation details.
Your OpenAPI document becomes the single source of truth, eliminating the need to manually maintain two separate specifications. Your MCP server stays in sync with your API automatically.
The challenge: Building MCP servers manually
While the MCP specification is well-documented, building a server from scratch involves significant work:
Understanding the protocol: MCP uses JSON-RPC 2.0 for transport and has specific conventions for tool definitions, resource handling, and error responses.
Keeping tools in sync: Every time your API changes, you need to manually update tool definitions to match.
Type safety and validation: You’ll need to implement request validation and ensure type safety across the entire chain.
Hosting and deployment: MCP servers need infrastructure for hosting, whether locally for development or remotely for team-wide access.
For teams that already maintain OpenAPI documents, duplicating this effort in MCP format creates unnecessary maintenance burden. This is where automation helps.
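As a taste of the protocol work a generator saves you, here is what the JSON-RPC 2.0 messages between an MCP client and server look like. The method names (`tools/list`, `tools/call`) follow the MCP specification; the tool and its arguments are invented for illustration.

```python
import json

# The client first asks the server what tools exist...
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# ...then invokes one by name with structured arguments.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "create_task",
        "arguments": {"title": "Write release notes", "priority": "high"},
    },
}

# On the wire, each message is serialized as JSON
wire = json.dumps(call_request)
print(wire)
```

A hand-built server has to parse, validate, and respond to every one of these messages itself.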
Four tools for generating MCP servers from OpenAPI documents
In this post we’ll look at four platforms and tools that automatically generate MCP servers from OpenAPI documents:
Speakeasy generates TypeScript SDKs and MCP servers together, giving you full control over the generated code and deployment.
Gram is a managed platform made by Speakeasy that provides instant hosting and built-in toolset curation.
FastMCP is a Pythonic framework that converts OpenAPI specs and FastAPI applications into MCP servers with minimal code.
openapi-mcp-generator is an open-source CLI tool that generates standalone TypeScript MCP servers.
These tools differ primarily in their hosting models. Managed platforms like Gram and FastMCP Cloud handle hosting and infrastructure for you - on Gram you upload your OpenAPI document and you get an instantly accessible MCP server. Self-hosted tools like Speakeasy and openapi-mcp-generator generate code that you deploy and run yourself, giving you full control over infrastructure, customization, and deployment.
Here’s how they compare across hosting model and automation level:
Gram offers the fastest path to production with a fully managed platform - no infrastructure to maintain. Speakeasy provides comprehensive code generation for self-hosted deployment. FastMCP gives you both options: use the Python framework for self-hosted servers or FastMCP Cloud for managed hosting. openapi-mcp-generator generates standalone TypeScript servers for complete self-hosted control.
MCP server generators comparison
Here’s a more detailed comparison of the features of each:
| Feature | Speakeasy | Gram | FastMCP | openapi-mcp-generator |
| --- | --- | --- | --- | --- |
| Language | TypeScript | N/A (hosted) | Python | TypeScript |
| Setup complexity | Low | Low | Low | Low |
| Customization | Full code access | Config-based | Programmatic | None |
| Tool curation | N/A | Yes, built-in | Programmatic | None |
| Hosting | Self-hosted | Managed | Self-hosted/Managed | Self-hosted |
| Type safety | Full (Zod) | N/A | Partial | Full (Zod) |
| SDK generation | Yes (7+ langs) | No | No | No |
| Auth handling | OAuth 2.0 | OAuth 2.0 | Manual config | Env variables |
| Test clients | Generated Test Client | Playground | No | HTML clients |
Next, we’ll explore each tool in detail.
Speakeasy: SDK + MCP server generation
Speakeasy is an SDK generation platform that creates production-ready SDKs from OpenAPI documents. When you generate a TypeScript SDK with Speakeasy, you also get a complete MCP server implementation.
How it works
When Speakeasy processes your OpenAPI document, it generates a type-safe TypeScript SDK using Zod schemas for validation, creates an MCP server that wraps the SDK methods, transforms each SDK operation into an MCP tool with proper type definitions, and provides a CLI for starting and configuring the server.
The generated MCP server includes:
```
mcp-server/
├── tools/            # Each API operation as an MCP tool
│   ├── listTasks.ts
│   ├── createTask.ts
│   ├── getTask.ts
│   └── updateTask.ts
├── server.ts         # Main MCP server implementation
├── scopes.ts         # Scope-based access control
└── cli.ts            # Command-line interface
```
Example: Task management API
Let’s say you have a simple task management API with this OpenAPI operation:
```yaml
paths:
  /tasks:
    post:
      operationId: createTask
      summary: Create a new task
      description: Creates a new task with the provided details
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required:
                - title
              properties:
                title:
                  type: string
                  description: Task title
                description:
                  type: string
                  description: Detailed task description
                priority:
                  type: string
                  enum: [low, medium, high]
      responses:
        '201':
          description: Task created successfully
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Task'
```
Speakeasy generates an MCP tool that looks like this:
```typescript
import { createTask } from "../../funcs/createTask.js";
import * as operations from "../../models/operations/index.js";
import { formatResult, ToolDefinition } from "../tools.js";

const args = {
  request: operations.CreateTaskRequest$inboundSchema,
};

export const tool$createTask: ToolDefinition<typeof args> = {
  name: "create-task",
  description: "Create a new task\n\nCreates a new task with the provided details",
  args,
  tool: async (client, args, ctx) => {
    const [result, apiCall] = await createTask(client, args.request, {
      fetchOptions: { signal: ctx.signal },
    }).$inspect();

    if (!result.ok) {
      return {
        content: [{ type: "text", text: result.error.message }],
        isError: true,
      };
    }

    return formatResult(result.value, apiCall);
  },
};
```
The tool is type-safe, handles errors gracefully, and includes full request validation using Zod schemas.
Customizing with x-speakeasy-mcp
Speakeasy supports the x-speakeasy-mcp OpenAPI extension for fine-tuning your MCP tools:
```yaml
paths:
  /tasks:
    post:
      operationId: createTask
      summary: Create a new task
      x-speakeasy-mcp:
        name: "create_task_tool"
        description: |
          Creates a new task in the task management system. Use this when
          the user wants to add a new task or todo item. Requires a title
          and optionally accepts a description and priority level.
        scopes: [write, tasks]
```
This allows you to provide AI-specific descriptions and organize tools using scopes.
Using scopes for access control
Scopes let you control which tools are available in different contexts:
```bash
# Read-only mode
npx mcp start --scope read

# Read and write, but not destructive operations
npx mcp start --scope read --scope write
```
When to use Speakeasy
Choose Speakeasy if you need full control over generated code and deployment, want type-safe SDKs alongside your MCP server, require extensive customization of the MCP implementation, or prefer self-hosting with complete infrastructure control.
Gram: Managed MCP platform
Gram is a managed platform made by Speakeasy that takes a different approach to MCP server generation. Instead of generating code for you to deploy, Gram builds on Speakeasy’s excellent MCP generation to provide a fully hosted platform where you upload your OpenAPI document and get an instantly accessible MCP server.
How Gram works
Upload your OpenAPI document and the platform parses your API specification. Create toolsets by selecting and organizing relevant operations into use-case-specific groups. Configure environments with API keys and environment variables. Deploy and your MCP server is immediately available at a hosted URL.
Enterprise-ready features
Gram provides a comprehensive platform built for teams and production deployments:
Toolset curation: Not every API operation makes sense as an MCP tool. Gram lets you select specific operations to include or exclude, create multiple toolsets for different use cases (like “read-only” vs “full-access”), add custom prompts and context to individual tools, and combine operations into workflow-based custom tools.
Environment management: Configure multiple environments (development, staging, production) with different API keys, base URLs, and credentials. Switch between environments without changing your MCP server configuration.
Built-in authentication: Gram handles OAuth flows, API key management, and token refresh automatically. Your MCP server can authenticate with APIs that require OAuth without you needing to implement the flow yourself.
Team collaboration: Share toolsets across your organization, manage access controls, and collaborate on tool definitions with your team.
Managed hosting: Instant deployment at app.getgram.ai/mcp/your-server or use custom domains like mcp.yourdomain.com. No infrastructure to manage, no servers to maintain.
Interactive playground: Test your tools directly in Gram’s playground before deploying to production. Try natural language queries and see exactly how AI agents interact with your API.
The x-gram extension
Similar to Speakeasy, Gram supports OpenAPI extensions for customization:
```yaml
paths:
  /tasks/{id}:
    get:
      operationId: getTask
      summary: Get task details
      x-gram:
        name: get_task_details
        description: |
          <context>
          Retrieves complete details for a specific task including title,
          description, priority, status, and timestamps.
          </context>
          <prerequisites>
          - You must have a valid task ID. Use list_tasks first if needed.
          </prerequisites>
        responseFilterType: jq
```
The x-gram extension lets you provide LLM-optimized descriptions with context, specify prerequisites for using a tool, and configure response filtering to reduce token usage.
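The effect of response filtering is easy to see in plain Python. This sketch mimics what a jq filter like `{id, title, status}` does: strip fields the agent never needs so they never cost tokens. The response payload here is invented for illustration.

```python
# A raw API response often carries fields an AI agent never needs;
# every extra field costs tokens on each tool call.
raw_response = {
    "id": "task-123",
    "title": "Write release notes",
    "status": "open",
    "internal_audit_log": ["created", "edited", "edited"],
    "_links": {"self": "/tasks/task-123", "owner": "/users/42"},
}

# Plain-Python equivalent of the jq filter '{id, title, status}'
def keep(response, fields):
    return {k: response[k] for k in fields if k in response}

filtered = keep(raw_response, ["id", "title", "status"])
print(filtered)
```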
When to use Gram
Choose Gram if you want the fastest path from an OpenAPI document to a hosted MCP server, prefer managed infrastructure over self-hosting, need multiple toolsets for different use cases, require built-in OAuth and API key management, or value ease of use and team collaboration over infrastructure control.
FastMCP: Python framework for MCP servers

FastMCP is a Python framework for building MCP servers. As of version 2.0.0, FastMCP can automatically convert OpenAPI specifications into MCP servers with just a few lines of code. FastMCP also supports converting FastAPI applications directly, making it ideal for Python developers who want to expose existing FastAPI endpoints as MCP tools.
How FastMCP works
By default, every endpoint in your OpenAPI specification becomes a standard MCP Tool, making all your API’s functionality immediately available to LLM clients. You can customize which endpoints to include and how they’re exposed using route mapping.
Getting started with FastMCP
Install FastMCP:
pip install fastmcp
Create an MCP server from OpenAPI:
```python
import httpx
from fastmcp import FastMCP

# Load your OpenAPI spec
client = httpx.AsyncClient(base_url="https://api.example.com")
openapi_spec = httpx.get("https://api.example.com/openapi.json").json()

# Create MCP server
mcp = FastMCP.from_openapi(
    openapi_spec=openapi_spec,
    client=client,
    name="Task Management API",
)

# Run the server
mcp.run()
```
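Once the server is running over stdio, a client like Claude Desktop can launch and connect to it. A minimal entry in `claude_desktop_config.json` might look like this (the server name and script path are placeholders for your own):

```json
{
  "mcpServers": {
    "task-api": {
      "command": "python",
      "args": ["/path/to/server.py"]
    }
  }
}
```

Claude Desktop starts the command itself and speaks MCP to it over stdin/stdout.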
FastMCP allows you to customize how OpenAPI endpoints are converted:
```python
from fastmcp import FastMCP
from fastmcp.server.openapi import MCPType, RouteMap

mcp = FastMCP.from_openapi(
    openapi_spec=openapi_spec,
    client=client,
    name="Task Management API",
    route_maps=[
        # Exclude internal endpoints entirely
        RouteMap(pattern=r"^/internal/.*", mcp_type=MCPType.EXCLUDE),
        # Expose GET /tasks as a Resource instead of a Tool
        RouteMap(methods=["GET"], pattern=r"^/tasks$", mcp_type=MCPType.RESOURCE),
    ],
)
```
FastMCP Cloud: Managed hosting
FastMCP also offers FastMCP Cloud, a managed hosting platform similar to Gram. With FastMCP Cloud, you can deploy your FastMCP server and get instant hosted access without managing infrastructure.
When to use FastMCP
Choose FastMCP if you work primarily in Python, want minimal boilerplate code, need programmatic control over server configuration, or prefer a lightweight, code-first approach. Use FastMCP Cloud for managed hosting while staying in the Python ecosystem.
FastMCP recommends manually designed MCP servers for complex APIs to achieve better performance. The auto-conversion is best for getting started quickly or for simpler APIs.
openapi-mcp-generator: Open-source CLI

openapi-mcp-generator is an open-source CLI tool that generates standalone TypeScript MCP servers from OpenAPI specifications.
How it works
The generator creates a complete TypeScript project with automatic Zod validation schemas, multiple transport mode support (stdio, web server with SSE, StreamableHTTP), built-in HTML test clients, environment-based authentication, and fully typed server code.
The generator handles authentication via environment variables:
```bash
# In your .env file or environment
API_KEY=your-api-key-here
API_BASE_URL=https://api.example.com
```
The generated server automatically includes these in requests to your API.
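The pattern is simple to sketch in Python: read credentials from the environment at startup and attach them to every outgoing request. The variable names match the example above, but the header name and request shape here are illustrative, not the generator's exact code.

```python
import os

# Illustrative env-var auth: credentials come from the environment,
# never from the OpenAPI document or the generated code itself.
os.environ.setdefault("API_KEY", "your-api-key-here")
os.environ.setdefault("API_BASE_URL", "https://api.example.com")

def build_request(path):
    # Attach the base URL and a bearer-style auth header to each call
    return {
        "url": os.environ["API_BASE_URL"] + path,
        "headers": {"Authorization": f"Bearer {os.environ['API_KEY']}"},
    }

request = build_request("/tasks")
print(request["url"])
```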
When to use openapi-mcp-generator
Choose this tool if you want a basic, standalone MCP server without additional complexity. It’s a straightforward CLI that generates TypeScript code from your OpenAPI spec - what you see is what you get. You can edit the generated server code afterward if needed, but there’s zero customization during generation beyond choosing transport modes. Best suited for simple use cases where you need a functional MCP server quickly without enterprise features, managed hosting, or advanced tooling.
Optimizing your OpenAPI document for MCP
Regardless of which tool you choose, the quality of your resulting MCP tools depends on the quality of your OpenAPI document. AI agents need more context than human developers to use APIs effectively.
Write for AI agents, not just humans
Humans can infer context from brief descriptions. AI agents cannot. Compare these descriptions:
Basic description (for humans):
```yaml
get:
  summary: Get task
  description: Retrieve a task by ID
```
Optimized description (for AI agents):
```yaml
get:
  summary: Get complete task details
  description: |
    Retrieves full details for a specific task including title, description,
    priority level (low/medium/high), current status, assignee, due date,
    and creation/update timestamps.

    Use this endpoint when you need complete information about a task.
    If you only need a list of task IDs and titles, use listTasks instead.
```
The optimized version tells the AI agent what data to expect in the response, when to use this endpoint vs alternatives, and what each field means (priority values, status types).
Provide clear parameter guidance
Include examples and explain constraints:
```yaml
parameters:
  - name: task_id
    in: path
    required: true
    schema:
      type: string
      format: uuid
    description: |
      The unique identifier for the task. Must be a valid UUID v4.
      You can get task IDs by calling listTasks first.
    examples:
      example1:
        summary: Valid task ID
        value: "550e8400-e29b-41d4-a716-446655440000"
```
Add response examples
Example responses help AI agents understand what successful responses look like.
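Continuing the task API used above, an `examples` block on the success response might look like this (the field values are illustrative):

```yaml
responses:
  '200':
    description: Task retrieved successfully
    content:
      application/json:
        schema:
          $ref: '#/components/schemas/Task'
        examples:
          activeTask:
            summary: A typical active task
            value:
              id: "550e8400-e29b-41d4-a716-446655440000"
              title: "Write release notes"
              priority: high
              status: in_progress
```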
Use descriptive operation IDs

Operation IDs become tool names. Make them clear and action-oriented:
```yaml
# Good
operationId: createTask
operationId: listActiveTasks
operationId: archiveCompletedTasks

# Less clear
operationId: post_tasks
operationId: get_tasks_list
operationId: update_task_status
```
Document error responses
Help AI agents understand what went wrong:
```yaml
responses:
  '404':
    description: |
      Task not found. This usually means:
      - The task ID is incorrect
      - The task was deleted
      - You don't have permission to view this task
    content:
      application/json:
        schema:
          $ref: '#/components/schemas/Error'
```
Conclusion
Generating MCP servers from OpenAPI documents bridges the gap between your existing APIs and AI agents. With an OpenAPI document you can have a working MCP server in minutes instead of days.
Ready to get started? Try Gram today to get your MCP server hosted instantly.
The era of AI agents is here. Make sure your API is ready for it.