Using the Vercel AI SDK with Gram-hosted MCP servers
The Vercel AI SDK supports remote MCP servers through its experimental MCP client feature. This allows you to give AI models direct access to your tools and APIs by connecting to Gram-hosted MCP servers.
This guide shows you how to connect the Vercel AI SDK to a Gram-hosted MCP server using an example Push Advisor API. You’ll learn how to create an MCP server from an OpenAPI document, set up the connection, configure authentication, and use natural language to query the example API.
Follow the steps to configure a toolset and publish an MCP server. At the end of the setup, you’ll have a Gram-hosted MCP server ready to use.
For this guide, we’ll use the public server URL https://app.getgram.ai/mcp/canipushtoprod.
For authenticated servers, you’ll need an API key. Generate an API key in the Settings tab.
Connecting the Vercel AI SDK to your Gram-hosted MCP server
The Vercel AI SDK supports MCP servers through the experimental_createMCPClient function with Streamable HTTP transport. Here’s how to connect to your Gram-hosted MCP server:
Installation and setup
First, install the Vercel AI SDK, the MCP SDK, and an AI provider SDK. The samples in this guide use OpenAI, but you can also use Anthropic or other providers.
Run the following:
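```bash
# Assumes npm. zod and dotenv are installed here because later examples
# in this guide use them for tool schemas and environment variables.
npm install ai @ai-sdk/openai @modelcontextprotocol/sdk zod dotenv
```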
Project configuration
Configure your project for ES modules by adding "type": "module" to your package.json:
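```json
{
  "type": "module"
}
```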
Environment variables
Create a .env file in your project root to store your API keys:
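```bash
# Placeholder values. GRAM_API_KEY is only needed for authenticated servers.
OPENAI_API_KEY=your-openai-api-key
GRAM_API_KEY=your-gram-api-key
```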
Load these environment variables at the top of your JavaScript files:
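```typescript
// Loads the variables from .env into process.env
import 'dotenv/config';
```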
Basic connection (public server)
Here’s a basic example using a public Gram MCP server with Streamable HTTP transport:
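```typescript
// A minimal sketch (assumes AI SDK 5): connect over Streamable HTTP
// and list the tools the server offers.
import { experimental_createMCPClient } from 'ai';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';

const transport = new StreamableHTTPClientTransport(
  new URL('https://app.getgram.ai/mcp/canipushtoprod')
);

const mcpClient = await experimental_createMCPClient({ transport });

try {
  // Schema discovery: pull every tool the server exposes
  const tools = await mcpClient.tools();
  console.log('Available tools:', Object.keys(tools));
} finally {
  await mcpClient.close();
}
```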
Authenticated connection
For authenticated Gram MCP servers, include your Gram API key in the headers:
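```typescript
import { experimental_createMCPClient } from 'ai';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';

// Headers live under the requestInit option in the transport's second
// argument. The bearer-style Authorization header below is an assumption;
// use whatever header your Gram server's auth configuration expects.
const transport = new StreamableHTTPClientTransport(
  new URL('https://app.getgram.ai/mcp/your-server'), // placeholder URL
  {
    requestInit: {
      headers: {
        Authorization: `Bearer ${process.env.GRAM_API_KEY}`,
      },
    },
  }
);

const mcpClient = await experimental_createMCPClient({ transport });
```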
Understanding the configuration
Here’s what each parameter in the createMCPClient configuration does:
StreamableHTTPClientTransport uses Streamable HTTP transport (as opposed to SSE or stdio).
new URL(...) adds your Gram-hosted MCP server URL.
headers adds optional HTTP headers for authentication (nested under the requestInit option in the transport constructor’s second argument).
Tool filtering and schema approaches
The Vercel AI SDK supports two approaches to working with MCP tools: schema discovery and schema definition.
Schema discovery (recommended for Gram)
Schema discovery is the simplest approach, where all tools offered by the server are listed automatically:
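```typescript
// Assumes the mcpClient from the connection examples above.
// Every tool the server exposes is discovered and converted
// into an AI SDK tool set automatically.
const tools = await mcpClient.tools();
```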
Schema definition with TypeScript
For better type safety and IDE support, you can define schemas explicitly:
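```typescript
import { z } from 'zod';

// A sketch: the tool name and schema are illustrative and must match the
// tools your Gram server actually exposes. The inputSchema field name
// assumes AI SDK 5.
const tools = await mcpClient.tools({
  schemas: {
    can_i_push_to_prod: {
      inputSchema: z.object({}), // assumed: this example tool takes no input
    },
  },
});
```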
When you define schemas, the client only pulls the explicitly defined tools, even if the server offers additional tools.
Limiting tools with activeTools
You can also limit which tools are available to the model using the activeTools parameter:
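```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'), // placeholder model
  tools, // the full set discovered from the server
  // Only the tools named here are exposed to the model for this call.
  // The name is illustrative; use your server's actual tool names.
  activeTools: ['can_i_push_to_prod'],
  prompt: 'Can I push to production today?',
});
```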
Working with responses
The Vercel AI SDK provides different ways to handle MCP tool calls depending on whether you use generateText or streamText.
Using generateText
With generateText, you get access to tool calls and results in the response:
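```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'),
  tools,
  prompt: 'Can I push to production today?',
});

// Each tool call the model made (property names assume AI SDK 5)
for (const toolCall of result.toolCalls) {
  console.log('Tool called:', toolCall.toolName, 'with input:', toolCall.input);
}

// The corresponding values returned by the MCP server
for (const toolResult of result.toolResults) {
  console.log('Tool result:', toolResult.output);
}
```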
Using streamText
With streamText, you can handle tool calls as they stream in:
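```typescript
import { streamText, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = streamText({
  model: openai('gpt-4o'),
  tools,
  stopWhen: stepCountIs(2), // allow a text step after the tool step
  prompt: 'Can I push to production today?',
});

// Stream part shapes assume AI SDK 5 (e.g. `output` on tool results)
for await (const part of result.fullStream) {
  switch (part.type) {
    case 'tool-call':
      console.log(`\nCalling tool: ${part.toolName}`);
      break;
    case 'tool-result':
      console.log('Tool result:', part.output);
      break;
    case 'text-delta':
      process.stdout.write(part.text);
      break;
  }
}
```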
Error handling
The Vercel AI SDK includes error handling for MCP tool calls:
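```typescript
// A minimal sketch: catch failures and always close the client
try {
  const result = await generateText({
    model: openai('gpt-4o'),
    tools,
    prompt: 'Can I push to production today?',
  });
  console.log(result.text);
} catch (error) {
  // Transport failures and tool execution errors both surface here
  console.error('MCP tool call failed:', error);
} finally {
  // Close the client even on failure so the connection isn't leaked
  await mcpClient.close();
}
```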
Important: By default, generateText() stops after executing tools and may return empty text responses. To get text responses after tool calls, you need to use multi-step generation.
Understanding the default behavior
When using tools with generateText(), the default behavior is as follows:
The model calls the appropriate tool(s).
The model receives tool results.
The model stops without generating a text response.
This is by design: some tool calls are meant to be fire-and-forget operations that don’t require a text response.
Getting text responses after tool calls
To generate text responses after tool calls, use stopWhen: stepCountIs(N), where N ≥ 2:
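```typescript
import { generateText, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'),
  tools,
  // Permit up to two steps: one for tool calls, one for the text answer
  stopWhen: stepCountIs(2),
  prompt: 'Can I push to production today?',
});

console.log(result.text); // now populated from the tool results
```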
This forces the model to:
Step 1: Execute tool calls
Step 2: Generate text response based on tool results
Multi-step tool calls
For complex workflows requiring multiple tools and responses, use higher step counts:
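```typescript
const result = await generateText({
  model: openai('gpt-4o'),
  tools,
  // Allow up to five steps so the model can chain several tool calls
  // and still finish with a text response
  stopWhen: stepCountIs(5),
  prompt: 'Check whether it is safe to push to production and explain why.',
});

console.log(`Finished in ${result.steps.length} steps`);
console.log(result.text);
```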
Client lifecycle management
Proper management of the MCP client lifecycle is important for resource efficiency:
Short-lived usage (recommended)
For single requests or short-lived operations, close the client when finished:
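```typescript
const mcpClient = await experimental_createMCPClient({ transport });

try {
  const tools = await mcpClient.tools();
  const result = await generateText({
    model: openai('gpt-4o'),
    tools,
    stopWhen: stepCountIs(2),
    prompt: 'Can I push to production today?',
  });
  console.log(result.text);
} finally {
  // Close as soon as the request is done to free the connection
  await mcpClient.close();
}
```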
Long-running applications
For server applications or CLI tools, you might keep the client open:
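```typescript
// Create one client at startup and reuse it across requests
const mcpClient = await experimental_createMCPClient({ transport });
const tools = await mcpClient.tools();

// ... serve many requests with the same client and tool set ...

// Close on shutdown instead of per request
process.on('SIGINT', async () => {
  await mcpClient.close();
  process.exit(0);
});
```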
Differences from other MCP integrations
The Vercel AI SDK’s approach to MCP differs from OpenAI’s and Anthropic’s native implementations:
Connection method
The Vercel AI SDK uses experimental_createMCPClient with Streamable HTTP or stdio transports.
OpenAI uses the tools array with type: "mcp" in the Responses API.
Anthropic uses the mcp_servers parameter in the Messages API.
Authentication
The Vercel AI SDK uses HTTP headers in the transport configuration.
OpenAI uses a headers object in the tool configuration.
Anthropic uses an authorization token parameter.
Tool management
The Vercel AI SDK allows schema discovery and definition at the client level and uses activeTools for filtering.
OpenAI allows tool filtering via allowed_tools parameter.
Anthropic uses a tool configuration object with an allowed_tools array.
Model flexibility
The Vercel AI SDK works with any AI provider (including OpenAI, Anthropic, and Google).
OpenAI’s native MCP support works only with OpenAI models.
Anthropic’s native MCP support works only with Anthropic models.
Transport options
The Vercel AI SDK uses Streamable HTTP, stdio, or custom transports.
OpenAI uses direct HTTP or HTTPS connections.
Anthropic uses URL-based HTTP connections.
Response handling
The Vercel AI SDK handles responses via unified tool calls and results across all providers.
OpenAI handles responses via the mcp_call and mcp_list_tools items.
Anthropic handles responses via the mcp_tool_use and mcp_tool_result blocks.
Complete example
Here’s a complete example that demonstrates connecting to a Gram MCP server and using it with the Vercel AI SDK:
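```typescript
// A full sketch (assumes AI SDK 5; the model name and prompt are
// placeholders) using the public Push Advisor server from this guide.
import 'dotenv/config';
import { experimental_createMCPClient, generateText, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';

async function main() {
  // Connect to the Gram-hosted MCP server over Streamable HTTP
  const transport = new StreamableHTTPClientTransport(
    new URL('https://app.getgram.ai/mcp/canipushtoprod')
  );
  const mcpClient = await experimental_createMCPClient({ transport });

  try {
    // Discover the server's tools
    const tools = await mcpClient.tools();
    console.log('Available tools:', Object.keys(tools));

    // Let the model call tools, then answer in text
    const result = await generateText({
      model: openai('gpt-4o'),
      tools,
      stopWhen: stepCountIs(3),
      prompt: 'Can I push to production today?',
    });

    // Show which tools were called in each step
    for (const step of result.steps) {
      for (const toolCall of step.toolCalls) {
        console.log('Tool called:', toolCall.toolName);
      }
    }

    console.log('\nAnswer:', result.text);
  } finally {
    // Always release the connection
    await mcpClient.close();
  }
}

main().catch(console.error);
```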
Testing your integration
If you encounter issues during integration, follow these steps to troubleshoot:
Validate MCP server connectivity
Before integrating into your application, test your Gram MCP server in the Gram Playground to ensure the tools work correctly.
Use the MCP Inspector
The Model Context Protocol project provides the MCP Inspector, a command line tool that helps you test and debug MCP servers before integrating them with the Vercel AI SDK. You can use it to validate your Gram MCP server’s connectivity and functionality.
Run the command below to test your Gram MCP server with the Inspector:
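```bash
npx @modelcontextprotocol/inspector
```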
In the Transport Type field, select Streamable HTTP.
Enter your server URL in the URL field, for example:
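```
https://app.getgram.ai/mcp/canipushtoprod
```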
Click Connect to establish a connection to your MCP server.
Use the Inspector to verify that your MCP server responds correctly before integrating it with your Vercel AI SDK application.
Debug tool discovery
You can debug which tools are available from your MCP server:
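```typescript
// Assumes the mcpClient from the connection examples above
const tools = await mcpClient.tools();

// Print each tool's name and description as reported by the server
for (const [name, tool] of Object.entries(tools)) {
  console.log(`${name}: ${tool.description ?? '(no description)'}`);
}
```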
Environment setup
Ensure your environment variables are properly configured:
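```bash
# GRAM_API_KEY is only required for authenticated servers
OPENAI_API_KEY=your-openai-api-key
GRAM_API_KEY=your-gram-api-key
```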
Then load them in your application:
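```typescript
import 'dotenv/config';

// Fail fast if a required variable is missing
if (!process.env.OPENAI_API_KEY) {
  throw new Error('OPENAI_API_KEY is not set');
}
```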
What’s next
You now have the Vercel AI SDK connected to your Gram-hosted MCP server, giving your AI applications access to your custom APIs and tools with the flexibility to use any AI provider.
The Vercel AI SDK’s provider-agnostic approach means you can switch between OpenAI, Anthropic, Google, and other providers while keeping the same MCP tool integration.
Ready to build your own MCP server? Try Gram today and see how easy it is to turn any API into agent-ready tools that work with the Vercel AI SDK and all major AI providers.