
Using Pydantic AI with Gram-hosted MCP servers

Pydantic AI supports MCP servers through its built-in MCP client. This allows you to give your Pydantic AI agents direct access to your tools and infrastructure by connecting to Gram-hosted MCP servers.

This guide shows you how to connect Pydantic AI to a Gram-hosted MCP server using an example Push Advisor API. You’ll learn how to create an MCP server from an OpenAPI document, set up the connection, configure authentication, and use natural language to query the example API.

Find the full code and OpenAPI document in the Push Advisor API repository.

Prerequisites

You’ll need:

  • A Gram account
  • A recent version of Python and pip installed
  • An API key for an LLM provider (such as OpenAI)

Creating a Gram-hosted MCP server

If you already have a Gram-hosted MCP server configured, you can skip to connecting Pydantic AI to your Gram-hosted MCP server.

For this guide, we’ll use the public server URL https://app.getgram.ai/mcp/canipushtoprod.

For authenticated servers, you’ll need an API key, which you can generate in the Settings tab. For an in-depth guide to how Gram works and how to create a Gram-hosted MCP server, check out our introduction to Gram.

Connecting Pydantic AI to your Gram-hosted MCP server

Pydantic AI provides built-in MCP support through the MCPServerStreamableHTTP class. Here’s how to connect to your Gram-hosted MCP server.

Installation

First, install the required packages:
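The exact package list depends on your setup; assuming OpenAI as the model provider and python-dotenv for loading environment variables, a typical install looks like this:

```bash
pip install pydantic-ai python-dotenv
```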

Environment setup

Set up your environment variables by creating a .env file:
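The variable names below are examples for this guide (an OpenAI model plus an optional Gram API key for authenticated servers); adjust them to your provider and server:

```
OPENAI_API_KEY=your-openai-api-key
GRAM_API_KEY=your-gram-api-key
```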

Load these in your code:

To run the async code given in the sections to follow, you can import asyncio and wrap the code in an async function definition as shown below:
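For example, a minimal runner:

```python
import asyncio

async def main() -> None:
    # Paste the async snippets from the sections that follow into this function
    ...

if __name__ == "__main__":
    asyncio.run(main())
```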

Basic connection (public server)

Here’s a basic example using a public Gram-hosted MCP server with Streamable HTTP transport:

Authenticated connection

For private MCP servers, include your Gram API key in the headers:

Understanding the configuration

Here’s what each parameter in the MCPServerStreamableHTTP configuration does:

  • url specifies the URL of your Gram-hosted MCP server.
  • headers supplies optional HTTP headers, such as authentication credentials.

The server uses Streamable HTTP transport, which is compatible with Gram’s HTTP-based MCP servers.

Working with tool responses

Pydantic AI provides detailed information about tool usage in agent responses:

Streaming responses

Pydantic AI supports streaming responses with MCP tools:

Using structured outputs

Pydantic AI excels at structured outputs, which you can combine with MCP tools:

Error handling

Pydantic provides an McpError class for handling errors from MCP servers. You can catch this error to handle issues like connection failures or invalid requests:

Using instructions with MCP tools

Pydantic AI allows you to combine instructions with MCP tools for more controlled behavior:

Using dependencies with MCP tools

Pydantic AI’s dependency injection works with MCP tools:

Complete example

Here’s a complete example that demonstrates connecting to a Gram-hosted MCP server and using it with Pydantic AI:

Differences from other MCP integrations

Pydantic AI’s approach to MCP differs from other frameworks:

Connection method

  • Pydantic AI passes MCPServerStreamableHTTP instances to agents as toolsets.
  • LangChain uses MultiServerMCPClient with multiple servers.
  • OpenAI uses a tools array with type: "mcp" in the Responses API.
  • Anthropic uses mcp_servers parameter in the Messages API.
  • The Vercel AI SDK uses experimental_createMCPClient.

Type safety

  • Pydantic AI offers strong type safety with Pydantic models for structured outputs.
  • LangChain offers dynamic typing with tool discovery.
  • Others offer basic type support without structured output capabilities.

Framework features

  • Pydantic AI includes dependency injection, structured outputs, and type validation.
  • LangChain includes workflow abstractions, chains, and multi-server support.
  • Others are limited to direct API usage without additional abstractions.

Transport support

  • Pydantic AI supports Streamable HTTP transport for remote servers.
  • LangChain supports both streamable_http and stdio transports.
  • The Vercel AI SDK supports SSE, stdio, and custom transports.
  • Others use direct HTTP connections.

Testing your integration

If you encounter issues during integration, follow these steps to troubleshoot:

Validate MCP server connectivity

Before integrating into your application, test your Gram-hosted MCP server in the Gram Playground to ensure the tools work correctly.

Use the MCP Inspector

Anthropic provides an MCP Inspector  command line tool that helps you test and debug MCP servers before integrating them with Pydantic AI. You can use it to validate your Gram-hosted MCP server’s connectivity and functionality.

Run the following command to test your Gram-hosted MCP server with the Inspector:
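The Inspector is typically launched with npx:

```bash
npx @modelcontextprotocol/inspector
```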

In the Transport Type field, select Streamable HTTP.

Enter your server URL in the URL field, for example:
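```
https://app.getgram.ai/mcp/canipushtoprod
```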

Click Connect to establish a connection to your MCP server.

[Screenshot: the MCP Inspector connecting to a Gram-hosted MCP server]

Use the Inspector to verify that your MCP server responds correctly before integrating it with your Pydantic AI application.

Debug tool discovery

You can debug which tools are available from your MCP server by inspecting the agent after creation:

Environment setup

Ensure your environment variables are properly configured:
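For example, a `.env` file with the variable names used in this guide (adjust to your provider and server):

```
OPENAI_API_KEY=your-openai-api-key
GRAM_API_KEY=your-gram-api-key
```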

Then load them in your application:

What’s next

You now have Pydantic AI connected to your Gram-hosted MCP server, giving your agents access to your custom APIs and tools with the power of type-safe, structured outputs.

Pydantic AI’s focus on type safety, structured outputs, and dependency injection makes it ideal for building robust, production-ready AI applications that can reliably interact with your infrastructure.

Ready to build your own MCP server? Try Gram today  and see how easy it is to turn any API into agent-ready tools that work with Pydantic AI and all major AI frameworks.
