Using OpenAI's Responses API with Gram-hosted MCP servers
The OpenAI Responses API
This guide shows you how to connect the OpenAI Responses API to a Gram-hosted MCP server, using an example Push Advisor API. You can find the full code and OpenAPI document in the Push Advisor API repository.
Understanding OpenAI API options
OpenAI provides three main approaches for integrating with MCP servers:
- The Responses API (this guide): An API with a simple request-response pattern, ideal for basic tool calling and quick integrations.
- The Agents SDK: An advanced agent framework with sessions, handoffs, and persistent context that is perfect for complex conversational workflows.
- ChatGPT Connectors: Connectors offer direct ChatGPT integration to end users via a web UI.
If you need more advanced features like persistent conversations or complex workflows, consider the Agents SDK guide, or try ChatGPT Connectors for a web UI solution.
Prerequisites
You’ll need:
- A Gram account
- An OpenAI API key
- A Python environment set up on your machine
- Basic familiarity with making API requests
Creating a Gram MCP server
If you already have a Gram MCP server configured, you can skip to connecting the Responses API to your Gram-hosted MCP server. For an in-depth guide to how Gram works and more details on creating a Gram-hosted MCP server, check out our introduction to Gram.
Setting up a Gram project
In the Gram dashboard, create a new project: enter a project name and click Submit.
Gram will then guide you through the following steps:
1. Upload the OpenAPI document
Upload the Push Advisor OpenAPI document.
2. Create a toolset
Give your toolset a name (for example, Push Advisor) and click Continue.
Notice that the names of the tools that will be generated from your OpenAPI document are displayed in this dialog.
3. Configure MCP
Enter a URL slug for the MCP server and click Continue.
Gram will create a new toolset from the OpenAPI document.
Click Toolsets in the sidebar to view the Push Advisor toolset.
Publishing an MCP server
Let’s make the toolset available as an MCP server.
Go to the MCP tab, find the Push Advisor toolset, and click the title of the server.
On the MCP Details page, tick the Public checkbox and click Save.
Scroll down to the MCP Config section and note your MCP server URL. For this guide, we’ll use the public server URL format:
https://app.getgram.ai/mcp/canipushtoprod
For authenticated servers, you’ll need an API key. Generate an API key in the Settings tab.
Connecting the Responses API to your Gram-hosted MCP server
The OpenAI Responses API supports MCP servers through the `tools` parameter. Here’s how to connect to your Gram-hosted MCP server.
Basic connection (public server)
Here’s a basic example using a public Gram MCP server. Start by setting your OpenAI API key:
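In a bash-compatible shell, you can export the key as an environment variable (replace the placeholder with your real key):

```shell
export OPENAI_API_KEY="your-openai-api-key"
```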
Install the OpenAI Python package:
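For example, with pip:

```shell
pip install openai
```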
Then run the following Python script:
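A minimal script along these lines should work. The model name and the question are placeholders; the server URL is the public example used throughout this guide:

```python
import os

# Tool configuration for the public Gram-hosted MCP server (no headers needed)
TOOLS = [
    {
        "type": "mcp",
        "server_label": "push_advisor",
        "server_url": "https://app.getgram.ai/mcp/canipushtoprod",
        "require_approval": "never",
    }
]


def ask(question: str) -> str:
    # Imported lazily so the tool config above can be reused on its own
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.responses.create(
        model="gpt-4.1",
        tools=TOOLS,
        input=question,
    )
    return response.output_text


if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(ask("Can I push to prod today?"))
```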
Authenticated connection
For authenticated Gram MCP servers, include your Gram API key in the headers.
It is safest to use environment variables to manage your API keys, so let’s set that up first:
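For example, in a bash-compatible shell (replace the placeholders with your real keys):

```shell
export OPENAI_API_KEY="your-openai-api-key"
export GRAM_API_KEY="your-gram-api-key"
```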
Again, with the OpenAI Python client installed, run the following Python script to connect to your authenticated Gram MCP server:
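A sketch of the authenticated version follows. The `Authorization: Bearer` header is an assumption; check the MCP Config section of your server’s details page for the exact header your server expects:

```python
import os

# Read the Gram API key from the environment set up above
GRAM_API_KEY = os.environ.get("GRAM_API_KEY", "")

TOOLS = [
    {
        "type": "mcp",
        "server_label": "push_advisor",
        "server_url": "https://app.getgram.ai/mcp/canipushtoprod",
        # Assumption: the server accepts a bearer token; adjust the header
        # name and format to match your Gram MCP server's configuration.
        "headers": {"Authorization": f"Bearer {GRAM_API_KEY}"},
        "require_approval": "never",
    }
]


def ask(question: str) -> str:
    from openai import OpenAI  # lazy import so TOOLS can be reused standalone

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.responses.create(
        model="gpt-4.1",
        tools=TOOLS,
        input=question,
    )
    return response.output_text


if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY") and GRAM_API_KEY:
    print(ask("Can I push to prod today?"))
```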
Understanding the configuration
Here’s what each parameter in the `tools` array does:
- `type: "mcp"` specifies that this is an MCP tool.
- `server_label` adds a unique identifier for your MCP server.
- `server_url` adds your Gram-hosted MCP server URL.
- `headers` adds authentication headers (optional for public servers).
- `require_approval` controls tool call approval behavior.
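Putting those parameters together, a complete entry looks roughly like this (the label and URL are the example values from this guide; replace them with your own):

```python
mcp_tool = {
    "type": "mcp",                   # mark this entry as an MCP tool
    "server_label": "push_advisor",  # unique identifier for the server
    "server_url": "https://app.getgram.ai/mcp/canipushtoprod",
    # Omit "headers" entirely for public servers
    "headers": {"Authorization": "Bearer your-gram-api-key"},
    "require_approval": "never",     # or "always", or a selective mapping
}
```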
Tool filtering and permissions
Using the `allowed_tools` parameter, you can control which of your MCP server’s tools are available to the model during an API call.
Filtering specific tools
If your Gram MCP server has multiple tools but you only want to expose certain ones in this particular API call, use the `allowed_tools` parameter:
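For example (the allowed tool name here is hypothetical; substitute the tool names shown for your toolset in the Gram dashboard):

```python
TOOLS = [
    {
        "type": "mcp",
        "server_label": "push_advisor",
        "server_url": "https://app.getgram.ai/mcp/canipushtoprod",
        "require_approval": "never",
        # Only the tools listed here are exposed to the model.
        # "can_i_push_to_prod" is an illustrative name, not a confirmed one.
        "allowed_tools": ["can_i_push_to_prod"],
    }
]
```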
Note how the `vibe_check` tool is excluded from the `allowed_tools` list. This means it won’t be available for use in this API call, even if it’s defined in your curated toolset and MCP server.
Managing tool approvals
For production applications, you might want to control when tools are called. The OpenAI Responses API provides several approval options:
- Never require approval (fastest):
- Always require approval (most secure):
- Selective approval:
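The three options map onto the `require_approval` parameter like this (merge the one you want into your MCP tool entry; the tool name in the selective example is the one used throughout this guide):

```python
# Never require approval (fastest): every tool call runs immediately
fastest = {"require_approval": "never"}

# Always require approval (most secure): every call pauses for approval
most_secure = {"require_approval": "always"}

# Selective approval: tools listed under "never" skip approval,
# while all other tools still require it
selective = {"require_approval": {"never": {"tool_names": ["vibe_check"]}}}
```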
When approvals are required, the API returns an `mcp_approval_request` item that you can respond to in a subsequent API call. See OpenAI’s documentation on approvals for more details.
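The flow looks roughly like this: scan the first response for approval requests, then answer them in a follow-up call chained via `previous_response_id`. This is a sketch, assuming `client` is an `openai.OpenAI` instance and `tools` is the MCP tools array described above:

```python
def approve_all(client, tools, response):
    """Approve every pending MCP approval request in `response` and
    continue the conversation via previous_response_id."""
    approvals = [
        {
            "type": "mcp_approval_response",
            "approval_request_id": item.id,
            "approve": True,  # set False to deny the tool call
        }
        for item in response.output
        if item.type == "mcp_approval_request"
    ]
    if not approvals:
        return response  # nothing to approve
    return client.responses.create(
        model="gpt-4.1",
        tools=tools,
        previous_response_id=response.id,
        input=approvals,
    )
```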
Working with responses
The OpenAI Responses API returns detailed information about MCP tool usage:
Successful tool calls
When a tool call succeeds, you’ll see an `mcp_call` item in the response:
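You can pull these items out of `response.output` like so (a sketch; the field names follow OpenAI’s published response shape):

```python
def summarize_mcp_calls(response):
    """Collect the MCP tool-call items from a Responses API result."""
    return [
        {
            "tool": item.name,            # which tool was invoked
            "arguments": item.arguments,  # JSON string of the arguments sent
            "output": item.output,        # the tool's result
        }
        for item in response.output
        if item.type == "mcp_call"
    ]
```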
Error handling
Failed tool calls will populate the `error` field:
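A simple guard that surfaces these failures might look like this (a sketch under the same response-shape assumptions as above):

```python
def check_mcp_errors(response):
    """Raise if any MCP tool call in the response reported an error."""
    for item in response.output:
        if item.type == "mcp_call" and getattr(item, "error", None):
            raise RuntimeError(f"Tool {item.name} failed: {item.error}")
```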
Differences from Anthropic’s MCP integration
While both OpenAI and Anthropic support MCP servers, there are key differences in their approaches:
Connection method
- OpenAI connects directly to remote MCP servers via HTTP/HTTPS in the Responses API.
- Anthropic uses both direct HTTP connections (Claude API) and local MCP clients (Claude Desktop/Code).
Authentication
- OpenAI uses simple HTTP headers for authentication.
- Anthropic supports OAuth Bearer tokens and more complex authentication flows.
Tool management
- OpenAI allows tool filtering via the `allowed_tools` parameter.
- Anthropic allows tool configuration through the `tool_configuration` object.
Approval workflow
- OpenAI handles approval requests through response chaining with `previous_response_id`.
- Anthropic has direct tool execution with optional authentication prompts.
API structure
- OpenAI uses the `tools` array with `type: "mcp"`.
- Anthropic uses the `mcp_servers` parameter with server configurations.
Response format
- OpenAI returns `mcp_call` and `mcp_list_tools` items.
- Anthropic returns `mcp_tool_use` and `mcp_tool_result` blocks.
Testing your integration
If you encounter issues during integration, follow these steps to troubleshoot:
Validating MCP server connectivity
Before integrating it into your application, test your Gram MCP server in the Gram Playground to confirm the tools behave as expected.
Using the MCP Inspector
Anthropic provides an MCP Inspector, an interactive developer tool for testing and debugging MCP servers.
To test your Gram MCP server with the Inspector:
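Assuming you have Node.js installed, you can launch the Inspector locally with npx:

```shell
npx @modelcontextprotocol/inspector
```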
In the Transport Type field, select Streamable HTTP.
Enter your server URL in the URL field, for example: https://app.getgram.ai/mcp/canipushtoprod
Click Connect to establish a connection to your MCP server.
Use the Inspector to verify that your MCP server responds correctly before integrating it with your OpenAI API calls.
What’s next
You now have OpenAI’s GPT models connected to your Gram-hosted MCP server, giving them access to your custom APIs and tools.
Ready to build your own MCP server? Try Gram today.