What are MCP Transports?
So, you’ve built your first MCP server – you’ve defined some tools, exposed a few resources, and maybe even wired up prompts – but how do you actually connect to it? That’s where transports come in.
MCP Transports define how messages move between clients and servers. While MCP is a model-agnostic, JSON-RPC-based protocol, the transport layer handles the real-world details of sending and receiving data – whether you’re embedding the server in a local CLI, streaming updates to a browser, or wiring it into a desktop agent.
MCP currently supports two standard forms of transport: stdio and SSE. Custom transports that conform to the protocol's JSON-RPC message format and message-handling semantics are also supported.
Request, response, and notification formats
All communication follows the JSON-RPC 2.0 wire protocol. Every message sent between the client and the server falls into one of these categories:
Request
The client sends the following payload to invoke a method on the server:
{"jsonrpc": "2.0","id": 1,"method": "tools/call","params": { "name": "place_order", "arguments": { ... } }}
Response
The server returns the following data after handling a request:
{"jsonrpc": "2.0","id": 1,"result": { "status": "ok" }}
Notification
A notification is a one-way message that does not expect a response.
{"jsonrpc": "2.0","method": "notifications/prompts/list_changed","params": { "updated": true }}
Built-in MCP Transports
MCP servers can be exposed over two built-in transport mechanisms:
stdio
Standard input and output (stdio) is the simplest and most universal transport for MCP. It allows a server to read from stdin and write to stdout, which is ideal for local use, CLIs, and agent plugins.
Typical use cases include:
- CLI tools
- Local development
- Agent plugins (like the Claude desktop app or LangGraph steps)
Here is a Python implementation of an MCP server that communicates over stdio:
import asyncio

from mcp.server import Server
from mcp.server.stdio import stdio_server

app = Server("stdio-mcp-server")

@app.list_tools()
async def tools():
    # No tools yet; return an empty list.
    return []

async def main():
    # stdio_server() yields the read/write streams wrapping stdin and stdout.
    async with stdio_server() as (read_stream, write_stream):
        await app.run(
            read_stream,
            write_stream,
            app.create_initialization_options(),
        )

if __name__ == "__main__":
    asyncio.run(main())
In the example above, we:
- Register an empty list of tools via @app.list_tools(), which could later be populated with callable server-side functions.
- Use stdio_server() as the transport, which opens standard input and output streams for communication and is ideal for local processes and CLI integrations.
- Run the server with app.run(...), passing the opened streams and initialization options so it can handle incoming JSON-RPC messages and serve responses.
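To try it out from the client side, you can launch the server as a subprocess and talk to it over its stdin/stdout. Here is a minimal sketch that assumes the MCP Python SDK's client helpers (StdioServerParameters, stdio_client, ClientSession) and that the server above is saved as server.py (a placeholder filename):

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Spawn the server process and connect to its stdin/stdout streams.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()          # initialize / initialized handshake
            tools = await session.list_tools()  # JSON-RPC request: tools/list
            print(tools)

asyncio.run(main())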
SSE
Server-sent events (SSE) provide a persistent connection for server-to-client streaming, while client-to-server messages are sent using HTTP POST. It's useful for interactive UIs, dashboards, or environments where long-lived connections are preferred.
Here is an implementation of an MCP server serving via SSE:
from mcp.server import Server
from mcp.server.sse import SseServerTransport
from starlette.applications import Starlette
from starlette.responses import Response
from starlette.routing import Mount, Route

app = Server("sse-mcp-server")

# Clients will POST their JSON-RPC messages to this endpoint.
sse = SseServerTransport("/messages/")

async def sse_handler(request):
    # Open the long-lived SSE connection and hand its streams to the MCP server.
    async with sse.connect_sse(request.scope, request.receive, request._send) as streams:
        await app.run(streams[0], streams[1], app.create_initialization_options())
    # Return an empty response once the SSE connection closes.
    return Response()

async def post_handler(scope, receive, send):
    # Raw ASGI endpoint that accepts client-to-server messages over HTTP POST.
    await sse.handle_post_message(scope, receive, send)

starlette_app = Starlette(routes=[
    Route("/sse", endpoint=sse_handler),
    Mount("/messages/", app=post_handler),
])
In the example above, we:
- Initialize the SSE transport with SseServerTransport("/messages/"), which defines the POST endpoint that the server advertises to clients for sending their messages.
- Define a /sse endpoint via sse_handler, which establishes a persistent SSE connection. Inside it, we call app.run() with the read and write streams for the connected client.
- Mount post_handler at /messages/, allowing the client to send JSON-RPC messages back to the server via HTTP POST, which is how the client communicates with the server.
- Wrap the handlers in a Starlette app using route definitions, making it ready to be served by any ASGI server (for example, Uvicorn).
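Once starlette_app is running under an ASGI server (for example, uvicorn your_module:starlette_app --port 8000, where your_module is a placeholder), a client can connect over SSE. Here is a minimal sketch that assumes the MCP Python SDK's sse_client helper and a server listening on port 8000:

import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    # Open the SSE stream for server-to-client messages; the helper learns the
    # POST endpoint (/messages/?session_id=...) from the initial SSE handshake.
    async with sse_client("http://127.0.0.1:8000/sse") as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            print(await session.list_tools())

asyncio.run(main())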
Custom transport implementations
MCP does not require servers to use stdio or SSE. Any transport can be used, as long as it:
- Supports bidirectional messaging via JSON-RPC 2.0.
- Can be hooked into the server's run(read_stream, write_stream, initialization_options) method.
- Implements proper connection lifecycle and error handling.
So, you can use:
- WebSockets
- gRPC
- Shared memory channels
- Named pipes
- Custom browser bridges
You must ensure that the messages follow the JSON-RPC structure and that your transport yields the appropriate async streams.
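To make the shape of such a transport concrete, here is a minimal sketch. The channel object is hypothetical (async-iterable for inbound frames, with send_text() and aclose() methods), and the streams carry plain dicts for brevity; wiring it into app.run() additionally requires wrapping each item in whatever message type your installed SDK version expects.

import json
from contextlib import asynccontextmanager

import anyio

@asynccontextmanager
async def channel_transport(channel):
    # Streams the server will read from / write to.
    incoming_send, incoming_recv = anyio.create_memory_object_stream(0)
    outgoing_send, outgoing_recv = anyio.create_memory_object_stream(0)

    async def pump_incoming():
        # Channel -> server: parse each raw frame into a JSON-RPC message.
        async with incoming_send:
            async for raw in channel:
                await incoming_send.send(json.loads(raw))

    async def pump_outgoing():
        # Server -> channel: serialize each outgoing message onto the wire.
        async with outgoing_recv:
            async for message in outgoing_recv:
                await channel.send_text(json.dumps(message))

    async with anyio.create_task_group() as tg:
        tg.start_soon(pump_incoming)
        tg.start_soon(pump_outgoing)
        try:
            # The server reads from incoming_recv and writes to outgoing_send.
            yield incoming_recv, outgoing_send
        finally:
            await channel.aclose()
            tg.cancel_scope.cancel()

The same pattern applies whether the channel is a WebSocket, a named pipe, or a shared memory segment; only the code that moves raw frames changes.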
Connection lifecycle and control
Regardless of the transport used, all MCP connections follow the same lifecycle:
Initialization
- The client connects and sends an initialize request with its capabilities.
- The server responds with its capabilities and protocol version.
- The client acknowledges with notifications/initialized.
Active session
During an active session, both clients and servers can:
- Make requests that expect responses.
- Send notifications that don’t expect responses.
- Cancel in-flight requests with notifications/cancelled.
- Send progress updates with notifications/progress (although this is not yet implemented by the Python MCP SDK).
Termination
Either side can terminate the connection by:
- Closing the transport.
- Sending a protocol-specific termination message.
- Disconnecting without notification (although clean termination is preferred).
Final thoughts
Transports should be secured according to their communication layers:
- For SSE, validate Origin headers, enforce authentication, and bind to localhost during local development to avoid DNS rebinding vulnerabilities (a minimal Origin check is sketched below).
- For custom transports, ensure that authentication, rate limiting, and input sanitization are enforced. Always follow JSON-RPC security best practices and sanitize all payloads.
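As one example, the Origin check can be a small ASGI middleware wrapped around the Starlette app from the SSE example; the allowed-origin set below is a placeholder you would adapt to your deployment.

from starlette.responses import PlainTextResponse

ALLOWED_ORIGINS = {"http://localhost:3000"}  # placeholder: your trusted origins

class OriginCheckMiddleware:
    """Reject HTTP requests whose Origin header is present but not allow-listed."""

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope["type"] == "http":
            headers = dict(scope.get("headers", []))
            origin = headers.get(b"origin", b"").decode()
            if origin and origin not in ALLOWED_ORIGINS:
                response = PlainTextResponse("Forbidden origin", status_code=403)
                await response(scope, receive, send)
                return
        await self.app(scope, receive, send)

# starlette_app is the Starlette app from the SSE example above.
starlette_app.add_middleware(OriginCheckMiddleware)

During local development, also bind the ASGI server to 127.0.0.1 rather than 0.0.0.0 so the endpoint is not reachable from other hosts.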
The transport may be low-level but it defines the perimeter of your server, so treat it as a security boundary.
An MCP server can support multiple transport mechanisms simultaneously, but it must implement at least one transport to be operational. Each client connection uses one specific transport, but the server itself can expose its functionality through different transport interfaces based on various client needs.
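As a rough sketch of that flexibility, one common pattern is to pick the transport at startup, for example via a command-line flag. The build_starlette_app helper below is hypothetical and stands in for the SSE and Starlette wiring shown earlier:

import asyncio
import sys

from mcp.server import Server
from mcp.server.stdio import stdio_server

app = Server("multi-transport-server")

async def serve_stdio():
    # Local clients (CLIs, desktop agents) talk to the server over stdin/stdout.
    async with stdio_server() as (read_stream, write_stream):
        await app.run(read_stream, write_stream, app.create_initialization_options())

def serve_sse():
    # build_starlette_app is a hypothetical helper that wires `app` into the
    # SSE + Starlette setup from the earlier example.
    import uvicorn
    uvicorn.run(build_starlette_app(app), host="127.0.0.1", port=8000)

if __name__ == "__main__":
    if "--sse" in sys.argv:
        serve_sse()
    else:
        asyncio.run(serve_stdio())

The table below summarizes which transport tends to fit which scenario.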
| Use case | Recommended transport |
|---|---|
| Local development | stdio |
| CLI tools | stdio |
| Agent plugins (LangGraph, Claude desktop) | stdio |
| Server UIs / browsers | SSE |
| Cross-network communication | SSE with CORS/Auth |
| Custom hardware or IPC | Custom |
| WebSocket or real-time streaming | Custom (WebSocket) |