What are MCP prompts?
MCP prompts are reusable, structured message templates exposed by MCP servers to guide interactions with agents. Unlike tools (which execute logic) or resources (which provide read-only data), prompts return a predefined list of messages meant to initiate consistent model behavior.
Prompts are declarative, composable, and designed for user-initiated workflows, such as:
- Slash commands or quick actions triggered via UI
- Task-specific interactions, like summarization or code explanation
Use prompts when you want to define how users engage with the model, not when you need to perform logic or serve contextual data.
Prompt structure
A prompt is a named, parameterized template. It defines:
- A `name` (a unique identifier)
- An optional `description`
- An optional list of structured `arguments`

The server exposes prompts via `prompts/list` and provides message content via `prompts/get`.
Discovering prompts
Clients use `prompts/list` to fetch available prompt definitions.
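A minimal request, per the MCP specification, looks like this (the `id` value is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "prompts/list"
}
```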
The response includes a list of prompts:
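(The field values below are illustrative; the `git-commit` prompt shown here is defined later in this article.)

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "prompts": [
      {
        "name": "git-commit",
        "description": "Generate a commit message from a description of changes",
        "arguments": [
          {
            "name": "changes",
            "description": "Description of the changes to commit",
            "required": true
          }
        ]
      }
    ]
  }
}
```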
Using prompts
To use a prompt, clients call `prompts/get` with a prompt `name` and `arguments`.
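Continuing the `git-commit` example (the argument value is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "prompts/get",
  "params": {
    "name": "git-commit",
    "arguments": {
      "changes": "Added retry logic to the HTTP client"
    }
  }
}
```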
The server responds with a `messages[]` array, ready to send to the model.
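A corresponding response, with illustrative content:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "description": "Generate a commit message",
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "Write a concise git commit message for these changes:\n\nAdded retry logic to the HTTP client"
        }
      }
    ]
  }
}
```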
Defining and serving prompts in Python
The following example defines a simple MCP prompt called `git-commit` that helps users generate commit messages from change descriptions.
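Here is a minimal sketch of that prompt using the low-level `Server` class from the official MCP Python SDK (the `mcp` package); the server name and prompt wording are illustrative:

```python
import asyncio

import mcp.types as types
from mcp.server import Server
from mcp.server.stdio import stdio_server

server = Server("git-commit-prompts")


@server.list_prompts()
async def list_prompts() -> list[types.Prompt]:
    # Advertise the git-commit prompt so clients can discover it via prompts/list.
    return [
        types.Prompt(
            name="git-commit",
            description="Generate a commit message from a description of changes",
            arguments=[
                types.PromptArgument(
                    name="changes",
                    description="Description of the changes to commit",
                    required=True,
                )
            ],
        )
    ]


@server.get_prompt()
async def get_prompt(name: str, arguments: dict[str, str] | None) -> types.GetPromptResult:
    # Serve message content for prompts/get, validating inputs up front.
    if name != "git-commit":
        raise ValueError(f"Unknown prompt: {name}")
    changes = (arguments or {}).get("changes")
    if not changes:
        raise ValueError("Missing required argument: changes")

    return types.GetPromptResult(
        description="Generate a commit message",
        messages=[
            types.PromptMessage(
                role="user",
                content=types.TextContent(
                    type="text",
                    text=f"Write a concise git commit message for these changes:\n\n{changes}",
                ),
            )
        ],
    )


async def main() -> None:
    # Run the server over stdio so any MCP client can connect to it.
    async with stdio_server() as (read_stream, write_stream):
        await server.run(read_stream, write_stream, server.create_initialization_options())


if __name__ == "__main__":
    asyncio.run(main())
```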
In this example, we:
- Register a static prompt named `git-commit` with a human-readable description and a required `changes` argument.
- Expose metadata via `@list_prompts` so UIs and clients can discover the prompt.
- Implement prompt generation via `@get_prompt`, which creates a single message that asks the agent to produce a commit message based on the input.
- Avoid side effects: the server does not evaluate or format the response; it only structures a message.
Best practices and pitfalls to avoid
Here are some best practices for implementing MCP prompts:
- Use clear, actionable names (for example, `summarize-errors`, not `get-summarized-error-log-output`).
- Validate all required arguments up front.
- Keep prompts deterministic and stateless (using the same input should produce the same output).
- Embed resources directly, if needed, for model context.
- Provide concise descriptions to improve UI discoverability.
When implementing MCP prompts, avoid the following common mistakes:
- Allowing missing or malformed arguments
- Using vague or overly long prompt names
- Passing oversized inputs (such as full files or large diffs)
- Failing to sanitize non-UTF-8 or injection-prone strings
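As a concrete illustration of the validation and sanitization points above, here is a hypothetical Python helper; the size limit, names, and sanitization rules are illustrative, not part of the MCP specification:

```python
# Illustrative guardrails for a required "changes" prompt argument.
MAX_ARGUMENT_CHARS = 4_000  # keep inputs small; pass references, not full files or diffs


def validate_changes_argument(arguments: dict[str, str] | None) -> str:
    args = arguments or {}
    changes = args.get("changes")
    if not changes:
        raise ValueError("Missing required argument: changes")

    # Enforce valid UTF-8 and drop injection-prone control characters,
    # keeping ordinary whitespace like newlines and tabs.
    cleaned = changes.encode("utf-8", errors="replace").decode("utf-8")
    cleaned = "".join(ch for ch in cleaned if ch.isprintable() or ch in "\n\t")

    if len(cleaned) > MAX_ARGUMENT_CHARS:
        raise ValueError(f"Argument 'changes' exceeds {MAX_ARGUMENT_CHARS} characters")
    return cleaned
```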
Prompts vs tools vs resources
The table below compares the three core primitives in MCP:
| Feature | Prompts | Tools | Resources |
|---|---|---|---|
| Purpose | Guide model interaction | Execute logic with side effects | Provide structured read-only data |
| Triggered by | User or UI | Agent or client (`tools/call`) | Agent or client (`resources/read`) |
| Behavior | Returns `messages[]` | Runs a function; returns a result | Returns static or dynamic content |
| Side effects | None | Yes (I/O, API calls, mutations) | None |
| Composition | Can embed arguments and resources | Accepts structured input | URI-scoped, optionally templated |
| Use cases | Summarization, Q&A, message templates | File writing, API calls, workflows | Logs, config files, external data |
Practical implementation example
MCP prompts are a powerful way to define reusable templates that combine context from your application with instructions for the LLM. Here’s how to implement a prompt using the TypeScript SDK.
This example creates a WhatsApp chat summarization prompt that retrieves chat data and formats it for the LLM:
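(The sketch below uses `McpServer` from the `@modelcontextprotocol/sdk` package; the `fetchWhatsAppChat` helper, server name, and prompt wording are illustrative stand-ins for your own data access.)

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "whatsapp-prompts", version: "1.0.0" });

server.prompt(
  "whatsapp_chat_summarizer",
  "Summarize a WhatsApp chat by name",
  { chatName: z.string().describe("Name of the chat to summarize") },
  async ({ chatName }) => {
    // Retrieve the chat history from your own data source (hypothetical helper).
    const chatData = await fetchWhatsAppChat(chatName);

    // Return the messages[] array the client will forward to the LLM.
    return {
      messages: [
        {
          role: "user",
          content: {
            type: "text",
            text: `Summarize the WhatsApp chat "${chatName}". Highlight decisions, action items, and open questions.\n\n${chatData}`,
          },
        },
      ],
    };
  }
);

// Hypothetical data-access helper; replace with however your app stores chats.
async function fetchWhatsAppChat(chatName: string): Promise<string> {
  return `[2024-01-01 09:00] Alice: ...`;
}

const transport = new StdioServerTransport();
await server.connect(transport);
```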
This defines a prompt called `whatsapp_chat_summarizer` that takes a `chatName` argument and generates a formatted prompt with the chat data.
How prompts work in practice
The LLM client presents a list of available prompts to the user, who can then select one to use. When the user selects a prompt with arguments, the client should display a modal or form allowing the user to fill in the required arguments.
Once the user submits the form, the MCP client sends a `prompts/get` request to the MCP server with the selected prompt and its arguments. The MCP server adds the relevant context to the prompt (in this case, the WhatsApp chat data) and returns the formatted messages to the MCP client. The client can then send these messages to the LLM for processing.
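As a sketch of this client-side flow using the MCP TypeScript SDK (the transport command and chat name are placeholders):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Connect to the prompt server defined above (command and args are illustrative).
const transport = new StdioClientTransport({ command: "node", args: ["server.js"] });
const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
await client.connect(transport);

// 1. Discover available prompts to present in the UI.
const { prompts } = await client.listPrompts();
console.log(prompts.map((p) => p.name));

// 2. After the user fills in the arguments, fetch the rendered messages.
const result = await client.getPrompt({
  name: "whatsapp_chat_summarizer",
  arguments: { chatName: "Family group" },
});

// 3. result.messages is ready to be forwarded to the LLM of your choice.
console.log(result.messages);
```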
This is especially useful for repetitive tasks where a user needs to combine tool call results with a complex prompt. If you can anticipate the user’s needs, you can define a prompt that combines the necessary context and tool calls into a single reusable template.