What are MCP prompts?
MCP prompts are reusable, structured message templates exposed by MCP servers to guide interactions with agents. Unlike tools (which execute logic) or resources (which provide read-only data), prompts return a predefined list of messages meant to initiate consistent model behavior.
Prompts are declarative, composable, and designed for user-initiated workflows, such as:
- Slash commands or quick actions triggered via UI
- Task-specific interactions, like summarization or code explanation
Use prompts when you want to define how users engage with the model, not to execute logic or serve contextual data.
Prompt structure
A prompt is a named, parameterized template. It defines:
- A `name` (a unique identifier)
- An optional `description`
- An optional list of structured `arguments`
```json
{
  "name": "summarize-errors",
  "description": "Summarize recent error logs",
  "arguments": [
    {
      "name": "logUri",
      "description": "URI of the log resource",
      "required": true
    }
  ]
}
```

The server exposes prompts via prompts/list and provides message content on prompts/get.
Discovering prompts
Clients use prompts/list to fetch available prompt definitions:
```json
{
  "method": "prompts/list"
}
```

The response includes a list of prompts:
```json
{
  "prompts": [
    {
      "name": "explain-code",
      "description": "Explain how a function works",
      "arguments": [{ "name": "code", "required": true }]
    }
  ]
}
```

Using prompts
To use a prompt, clients call prompts/get with a prompt name and arguments:
```json
{
  "method": "prompts/get",
  "params": {
    "name": "explain-code",
    "arguments": {
      "code": "def hello(): print('hi')"
    }
  }
}
```

The server responds with a messages[] array, ready to send to the model:
```json
{
  "description": "Explain how a function works",
  "messages": [
    {
      "role": "user",
      "content": {
        "type": "text",
        "text": "Explain this Python code:\n\ndef hello(): print('hi')"
      }
    }
  ]
}
```

Defining and serving prompts in Python
The following example defines a simple MCP prompt called git-commit that helps users generate commit messages from change descriptions.
```python
import asyncio

import mcp.types as types
from mcp.server import Server
from mcp.server.stdio import stdio_server

app = Server("git-prompts-server")

@app.list_prompts()
async def list_prompts() -> list[types.Prompt]:
    return [
        types.Prompt(
            name="git-commit",
            description="Generate a Git commit message from a code diff or change summary",
            arguments=[
                types.PromptArgument(
                    name="changes",
                    description="Code diff or explanation of the changes made",
                    required=True
                )
            ]
        )
    ]

@app.get_prompt()
async def get_prompt(name: str, arguments: dict[str, str]) -> types.GetPromptResult:
    if name != "git-commit":
        raise ValueError("Unknown prompt")
    changes = arguments.get("changes", "")
    return types.GetPromptResult(
        messages=[
            types.PromptMessage(
                role="user",
                content=types.TextContent(
                    type="text",
                    text=(
                        "Generate a Git commit message summarizing these changes:\n\n"
                        f"{changes}"
                    )
                )
            )
        ]
    )

async def main():
    # Serve the prompts over stdio so MCP clients can connect
    async with stdio_server() as (read_stream, write_stream):
        await app.run(read_stream, write_stream, app.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())
```

In this example, we:
- Register a static prompt named `git-commit` with a human-readable description and a required `changes` argument.
- Expose metadata via `@app.list_prompts()` so UIs and clients can discover the prompt.
- Implement prompt generation via `@app.get_prompt()`, which creates a single message asking the agent to produce a commit message from the input.
- Avoid side effects: the server does not evaluate or format the response; it only structures a message.
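For reference, the payload this kind of handler produces is plain JSON on the wire. Here is a minimal, SDK-free sketch of building the same prompts/get result (the `build_git_commit_prompt` helper is hypothetical, shown only to make the shape concrete):

```python
import json

def build_git_commit_prompt(changes: str) -> dict:
    """Build the prompts/get result payload for the git-commit prompt."""
    return {
        "description": "Generate a Git commit message from a code diff or change summary",
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": "Generate a Git commit message summarizing these changes:\n\n"
                            + changes,
                },
            }
        ],
    }

payload = build_git_commit_prompt("Add retry logic to the HTTP client")
print(json.dumps(payload, indent=2))
```

The SDK's `GetPromptResult` serializes to exactly this shape, which is why the earlier JSON examples and the Python handler line up one-to-one.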
Best practices and pitfalls to avoid
Here are some best practices for implementing MCP prompts:
- Use clear, actionable names (for example, `summarize-errors`, not `get-summarized-error-log-output`).
- Validate all required arguments up front.
- Keep prompts deterministic and stateless (using the same input should produce the same output).
- Embed resources directly, if needed, for model context.
- Provide concise descriptions to improve UI discoverability.
When implementing MCP prompts, avoid the following common mistakes:
- Allowing missing or malformed arguments
- Using vague or overly long prompt names
- Passing oversized inputs (such as full files or large diffs)
- Failing to sanitize non-UTF-8 or injection-prone strings
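A minimal sketch of defensive argument handling that addresses these pitfalls, using only the standard library (the `validate_changes` helper and the 8,000-character cap are illustrative choices, not part of the MCP SDK):

```python
MAX_INPUT_CHARS = 8_000  # illustrative cap to keep full files and large diffs out of the prompt

def validate_changes(arguments: dict[str, str]) -> str:
    """Validate and sanitize the required 'changes' argument before templating."""
    changes = arguments.get("changes")
    if not changes:
        raise ValueError("Missing required argument: changes")
    if len(changes) > MAX_INPUT_CHARS:
        raise ValueError("Input too large; pass a summary or a resource URI instead")
    # Replace any characters that do not round-trip cleanly as UTF-8
    return changes.encode("utf-8", errors="replace").decode("utf-8")
```

Calling a helper like this at the top of a prompts/get handler makes malformed requests fail fast instead of producing a half-filled template.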
Prompts vs tools vs resources
The table below compares the three core primitives in MCP:
| Feature | Prompts | Tools | Resources |
|---|---|---|---|
| Purpose | Guide model interaction | Execute logic with side effects | Provide structured read-only data |
| Triggered by | User or UI | Agent or client (tools/call) | Agent or client (resources/read) |
| Behavior | Returns messages[] | Runs a function; returns a result | Returns static or dynamic content |
| Side effects | None | Yes (I/O, API calls, mutations) | None |
| Composition | Can embed arguments and resources | Accepts structured input | URI-scoped, optionally templated |
| Use cases | Summarization, Q&A, message templates | File writing, API calls, workflows | Logs, config files, external data |
Practical implementation example
MCP prompts are a powerful way to define reusable templates that combine context from your application with instructions for the LLM. Here’s how to implement a prompt using the TypeScript SDK.
This example creates a WhatsApp chat summarization prompt that retrieves chat data and formats it for the LLM:
```typescript
mcpServer.prompt(
  "whatsapp_chat_summarizer",
  "Summarize WhatsApp chat and provide insights",
  {
    // `z` is the schema builder imported from the "zod" package
    chatName: z.string().describe("Name of the WhatsApp chat to summarize"),
  },
  async (args) => {
    const { chatName = "" } = args;

    // Find the chat by name.
    // A real implementation would be more robust.
    const targetChat = await chatService.findChatByName(chatName);

    // Get recent messages for analysis
    const messages = await messageService.getMessages(targetChat.id);

    const promptText = `Analyze this WhatsApp chat data for insights:

Chat Information:
- Chat Name: ${targetChat.name}
- Chat Type: ${targetChat.isGroup ? "Group Chat" : "Individual Chat"}
- Analysis Type: summary

Analysis Focus:
Provide a comprehensive overview including key topics, sentiment, and notable patterns.

Recent Messages (${messages.length} messages):
${messages.map((msg) => msg._serializedContent).join("\n")}

Please provide a detailed summary.`;

    return {
      description: `Summary of WhatsApp chat: ${targetChat.name}`,
      messages: [
        {
          role: "user",
          content: {
            type: "text",
            text: promptText,
          },
        },
      ],
    };
  },
);
```

This defines a prompt called whatsapp_chat_summarizer that takes a chatName argument and generates a formatted prompt with the chat data.
How prompts work in practice
The LLM client presents a list of available prompts to the user, who can then select one to use. When the user selects a prompt with arguments, the client should display a modal or form allowing the user to fill in the required arguments.
Once the user submits the form, the MCP client sends a prompts/get request to the MCP server with the selected prompt and its arguments. The MCP server adds the relevant context to the prompt (in this case, the WhatsApp chat data) and returns the formatted messages to the MCP client. The client can then send these messages to the LLM for processing.
This is especially useful for repetitive tasks where a user needs to combine tool call results with a complex prompt. If you can anticipate the user’s needs, you can define a prompt that combines the necessary context and tool calls into a single reusable template.
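The flow described above can be sketched end to end with plain dictionaries standing in for the JSON-RPC messages (illustrative only; a real client would use an MCP SDK session, and these handler bodies are simplified stand-ins for the server logic):

```python
# Prompt metadata the server would return from prompts/list.
PROMPTS = {
    "whatsapp_chat_summarizer": {
        "description": "Summarize WhatsApp chat and provide insights",
        "arguments": [{"name": "chatName", "required": True}],
    }
}

def handle_list_prompts() -> dict:
    """Step 1: the client discovers available prompts to show in its UI."""
    return {"prompts": [{"name": name, **meta} for name, meta in PROMPTS.items()]}

def handle_get_prompt(name: str, arguments: dict) -> dict:
    """Step 2: after the user fills in the form, the server fills the template."""
    meta = PROMPTS[name]
    missing = [a["name"] for a in meta["arguments"]
               if a.get("required") and a["name"] not in arguments]
    if missing:
        raise ValueError(f"Missing required arguments: {missing}")
    text = f"Analyze the WhatsApp chat {arguments['chatName']!r} and summarize it."
    return {
        "description": meta["description"],
        "messages": [{"role": "user", "content": {"type": "text", "text": text}}],
    }

listing = handle_list_prompts()      # presented to the user as a prompt picker
result = handle_get_prompt(          # sent once the argument form is submitted
    "whatsapp_chat_summarizer", {"chatName": "Family"}
)
```

The `messages` array in `result` is what the client forwards to the LLM, completing the round trip.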