CLI Generation
Agent mode
Generated CLIs detect when agents are calling them and automatically switch output to machine-readable formats. No configuration needed.
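This kind of detection typically keys off whether output is attached to a terminal. A minimal sketch of the idea in shell — the JSON shape here is illustrative, not Speakeasy's actual output format:

```shell
# Sketch: emit human-friendly output on a TTY, machine-readable output otherwise.
# Agents and pipelines capture stdout, so the TTY check fails and they get JSON.
if [ -t 1 ]; then
  printf 'NAME\tSTATUS\nbuild-1\tok\n'          # interactive table for humans
else
  printf '{"name":"build-1","status":"ok"}\n'   # structured output for agents
fi
```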
Agent mode
Pass --agent-mode to disable interactivity and switch output to TOON, a structured format agents can parse reliably.
llms.txt, skills.md
Expose CLI capabilities through llms.txt for LLM discovery, and skills.md for agent skill registries.
Agent discoverable
Agents find and understand CLI capabilities through built-in help docs and shell completions — all generated automatically from the API spec.
Machine-readable schemas
Output JSON Schema definitions for all commands and responses. Agents can validate inputs and parse outputs with full type safety.
Generated CLIs ship to every major package manager. Users install with a single command. Agents register them as skills.
Feature rich
Every generated CLI ships with features that make it a joy to use interactively and trivial to automate.
Interactive TUI
Explore mode with full TUI for browsing commands, viewing help, and navigating API resources interactively.
Retries
Automatic retries with exponential backoff and configurable retry policies for transient failures.
jq filtering and chaining
Built-in jq support for filtering JSON output. Agents can chain commands together, piping output from one operation into the next.
Pagination
Automatic pagination for list endpoints. Fetch all results or page through them one at a time.
Auth
OAuth 2.0, API keys, and bearer tokens out of the box. Browser-based login flows for humans, programmatic auth for agents.
Extensibility
Add custom commands on top of the generated base CLI. Extend with business logic, workflows, or domain-specific operations.
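The jq filtering and chaining described above composes with standard Unix tools. A small illustration using simulated CLI output — the `list_users` function and its JSON payload are made up for the example; a real generated CLI would produce output matching its own API schema:

```shell
# Simulate a CLI's JSON list output, then filter it down with jq.
# An agent can strip fields it doesn't need before the data enters context.
list_users() {
  printf '[{"id":1,"name":"ada","active":true},{"id":2,"name":"bob","active":false}]\n'
}

# Keep only the names of active users.
list_users | jq -r '.[] | select(.active) | .name'   # → ada
```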
"We are very happy with Speakeasy's support… Internally, our developers find the SDK useful, it's actively used, and continues to generate valuable feedback. The Speakeasy team has been instrumental throughout our implementation journey."
Gaspard Blanchet
SOFTWARE ENGINEER @ MISTRAL AI
"We've been using Speakeasy to create Dub.co's TypeScript SDK and it's been an amazing experience so far."
Steven Tey
FOUNDER & CEO @ DUB
"Speakeasy has been a really good partner for us. We've been able to get support quickly when needed, and the platform has significantly streamlined our SDK generation process."
Joseph Spurrier
SR. STAFF ENGINEER @ PROVE
CLI vs MCP
CLIs and MCP servers serve different personas, runtimes, and distribution models. The question isn't which one wins — it's when to reach for which.
CLI strengths
Chain commands with Unix pipes. An agent can filter, transform, and combine CLI output using jq, grep, and other tools — saving tokens by stripping irrelevant data before it enters context.
Every command is reproducible. Copy it from the audit trail, paste it in a terminal, and get the same result. Debugging is straightforward.
Models trained on billions of shell scripts already know popular CLIs. For tools like git, curl, and kubectl, agents often don't need to read help text.
Subcommands and --help at every level let agents explore incrementally without loading hundreds of tool definitions upfront.
MCP strengths
Share a URL in Slack or embed it in docs. Any MCP-compatible client connects immediately — no package manager, no install step, no version issues.
OAuth flows are built into the protocol. Non-technical users complete auth entirely within their chat interface — no terminal configuration required.
Every tool declares input and output schemas as part of the protocol. Agents know parameter types and expected responses before making a single call.
Fix a vulnerability or ship a feature, and every connected client gets it immediately. No "please update" emails or outdated version drift.
The answer is usually both: build on a shared core, and choose the depth of investment in each based on your audience.
Read the full analysis