AI control plane
The governing layer between every AI agent in an organization and every system they're allowed to reach. It unifies connection, identity, policy enforcement, and observability so that every prompt, response, and tool call flows through a single controlled path.
Every enterprise is rolling out AI faster than IT can keep up with. Claude, ChatGPT, Codex, Cursor, Copilot, and many more: these aren't experiments anymore, they're how employees work. And most of the organizations deploying them have no idea which tools are in use, by whom, or with what data.
This is the problem the AI control plane exists to solve. It's the governing layer between every AI agent in the company and every system they're allowed to reach: the part that decides what's connected, who's authenticated, what's inspected, and what's measured. The name borrows from networking, where Kubernetes, service meshes, and Cloudflare already have control planes for the same reason: the term names the part of the system that governs the rest of the system.
The insight driving the category is that enablement and governance are the same problem seen from two sides. Any architecture that treats them separately ends up failing at both.
What follows is drawn from thirty days of conversations with more than 50 technology executives actively navigating this rollout.
Every company is becoming AI-native
Every enterprise is under pressure to become AI-native. Boards have told CEOs to deliver. CEOs have pushed the mandate down to CTOs, CIOs, CISOs, and the newly appointed Chief AI Officers. Budgets have been unlocked. Announcements have been made.
The problem these executives are discovering is that "become AI-native" isn't a simple checkbox exercise. Much like the move to the cloud, the move to AI-native entails a long list of platform components that need to be put in place. AI clients like Cursor, Claude Code, Copilot, and ChatGPT are merely the tip of the iceberg. After purchasing licenses, every company runs into a set of problems that need to be solved:
- There's no central place to provision MCP servers, Skills, and other tools. Every team picks its own.
- There's no consistent identity layer. People connect their personal accounts to enterprise data.
- There's no visibility into what's being used, by whom, with what data.
- There's no way to enforce policy on tools IT can't see.
- There's no measurement of adoption, and therefore no way to report progress against the board mandate.
Companies that respond by locking everything down kill adoption. Companies that donโt lock anything down get incidents. Neither path is survivable for long.
This is the problem the AI control plane exists to solve.
"We're rolling out AI faster than we can govern it. I don't think anybody in this industry isn't."
CIO,
Fortune 500 retailer
Two jobs, held in tension
The AI control plane has two jobs. The first is enablement: rolling out AI capabilities across the organization so every team can use them. The second is governance: making sure that rollout doesn't cause a security incident, a data leak, a compliance violation, or a regulatory problem.
These two jobs are in tension. Enablement wants to remove friction. Add more tools, connect more data, give more people access. Governance wants to proceed cautiously. Inspect every interaction, scope every permission, audit every action. Organizations that treat them as two separate initiatives run by two separate teams end up with either a governance program that blocks adoption or an adoption program that bypasses governance.
The insight behind the AI control plane is that these are not two separate problems. They are the same problem seen from two sides. Enablement without governance produces incidents that eventually force a crackdown that kills adoption. Governance without enablement produces a posture that looks safe on paper while employees route around it with personal accounts and shadow tools.
A control plane resolves the tension by making governance a property of the enablement layer itself. You don't choose between rolling AI out and keeping it safe. The thing that rolls it out is the thing that keeps it safe.
The shape of an AI control plane
Before diving into the components, it's worth looking at the jobs to be done. We believe the control plane has four functions:
Connect. Bring every AI agent (Claude, ChatGPT, Cursor, Copilot, Codex, internal agents, product agents) and every system that matters (SaaS tools, internal APIs, databases, skills) onto a single plane. No custom integration work per tool. Per-team registries. SSO-integrated so identity flows through. So what: new AI capabilities reach the right teams in days instead of months, and IT stops chasing a moving target of ad-hoc integrations.
Control. Enforce who can use what, under what conditions. Scoped access by team or role. Credential management that doesn't rely on employees pasting API keys into config files. OAuth 2.1 where the protocols support it. Full audit trail. The shift that matters here is that policies become executable rules rather than documents in a wiki: versioned, testable, and applied automatically at the point of use instead of relied on to be remembered. A policy that lives in a Confluence page and gets enforced through training is not a control. A policy that the control plane evaluates on every request is. So what: the policy on paper is the policy at runtime. Audits stop surfacing questions the CISO can't answer.
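The shift from policy-as-document to policy-as-code can be made concrete with a small sketch. Everything here (the rule names, the request shape, the team and tool labels) is illustrative, not any particular product's API:

```python
from dataclasses import dataclass

@dataclass
class Request:
    """One AI interaction as the control plane sees it (illustrative fields)."""
    user: str
    team: str
    client: str        # e.g. "cursor", "chatgpt-enterprise"
    tool: str          # the tool or API being invoked
    data_tags: set     # labels attached to the data in scope

# Policies are plain, versionable rules: each takes a request
# and returns a deny reason, or None to allow.
def no_pii_to_unapproved_clients(req: Request):
    if "pii" in req.data_tags and req.client not in {"chatgpt-enterprise"}:
        return "PII may only flow through approved clients"

def finance_tools_finance_only(req: Request):
    if req.tool.startswith("netsuite.") and req.team != "finance":
        return "Finance tools are scoped to the finance team"

POLICIES = [no_pii_to_unapproved_clients, finance_tools_finance_only]

def evaluate(req: Request) -> tuple[bool, list[str]]:
    """Run every policy on every request; deny if any rule fires."""
    reasons = [r for p in POLICIES if (r := p(req))]
    return (not reasons, reasons)

allowed, why = evaluate(Request(
    user="dana", team="sales", client="cursor",
    tool="netsuite.export_invoices", data_tags={"pii"},
))
# Denied with two reasons: wrong client for PII, finance tool outside finance.
```

Because the rules are code, they can be versioned, reviewed, and tested like any other artifact, which is exactly what a wiki page cannot be.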
Secure. Inspect every prompt, response, and tool call in real time. Active blocking of PII and data exfiltration. Passive detection of prompt injection and shadow MCPs. Alert the incident response team when something warrants it. Integrate with existing SIEM and security tooling rather than replacing it. So what: incidents are detectable in real time instead of discovered in postmortems, and the response team works from the tooling they already know.
Observe. Measure what's actually happening. Visibility into token use by team, client, tool, and user. Track adoption against organizational targets. Produce the data that proves, or disproves, that the AI investment is working. So what: the board mandate has metrics behind it instead of anecdotes, and leadership can tell real adoption from theatre.
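Mechanically, this layer is aggregation over a stream of interaction events. A minimal sketch, with illustrative event fields:

```python
from collections import defaultdict

def usage_by(events, key):
    """Aggregate token counts from interaction events along any dimension."""
    totals = defaultdict(int)
    for e in events:
        totals[e[key]] += e["tokens"]
    return dict(totals)

# Hypothetical events as a control plane might record them.
events = [
    {"team": "eng", "client": "cursor", "tool": "github.search", "tokens": 1200},
    {"team": "eng", "client": "claude-code", "tool": "jira.query", "tokens": 800},
    {"team": "sales", "client": "chatgpt", "tool": "crm.lookup", "tokens": 300},
]

print(usage_by(events, "team"))    # {'eng': 2000, 'sales': 300}
print(usage_by(events, "client"))  # {'cursor': 1200, 'claude-code': 800, 'chatgpt': 300}
```

The point of the sketch is that once every interaction flows through one path, adoption reporting is a query over data you already have, not a survey.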
The components of the AI control plane
Several categories of technology have emerged over the last eighteen months, each addressing a slice of the problem. Understanding what each does, and what it doesn't do, is the fastest way to see why the control plane framing is necessary.
LLM gateways. An LLM gateway sits between applications and the language models they call. It handles routing across providers (OpenAI, Anthropic, Google, open-source), manages API keys, enforces rate limits, and often provides caching and cost tracking. Portkey and LiteLLM are examples. LLM gateways solve a real problem (managing model access at scale), but they sit at the model layer. They see prompts and completions. They don't see which employee asked the question, which tool the AI called, which data was returned, or whether any of it should have been allowed.
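Schematically, an LLM gateway is a routing function plus bookkeeping in front of provider APIs. A toy sketch of the routing and rate-limit slice (the model prefixes and limits are placeholders; Portkey and LiteLLM each have their own configuration models):

```python
import time
from collections import deque

class Gateway:
    """Toy LLM gateway: route by model-name prefix, enforce a rate limit."""
    ROUTES = {"gpt": "openai", "claude": "anthropic", "gemini": "google"}

    def __init__(self, max_per_minute=60):
        self.max_per_minute = max_per_minute
        self.calls = deque()  # timestamps of recent calls

    def route(self, model: str) -> str:
        for prefix, provider in self.ROUTES.items():
            if model.startswith(prefix):
                return provider
        raise ValueError(f"no provider for model {model!r}")

    def complete(self, model: str, prompt: str) -> dict:
        # Drop call timestamps older than the one-minute window.
        now = time.monotonic()
        while self.calls and now - self.calls[0] > 60:
            self.calls.popleft()
        if len(self.calls) >= self.max_per_minute:
            raise RuntimeError("rate limit exceeded")
        self.calls.append(now)
        # A real gateway would invoke the provider SDK here and track cost.
        return {"provider": self.route(model), "model": model, "prompt": prompt}

gw = Gateway(max_per_minute=2)
gw.complete("claude-sonnet", "hello")  # routed to "anthropic"
```

Notice what is absent from this sketch: any notion of who the user is or which tool the completion will drive, which is exactly the blind spot the text describes.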
MCP gateways and MCP security tools. The Model Context Protocol is emerging as the standard for how AI clients connect to tools and data. MCP gateways sit in front of MCP servers and handle authentication, authorization, and inspection of tool calls. Products like Speakeasy, Runlayer, and MintMCP focus here. These tools cover the discovery, security, and observability slice of the problem, specifically protecting the tool-calling path, but they don't handle model routing, organization-wide adoption, or the broader enablement story.
Identity and access controls. Enterprises already have identity providers (Okta, Azure AD, Google Workspace) and SSO infrastructure. Extending these systems to AI tools is non-trivial. Most AI clients were built for individual users, not enterprise identity, and scoping permissions to teams, roles, or individual tools requires a layer the identity provider itself doesn't provide.
Observability. Observability in the AI context means visibility into what employees and agents are actually doing with AI: which tools are being used, by which teams, with what frequency, producing what outcomes. This is often the first layer executives realize is missing, because it's the one they need in order to report against the board mandate. Traditional APM and logging tools don't cover this; they weren't designed to capture the semantics of AI interactions.
Policy and threat detection. Real-time inspection of prompts, responses, and tool calls, looking for PII leakage, prompt injection, data exfiltration, and policy violations. Some of this overlaps with MCP security. Some of it is new territory because the threats are new.
Each of these categories is real and each solves a real problem. The reason none of them is sufficient on its own is structural: they see different parts of the same interaction. An LLM gateway sees the model call but not the user. An MCP gateway sees the tool call but not the model. An identity provider knows who the user is but not what they're doing. An observability tool sees the activity but can't enforce policy on it.
The AI control plane is what you get when you put these pieces on a single architectural foundation so they see each other. The value is not in any individual component. It's in the integration.
Why the AI control plane is emerging now
Three forces are converging to make the AI control plane a visible category rather than an academic concept.
The mandate is real and top-down. Boards have told CEOs to deliver AI transformation. CEOs have tasked their C-suite with executing. The executives who now own this (CTOs, CIOs, CISOs, Chief AI Officers) need to produce something concrete. They are actively looking for a category to buy from.
Sprawl is visible. Most enterprises, when they run an audit, discover they have dozens of AI tools in use across the organization, most of which were never approved and none of which are centrally managed. The gap between official AI policy and actual AI usage has become wide enough that leadership can see it.
Risk is priced in. Data leaks through consumer AI tools, prompt injection attacks on agent systems, regulatory pressure from the EU AI Act, and the general anxiety around data handling have made the cost of doing nothing concrete. The conversation has moved from "what if something happens" to "what are we doing about the things that are already happening."
The category is being named now because the conditions that make it necessary have all arrived inside the same eighteen-month window.
Why existing tools donโt absorb the category
Three types of incumbent vendor could plausibly claim to own this space. Each has a structural reason it will struggle to do so.
Hyperscaler AI suites (AWS, Azure, Google Cloud) bundle AI capabilities into their platform offerings. The structural problem: enterprises increasingly operate in multi-cloud, multi-model environments, and they don't want their control plane locked to a single provider. A control plane that only sees one cloud's traffic isn't a control plane. It's a silo.
Enterprise platform incumbents (ServiceNow, CrowdStrike, and the major ITSM and GRC vendors) have the C-suite relationships and the procurement inertia. The structural problem: they will ship a tile labeled "AI governance" inside a larger suite, and it will be a feature rather than a focus. Features built inside suites rarely catch up to products built end-to-end for a single purpose, especially in a category moving this fast.
Point tools in the LLM gateway or MCP security spaces will try to expand upward. The structural problem: each of them was built for a slice, and expanding to cover the full lifecycle means rebuilding significant portions of what they already have. Some will succeed at this. Most will be absorbed or consolidated.
None of this means incumbents are irrelevant. It means the window for category definition is open, and the organizations that establish the architecture and the vocabulary now will set the terms of how buyers evaluate everything that follows.
What a mature AI control plane enables
When the pieces are in place, the operational shape of the organization changes in a handful of specific ways.
AI is available to every employee on day one, not after a months-long provisioning process. New tools get added to the central registry once and are available to the teams entitled to use them. Teams don't stand up their own integrations in parallel.
Identity, permissions, and policy are enforced consistently across every AI client and every tool. An engineer using Cursor, a salesperson using ChatGPT Enterprise, and an analyst using an internal agent are all subject to the same permission model, because the permissions live in the control plane rather than in each client.
Security has visibility into the AI layer in the same way it has visibility into the network, the endpoints, and the cloud. Incidents are detectable in real time. Audits have data to draw from. The CISO is not operating blind.
Leadership has numbers. Adoption by team. Which tools are producing outcomes and which are shelfware. Where AI is moving the business and where it isn't. The board mandate has metrics behind it instead of anecdotes.
New AI initiatives (a new agent, a new internal tool, a new workflow) ship on shared infrastructure instead of reinventing the connection, identity, and governance layers every time. The second, third, and tenth AI project cost a fraction of the first.
None of this is speculative. The components to do each of these things exist today. What's been missing is the framing that puts them together under a single architecture and a single name.
A note on Speakeasy
Speakeasy is building the AI control plane. We started with the connection and identity layer, because that's where the pain is most acute for companies trying to move beyond bottom-up AI adoption, and we've been extending across the four functions since. We are not the only company working on this, and we won't be. But we believe the category is real, we believe the window for defining it is open, and we believe the organizations that take the architecture seriously now will be meaningfully better positioned than the ones that wait for the market to tell them what to buy.
Where Speakeasy plays
If you're one of the executives tasked with turning an AI mandate into an operating reality, the most useful thing you can do this quarter is probably not to buy a product. It's to draw the architecture of how AI should flow through your organization, figure out which pieces you already have and which you don't, and decide whether you want to assemble it yourself or build on a foundation designed for it. Either way, the term to have in your head while you do that work is AI control plane. It names the thing you're actually building.