CSA Defines the Agentic Control Plane. Here's What We Built.
The Cloud Security Alliance published “Securing the Agentic Control Plane in 2026” last week. Jim Reavis, CSA’s CEO, laid out the governance challenge for autonomous AI agents and asked three questions that enterprise security teams have been circling for months. We’ve spent the last year building the answers.
What CSA got right
Credit where it’s due — the CSA piece nails the framing.
Reavis identifies the three-party problem at the center of agentic AI: agents operating between users and systems, making autonomous decisions, calling tools, and accessing data without the identity guarantees that traditional architectures provide. CSA calls out the need for identity-first design, runtime authorization, and agent capability classification. These aren’t new concepts to anyone building in this space, but having the Cloud Security Alliance say them carries weight that practitioners alone don’t.
The most important line in the article is the recognition that guidance alone is not enough. Frameworks without implementations are whitepapers. Standards without reference architectures are aspirational. CSA publishing this validates what builders have been saying since agents started calling APIs with shared service keys: the governance layer is missing, and no amount of best-practice documentation fills an architectural gap.
The launch of the CSAI Foundation alongside the article signals that CSA sees this as a sustained effort, not a one-off position paper. That matters. Cloud security didn’t mature because one organization published one document. It matured because standards bodies, practitioners, and vendors iterated together over years. AI agent security needs the same sustained attention.
What’s still missing: implementations
CSA’s article is a framework announcement, not a technical architecture. There’s no code. No protocol specification. No reference implementation. The questions are right. The gap is between defining the problem and showing working infrastructure that solves it.
This is normal. Standards bodies define requirements. Builders implement solutions. NIST publishes security guidelines; vendors build the systems that satisfy them. W3C publishes protocol specs; developers build the libraries. CSA is doing what CSA does — and doing it well.
The risk comes when definitions exist without implementations to validate them. Frameworks that aren’t tested against real deployments drift toward theoretical elegance and away from operational reality. The cloud security world learned this lesson: the most useful standards were the ones refined by feedback from people running production workloads, not the ones written in isolation.
AI agent governance needs the same feedback loop. The questions CSA asked are the right starting point. The answers need to come from working systems.
The three questions, answered
Reavis posed three questions in his article. We’ve been building infrastructure that answers each of them.
How do we establish identity and accountability for non-human actors?
We solved this with JWT-based identity binding at the gateway. Every agent request carries a verified token with the originating user’s sub, scopes, and tenant context. Not API keys. Not service accounts. Cryptographically verified delegation from user to agent to tool.
The existing standard is RFC 8693 — OAuth 2.0 Token Exchange. It specifies a mechanism for exchanging one token for another with different scope or audience, which is exactly what agent delegation requires. The spec has been available since January 2020. Most agent frameworks don’t implement it. Most MCP servers don’t reference it. The standard exists; adoption doesn’t.
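RFC 8693 is concrete enough to sketch. The exchange is a plain POST of form parameters to the issuer's token endpoint; the helper below only builds that body. The function name and token values are illustrative, and the grant and token-type URNs are the ones the RFC defines:

```typescript
// Build an RFC 8693 token-exchange request body: the agent presents the
// user's token and asks for a new token scoped to a downstream audience.
// (Helper name and values are illustrative, not a library API.)
function buildTokenExchangeBody(userToken: string, audience: string): URLSearchParams {
  return new URLSearchParams({
    grant_type: "urn:ietf:params:oauth:grant-type:token-exchange",
    subject_token: userToken, // the user's original access token
    subject_token_type: "urn:ietf:params:oauth:token-type:access_token",
    audience, // the downstream tool the agent will call
    requested_token_type: "urn:ietf:params:oauth:token-type:access_token",
  });
}

// The agent would POST this body to the issuer's token endpoint, e.g.:
// await fetch(`${issuer}/oauth/token`, { method: "POST", body });
```

The key property: the resulting token is derived from the user's token, so the delegation chain stays cryptographically verifiable instead of being replaced by a shared key.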
Here’s what identity binding looks like in practice:
import { identifiabl } from "@gatewaystack/identifiabl";
app.use(identifiabl({
  issuer: process.env.OAUTH_ISSUER!,
  audience: process.env.OAUTH_AUDIENCE!,
}));
// Every request now carries verified identity:
// req.user.sub → "auth0|8f3a2b1c"
// req.user.scope → "tool:crm:read tool:jira:write"
// req.user.org_id → "org_acme_corp"
No shared API keys. No service accounts masquerading as users. Every request is cryptographically tied to the human who initiated it. Full explanation in AI Agent Identity: The Problem No One Has Solved Yet.
How do we enforce boundaries and permissions in dynamic, autonomous environments?
Deny-by-default policy enforcement, evaluated on every tool call. An agent starts with zero permissions and receives only the scopes explicitly granted to the user it’s acting for. Scope inheritance means the agent never exceeds the delegating user’s permissions — if the user has tool:crm:read, the agent cannot escalate to tool:crm:write.
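The inheritance rule above can be sketched in a few lines. This is an illustrative check, not GatewayStack's actual policy engine; the scope format mirrors the `tool:crm:read` convention used throughout this article:

```typescript
// Deny-by-default scope check (illustrative sketch). The agent inherits
// exactly the delegating user's scopes -- nothing else.
type ToolCall = { tool: string; action: "read" | "write" };

function isAllowed(userScopes: string[], call: ToolCall): boolean {
  const required = `tool:${call.tool}:${call.action}`;
  return userScopes.includes(required); // no matching grant => deny
}

// A user holding only tool:crm:read cannot have the agent write to CRM:
isAllowed(["tool:crm:read"], { tool: "crm", action: "read" });  // allowed
isAllowed(["tool:crm:read"], { tool: "crm", action: "write" }); // denied
```

The point of the sketch is the default: there is no allow-list the agent starts with, so a scope the user never held can never be exercised on their behalf.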
This isn’t a guardrail layered on top of an execution engine. It’s architectural enforcement at the control plane — the governance layer that sits between the agent and your backend, independent of which model or framework you’re running.
The OWASP Top 10 for Agentic AI traces multiple attack categories — tool misuse, privilege escalation, unauthorized actions — directly back to the absence of this enforcement layer. These aren’t implementation bugs. They’re symptoms of a missing architectural boundary. You can’t patch your way out of a design gap.
How do we continuously measure and validate trust at scale?
Identity-attributed audit logging. Not just what happened, but who caused it, which policy allowed it, whether PII was involved, and what it cost. Every action logged with full chain-of-custody: user to agent to tool to outcome.
This enables continuous compliance evidence — not quarterly audits. When your SOC 2 auditor asks “show me every AI-mediated access to customer data in the last 30 days, attributed to specific employees,” the answer is a query, not a project. When HIPAA requires you to demonstrate minimum necessary access for PHI, the evidence is already structured and searchable.
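As a sketch of what "the answer is a query" means in practice: given identity-attributed records, the auditor's 30-day question reduces to a filter. The record shape and field names below are assumptions for illustration, not a fixed schema:

```typescript
// Hypothetical shape of an identity-attributed audit record.
interface AuditRecord {
  ts: number;          // epoch milliseconds
  userSub: string;     // the human who initiated the chain
  agentId: string;
  tool: string;
  policyId: string;    // the policy that allowed the call
  piiInvolved: boolean;
}

// "Every AI-mediated access to customer data in the last 30 days,
// attributed to specific employees" expressed as a query:
function customerDataAccesses(log: AuditRecord[], now: number): AuditRecord[] {
  const THIRTY_DAYS = 30 * 24 * 60 * 60 * 1000;
  return log.filter((r) => r.piiInvolved && now - r.ts <= THIRTY_DAYS);
}
```

Because every record already carries `userSub` and `policyId`, the result set is the evidence: who accessed PII, through which agent, under which policy.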
The ClawHub security audit reinforces why this matters. Snyk found that 12% of published skills on ClawHub were compromised — including a campaign delivering macOS malware through markdown instructions. Without identity-attributed logging, you can’t tell which users were affected, which tools were called, or what data was exposed. With it, you can reconstruct the full chain in minutes.
Full detail on the logging architecture: AI Agent Audit Trails: What CISOs Actually Need to Know.
Standards bodies and builders
The relationship between CSA’s framework and working implementations is complementary, not competitive.
CSA defines governance frameworks. NIST writes security guidelines — their AI Security RFI is actively collecting input from practitioners. W3C publishes protocol specs and hosts the governance discussions that will shape how MCP handles identity and authorization (issue #96 in the W3C MCP governance repo is directly relevant). Practitioners build the infrastructure that makes the frameworks real.
We’ve contributed to both the W3C discussions and the NIST RFI because the feedback loop matters. Standards that don’t reflect operational reality don’t get adopted. Implementations that ignore standards don’t interoperate. The productive path is standards bodies identifying requirements, builders implementing solutions, and the ecosystem maturing through iteration.
This is exactly how cloud security evolved. AWS didn’t wait for NIST to publish a complete cloud security framework before building IAM. NIST didn’t wait for every cloud provider to settle on conventions before writing SP 800-144. They moved in parallel, each informing the other. AI agent security will follow the same path — and CSA’s article is an important early marker.
The EU AI Act reinforces this pattern. Article 14 requires human oversight that is independent of the AI system itself. That’s a regulatory requirement for the architectural separation between the data plane and the control plane. GDPR Article 22 requires meaningful information about automated decision-making. SOC 2 requires demonstrable access controls with identity attribution. These frameworks were written before AI agents, but their requirements map directly to the infrastructure CSA is now calling for.
Where to start
If you’re building AI agent infrastructure, start with the three questions CSA asked. They’re the right diagnostic:
- Identity. Does your backend know which user initiated each agent request? If you’re using a shared API key, you have the gap. Start here.
- Boundaries. Are agent permissions enforced at the infrastructure level, or does your application code make per-request authorization decisions? If it’s the latter, you have inconsistent enforcement. Understand the architecture.
- Trust validation. Can you produce a compliance-ready audit trail showing who accessed what through which agent, with policy justification? If your logs show service_account: POST /api 200, you can’t. See what compliance-ready looks like.
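To make the third diagnostic concrete, here is the contrast between a shared-key log line and an identity-attributed record. The field names are illustrative, not a prescribed schema:

```typescript
// What a shared service account leaves behind -- no attribution possible:
const opaqueLog = "service_account: POST /api 200";

// An identity-attributed record carries the full chain of custody
// (field names are illustrative):
const attributedLog = {
  userSub: "auth0|8f3a2b1c",        // the human who initiated the request
  agentId: "support-agent-v2",      // the agent acting on their behalf
  tool: "crm",
  action: "read",
  policyId: "pol_crm_read_default", // the policy that allowed the call
  piiInvolved: true,
  status: 200,
};
```

The first line answers none of the auditor's questions; the second answers all of them without a reconstruction project.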
GatewayStack is one implementation of the Agentic Control Plane pattern — open source, MIT licensed, incrementally adoptable. It’s not the only way to answer CSA’s questions, but it’s a working answer with production deployments and real audit data.
Get started or explore the reference architecture.
The questions are right. The infrastructure exists. The gap is adoption.