
ACP and Okta for AI Agents: composition, not collision

David Crowe · 5 min read
Tags: okta · identity · composition · governance · comparison

Okta launched Okta for AI Agents to GA on April 30, 2026 — Agent Identity, MCP Bridge, and the five-question framing (who is the agent, what’s it allowed to do, on whose behalf, against which resource, with what approval) shipped as a real product. For Fortune 500 enterprises with Okta already in the stack, this is the path of least resistance for agent-identity governance.

The five-question framing is good. The product is real. And it raises a useful question for everyone building in this category: what’s the right architectural layering for agent governance, and where does each component sit?

This post is the analytical answer. Okta for AI Agents and ACP intercept at different architectural points and serve overlapping but distinct scopes. Both are useful; most enterprises with serious AI deployments will compose them.

What Okta for AI Agents brings

Stripping the marketing copy and reading the launch architecturally, three things are doing the work:

  1. Agent Identity as a first-class primitive. Distinct from human users, distinct from traditional service accounts. Integrates with the existing Okta directory, Conditional Access, RBAC. This is the kind of thing only an IdP can ship credibly — Okta has been operating identity primitives at scale for fifteen years and it shows.
  2. MCP Bridge. An identity-aware proxy for MCP-mediated agent traffic. Brings MCP tools inside the identity perimeter “without any code changes.” For customers running MCP-aware agents that already trust Okta as their IdP, this is incremental adoption rather than new infrastructure.
  3. Discover / onboard / protect / govern tooling. Operational primitives for finding agents in an environment, registering them, applying policy, producing audit. Familiar lifecycle to anyone who’s deployed Okta workforce identity before.

The whole package is honest about what it is: identity-perimeter governance for AI agents. That’s a real category and Okta is well-positioned to own it.

What’s outside the identity-perimeter scope

A control plane has to intercept somewhere. Okta for AI Agents intercepts at the identity perimeter. That's the right architectural choice for a class of governance questions — agent identity, scope-against-resource, approval flows. It's the wrong architectural choice for a different class:

  • Coding agents on developer laptops. Claude Code’s PreToolUse hook fires for every tool dispatch — Bash, Edit, Read, Write, file globs, MCP, all of it. Cursor and Codex CLI have similar hook surfaces. These calls happen on developer machines against local file systems, outside any identity perimeter.
  • Framework-internal dispatch. When a CrewAI agent hands off to another CrewAI agent, when a LangGraph supervisor routes to a worker, when an Anthropic Agent SDK loop dispatches a tool — these are in-process Python or TypeScript calls that don’t cross a network boundary the perimeter can see.
  • Per-tool-call output content. PII redaction, secret detection, tool-output policy (“the database returned 50,000 rows; truncate before the model sees them”). The intercept point for these is the tool boundary, not the identity boundary.
  • Multi-agent delegation chains. When agents compose, the chain itself is the audit-relevant artifact: who initiated, who delegated to whom, with what scope at each hop. Tracking that requires intercepting in the call path, not at the perimeter.

These aren’t bugs in Okta’s product. They’re the natural scope boundary of an identity-perimeter architecture. A different architectural intercept is needed for each — and that’s where a tool-call-level layer fits.
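The tool-boundary intercept can be sketched concretely. The block below mimics the shape of a Claude Code PreToolUse hook — the pending tool call arrives as JSON on stdin, exit code 0 allows it, and exit code 2 blocks it with stderr surfaced back to the agent. The deny-list policy here is a stand-in for illustration only; in an ACP deployment the verdict would come from the /govern/tool-use endpoint this post describes, not a local regex list.

```python
# Sketch of a PreToolUse-style hook. Claude Code pipes the pending tool
# call to the hook as JSON on stdin before executing it; exit code 2
# blocks the call. The policy below is a toy stand-in -- an ACP hook
# would POST the call to /govern/tool-use and act on the verdict.
import json
import re
import sys

DENY_PATTERNS = [
    r"\brm\s+-rf\b",          # destructive shell commands
    r"\bcurl\b.*\|\s*sh\b",   # piping remote scripts into a shell
]

def decide(tool_name: str, tool_input: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for a single tool dispatch."""
    if tool_name == "Bash":
        command = tool_input.get("command", "")
        for pattern in DENY_PATTERNS:
            if re.search(pattern, command):
                return False, f"blocked by policy: {pattern}"
    return True, "allowed"

def main() -> int:
    call = json.load(sys.stdin)
    allowed, reason = decide(call.get("tool_name", ""), call.get("tool_input", {}))
    if not allowed:
        print(reason, file=sys.stderr)
        return 2  # Claude Code treats exit code 2 as "block this call"
    return 0

# Installed as a hook, the script would end with: sys.exit(main())
```

The same shape applies to Cursor and Codex CLI hook surfaces: the intercept sees every dispatch, including local file edits no network perimeter observes.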

ACP as the runtime-layer companion

ACP intercepts at three different points depending on the architectural shape of the agent:

  • Hook layer for coding agents (Claude Code, Cursor, Codex CLI, Zed) — every tool dispatch flows through /govern/tool-use before execution.
  • Decorator layer for framework code (CrewAI, LangGraph, Anthropic Agent SDK, OpenAI Agents SDK, Vercel AI SDK, Mastra, Pydantic AI, AutoGen, Google ADK) — @governed wraps each tool, the dispatch is invisible to the framework.
  • Proxy layer at the LLM call (when applicable) — full request visibility including system prompts, tool declarations, tool calls, model output.
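The decorator layer can be sketched as follows. This is not ACP's actual `@governed` implementation — `check_policy` and its toy rule are hypothetical stand-ins for the client that would consult /govern/tool-use — but it shows why the intercept works for in-process dispatch: the framework registers the wrapped function and never sees the governance check.

```python
# Minimal sketch of a decorator-layer intercept: a governed() wrapper
# that consults a policy check before dispatching the underlying tool.
# check_policy is a hypothetical stand-in; a real ACP client would POST
# the pending call to /govern/tool-use and return the verdict.
import functools
import inspect
from typing import Any, Callable

class PolicyDenied(Exception):
    pass

def check_policy(tool_name: str, arguments: dict) -> bool:
    """Stand-in verdict function with a toy rule for illustration."""
    return arguments.get("table") != "users_pii"

def governed(func: Callable[..., Any]) -> Callable[..., Any]:
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        # Bind positional and keyword args so policy sees named arguments.
        bound = sig.bind(*args, **kwargs)
        bound.apply_defaults()
        if not check_policy(func.__name__, dict(bound.arguments)):
            raise PolicyDenied(f"{func.__name__} blocked by policy")
        return func(*args, **kwargs)  # framework sees only the wrapped tool
    return wrapper

@governed
def query_database(table: str, limit: int = 100) -> list[dict]:
    """A tool as a framework (CrewAI, LangGraph, ...) would register it."""
    return [{"table": table, "row": i} for i in range(min(limit, 3))]
```

Because the check runs inside the Python call path, an agent-to-agent handoff that never crosses a network boundary still hits it.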

Different intercept points cover different surfaces. The composition with Okta is clean: Okta-issued JWTs propagate to ACP via standard OIDC trusted-issuer setup, so agent identity established at the perimeter flows through to per-call audit at the runtime layer.
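The claim-propagation step can be sketched like this. The audit-record shape and helper names are illustrative, not ACP's actual schema, and signature verification is elided — a real deployment would validate the token against the issuer's JWKS per the OIDC trusted-issuer setup described above.

```python
# Sketch: propagating perimeter-established identity into runtime audit.
# An Okta-issued JWT arrives with the agent's request; the runtime layer
# stamps its claims onto every tool-call audit record. Signature
# verification is elided here -- a real deployment would validate
# against the issuer's JWKS before trusting any claim.
import base64
import json
import time

def b64url_decode(segment: str) -> bytes:
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def claims_from_jwt(token: str) -> dict:
    """Decode the payload segment of a JWT. NOT verification."""
    header, payload, signature = token.split(".")
    return json.loads(b64url_decode(payload))

def audit_record(token: str, tool_name: str, tool_args: dict) -> dict:
    claims = claims_from_jwt(token)
    return {
        "agent_id": claims.get("sub"),  # identity set at the perimeter
        "issuer": claims.get("iss"),    # e.g. the Okta org's authorization server
        "tool": tool_name,
        "args": tool_args,
        "ts": time.time(),
    }

def make_example_token(sub: str, iss: str) -> str:
    """Forge a local, unsigned example token purely for illustration."""
    def enc(obj: dict) -> str:
        raw = json.dumps(obj).encode()
        return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()
    return f"{enc({'alg': 'none'})}.{enc({'sub': sub, 'iss': iss})}."
```

The point of the composition: `sub` is established once at the perimeter by Okta, then appears on every per-call record at the runtime layer.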

How the two layers fit

Three composition patterns; pick one based on what's in the stack:

  • Okta + ACP (full stack): the customer has Okta in the stack and wants full coverage, including coding agents and self-built framework code.
  • Okta for managed, ACP for self-built: the customer has a mix — managed agents that fit Okta's discover-and-onboard flow, plus self-built agents in Python/TypeScript.
  • Okta-only: the customer is Okta-deep and operates entirely in MCP-aware managed environments, with no developer-laptop coding agents in scope.

For most enterprises with serious AI deployments — meaning more than one runtime, more than one agent framework, plus developer productivity tools like Claude Code on engineering laptops — the first or second pattern is the realistic answer.

What the launch validates

A category becomes a category when an incumbent in an adjacent space ships into it. Okta launching Okta for AI Agents tells the market that agent governance is a real product line, not a nice-to-have. That’s good news for everyone building here, and it’s good news for buyers — the conversation shifts from “do we need this?” to “what’s the right architectural layering?”

The right architectural layering is what this post tries to map. There isn’t one product that solves it all, and that’s fine. Identity-perimeter governance is one component. Tool-call-level governance is another. Different teams will buy them in different orders depending on stack constraints, but most production AI deployments end up wanting both.
