The governance model
Every ACP integration — Claude Code hook, CrewAI decorator, OpenAI-compatible proxy — lands at the same governance pipeline. Learn the model here; each framework guide then reduces to “how to wire this framework’s tool dispatch into the pipeline.”
The shape of a governed call
Every tool invocation in a governed agent takes the same six steps:

1. **Identity binds to the request.** The end user's JWT is attached via `set_context(user_token=...)` (Python) or `withContext(...)` (TS). Not your service key — the human's token.
2. **Pre-tool check.** Before the tool runs, the SDK POSTs `{ tool_name, tool_input, session_id }` plus `Authorization: Bearer <user-jwt>` to `/govern/tool-use`.
3. **ACP evaluates.** The server verifies the JWT against the configured IdP, then runs the governance pipeline: immutable rules, scope intersection, ABAC, rate limits, plan limits, PII detection.
4. **Decision returns.** One of `allow`, `deny`, or `redact`, with a human-readable reason.
5. **Tool runs (or doesn't).** Deny returns a tool-error string the LLM sees as the tool's output. Allow runs your function.
6. **Post-tool scan.** The output is POSTed to `/govern/tool-output` for PII, prompt-injection, and secret detection. Findings write to the audit log. If policy says `redact`, the redacted string replaces the output.
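The six steps can be sketched as a plain-Python pipeline. Everything here is illustrative: `pre_tool_check`, `post_tool_scan`, and the toy policy stand in for the SDK's real HTTP calls to `/govern/tool-use` and `/govern/tool-output`.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str   # "allow" | "deny" | "redact"
    reason: str

def pre_tool_check(tool_name: str, tool_input: dict, user_jwt: str) -> Decision:
    # Stand-in for POST /govern/tool-use; the real SDK sends
    # { tool_name, tool_input, session_id } plus Authorization: Bearer <user-jwt>.
    if tool_name == "delete_db":
        return Decision("deny", "destructive tool blocked by policy")
    return Decision("allow", "no matching deny rule")

def post_tool_scan(output: str, decision: Decision) -> str:
    # Stand-in for POST /govern/tool-output; redacts per policy.
    if decision.action == "redact":
        return "[REDACTED]"
    return output

def governed_call(tool_name: str, tool_input: dict, user_jwt: str,
                  tool_fn: Callable[..., str]) -> str:
    decision = pre_tool_check(tool_name, tool_input, user_jwt)
    if decision.action == "deny":
        # The LLM sees this string as the tool's output.
        return f"tool_error: {decision.reason}"
    output = tool_fn(**tool_input)
    return post_tool_scan(output, decision)
```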
This is the whole model. Everything else is framework-specific plumbing.
Two install patterns
ACP enters the loop in one of two places depending on the framework:
Pattern A — wrap the tool (@governed)
For frameworks where you define tools as functions or classes, you stack a governance decorator on each tool. The decorator is synchronous with the tool dispatch — the governance check runs before your function body.
```python
from acp_langchain import governed, set_context

@tool
@governed("web_search")
def web_search(query: str) -> str:
    """Search the web."""
    return my_search(query)
```
Used by CrewAI, LangChain / LangGraph, Anthropic Agent SDK. The LLM call itself goes direct to your model provider.
Pattern B — proxy the LLM (OpenAI-compatible)
For frameworks that talk to an OpenAI-compatible client, you point the `base_url` at ACP. Every LLM call — and the tool calls it emits — flows through the proxy. Per-agent attribution comes via an `x-acp-agent-name` header.
```python
import os

from openai import AsyncOpenAI

client = AsyncOpenAI(
    base_url="https://api.agenticcontrolplane.com/v1",
    api_key=os.environ["ACP_API_KEY"],
    default_headers={"x-acp-agent-name": "researcher"},
)
```
Used by the OpenAI Agents SDK, Aider, and anything that speaks the OpenAI chat-completions API.
The two patterns are not mutually exclusive — you can wrap tools and proxy the LLM. Both land at the same audit log.
Four building blocks
Every framework starter teaches the same four primitives.
1. End-user JWT binding
Your service is the one holding the service account. ACP needs the end user’s token — the human who triggered the run — to attribute the call correctly.
```python
from fastapi import FastAPI, Header

app = FastAPI()

@app.post("/run")
def run(payload: Payload, authorization: str = Header(...)):
    set_context(user_token=authorization.removeprefix("Bearer ").strip())
    # ...kick off the agent
```
ACP verifies this JWT on every `/govern/tool-use` call against the IdP you configured in Settings → Identity Provider: Firebase, Auth0, Okta, or any OIDC provider.
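The verification ACP performs is conceptually the standard JWT signature check. Here is a toy HS256 version using only the standard library, for illustration; real IdP verification uses RS256 against the provider's JWKS, not a shared secret.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: bytes) -> str:
    """Mint a toy HS256 JWT (for the sketch only)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_hs256(token: str, secret: bytes) -> dict:
    """Check the signature, then return the claims (e.g. `sub`)."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = b64url(hmac.new(secret, f"{header_b64}.{payload_b64}".encode(),
                               hashlib.sha256).digest())
    if not hmac.compare_digest(expected, sig_b64):
        raise ValueError("bad signature")
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```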
2. The @governed decorator
Marks a tool as governance-enforced. Stack it under the framework’s own tool decorator so the governance check runs inside the tool’s dispatch:
```python
@tool                    # framework (CrewAI / LangChain) decorator — outer
@governed("send_email")  # ACP decorator — inner
def send_email(to, subject, body):
    return sendmail(to, subject, body)
```
Tools without `@governed` are not governed. This is intentional — the decorator makes governance an explicit choice, visible in diffs.
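A minimal sketch of what a decorator like `@governed` does under the hood. The check function and allow-list policy here are toys; the real decorator makes an HTTP call to `/govern/tool-use` as described above.

```python
import functools

# Toy allow-list policy; the real check is a call to /govern/tool-use.
ALLOWED_TOOLS = {"web_search"}

def check_tool_use(tool_name: str, tool_input: dict) -> str:
    return "allow" if tool_name in ALLOWED_TOOLS else "deny"

def governed(tool_name: str):
    """Run the governance check before the tool body executes."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            if check_tool_use(tool_name, kwargs) == "deny":
                # The LLM sees this string as the tool's output.
                return f"tool_error: {tool_name} denied by policy"
            return fn(*args, **kwargs)
        return inner
    return wrap

@governed("web_search")
def web_search(query: str) -> str:
    return f"results for {query}"

@governed("drop_tables")
def drop_tables() -> str:
    return "tables dropped"
```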
3. Session IDs
Every tool call for one request shares a `session_id`, so the audit log groups them into a single logical trace. SDKs generate this automatically per `set_context` call; you rarely set it yourself.
Session IDs are how the dashboard’s Activity view shows “these five tool calls were part of the same user request.”
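Conceptually, the context is a session ID plus the user token, scoped to the current request. A sketch using `contextvars`; the real SDK's internals may differ.

```python
import contextvars
import uuid
from dataclasses import dataclass, field

@dataclass
class GovernanceContext:
    user_token: str
    # Fresh per set_context() call, so each request is one audit trace.
    session_id: str = field(default_factory=lambda: uuid.uuid4().hex)

_ctx: contextvars.ContextVar = contextvars.ContextVar("acp_context")

def set_context(user_token: str) -> None:
    _ctx.set(GovernanceContext(user_token=user_token))

def current_context() -> GovernanceContext:
    return _ctx.get()
```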
4. Fail-open
If `/govern/tool-use` times out (5s default) or is unreachable, the SDK returns `allow` with reason `fail-open`. The tool proceeds. This is deliberate: governance is never a single point of failure for your agent.
Fail-open is opinionated. It means a downed governance plane doesn’t break user-facing functionality. If you need fail-closed semantics for specific tools, set policy server-side — the Claude Code hook is fail-closed by default because its ACP connection is a hard dependency, but framework SDKs lean fail-open for availability.
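Client-side, fail-open is a try/except around the governance call. A sketch assuming a JSON decision response from the endpoint described above:

```python
import json
import urllib.error
import urllib.request

def check_with_fail_open(url: str, payload: dict, timeout: float = 5.0) -> dict:
    """POST the pre-tool check; allow with reason 'fail-open' on timeout or network error."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.loads(resp.read())
    except (urllib.error.URLError, TimeoutError, OSError):
        # Governance plane unreachable: the tool proceeds, the reason is logged.
        return {"decision": "allow", "reason": "fail-open"}
```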
Decisions and their semantics
| Decision | What happens | LLM-visible |
|---|---|---|
| `allow` | Tool runs. Output passes through post-scan. | Yes — real tool output. |
| `deny` | Tool does not run. SDK returns `"tool_error: <reason>"`. | Yes — the model sees the error string and adapts. |
| `redact` | Tool runs, but post-scan rewrites the output per policy. | Yes — redacted output. |
| `fail-open` | Governance unreachable. Tool runs. Reason annotated in audit log. | Same as `allow`. |
All four write structured rows to the audit log, viewable at cloud.agenticcontrolplane.com/activity.
What’s audited
Every decision — allow, deny, redact, fail-open — emits one row with:

- **Actor.** The end user's `sub` (from the JWT), not your service key.
- **Tool name.** Whatever string you passed to `@governed("...")`, or the framework's tool name.
- **Decision and reason.** Human-readable, machine-parseable.
- **Session ID.** Groups all tool calls from one request.
- **Findings.** PII detected in input or output, if any.
- **Latency, cost, depth.** Metrics for dashboards and budgets.
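Put together, one audit row might look like this; field names and values are illustrative, not the exact schema.

```python
audit_row = {
    "actor": "user-sub-from-jwt",   # end user's `sub`, not the service key
    "tool_name": "web_search",
    "decision": "allow",
    "reason": "no matching deny rule",
    "session_id": "9f2c4b1a",       # groups the calls from one request
    "findings": [],                 # e.g. [{"type": "pii.email", "where": "output"}]
    "latency_ms": 182,
    "cost_usd": 0.0031,
    "depth": 1,                     # delegation depth
}
```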
Claude Code, Cursor, CrewAI, LangGraph — all write to the same audit log, keyed by the same end-user identity. One log per human across every agent surface.
Inter-agent handoffs
Some frameworks delegate work between agents without a tool boundary:
- CrewAI has sequential task handoffs and hierarchical “delegate to coworker” tools.
- LangGraph has supervisor-worker patterns.
- The OpenAI Agents SDK has first-class handoffs.
These don’t hit `/govern/tool-use` directly (no tool was called). Each SDK adapter provides a hook — e.g. `install_crew_hooks(crew)` — that emits synthetic `Agent.Handoff` audit events for these transitions. PII scanning still applies, and existing callbacks are chained, not overwritten.
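A sketch of the synthetic-event pattern: emit an `Agent.Handoff` row when delegation occurs, and chain onto any callback the framework already registered. The event shape here is illustrative.

```python
audit_log: list[dict] = []

def emit_handoff(from_agent: str, to_agent: str, session_id: str, task: str) -> None:
    # Synthetic Agent.Handoff event: no tool was called, but the delegation
    # still lands in the same audit log as tool decisions.
    audit_log.append({
        "event": "Agent.Handoff",
        "from": from_agent,
        "to": to_agent,
        "session_id": session_id,
        "task_summary": task,
    })

def chain(existing_callback, hook):
    """Chain, don't overwrite: run the framework's own callback first, then the hook."""
    def combined(*args, **kwargs):
        if existing_callback is not None:
            existing_callback(*args, **kwargs)
        hook(*args, **kwargs)
    return combined
```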
Framework coverage today
| Framework | Pattern | Governs tool calls | Governs handoffs | SDK |
|---|---|---|---|---|
| CrewAI | A | via `@governed` | via `install_crew_hooks` | `acp-crewai` |
| LangChain / LangGraph | A | via `@governed` | via graph callbacks | `acp-langchain` |
| Anthropic Agent SDK | A | via `governHandlers` | n/a (single-agent loop) | `@agenticcontrolplane/governance-anthropic` |
| OpenAI Agents SDK | B | via proxy | per-agent via header | no install — `base_url` change |
| Claude Code | A | via `PreToolUse` hook | via delegation chain | `install.sh` |
| Aider | B | via proxy | n/a | `base_url` change |
Related
- Frameworks index — starter code for every framework.
- Integrations index — off-the-shelf AI clients.
- Policies & scopes — how allow/deny rules are configured.
- Agent identity — deeper dive on how JWTs flow through the LLM.
- Agent-to-agent governance — delegation chain semantics.