Agentic Control Plane

You're writing the agent. We give you the governance.

ACP ships an SDK for each major agent framework. One pip install (or npm install), one decorator or one base-URL change, and every tool call your agents make gets identity, per-user policy, PII detection, and audit — rooted in the end user's identity, not your service key.

Different from Integrations, which is for plugging off-the-shelf AI clients (Claude Code, Cursor, Claude Desktop) into ACP. This page is for you building the agent.

Starters

Pick your framework. Copy the starter. Ship governed.

Each guide ships a runnable minimal example — a FastAPI or Express endpoint, one wrapped tool, an end-user JWT bound to the request. Clone, pip install, set two env vars, hit the endpoint, watch the audit log populate.

Using a framework we don't list (Vercel AI SDK, Mastra, Pydantic AI, Google ADK, Autogen)? The core governance model is framework-agnostic — the /govern/tool-use endpoint is HTTP. Drop us a line and we'll prioritize the adapter.
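Because the endpoint is plain HTTP, a governance check is just a POST. A minimal sketch with the stdlib, assuming an illustrative payload shape (`tool_name`, `tool_input`, `session_id` — the field names here are guesses, not the documented schema):

```python
import json
import urllib.request

# Hypothetical payload -- field names are illustrative, not the official schema.
payload = {
    "tool_name": "search_tickets",       # the tool the agent wants to call
    "tool_input": {"query": "refund"},   # arguments to evaluate for policy / PII
    "session_id": "sess-123",            # groups calls into one audit trace
}

req = urllib.request.Request(
    "https://api.agenticcontrolplane.com/govern/tool-use",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer <end-user-jwt>",  # the human's token, not a service key
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would return the allow / deny / redact decision.
```

Any HTTP client in any language works the same way, which is what makes unlisted frameworks adaptable today.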

Two install patterns

Either wrap your tools, or proxy your LLM.

Depending on the framework, governance enters the loop in one of two places. Both land at the same audit log.

Pattern A — wrap the tool
You decorate each tool with @governed("tool_name"). Before the tool runs, ACP evaluates policy, rate limits, and PII on the input; after it runs, ACP scans the output. The LLM call goes direct to your provider.
Used by: CrewAI, LangGraph, Anthropic Agent SDK.
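To make Pattern A concrete, here is a toy decorator that mimics the described flow — policy check on input, deny surfaced as a tool-error string, allow running your code. The `check_policy` stub and its PII rule are stand-ins for the real SDK's call to `/govern/tool-use`, not ACP's implementation:

```python
from functools import wraps

def check_policy(tool_name, kwargs):
    """Stand-in for ACP's policy call -- the real SDK hits /govern/tool-use."""
    if "ssn" in kwargs:          # toy rule: treat an 'ssn' argument as PII
        return {"decision": "deny", "reason": "pii_in_input"}
    return {"decision": "allow"}

def governed(tool_name):
    def decorator(fn):
        @wraps(fn)
        def wrapper(**kwargs):
            verdict = check_policy(tool_name, kwargs)
            if verdict["decision"] == "deny":
                # Deny surfaces as a tool-error string the LLM can adapt to.
                return f"[governance] {tool_name} denied: {verdict['reason']}"
            return fn(**kwargs)  # allow: run the tool; real ACP also scans the output
        return wrapper
    return decorator

@governed("lookup_customer")
def lookup_customer(email):
    return f"record for {email}"

print(lookup_customer(email="a@b.com"))        # allowed: tool runs
print(lookup_customer(email="x", ssn="123"))   # denied: tool-error string returned
```

Note the key property: a deny never raises — the LLM sees an error string it can reason about and retry with different arguments.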
Pattern B — proxy the LLM
You point the framework's OpenAI-compatible client at api.agenticcontrolplane.com/v1. Every LLM call — and the tool calls it emits — is audited at the proxy. Per-agent attribution via x-acp-agent-name header.
Used by: OpenAI Agents SDK, Aider, any OpenAI-compatible client.
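For Pattern B, the only change to your code is where the client points and one attribution header. A sketch of the settings an OpenAI-compatible client would take (passing the end-user JWT as the bearer token is our assumption, consistent with the end-user binding described below):

```python
# The proxy base URL comes from the docs; the header assembly is illustrative.
ACP_BASE_URL = "https://api.agenticcontrolplane.com/v1"

def acp_client_config(agent_name: str, end_user_jwt: str) -> dict:
    """Assemble the settings an OpenAI-compatible client needs to route via ACP."""
    return {
        "base_url": ACP_BASE_URL,
        "default_headers": {
            "Authorization": f"Bearer {end_user_jwt}",  # end-user identity, not a service key
            "x-acp-agent-name": agent_name,             # per-agent attribution at the proxy
        },
    }

cfg = acp_client_config("support-bot", "<end-user-jwt>")
# e.g. with openai-python: OpenAI(base_url=cfg["base_url"], default_headers=cfg["default_headers"])
```

Because the proxy speaks the OpenAI wire format, the framework itself never knows governance is in the loop.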

Deeper explanation of both paths: the governance model →

Shared concepts

One governance model. Framework-agnostic.

Every starter teaches the same four building blocks. Learn them once; they apply no matter which framework you pick.

End-user JWT binding
Every request carries the human's token, not your service key. set_context(user_token=...) at request start.
@governed decorator
Marks a tool as governance-enforced. On deny, the decorator returns a tool-error string the LLM can adapt to; on allow, your code runs.
Session IDs
All tool calls for one request share a session_id, so the audit log groups them into a single trace.
Fail-open
If /govern/tool-use times out, the tool proceeds and the decision is logged with reason fail-open. Governance is never a single point of failure.
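The fail-open behavior can be sketched in a few lines — a governance call wrapped so that a timeout produces an explicit allow decision rather than an exception. The `govern_tool_use` stub simulates an unreachable service; the real SDK's internals may differ:

```python
import socket

def govern_tool_use(payload, timeout=0.5):
    """Stand-in for the HTTP call to /govern/tool-use."""
    raise socket.timeout()  # simulate the governance service timing out

def decide(payload):
    try:
        return govern_tool_use(payload)
    except (socket.timeout, ConnectionError):
        # Fail-open: the tool proceeds, and the audit entry records why.
        return {"decision": "allow", "reason": "fail-open"}

# session_id ties this decision to the other tool calls in the same request.
verdict = decide({"tool_name": "send_email", "session_id": "sess-123"})
```

The design choice is deliberate: a down governance plane degrades to unaudited-but-working agents instead of taking your product down with it.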

Deep dive on the governance model → · How workspace policies work →

What every framework starter gets

No matter which framework — every tool call your agent makes passes through the same governance pipeline.

Per-user policy
Allow / deny / redact evaluated against the end user's scopes and the workspace's rules.
Per-user rate limits
Budgets and throttles rooted in the human, not the shared service key.
PII detection
Scan tool input and output; redact or flag based on policy.
Cross-framework audit
CrewAI, LangGraph, Claude Code, Cursor — one log per user across every agent surface.
Inter-agent handoffs
Multi-agent delegation audited as Agent.Handoff events where the framework supports it.
Identity provider of your choice
Firebase, Auth0, Okta, Entra ID, any OIDC. Your users stay in your directory.

Your framework. Our governance.

Free to start. Add per-user governance to your agent code in minutes.