Agentic Control Plane

Pydantic AI + ACP — Governance Install Guide

Pydantic AI is a Python framework for building agents with type-safe tools, structured outputs, and provider-agnostic model strings. Out of the box, a production deployment shares one backend API key across every end user’s request — no per-user policy enforcement, no per-user audit trail, no way to tell downstream systems which human triggered which action.

acp-governance closes that gap. Stack @governed under Pydantic AI’s @agent.tool_plain (or @agent.tool) and bind the end user’s identity per request with set_context. Same governance model as Claude Code — same /govern/tool-use endpoint, same workspace policies.

Starter · 5-minute install. pip install acp-governance, stack @governed under @agent.tool_plain, bind the JWT per request. See the runnable starter, the governance model, or the frameworks index.

Install

pip install acp-governance pydantic-ai

Minimal governed agent

import json

from fastapi import FastAPI, Header
from pydantic_ai import Agent
from acp_governance import configure, governed, set_context

configure(base_url="https://api.agenticcontrolplane.com")
app = FastAPI()

agent = Agent(
    "anthropic:claude-sonnet-4-6",
    instructions="You are an ACP-governed agent. Use the tools available.",
)


# Stack @governed INSIDE @agent.tool_plain — Pydantic AI's introspection
# walks through __wrapped__ to read the original signature for tool schema.
# (db and mailer below stand in for your application's own handles.)
@agent.tool_plain
@governed("lookup_record")
def lookup_record(id: str) -> str:
    """Look up a record by ID."""
    return json.dumps(db.records.find_one({"id": id}))


@agent.tool_plain
@governed("send_email")
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email."""
    return mailer.send(to=to, subject=subject, body=body)


@app.post("/run")
def run(prompt: str, authorization: str = Header(...)):
    set_context(
        user_token=authorization.removeprefix("Bearer ").strip(),
        agent_name="my-pydantic-agent",
        agent_tier="interactive",
    )
    result = agent.run_sync(prompt)
    return {"result": result.output}

What happens on every tool call

  1. @governed POSTs to ACP’s /govern/tool-use with the tool name, input, and the user JWT bound by set_context.
  2. Deny → the wrapped function returns "tool_error: <reason>". Pydantic AI delivers it to the agent as tool output; the model sees the denial and adapts.
  3. Allow → your function runs.
  4. Post-audit: ACP scans the output for PII / secrets. If policy says redact, the redacted version replaces the original. Audit row written, rooted in the end user’s identity.
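The deny path above can be sketched with the standard library alone. This is an illustrative stand-in, not the library's actual implementation: governed_sketch and the check callable are assumptions that play the role of @governed and the POST to /govern/tool-use.

```python
import functools

def governed_sketch(tool_name, check):
    """Illustrative stand-in for @governed (not the shipped code):
    `check` plays the role of the POST to /govern/tool-use and
    returns (allowed, reason)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            allowed, reason = check(tool_name, args, kwargs)
            if not allowed:
                # Deny path: this string is returned as the tool's
                # output, so the model sees the denial and adapts.
                return f"tool_error: {reason}"
            return fn(*args, **kwargs)
        return wrapper
    return decorator

def deny_everything(tool_name, args, kwargs):
    return False, "blocked by workspace policy"

@governed_sketch("send_email", deny_everything)
def send_email(to: str, subject: str, body: str) -> str:
    return "sent"

print(send_email("a@example.com", "hi", "hello"))
# tool_error: blocked by workspace policy
```

The key property is that a denial is ordinary tool output rather than an exception, so the agent loop continues instead of crashing.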

Decorator order matters

@agent.tool_plain     # outer — registers with Pydantic AI
@governed("...")      # inner — wraps the call with governance
def my_tool(...): ...

@governed must sit closer to the function so the governance check runs inside Pydantic AI’s tool dispatch. The functools.wraps inside @governed preserves __wrapped__, so Pydantic AI’s inspect.signature walks through it to build the tool schema from your original function signature, type hints, and docstring.
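Why functools.wraps is enough can be verified with the standard library; governed_like below is a stand-in for @governed, not the real decorator.

```python
import functools
import inspect

def governed_like(fn):
    # functools.wraps copies __doc__, __name__, type hints, and
    # sets wrapper.__wrapped__ = fn.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

@governed_like
def lookup_record(id: str) -> str:
    """Look up a record by ID."""
    return id

# inspect.signature follows __wrapped__ by default, so a schema
# builder still sees the original parameters and annotations.
print(inspect.signature(lookup_record))  # (id: str) -> str
print(lookup_record.__doc__)             # Look up a record by ID.
```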

If you need access to the agent’s run context inside the tool, use @agent.tool instead of @agent.tool_plain and keep RunContext[Deps] as the first parameter — @governed composes with both.

Per-tier policy

set_context(agent_tier="...") controls the policy tier:

  • interactive — human at the keyboard, permissive default.
  • subagent — invoked by another agent, no human in the immediate loop.
  • background — autonomous, most restrictive.
  • api — programmatic call from your backend.

Pydantic AI–specific notes

  • Hooks API migration path. Pydantic AI ships a first-class Hooks capability (before_tool_execute, after_tool_execute, wrap_tool_execute) that’s a cleaner integration point than per-function decorator stacking — it governs every tool registered with the agent without requiring users to decorate each one. A future acp-pydantic-ai v0.2 package will expose an ACPHooks() helper using this surface. The decorator stacking documented here will keep working; the migration is an ergonomic upgrade, not a correctness requirement.
  • Multi-provider model strings. Agent("anthropic:claude-sonnet-4-6"), Agent("openai:gpt-4o-mini"), Agent("google-gla:gemini-...") all resolve to provider-specific clients using their respective API keys. Governance is identical across providers.
  • Structured outputs. Pydantic AI’s output_type=MyModel works through the governance layer unchanged — the governance hook runs on tool calls, not on the agent’s structured-output validation.

Async tools

import httpx

@agent.tool_plain
@governed("fetch")
async def fetch(url: str) -> str:
    """Fetch a URL."""
    async with httpx.AsyncClient() as client:
        resp = await client.get(url)
    return resp.text

@governed detects coroutine functions and dispatches accordingly.
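That dispatch can be sketched with the standard library. governed_sketch is illustrative only, not the shipped decorator:

```python
import asyncio
import functools
import inspect

def governed_sketch(tool_name):
    """Illustrative only: a governance decorator must return an async
    wrapper for async tools, or awaiting the tool would break."""
    def decorator(fn):
        if inspect.iscoroutinefunction(fn):
            @functools.wraps(fn)
            async def async_wrapper(*args, **kwargs):
                # the pre-call governance check would run here
                return await fn(*args, **kwargs)
            return async_wrapper
        @functools.wraps(fn)
        def sync_wrapper(*args, **kwargs):
            return fn(*args, **kwargs)
        return sync_wrapper
    return decorator

@governed_sketch("fetch")
async def fetch(url: str) -> str:
    return f"fetched {url}"

# The wrapped tool is still a coroutine function, so Pydantic AI's
# async dispatch keeps working.
print(asyncio.run(fetch("https://example.com")))  # fetched https://example.com
```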

Limitations

  • Only tools routed through @governed are covered. Plain functions registered with @agent.tool_plain without @governed bypass governance.
  • LLM calls go direct to your provider. ACP governs tools, not tokens. Pair with Portkey or LiteLLM virtual keys for per-user LLM cost attribution.
  • Pre-release. acp-governance is on 0.x. A framework-specific acp-pydantic-ai package will land with the Hooks API integration.