Governed OpenAI Agents SDK in 3 Minutes
The OpenAI Agents SDK is built for multi-agent systems — Agent, Runner, handoffs, Guardrails. What it doesn’t build is the governance layer. Every LLM call and every emitted tool call is attributed to one shared OPENAI_API_KEY, so your backend has no idea which human triggered which action.
ACP fixes this with the simplest integration pattern of any framework in our series: one base_url change. No SDK to install, no decorator, no hook.
The 3-minute setup
```python
from agents import Agent, Runner, set_default_openai_client
from openai import AsyncOpenAI
import os

client = AsyncOpenAI(
    base_url="https://api.agenticcontrolplane.com/v1",
    api_key=os.environ["ACP_API_KEY"],  # gsk_yourslug_xxxxx
)
set_default_openai_client(client)

# Your existing Agents SDK code — unchanged
researcher = Agent(
    name="researcher",
    instructions="...",
    tools=[web_search, hn_search],
)
result = await Runner.run(researcher, "...")
```
Four lines. Every LLM call made by every agent now flows through ACP’s OpenAI-compatible proxy. Your existing Agent code, Runner calls, tools, and handoff structure are unchanged.
Grab ACP_API_KEY (it starts with gsk_) from cloud.agenticcontrolplane.com → Settings → API Keys.
Per-agent attribution
Each Agent shows up as a distinct row in the dashboard when you tag it via the x-acp-agent-name header:
```python
from agents import Agent, OpenAIChatCompletionsModel
from openai import AsyncOpenAI
import os

def make_client(agent_name: str) -> AsyncOpenAI:
    return AsyncOpenAI(
        base_url="https://api.agenticcontrolplane.com/v1",
        api_key=os.environ["ACP_API_KEY"],
        default_headers={"x-acp-agent-name": agent_name},
    )

# Pass each client to its Agent via the model configuration
researcher = Agent(
    name="researcher",
    instructions="...",
    model=OpenAIChatCompletionsModel(model="gpt-4o", openai_client=make_client("researcher")),
)
# ...and the same pattern with make_client("writer") for the writer Agent
```
Every agent now has its own row on the Agents page with its own policy, rate limits, and budget. The dashboard, your SIEM, and compliance reports can slice by agent.
What you get for free
Every LLM call plus every emitted tool call is captured at the proxy layer:
- Identity per call — x-acp-agent-name header + gsk_ API key
- Per-agent policy — allow / deny / redact on a per-Agent basis
- Per-user budgets — rate limits on cost, token count, and call volume
- PII detection — on prompt inputs and model outputs
- Delegation chain — handoffs between agents are captured as the proxy sees each agent’s requests
- Full request envelope — system prompt, messages, tools, model config — everything the SDK was about to serialize
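To make the last point concrete, here is an illustrative sketch of the envelope the proxy observes when the SDK serializes one chat-completion call. The field names follow the OpenAI chat-completions wire format; the specific values (model, prompts, the hn_search schema) are made up for illustration.

```python
# Illustrative request envelope as serialized by the SDK before the HTTP
# call — the proxy sees all of this at once. Values are hypothetical.
envelope = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a research assistant."},
        {"role": "user", "content": "Summarize today's HN front page."},
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "hn_search",
                "parameters": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                },
            },
        }
    ],
    "temperature": 0.2,
}
```

Everything a policy needs — identity headers aside — is in this one payload: the system prompt, the full message history, and each tool’s complete schema.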
Why proxy beats decorator for OpenAI Agents SDK
We benchmarked this. OpenAI Agents SDK + ACP scores 45/48 on AgentGovBench — higher than CrewAI + ACP or LangGraph + ACP (both 40/48).
The reason: the proxy sits at the natural request serialization boundary. The SDK serializes everything it knows (system prompt, all messages, all tools, handoff context, model config) into one JSON payload before the HTTP call. The proxy sees the complete picture.
Decorator-pattern frameworks (CrewAI, LangGraph) wrap individual tool functions. Smaller scope. They miss orchestration context the proxy naturally captures.
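The scope difference can be sketched in a few lines. This is a conceptual contrast, not any framework’s real API: the decorator records only one tool’s name and arguments, while the proxy records the whole serialized request.

```python
captured = {"tool_calls": [], "requests": []}

# Decorator pattern: governance sees one tool invocation at a time.
def governed_tool(fn):
    def wrapper(*args, **kwargs):
        captured["tool_calls"].append({"tool": fn.__name__, "kwargs": kwargs})
        return fn(*args, **kwargs)
    return wrapper

@governed_tool
def web_search(query: str) -> str:
    return f"results for {query}"

# Proxy pattern: governance sees the complete serialized request.
def proxy_send(request: dict) -> None:
    captured["requests"].append(request)

web_search(query="acp")
proxy_send({
    "model": "gpt-4o",
    "messages": [{"role": "system", "content": "..."}],
    "tools": [{"type": "function", "function": {"name": "web_search"}}],
})
```

After both calls, the decorator’s record holds only a tool name and its arguments; the proxy’s record also holds the system prompt, message history, and tool schemas.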
Known limits
- Guardrails run before the proxy. If you’ve configured Agents SDK Guardrails, they short-circuit before any request reaches ACP. Governance outcomes from guardrails are invisible to the audit log. Most teams run guardrails and ACP together — they’re complementary.
- The proxy pattern fails closed. When the proxy is unreachable, the OpenAI client returns a network error to the SDK. Fail-open is an application-level concern; decide in your app code whether to retry, cache, or degrade.
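Since the fail-open decision lives in your app code, one option is a retry-then-degrade wrapper around whatever kicks off your agent run. A minimal sketch, assuming you catch the client’s connection error (openai raises APIConnectionError; the builtin ConnectionError stands in for it here):

```python
import time

def run_with_fallback(run, *, retries=2, backoff=1.0, fallback=None):
    """Retry a callable on connection errors, then degrade gracefully.

    `run` stands in for whatever starts your agent (e.g. a function that
    calls Runner.run). On repeated failure, return `fallback` — a cached
    answer, a canned reply, or None — instead of crashing.
    """
    for attempt in range(retries + 1):
        try:
            return run()
        except ConnectionError:
            if attempt == retries:
                return fallback  # proxy stayed unreachable: degrade
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
```

Usage: `run_with_fallback(lambda: my_agent_call(), fallback="Service temporarily unavailable.")` — swap the exception type for the one your client actually raises.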
Next steps:
- OpenAI Agents SDK integration guide — per-agent attribution + advanced patterns
- OpenAI Agents SDK scorecard — 45/48 on AgentGovBench
- Governance in Three Minutes series — one install, every framework
- 1. Governance for Claude Code in 60 seconds
- 2. Governing the Anthropic Agent SDK
- 3. Governed LangGraph in 3 Minutes
- 4. Governed CrewAI in 3 Minutes
- 5. Governed Cursor in 3 Minutes
- 6. Governed Codex CLI in 3 Minutes
- 7. Governed OpenAI Agents SDK in 3 Minutes · you are here