Governed AutoGen in 3 minutes
Microsoft AutoGen is the multi-agent framework the research community keeps coming back to. The v0.4 rewrite (Jan 2025) replaced the entire v0.2 API; v0.7+ is the current shape — AssistantAgent, async-first model clients, tools as plain Python callables. It’s now production-ready, used in everything from SelectorGroupChat orchestrations to single-agent assistants.
What it doesn’t ship: per-user identity attribution on tool calls, cross-tenant audit, output redaction, or pluggable policy. AutoGen’s guardrails tracking issue (#6017) is still open — safety primitives landed in the adjacent Microsoft Agent Framework, not in AutoGen 0.7.x.
This post is the 3-minute path with Agentic Control Plane. Decorate your tool functions with @governed, bind identity with set_context, ship.
The pattern
Tool-layer governance via @governed. AutoGen’s AssistantAgent accepts plain Python callables in tools=[...]; @governed wraps each callable with ACP’s pre/post hooks. Because the wrapper sets __wrapped__ (via functools.wraps), Python’s inspect.signature, which AutoGen uses for tool schema generation, still reads the original function’s type hints and docstring.
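To make the pattern concrete, here is a minimal sketch of the decorator shape — a pass-through stand-in, not ACP’s actual implementation — showing why schema generation survives: functools.wraps sets __wrapped__, and inspect.signature follows it to the original function.

```python
import functools
import inspect

def governed(tool_name):
    # Sketch only: ACP's real pre/post hooks replace the pass-through body.
    def decorate(fn):
        @functools.wraps(fn)  # sets wrapper.__wrapped__ = fn, copies __doc__
        async def wrapper(*args, **kwargs):
            return await fn(*args, **kwargs)  # pre/post hooks go around this
        return wrapper
    return decorate

@governed("lookup_record")
async def lookup_record(id: str) -> str:
    """Look up a record by ID."""
    return id

# inspect.signature follows __wrapped__, so the original schema is visible:
print(inspect.signature(lookup_record))  # (id: str) -> str
print(lookup_record.__doc__)             # Look up a record by ID.
```

This is the same mechanism AutoGen relies on when it builds a tool schema from a decorated callable.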
Three minutes from blank slate
1. Install
pip install acp-governance "autogen-agentchat" "autogen-ext[openai]"
2. Wrap your tools
import asyncio, os, json
from acp_governance import configure, governed, set_context
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

configure(base_url="https://api.agenticcontrolplane.com")

@governed("lookup_record")
async def lookup_record(id: str) -> str:
    """Look up a record by ID."""
    # `db` is your application's database handle.
    return json.dumps(await db.records.find_one({"id": id}))

@governed("send_email")
async def send_email(to: str, subject: str, body: str) -> str:
    """Send an email."""
    # `mailer` is your application's email client.
    return await mailer.send(to=to, subject=subject, body=body)

async def run(prompt: str, user_jwt: str):
    set_context(
        user_token=user_jwt,
        agent_name="my-autogen-agent",
        agent_tier="interactive",
    )
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    agent = AssistantAgent(
        name="my_autogen_agent",
        model_client=model_client,
        tools=[lookup_record, send_email],
        system_message="You are an ACP-governed agent.",
    )
    try:
        result = await agent.run(task=prompt)
        return result.messages[-1].content
    finally:
        await model_client.close()
3. Run it
export ACP_USER_TOKEN=gsk_...
export OPENAI_API_KEY=...
python -c "import asyncio; from my_agent import run; print(asyncio.run(run('Look up record id=abc-123.', '$ACP_USER_TOKEN')))"
Open cloud.agenticcontrolplane.com/activity. One row per tool call with actor, tool, decision, session, input/output preview.
What @governed does
Every call to a decorated tool:
- POSTs to /govern/tool-use with the tool name, input arguments, and the user token bound by set_context.
- Deny → returns "tool_error: <reason>". AutoGen delivers it to the agent as tool output; the model adapts.
- Allow → runs your function.
- Post-audit: ACP scans the output for PII / secrets, optionally redacts.
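The deny path is worth seeing end to end. Below is a hedged sketch of that flow with a hypothetical check_policy stand-in for the /govern/tool-use call — the real decorator talks to ACP over HTTP, but the shape is the same: denied calls never reach your function, and the reason comes back as ordinary tool output.

```python
import asyncio
import functools

# Hypothetical stand-in for ACP's /govern/tool-use endpoint.
async def check_policy(tool_name, kwargs):
    if tool_name == "send_email":
        return False, "policy denies email for this tier"
    return True, None

def governed(tool_name):
    def decorate(fn):
        @functools.wraps(fn)
        async def wrapper(**kwargs):
            allowed, reason = await check_policy(tool_name, kwargs)
            if not allowed:
                # Deny surfaces as tool output, so the model can adapt.
                return f"tool_error: {reason}"
            return await fn(**kwargs)  # Allow: run the real function.
        return wrapper
    return decorate

@governed("lookup_record")
async def lookup_record(id: str) -> str:
    return f"record:{id}"

@governed("send_email")
async def send_email(to: str, subject: str, body: str) -> str:
    return "sent"

print(asyncio.run(lookup_record(id="abc-123")))
# record:abc-123
print(asyncio.run(send_email(to="a@b.c", subject="hi", body="...")))
# tool_error: policy denies email for this tier
```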
Tool schema generation still works with the decorator in place: inspect.signature follows __wrapped__ (set by functools.wraps) to the original function signature, which is what AutoGen reads when registering the tool.
Why no acp-autogen package?
AutoGen v0.7.5’s AssistantAgent doesn’t expose a tool-dispatch middleware seam — there’s no on_tool_call / before_tool / hook surface to integrate against. Inline-decorating each function is the documented path. When AutoGen ships a tool-dispatch middleware (likely once the guardrails issue lands), an acp-autogen adapter will use it; until then, decorator stacking is correct, idiomatic, and what AutoGen’s own docs recommend for any per-tool wrapping.
Per-tier policy
set_context(agent_tier="...") controls the policy tier:
- interactive — human at the keyboard.
- subagent — invoked by another agent.
- background — autonomous, most restrictive.
- api — programmatic call from your backend.
For an AssistantAgent running inside a SelectorGroupChat or other multi-agent topology, set agent_tier="subagent" for the worker agents — distinct from the orchestrator’s tier.
Multi-agent topologies
SelectorGroupChat, RoundRobinGroupChat, and hand-rolled multi-agent topologies all work the same way. Decorate each tool once, pass to whichever agents need them:
from autogen_agentchat.teams import SelectorGroupChat
researcher = AssistantAgent(name="researcher", model_client=client, tools=[lookup_record])
writer = AssistantAgent(name="writer", model_client=client, tools=[send_email])
team = SelectorGroupChat([researcher, writer], model_client=client)
The governance call runs on every tool dispatch regardless of which agent in the topology fired it. For finer attribution, call set_context with different agent_name values inside each agent’s tool handler — every governed call inside that scope reads the bound name.
Async tools, sync tools
@governed detects sync vs async functions; both work. AutoGen v0.7+ also accepts sync callables in tools=[...], so you can mix:
import httpx

@governed("instant")
def instant(x: int) -> int:
    return x * 2

@governed("slow")
async def slow(url: str) -> str:
    async with httpx.AsyncClient() as client:
        return (await client.get(url)).text
Migration note
If you’re on AutoGen v0.2, the framework you’re using has been replaced. v0.4 (Jan 2025) was a complete rewrite. Older tutorials showing ConversableAgent, config_list, or initiate_chat are pre-rewrite and won’t work with current packages. This guide targets v0.7.5+. Migrate before installing ACP — the governance integration assumes the new API.
What this unlocks
AutoGen is back in production after the rewrite. ACP plugs in via tool-layer decorators — three lines of integration, full audit, identity-propagated policy, no adapter to learn.
AutoGen integration guide → · Three-minute integrations → · Get started →