Agentic Control Plane

AutoGen + ACP — Governance Install Guide

Microsoft AutoGen is a Python multi-agent framework. The v0.4 rewrite (Jan 2025) replaced the v0.2 API entirely; v0.7+ is the current shape, with AssistantAgent, model clients, and tools as plain Python callables. Out of the box, a production deployment shares one backend API key across every end user’s request — no per-user policy enforcement, no per-user audit trail.

acp-governance closes that gap. Decorate tool functions with @governed; bind the end user’s identity per request with set_context. Same governance model as Claude Code — same /govern/tool-use endpoint, same workspace policies.

Starter · 5-minute install. pip install acp-governance autogen-agentchat, decorate tools with @governed, bind the JWT per request. See the runnable starter, the governance model, or the frameworks index.

Install

pip install acp-governance "autogen-agentchat" "autogen-ext[openai]"

For Anthropic models, swap [openai] for [anthropic] and import AnthropicChatCompletionClient from autogen_ext.models.anthropic.

Minimal governed agent

import asyncio
import json
from acp_governance import configure, governed, set_context
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

configure(base_url="https://api.agenticcontrolplane.com")


# @governed wraps the coroutine. AutoGen's signature introspection walks
# through __wrapped__ (set by functools.wraps) to read the original
# signature for tool schema generation.
# ("db" and "mailer" below stand in for your application's own clients.)
@governed("lookup_record")
async def lookup_record(id: str) -> str:
    """Look up a record by ID."""
    return json.dumps(await db.records.find_one({"id": id}))


@governed("send_email")
async def send_email(to: str, subject: str, body: str) -> str:
    """Send an email."""
    return await mailer.send(to=to, subject=subject, body=body)


async def run(prompt: str, user_jwt: str):
    set_context(
        user_token=user_jwt,
        agent_name="my-autogen-agent",
        agent_tier="interactive",
    )

    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    agent = AssistantAgent(
        name="my_autogen_agent",
        model_client=model_client,
        tools=[lookup_record, send_email],
        system_message="You are an ACP-governed agent. Use the tools available.",
    )

    try:
        result = await agent.run(task=prompt)
        return result.messages[-1].content
    finally:
        await model_client.close()
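run() above expects the end user's JWT per request. How you extract it depends on your web framework; a minimal framework-agnostic sketch (the helper name and header shape are assumptions, not part of acp-governance):

```python
# Hypothetical helper: pull the end user's bearer token out of request
# headers so it can be passed as user_jwt to run(prompt, user_jwt).
def bearer_token(headers: dict) -> str:
    auth = headers.get("Authorization", "")
    scheme, _, token = auth.partition(" ")
    if scheme.lower() != "bearer" or not token:
        raise ValueError("missing or malformed Authorization header")
    return token

# e.g. inside a web handler:
#   jwt = bearer_token(request.headers)
#   answer = await run(prompt, user_jwt=jwt)
```

The key point is that the token is per request: bind it with set_context on every request, never at process startup.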

What @governed does

Every call to a decorated tool:

  1. POSTs to ACP’s /govern/tool-use with the tool name, input arguments, and the user token bound by set_context.
  2. If ACP denies, returns "tool_error: <reason>" — AutoGen delivers it to the agent as tool output; the model adapts.
  3. If ACP allows, runs your function.
  4. POSTs the output to /govern/tool-output for audit logging and PII / secret scanning.
  5. If ACP redacts, replaces the output with the redacted version.
  6. If ACP blocks the output post-hoc, returns "tool_error: <reason>".

AutoGen’s inspect.signature walks through __wrapped__ to read the original function signature, so tool schema generation works correctly with the decorator in place.
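The mechanics can be sketched with stdlib only. This is an illustrative stand-in for @governed, with the two HTTP calls stubbed out; it shows why functools.wraps matters for AutoGen's schema generation:

```python
import asyncio
import functools
import inspect

def governed_sketch(tool_name):
    """Illustrative stand-in for @governed; the real decorator POSTs to
    ACP's /govern/tool-use and /govern/tool-output, stubbed here."""
    def decorate(fn):
        @functools.wraps(fn)  # sets __wrapped__, preserving the signature
        async def wrapper(*args, **kwargs):
            allowed = True  # stub for the /govern/tool-use pre-flight check
            if not allowed:
                return "tool_error: denied by policy"
            out = await fn(*args, **kwargs)  # run the actual tool
            # stub for the /govern/tool-output audit / redaction step
            return out
        return wrapper
    return decorate

@governed_sketch("lookup_record")
async def lookup_record(id: str) -> str:
    """Look up a record by ID."""
    return f"record:{id}"

# inspect.signature follows __wrapped__ by default, so AutoGen sees the
# original (id: str) signature, not (*args, **kwargs):
print(list(inspect.signature(lookup_record).parameters))  # ['id']
```

If a decorator omitted functools.wraps, AutoGen would see `(*args, **kwargs)` and generate an unusable tool schema.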

Per-tier policy

set_context(agent_tier="...") controls the policy tier:

  • interactive — human at the keyboard, permissive default.
  • subagent — invoked by another agent, no human in the immediate loop.
  • background — autonomous, most restrictive.
  • api — programmatic call from your backend.

For an AssistantAgent running inside a SelectorGroupChat or other multi-agent topology, set agent_tier="subagent" for the worker agents — distinct from the orchestrator’s tier.


AutoGen-specific notes

  • No tool-level hooks in AssistantAgent (as of v0.7.5). The framework has no on_tool_call / before_tool / middleware seam. Inline-decorating each function is the documented path to add per-tool behavior.
  • Post-v0.4 rewrite. AutoGen v0.4 (Jan 2025) was a full rewrite. Older tutorials showing ConversableAgent, config_list, or initiate_chat are pre-rewrite and won’t work with current packages. This guide targets v0.7.5+.
  • Guardrails landed in Microsoft Agent Framework, not AutoGen 0.7.x. Tracking issue #6017 is open. Until guardrails ship in AutoGen itself, ACP fills the tool-layer gap via @governed.
  • Sync tools too. @governed detects sync vs async functions; both work. AutoGen v0.7+ also accepts sync callables in tools=[...].
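The sync/async detection the last bullet describes can be sketched as a decorator that branches on asyncio.iscoroutinefunction (again with the governance HTTP calls stubbed out; this is not the library's actual implementation):

```python
import asyncio
import functools

def governed_both(tool_name):
    """Sketch: one decorator covering both sync and async tools."""
    def decorate(fn):
        if asyncio.iscoroutinefunction(fn):
            @functools.wraps(fn)
            async def async_wrapper(*args, **kwargs):
                # pre/post governance checks would go here
                return await fn(*args, **kwargs)
            return async_wrapper
        @functools.wraps(fn)
        def sync_wrapper(*args, **kwargs):
            # pre/post governance checks would go here
            return fn(*args, **kwargs)
        return sync_wrapper
    return decorate

@governed_both("add_numbers")
def add_numbers(a: int, b: int) -> int:
    """Add two numbers (sync tool)."""
    return a + b

@governed_both("fetch")
async def fetch(url: str) -> str:
    """Pretend to fetch a URL (async tool)."""
    return f"ok:{url}"
```

Branching at decoration time (rather than inside the wrapper) keeps a sync tool sync, so AutoGen can schedule it correctly.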

Multi-agent setups

For SelectorGroupChat, RoundRobinGroupChat, or hand-rolled topologies, decorate each tool once and pass it to whichever agents need it. The per-call governance cost is the same regardless of which agent in the topology calls the tool.

from autogen_agentchat.teams import SelectorGroupChat

researcher = AssistantAgent(name="researcher", model_client=client, tools=[lookup_record, ...])
writer     = AssistantAgent(name="writer",     model_client=client, tools=[send_email, ...])

team = SelectorGroupChat([researcher, writer], model_client=client)

The agent_name you pass to set_context shows up in the audit log; for finer attribution you can call set_context with different agent_name values inside each agent’s tool handler.
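Rebinding agent_name per agent is only safe if the binding is task-local. Assuming set_context is backed by Python's contextvars (an assumption about the library, not confirmed here), the mechanism looks like this: each asyncio task gets its own copy of the context, so concurrent agents don't clobber each other's names.

```python
import asyncio
import contextvars

# Illustrative ContextVar, not acp-governance's internal state.
_agent_name = contextvars.ContextVar("agent_name", default="unset")

async def handler(name: str) -> str:
    _agent_name.set(name)      # bind this task's own agent_name
    await asyncio.sleep(0)     # yield to the other task
    return _agent_name.get()   # still this task's binding, not the other's

async def main():
    # two "agents" running concurrently keep separate bindings
    return await asyncio.gather(handler("researcher"), handler("writer"))

print(asyncio.run(main()))  # ['researcher', 'writer']
```

If set_context were a plain module-level global instead, concurrent requests would race and audit entries could be mis-attributed.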

Limitations

  • Only tools wrapped with @governed are covered. Plain functions in tools=[...] bypass governance.
  • LLM calls go direct to your provider. ACP governs tools, not tokens. For per-user LLM cost attribution, pair with Portkey or LiteLLM virtual keys.
  • No middleware integration today. When AutoGen ships a tool-dispatch middleware seam, an acp-autogen adapter will use it; until then decorator stacking is the documented path.
  • Pre-release. acp-governance is on 0.x.