Govern Aider with Agentic Control Plane
Aider is a popular open-source AI pair programmer — Python CLI, runs against any LLM (OpenAI, Anthropic, local models). It’s beloved by developers who want fine-grained control over AI coding sessions. ACP adds an audit trail and policy layer on top.
TL;DR — fastest path
Set Aider’s LLM endpoint to ACP’s OAI-compatible proxy:
export OPENAI_API_BASE="https://api.agenticcontrolplane.com/v1"
export OPENAI_API_KEY="gsk_yourslug_xxxxxxxxxxxx"
aider
Or persist via ~/.aider.conf.yml:
openai-api-base: https://api.agenticcontrolplane.com/v1
openai-api-key: gsk_yourslug_xxxxxxxxxxxx
Every Aider LLM call now flows through ACP. Audit log entries show up in your dashboard with client.name: "aider".
How it works
Aider sends model calls via the OpenAI Python client. ACP’s OAI-compatible endpoint accepts the same shape, applies governance, and proxies to the configured upstream model (your choice of provider). Tool calls that the model emits are governed at the call level.
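As a concrete sketch of the paragraph above: Aider's client emits a standard OpenAI chat-completions payload, and ACP's endpoint accepts the same shape unchanged. The model name and message contents below are illustrative assumptions, not values Aider actually sends.

```python
import json

# The standard OpenAI chat-completions shape Aider's client emits.
# ACP's OAI-compatible endpoint accepts the identical payload;
# the model name here is an illustrative assumption.
payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are an AI pair programmer."},
        {"role": "user", "content": "Rename function foo to bar in utils.py"},
    ],
}

# With OPENAI_API_BASE set as above, Aider POSTs this body to
# https://api.agenticcontrolplane.com/v1/chat/completions with the
# gsk_ key in the Authorization header. ACP applies policy, then
# proxies the request to the configured upstream provider.
body = json.dumps(payload)
```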
What you’ll see in the dashboard
Aider sessions appear on the Agents page under the identity tied to your gsk_ key. The activity log shows individual tool calls (file reads, edits, command execution) with full context.
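For illustration only, an activity-log entry for an Aider session might be shaped like the dictionary below. Only the `client.name: "aider"` field is documented above; every other key is a hypothetical placeholder, not ACP's actual log schema.

```python
# Hypothetical shape of a dashboard activity-log entry. Only
# client.name is confirmed by this page; all other keys are
# illustrative placeholders.
audit_entry = {
    "client": {"name": "aider"},           # how Aider traffic is tagged
    "identity": "gsk_yourslug",            # identity tied to your gsk_ key
    "action": "chat.completion",           # the governed LLM call
    "tool_calls": [                        # tool calls the model emitted
        {"name": "edit_file", "target": "utils.py"},
    ],
}
```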
Limitations
- Aider’s own tool execution (file edits, git commits) bypasses ACP. ACP sees the LLM-level instructions but not Aider’s local file operations. The Python SDK adapter (in development) will wire into Aider’s hook points for full per-action governance.
- Native delegation chain support requires the Python SDK adapter. The OAI proxy alone treats every Aider session as a single undifferentiated agent.
- --lint-cmd, --test-cmd, --auto-commits and similar autonomous behaviors run locally and aren’t intercepted. ACP’s role is audit + LLM-call policy enforcement, not local-action gating.
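If you want fewer local actions that ACP can’t see, you can turn off Aider’s autonomous behaviors in the same config file. The keys below follow Aider’s flag names; verify them against your Aider version’s documentation.

```yaml
# ~/.aider.conf.yml — reduce local actions ACP cannot observe
auto-commits: false    # don't let Aider commit edits on its own
dirty-commits: false   # don't auto-commit pre-existing changes
```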
Related integrations
- Claude Code — Anthropic’s competing terminal coding agent with full hook integration
- OpenAI Codex CLI — OpenAI’s competing terminal coding agent
- CrewAI — Python multi-agent framework, same OAI-proxy pattern