Governance for ChatGPT Workspace Agents
OpenAI announced Workspace Agents — “Codex-powered agents for teams” — in research preview for ChatGPT Business, Enterprise, Edu, and Teachers plans. They’re framed as “an evolution of GPTs”: shared agents that run in the cloud, can be invoked from ChatGPT or Slack, carry memory across sessions, and scale from “answer a prompt” to running on a schedule without a human in the loop.
Free until May 6, 2026. Credit-based pricing kicks in after.
This page is the honest read: what governance ships natively today, what’s explicitly documented, and where ACP fits for the self-built agent surface that sits next to Workspace Agents inside the same organization.
TL;DR. OpenAI ships a real governance surface for Workspace Agents: role-based access to build/use/share, admin control over “connected tools and actions user groups can access,” approval gating on sensitive steps, and the Compliance API for visibility into every agent’s configuration, updates, and runs. What isn’t documented is a pre-execution hook or webhook for third-party policy enforcement at the tool-call boundary — the Compliance API is retrospective visibility, not an inline gate. If your organization also builds agents outside the hosted Workspace Agents surface (OpenAI Agents SDK, MCP-backed tools, self-hosted services), those are where ACP plugs in — with the Proxy integration pattern scoring 45/48 on AgentGovBench.
What OpenAI ships natively
Quoting the announcement and adjacent docs:
- Role-based controls. “Enterprise and Edu admins can enable agents using role-based controls.” Admins can “manage who has access to use, build, and share agents.”
- Connected-tool governance. “ChatGPT Enterprise and Edu admins can control which connected tools and actions user groups can access.” This is the app-level allowlist — per user group, per tool/action.
- Per-step approval. “For sensitive steps, like editing a spreadsheet, sending an email, or adding a calendar event, you can require the agent to ask for permission before moving forward.” Designed into the agent builder, not an external hook.
- Frozen MCP tool snapshots (from adjacent Admin Controls docs): when an admin approves an MCP app for the workspace, ChatGPT uses a “frozen” snapshot of its available tools and inputs — upstream changes don’t take effect until an admin reviews and republishes. This is meaningful supply-chain governance for MCP-backed actions.
- The Compliance API. “Gives admins visibility into every agent’s configuration, updates, and runs, so they can monitor and control how agents are being built and used.” Admins can also suspend agents.
- Prompt-injection safeguards. “Built-in safeguards help agents stay aligned with your instructions when they encounter misleading external content, including prompt injection attacks.”
- Admin-console agent inventory (coming soon). “Soon, admins will also be able to view every agent built across their organization in the admin console, including usage patterns and connected data sources.”
- Analytics for agent owners. “After you share an agent, analytics help you see how it’s being used, including how many runs it has completed and how many people are using it.”
This is a serious governance surface — role separation, connected-tool control, per-step approval, Compliance API for audit. For organizations whose agent surface is entirely inside ChatGPT, it covers the big rocks.
What isn’t documented (and what that means for governance)
The announcement doesn’t describe:
- A pre-execution hook or webhook. The Compliance API is visibility after the fact, not a PreToolUse-style callback that lets an external system approve or deny a tool call in-line. If your policy is “don’t call Salesforce write actions from agents invoked by users lacking `salesforce:write` in our IdP,” that check has to live either in the downstream app or inside OpenAI’s role-based controls — there’s no third-party gate at the OpenAI side of the tool call.
- Per-user identity propagation to downstream apps. The announcement frames workspace agents as “run in the cloud” and “keep working even when you’re not.” When an agent calls a connected app, what identity the downstream system sees (the human invoker? the workspace? a service identity?) is not spelled out in the public launch material. For Salesforce / Google Drive / Slack audit chains, that matters.
- Delegation chain provenance. When an agent invokes another agent or composes across tools, whether the original user’s identity and intent propagate end-to-end isn’t documented. The Compliance API captures configurations and runs, but the granularity of chained-call attribution isn’t spelled out.
- Cost cascade controls across connected tools. A single agent that fans out into many tool calls can produce many downstream API calls. OpenAI’s role-based controls don’t document a per-user rate or cost ceiling that cascades into the connected apps.
- PII scanning on agent outputs. Output content scanning (the text the agent sends back to users or to the next tool) isn’t part of the documented native surface.
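For contrast, here is what the missing interception boundary would look like. This is a hypothetical sketch only — nothing like it exists in the hosted Workspace Agents surface today, and the `ToolCall` shape and `pre_tool_use` name are illustrative, modeled on the Salesforce policy example above:

```python
# Hypothetical sketch: a PreToolUse-style callback that approves or
# denies a tool call in-line, before execution. The hosted surface
# exposes no such hook today; all names here are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class ToolCall:
    tool: str                  # e.g. "salesforce"
    action: str                # e.g. "write"
    invoker_scopes: frozenset  # scopes resolved from the IdP at invocation time


def pre_tool_use(call: ToolCall) -> bool:
    """Approve or deny a tool call in-line, before it executes."""
    # The policy from the text: no Salesforce writes without salesforce:write.
    if call.tool == "salesforce" and call.action == "write":
        return f"{call.tool}:{call.action}" in call.invoker_scopes
    return True  # this sketch default-allows everything else
```

The point of the sketch is the call site: the decision happens before the tool runs, under the deployer’s control — the opposite of retrospective visibility.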
These gaps are architectural — they’d require OpenAI to expose an interception boundary that isn’t there today. For a hosted agent runtime, that’s a defensible product choice. For a governance buyer, it’s a gap.
Where ACP fits
ACP sits on the tool-call boundary. For organizations using Workspace Agents, here are the three integration paths, honestly ranked:
Path 1 — Agents SDK + ACP proxy (works today)
If you build agents using the OpenAI Agents SDK and host them yourself — even if they’re positioned alongside Workspace Agents in your org’s mental model — point the SDK’s OpenAI client at ACP:
```python
import os

from openai import AsyncOpenAI

client = AsyncOpenAI(
    base_url="https://api.agenticcontrolplane.com/v1",
    api_key=os.environ["ACP_API_KEY"],
)
```
Every LLM round-trip, every tool call, every handoff routes through ACP. You get per-user identity attribution via x-acp-user-id, policy enforcement by role/scope, rate-limit cascade protection, PII scanning on tool outputs, and audit that reconstructs the full delegation chain. Scored 45/48 on AgentGovBench. Full guide: /integrations/openai-agents-sdk.
Path 2 — MCP-backed custom actions (works today)
If you extend the organization’s agent surface with custom MCP servers (either consumed by Workspace Agents via the frozen-snapshot approval flow, or by self-hosted agents), ACP can run as the MCP backend. Every custom action flows through the full governance pipeline — same pattern as Cursor. OpenAI’s frozen-snapshot policy plus ACP’s pre-execution governance is a genuinely strong combination: OpenAI gates the tool set, ACP gates the invocation.
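What “flows through the full governance pipeline” means in practice can be sketched in plain Python. Everything below is illustrative stdlib code, not ACP’s implementation: a custom action is scope-checked, executed, PII-scanned on its output, and audit-logged.

```python
import re

# Toy governance pipeline for one custom MCP action. Illustrative
# only: stage order mirrors the features described in the text
# (policy check, execution, PII scan, audit), not ACP internals.
AUDIT_LOG: list[dict] = []


def pii_scan(text: str) -> str:
    # Toy scanner: redact anything shaped like an email address.
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[REDACTED]", text)


def governed_invoke(user: str, scopes: set[str], action: str, run_action) -> str:
    """Run one custom action through the four pipeline stages."""
    if action not in scopes:                       # 1. policy check
        AUDIT_LOG.append({"user": user, "action": action, "decision": "deny"})
        raise PermissionError(f"{user} lacks scope {action}")
    raw = run_action()                             # 2. execute the action
    clean = pii_scan(raw)                          # 3. PII scan on output
    AUDIT_LOG.append({"user": user, "action": action, "decision": "allow"})
    return clean                                   # 4. audited result
```

In the combined setup, OpenAI’s frozen snapshot decides which tools exist at all; a pipeline like this decides what each invocation may do and records that it happened.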
Path 3 — hosted Workspace Agents themselves (limited today)
For agents running entirely inside ChatGPT’s hosted runtime, calling OpenAI’s managed connectors: there’s no third-party interception point documented. You get role-based controls, per-step approval, connected-tool allowlists, and Compliance API visibility from OpenAI. For organizations where compliance means “we need OpenAI’s audit trail + server-side compensating controls at downstream apps” — that’s sufficient. For organizations where compliance means “we need an inline gate under our control” — the hosted surface doesn’t offer one today, and ACP can’t proxy calls OpenAI makes inside its own runtime.
The honest recommendation for the hosted-only case: use OpenAI’s native surface, configure downstream apps’ own access controls (Salesforce field-level security, Google Drive sharing policies, Slack workspace settings) as compensating controls, and export the Compliance API into your SIEM.
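The SIEM export reduces to a small transform per record. The field names below are placeholders — the Compliance API’s record schema isn’t spelled out in the public launch material, so treat this as a shape sketch, not a mapping:

```python
import json


def run_to_siem_event(run: dict) -> str:
    """Flatten one agent-run record into a JSON line for SIEM ingest.

    Placeholder schema: agent_id / user / started_at / tools / status
    are assumed field names, not documented Compliance API fields.
    """
    event = {
        "event_type": "workspace_agent_run",
        "agent_id": run.get("agent_id"),
        "invoked_by": run.get("user"),
        "started_at": run.get("started_at"),
        "connected_tools": run.get("tools", []),
        "status": run.get("status"),
    }
    return json.dumps(event, sort_keys=True)
```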
If OpenAI later exposes a pre-invocation webhook for Workspace Agent tool calls, ACP will support it out of the box.
Install (for Paths 1 and 2)
```shell
curl -sf https://agenticcontrolplane.com/install.sh | bash
```

Windows:

```powershell
irm https://agenticcontrolplane.com/install.ps1 | iex
```
Read next
- Architecture is governance — the four-pattern taxonomy (Decorator / Hook / Proxy / MCP) and where each surfaces a governance ceiling.
- OpenAI Agents SDK integration guide — the detailed Path 1 setup.
- AgentGovBench scenarios — what the 45/48 score actually tests.
Running ChatGPT Enterprise + Workspace Agents with governance needs that span both OpenAI’s hosted surface and your self-built agents? Say hi.