Why Agentic Control Planes Will Matter

2025-10-15

We’re at an inflection point in AI deployment.

For the past two years, most AI integrations have been conversational — chatbots that answer questions, summarize documents, generate content. The backend risk was limited because the AI wasn’t doing anything. It was reading and writing text.

That’s changing. Fast.

Agents are taking real actions

OpenAI’s Apps SDK lets ChatGPT call any HTTP endpoint on behalf of a user. Anthropic’s Model Context Protocol gives LLMs a standard way to discover and invoke tools. Every enterprise is building internal copilots that don’t just suggest actions — they take them.

An AI assistant that queries patient records. A copilot that creates JIRA tickets. An agent that processes refunds, pulls credit reports, or modifies infrastructure.

These are real actions with real consequences. And most of them are happening without any governance layer.

The three-party problem

Traditional web apps have two parties: a user and a backend. Authentication is straightforward — the user proves their identity to the backend, and the backend enforces access controls.

AI apps introduce a third party: the LLM runtime. The user authenticates with the LLM, but when the LLM calls your backend, it typically does so with a shared API key. Your backend receives the request but has no way to know:

  • Who is this request actually for?
  • What are they allowed to do?
  • Does this action comply with your policies?

This is the three-party problem, and it’s the fundamental reason agentic control planes will matter.
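
To make the gap concrete, here's a minimal TypeScript sketch of a backend endpoint called through a shared key. The route, header name, and environment variable are hypothetical; the point is what the handler can and can't know.

```ts
import express from "express";

const app = express();

// Hypothetical endpoint an LLM runtime calls on a user's behalf.
app.post("/api/refunds", (req, res) => {
  // The only credential is a shared platform key. It proves the request
  // came from the LLM integration, not which user is behind it.
  if (req.headers["x-api-key"] !== process.env.PLATFORM_API_KEY) {
    return res.status(401).json({ error: "unknown caller" });
  }

  // At this point we can't answer any of the three questions above:
  // no user identity, no per-user permissions, no policy context.
  // Every request that carries the key gets the same full access.
  res.json({ status: "refund issued" });
});

app.listen(3000);
```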

What goes wrong without one

The consequences are already visible:

Shadow AI. Teams integrate AI tools without security review. Shared API keys get passed around. Nobody knows which users are making which requests.

Data leakage. Patient records, financial data, and legal documents flow into LLM prompts with no PII detection or redaction. You’re sending sensitive data to third-party models without any filtering.

No audit trail. A compliance officer asks who accessed what through the AI assistant. Without identity binding at the gateway, the answer is: “We don’t know.”
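
For contrast, here's a sketch of the kind of record identity binding makes possible. The field names are illustrative, not a fixed schema:

```ts
// Illustrative shape of an identity-bound audit record. With verified
// identity attached at the gateway, "who accessed what" becomes a query
// over records like this instead of a shrug.
interface AuditRecord {
  timestamp: string;                // ISO 8601 time of the action
  userId: string;                   // verified identity from the OAuth token
  tool: string;                     // which tool or endpoint the agent invoked
  resource: string;                 // the record or object that was touched
  policyDecision: "allow" | "deny"; // what the gateway decided
  costUsd?: number;                 // spend attributed to this call
}
```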

Runaway costs. An agent loop fires thousands of API calls in minutes. Without per-user rate limits enforced at the gateway, the first sign of trouble is the invoice.
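
A per-user limit doesn't need to be elaborate to stop a runaway loop. A minimal sketch in TypeScript, with an illustrative window and cap:

```ts
// Fixed-window rate limiter keyed by user. Window size and cap are
// illustrative; a production gateway would also track budgets.
const WINDOW_MS = 60_000; // 1-minute window
const MAX_CALLS = 100;    // per user, per window

const windows = new Map<string, { start: number; count: number }>();

function allowCall(userId: string, now = Date.now()): boolean {
  const w = windows.get(userId);
  if (!w || now - w.start >= WINDOW_MS) {
    windows.set(userId, { start: now, count: 1 });
    return true;
  }
  if (w.count >= MAX_CALLS) return false; // the loop stops here, not at the invoice
  w.count += 1;
  return true;
}
```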

What an agentic control plane does

An agentic control plane sits between the LLM and your backend. It handles six concerns:

  1. Identity binding — verify OAuth tokens and attach verified user identity to every request
  2. Content safety — detect PII in prompts before they reach the model
  3. Policy enforcement — deny-by-default authorization based on user roles and scopes
  4. Usage governance — per-user rate limits, budget caps, and agent runaway detection
  5. Secure routing — route to backends with identity intact, SSRF protection
  6. Audit trails — log every action with user identity, policy decisions, and cost

These aren’t nice-to-haves. They’re table stakes for deploying AI in any regulated or multi-user environment.

The timing

Three things are converging right now:

Platform support. MCP and the Apps SDK mean LLMs can now call tools in a standard way. The protocol layer is ready.

Enterprise adoption. Every large company is deploying agent workflows. The demand is here.

Regulatory pressure. HIPAA, SOC 2, GDPR, and PCI all have implications for AI-mediated access to protected data. The compliance requirements are real.

The governance layer is the missing piece. The companies that build it into their AI stack now will have a structural advantage over those that bolt it on later.

Where to start

GatewayStack is the open-source reference implementation. Six composable npm modules, MIT licensed. Start with identifiabl for identity verification and add layers as your needs grow.

Get started →
