Compliance-Ready AI Governance

2026-01-22

Every compliance framework was written before AI agents started calling APIs on behalf of users. But the requirements still apply — and in many cases, AI makes them harder to satisfy.

Here’s the problem: most AI integrations today use shared API keys, have no per-user access controls, and produce no identity-attributed audit trails. That combination fails to meet the requirements of every major compliance framework.

An Agentic Control Plane addresses these gaps systematically.

HIPAA: identity and minimum necessary access

HIPAA requires that access to protected health information (PHI) be limited to the minimum necessary for a user’s role, and that all access be logged and attributable.

The AI problem: A hospital’s AI diagnostic assistant queries patient records via a shared API key. The backend can’t distinguish between a licensed physician and a medical assistant. Everyone gets the same access. There’s no audit trail linking queries to specific clinicians.

With an ACP:

  • identifiabl verifies the clinician’s identity and maps their license type to request context
  • transformabl detects and redacts patient PII before it reaches the LLM
  • validatabl enforces role-based access — radiologists see imaging, primary care sees full records
  • explicabl logs every query with the clinician’s identity, the patient accessed, and the policy that authorized it

The result is minimum necessary access controls and a compliance-ready audit trail — enforced at the gateway, not in application code.
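
As a rough sketch, the policy layer boils down to a role-to-scope mapping plus an identity-attributed audit record. The types and role names below are hypothetical, not the actual GatewayStack interfaces:

// Hypothetical types and role names, not the GatewayStack interfaces.
type ClinicianRole = 'radiologist' | 'primary_care' | 'medical_assistant';

interface ClinicianIdentity {
  userId: string;      // verified identity (identifiabl's concern)
  role: ClinicianRole; // mapped from the clinician's license type
}

// Minimum-necessary scopes per role (validatabl's concern).
const allowedScopes: Record<ClinicianRole, string[]> = {
  radiologist: ['patient:imaging'],
  primary_care: ['patient:imaging', 'patient:history', 'patient:labs'],
  medical_assistant: ['patient:demographics'],
};

function authorize(identity: ClinicianIdentity, scope: string): boolean {
  return allowedScopes[identity.role].includes(scope);
}

// Identity-attributed audit record (explicabl's concern).
function auditRecord(identity: ClinicianIdentity, patientId: string, scope: string) {
  return {
    timestamp: new Date().toISOString(),
    actor: identity.userId,
    role: identity.role,
    patient: patientId,
    scope,
    decision: authorize(identity, scope) ? 'permit' : 'deny',
    policy: `minimum-necessary:${identity.role}`,
  };
}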

SOC 2: access controls, monitoring, and logging

SOC 2 Trust Service Criteria require access controls, monitoring, and logging for all systems that process customer data. The criteria don’t care whether the access is from a human clicking a button or an AI agent calling an API.

The AI problem: An enterprise copilot has access to internal tools — HR data, financial systems, engineering infrastructure. All employees go through the same shared key. There’s no monitoring of who’s using what, and no way to demonstrate access controls in an audit.

With an ACP:

  • identifiabl binds every request to the employee’s SSO identity
  • validatabl enforces role-based and department-based tool access
  • limitabl provides usage monitoring and spend controls per user
  • explicabl produces structured audit logs that map directly to SOC 2 evidence requirements

When auditors ask “show me your access controls for AI-mediated tool access,” you have automated evidence — not a spreadsheet of manual reviews.
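
The evidence itself is just structured data. Here’s an illustrative shape for the kind of audit record explicabl would need to produce; the field names are assumptions, not the actual log format:

// Illustrative log shape only; the actual explicabl output format may differ.
interface ToolAccessLog {
  timestamp: string;            // when the call happened
  actor: string;                // SSO subject the request was bound to
  department: string;           // basis for department-scoped policies
  tool: string;                 // which internal tool the agent invoked
  action: 'read' | 'write';
  decision: 'permit' | 'deny';
  policyId: string;             // the rule that authorized or blocked the call
  requestId: string;            // correlates the log entry with the gateway request
}

const example: ToolAccessLog = {
  timestamp: '2026-01-22T14:03:11Z',
  actor: 'jane.doe@example.com',
  department: 'finance',
  tool: 'ledger-api',
  action: 'read',
  decision: 'permit',
  policyId: 'finance-read-only',
  requestId: 'req_8f3a21',
};

Each record answers the auditor’s question directly: who, what, and under which policy.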

GDPR: data subject rights and processing records

GDPR requires organizations to know what personal data they process, who processes it, and for what purpose. Data subjects have the right to know what data has been accessed about them.

The AI problem: Without identity binding, you can’t answer “which of our employees accessed this customer’s data through the AI assistant?” Without content filtering, you can’t demonstrate that personal data wasn’t unnecessarily sent to third-party LLM providers.

With an ACP:

  • identifiabl creates a verifiable link between the employee identity and the data access
  • transformabl detects personal data in prompts and prevents unnecessary exposure to third-party models
  • explicabl maintains processing records that satisfy Article 30 requirements — who accessed what, when, and under which legal basis

Data subject access requests become answerable: “Here are all AI-mediated accesses to your data, attributed to specific employees, with policy justification.”
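
Once every access record carries both the employee identity and a data subject identifier, answering that request reduces to a filter over the audit log. A minimal sketch, with hypothetical field names:

// Hypothetical record shape; assumes the audit log carries a data subject identifier.
interface AccessRecord {
  actor: string;         // employee whose identity was bound to the request
  dataSubjectId: string; // whose personal data was touched
  purpose: string;       // processing purpose recorded at access time
  timestamp: string;
}

// A data subject access request becomes a filter over the audit log.
function accessesForSubject(log: AccessRecord[], subjectId: string): AccessRecord[] {
  return log.filter((entry) => entry.dataSubjectId === subjectId);
}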

PCI DSS: cardholder data protection

PCI DSS requires strict access controls and logging for systems that process cardholder data. The principle of least privilege applies to every access path — including AI agents.

The AI problem: A fintech AI assistant queries payment APIs. The shared API key gives it access to cardholder data regardless of which user is asking. There’s no access control scoping, no redaction of card numbers in prompts, and no per-user audit trail.

With an ACP:

  • identifiabl verifies the user’s identity and role
  • transformabl detects and masks credit card numbers, CVVs, and account numbers before they reach the model
  • validatabl restricts cardholder data access to authorized roles only
  • explicabl logs every access with user identity for PCI compliance review
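
To illustrate the redaction step, here is a minimal masking pass over a prompt. transformabl’s actual detection is more robust than a pair of regexes; this only shows the shape of the transformation:

// Illustrative masking pass; real detection is more robust than these regexes.
const PAN_PATTERN = /\b(?:\d[ -]?){12,18}\d\b/g;            // 13 to 19 digit card numbers
const CVV_PATTERN = /\b(?:cvv|cvc|cid)\s*:?\s*\d{3,4}\b/gi; // labelled security codes

function maskCardData(prompt: string): string {
  return prompt
    .replace(PAN_PATTERN, '[REDACTED_PAN]')
    .replace(CVV_PATTERN, '[REDACTED_CVV]');
}

// The model only ever sees the masked prompt:
// maskCardData('Refund card 4111 1111 1111 1111, cvv: 123')
//   -> 'Refund card [REDACTED_PAN], [REDACTED_CVV]'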

The pattern

Every compliance framework has the same core requirements for AI governance:

  1. Know who is accessing data — identity binding, not shared keys
  2. Enforce appropriate access controls — per-user policies, not all-or-nothing
  3. Protect sensitive data — PII detection and redaction at the gateway
  4. Maintain audit trails — identity-attributed logging of every action
  5. Demonstrate controls — automated evidence, not manual reviews

An Agentic Control Plane addresses all five at the infrastructure level. The governance is enforced at the gateway — not scattered across application code, not dependent on developer discipline, and not retroactively bolted on before an audit.

Implementation

GatewayStack implements these controls as composable npm modules:

npm install @gatewaystack/identifiabl @gatewaystack/transformabl \
  @gatewaystack/validatabl @gatewaystack/limitabl @gatewaystack/explicabl

Each module handles one compliance concern. Wire them together for full-stack governance, or adopt incrementally starting with identity.
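
Conceptually, the modules chain as gateway middleware: each stage receives the request, enriches or blocks it, and passes it on. The sketch below shows only the composition pattern, not the packages’ actual API:

// Conceptual composition only, not the packages' actual API. Each stage stands in
// for one module's concern; in practice you would plug in the real middleware.
interface GatewayRequest {
  user?: string;                      // set by the identity stage (identifiabl)
  prompt: string;                     // possibly rewritten by redaction (transformabl)
  audit: Record<string, unknown>[];   // records appended by the audit stage (explicabl)
}

type Stage = (req: GatewayRequest) => Promise<GatewayRequest>;

// Run stages in order: identity, redaction, policy, limits, audit.
function compose(stages: Stage[]): Stage {
  return async (req) => {
    let current = req;
    for (const stage of stages) {
      current = await stage(current);
    }
    return current;
  };
}

The ordering matters: identity binds first, because every later policy decision and audit record hangs off it.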

Getting started → · Use cases → · Reference architecture →
