Agentic Control Plane

Agent Identity

I spent three days debugging a problem that turned out to have nothing to do with my code. I was building an MCP server for a CRM integration. The LLM was calling my tools correctly. My backend was returning the right data. Everything worked in testing.

Then I deployed it for a team. And every user saw every other user’s data.

The LLM was forwarding a shared API key. My backend had no idea who was asking. There was no identity flowing through the system — just a service credential that said “this request came from ChatGPT” with no information about which person in ChatGPT triggered it.

This is the agent identity problem. And it’s the first thing an Agentic Control Plane solves.


Why Identity Is the Foundation

Without verified identity, nothing else in governance works:

  • You can’t enforce policies if you don’t know who’s asking
  • You can’t attribute costs if you don’t know who spent them
  • You can’t audit actions if you don’t know who took them
  • You can’t scope permissions if every request looks the same
  • You can’t detect anomalies if there’s no baseline per user

Identity is the foundation. Every other governance capability — policy enforcement, rate limiting, PII detection, audit logging — depends on knowing who is making the request. Get identity wrong (or skip it entirely), and everything downstream is blind.

Most AI systems today treat identity as an afterthought. API keys get shared. Service accounts get reused. The LLM sits in the middle, holding one credential on each side, bridging neither.

ACP treats identity as a first-order architectural primitive. Every request — whether it comes from a human using ChatGPT, an agent built in the console, or an external agent via A2A protocol — carries a cryptographically verified identity that flows through the entire governance pipeline.


The Four Paths to Identity

Not every agent gets its identity the same way. ACP supports four paths, depending on how the agent was created and who’s using it.

1. User-bound sessions (JWT from your IdP)

This is the most common path. A human authenticates with your identity provider (Auth0, Okta, Entra ID, Firebase, Keycloak — any OIDC provider) and receives an RS256-signed JWT. When they use an AI client — ChatGPT, Claude, Cursor, any MCP client — that JWT is forwarded to the control plane.

The control plane verifies the JWT cryptographically, extracts the user’s identity, and attaches it to every downstream request. Your backend receives x-user-uid on every call — a verified user ID, not a shared API key.

This is how identity flows for most enterprise deployments: the human has a login, the login produces a JWT, and the JWT proves who they are on every tool call.

2. Console-created agents

In ACP Cloud, you can create agents directly from the dashboard. These agents get an agent ID bound to the creating user and workspace. Permissions are set visually — which tools the agent can access, which scopes it operates under, what budget it has.

When a console-created agent runs autonomously (on a schedule, triggered by a webhook, or invoked by another agent), it carries its creator’s identity context. The audit trail records both the agent and the human who created it — you always know who is accountable.

3. API-created agents

Agents can also be created programmatically via the REST API. API keys use the format gsk_{slug}_{random} — the slug identifies the workspace, so no tenant parameter is needed:

POST /api/v1/agents
Authorization: Bearer gsk_acmecorp_k8f7a2b1c9d4e5f6
Content-Type: application/json

{
  "name": "Daily report generator",
  "model": "gpt-4o",
  "systemPrompt": "Generate a daily sales summary from CRM data.",
  "enabledTools": ["salesforce.query", "salesforce.getRecord"],
  "scopes": ["salesforce.*"],
  "maxBudgetCents": 500,
  "maxToolCalls": 50,
  "delegatable": true,
  "canDelegate": false
}

The API supports full CRUD (GET, POST, PUT, DELETE) plus agent invocation (POST /api/v1/agents/:id/run). The creating principal’s identity is recorded on the agent profile (createdBy: "apikey:{keyId}"). The agent inherits workspace-level governance policies and can only operate within the scopes and tools it was granted at creation time.

API key identity flows through the governance pipeline like any other identity. When an API-created agent runs, the audit trail attributes every action to the key that triggered it — you always know which integration or automation initiated the chain.
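The key format makes workspace routing trivial: the slug is embedded in the credential itself. A hypothetical parsing sketch (the character classes and function name here are assumptions, not ACP's actual routing code):

```typescript
// Hypothetical sketch: splitting a gsk_{slug}_{random} API key into its
// parts. The allowed character sets are assumptions, not ACP's spec.
function parseApiKey(key: string): { slug: string; random: string } | null {
  const match = /^gsk_([a-z0-9-]+)_([A-Za-z0-9]+)$/.exec(key);
  if (!match) return null; // not in gsk_{slug}_{random} form
  return { slug: match[1], random: match[2] };
}

// The slug alone identifies the workspace, so no tenant parameter is needed:
parseApiKey("gsk_acmecorp_k8f7a2b1c9d4e5f6")?.slug; // "acmecorp"
```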

4. External agents (A2A protocol)

Agents from other systems discover your ACP workspace via well-known endpoints and present credentials. ACP verifies their identity before granting access to tools or other agents. This is how agent-to-agent communication works across organizational boundaries. Deep dive on agent-to-agent governance →


Inside the Identity Flow

Here’s exactly what happens when a user triggers a tool call through an MCP client. This is the part that took me days to get right manually — the control plane handles it in under 5 milliseconds.

User → AI Client → IdP → ACP → Your Backend:

  1. User signs in to the IdP via OAuth (PKCE)
  2. IdP returns an RS256 JWT (sub, scope, aud, org_id)
  3. User sends a prompt; the AI client attaches the Bearer token
  4. AI client makes the tool call, Bearer token attached
  5. ACP verifies the JWT (JWKS fetch, cached)
  6. ACP runs the governance pipeline
  7. ACP writes the audit log
  8. ACP calls your backend with x-user-uid and x-user-scope injected
  9. Your backend returns the response
  10. The tool result flows back to the user

Steps 5-7 are where the governance happens. In under 5ms, the control plane:

  1. Extracts the Bearer token from the Authorization header
  2. Fetches your IdP’s public keys from the JWKS endpoint (cached after first fetch)
  3. Verifies the RS256 signature cryptographically — no shared secrets, no trust-on-first-use
  4. Validates standard claims — exp (not expired), aud (correct audience), iss (correct issuer)
  5. Extracts the identity — sub, scopes, roles, tenant ID, email — into a normalized GatewayIdentity object
  6. Runs the governance pipeline — scope enforcement, ABAC policies, rate limits, PII detection
  7. Logs the action with full identity attribution

Only then does the request reach your backend — with verified identity headers injected.
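Step 1 is simple but easy to get subtly wrong. A minimal extraction sketch (the control plane's actual code may differ):

```typescript
// Sketch of step 1: pull the token out of the Authorization header.
// The "Bearer" scheme is case-insensitive per RFC 6750.
function extractBearer(authHeader: string | undefined): string | null {
  if (!authHeader) return null;
  const match = /^Bearer\s+(\S+)$/i.exec(authHeader.trim());
  return match ? match[1] : null;
}
```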


What’s in the JWT

When a user authenticates with your identity provider, they receive an RS256-signed JWT. This is the cryptographic proof of identity that flows through the entire system. Here’s what it looks like decoded:

{
  "header": {
    "alg": "RS256",
    "typ": "JWT",
    "kid": "NjVBRjY5MDlCMUIwNzU4RTA2QzZFMD..."
  },
  "payload": {
    "sub": "auth0|8f3a2b1c9d4e5f6a",
    "aud": "https://api.yourapp.com",
    "iss": "https://yourcompany.auth0.com/",
    "scope": "tool:crm:read tool:jira:write",
    "org_id": "org_acme_corp",
    "permissions": ["salesforce.query", "github.listRepos"],
    "exp": 1739145600,
    "iat": 1739142000
  }
}

Each claim has a specific role in the governance pipeline:

  • sub — the user’s unique identifier. This becomes x-user-uid on downstream requests. Not an API key. Not an email. A cryptographically verified user ID that can’t be forged.
  • aud — the intended audience. The control plane rejects tokens meant for a different service — even if the signature is valid, a token issued for your staging environment won’t work in production.
  • iss — the issuer. ACP verifies this matches your configured IdP, then fetches the matching public keys from the issuer’s JWKS endpoint.
  • scope — what the user is allowed to do. tool:crm:read means they can read from the CRM connector. tool:jira:write means they can create Jira tickets. No scope, no access.
  • org_id — tenant isolation. Users in one organization can’t access another organization’s data.
  • kid — the key ID used to sign the token. ACP fetches the matching public key from your IdP’s JWKS endpoint and verifies the signature locally. No shared secrets leave your IdP.
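You can inspect these claims yourself: a JWT is just three base64url segments, so the payload decodes without any key material. This is for illustration only; a decoded-but-unverified payload proves nothing:

```typescript
// Illustration only: decode the payload segment of a JWT to inspect
// claims like sub, aud, iss. This does NOT verify the signature —
// trust decisions must always go through signature verification.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const [, payload] = token.split(".");
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}
```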

The Verification Code

This is the actual implementation in identifiabl-core — the open-source module that handles JWT verification:

import { createRemoteJWKSet, jwtVerify } from "jose";

export function createIdentifiablVerifier(config: IdentifiablCoreConfig) {
  const issuerNoSlash = trimTrailingSlashes(config.issuer);
  const jwksUri = config.jwksUri
    || `${issuerNoSlash}/.well-known/jwks.json`;

  // JWKS keys are cached in memory — re-fetched only on key rotation
  const JWKS = createRemoteJWKSet(new URL(jwksUri));

  return async (token: string): Promise<VerifyResult> => {
    try {
      const { payload } = await jwtVerify(token, JWKS, {
        audience: config.audience,
        algorithms: ["RS256"],   // No HS256 — asymmetric only
        clockTolerance: "60s",   // Allow 60s clock skew
      });

      // Double-check issuer (normalized — handles trailing slashes)
      const iss = trimTrailingSlashes(String(payload.iss || ""));
      if (iss !== issuerNoSlash) {
        return { ok: false, error: "invalid_token",
          detail: `unexpected "iss" claim value: ${payload.iss}` };
      }

      // Map JWT claims → GatewayIdentity
      const identity = mapPayloadToGatewayIdentity(payload, config);
      return { ok: true, identity, payload };
    } catch (e: any) {
      return { ok: false, error: "invalid_token", detail: e?.message };
    }
  };
}

A few design decisions worth noting:

RS256 only. The verifier rejects HS256 tokens. This prevents a well-known class of attacks where an attacker takes a public key and uses it as an HMAC secret to forge tokens. Asymmetric signatures mean the private key never leaves your IdP.

JWKS caching. The createRemoteJWKSet function from the jose library caches public keys in memory. On the first request, it fetches your IdP’s JWKS endpoint. After that, it serves from cache — adding only 2-5ms of latency per request. If a token arrives with an unknown kid (because your IdP rotated keys), it re-fetches automatically.

Issuer normalization. Auth0 tokens include iss: "https://tenant.auth0.com/" with a trailing slash. Okta tokens don’t. Rather than breaking on slash differences, the verifier normalizes both sides before comparing.
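The normalization itself is a one-liner. A sketch of how trimTrailingSlashes could behave (the actual helper lives in identifiabl-core):

```typescript
// Assumed behavior of trimTrailingSlashes: strip any run of trailing
// slashes so Auth0-style and Okta-style issuers compare equal.
function trimTrailingSlashes(value: string): string {
  return value.replace(/\/+$/, "");
}
```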

Clock tolerance. A 60-second tolerance handles small time differences between your IdP and the control plane. Without this, tokens would occasionally be rejected as “expired” due to clock skew — a maddening intermittent failure.


The GatewayIdentity Object

After verification, the JWT is mapped into a normalized GatewayIdentity that flows through the entire pipeline:

interface GatewayIdentity {
  sub: string;           // "auth0|8f3a2b1c9d4e5f6a"
  issuer: string;        // "https://yourcompany.auth0.com"
  tenantId?: string;     // "org_acme_corp"
  email?: string;        // "alice@acme.com"
  name?: string;         // "Alice Chen"
  roles?: string[];      // ["admin", "sales"]
  scopes?: string[];     // ["tool:crm:read", "tool:jira:write"]
  plan?: string;         // "pro"
  source: IdentitySource; // "auth0" | "okta" | "entra" | "cognito" | ...
  raw: Record<string, unknown>; // Full JWT payload preserved
}

The claim-to-field mapping is configurable per IdP. Auth0 puts roles in permissions. Okta puts them in groups. Entra ID puts scopes in scp. The identifiabl config normalizes all of this:

// Auth0
identifiabl({
  issuer: "https://dev-xxxxx.us.auth0.com/",
  audience: "https://gateway.local/api",
  scopeClaim: "scope",
  roleClaim: "permissions",
  tenantClaim: "org_id",
})

// Okta
identifiabl({
  issuer: "https://dev-xxxxx.okta.com",
  audience: "https://gateway.local/api",
  scopeClaim: "scp",
  roleClaim: "groups",
  tenantClaim: "tenant",
})

// Entra ID (Azure AD)
identifiabl({
  issuer: "https://login.microsoftonline.com/{tenant}/v2.0",
  audience: "api://{app-id}",
  scopeClaim: "scp",
  roleClaim: "roles",
})

The raw field preserves the complete JWT payload. If your IdP includes custom claims (department, cost center, clearance level), they’re available downstream even if they’re not mapped to a standard field.
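A minimal sketch of what that normalization could look like, assuming the scopeClaim/roleClaim/tenantClaim config keys shown above (the real mapPayloadToGatewayIdentity handles more cases):

```typescript
// Hedged sketch of claim normalization. Assumes space-delimited scope
// strings (Auth0/Okta style) or arrays; real-world IdPs vary more.
interface MapConfig {
  scopeClaim?: string;
  roleClaim?: string;
  tenantClaim?: string;
}

function mapClaims(payload: Record<string, unknown>, cfg: MapConfig) {
  const rawScope = payload[cfg.scopeClaim ?? "scope"];
  const scopes = typeof rawScope === "string"
    ? rawScope.split(" ").filter(Boolean)              // "a b" → ["a", "b"]
    : Array.isArray(rawScope) ? rawScope.map(String) : [];
  const rawRoles = cfg.roleClaim ? payload[cfg.roleClaim] : undefined;
  return {
    sub: String(payload.sub ?? ""),
    scopes,
    roles: Array.isArray(rawRoles) ? rawRoles.map(String) : [],
    tenantId: cfg.tenantClaim
      ? (payload[cfg.tenantClaim] as string | undefined)
      : undefined,
    raw: payload, // full payload preserved, as in GatewayIdentity
  };
}
```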


What Your Backend Receives

This is the difference that matters. Without an ACP, your backend gets this:

POST /api/crm/contacts
Authorization: Bearer sk-shared-api-key-for-everyone
Content-Type: application/json

{"query": "show me all contacts in the pipeline"}

No user identity. No scopes. No way to filter results per user. No way to know if this came from your CEO or a contractor.

With an ACP, your backend gets this:

POST /api/crm/contacts
x-user-uid: auth0|8f3a2b1c9d4e5f6a
x-user-scope: tool:crm:read
x-user-org: org_acme_corp
x-request-id: req_7f8a9b0c
Content-Type: application/json

{"query": "show me all contacts in the pipeline"}

Your backend code can now do:

SELECT * FROM contacts
WHERE org_id = 'org_acme_corp'
AND pipeline_access IN ('tool:crm:read')

Verified identity. Scoped permissions. Tenant context. Full traceability. And the user never had to log in again — their existing SSO session flows through the AI layer transparently.
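On the backend side, a handler can trust these headers (assuming only the control plane can reach it) and gate access on them. A sketch using the header names from the example above:

```typescript
// Hypothetical backend-side helpers: read the injected identity headers
// and check a required scope. Header names follow the example above.
type IdentityHeaders = Record<string, string | undefined>;

function requireScope(headers: IdentityHeaders, needed: string): boolean {
  const scopes = (headers["x-user-scope"] ?? "").split(" ").filter(Boolean);
  return scopes.includes(needed);
}

function tenantOf(headers: IdentityHeaders): string | undefined {
  return headers["x-user-org"]; // e.g. "org_acme_corp", for tenant-scoped queries
}
```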


How Identity Flows Through the Governance Pipeline

The identity object doesn’t just sit on the request — it’s actively used by every governance layer downstream. Here’s the actual pipeline from the governToolCall function — seven layers, evaluated in order on every tool call:

// 0a. Immutable platform rules (bypass-immune — SSN, credit card, SSRF)
const immutable = checkImmutableRules(toolName, input);

// 0b. Delegation enforcement (if agent-to-agent)
//     → Override auth.scopes with chain's effective scopes

// 1. Scope enforcement (validatabl-core)
const required = tenantCtx.policies?.toolRequiredScopes?.[toolName];
const permResult = checkPermissions(
  { scope: auth.scopes.join(" "), permissions: auth.claims?.permissions },
  required,
);

// 2. ABAC policy rules (validatabl-core)
const policyDecision = applyPolicies(
  { rules: tenantCtx.policies.rules, defaultEffect: "deny" },
  { identity: { sub: auth.sub, scope: auth.scopes.join(" ") }, tool: toolName },
);

// 3. Rate limit + budget preflight (limitabl-core)
const preflightResult = limitablEngine.preflight(
  { sub: auth.sub, orgId: auth.claims?.org_id, ip, tenantId },
  { workflowId },
);

// 4. Plan limit check (dynamic, Firestore-backed with per-tenant overrides)
const plan = await loadEffectivePlan(subscription?.planId ?? "free", tenantId);
const planCheck = checkPlanLimit(plan, { toolCallsThisPeriod });

// 5. Content scanning + redaction (transformabl-core)
const transformResult = transformContent(inputStr);

Every layer uses auth.sub — the verified user identity. Rate limits are per-user. Budget caps are per-user. ABAC policies evaluate against the user’s roles and claims. Scope enforcement checks the user’s scopes against tool requirements. Immutable rules protect against dangerous input regardless of configuration. The identity thread runs through everything.
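To make "rate limits are per-user" concrete, here is an illustrative sliding-window limiter keyed on the verified sub. This is not limitabl-core's implementation, just the shape of the idea:

```typescript
// Illustrative per-user sliding-window rate limiter. Keying on the
// verified sub gives each user an independent window; a shared API key
// would collapse everyone into one bucket.
class PerUserRateLimiter {
  private hits = new Map<string, number[]>();
  constructor(private limit: number, private windowMs: number) {}

  allow(sub: string, now: number = Date.now()): boolean {
    // Keep only timestamps still inside the window
    const recent = (this.hits.get(sub) ?? []).filter(t => now - t < this.windowMs);
    if (recent.length >= this.limit) {
      this.hits.set(sub, recent);
      return false; // this user is over their limit
    }
    recent.push(now);
    this.hits.set(sub, recent);
    return true;
  }
}
```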

Plan limits are now dynamic — base plans can be overridden per-tenant via Firestore, so enterprise customers can have custom limits that merge on top of their subscription tier.
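The override merge is shallow and deliberate: tenant-specific values win, and everything else falls through to the base tier. A sketch with assumed field names (not ACP's actual schema):

```typescript
// Hypothetical shape of a plan and its per-tenant override merge.
// Field names are illustrative, not ACP's actual schema.
interface PlanLimits {
  maxToolCallsPerPeriod: number;
  maxBudgetCents: number;
}

function effectivePlan(
  base: PlanLimits,
  tenantOverrides: Partial<PlanLimits> = {},
): PlanLimits {
  return { ...base, ...tenantOverrides }; // overrides win field-by-field
}
```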


Identity in the Audit Trail

Every tool call produces an audit record with full identity attribution:

{
  ts: "2026-04-01T14:23:07.841Z",
  tenantId: "acme-corp",
  requestId: "req_7f8a9b0c",
  tool: "salesforce.query",
  sub: "auth0|8f3a2b1c9d4e5f6a",  // or SHA-256 hash if hashSub enabled
  scopes: ["salesforce.*", "tool:crm:read"],
  client: { name: "claude-desktop", version: "1.2.0" },
  ok: true,
  latencyMs: 142,
  contentScan: {
    piiDetected: false,
    piiTypes: [],
    riskScore: 12
  }
}

When the compliance officer asks “who accessed customer data through the AI assistant last Tuesday?” — you have the answer. One log, one format, one place to look. Identity-attributed, timestamped, with the full governance decision chain.

If the user’s identity is sensitive (say, in a healthcare context where you don’t want user IDs in logs visible to log aggregators), the hashSub option replaces the sub with a SHA-256 hash. The hash is deterministic — you can still correlate actions by the same user — but the original identity isn’t exposed in the log data.
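The hashing itself is straightforward. A sketch of the behavior described (the real option lives in the audit module):

```typescript
import { createHash } from "node:crypto";

// Sketch of hashSub-style pseudonymization: deterministic SHA-256, so
// the same user correlates across entries without exposing the raw sub.
function hashSub(sub: string): string {
  return createHash("sha256").update(sub).digest("hex");
}
```

One caveat worth knowing: an unsalted hash of a guessable ID can be reversed by dictionary attack, so a keyed hash (HMAC) is stronger when the ID space is small.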


Identity Providers Supported

ACP works with any OIDC-compliant identity provider that issues RS256-signed JWTs with a discoverable JWKS endpoint. In practice, this covers every major provider:

Auth0
Okta
Microsoft Entra ID
Firebase Auth
Keycloak
AWS Cognito
Google Identity
PingIdentity
Any OIDC provider

The verification is protocol-level, not vendor-specific. If your IdP publishes a .well-known/openid-configuration endpoint with a jwks_uri, ACP can verify its tokens. No vendor integration required. No SDK to install. No webhook to configure.
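Protocol-level means the only per-IdP input is the issuer; everything else is derivable. A sketch of the discovery convention:

```typescript
// The OIDC discovery convention: the provider metadata document lives
// at a well-known path under the issuer; jwks_uri is read from it.
function discoveryUrl(issuer: string): string {
  return issuer.replace(/\/+$/, "") + "/.well-known/openid-configuration";
}
```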


Why This Is Different From API Keys

It’s worth being explicit about why JWT-based identity is fundamentally different from — and better than — API key authentication for AI agents.

API keys are shared secrets. If the key leaks, everyone with it has the same access. There’s no way to attribute actions to specific users. There’s no way to scope a key to specific tools without creating a new key per tool per user (which nobody does). And there’s no expiration built in — leaked keys work forever until manually rotated.

JWTs are signed assertions. They carry the user’s identity, permissions, and tenant context inside the token. They expire. They can be scoped. They can be verified without contacting the issuer (because verification uses public keys, not shared secrets). And they can’t be forged — the private key stays at the IdP.

Service accounts are identity without accountability. They identify the application, not the person. When an agent uses a service account, your audit trail says “the AI assistant accessed patient data” — not “Dr. Chen asked the AI assistant to access patient data.” Service accounts are necessary for background jobs. They’re insufficient for human-initiated agent actions.

ACP bridges this gap. Human identity flows through the AI layer. Your backend sees the person, not the pipe.


Self-Hosting the Identity Layer

All of this is available as an MIT-licensed npm module. You don’t need ACP Cloud to use agent identity — you can add it to any Express application in three lines:

import { identifiabl } from "@gatewaystack/identifiabl";

app.use(identifiabl({
  issuer: process.env.OAUTH_ISSUER!,
  audience: process.env.OAUTH_AUDIENCE!,
}));

// Every request now has req.user with verified identity
app.get("/api/me", (req, res) => {
  res.json({
    user: req.user.sub,
    scopes: req.user.scopes,
    tenant: req.user.tenantId,
  });
});

For non-Express environments (Cloud Functions, serverless, etc.), use the framework-agnostic core:

import { createIdentifiablVerifier } from "@gatewaystack/identifiabl-core";

const verify = createIdentifiablVerifier({
  issuer: "https://yourcompany.auth0.com/",
  audience: "https://api.yourapp.com",
  tenantClaim: "org_id",
  roleClaim: "permissions",
});

// In your request handler:
const result = await verify(bearerToken);
if (!result.ok) {
  return { status: 401, body: { error: result.error } };
}

const { identity } = result;
// identity.sub, identity.scopes, identity.tenantId, etc.

Start with identity. Add the other five governance layers when you need them. The architecture is composable — each module is independent.


Get started → · Agent-to-agent governance → · Architecture deep dive → · View on GitHub →