Authentication Is Broken in AI Systems
I spent three days building an MCP server. Connected it to a real backend with real customer data. Got tool calls working, got streaming working, got it talking to Claude Desktop. Felt great.
Then I looked at my backend logs.
Every request showed the same thing: service-account-mcp-gateway. That was the API key I’d configured during setup. That was the only identity my backend ever saw. It didn’t matter which user was in Claude Desktop. It didn’t matter what they’d asked the agent to do. My backend processed every request the same way, with the same permissions, logged against the same service account.
Three users were testing it. My backend couldn’t tell them apart.
This wasn’t a misconfiguration. I followed the docs. I did what every MCP tutorial tells you to do: create an API key, put it in the config, connect your tools. The tutorials don’t mention that you’ve just given an AI agent blanket access to your backend with no user identity attached. Because that’s just how it works.
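The setup the tutorials describe looks roughly like this (server name, paths, and key value are illustrative; the shape matches Claude Desktop's claude_desktop_config.json):

```json
{
  "mcpServers": {
    "finance-tools": {
      "command": "node",
      "args": ["/path/to/mcp-server/index.js"],
      "env": {
        "API_KEY": "service-account-mcp-gateway"
      }
    }
  }
}
```

One static key, shared by every request the agent makes. Nothing in this file identifies who is asking.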
Authentication — real authentication, the kind your security team cares about — answers one question: who is making this request?
Your login page answers it. Your OAuth flow answers it. Your API gateway checks a JWT, extracts the sub claim, and now your backend knows: this request is from Sarah in accounting, she has read access to financial reports, and this action will be logged against her identity.
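In code, that gateway-side check is small. Here's a minimal sketch of HS256 JWT verification using only the standard library (the secret and claims are illustrative; a real gateway would use a vetted library and also check expiry, issuer, and audience):

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(part: str) -> bytes:
    # JWT segments are base64url-encoded without padding; restore it first.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def verify_and_extract_sub(token: str, secret: bytes) -> str:
    """Verify an HS256 JWT's signature and return its `sub` claim."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(
        secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256
    ).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("invalid signature")
    claims = json.loads(b64url_decode(payload_b64))
    return claims["sub"]  # the user identity everything downstream keys off
```

Everything downstream, permissions, logging, audit, hangs off that one verified claim.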
That entire model assumes two parties. A user and a service. The user proves who they are. The service verifies that proof and acts accordingly. Every protocol you use — OAuth 2.0, OIDC, SAML — models this as a bilateral relationship.
AI agents add a third party and the model collapses.
Here’s what actually happens: Sarah asks the agent to pull last quarter’s revenue numbers. The agent decides to call your finance API. It makes the request with the API key your team configured — a static credential that has nothing to do with Sarah. Your finance API sees a valid key, returns the data, and logs the request against the service account.
Sarah didn’t authenticate to your finance API. She authenticated to the LLM. The LLM authenticated to your API. But the link between Sarah and the API call? It doesn’t exist. Your backend served sensitive financial data and has no record of who asked for it.
Now swap Sarah for anyone. A contractor. An intern. Someone who left the company last week but whose agent session is still active. Your backend can’t tell the difference because it never knew who was asking in the first place.
This breaks more than logging.
Your access control policies are built on user identity. Finance team sees financial data. Sales team sees CRM data. Support team sees tickets. These boundaries exist because someone decided they matter — compliance, data protection, need-to-know.
Your AI agent doesn’t have a role. It has an API key. That key typically has access to everything the agent might need to serve any user in any role. It has to — otherwise it can’t function across different users and contexts.
So when the intern asks the agent for the CEO’s compensation details, what stops it? Not your access control system. That system sees a valid API key with broad permissions. The only thing between the intern and that data is the LLM’s judgment about whether the request seems reasonable.
That’s not access control. That’s hope.
Security researchers have a name for this: the confused deputy problem. A trusted intermediary with elevated privileges gets tricked — or just misconfigured — into performing actions the requester wouldn’t be authorized to perform directly. In traditional systems, this is an edge case. In AI agent deployments, it’s the default architecture.
I fixed my MCP server. Not by changing the tools or the prompts or the model. By adding a layer between the user and my backend that does what authentication is supposed to do: prove who’s asking.
Every request from the agent now carries a scoped token — not a shared API key, but a JWT with the actual user’s identity. My backend validates it against the identity provider. Sarah’s requests get Sarah’s permissions. The intern’s requests get the intern’s permissions. The audit trail shows who initiated what, through which agent, at what time.
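On the backend, the practical change is that authorization keys off the verified claims instead of a shared key. A sketch, assuming the JWT has already been validated and decoded into a claims dict (the roles and resource names are made up for illustration):

```python
# Hypothetical role-to-resource policy; in practice this lives in your
# identity provider or policy engine, not in application code.
POLICY = {
    "finance": {"financial_reports", "revenue"},
    "intern": {"public_docs"},
}

def authorize(claims: dict, resource: str) -> bool:
    """Allow the request only if the verified user's role grants the resource."""
    allowed = POLICY.get(claims.get("role"), set())
    return resource in allowed

# Each request carries the actual requester's identity and role.
sarah = {"sub": "sarah", "role": "finance"}
intern = {"sub": "intern1", "role": "intern"}
```

Same agent, same tools, different answers: `authorize(sarah, "revenue")` passes, `authorize(intern, "revenue")` fails, and both decisions are attributable to a named user.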
The agent still works the same way from the user’s perspective. They ask a question, the agent calls tools, they get an answer. The difference is invisible to them and critical to everyone else: the backend now knows who’s asking.
This is what an Agentic Control Plane does. It sits between the agent and your backends. It verifies user identity on every tool call. It enforces the same access policies that apply when users interact directly. It produces the audit trail your compliance team needs.
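Conceptually, the control plane is a wrapper around every tool call: verify identity, enforce policy, record the decision, then execute. A toy sketch (the function names and in-memory audit log are illustrative, not a real ACP API):

```python
import datetime

AUDIT_LOG = []

def control_plane_call(user_token, tool, args, verify, authorize, execute):
    """Verify identity, enforce policy, and audit before running a tool call."""
    user = verify(user_token)  # e.g. validate the JWT, return the user id
    timestamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    if not authorize(user, tool):  # same policy as direct user access
        AUDIT_LOG.append((timestamp, user, tool, "DENIED"))
        raise PermissionError(f"{user} may not call {tool}")
    AUDIT_LOG.append((timestamp, user, tool, "ALLOWED"))
    return execute(tool, args)
```

The agent never sees this layer; it just calls tools. But every call now produces a row that answers the auditor's question: who, what, when, allowed or denied.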
It’s not a new protocol. It’s not a new authentication standard. It’s applying the authentication guarantees you already have — the ones you spent years building — to the one place they currently don’t reach: the gap between a user and the agent acting on their behalf.
If you have agents calling backend APIs today, run this test: look at your backend logs for agent-initiated requests. Can you tell which user initiated each one?
If every request shows the same service account, you have the gap. Every agent framework will tell you that’s fine. Your auditor won’t.
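One way to run that test mechanically, assuming your access logs carry some caller-identity field (the log format here is invented for illustration; adapt the parsing to your own):

```python
def distinct_callers(log_lines):
    """Collect the distinct caller identities in agent-initiated log lines."""
    callers = set()
    for line in log_lines:
        for field in line.split():
            if field.startswith("caller="):
                callers.add(field.removeprefix("caller="))
    return callers

logs = [
    "2024-05-01T10:02:11Z GET /finance/revenue caller=service-account-mcp-gateway",
    "2024-05-01T10:05:43Z GET /crm/accounts caller=service-account-mcp-gateway",
]
# A single identity across every agent request, regardless of user: that's the gap.
```

If the set has one element no matter how many people use the agent, your backend is in the state this post describes.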
Get started with ACP → · Read the three-party identity problem →