Your API Keys Already Give Agents Production Access
Go look at the .env file for any project where you’ve connected an AI agent to your tools. You’ll find something like this:
```
SALESFORCE_API_KEY=sk-prod-3f8a...
GITHUB_TOKEN=ghp_9x2k...
SLACK_BOT_TOKEN=xoxb-...
JIRA_API_TOKEN=ATATT3...
```
These are production credentials. They can read contacts, close deals, post messages, create issues. Each one was created for a specific integration — probably by a developer who needed the agent to “just work.”
Now ask yourself: what happens when the agent uses them?
The answer: the agent acts with the full permissions of whoever created those keys. Not the permissions of the user talking to the agent. The permissions of the key itself — which is usually an admin, because that’s who set up the integration.
Your intern asks the agent to look up a customer. The agent calls Salesforce with the admin’s API key. Salesforce sees admin credentials, returns everything. The intern now has access to data they’d never see in the Salesforce UI. Nobody logged it against the intern. Nobody checked whether the intern should see it. The key worked, so the request succeeded.
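The mechanics can be sketched in a few lines of Python. Everything here is illustrative (the function name, the header shape); the point is what the outbound request does *not* contain:

```python
import os

# Hypothetical sketch of the failure mode, not a real Salesforce SDK call.
SALESFORCE_API_KEY = os.environ.get("SALESFORCE_API_KEY", "sk-prod-demo")

def agent_lookup_customer(requesting_user: str, customer: str) -> dict:
    """The agent's tool call: whoever asks, the same admin key goes out."""
    headers = {"Authorization": f"Bearer {SALESFORCE_API_KEY}"}
    # Note what is missing: nothing in the request identifies requesting_user.
    # The backend sees only the key, so it applies the key's (admin)
    # permissions and logs the key, not the person.
    return {"sent_headers": headers, "asked_by": requesting_user}

intern_call = agent_lookup_customer("intern@example.com", "Acme Corp")
admin_call = agent_lookup_customer("vp-sales@example.com", "Acme Corp")
# Two different people, byte-for-byte identical credentials on the wire.
assert intern_call["sent_headers"] == admin_call["sent_headers"]
```

The assertion at the end is the whole problem: the backend has no way to treat these two calls differently, because from its side they are the same call.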
This isn’t hypothetical. This is how every agent connected to a production API works right now, unless you’ve explicitly built the plumbing to prevent it.
API keys are identity-less. That’s the core issue.
When a human logs into Salesforce, the system knows who they are. Their permissions determine what they see. Their actions are logged against their name. They can be audited, restricted, and revoked individually.
An API key knows nothing about the person behind the request. It carries permissions but not identity. It says “this request is authorized” without saying “this request is from Sarah in accounting who should only see EMEA accounts.”
This was fine when API keys were used by backend services that your team built and controlled. The service had a specific purpose, made predictable calls, and was maintained by people who understood its scope.
An AI agent is not a predictable service. It decides what to call based on a conversation with a user. It might call one API or five. It might read data or write it. Its behavior changes based on who’s talking to it and what they ask — but the credentials it uses don’t change at all.
Here’s what this looks like in practice.
A sales rep asks the agent: “What’s our pipeline for Q2?” The agent calls the CRM API with the shared key. The key has full read access — it was set up by the VP of sales who needed to build dashboards. The agent returns a comprehensive pipeline view including deal sizes, close probabilities, and rep-level performance. The sales rep is now looking at their manager’s pipeline, their peers’ deals, and compensation-related data that would never appear in their individual Salesforce view.
The agent didn’t hack anything. It didn’t exploit a vulnerability. It used the key it was given. The key doesn’t know that this particular user should only see their own deals.
Or worse: a support rep asks the agent to “update the customer’s email address.” The agent calls the CRM API with the same key. The key has write access — because the integration also handles syncing contacts. The update goes through. Now the customer’s email is changed in the system of record because a support rep asked a chatbot, using credentials that were never meant for ad-hoc updates by individual users.
Your access control system didn’t fail. It was never consulted. The API key bypassed it entirely.
The instinct is to fix this with more restrictive keys. Create a read-only key for the agent. Limit its scope. Lock down the endpoints.
This helps, but it doesn’t solve the problem. Because the fundamental issue isn’t what the key can do — it’s that the key has no concept of who is using it.
A read-only key that serves 50 users still can’t distinguish between them. It can’t enforce that the sales rep only sees their own pipeline. It can’t log which user initiated which query. It can’t be revoked for one user without revoking it for everyone.
You end up in a trap: either the key is permissive enough to serve all users (and gives each user access to everything), or it’s restricted enough to be safe (and can’t serve most use cases). There’s no middle ground because the key doesn’t carry the one piece of information that would make it work: the identity of the person behind the request.
The fix is to stop letting agents use identity-less credentials.
Instead of the agent calling your CRM with a shared API key, the agent calls through a layer that attaches the user’s verified identity to the request. The layer checks: is this user authenticated? Do they have permission to access this tool? Does this specific action fall within their scopes? Then — and only then — the request goes to your backend, carrying the user’s identity, not a service account’s.
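That layer can be sketched in a few dozen lines. This is a minimal illustration under assumed names — the scope strings, the in-memory policy table, and `forward_to_backend` are all hypothetical stand-ins for whatever identity provider and policy engine you actually run:

```python
# Hypothetical sketch of an identity-attaching layer in front of agent tools.

USER_SCOPES = {
    "sarah@example.com": {"crm:read"},
    "vp-sales@example.com": {"crm:read", "crm:write"},
}

class PermissionDenied(Exception):
    pass

def control_plane_call(user: str, tool: str, action: str, payload: dict) -> dict:
    # 1. Is this user authenticated / known to the policy system?
    scopes = USER_SCOPES.get(user)
    if scopes is None:
        raise PermissionDenied(f"{user} is not authenticated")
    # 2. Does this specific action fall within the user's scopes?
    required = f"{tool}:{action}"
    if required not in scopes:
        raise PermissionDenied(f"{user} lacks scope {required}")
    # 3. Only now does the request reach the backend, carrying the
    #    user's identity rather than a service account's.
    return forward_to_backend(tool, action, payload, on_behalf_of=user)

def forward_to_backend(tool: str, action: str, payload: dict, on_behalf_of: str) -> dict:
    # The shared API key would live here, behind the control plane; the
    # agent never touches it, and every request is attributed to a user.
    return {"tool": tool, "action": action, "user": on_behalf_of, "allowed": True}
```

So `control_plane_call("vp-sales@example.com", "crm", "write", {...})` succeeds and is logged against that user, while the same call for `sarah@example.com` raises `PermissionDenied` before anything reaches the CRM.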
This is what an Agentic Control Plane does. It replaces the shared API key with per-user identity propagation. The agent still calls your tools the same way. But every call is scoped to the user who initiated it, checked against the same access policies you enforce everywhere else, and logged with full attribution.
Your API key is still there — it’s how the control plane connects to your backend. But the agent never touches it directly. The user’s identity determines what the agent can do, not the key’s permissions.
Take five minutes and do this:
- List every API key your AI agent can access
- For each key, write down what permissions it has
- Ask: if any user of the agent used this key directly, what could they access?
If the answer is “more than they should” — and it almost certainly is — you have shadow access. Not a vulnerability in the traditional sense. No CVE. No patch. Just a standing invitation for any agent user to act with permissions they were never meant to have, with no record of who did what.
Every one of those keys was safe before you connected an agent to it. The keys didn’t change. The threat model did.
Get started with ACP → · Read how authentication broke →