Agentic Control Plane

Stop your AI agent from leaking secrets in your `.env` file — in three steps

David Crowe · 5 min read
governance defense-in-depth secrets-management pii-redaction

GitGuardian counted 29 million leaked secrets across public repositories in 2025. Commits co-authored by AI agents leaked secrets at roughly twice the baseline rate. The pattern is consistent enough that security firms have started naming it: agents read .env files by default, treat their contents as referenceable text, and paste those contents into commits, chat threads, and tool outputs without thinking about what they are.

The agent isn’t malicious. It’s autocompleting. From the model’s perspective, the value of STRIPE_SECRET_KEY is just a string to be referenced like any other. The fact that printing it into a tool output makes it indexable forever isn’t a property the model can introspect.

This is one of the most common incidents in AI-agent development workflows right now. It’s also one of the easiest to gate.

Why this is the default failure mode

Agents read your environment for legitimate reasons — checking that a key is present, looking up an API endpoint, debugging a config issue. The natural completion of those tasks is “and now I’ll show you what I read.” That’s where the leak happens:

  • The agent is asked to debug a Stripe webhook → opens .env → quotes the secret in its explanation
  • The agent is asked to add a logging line → reads existing config → includes the API key in the diff
  • The agent is asked to check why a feature flag isn’t loading → prints process.env → secrets land in your chat history forever
  • The agent commits its work → the commit includes a screenshot, log, or quoted block containing the secret

Once a secret is in a commit, a chat log, a saved tool output, or a CI artifact, it’s leaked. Rotation is the only fix, and rotation is expensive — every dependent service has to be updated.

Three steps that put a control plane between your agent and your secrets

Step 1 — Install the hook

For Cursor, Claude Code, or Codex CLI:

curl -sf https://agenticcontrolplane.com/install.sh | bash

The hook intercepts every tool call (Read, Write, Bash, file operations) before it executes. ACP classifies file operations by path — Read.env, Write.env, Read.credentials, etc. — so your policy doesn’t have to enumerate every variant of “a path that contains secrets.”
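To make the classification step concrete, here is a minimal sketch of how a hook might map a raw tool call to a policy key like Read.env. The class names mirror the ones above, but the patterns and function names are illustrative assumptions, not ACP’s actual implementation:

```python
import fnmatch

# Illustrative path patterns per sensitive class (assumed, not ACP's real set).
SENSITIVE_CLASSES = {
    "env": ["*.env", "*/.env.*"],
    "credentials": ["*/credentials", "*/credentials.json", "*.pem"],
}

def classify_tool_call(tool: str, path: str) -> str:
    """Map a raw call like Read("/app/.env") to a policy key like "Read.env"."""
    for cls, patterns in SENSITIVE_CLASSES.items():
        if any(fnmatch.fnmatch(path, p) for p in patterns):
            return f"{tool}.{cls}"
    return tool  # non-sensitive paths fall through to the bare tool name
```

The payoff is that the policy stays small: new variants of “a path that contains secrets” are a pattern change, not a policy change.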

Step 2 — Set “deny reading sensitive paths in background tier” as the default

In your dashboard (cloud.agenticcontrolplane.com) → Policies:

{
  "mode": "enforce",
  "tools": {
    "Read.env":           { "background": { "permission": "deny" }, "interactive": { "permission": "ask" } },
    "Read.credentials":   { "background": { "permission": "deny" }, "interactive": { "permission": "deny" } },
    "Write.env":          { "background": { "permission": "deny" }, "interactive": { "permission": "deny" } },
    "Bash.curl":          { "background": { "permission": "ask"  } }
  }
}

The semantics: in background tier (scheduled jobs, headless agents, CI integrations), the agent cannot read .env files at all. In interactive tier, reading requires explicit approval — you see an ask prompt in the dashboard before the call returns the file contents.

If your agent’s job genuinely requires reading config (e.g., checking which environment a deployment is targeting), promote that specific tool to a custom name and allow it explicitly. The point isn’t to break the agent’s work; it’s to make the difference between “I’m reading config to validate a deploy” and “I’m reading the entire .env to summarize it for the user” a policy decision rather than a model-internal one.
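The lookup the policy above implies can be sketched in a few lines. The policy dict mirrors the JSON from Step 2; the evaluator function and the “allow unless listed” default are assumptions for illustration:

```python
# Policy shape mirrors the Step 2 JSON above.
POLICY = {
    "mode": "enforce",
    "tools": {
        "Read.env":         {"background": {"permission": "deny"}, "interactive": {"permission": "ask"}},
        "Read.credentials": {"background": {"permission": "deny"}, "interactive": {"permission": "deny"}},
        "Write.env":        {"background": {"permission": "deny"}, "interactive": {"permission": "deny"}},
        "Bash.curl":        {"background": {"permission": "ask"}},
    },
}

def resolve(policy: dict, tool: str, tier: str, default: str = "allow") -> str:
    """Return "allow", "ask", or "deny" for a classified tool call in a tier.

    The fall-through default for unlisted tools/tiers is an assumption here.
    """
    tiers = policy["tools"].get(tool, {})
    return tiers.get(tier, {}).get("permission", default)
```

A background Read.env resolves to deny before the file is ever opened; the same call in interactive tier resolves to ask and waits on you.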

Step 3 — Turn on output redaction

Even when reading is allowed, the contents shouldn’t leak into tool outputs, commits, or chat logs. ACP’s redaction layer scans tool outputs for known credential patterns (Stripe keys, AWS access keys, GitHub tokens, PEM-formatted keys, OpenAI keys, Anthropic keys, etc.) and replaces matches with [redacted:credential_type] before the agent — or any audit consumer — sees them.

In your policy:

{
  "tools": {
    "Read.env": {
      "interactive": {
        "permission": "ask",
        "outputRedaction": "credentials"
      }
    },
    "Bash.cat": {
      "interactive": {
        "permission": "allow",
        "outputRedaction": "credentials"
      }
    }
  }
}

Now even if the agent is allowed to read a sensitive file, what it actually receives is [redacted:stripe_secret_key] rather than sk_live_.... The agent’s reasoning still has the structural information — “there’s a Stripe key here” — without the value. If the agent then quotes the redacted form into a commit or chat, what gets quoted is the redaction marker, not the secret.
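The redaction pass itself is conceptually simple: scan the output against known credential shapes and substitute the marker. A minimal sketch, with a deliberately small pattern set — the prefixes below are publicly documented credential formats, but ACP’s real pattern library is assumed to be much broader:

```python
import re

# A small illustrative subset of credential patterns (ACP's real set is larger).
CREDENTIAL_PATTERNS = {
    "stripe_secret_key": re.compile(r"sk_live_[0-9a-zA-Z]{10,}"),
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[0-9a-zA-Z]{36}"),
}

def redact(text: str) -> str:
    """Replace any matching credential value with a [redacted:<type>] marker."""
    for cred_type, pattern in CREDENTIAL_PATTERNS.items():
        text = pattern.sub(f"[redacted:{cred_type}]", text)
    return text
```

Because the substitution happens on the tool output, everything downstream — the agent’s context, commits it drafts, the audit log — only ever sees the marker.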

(Free fourth step) — Audit + dashboard alerts

Every secret-class read or write appends a structured row to your activity log. The dashboard surfaces redaction events visually so you can see at a glance which agent reads have been scrubbed. Set up a simple alert: “if more than N secret reads in the last hour from a single agent, notify me.”
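That alert rule is a windowed count over the activity log. A sketch of the threshold check, where the event shape, agent IDs, and timestamp units are all assumptions for illustration:

```python
from collections import defaultdict

def agents_over_threshold(events: list[dict], window_start: float,
                          window_end: float, threshold: int) -> list[str]:
    """Return agent IDs with more than `threshold` secret-class reads in the window.

    Assumes each event row looks like {"agent": ..., "tool": ..., "ts": ...}.
    """
    counts: defaultdict[str, int] = defaultdict(int)
    for e in events:
        if e["tool"].startswith("Read.") and window_start <= e["ts"] < window_end:
            counts[e["agent"]] += 1
    return [agent for agent, n in counts.items() if n > threshold]
```

Run it over the last hour of log rows on a schedule and notify on any non-empty result.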

The total time investment

  • One curl command (Step 1): ~30 seconds
  • Three policy entries in the dashboard (Step 2): ~90 seconds
  • Enabling output redaction (Step 3): ~30 seconds — it’s a single field per tool

Three minutes from blank slate to “no agent in this environment can read sensitive paths in background mode, and any output containing credentials is automatically scrubbed before reaching commits, logs, or audit consumers.”

The asymmetry between three minutes of setup and the cost of credential rotation across your fleet is the asymmetry the security industry has been begging developers to fix for a decade. AI agents have just made the failure mode common enough that the fix has to be infrastructure, not training.

AgenticControlPlane.com
