Agentic Control Plane

Vercel AI SDK + ACP — Governance Install Guide

The Vercel AI SDK (ai npm package) is the most widely deployed TypeScript framework for shipping agents on Next.js, Edge runtimes, and Node servers. Out of the box, a production deployment shares one backend API key across every end user’s request — no per-user policy enforcement, no per-user audit trail, no way to tell downstream systems which human triggered which action.

@agenticcontrolplane/governance closes that gap. Wrap each tool’s execute function with governed(...); bind the end user’s identity per request via withContext. Same governance model as Claude Code — same /govern/tool-use endpoint, same workspace policies.

Starter · 5-minute install. No framework-specific adapter needed — governed() from @agenticcontrolplane/governance composes with Vercel AI SDK’s tool({...execute}) directly. See the runnable starter, the governance model, or the frameworks index.

Install

npm install @agenticcontrolplane/governance ai @ai-sdk/anthropic zod

Minimal governed agent

import { anthropic } from "@ai-sdk/anthropic";
import { generateText, stepCountIs, tool } from "ai";
import { z } from "zod";
import {
  configure,
  governed,
  withContext,
} from "@agenticcontrolplane/governance";

configure({ baseUrl: "https://api.agenticcontrolplane.com" });

// Wrap your async tool function with governed(name, fn). The Vercel AI
// SDK's tool({ execute }) receives the wrapped version transparently.
const lookupRecord = governed(
  "lookup_record",
  async ({ id }: { id: string }) => {
    // `db` stands in for your application's own data layer.
    return await db.records.findOne({ id });
  },
);

const sendEmail = governed(
  "send_email",
  async ({ to, subject, body }: { to: string; subject: string; body: string }) => {
    return await mailer.send({ to, subject, body });
  },
);

app.post("/run", async (req, res) => {
  // Reject requests without a bearer token instead of throwing on a
  // non-null assertion.
  const auth = req.header("authorization");
  if (!auth) return res.status(401).json({ error: "missing bearer token" });
  const userToken = auth.replace(/^Bearer /i, "").trim();

  await withContext(
    { userToken, agentName: "my-vercel-agent", agentTier: "interactive" },
    async () => {
      const result = await generateText({
        model: anthropic("claude-sonnet-4-6"),
        tools: {
          lookup_record: tool({
            description: "Look up a record by ID.",
            inputSchema: z.object({ id: z.string() }),
            execute: lookupRecord,
          }),
          send_email: tool({
            description: "Send an email.",
            inputSchema: z.object({
              to: z.string(),
              subject: z.string(),
              body: z.string(),
            }),
            execute: sendEmail,
          }),
        },
        stopWhen: stepCountIs(5),
        prompt: req.body.prompt,
      });
      res.json({ result: result.text });
    },
  );
});

What governed() does

Wraps any async function with ACP’s pre/post hook protocol:

  1. POSTs to /govern/tool-use with the tool name, input, and the user JWT bound by withContext.
  2. Deny → the wrapped function returns "tool_error: <reason>". The Vercel AI SDK passes this string to the model as the tool result; the model sees the denial and adapts.
  3. Allow → your function runs.
  4. Post-audit: ACP scans the output for PII / secrets. If policy says redact, the redacted version replaces the original. Audit row written, rooted in the end user’s identity.

The Vercel AI SDK’s tool({ execute }) contract is unchanged: governance is invisible to the SDK and runs on every dispatch.
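The four steps above can be sketched as a plain wrapper. Everything here is illustrative: Decision, GovernFn, and governedSketch are hypothetical stand-ins for the package's internals, with the network call to /govern/tool-use replaced by an injected function so the control flow is visible.

```typescript
type Decision =
  | { action: "allow" }
  | { action: "deny"; reason: string }
  | { action: "redact"; replace: (output: unknown) => unknown };

// Stand-in for the POST to /govern/tool-use; the real wrapper also
// resolves the user JWT bound by withContext.
type GovernFn = (toolName: string, payload: unknown) => Promise<Decision>;

function governedSketch<I, O>(
  name: string,
  fn: (input: I) => Promise<O>,
  govern: GovernFn,
) {
  return async (input: I): Promise<O | string> => {
    const pre = await govern(name, input);       // 1. pre-hook with tool name + input
    if (pre.action === "deny") {
      return `tool_error: ${pre.reason}`;        // 2. denial string goes back to the model
    }
    const output = await fn(input);              // 3. allow → your function runs
    const post = await govern(name, { output }); // 4. post-audit scans the output
    if (post.action === "redact") {
      return post.replace(output) as O;
    }
    return output;
  };
}
```

Because a denial is returned as an ordinary tool result rather than thrown, the agent loop keeps going and the model can adapt its plan.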

v6 idiom notes

The Vercel AI SDK introduced several breaking changes between v4 and v6. Key updates if you’re copy-pasting from older tutorials:

Old (v4) → New (v6)

  • parameters on tool({...}) → inputSchema
  • maxSteps: n → stopWhen: stepCountIs(n)
  • maxTokens → maxOutputTokens
  • CoreMessage → ModelMessage
  • args / result on tool calls → input / output
  • ai/react → @ai-sdk/react

The starter uses v6 idioms throughout. If your existing app is still on v4, the governance pattern is the same — only the surrounding SDK calls change with the upgrade.

Per-tier policy

withContext binds an agentTier to the request scope:

  • interactive — human at the keyboard, permissive default.
  • subagent — invoked by another agent, no human in the immediate loop.
  • background — autonomous, most restrictive.
  • api — programmatic call from your backend.

A destructive tool denied in background (a scheduled job) can be allowed in interactive (a human-supervised request).
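To make the tier idea concrete, here is a local mirror of what such a workspace policy might express. This table and tierAllows are hypothetical illustrations only; the authoritative decision is always made server-side by /govern/tool-use.

```typescript
type AgentTier = "interactive" | "subagent" | "background" | "api";

// Illustrative policy: which tiers may invoke each tool. A destructive
// tool like send_email is blocked for unsupervised tiers; a read-only
// lookup is open to all.
const toolTierPolicy: Record<string, AgentTier[]> = {
  send_email: ["interactive", "api"],
  lookup_record: ["interactive", "subagent", "background", "api"],
};

function tierAllows(tool: string, tier: AgentTier): boolean {
  // Unknown tools default to deny.
  return (toolTierPolicy[tool] ?? []).includes(tier);
}
```

Under this sketch, a scheduled job (background) asking to send_email is denied, while the same request from a human-supervised session (interactive) goes through.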

Streaming and streamText

The same pattern works with streamText:

import { streamText } from "ai";

// Run inside the same withContext(...) scope as before so governed()
// can resolve the user's identity on each tool call.
const result = await streamText({
  model: anthropic("claude-sonnet-4-6"),
  tools: { ... },
  stopWhen: stepCountIs(5),
  prompt,
});

for await (const chunk of result.textStream) {
  res.write(chunk);
}

governed() runs on every tool call regardless of whether the surrounding function is generateText or streamText.
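If you are writing the response plumbing by hand, piping the text stream to an HTTP response reduces to draining an async iterable into a writable. A minimal sketch; textStream here is any AsyncIterable<string>, which is the shape streamText's textStream exposes:

```typescript
import { Writable } from "node:stream";

// Drain an async iterable of text chunks into a writable sink
// (e.g. an Express `res`), then close it.
async function pipeTextStream(
  textStream: AsyncIterable<string>,
  res: Writable,
): Promise<void> {
  for await (const chunk of textStream) {
    res.write(chunk);
  }
  res.end();
}
```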

Fleet-wide telemetry

For logging every tool call across many agents in one place — separate from per-call governance — the SDK’s experimental_onToolCallStart / experimental_onToolCallFinish callbacks on generateText are complementary to the per-tool governed() wrapper. Use them for non-blocking telemetry; use governed() for blocking policy and audit.
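If you prefer to stay off experimental SDK surfaces, you can get the same non-blocking telemetry by composing a timing wrapper with governed(). This withTelemetry helper is a sketch of our own, not part of either SDK; the log sink is whatever your fleet uses.

```typescript
type ToolEvent = { tool: string; ms: number; ok: boolean };

// Wrap an async tool function with latency/outcome logging. Compose it
// outside governed(...) so the logged duration includes policy checks.
function withTelemetry<I, O>(
  name: string,
  fn: (input: I) => Promise<O>,
  log: (event: ToolEvent) => void,
) {
  return async (input: I): Promise<O> => {
    const start = Date.now();
    try {
      const out = await fn(input);
      log({ tool: name, ms: Date.now() - start, ok: true });
      return out;
    } catch (err) {
      log({ tool: name, ms: Date.now() - start, ok: false });
      throw err;
    }
  };
}
```

Usage would look like `withTelemetry("lookup_record", lookupRecord, sink)` passed as the tool's execute; telemetry never blocks the call, while governed() remains the blocking policy layer.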

Limitations

  • Only tools wrapped with governed are covered. Plain execute callbacks bypass governance.
  • LLM calls go direct to your provider. ACP governs tools, not tokens. For per-user LLM cost attribution, pair with Portkey or LiteLLM virtual keys.
  • Edge runtime considerations. @agenticcontrolplane/governance uses node:async_hooks for withContext. On Edge runtimes that don’t support AsyncLocalStorage, pass the context explicitly per-call instead of relying on the wrapper.
  • Pre-release. @agenticcontrolplane/governance is on 0.x. Pin exact versions.
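On the pinning point: npm's --save-exact flag writes the exact installed version to package.json instead of a caret range, so a 0.x minor bump can never land silently.

```shell
npm install --save-exact @agenticcontrolplane/governance
```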