Governed Vercel AI SDK in 3 minutes
The Vercel AI SDK is the most-installed TypeScript agent SDK on the planet. The ai package powers chat UIs, agent loops, and tool-using assistants across Next.js, Edge runtimes, and Node servers. It’s the default for shipping AI features in TypeScript.
What it doesn’t ship: per-user identity attribution on tool calls, cross-tenant audit, output redaction, or pluggable policy. Vercel’s experimental tool-call telemetry callbacks are useful for fleet logging but aren’t a policy enforcement point.
This post is the 3-minute path with Agentic Control Plane. The base @agenticcontrolplane/governance package composes with the SDK's `tool({ execute })` shape directly — no framework-specific adapter.
The pattern
Tool-layer governance via `governed("name", fn)`. Wrap each tool's async execute function; the Vercel AI SDK's `tool({ execute })` receives the wrapped version transparently. Bind identity once per request via `withContext`.
Three minutes from blank slate
1. Install
```sh
npm install @agenticcontrolplane/governance ai @ai-sdk/anthropic zod
```
2. Wrap your tools
```ts
import { anthropic } from "@ai-sdk/anthropic";
import { generateText, stepCountIs, tool } from "ai";
import { z } from "zod";
import {
  configure,
  governed,
  withContext,
} from "@agenticcontrolplane/governance";

configure({ baseUrl: "https://api.agenticcontrolplane.com" });

// `db` is your own data layer — any async function can be wrapped.
const lookupRecord = governed(
  "lookup_record",
  async ({ id }: { id: string }) => {
    return await db.records.findOne({ id });
  },
);

// `app` is your HTTP server (Express shown; any framework works).
app.post("/run", async (req, res) => {
  const userToken = req.header("authorization")!.replace(/^Bearer /, "").trim();
  await withContext(
    { userToken, agentName: "my-vercel-agent", agentTier: "interactive" },
    async () => {
      const result = await generateText({
        model: anthropic("claude-sonnet-4-6"),
        tools: {
          lookup_record: tool({
            description: "Look up a record by ID.",
            inputSchema: z.object({ id: z.string() }),
            execute: lookupRecord,
          }),
        },
        stopWhen: stepCountIs(5),
        prompt: req.body.prompt,
      });
      res.json({ result: result.text });
    },
  );
});
```
3. Run it
```sh
export ACP_USER_TOKEN=gsk_...
export ANTHROPIC_API_KEY=...
node my-agent.mjs
```
Open cloud.agenticcontrolplane.com/activity. One row per tool call with actor, tool, decision, session, input/output preview. Three minutes, two integration calls, full audit.
What governed does
Wraps any async function with ACP’s pre/post hook protocol. Same shape as in every other TypeScript integration ACP supports:
- POSTs to `/govern/tool-use` with the tool name, input, and the user token bound by `withContext`.
- Deny → returns `"tool_error: <reason>"`. The Vercel AI SDK passes this string to the model as the tool result.
- Allow → runs your function.
- Post-audit: ACP scans the output for PII / secrets, optionally redacts.
The Vercel AI SDK’s tool({ execute }) is unchanged — governance is invisible to the SDK.
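The pre/post hook shape can be sketched in a few lines. This is a hypothetical stand-in, not the real package internals: `checkPolicy` below is a local stub for the POST to `/govern/tool-use`, and the deny reason is invented for the example.

```ts
// Hypothetical sketch of the pre/post hook shape — not the package internals.
type Decision = { allow: boolean; reason?: string };

// Stub standing in for the POST to /govern/tool-use.
async function checkPolicy(toolName: string, _input: unknown): Promise<Decision> {
  return toolName.startsWith("delete_")
    ? { allow: false, reason: "write tools denied for this tier" }
    : { allow: true };
}

function governedSketch<I, O>(name: string, fn: (input: I) => Promise<O>) {
  return async (input: I): Promise<O | string> => {
    const decision = await checkPolicy(name, input);
    if (!decision.allow) {
      // Deny path: this string becomes the tool result the model sees.
      return `tool_error: ${decision.reason}`;
    }
    const output = await fn(input);
    // Post-audit (PII/secret scan, optional redaction) would run here.
    return output;
  };
}

const lookup = governedSketch("lookup_record", async ({ id }: { id: string }) => `record ${id}`);
const remove = governedSketch("delete_record", async ({ id }: { id: string }) => `deleted ${id}`);

async function main() {
  console.log(await lookup({ id: "r1" })); // record r1
  console.log(await remove({ id: "r1" })); // tool_error: write tools denied for this tier
}
main();
```

The key property: the wrapped function has the same call signature as the original, which is why `tool({ execute })` never notices the difference.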
Works with streamText and the v6 agent loop
The same pattern works with streamText:
```ts
import { streamText } from "ai";

const result = await streamText({
  model: anthropic("claude-sonnet-4-6"),
  tools: { lookup_record: tool({ ..., execute: lookupRecord }) },
  stopWhen: stepCountIs(5),
  prompt,
});
```
governed() runs on every tool call regardless of whether the surrounding function is generateText, streamText, or one of the SDK’s agent loop helpers.
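The consumption loop doesn't change either. A minimal sketch of draining the stream, using a stand-in async iterable in place of the `result.textStream` that `streamText` returns:

```ts
// Stand-in for result.textStream: an async iterable of text chunks.
async function* textStream() {
  yield "Record ";
  yield "r1 ";
  yield "found.";
}

async function main() {
  let text = "";
  for await (const chunk of textStream()) {
    text += chunk; // in a route handler you'd write each chunk to the response
  }
  console.log(text); // Record r1 found.
}
main();
```

Any tool calls the model makes mid-stream pass through `governed()` before the stream resumes.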
v6 idiom notes
If you’re copy-pasting from older tutorials, the SDK had several breaking changes between v4 and v6:
| Old (v4) | New (v6) |
|---|---|
| `parameters` on `tool({...})` | `inputSchema` |
| `maxSteps: n` | `stopWhen: stepCountIs(n)` |
| `maxTokens` | `maxOutputTokens` |
| `CoreMessage` | `ModelMessage` |
| `args` / `result` on tool calls | `input` / `output` |
| `ai/react` | `@ai-sdk/react` |
The integration above uses v6 idioms throughout. The governance pattern is independent of the SDK version — governed() wraps your function the same way in v4, v5, or v6.
Per-tier policy
withContext binds an agentTier to the request scope:
- `interactive` — human at the keyboard.
- `subagent` — invoked by another agent.
- `background` — autonomous, most restrictive.
- `api` — programmatic call from your backend.
A scheduled Vercel cron job calling your agent (background) gets stricter policy than a Next.js handler invoked by a user request (interactive). The agent code is the same.
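To make "same agent code, different tier" concrete, here is a self-contained sketch. `withContextSketch` and `currentTier` are local stand-ins built on Node's `AsyncLocalStorage`, not the package's real `withContext`; they only illustrate how each entry point binds its own tier around identical tool code.

```ts
import { AsyncLocalStorage } from "node:async_hooks";

// Local stand-in for the package's context binding.
type Ctx = { agentTier: "interactive" | "subagent" | "background" | "api" };
const store = new AsyncLocalStorage<Ctx>();
const withContextSketch = <T>(ctx: Ctx, fn: () => Promise<T>) => store.run(ctx, fn);

// The "tool": identical code, the tier comes from the bound context.
async function currentTier(): Promise<string> {
  return store.getStore()?.agentTier ?? "unknown";
}

async function main() {
  // Next.js handler serving a user request:
  console.log(await withContextSketch({ agentTier: "interactive" }, currentTier)); // interactive
  // Scheduled Vercel cron job invoking the same agent:
  console.log(await withContextSketch({ agentTier: "background" }, currentTier)); // background
}
main();
```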
Edge runtime considerations
@agenticcontrolplane/governance uses Node’s AsyncLocalStorage for withContext. On Edge runtimes that don’t support AsyncLocalStorage, pass context explicitly per call instead of relying on the wrapper, or run the agent in a Node server.
For most production deployments running agents in Next.js, the API route is a Node runtime by default — withContext works directly.
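If you want to be explicit, the Next.js App Router lets you pin a route to the Node runtime via route segment config. Shown here for a hypothetical `app/api/run/route.ts`:

```ts
// app/api/run/route.ts — pin the Node.js runtime so AsyncLocalStorage
// (and therefore withContext) is available.
export const runtime = "nodejs";
```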
Composing with experimental telemetry
The Vercel AI SDK has experimental_onToolCallStart / experimental_onToolCallFinish callbacks on generateText. They’re non-blocking telemetry hooks — useful for fleet-wide logging across many agents in one place. Complementary to governed(), which is the blocking policy enforcement point.
Use governed() for governance and audit. Use the experimental callbacks for orthogonal telemetry that doesn’t need to gate the call.
What this unlocks
The Vercel AI SDK is the default for TypeScript agent shipping. ACP plugs in via the base governance package — no framework-specific adapter — and gives you per-user audit, identity-attributed policy, and output redaction across generateText, streamText, and any v6 agent loop.
Vercel AI SDK integration guide → · Three-minute integrations → · Get started →