Gemini Enterprise Agent Platform: Google's hosted-agent answer, and where it composes
Last week (April 23, 2026), Google announced the Gemini Enterprise Agent Platform, expanding Vertex AI into a full agent stack. The bundle: Agents CLI for scaffolding and deployment, Agent Runtime for managed hosting, Cloud Run and GKE Autopilot for self-managed deployments, plus the existing Vertex Agent Builder console for tool governance and per-agent IAM identities.
It’s a credible enterprise pitch. Google now has a hosted-agent offering to match Bedrock AgentCore — and unlike OpenAI’s exclusive AWS deal, Google is shipping its own native runtime for its own models inside its own cloud. A coherent story for Google-Cloud-native shops.
Like every hyperscaler-hosted agent platform, the architectural boundary of its enforcement is the cloud itself — meaning agents that live outside Google Cloud (or in non-ADK frameworks) need a complementary intercept layer. Worth walking through what fits where.
What’s actually in the announcement
Three pieces:
- Vertex AI Agent Builder enhancements: Cloud API Registry-backed tool governance, per-agent IAM identities, agent observability — already shipped through 2025 and into early 2026.
- Agents CLI: open-source Python CLI for scaffolding ADK agents and deploying them to Agent Runtime, Cloud Run, or GKE Autopilot. The “create to production in one CLI” path.
- Agent Runtime: Google’s managed hosting layer — no container, no Dockerfile; your source is packaged as a tarball and deployed to a Vertex AI reasoning engine.
Plus agents-cli publish for registering agents in Gemini Enterprise — a marketplace / catalog for sharing governed agents across an organization.
What’s strong about it
ADK portability is unusual. The SDK runs anywhere Python runs. Same agent code runs on a developer’s laptop during dev, deploys to Agent Runtime for production, runs on Cloud Run for a custom managed setup, deploys to GKE for full control. None of the other hyperscaler agent SDKs has this property — Bedrock AgentCore is AWS-only, Foundry is Azure-only.
Pricing dropped in January 2026. Agent Engine runtime is meaningfully cheaper than at launch. For teams testing in production, this matters.
Centralized tool registry. The Cloud API Registry lets administrators manage the tools available to developers across the organization directly in the console — Google’s version of the “centralized governance team operates policy independently from agent developers” model.
Per-agent IAM identities. Every agent is assigned an IAM principal with a verifiable cryptographic identity. Audit trails tie cleanly to existing Google Cloud IAM tooling.
For Google-Cloud-deep enterprises, this is a real, coherent story.
Where the architectural boundary sits
Every hyperscaler-hosted agent platform — Bedrock AgentCore, Microsoft Foundry, Vertex Agent Builder — has the same architectural property: the platform’s enforcement surface is the cloud itself. That’s the intended scope, and it’s the right scope for the use cases the platforms are designed to govern.
Three places where a complementary intercept layer composes naturally alongside it in 2026:
1. Coding agents on developer laptops
Most production agentic blast radius doesn’t run in any cloud’s managed runtime. It runs locally. Claude Code, Cursor, Codex CLI — autonomous coding agents on developer machines with full shell access, file-write authority, git access, credential exposure via .env files.
Gemini Enterprise Agent Platform’s interception is at the cloud runtime layer; the coding-agent surface is at a different architectural layer entirely. Same shape for Bedrock AgentCore and Foundry. The natural intercept point is the agent runtime’s own hook surface — orthogonal to which cloud hosts your managed agents.
If your developers use Claude Code, that’s a separate governance problem from your Vertex deployment.
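To make that hook surface concrete: Claude Code exposes a PreToolUse hook that receives the pending tool call as JSON on stdin and can block it by exiting with code 2 (stderr is fed back to the agent). Below is a minimal sketch of a policy hook along those lines — the `.env` rule and the check logic are illustrative placeholders, not ACP’s actual implementation.

```python
#!/usr/bin/env python3
"""Sketch of a Claude Code PreToolUse hook that blocks shell commands
touching .env files. Illustrative only: a real hook would consult a
policy backend (e.g. ACP) instead of a hard-coded rule."""
import json
import sys


def deny(reason: str) -> None:
    # Exit code 2 tells Claude Code to block the tool call;
    # the stderr message is surfaced back to the agent.
    print(reason, file=sys.stderr)
    sys.exit(2)


def check(event: dict) -> None:
    """Inspect one tool-call event; exit(2) to block, return to allow."""
    tool = event.get("tool_name", "")
    if tool == "Bash":
        command = event.get("tool_input", {}).get("command", "")
        if ".env" in command:
            deny("Blocked by policy: shell command touches a .env file")
    # Falling through (exit code 0) lets the call proceed.


# A real hook script would end with: check(json.load(sys.stdin))
```

The script gets registered under PreToolUse in Claude Code’s hooks settings; the same shape works for Cursor- or CLI-specific hook surfaces, which is the point — the intercept lives in the coding agent’s runtime, not in any cloud.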
2. Agents that aren’t ADK
Vertex Agent Builder’s interception is built around ADK + Agent Engine. A CrewAI service on Cloud Run is technically GCP-hosted but uses a different intercept point than Agent Builder’s per-agent IAM identity story. A LangGraph supervisor on Cloud Run is the same. A Vercel AI SDK app on a Cloudflare Worker is at yet another architectural layer.
For these, the governance answer is tool-layer interception in the agent code itself — @governed decorators that wrap each tool, identity-binding via set_context per request. ACP runs as the policy backend; the framework code is unchanged.
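A sketch of what that tool-layer pattern reduces to — a hypothetical, minimal stand-in for acp_governance’s decorator and set_context (the real library’s API and policy transport are not shown here): bind identity per request, wrap the tool, ask the policy backend before calling through.

```python
import functools
import threading
from typing import Any, Callable

# Hypothetical sketch of the tool-layer interception pattern.
# Per-request identity lives in thread-local context so the wrapper
# can attribute every tool call to a user, not just an agent.
_context = threading.local()


def set_context(user: str, session: str) -> None:
    """Bind the calling identity for the current request/thread."""
    _context.user = user
    _context.session = session


def _policy_allows(tool: str, user: Any, kwargs: dict) -> bool:
    # Placeholder decision: a real implementation would call the
    # policy backend (ACP) with tool name, identity, and arguments.
    return user is not None


def governed(tool_name: str) -> Callable:
    """Wrap a tool so every invocation is policy-checked and attributed."""
    def decorator(fn: Callable) -> Callable:
        @functools.wraps(fn)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            user = getattr(_context, "user", None)
            if not _policy_allows(tool_name, user, kwargs):
                raise PermissionError(f"{tool_name} denied for user {user!r}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator
```

The framework never sees any of this: CrewAI, LangGraph, or a plain function registry just receives the wrapped callable, which is why the pattern is framework-agnostic.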
3. Cross-cloud audit
Real enterprises run agents on multiple clouds. Some Vertex-hosted, some Bedrock-hosted (now with GPT-5.5 on Bedrock), some Foundry-hosted. Each cloud’s audit destination is a separate system: Cloud Logging for Vertex, CloudWatch for AgentCore, Microsoft Sentinel for Foundry.
The CISO question — “who did what through which agent on whose behalf last Tuesday?” — joins across all of them. Adding the Gemini Enterprise platform doesn’t reduce the join’s surface area; it adds a new audit source to ingest.
Cross-cloud unification is a separate architectural concern that lives above any single cloud’s product. By design, AWS sees AWS traffic, Azure sees Azure, Google sees Google — the unification of all three is its own layer.
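To make the join concrete, here is a hedged sketch of the unification layer’s core move — mapping each cloud’s audit records onto one shared schema before querying. The record shapes below are illustrative placeholders, not the exact Cloud Logging, CloudWatch, or Sentinel schemas.

```python
# Illustrative field names only -- real Cloud Logging / CloudWatch /
# Sentinel records differ; the normalizer is where that knowledge lives.
def normalize(source: str, record: dict) -> dict:
    """Map one vendor-shaped audit record onto a shared schema."""
    if source == "cloud_logging":  # Vertex-hosted agents
        return {"cloud": "gcp",
                "agent": record["resource"]["labels"]["agent_id"],
                "principal": record["principal_email"],
                "action": record["method_name"],
                "ts": record["timestamp"]}
    if source == "cloudwatch":     # Bedrock AgentCore agents
        return {"cloud": "aws",
                "agent": record["agentId"],
                "principal": record["userIdentity"],
                "action": record["eventName"],
                "ts": record["eventTime"]}
    if source == "sentinel":       # Foundry agents
        return {"cloud": "azure",
                "agent": record["AgentName"],
                "principal": record["Caller"],
                "action": record["OperationName"],
                "ts": record["TimeGenerated"]}
    raise ValueError(f"unknown audit source: {source}")


def who_did_what(events: list[dict], principal: str) -> list[tuple]:
    """The CISO join: one principal's actions across all clouds, in order."""
    return sorted((e["ts"], e["cloud"], e["agent"], e["action"])
                  for e in events if e["principal"] == principal)
```

The shape is the point: the normalizer is the only vendor-specific code, and the “who did what last Tuesday” query runs once over the merged stream instead of three times over three consoles.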
How to think about it
For Google-Cloud-native shops with ADK agents and no significant non-GCP surface: the Gemini Enterprise stack is the right primary answer. Vertex Agent Builder, Agents CLI, Agent Engine, IAM identities, Cloud API Registry — coherent and native.
For shops with any of these in scope: developer-laptop coding agents (Claude Code/Cursor/Codex), non-ADK agent code (CrewAI/LangGraph/Vercel AI/Mastra/Pydantic AI/AutoGen), or multi-cloud workloads — Gemini Enterprise covers the Google-Cloud-native part of the picture, with the unification layer composed alongside it from outside the cloud boundary.
What ADK + ACP looks like in practice
The cleanest composition story among the three hyperscalers, because ADK is portable:
```python
from acp_governance import configure, governed, set_context
from google.adk.agents import Agent

# Point the governance layer at the ACP policy backend.
configure(base_url="https://api.agenticcontrolplane.com")

# Each tool is wrapped; set_context binds the end-user identity
# per request before the agent runs.
@governed("lookup_record")
def lookup_record(id: str) -> dict:
    return db.records.find_one({"id": id})

agent = Agent(
    name="my_agent",
    model="gemini-flash-latest",
    tools=[lookup_record],
)
```
Same code:
- Locally during dev → governed by ACP
- Deployed to Agent Runtime → governed by both ACP and Vertex
- Deployed to Cloud Run or GKE → governed by both, with deployment-specific runtime characteristics
- Deployed to AWS or Azure → governed by ACP only (Vertex doesn’t extend outside Google Cloud)
Two governance scopes, one codebase, no rewrites between environments.
What to do this week
- If you’re Google-Cloud-deep with ADK agents: try the Gemini Enterprise stack. Pricing is meaningfully better than at launch and the integration is native.
- If you have ADK code that runs outside Google Cloud (local dev, AWS, on-prem): wrap your tools with @governed. Same code; governance follows wherever you deploy.
- If you have non-ADK agents: Gemini Enterprise doesn’t cover them. Use the framework-specific ACP integration (CrewAI, LangGraph, Vercel AI SDK, etc.).
- If you have developer-laptop coding agents: separate problem, separate solution. ACP hooks for Claude Code, Cursor, Codex CLI.
- If you’re multi-cloud: stand up a unified audit destination. The cross-cloud join can’t be solved by buying more from any one hyperscaler.
Related:
- How ACP composes with Vertex Agent Builder
- Vertex AI Agent Builder integration
- Governed Google ADK in 3 minutes