Agent HTTP triggers
Trigger any agent in your workspace with a single HTTP request. Connect your agents to n8n, Zapier, Make, or any system that can send a POST — every run is identity-verified, governed, and audit-logged.
Endpoint
```
POST https://api.makeagents.run/<your-workspace>/agents/<profileId>/run
```

Replace `<your-workspace>` with your workspace slug and `<profileId>` with the agent profile ID from your dashboard.
Authentication
Include your API key in the Authorization header:
```
Authorization: Bearer $ACP_KEY
```
Create API keys from Settings > API Keys in your dashboard. Keys can be scoped and have optional expiry dates.
All authentication methods supported by ACP work here: API keys (`gsk_*`), Firebase tokens, and external JWTs via your configured identity provider.
Request
```json
{
  "input": "Find all cold leads from last week and draft follow-up emails",
  "context": {
    "region": "EMEA",
    "quarter": "Q1",
    "maxLeads": "20"
  },
  "stream": false
}
```
| Field | Type | Required | Description |
|---|---|---|---|
| `input` | string | Yes | The goal or instruction for the agent |
| `context` | object | No | Key-value pairs injected into the agent’s system prompt |
| `stream` | boolean | No | `true` returns SSE events. Default: `false` |
Context injection
The `context` object lets you pass runtime information into the agent without modifying its profile. Values are appended to the system prompt as:
```
Context provided by the caller:
- region: EMEA
- quarter: Q1
- maxLeads: 20
```
This is useful when the same agent handles different scenarios — pass the customer ID, ticket number, region, or any other variable from your automation.
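The injected block can be reproduced locally for testing. A minimal sketch of the assumed rendering (the `render_context` helper is illustrative; the exact server-side format may differ):

```python
def render_context(context: dict) -> str:
    """Mimic how caller-supplied context is appended to the system prompt (assumed format)."""
    lines = "\n".join(f"- {key}: {value}" for key, value in context.items())
    return "Context provided by the caller:\n" + lines

print(render_context({"region": "EMEA", "quarter": "Q1", "maxLeads": "20"}))
```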
Response (non-streaming)
```json
{
  "id": "run_abc123",
  "status": "completed",
  "output": "I found 12 cold leads from last week. Here are the drafted follow-up emails...",
  "conversationId": "conv_xyz",
  "usage": {
    "promptTokens": 1500,
    "completionTokens": 800,
    "toolCallCount": 3,
    "estimatedCostCents": 2.45,
    "model": "gpt-4o",
    "durationMs": 4200
  },
  "stopReason": "goal_complete"
}
```
| Field | Description |
|---|---|
| `id` | Unique run identifier (prefixed with `run_`) |
| `status` | Always `completed` for non-streaming responses |
| `output` | The agent’s final text response |
| `conversationId` | Conversation ID — use it to view the full thread in the dashboard |
| `usage.promptTokens` | Total input tokens consumed |
| `usage.completionTokens` | Total output tokens generated |
| `usage.toolCallCount` | Number of tool calls the agent made |
| `usage.estimatedCostCents` | Estimated LLM cost in cents |
| `usage.model` | Model used (from the agent profile) |
| `usage.durationMs` | Total execution time in milliseconds |
| `stopReason` | Why the run ended: `goal_complete`, `max_tool_calls`, or `max_rounds` |
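The usage fields are easy to fold into monitoring. A sketch that turns the sample response above into a single log line (the `summarize_run` helper and its truncation note are illustrative, not part of the API):

```python
def summarize_run(run: dict) -> str:
    """Condense a non-streaming run response into one log line."""
    usage = run["usage"]
    line = (f"{run['id']}: {run['stopReason']} in {usage['durationMs']}ms, "
            f"~{usage['estimatedCostCents']}c ({usage['promptTokens']}+{usage['completionTokens']} tokens)")
    # Any stopReason other than goal_complete means the agent hit a configured limit.
    if run["stopReason"] != "goal_complete":
        line += "  [hit a limit: raise the profile limits or narrow the goal]"
    return line

run = {
    "id": "run_abc123", "status": "completed", "stopReason": "goal_complete",
    "usage": {"promptTokens": 1500, "completionTokens": 800,
              "estimatedCostCents": 2.45, "durationMs": 4200},
}
print(summarize_run(run))
# run_abc123: goal_complete in 4200ms, ~2.45c (1500+800 tokens)
```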
Response (streaming)
Set `"stream": true` to receive Server-Sent Events as the agent works. This is the same SSE format used by the dashboard chat.
```
data: {"type":"text","text":"Searching for cold leads..."}
data: {"type":"tool_calls","toolCalls":[{"id":"tc_1","name":"salesforce.search","arguments":"{...}"}]}
data: {"type":"governance","tool":"salesforce.search","decision":"allowed","layers":{"identity":"verified","policy":"allowed","rate_limit":"under_limit"}}
data: {"type":"tool_result","toolCallId":"tc_1","name":"salesforce.search","result":"{...}"}
data: {"type":"text","text":"Found 12 cold leads. Drafting follow-up emails..."}
data: {"type":"done","conversationId":"conv_xyz","usage":{"promptTokens":1500,"completionTokens":800}}
```
Event types:
| Event | Description |
|---|---|
| `text` | Streamed text from the agent |
| `tool_calls` | Agent is calling one or more tools |
| `governance` | Governance decision for a tool call (allowed, denied, or transformed) |
| `tool_result` | Result returned from a tool |
| `error` | An error occurred |
| `done` | Run complete — includes usage metadata |
Governance
Every agent trigger runs through the same governance pipeline as dashboard chat:
- Identity — the API key identifies the caller. ABAC policies and rate limits apply per-caller, not per-agent.
- Tool governance — each tool call inside the run goes through policy enforcement, PII detection, rate limiting, and scope checking.
- Audit logging — the trigger itself plus every tool call is logged with full identity context. View them in Activity in your dashboard.
- Billing — the trigger counts as one tool call for plan billing. LLM token costs are tracked per-user per-day.
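Streaming callers can watch `governance` events to see which tool calls were blocked and why. A small sketch over the event shape shown earlier (the sample events are illustrative):

```python
def blocked_tools(events: list[dict]) -> list[str]:
    """Collect tool names that governance denied, from a list of streamed events."""
    return [
        e["tool"]
        for e in events
        if e.get("type") == "governance" and e.get("decision") == "denied"
    ]

# Illustrative events only; field names follow the streaming docs above.
events = [
    {"type": "governance", "tool": "salesforce.search", "decision": "allowed"},
    {"type": "governance", "tool": "salesforce.delete", "decision": "denied"},
]
print(blocked_tools(events))  # ['salesforce.delete']
```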
Error responses
| Status | Body | Cause |
|---|---|---|
| 400 | `{"error": "Missing required field: input"}` | No `input` in request body |
| 400 | `{"error": "context must be a key-value object"}` | `context` is not an object |
| 400 | `{"error": "Model \"...\" from profile is not enabled in chat config"}` | Agent’s model isn’t configured |
| 400 | `{"error": "No API key for provider \"...\""}` | No LLM API key for the model’s provider |
| 401 | `{"error": "Unauthorized"}` | Missing or invalid API key |
| 404 | `{"error": "Agent profile not found"}` | Invalid `profileId` |
| 429 | `{"error": "Daily AI usage limit reached..."}` | Per-user daily LLM cost cap hit |
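Clients typically branch on these status codes. A sketch mapping each documented error to a handling strategy (the strategy names are illustrative):

```python
def classify_error(status: int) -> str:
    """Map trigger error statuses from the table above to a handling strategy."""
    if status == 401:
        return "fix-credentials"  # missing or invalid API key
    if status == 404:
        return "fix-profile-id"   # profileId does not exist in this workspace
    if status == 429:
        return "back-off"         # daily cost cap: retrying before the cap resets won't help
    if status == 400:
        return "fix-request"      # bad input/context, or model/provider not configured
    return "unexpected"

print(classify_error(429))  # back-off
```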
Examples
curl
```bash
export ACP_KEY="your-api-key-here"

curl -X POST https://api.makeagents.run/acme/agents/PROFILE_ID/run \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ACP_KEY" \
  -d '{
    "input": "Summarize open support tickets from this week",
    "context": { "team": "engineering" }
  }'
```
n8n (HTTP Request node)
- Add an HTTP Request node
- Set method to POST
- Set URL to `https://api.makeagents.run/<workspace>/agents/<profileId>/run`
- Under Authentication, select Header Auth with name `Authorization` and value `Bearer YOUR_API_KEY`
- Set body to JSON:

```json
{
  "input": "{{ $json.trigger_text }}",
  "context": {
    "customerId": "{{ $json.customer_id }}",
    "priority": "{{ $json.priority }}"
  }
}
```
The response body contains the agent’s output in the `output` field — route it to Slack, email, a database, or the next step in your workflow.
Zapier (Webhooks by Zapier)
- Add a Webhooks by Zapier action step
- Choose POST as the action event
- Set URL to `https://api.makeagents.run/<workspace>/agents/<profileId>/run`
- Under Headers, add `Authorization: Bearer YOUR_API_KEY` and `Content-Type: application/json`
- Set Data to:

```
input: Find all deals closing this month and flag any at risk
context__region: EMEA
context__quarter: Q1
```

- Map the `output` field from the response to your next Zapier step.
Python
```python
import requests

response = requests.post(
    "https://api.makeagents.run/acme/agents/PROFILE_ID/run",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "input": "Check inventory levels and alert if any item is below threshold",
        "context": {"warehouse": "east-coast"},
    },
)

result = response.json()
print(result["output"])
print(f"Cost: {result['usage']['estimatedCostCents']}c in {result['usage']['durationMs']}ms")
```
What happens during a run
- Your request is authenticated via the API key
- The agent profile is loaded (system prompt, model, allowed tools, limits)
- `context` values are injected into the system prompt
- A conversation and run record are created in Firestore
- The agent executes via `runCompletionLoop` — calling tools, checking governance, streaming results
- Each tool call goes through identity verification, policy enforcement, PII detection, and rate limiting
- The run record is updated with final status, token counts, and cost
- Usage is recorded for billing and the audit log is emitted
- The response is returned (or the SSE stream ends with a `done` event)