Runs API
Runs execute workflow.v0 specs - multi-step workflows with parallel LLM calls, data transformations, and tool execution. Each run produces an append-only event stream and aggregated outputs.
Quick Reference
| Endpoint | Description |
|---|---|
| POST /api/v1/runs | Create a run |
| GET /api/v1/runs/:id | Get run status and outputs |
| GET /api/v1/runs/:id/events | Stream run events |
| GET /api/v1/runs/:id/pending-tools | Get pending tool calls |
| POST /api/v1/runs/:id/tool-results | Submit tool results |
Authentication
All run endpoints accept:
- Secret key (mr_sk_*): Backend use with full project access
- Customer bearer token: Customer-scoped token for data-plane access
Publishable keys (mr_pk_*) cannot call run endpoints directly.
Create Run
POST /api/v1/runs
Starts a workflow run and returns a run_id. The run executes asynchronously - use the events endpoint to stream progress.
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| spec | object | Yes | A workflow.v0 spec |
| input | object | No | Reserved for future use |
| options.idempotency_key | string | No | Deduplication key for retries |
Response
| Field | Type | Description |
|---|---|---|
| run_id | uuid | Unique run identifier |
| status | string | Initial run status |
| plan_hash | string | Hash of the compiled workflow plan |
Example
curl -X POST https://api.modelrelay.ai/api/v1/runs \
-H "Authorization: Bearer mr_sk_..." \
-H "Content-Type: application/json" \
-d '{
"spec": {
"kind": "workflow.v0",
"name": "parallel-analysis",
"nodes": [
{
"id": "summarize",
"type": "llm.responses",
"input": {
"model": "claude-sonnet-4-20250514",
"input": [
{"type": "message", "role": "user", "content": [{"type": "text", "text": "Summarize: AI is transforming software development."}]}
]
}
},
{
"id": "critique",
"type": "llm.responses",
"input": {
"model": "claude-sonnet-4-20250514",
"input": [
{"type": "message", "role": "user", "content": [{"type": "text", "text": "Critique: AI is transforming software development."}]}
]
}
}
],
"outputs": [
{"name": "summary", "from": "summarize"},
{"name": "critique", "from": "critique"}
]
}
}'
const run = await mr.runs.create({
spec: {
kind: "workflow.v0",
name: "parallel-analysis",
nodes: [
{
id: "summarize",
type: "llm.responses",
input: {
model: "claude-sonnet-4-20250514",
input: [
{ type: "message", role: "user", content: [{ type: "text", text: "Summarize: AI is transforming software development." }] }
]
}
},
{
id: "critique",
type: "llm.responses",
input: {
model: "claude-sonnet-4-20250514",
input: [
{ type: "message", role: "user", content: [{ type: "text", text: "Critique: AI is transforming software development." }] }
]
}
}
],
outputs: [
{ name: "summary", from: "summarize" },
{ name: "critique", from: "critique" }
]
}
});
console.log(`Run started: ${run.run_id}`);
run, err := client.Runs.Create(ctx, sdk.RunsCreateRequest{
  Spec: map[string]any{
    "kind": "workflow.v0",
    "name": "parallel-analysis",
    "nodes": []map[string]any{
      {
        "id":   "summarize",
        "type": "llm.responses",
        "input": map[string]any{
          "model": "claude-sonnet-4-20250514",
          "input": []map[string]any{
            {"type": "message", "role": "user", "content": []map[string]any{
              {"type": "text", "text": "Summarize: AI is transforming software development."},
            }},
          },
        },
      },
      {
        "id":   "critique",
        "type": "llm.responses",
        "input": map[string]any{
          "model": "claude-sonnet-4-20250514",
          "input": []map[string]any{
            {"type": "message", "role": "user", "content": []map[string]any{
              {"type": "text", "text": "Critique: AI is transforming software development."},
            }},
          },
        },
      },
    },
    "outputs": []map[string]any{
      {"name": "summary", "from": "summarize"},
      {"name": "critique", "from": "critique"},
    },
  },
})
Response
{
"run_id": "550e8400-e29b-41d4-a716-446655440000",
"status": "running",
"plan_hash": "abc123def456"
}
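For clients that retry on network failures, options.idempotency_key (from the request body table above) lets repeated create attempts deduplicate to a single run. A minimal sketch, assuming the same mr client as in the TypeScript example above; the key value and retry count are illustrative:
// Hedged sketch: pass the same idempotency key on every retry so a
// re-sent request does not start a second run. Key value is illustrative.
async function createRunWithRetry(spec: object, key: string) {
  let lastError: unknown;
  for (let attempt = 1; attempt <= 3; attempt++) {
    try {
      return await mr.runs.create({
        spec,
        options: { idempotency_key: key }, // identical key across retries
      });
    } catch (err) {
      lastError = err; // transient failure; the key makes the retry safe
    }
  }
  throw lastError;
}

const created = await createRunWithRetry(spec, "analysis-order-42"); // spec: the workflow.v0 object above
console.log(`Run started: ${created.run_id}`);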
Get Run
GET /api/v1/runs/:id
Returns the current run snapshot including status, node results, outputs, and cost summary.
Response
| Field | Type | Description |
|---|---|---|
| run_id | uuid | Run identifier |
| status | string | Current run status |
| plan_hash | string | Compiled plan hash |
| cost_summary | object | Aggregated cost breakdown |
| nodes | array | Node execution results |
| outputs | object | Exported workflow outputs |
Example
curl https://api.modelrelay.ai/api/v1/runs/550e8400-e29b-41d4-a716-446655440000 \
-H "Authorization: Bearer mr_sk_..."
const run = await mr.runs.get("550e8400-e29b-41d4-a716-446655440000");
if (run.status === "succeeded") {
console.log("Outputs:", run.outputs);
console.log(`Total cost: $${run.cost_summary.total_usd_cents / 100}`);
}
Response
{
"run_id": "550e8400-e29b-41d4-a716-446655440000",
"status": "succeeded",
"plan_hash": "abc123def456",
"cost_summary": {
"total_usd_cents": 5,
"line_items": [
{"node_id": "summarize", "model": "claude-sonnet-4-20250514", "usd_cents": 3},
{"node_id": "critique", "model": "claude-sonnet-4-20250514", "usd_cents": 2}
]
},
"nodes": [
{"id": "summarize", "type": "llm.responses", "status": "succeeded"},
{"id": "critique", "type": "llm.responses", "status": "succeeded"}
],
"outputs": {
"summary": {"type": "message", "role": "assistant", "content": [...]},
"critique": {"type": "message", "role": "assistant", "content": [...]}
}
}
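When streaming is not needed, the snapshot can simply be polled until the run reaches a terminal status (succeeded, failed, or canceled; see Run Status below). A hedged sketch using the mr.runs.get call shown above; the polling interval is arbitrary:
// Hedged sketch: poll the run snapshot until a terminal status is reached.
const TERMINAL_STATUSES = new Set(["succeeded", "failed", "canceled"]);

async function waitForRun(runId: string, intervalMs = 2000) {
  while (true) {
    const snapshot = await mr.runs.get(runId);
    if (TERMINAL_STATUSES.has(snapshot.status)) return snapshot;
    await new Promise((resolve) => setTimeout(resolve, intervalMs)); // back off between polls
  }
}

const finished = await waitForRun("550e8400-e29b-41d4-a716-446655440000");
console.log(finished.status, finished.outputs);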
Stream Events
GET /api/v1/runs/:id/events
Streams the append-only event history for a run. Events are ordered by sequence number (seq) and can be resumed using after_seq.
Query Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| after_seq | integer | 0 | Resume from events after this sequence number |
| wait | boolean | true | Wait for new events (long-poll) or return immediately |
| limit | integer | - | Maximum events to return (1-10000) |
Headers
| Header | Value | Description |
|---|---|---|
| Accept | application/x-ndjson | NDJSON format (recommended) |
| Accept | text/event-stream | Server-Sent Events format |
Example
# Stream all events (NDJSON)
curl https://api.modelrelay.ai/api/v1/runs/550e8400-e29b-41d4-a716-446655440000/events \
-H "Authorization: Bearer mr_sk_..." \
-H "Accept: application/x-ndjson"
# Resume from sequence 10
curl "https://api.modelrelay.ai/api/v1/runs/550e8400-e29b-41d4-a716-446655440000/events?after_seq=10" \
-H "Authorization: Bearer mr_sk_..." \
-H "Accept: application/x-ndjson"
const stream = await mr.runs.events("550e8400-e29b-41d4-a716-446655440000");
for await (const event of stream) {
console.log(`[${event.seq}] ${event.type}`);
if (event.type === "node_output_delta" && event.delta?.text_delta) {
process.stdout.write(event.delta.text_delta);
}
if (event.type === "run_completed") {
console.log("Run finished!");
break;
}
}
Event Stream
{"envelope_version":"v0","run_id":"...","seq":1,"ts":"...","type":"run_compiled"}
{"envelope_version":"v0","run_id":"...","seq":2,"ts":"...","type":"run_started","plan_hash":"abc123"}
{"envelope_version":"v0","run_id":"...","seq":3,"ts":"...","type":"node_started","node_id":"summarize"}
{"envelope_version":"v0","run_id":"...","seq":4,"ts":"...","type":"node_output_delta","node_id":"summarize","delta":{"kind":"message_delta","text_delta":"AI"}}
{"envelope_version":"v0","run_id":"...","seq":5,"ts":"...","type":"node_succeeded","node_id":"summarize"}
{"envelope_version":"v0","run_id":"...","seq":6,"ts":"...","type":"run_completed"}
Get Pending Tools
GET /api/v1/runs/:id/pending-tools
Returns pending tool calls for runs using client-side tool execution mode. When a node emits a node_waiting event, use this endpoint to retrieve the tool calls that need execution.
Response
| Field | Type | Description |
|---|---|---|
| run_id | uuid | Run identifier |
| pending | array | Nodes with pending tool calls |
Pending Node Object
| Field | Type | Description |
|---|---|---|
| node_id | string | Node awaiting tool results |
| step | integer | Current execution step |
| request_id | string | Request correlation ID |
| tool_calls | array | Tool calls to execute |
Example
curl https://api.modelrelay.ai/api/v1/runs/550e8400-e29b-41d4-a716-446655440000/pending-tools \
-H "Authorization: Bearer mr_sk_..."
Response
{
"run_id": "550e8400-e29b-41d4-a716-446655440000",
"pending": [
{
"node_id": "agent",
"step": 2,
"request_id": "req_abc123",
"tool_calls": [
{
"tool_call_id": "call_xyz789",
"name": "get_weather",
"arguments": "{\"location\": \"London\"}"
}
]
}
]
}
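Each pending call carries a name and a JSON-encoded arguments string; the client parses the arguments, runs its own implementation, and keeps tool_call_id (plus the node's node_id, step, and request_id) for the submission described in the next section. A hedged, fetch-based sketch; executeToolLocally is a hypothetical dispatcher over your own tool functions:
// Hedged sketch: read pending tool calls and execute them locally.
// executeToolLocally is hypothetical; everything else follows the
// response shape above.
const base = "https://api.modelrelay.ai/api/v1";
const headers = { Authorization: "Bearer mr_sk_..." };
const runId = "550e8400-e29b-41d4-a716-446655440000";

const res = await fetch(`${base}/runs/${runId}/pending-tools`, { headers });
const { pending } = await res.json();

const node = pending[0]; // a single waiting node in this example
const results = [];
for (const call of node.tool_calls) {
  const args = JSON.parse(call.arguments); // e.g. {"location": "London"}
  const output = await executeToolLocally(call.name, args); // hypothetical local handler
  results.push({
    tool_call_id: call.tool_call_id,
    name: call.name,
    output: JSON.stringify(output), // output must be a JSON string
  });
}
// Submit `results` via POST /runs/:id/tool-results (next section).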
Submit Tool Results
POST /api/v1/runs/:id/tool-results
Submits tool execution results to resume a waiting run. Use the values from /pending-tools to correlate the results.
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| node_id | string | Yes | Node awaiting results |
| step | integer | Yes | Execution step from pending-tools |
| request_id | string | Yes | Request ID from pending-tools |
| results | array | Yes | Tool execution results |
Tool Result Object
| Field | Type | Required | Description |
|---|---|---|---|
| tool_call_id | string | Yes | ID from the pending tool call |
| name | string | Yes | Tool name |
| output | string | Yes | Tool execution output (JSON string) |
Example
curl -X POST https://api.modelrelay.ai/api/v1/runs/550e8400-e29b-41d4-a716-446655440000/tool-results \
-H "Authorization: Bearer mr_sk_..." \
-H "Content-Type: application/json" \
-d '{
"node_id": "agent",
"step": 2,
"request_id": "req_abc123",
"results": [
{
"tool_call_id": "call_xyz789",
"name": "get_weather",
"output": "{\"temperature\": 18, \"condition\": \"cloudy\"}"
}
]
}'
Response
{
"accepted": 1,
"status": "running"
}
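The same submission from TypeScript, continuing the pending-tools sketch above; node and results are the values collected there, and node_id, step, and request_id are echoed back unchanged:
// Hedged sketch: resume the run by posting the collected tool results.
const submit = await fetch(`${base}/runs/${runId}/tool-results`, {
  method: "POST",
  headers: { ...headers, "Content-Type": "application/json" },
  body: JSON.stringify({
    node_id: node.node_id,
    step: node.step,
    request_id: node.request_id,
    results, // [{ tool_call_id, name, output }] with output as a JSON string
  }),
});
const { accepted, status } = await submit.json();
console.log(`accepted ${accepted} result(s); run status: ${status}`);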
Run Status
Runs transition through these states:
| Status | Description |
|---|---|
| running | Run is actively executing nodes |
| waiting | Run is paused, awaiting client-side tool results |
| succeeded | All nodes completed successfully |
| failed | One or more nodes failed |
| canceled | Run was canceled |
State Machine
stateDiagram-v2
[*] --> running
running --> waiting
running --> succeeded
running --> failed
waiting --> running: tool results submitted
Node Status
Individual nodes have their own status:
| Status | Description |
|---|---|
| pending | Node waiting for dependencies |
| running | Node is executing |
| waiting | Node awaiting client tool results |
| succeeded | Node completed successfully |
| failed | Node execution failed |
| canceled | Node was canceled |
Event Types
Run Lifecycle Events
| Event | Description |
|---|---|
| run_compiled | Workflow spec validated and compiled |
| run_started | Run execution began |
| run_completed | Run finished successfully (includes outputs) |
| run_failed | Run failed (includes error) |
| run_canceled | Run was canceled |
Node Events
| Event | Description |
|---|---|
| node_started | Node began execution |
| node_succeeded | Node completed successfully |
| node_failed | Node failed (includes error) |
| node_output | Node produced final output (artifact reference) |
| node_output_delta | Incremental output for streaming UX |
| node_waiting | Node waiting for client tool results |
LLM Events
| Event | Description |
|---|---|
| node_llm_call | LLM provider call completed (includes usage) |
| node_tool_call | Tool call initiated |
| node_tool_result | Tool result received |
Event Envelope
All events share this envelope structure:
| Field | Type | Description |
|---|---|---|
| envelope_version | string | Always "v0" |
| run_id | uuid | Run identifier |
| seq | integer | Strictly increasing sequence number |
| ts | datetime | Event timestamp |
| type | string | Event type |
| node_id | string | Node ID (for node-scoped events) |
| error | object | Error details (for failed events) |
| delta | object | Streaming delta (for output_delta events) |
| llm_call | object | LLM call metadata (for llm_call events) |
| plan_hash | string | Plan hash (on run_started) |
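For TypeScript clients, the table above translates into a simple envelope shape. An illustrative type, not an official SDK export; the optional fields mirror the event-specific rows, and the delta fields follow the node_output_delta example earlier:
// Illustrative envelope type derived from the documented fields.
interface RunEventEnvelope {
  envelope_version: "v0";
  run_id: string;      // uuid
  seq: number;         // strictly increasing
  ts: string;          // event timestamp
  type: string;        // e.g. "node_started", "run_completed"
  node_id?: string;    // node-scoped events
  error?: Record<string, unknown>;                 // failed events
  delta?: { kind?: string; text_delta?: string };  // node_output_delta events
  llm_call?: Record<string, unknown>;              // node_llm_call events
  plan_hash?: string;  // run_started
}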
Node Types
llm.responses
Executes an LLM request using the /responses API. Supports streaming, tools, and structured output.
{
"id": "summarize",
"type": "llm.responses",
"input": {
"model": "claude-sonnet-4-20250514",
"input": [...],
"tools": [...],
"output_format": {...}
}
}
join.all
Waits for all upstream nodes to complete before proceeding. Used for synchronization points.
{
"id": "sync",
"type": "join.all"
}
transform.json
Transforms node outputs using JSON Pointer extraction.
{
"id": "extract",
"type": "transform.json",
"input": {
"from": "upstream_node",
"pointer": "/content/0/text"
}
}
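The pointer syntax composes naturally with llm.responses: in the Get Run response earlier, the assistant message's first text block sits at /content/0/text. A hedged sketch of a two-node spec that exports just that text, assuming the transform sees the same message shape (field names follow the snippets above):
// Hedged sketch: summarize, then export only the text of the reply.
const run = await mr.runs.create({
  spec: {
    kind: "workflow.v0",
    name: "summarize-and-extract",
    nodes: [
      {
        id: "summarize",
        type: "llm.responses",
        input: {
          model: "claude-sonnet-4-20250514",
          input: [
            { type: "message", role: "user", content: [{ type: "text", text: "Summarize: AI is transforming software development." }] }
          ]
        }
      },
      {
        id: "extract",
        type: "transform.json",
        input: { from: "summarize", pointer: "/content/0/text" }
      }
    ],
    outputs: [{ name: "summary_text", from: "extract" }]
  }
});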
Tool Execution Modes
Server-Side (default)
Tools are executed by the server automatically. The run continues without client interaction.
Client-Side
Tools are sent to the client for execution. The run pauses (waiting status) until results are submitted via /tool-results.
Configure per-node:
{
"id": "agent",
"type": "llm.responses",
"input": {
"model": "claude-sonnet-4-20250514",
"input": [...],
"tools": [...],
"tool_execution": {
"mode": "client"
}
}
}
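In practice a client-side run is driven off the event stream: wait for node_waiting, run the pending-tools/tool-results round trip sketched earlier, then keep reading until the run finishes. A hedged outline; handlePendingTools is a hypothetical wrapper around those two sketches:
// Hedged outline of the client-side tool loop.
// `run` is the handle returned by mr.runs.create (see Create Run).
const stream = await mr.runs.events(run.run_id);
for await (const event of stream) {
  if (event.type === "node_waiting") {
    await handlePendingTools(run.run_id); // hypothetical: fetch pending tools, execute, submit results
  }
  if (event.type === "run_completed" || event.type === "run_failed") break;
}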
Error Codes
| Status | Description |
|---|---|
| 400 | Invalid spec or request |
| 401 | Missing or invalid authentication |
| 404 | Run not found |
| 409 | Tool results conflict (wrong step/request_id) |
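A 409 on /tool-results means the submission no longer matches what the node is waiting for (stale step or request_id). One reasonable recovery, given the correlation fields above, is to rebuild the body from a fresh /pending-tools read and retry once. A hedged sketch; the build callback is hypothetical:
// Hedged sketch: retry a conflicting tool-results submission after
// rebuilding it from a fresh /pending-tools read.
const base = "https://api.modelrelay.ai/api/v1";
const headers = { Authorization: "Bearer mr_sk_...", "Content-Type": "application/json" };

async function submitToolResults(runId: string, build: () => Promise<object>) {
  const post = async () =>
    fetch(`${base}/runs/${runId}/tool-results`, {
      method: "POST",
      headers,
      body: JSON.stringify(await build()), // build() reads /pending-tools and assembles the body
    });

  let res = await post();
  if (res.status === 409) {
    res = await post(); // rebuilt with the current step and request_id
  }
  return res;
}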
Next Steps
- Responses API - Single LLM requests
- Workflow Spec Reference - Full workflow.v0 schema