# Agents
ModelRelay supports agentic tool loops in two ways. Choose between them based on your reuse and governance needs.
## Two approaches
| Approach | Best for | Notes |
|---|---|---|
| `llm.responses` with `tool_execution: "agentic"` | One-off logic scoped to a workflow | Self-contained; define tools + system prompt inline |
| `agent.run` | Reusable agents shared across workflows | Project-scoped agent resources with versions + auditability |
## When to use which
Use `llm.responses` with `tool_execution: "agentic"` when:
- The behavior is specific to a single workflow
- You want everything defined inline
- You don’t need agent versioning
Use `agent.run` when:
- The agent is reused across workflows
- You need versioning and audit trails
- A team manages agents as shared resources
## Examples
### Inline agentic node

```json
{
  "id": "analyze",
  "type": "llm.responses",
  "tool_execution": "agentic",
  "max_tool_steps": 6,
  "tools": ["search", "summarize"],
  "input": [{ "role": "user", "content": "Analyze Q4 results" }]
}
```
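If workflows are authored in code, the node above can be described with a TypeScript type for compile-time checking. The interface below is a sketch that mirrors the fields in the JSON example; it is not an official ModelRelay SDK type.

```typescript
// Sketch of the inline agentic node shape, mirroring the JSON example above.
// Not an official ModelRelay SDK type; field names follow the documented node.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string | { $ref: string };
}

interface AgenticResponsesNode {
  id: string;
  type: "llm.responses";
  tool_execution: "agentic"; // run a tool loop rather than a single completion
  max_tool_steps: number;    // cap on tool-call iterations
  tools: string[];           // tool names resolvable within the project
  input: ChatMessage[];
}

const analyzeNode: AgenticResponsesNode = {
  id: "analyze",
  type: "llm.responses",
  tool_execution: "agentic",
  max_tool_steps: 6,
  tools: ["search", "summarize"],
  input: [{ role: "user", content: "Analyze Q4 results" }],
};
```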
### Project agent in a workflow

```json
{
  "id": "research",
  "type": "agent.run",
  "agent": "researcher@2",
  "input": [{ "role": "user", "content": { "$ref": "#/input/query" } }]
}
```
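The `$ref` value is a JSON-Pointer-style reference into the workflow's run input, and `researcher@2` presumably pins version 2 of the `researcher` agent. The sketch below illustrates how such a reference could resolve against a run payload; the resolution logic is an assumption for illustration, not ModelRelay's actual implementation.

```typescript
// Minimal sketch of resolving a "$ref" such as "#/input/query" against the
// workflow's run payload. Illustrative only; not ModelRelay's resolver.
function resolveRef(ref: string, root: Record<string, unknown>): unknown {
  const path = ref.replace(/^#\//, "").split("/"); // "#/input/query" -> ["input", "query"]
  return path.reduce<unknown>(
    (node, key) => (node as Record<string, unknown> | undefined)?.[key],
    root,
  );
}

// Example: a run payload that supplies the "query" field the node references.
const runPayload = { input: { query: "Summarize competitor pricing changes" } };
console.log(resolveRef("#/input/query", runPayload)); // -> "Summarize competitor pricing changes"
```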
### Run an agent directly

```http
POST /projects/{project_id}/agents/{slug}/run

{
  "input": [{ "role": "user", "content": "Analyze Q4 results" }],
  "options": { "max_steps": 10 }
}
```
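A minimal client sketch for this endpoint, written in TypeScript with `fetch`. The path and request body come from the example above; the base URL and bearer-token auth header are assumptions.

```typescript
// Sketch of calling the direct-run endpoint. The host and auth scheme below
// are placeholders/assumptions; only the path and body come from the docs.
const BASE_URL = "https://api.modelrelay.example"; // placeholder host
const API_KEY = process.env.MODELRELAY_API_KEY ?? "";

async function runAgent(projectId: string, slug: string): Promise<unknown> {
  const res = await fetch(`${BASE_URL}/projects/${projectId}/agents/${slug}/run`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`, // assumed auth scheme
    },
    body: JSON.stringify({
      input: [{ role: "user", content: "Analyze Q4 results" }],
      options: { max_steps: 10 },
    }),
  });
  if (!res.ok) throw new Error(`Agent run failed: ${res.status}`);
  return res.json();
}
```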
## Debugging agent runs

- Use `/runs/{run_id}/steps` for per-step logs and tool I/O.
- Use `POST /projects/{project_id}/agents/{slug}/test` with `mock_tools` to replay tool calls.
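A companion sketch for both debugging calls. The paths come from this section; the placeholder host, auth header, response handling, and the shape of the `mock_tools` payload are assumptions.

```typescript
// Sketches of the two debugging calls above. Same placeholder host and auth
// as the run sketch; the mock_tools format shown here is hypothetical.
const BASE_URL = "https://api.modelrelay.example"; // placeholder host
const API_KEY = process.env.MODELRELAY_API_KEY ?? "";
const headers = {
  "Content-Type": "application/json",
  Authorization: `Bearer ${API_KEY}`, // assumed auth scheme
};

// Fetch per-step logs and tool I/O for a run.
async function getRunSteps(runId: string): Promise<unknown> {
  const res = await fetch(`${BASE_URL}/runs/${runId}/steps`, { headers });
  return res.json();
}

// Replay an agent with canned tool outputs instead of live tool calls.
async function testAgentWithMocks(projectId: string, slug: string): Promise<unknown> {
  const res = await fetch(`${BASE_URL}/projects/${projectId}/agents/${slug}/test`, {
    method: "POST",
    headers,
    body: JSON.stringify({
      input: [{ role: "user", content: "Analyze Q4 results" }],
      // Hypothetical mock_tools shape: canned outputs keyed by tool name.
      mock_tools: {
        search: { output: "[cached search results]" },
        summarize: { output: "[cached summary]" },
      },
    }),
  });
  return res.json();
}
```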