Tool Use
Tools let AI models call functions in your application, access external data, and take actions. Instead of just generating text, models can request specific operations and use the results to provide better answers.
Tool Types
ModelRelay supports three types of tools:
| Type | Description |
|---|---|
| function | Custom functions you define with JSON schema parameters |
| x_search | Search X/Twitter posts (Grok only) |
| code_execution | Run code in sandboxed environments |
Function tools are the most common—they let you connect models to your APIs, databases, and services.
For web search and page fetching, use Workflows with server-side web_fetch and web_search tools, which provide consistent behavior across all models.
Function Tools
Define a function tool with a name, description, and JSON schema for parameters:
import { createFunctionTool } from "@modelrelay/sdk";
const weatherTool = createFunctionTool("get_weather", "Get current weather for a location", {
type: "object",
properties: {
location: { type: "string", description: "City name" },
unit: { type: "string", enum: ["celsius", "fahrenheit"], default: "celsius" }
},
required: ["location"]
});
import sdk "github.com/modelrelay/sdk-go"
// Option 1: From JSON schema
weatherTool := sdk.NewFunctionTool("get_weather", "Get current weather for a location", map[string]any{
"type": "object",
"properties": map[string]any{
"location": map[string]any{"type": "string", "description": "City name"},
"unit": map[string]any{"type": "string", "enum": []string{"celsius", "fahrenheit"}},
},
"required": []string{"location"},
})
// Option 2: From Go struct (recommended)
type WeatherParams struct {
Location string `json:"location" description:"City name"`
Unit string `json:"unit,omitempty" enum:"celsius,fahrenheit" default:"celsius"`
}
weatherTool := sdk.MustFunctionToolFromType[WeatherParams]("get_weather", "Get current weather for a location")
use modelrelay::{tool, ToolRegistry};
#[tool(description = "Get current weather for a location")]
fn get_weather(
#[doc = "City name"] location: String,
#[doc = "Temperature unit"] unit: Option<String>,
) -> String {
// Implementation
format!("Weather in {}: 72°F, sunny", location)
}
User Interaction (user_ask)
Use the built-in user_ask tool when the model needs human input. The run emits node_user_ask, and you submit a tool result payload like {"answer":"...","is_freeform":true} via /runs/:id/tool-results.
import { createUserAskTool, userAskResultFreeform } from "@modelrelay/sdk";
const tools = [createUserAskTool()];
// When the user responds:
const output = userAskResultFreeform("Postgres");
tools := []llm.Tool{sdk.UserAskTool()}
// When the user responds:
output, _ := sdk.UserAskResultFreeform("Postgres")
use modelrelay::{user_ask_result_freeform, user_ask_tool};
let tools = vec![user_ask_tool()];
// When the user responds:
let output = user_ask_result_freeform("Postgres")?;
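The snippets above build the user_ask tool and the freeform result payload; delivering that payload is an HTTP POST to the endpoint mentioned above. A minimal sketch, assuming a runId captured from the node_user_ask event and bearer-token auth (the base URL and auth scheme are assumptions, not documented here):

// Hypothetical sketch: submit the answer to the documented tool-results
// endpoint. The payload shape comes from this section; runId, the base
// URL, and the Authorization header are illustrative assumptions.
const runId = "run_123"; // from the node_user_ask event (assumed shape)
await fetch(`https://api.modelrelay.example/runs/${runId}/tool-results`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.MODELRELAY_API_KEY}`,
  },
  body: JSON.stringify({ answer: "Postgres", is_freeform: true }),
});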
Schema from Types
Generate JSON schemas automatically from typed structures:
import { createTypedTool } from "@modelrelay/sdk";
import { z } from "zod";
const weatherTool = createTypedTool({
name: "get_weather",
description: "Get current weather for a location",
parameters: z.object({
location: z.string().describe("City name"),
unit: z.enum(["celsius", "fahrenheit"]).default("celsius"),
}),
});
type WeatherParams struct {
Location string `json:"location" description:"City name"`
Unit string `json:"unit,omitempty" enum:"celsius,fahrenheit" default:"celsius"`
}
// Supported struct tags:
// - description:"..." - Field description
// - enum:"a,b,c" - Allowed values
// - default:"val" - Default value
// - minimum:"N", maximum:"N" - Numeric bounds
// - minLength:"N", maxLength:"N" - String length
// - pattern:"regex" - Regex pattern
// - format:"email|uri|uuid|date-time" - Format hint
tool := sdk.MustFunctionToolFromType[WeatherParams]("get_weather", "Get weather")
Adding Tools to Requests
Pass tools when creating a request:
const req = mr.responses
.new()
.model("claude-sonnet-4-5")
.system("You are a helpful assistant with access to weather data.")
.user("What's the weather in Tokyo?")
.tool(weatherTool)
.tool(searchTool)
.build();
const response = await mr.responses.create(req);
req, opts, _ := client.Responses.New().
Model(sdk.NewModelID("claude-sonnet-4-5")).
System("You are a helpful assistant with access to weather data.").
User("What's the weather in Tokyo?").
Tool(weatherTool).
Tool(searchTool).
Build()
resp, err := client.Responses.Create(ctx, req, opts...)
let mut registry = ToolRegistry::new();
registry.register(get_weather);
registry.register(search);
let response = ResponseBuilder::new()
.model("claude-sonnet-4-5")
.system("You are a helpful assistant with access to weather data.")
.user("What's the weather in Tokyo?")
.tools(&registry)
.send(&client.responses())
.await?;
Tool Choice
Control when the model uses tools:
import { toolChoiceAuto, toolChoiceRequired, toolChoiceNone } from "@modelrelay/sdk";
// Let model decide (default)
.toolChoice(toolChoiceAuto())
// Force model to use a tool
.toolChoice(toolChoiceRequired())
// Force a specific tool
.toolChoice(toolChoiceRequired("get_weather"))
// Prevent tool use
.toolChoice(toolChoiceNone())
// Let model decide (default)
builder.ToolChoiceAuto()
// Force model to use a tool
builder.ToolChoiceRequired()
// Force a specific tool
builder.ToolChoice(&llm.ToolChoice{
Type: llm.ToolChoiceRequired,
Function: sdk.Ptr("get_weather"),
})
// Prevent tool use
builder.ToolChoiceNone()
| Choice | Behavior |
|---|---|
| auto | Model decides whether to use tools |
| required | Model must use at least one tool |
| required("name") | Model must use the specified tool |
| none | Model cannot use tools |
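For example, forcing a specific tool means the response should come back with a call to that tool, which you can read with the firstToolCall helper covered in the next section (a sketch reusing weatherTool from above):

// Sketch: force get_weather, then inspect the resulting call.
const req = mr.responses
  .new()
  .model("claude-sonnet-4-5")
  .user("What's the weather in Tokyo?")
  .tool(weatherTool)
  .toolChoice(toolChoiceRequired("get_weather"))
  .build();
const response = await mr.responses.create(req);
const call = firstToolCall(response); // expected to be a get_weather call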
Handling Tool Calls
Check if the response contains tool calls and process them:
import { hasToolCalls, firstToolCall } from "@modelrelay/sdk";
const response = await mr.responses.create(req);
if (hasToolCalls(response)) {
// Get all tool calls
const toolCalls = response.output[0].toolCalls;
for (const call of toolCalls) {
console.log(`Tool: ${call.function.name}`);
console.log(`Args: ${call.function.arguments}`);
console.log(`ID: ${call.id}`);
}
}
// Or get just the first one
const call = firstToolCall(response);
if (call) {
console.log(`First tool: ${call.function.name}`);
}
resp, _ := client.Responses.Create(ctx, req, opts...)
if resp.HasToolCalls() {
// Get all tool calls
for _, call := range resp.ToolCalls() {
fmt.Printf("Tool: %s\n", call.Function.Name)
fmt.Printf("Args: %s\n", call.Function.Arguments)
fmt.Printf("ID: %s\n", call.ID)
}
}
// Or get just the first one
if call := resp.FirstToolCall(); call != nil {
fmt.Printf("First tool: %s\n", call.Function.Name)
}
let response = ResponseBuilder::new()
.model("claude-sonnet-4-5")
.user("What's the weather?")
.tools(&registry)
.send(&client.responses())
.await?;
if let Some(tool_call) = response.tool_call() {
println!("Tool: {}", tool_call.function.name);
println!("Args: {}", tool_call.function.arguments);
}
Parsing Arguments
Parse tool call arguments with type safety:
import { createTypedTool, parseTypedToolCall, ToolArgsError } from "@modelrelay/sdk";
import { z } from "zod";
const weatherTool = createTypedTool({
name: "get_weather",
description: "Get current weather for a location",
parameters: z.object({
location: z.string(),
unit: z.enum(["celsius", "fahrenheit"]).optional(),
}),
});
try {
const typedCall = parseTypedToolCall(toolCall, weatherTool);
console.log(typedCall.function.arguments.location); // typed as string
} catch (error) {
if (error instanceof ToolArgsError) {
console.error(`Parse error for ${error.toolName}: ${error.message}`);
console.error(`Raw args: ${error.rawArguments}`);
}
}
type WeatherArgs struct {
Location string `json:"location"`
Unit string `json:"unit,omitempty"`
}
var args WeatherArgs
if err := sdk.ParseToolArgs(call, &args); err != nil {
var parseErr *sdk.ToolArgsError
if errors.As(err, &parseErr) {
fmt.Printf("Parse error for %s: %s\n", parseErr.ToolName, parseErr.Message)
fmt.Printf("Raw args: %s\n", parseErr.RawArguments)
}
return err
}
fmt.Println(args.Location) // typed as string
// With validation
func (a WeatherArgs) Validate() error {
if a.Location == "" {
return errors.New("location is required")
}
return nil
}
if err := sdk.ParseAndValidateToolArgs(call, &args); err != nil {
// Includes validation errors
}
Tool Results
After executing a tool, create a result message to continue the conversation:
import { toolResultMessage, assistantMessageWithToolCalls } from "@modelrelay/sdk";
// Execute the tool
const weatherData = await getWeather(args.location, args.unit);
// Create tool result message
const resultMsg = toolResultMessage(toolCall.id, JSON.stringify(weatherData));
// Build next request with conversation history
const nextReq = mr.responses
.new()
.model("claude-sonnet-4-5")
.system("You are a helpful assistant.")
.user("What's the weather in Tokyo?")
.message(assistantMessageWithToolCalls("", [toolCall])) // Assistant's tool call
.message(resultMsg) // Tool result
.build();
const finalResponse = await mr.responses.create(nextReq);
console.log(finalResponse.output[0].content);
// Execute the tool
weatherData := getWeather(args.Location, args.Unit)
// Create tool result message
resultMsg := sdk.MustToolResultMessage(call.ID, weatherData)
// Build next request with conversation history
input := []llm.InputItem{
sdk.SystemMessage("You are a helpful assistant."),
sdk.UserMessage("What's the weather in Tokyo?"),
sdk.AssistantMessageWithToolCalls("", resp.ToolCalls()), // Assistant's tool call
resultMsg, // Tool result
}
nextReq := &llm.ResponsesRequest{
Model: sdk.NewModelID("claude-sonnet-4-5"),
Input: input,
}
finalResp, _ := client.Responses.Create(ctx, nextReq)
fmt.Println(finalResp.Text())
Complete Tool Loop
A full tool use loop iterates until the model stops requesting tools:
flowchart TD
A[User Query] --> B[Send Request with Tools]
B --> C{Model Response}
C -->|Has Tool Calls| D[Execute Tool Functions]
D --> E[Append Tool Results to Messages]
E --> B
C -->|No Tool Calls| F[Return Final Response]
import {
ModelRelay,
createFunctionTool,
hasToolCalls,
toolResultMessage,
assistantMessageWithToolCalls,
} from "@modelrelay/sdk";
const mr = ModelRelay.fromSecretKey(process.env.MODELRELAY_API_KEY!);
const tools = [
createFunctionTool("get_weather", "Get weather for a location", {
type: "object",
properties: { location: { type: "string" } },
required: ["location"],
}),
];
// Tool implementations
async function executeTool(name: string, args: any): Promise<string> {
switch (name) {
case "get_weather":
return JSON.stringify({ temp: 72, condition: "sunny" });
default:
return JSON.stringify({ error: "Unknown tool" });
}
}
// Conversation state
let messages: any[] = [
{ role: "user", content: "What's the weather in Tokyo and New York?" },
];
// Tool loop
while (true) {
const req = mr.responses
.new()
.model("claude-sonnet-4-5")
.system("You have access to weather data.")
.messages(messages)
.tools(tools)
.build();
const response = await mr.responses.create(req);
if (!hasToolCalls(response)) {
// No more tool calls - we're done
console.log(response.output[0].content);
break;
}
// Process tool calls
const toolCalls = response.output[0].toolCalls;
messages.push(assistantMessageWithToolCalls("", toolCalls));
for (const call of toolCalls) {
const args = JSON.parse(call.function.arguments);
const result = await executeTool(call.function.name, args);
messages.push(toolResultMessage(call.id, result));
}
}
tools := []llm.Tool{
sdk.MustFunctionToolFromType[WeatherParams]("get_weather", "Get weather"),
}
// Tool implementations
func executeTool(name string, args map[string]any) string {
switch name {
case "get_weather":
return `{"temp": 72, "condition": "sunny"}`
default:
return `{"error": "Unknown tool"}`
}
}
// Conversation state
input := []llm.InputItem{
sdk.UserMessage("What's the weather in Tokyo and New York?"),
}
// Tool loop
for {
req := &llm.ResponsesRequest{
Model: sdk.NewModelID("claude-sonnet-4-5"),
System: sdk.Ptr("You have access to weather data."),
Input: input,
Tools: tools,
}
resp, _ := client.Responses.Create(ctx, req)
if !resp.HasToolCalls() {
// No more tool calls - we're done
fmt.Println(resp.Text())
break
}
// Process tool calls
input = append(input, sdk.AssistantMessageWithToolCalls("", resp.ToolCalls()))
for _, call := range resp.ToolCalls() {
args, _ := sdk.ParseToolArgsMap(call)
result := executeTool(string(call.Function.Name), args)
input = append(input, sdk.MustToolResultMessage(call.ID, result))
}
}
Tool Registry
Use a registry for cleaner tool management and automatic dispatch:
import { ToolRegistry } from "@modelrelay/sdk";
const registry = new ToolRegistry()
.register("get_weather", async (args) => {
const weather = await fetchWeather(args.location);
return { temp: weather.temp, condition: weather.condition };
})
.register("search", async (args) => {
const results = await searchDatabase(args.query);
return { results };
});
// Execute all tool calls from a response
const results = await registry.executeAll(response.toolCalls);
// Convert results to messages for next request
const messages = registry.resultsToMessages(results);
// Check for errors
for (const result of results) {
if (result.error) {
console.error(`Tool ${result.toolName} failed: ${result.error}`);
}
}
registry := sdk.NewToolRegistry().
Register("get_weather", func(args map[string]any, call llm.ToolCall) (any, error) {
location := args["location"].(string)
weather, err := fetchWeather(location)
if err != nil {
return nil, err
}
return map[string]any{"temp": weather.Temp, "condition": weather.Condition}, nil
}).
Register("search", func(args map[string]any, call llm.ToolCall) (any, error) {
query := args["query"].(string)
results, err := searchDatabase(query)
if err != nil {
return nil, err
}
return map[string]any{"results": results}, nil
})
// Execute all tool calls from a response
results := registry.ExecuteAll(resp.ToolCalls())
// Convert results to messages for next request
messages := registry.ResultsToMessages(results)
// Check for errors
for _, result := range results {
if result.Error != nil {
fmt.Printf("Tool %s failed: %s\n", result.ToolName, result.Error)
}
}
use modelrelay::{tool, ToolRegistry};
#[tool(description = "Get weather for a location")]
fn get_weather(#[doc = "City"] location: String) -> String {
format!("Weather in {}: 72°F, sunny", location)
}
#[tool(description = "Search the database")]
fn search(#[doc = "Query"] query: String) -> String {
format!("Results for '{}': ...", query)
}
let mut registry = ToolRegistry::new();
registry.register(get_weather);
registry.register(search);
// Execute a tool call
if let Some(tool_call) = response.tool_call() {
let result = registry.call(&tool_call)?;
println!("Result: {}", result);
}
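A registry pairs naturally with the tool loop shown earlier. A TypeScript sketch of the combined pattern, using only helpers from this page and assuming tools holds the matching function tool definitions:

// Sketch: the tool loop from above, with dispatch handled by the registry.
let messages: any[] = [{ role: "user", content: "What's the weather in Tokyo?" }];
while (true) {
  const response = await mr.responses.create(
    mr.responses.new().model("claude-sonnet-4-5").messages(messages).tools(tools).build()
  );
  if (!hasToolCalls(response)) {
    console.log(response.output[0].content);
    break;
  }
  const toolCalls = response.output[0].toolCalls;
  messages.push(assistantMessageWithToolCalls("", toolCalls));
  // Execute every call and append the results as messages
  const results = await registry.executeAll(toolCalls);
  messages.push(...registry.resultsToMessages(results));
}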
Local Tool Packs
Local tool packs provide pre-built tools for common operations that execute on your machine. They handle security, sandboxing, and schema generation for you.
Bash Tool Pack
Execute bash commands locally with a configurable policy:
use modelrelay::{
Client, LocalBashToolPack, ToolRegistry, ResponsesRequest, InputItem,
BashPolicy, with_bash_policy, with_bash_timeout,
};
use std::time::Duration;
// Create bash tool pack with security rules
let bash_pack = LocalBashToolPack::new(".", vec![
with_bash_policy(
BashPolicy::new()
.allow_command("gh") // Allow GitHub CLI
.allow_command("git") // Allow git commands
),
with_bash_timeout(Duration::from_secs(30)),
])?;
// Register handlers
let mut registry = ToolRegistry::new();
bash_pack.register_into(&mut registry);
// Get tool definitions for API request
let tools = bash_pack.tool_definitions();
// Make request with tools
let client = Client::from_env()?;
let mut messages = vec![InputItem::user("List my open GitHub issues")];
let mut resp = client.responses().create(ResponsesRequest {
model: "claude-sonnet-4-5".into(),
tools: Some(tools),
input: messages.clone(),
..Default::default()
}).await?;
// Execute tool calls locally
while let Some(tool_calls) = resp.tool_calls() {
let results = registry.execute_all(&tool_calls).await;
messages.extend(registry.results_to_messages(&results));
// Continue conversation with results
resp = client.responses().create(ResponsesRequest {
model: "claude-sonnet-4-5".into(),
tools: Some(bash_pack.tool_definitions()),
input: messages.clone(),
..Default::default()
}).await?;
}
println!("{}", resp.text());
import (
sdk "github.com/modelrelay/sdk-go"
"time"
)
// Create bash tool pack with security rules
bashPack := sdk.NewLocalBashToolPack(".",
sdk.WithLocalBashAllowRules(
sdk.BashCommandPrefix("gh "), // Allow GitHub CLI
sdk.BashCommandPrefix("git "), // Allow git commands
),
sdk.WithLocalBashTimeout(30 * time.Second),
)
// Register handlers
registry := sdk.NewToolRegistry()
bashPack.RegisterInto(registry)
// Get tool definitions for API request
tools := bashPack.ToolDefinitions()
// Make request with tools
client := sdk.NewClient(sdk.WithAPIKeyFromEnv())
messages := []sdk.InputItem{sdk.UserMessage("List my open GitHub issues")}
resp, _ := client.Responses().Create(ctx, sdk.ResponsesRequest{
Model: "claude-sonnet-4-5",
Tools: tools,
Input: messages,
})
// Execute tool calls locally
for resp.HasToolCalls() {
results := registry.ExecuteAll(resp.ToolCalls())
messages = append(messages, registry.ResultsToMessages(results)...)
resp, _ = client.Responses().Create(ctx, sdk.ResponsesRequest{
Model: "claude-sonnet-4-5",
Tools: tools,
Input: messages,
})
}
fmt.Println(resp.Text())
Security model: Bash is deny-by-default. You must explicitly allow commands via BashPolicy:
- BashPolicy::new().allow_command("git") - Allow specific commands (normalized)
- BashPolicy::new().allow_all().deny_command("rm") - Allow all, then deny specific commands
When allow_all() is not set, every command in a pipeline/chain must be explicitly allowed.
By default, BashPolicy denies:
- Command chaining (;, &&, ||)
- Pipes to shells (| bash, | sh, | zsh)
- Subshells ($(), backticks, bash -c)
- eval/exec usage
You can opt out explicitly (use sparingly):
- allow_chains()
- allow_pipe_to_shell()
- allow_subshells()
- allow_eval()
Security caveat: This is defense-in-depth, not a sandbox. A determined attacker can still bypass lightweight tokenization (e.g., dynamic eval, heavy obfuscation). Use the bash tool only when necessary and prefer the filesystem tool pack for read-only tasks.
Configuration options:
- with_bash_policy(policy) - Set the BashPolicy (required to enable bash)
- with_bash_timeout(duration) - Kill commands after timeout (default: 10s)
- with_bash_max_output_bytes(n) - Truncate output (default: 32KB)
- with_bash_inherit_env() - Pass environment variables through (default: empty env)
Filesystem Tool Pack
Scoped, mostly read-only filesystem access for safer use cases:
use modelrelay::{LocalFSToolPack, ToolRegistry};
let fs_pack = LocalFSToolPack::new("./project", vec![])?;
let mut registry = ToolRegistry::new();
fs_pack.register_into(&mut registry);
// Provides four tools:
// - fs_read_file: Read file contents
// - fs_list_files: List directory recursively
// - fs_search: Search for regex pattern in files
// - fs_edit: Replace exact strings in files
let tools = fs_pack.tool_definitions();
fsPack := sdk.NewLocalFSToolPack("./project")
registry := sdk.NewToolRegistry()
fsPack.RegisterInto(registry)
// Provides four tools:
// - fs_read_file: Read file contents
// - fs_list_files: List directory recursively
// - fs_search: Search for regex pattern in files
// - fs_edit: Replace exact strings in files
tools := fsPack.ToolDefinitions()
When to use which:
- Bash: Need to execute commands, write files, or call CLIs
- Filesystem: Only need read access, want maximum safety
Combining Multiple Packs
let bash_pack = LocalBashToolPack::new(".", bash_opts)?;
let fs_pack = LocalFSToolPack::new(".", vec![])?;
let mut registry = ToolRegistry::new();
bash_pack.register_into(&mut registry);
fs_pack.register_into(&mut registry);
// Combine tool definitions
let tools = [
bash_pack.tool_definitions(),
fs_pack.tool_definitions(),
].concat();
X/Twitter Search (Grok Only)
Search posts on X/Twitter using xAI’s Grok models:
const xTool = {
type: "x_search" as const,
xSearch: {
allowedHandles: ["@openai", "@anthropic", "@google"],
excludedHandles: ["@spam"],
fromDate: "2024-01-01",
toDate: "2024-12-31",
},
};
const req = mr.responses
.new()
.model("grok-4-1-fast-reasoning")
.user("What are AI companies saying about safety?")
.tool(xTool)
.build();
xTool := llm.Tool{
Type: llm.ToolTypeXSearch,
XSearch: &llm.XSearchConfig{
AllowedHandles: []string{"@openai", "@anthropic", "@google"},
ExcludedHandles: []string{"@spam"},
FromDate: "2024-01-01",
ToDate: "2024-12-31",
},
}
req, opts, _ := client.Responses.New().
Model(sdk.NewModelID("grok-4-1-fast-reasoning")).
User("What are AI companies saying about safety?").
Tool(xTool).
Build()
use modelrelay::{Tool, XSearchConfig};
let x_tool = Tool::x_search(XSearchConfig {
allowed_handles: Some(vec![
"@openai".into(),
"@anthropic".into(),
"@google".into(),
]),
excluded_handles: Some(vec!["@spam".into()]),
from_date: Some("2024-01-01".into()),
to_date: Some("2024-12-31".into()),
});
let response = ResponseBuilder::new()
.model("grok-4-1-fast-reasoning")
.user("What are AI companies saying about safety?")
.tool(x_tool)
.send(&client.responses())
.await?;
Code Execution
Let the model run code in a sandboxed environment:
const codeTool = {
type: "code_execution" as const,
codeExecution: {},
};
const req = mr.responses
.new()
.model("claude-sonnet-4-5")
.user("Calculate the first 20 Fibonacci numbers")
.tool(codeTool)
.build();
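Sending the request and reading the final text works the same as with other tools (assuming the response shape used throughout this page):

const response = await mr.responses.create(req);
console.log(response.output[0].content); // answer informed by the executed code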
Code execution capabilities vary by provider. Check provider documentation for supported languages and limitations.
Streaming Tool Calls
When streaming, tool calls arrive incrementally. Use an accumulator to collect them:
import { ToolCallAccumulator } from "@modelrelay/sdk";
const stream = await mr.responses.stream(req);
const accumulator = new ToolCallAccumulator();
for await (const event of stream) {
switch (event.type) {
case "message_delta":
if (event.textDelta) {
process.stdout.write(event.textDelta);
}
break;
case "tool_use_start":
case "tool_use_delta":
if (event.toolCallDelta) {
accumulator.processDelta(event.toolCallDelta);
}
break;
case "tool_use_stop":
const toolCalls = accumulator.getToolCalls();
for (const call of toolCalls) {
console.log(`\nTool: ${call.function?.name}`);
console.log(`Args: ${call.function?.arguments}`);
}
break;
}
}
stream, _ := client.Responses.Stream(ctx, req, opts...)
defer stream.Close()
accumulator := sdk.NewToolCallAccumulator()
for {
event, ok, err := stream.Next()
if err != nil || !ok {
break
}
switch event.Kind {
case llm.StreamEventKindMessageDelta:
fmt.Print(event.TextDelta)
case llm.StreamEventKindToolUseDelta:
if event.ToolCallDelta != nil {
accumulator.ProcessDelta(event.ToolCallDelta)
}
case llm.StreamEventKindToolUseStop:
calls := accumulator.GetToolCalls()
for _, call := range calls {
fmt.Printf("\nTool: %s\n", call.Function.Name)
fmt.Printf("Args: %s\n", call.Function.Arguments)
}
}
}
Error Handling & Retry
Handle tool execution errors and retry with model feedback:
import {
executeWithRetry,
hasRetryableErrors,
createRetryMessages,
} from "@modelrelay/sdk";
// Automatic retry for parse/validation errors
const results = await executeWithRetry(registry, response.toolCalls, {
maxRetries: 2,
onRetry: async (errorMessages, attempt) => {
console.log(`Retry attempt ${attempt}`);
// Send errors back to model for correction
messages.push(assistantMessageWithToolCalls("", response.toolCalls));
messages.push(...errorMessages);
const retryResp = await mr.responses.create(
mr.responses.new()
.model("claude-sonnet-4-5")
.messages(messages)
.tools(tools)
.build()
);
return retryResp.output[0]?.toolCalls || [];
},
});
// Check results
for (const result of results) {
if (result.error) {
if (result.isRetryable) {
console.log(`Retryable error: ${result.error}`);
} else {
console.error(`Fatal error: ${result.error}`);
}
}
}
results, err := sdk.ExecuteWithRetry(registry, resp.ToolCalls(), sdk.RetryOptions{
MaxRetries: 2,
OnRetry: func(errorMsgs []llm.InputItem, attempt int) ([]llm.ToolCall, error) {
fmt.Printf("Retry attempt %d\n", attempt)
// Send errors back to model for correction
input = append(input, sdk.AssistantMessageWithToolCalls("", resp.ToolCalls()))
input = append(input, errorMsgs...)
retryReq := &llm.ResponsesRequest{
Model: sdk.NewModelID("claude-sonnet-4-5"),
Input: input,
Tools: tools,
}
retryResp, err := client.Responses.Create(ctx, retryReq)
if err != nil {
return nil, err
}
return retryResp.ToolCalls(), nil
},
})
// Check results
for _, result := range results {
if result.Error != nil {
if result.IsRetryable {
fmt.Printf("Retryable error: %s\n", result.Error)
} else {
fmt.Printf("Fatal error: %s\n", result.Error)
}
}
}
Best Practices
- Write clear tool descriptions — Models use descriptions to decide when to call tools. Be specific about what each tool does and when to use it.
- Validate arguments — Always validate tool arguments before execution. Use typed parsing with schemas to catch errors early.
- Handle errors gracefully — Return structured error messages that help the model understand what went wrong and retry appropriately.
- Limit tool iterations — Cap the number of tool loop iterations to prevent runaway costs; most tasks complete in 2-5 iterations (see the sketch after this list).
- Use registries for complex apps — Tool registries centralize tool management and simplify the execution loop.
- Consider streaming for long operations — Stream responses when tools might take time, so users see progress.
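For example, a bounded version of the tool loop from this page (reusing mr, messages, tools, and executeTool from the Complete Tool Loop section; MAX_TOOL_ITERATIONS is an illustrative constant, not an SDK setting):

// Sketch: cap iterations so a runaway model can't loop forever.
const MAX_TOOL_ITERATIONS = 5; // illustrative limit, not an SDK option
for (let i = 0; i < MAX_TOOL_ITERATIONS; i++) {
  const response = await mr.responses.create(
    mr.responses.new().model("claude-sonnet-4-5").messages(messages).tools(tools).build()
  );
  if (!hasToolCalls(response)) {
    console.log(response.output[0].content);
    break; // model finished without requesting tools
  }
  const toolCalls = response.output[0].toolCalls;
  messages.push(assistantMessageWithToolCalls("", toolCalls));
  for (const call of toolCalls) {
    const result = await executeTool(call.function.name, JSON.parse(call.function.arguments));
    messages.push(toolResultMessage(call.id, result));
  }
}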
Next Steps
- Streaming — Real-time response streaming with tool calls
- Structured Output — Get typed JSON responses
- Workflows — Build multi-step AI pipelines