# First Request
This guide walks you through making your first request to the ModelRelay API.
## Prerequisites
Before you begin, make sure you have:
- A ModelRelay account with a project created
- A secret API key (`mr_sk_*`) from your project dashboard
- At least one provider key configured (Anthropic, OpenAI, etc.)
If you haven’t set these up yet, see the Getting Started guide.
## Install the SDK
**TypeScript**

```shell
npm install @modelrelay/sdk
```

**Go**

```shell
go get github.com/modelrelay/sdk-go
```

**Rust**

```toml
# Cargo.toml
[dependencies]
modelrelay = "0.93"
tokio = { version = "1", features = ["full"] }
```
## Make a Request
**TypeScript**

```typescript
import { ModelRelay } from "@modelrelay/sdk";

const mr = new ModelRelay({
  apiKey: process.env.MODELRELAY_API_KEY,
});

const answer = await mr.responses.text(
  "claude-sonnet-4-20250514",
  "You are a helpful assistant.",
  "What is the capital of France?"
);

console.log(answer);
// "The capital of France is Paris."
```
**Go**

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	sdk "github.com/modelrelay/sdk-go"
)

func main() {
	client, err := sdk.NewClientWithKey(
		sdk.MustParseAPIKey(os.Getenv("MODELRELAY_API_KEY")),
	)
	if err != nil {
		log.Fatal(err)
	}

	answer, err := client.Responses.Text(
		context.Background(),
		sdk.NewModelID("claude-sonnet-4-20250514"),
		"You are a helpful assistant.",
		"What is the capital of France?",
	)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(answer)
	// "The capital of France is Paris."
}
```
**Rust**

```rust
use modelrelay::{ApiKey, Client, Config, ResponseBuilder};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new(Config {
        api_key: Some(ApiKey::parse(&std::env::var("MODELRELAY_API_KEY")?)?),
        ..Default::default()
    })?;

    let answer = ResponseBuilder::text_prompt(
        "You are a helpful assistant.",
        "What is the capital of France?",
    )
    .model("claude-sonnet-4-20250514")
    .send_text(&client.responses())
    .await?;

    println!("{}", answer);
    // "The capital of France is Paris."
    Ok(())
}
```
## Streaming
For real-time responses, use streaming:
**TypeScript**

```typescript
const stream = await mr.responses.streamTextDeltas(
  "claude-sonnet-4-20250514",
  "You are a helpful assistant.",
  "Write a haiku about programming."
);

for await (const delta of stream) {
  process.stdout.write(delta);
}
```
**Go**

```go
stream, err := client.Responses.StreamTextDeltas(
	context.Background(),
	sdk.NewModelID("claude-sonnet-4-20250514"),
	"You are a helpful assistant.",
	"Write a haiku about programming.",
)
if err != nil {
	log.Fatal(err)
}
defer stream.Close()

for {
	delta, ok, err := stream.Next()
	if err != nil {
		log.Fatal(err)
	}
	if !ok {
		break
	}
	fmt.Print(delta)
}
```
**Rust**

```rust
use futures_util::StreamExt;

let mut stream = ResponseBuilder::text_prompt(
    "You are a helpful assistant.",
    "Write a haiku about programming.",
)
.model("claude-sonnet-4-20250514")
.stream_deltas(&client.responses())
.await?;

while let Some(delta) = stream.next().await {
    print!("{}", delta?);
}
```
## Full Request Builder
For more control over request parameters:
**TypeScript**

```typescript
const response = await mr.responses.create(
  mr.responses
    .new()
    .model("claude-sonnet-4-20250514")
    .system("You are a helpful assistant.")
    .user("What is 2 + 2?")
    .maxOutputTokens(256)
    .build()
);

console.log(response.output);
// Array of output items (text, tool calls, etc.)
console.log(response.usage);
// { input_tokens: 25, output_tokens: 12 }
```
**Go**

```go
req, opts, err := client.Responses.New().
	Model(sdk.NewModelID("claude-sonnet-4-20250514")).
	System("You are a helpful assistant.").
	User("What is 2 + 2?").
	MaxOutputTokens(256).
	Build()
if err != nil {
	log.Fatal(err)
}

response, err := client.Responses.Create(ctx, req, opts...)
if err != nil {
	log.Fatal(err)
}

fmt.Println(response.AssistantText())
fmt.Printf("Usage: %+v\n", response.Usage)
```
**Rust**

```rust
let response = ResponseBuilder::new()
    .model("claude-sonnet-4-20250514")
    .system("You are a helpful assistant.")
    .user("What is 2 + 2?")
    .max_output_tokens(256)
    .send(&client.responses())
    .await?;

println!("{}", response.text());
println!("Usage: {:?}", response.usage);
```
## Using curl
You can also call the API directly:
```shell
curl -X POST https://api.modelrelay.ai/api/v1/responses \
  -H "Authorization: Bearer $MODELRELAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "input": [
      {
        "type": "message",
        "role": "system",
        "content": [{ "type": "text", "text": "You are a helpful assistant." }]
      },
      {
        "type": "message",
        "role": "user",
        "content": [{ "type": "text", "text": "What is the capital of France?" }]
      }
    ]
  }'
```
## Response Format
A successful response looks like:
```json
{
  "id": "resp_abc123",
  "model": "claude-sonnet-4-20250514",
  "output": [
    {
      "type": "message",
      "role": "assistant",
      "content": [
        {
          "type": "text",
          "text": "The capital of France is Paris."
        }
      ]
    }
  ],
  "stop_reason": "end_turn",
  "usage": {
    "input_tokens": 25,
    "output_tokens": 12
  }
}
```
### Response Fields
| Field | Description |
|---|---|
| `id` | Unique response identifier |
| `model` | The model that generated the response |
| `output` | Array of output items (messages, tool calls) |
| `stop_reason` | Why generation stopped (`end_turn`, `max_tokens`, `tool_calls`) |
| `usage` | Token counts for billing |
## Available Models
ModelRelay supports models from multiple providers. Common options:
| Model | Provider | Description |
|---|---|---|
| `claude-sonnet-4-20250514` | Anthropic | Fast, capable general-purpose model |
| `claude-opus-4-20250514` | Anthropic | Most capable Claude model |
| `gpt-4o` | OpenAI | OpenAI’s flagship model |
| `gpt-4o-mini` | OpenAI | Fast and cost-effective |
Use the Models API to list all available models for your project.
## Error Handling
**TypeScript**

```typescript
import { ModelRelay, APIError, ConfigError } from "@modelrelay/sdk";

try {
  const answer = await mr.responses.text(
    "claude-sonnet-4-20250514",
    "You are helpful.",
    "Hello!"
  );
} catch (error) {
  if (error instanceof APIError) {
    console.error(`API error ${error.status}: ${error.message}`);
  } else if (error instanceof ConfigError) {
    console.error(`Config error: ${error.message}`);
  } else {
    throw error;
  }
}
```
**Go**

```go
answer, err := client.Responses.Text(ctx, model, system, user)
if err != nil {
	var apiErr *sdk.APIError
	var transportErr sdk.TransportError
	if errors.As(err, &apiErr) {
		log.Printf("API error %d: %s", apiErr.Status, apiErr.Message)
	} else if errors.As(err, &transportErr) {
		log.Printf("Transport error: %s", transportErr.Message)
	} else {
		return err
	}
}
```
**Rust**

```rust
use modelrelay::errors::{APIError, Error};

match ResponseBuilder::text_prompt(system, user)
    .model(model)
    .send_text(&client.responses())
    .await
{
    Ok(answer) => println!("{}", answer),
    Err(Error::API(APIError { status, message, .. })) => {
        eprintln!("API error {}: {}", status, message);
    }
    Err(Error::Transport(e)) => {
        eprintln!("Transport error: {}", e.message);
    }
    Err(e) => return Err(e.into()),
}
```
## Next Steps
- Authentication - Learn about API key types
- Streaming - Real-time response streaming
- Tool Use - Let models call functions
- Structured Output - Get typed JSON responses