CLI (mrl)
A lightweight command-line tool for chatting with AI models, running agentic tasks, and managing ModelRelay resources.
Quick Start
Chat directly from the command line:
mrl "What is 2 + 2?"
mrl "Write a haiku" --stream
mrl "Explain recursion" --model gpt-5.2 --usage
# Pipe text content into the prompt
cat README.md | mrl "summarize this"
echo "What is the capital of France?" | mrl
git diff | mrl "explain these changes"
Installation
Homebrew (macOS/Linux)
brew install modelrelay/tap/mrl
To upgrade:
brew upgrade mrl
Manual Download
Download the latest release from releases.modelrelay.ai/mrl and add it to your PATH.
From Source
go install github.com/modelrelay/mrl@latest
Configuration
Environment Variables
export MODELRELAY_API_KEY=mr_sk_...
export MODELRELAY_PROJECT_ID=... # UUID (optional default)
export MODELRELAY_API_BASE_URL=... # optional
export MODELRELAY_MODEL=... # default model
Config File
Create ~/.config/mrl/config.toml:
[profiles.default]
api_key = "mr_sk_..."
base_url = "https://api.modelrelay.ai/api/v1"
project_id = "<uuid>"
model = "claude-sonnet-4-5"
output = "table" # or "json"
# Options for `mrl do` command
allow_all = true
trace = true
# allow = ["git ", "npm "] # alternative to allow_all
Managing Profiles
# Set values for a profile
mrl config set --profile dev --api-key mr_sk_...
# Switch to a profile
mrl config use dev
# Show current config
mrl config show
Running Agents
Run a Deployed Agent
Run an agent deployed to ModelRelay by its slug:
mrl agent run researcher --input "Analyze Q4 sales"
Test with Mocked Tools
Test agents locally with mocked tool responses:
mrl agent test researcher \
--input "Analyze Q4 sales" \
--mock-tools ./mocks.json \
--trace
JSON Input
For complex inputs, use a JSON file:
mrl agent test researcher \
--input-file ./inputs.json \
--output ./trace.json \
--json
Local Tool Loops
Run agentic loops locally with the model calling tools on your machine.
Basic Loop with Bash
Enable the bash tool (deny-by-default) with allowed command prefixes:
mrl agent loop \
--model claude-sonnet-4-5 \
--tool bash \
--bash-allow "git " \
--input "List recent commits and summarize them"
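Deny-by-default means a command runs only when it matches an allowed prefix (or `--allow-all` is set). Conceptually, the check might look like this sketch (not the actual implementation; mrl's matching rules may differ):

```python
def is_allowed(command: str, allow_prefixes: list[str], allow_all: bool = False) -> bool:
    """Deny-by-default: run a command only if allow_all is set or the
    command starts with one of the allowed prefixes."""
    if allow_all:
        return True
    return any(command.startswith(prefix) for prefix in allow_prefixes)

# The trailing space in "git " matters: it permits "git log" but not "gitk".
print(is_allowed("git log --oneline", ["git "]))  # True
print(is_allowed("gitk", ["git "]))               # False
```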
With Task Tracking
Include tasks_write for progress tracking:
mrl agent loop \
--model claude-sonnet-4-5 \
--tool bash \
--tool tasks_write \
--state-ttl-sec 86400 \
--tasks-output ./tasks.json \
--input "Audit this repo and track your progress"
Filesystem Tools
Enable local filesystem tools (fs.*):
mrl agent loop \
--model claude-sonnet-4-5 \
--tool fs \
--input "Search for TODOs in this repo"
Tool Manifest
Load tools from a TOML or JSON manifest file. CLI flags override manifest values.
Create tools.toml:
tool_root = "."
tools = ["bash", "tasks_write"]
state_ttl_sec = 86400
[bash]
allow = ["git ", "rg "]
timeout = "15s"
max_output_bytes = 64000
[tasks_write]
output = "tasks.json"
print = true
[fs]
ignore_dirs = ["node_modules", ".git"]
search_timeout = "3s"
[[custom]]
name = "custom.echo"
description = "Echo input as JSON"
command = ["cat"]
schema = { type = "object", properties = { message = { type = "string" } }, required = ["message"] }
Run with:
mrl agent loop --model claude-sonnet-4-5 --tools-file ./tools.toml --input "Audit this repo"
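The `cat` example above suggests a custom tool receives its arguments as JSON on stdin and returns its result on stdout. Assuming that contract holds (an inference from the example, not documented behavior), a slightly richer tool could be a small script pointed at by the manifest's `command`:

```python
#!/usr/bin/env python3
# Hypothetical custom tool matching the schema above: reads
# {"message": "..."} as JSON from stdin, echoes it back uppercased.
import json
import sys


def handle(args: dict) -> dict:
    return {"echo": args["message"].upper()}


if __name__ == "__main__":
    json.dump(handle(json.load(sys.stdin)), sys.stdout)
```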
Local RLM Sessions
Run a local RLM session where Python executes on your machine and LLM calls go through ModelRelay (uses your configured default model unless you pass --model).
# Pipe a file into the local Python sandbox
cat large_dataset.csv | mrl rlm "Summarize the data and compute key stats"
Attach local files by path:
mrl rlm "Summarize the data" -a ./large_dataset.csv
Multiple files (shell expands globs before mrl runs):
mrl rlm "Summarize all datasets" -a ./data/*.csv -a ./logs/*.json
Use --remote to run hosted RLM via /rlm/execute. Remote mode only supports inline text attachments (no local file paths).
Flags
| Flag | Description |
|---|---|
| `-a, --attachment` | Attach a local file (repeatable; use `-` for stdin) |
| `--attachment-type` | Override attachment MIME type (useful for stdin) |
| `--attach-stdin` | Attach stdin as a file |
| `--max-iterations` | Max code generation cycles (default: 10) |
| `--max-subcalls` | Max `llm_query`/`llm_batch` calls (default: 50) |
| `--max-depth` | Max recursion depth (default: 1) |
| `--exec-timeout-ms` | Python execution timeout in ms (0 uses interpreter default) |
| `--python` | Python executable (default: `python3`) |
| `--max-inline-bytes` | Max inline context bytes (0 uses interpreter default) |
| `--max-total-bytes` | Max total context bytes (0 uses interpreter default) |
| `--inline-text-max-bytes` | Max inline text bytes per file (0 uses default 1MB) |
| `--system` | Custom instructions prepended to the default RLM system prompt |
| `--system-override` | Replace the entire system prompt instead of prepending |
| `--remote` | Run hosted RLM via `/rlm/execute` instead of local Python |
The CLI builds a JSON context from attached files and exposes it as context in Python. Small text files are also loaded into context["files"][i]["text"] for easier scanning.
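As an illustration of that shape, generated Python inside the sandbox might scan inlined text like this (the `context` dict below is a stand-in; `files` and `text` come from the description above, while `name` is an assumed field):

```python
# Stand-in for the context object the CLI injects into the sandbox.
context = {
    "files": [
        {"name": "large_dataset.csv", "text": "id,value\n1,10\n2,20\n3,30\n"},
    ],
}


def row_counts(ctx: dict) -> dict:
    """Count data rows in each inlined text file (header excluded)."""
    return {
        f["name"]: len(f["text"].splitlines()) - 1
        for f in ctx["files"]
        if f.get("text")
    }


print(row_counts(context))  # {'large_dataset.csv': 3}
```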
Quick Tasks with do
The do command is a simpler alternative for quick agentic tasks with bash. It runs a local tool loop where the model executes shell commands to complete your task.
Basic Usage
mrl do "commit my changes" --allow-all
Configuration
Set defaults in your config to avoid repeating flags:
mrl config set --model claude-sonnet-4-5 --allow-all --trace
Then simply run:
mrl do "commit my changes"
How It Works
The do command runs an agentic loop: it calls the /responses API, executes any tool calls locally, and continues until the model completes the task.
Here’s the flow for mrl do "commit my changes":
sequenceDiagram
    participant CLI as mrl CLI
    participant API as /responses API
    participant Local as Local Shell

    Note over CLI: User: "commit my changes"

    rect rgb(40, 40, 40)
        Note over CLI,Local: Turn 1
        CLI->>API: POST /responses<br/>[system, user]
        API-->>CLI: tool_call: bash<br/>args: "git status"
        CLI->>Local: git status
        Local-->>CLI: "Changes not staged"
    end

    rect rgb(40, 40, 40)
        Note over CLI,Local: Turn 2
        CLI->>API: POST /responses<br/>[...+ tool_result]
        API-->>CLI: tool_call: bash<br/>args: "git diff"
        CLI->>Local: git diff
        Local-->>CLI: shows actual changes
    end

    rect rgb(40, 40, 40)
        Note over CLI,Local: Turn 3
        CLI->>API: POST /responses<br/>[...+ tool_result]
        API-->>CLI: tool_call: bash<br/>args: "git add && commit"
        CLI->>Local: git add . && git commit -m "..."
        Local-->>CLI: [abc123] feat: descriptive msg
    end

    rect rgb(40, 40, 40)
        Note over CLI,Local: Turn 4
        CLI->>API: POST /responses<br/>[...+ tool_result]
        API-->>CLI: text: "Committed."<br/>tool_calls: []
    end

    Note over CLI: No tool calls → exit loop
Key points:
- 4 API calls to /responses
- 3 local bash executions (status → diff → add+commit)
- Model reads the diff before writing a descriptive commit message
- Model decides when the task is complete (no more tool calls)
- All tool execution happens locally on your machine
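That loop can be sketched as follows (`call_responses` and `run_bash` are hypothetical stand-ins for the /responses request and local shell execution; the simplified message dicts are not the real Responses API wire format):

```python
def run_do_loop(call_responses, run_bash, task: str, max_turns: int = 50) -> str:
    """Simplified agentic loop: call the API, execute any tool calls
    locally, append their results, and stop when the model returns
    no tool calls."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = call_responses(messages)
        calls = reply.get("tool_calls", [])
        if not calls:
            return reply.get("text", "")
        for call in calls:
            result = run_bash(call["command"])
            messages.append({"role": "tool", "content": result})
    raise RuntimeError("max turns exceeded")
```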
Flags
| Flag | Description |
|---|---|
| `--model` | Model ID (overrides config) |
| `--system` | Custom system prompt |
| `--allow` | Allow bash command prefix (repeatable) |
| `--allow-all` | Allow all bash commands |
| `--max-turns` | Max tool loop iterations (default: 50) |
| `--trace` | Show commands as they execute |
Config Options
These can be set with mrl config set:
| Option | Description |
|---|---|
| `--model` | Default model for all commands |
| `--allow-all` | Allow all bash commands by default |
| `--allow` | Default allowed command prefixes |
| `--trace` | Show commands by default |
Resource Management
Customers
# List customers
mrl customer list
# Get a customer
mrl customer get <customer_id>
# Create a customer
mrl customer create --external-id user_123 --email user@example.com
Tiers
# List tiers
mrl tier list
# Get a tier
mrl tier get <tier_id>
Usage
# View account usage
mrl usage account
Utility Commands
List Models
# List all models
mrl model list
# Filter by provider and capability
mrl model list --provider openai --capability text_generation
# Include deprecated models
mrl model list --include-deprecated --json
Lint JSON Schemas
Validate JSON schemas for provider compatibility:
# Basic lint
mrl schema lint ./schema.json
# Validate for specific provider
mrl schema lint ./schema.json --provider openai
# Validate tool schema
mrl schema lint ./tool-schema.json --provider openai --tool-schema
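For reference, a small schema file to run the linter against (standard JSON Schema; how strictly fields like `additionalProperties` are required varies by provider, which is the kind of issue a provider-specific lint can surface):

```json
{
  "type": "object",
  "properties": {
    "city": { "type": "string" },
    "temperature_c": { "type": "number" }
  },
  "required": ["city"],
  "additionalProperties": false
}
```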
Version
mrl version
Output Formats
Table output is the default. Use --json for machine-readable output on any command:
mrl customer list --json
mrl model list --json
Global Flags
| Flag | Description |
|---|---|
| `--profile` | Config profile to use |
| `--api-key` | API key (overrides config) |
| `--base-url` | API base URL (overrides config) |
| `--project` | Project UUID (overrides config) |
| `--model` | Model ID (overrides config) |
| `--json` | Output JSON instead of table |
| `--timeout` | Request timeout (default: 30s) |
| `--stream` | Stream output as it’s generated (chat mode) |
| `--usage` | Show token usage after response (chat mode) |
| `--system` | System prompt (chat mode) |
Next Steps
- First Request - Make your first API call
- Go SDK - Use the Go SDK for programmatic access
- TypeScript SDK - Use the TypeScript SDK