Configuration Guide

Configure OpenRouter for OpenClaw

Access over 100 AI models through OpenRouter's unified API. This guide covers setup, model selection, cost optimization, and automatic failover configuration for maximum reliability. Follow this guide to set it up yourself, or let our team handle it for you.

Prerequisites

  • OpenClaw installed and running
  • OpenRouter account (openrouter.ai)
  • OpenRouter API key
  • Understanding of AI model capabilities

How to Complete This Guide

1. Create OpenRouter Account: Sign up at openrouter.ai and add payment information for API access.
2. Generate API Key: Create a new API key in the dashboard and copy it securely.
3. Set Environment Variable: Configure OPENAI_API_KEY or OPENROUTER_API_KEY with your key.
4. Configure Model: Update openclaw.json with the OpenRouter base URL and model identifier.
5. Test Connection: Restart OpenClaw and verify responses work through OpenRouter.
6. Optimize Settings: Configure failover, cost limits, and model selection for your needs.

Why Use OpenRouter with OpenClaw

OpenRouter serves as a unified gateway to over 100 AI models from multiple providers, offering significant advantages for OpenClaw deployments. Instead of managing separate API keys and configurations for Anthropic, OpenAI, Google, Meta, and other providers, OpenRouter provides single-key access to all supported models with automatic routing, failover, and cost optimization capabilities.

For OpenClaw users, OpenRouter simplifies model experimentation and provides resilience against provider outages. You can easily switch between Claude, GPT-4, Gemini, Llama, Mistral, and dozens of other models without reconfiguring authentication. The unified billing and usage tracking makes cost management straightforward across all models.

Key Benefits

Access to diverse models enables matching the right model to each task. Claude excels at complex reasoning and long context; GPT-4 offers strong general capabilities; open-source models like Llama provide cost-effective options for simpler tasks. OpenRouter's routing can automatically select optimal models based on your criteria.

Provider redundancy protects against outages. If Anthropic's API experiences issues, OpenRouter can automatically route to alternative models. This failover capability is valuable for always-on assistants where availability matters.

Cost optimization through model selection helps manage expenses. OpenRouter's transparent pricing and model comparison tools help identify cost-effective options. For high-volume usage, selecting efficient models for routine tasks while reserving premium models for complex work significantly reduces costs.

OpenRouter Account Setup

Before configuring OpenClaw, set up your OpenRouter account and understand the billing model. This section walks through account creation, API key generation, and initial setup.

Creating an Account

Visit openrouter.ai and create an account using email or OAuth providers like Google or GitHub. OpenRouter offers pay-as-you-go pricing with no monthly minimums, making it accessible for experimentation and personal use.

After account creation, add payment information to enable API access beyond any free tier. OpenRouter supports credit cards and can send low-balance alerts. Set a reasonable spending limit to prevent unexpected charges during development or misconfiguration.

Generating API Keys

Navigate to the API Keys section in your OpenRouter dashboard. Create a new API key with a descriptive name like "OpenClaw Production" to track usage. Copy the key immediately as it won't be shown again. Store it securely; this key provides access to all models on your account.

For different deployments or testing environments, consider creating separate API keys. This enables tracking usage by environment and revoking specific keys if compromised without affecting other deployments.

Understanding Pricing

OpenRouter pricing is model-dependent, charged per token (input and output separately). Prices are clearly listed on the models page. Premium models like Claude Opus or GPT-4 cost more than efficient models like Claude Haiku or GPT-3.5-turbo. Open-source models often have the lowest pricing.

Review pricing for models you plan to use and estimate costs based on expected usage. A personal assistant with moderate usage typically costs $5-20/month with premium models, less with efficient model selection.
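
As a quick sanity check, you can sketch the arithmetic yourself. The rates and volumes below are illustrative assumptions, not current OpenRouter prices:

Terminal
# Back-of-envelope monthly estimate: 50 requests/day, ~2,000 input and 500 output
# tokens per request, at assumed rates of $3 per million input tokens and
# $15 per million output tokens (check openrouter.ai/models for real pricing)
echo 'scale=2; 50*30 * (2000*3 + 500*15) / 1000000' | bc
# => 20.25 (USD per month)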

Basic OpenClaw Configuration

Configure OpenClaw to use OpenRouter as its AI provider. This involves setting the API key and specifying the model identifier in OpenRouter's format.

Setting the API Key

Set your OpenRouter API key as an environment variable. Because OpenRouter's API is OpenAI-compatible, you can place the key in OPENAI_API_KEY, or use the OpenRouter-specific OPENROUTER_API_KEY variable. Keeping the key in an environment variable, rather than in configuration files, is the recommended approach.

Add the environment variable to your shell profile (.bashrc, .zshrc) for persistence, and ensure it's available to the OpenClaw daemon. On macOS with launchd, use launchctl setenv. On Linux with systemd, add it to the service file's Environment directive.
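
The exact commands depend on your platform. The snippets below are a sketch; the systemd service name openclaw is an assumption to adjust for your installation:

Terminal
# macOS (launchd): make the key visible to daemon processes
launchctl setenv OPENROUTER_API_KEY "your-openrouter-api-key"

# Linux (systemd): add the key to the service unit
# (service name "openclaw" is assumed; substitute your actual unit name)
sudo systemctl edit openclaw
# In the editor that opens, add:
#   [Service]
#   Environment="OPENROUTER_API_KEY=your-openrouter-api-key"
sudo systemctl restart openclaw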

Configuring the Model

Specify your chosen model in openclaw.json using OpenRouter's model identifier format. OpenRouter model IDs include the provider prefix, such as anthropic/claude-3-opus-20240229 or openai/gpt-4-turbo. Check OpenRouter's model listing for exact identifiers.

The agent section of your configuration sets the model and related parameters. Include the base URL override to route requests through OpenRouter instead of direct provider APIs.
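
A minimal excerpt showing just these fields is below; the complete file, including gateway settings and failover, appears in the Code Examples section:

~/.openclaw/openclaw.json (excerpt)
{
  "agent": {
    "model": "anthropic/claude-3.5-sonnet",
    "provider": {
      "baseUrl": "https://openrouter.ai/api/v1",
      "apiKey": "${OPENROUTER_API_KEY}"
    }
  }
}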

Verifying Configuration

After configuration, restart the OpenClaw daemon and test with a simple message. Check that responses come through and the model is correctly identified. The OpenRouter dashboard shows API calls, confirming successful integration.

Choosing the Right Model

OpenRouter provides access to many models with different capabilities, contexts, and costs. This section helps you choose appropriate models for your OpenClaw use case.

Premium Models

For complex reasoning, coding assistance, and tasks requiring deep understanding, premium models deliver the best results. Anthropic's Claude 3.5 Sonnet or Claude Opus offers strong reasoning with large context windows. OpenAI's GPT-4-turbo provides excellent general capabilities. These models cost more but produce higher quality outputs.

OpenClaw recommends Anthropic's Claude models for the best balance of capability and safety. Claude's longer context windows (up to 200K tokens) excel at maintaining conversation context and handling large documents.

Efficient Models

For routine tasks, simple queries, and cost-sensitive deployments, efficient models provide good results at lower cost. Claude Haiku, GPT-3.5-turbo, and Gemini Flash are significantly cheaper while handling most everyday assistant tasks well.

Consider using efficient models as your default, switching to premium models for specific complex tasks. OpenRouter's routing features can automate this selection based on query characteristics.

Open Source Models

Meta's Llama 3, Mistral, and other open-source models available through OpenRouter offer the most economical options. Quality varies by model and task type. These work well for experimentation, less critical tasks, or when cost is the primary constraint.

Specialized Models

Some models excel at specific tasks. Coding-focused models like DeepSeek Coder handle programming tasks effectively. Long-context models suit document analysis. Consider your primary use cases when selecting your default model.
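
To browse the catalog and exact identifiers from the command line, you can query OpenRouter's public model listing; the jq filter below assumes the response wraps models in a data array:

Terminal
# List available model identifiers (requires jq; drop the filter to see raw JSON)
curl -s https://openrouter.ai/api/v1/models | jq -r '.data[].id' | head -20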

Advanced Configuration Options

OpenRouter and OpenClaw support advanced configurations for failover, routing, and optimization. These features enhance reliability and cost efficiency.

Model Failover

Configure fallback models to handle provider outages automatically. If your primary model fails, OpenClaw can retry with alternative models. List fallback models in order of preference. This ensures continuity even during Anthropic or OpenAI service issues.

OpenRouter-Specific Headers

OpenRouter supports custom headers for app identification and usage tracking. Set X-Title (and optionally HTTP-Referer) so your application is identified in OpenRouter's logs and usage views. Preferences that control which provider serves a model offered by several providers are set through OpenRouter's request options rather than headers; check OpenRouter's documentation for the current fields.
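
To confirm the headers outside OpenClaw, you can attach them to a direct request; the title and referer values are placeholders:

Terminal
# Send attribution headers with a direct API call
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -H "X-Title: OpenClaw Assistant" \
  -H "HTTP-Referer: https://your-site.com" \
  -d '{"model": "anthropic/claude-3-haiku", "messages": [{"role": "user", "content": "ping"}]}'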

Rate Limiting and Quotas

Set usage limits to control costs. OpenRouter's dashboard allows setting monthly spending limits. Configure alerts for approaching limits. For shared deployments, these controls prevent runaway costs from misconfiguration or abuse.

Request Customization

OpenRouter passes through most provider parameters. Configure temperature, max tokens, and other generation parameters in your OpenClaw configuration. These affect model behavior consistently regardless of which underlying provider handles the request.

Streaming Configuration

OpenClaw uses streaming responses for better user experience. Verify streaming is enabled in both OpenClaw and your OpenRouter configuration. Streaming provides incremental responses rather than waiting for complete generation.
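
To verify streaming outside OpenClaw, OpenRouter accepts the standard OpenAI-style stream flag; -N stops curl from buffering the server-sent events:

Terminal
# Request a streamed response (tokens arrive as SSE chunks)
curl -N https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "anthropic/claude-3-haiku", "stream": true, "messages": [{"role": "user", "content": "Hello!"}]}'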

Cost Optimization Strategies

Optimize your OpenRouter usage costs through model selection, prompt engineering, and configuration tuning.

Token Efficiency

Costs scale with token usage (input + output). Reduce input tokens by trimming conversation history, summarizing context, and using efficient prompts. System prompts contribute to every request; keep them concise. Configure OpenClaw's context management to prune older messages.

Model Tiering

Use different models for different task complexities. Configure a cheaper default model for routine queries and switch to premium models for complex tasks requiring better reasoning. This hybrid approach balances quality and cost.

Response Length Control

Set appropriate max_tokens limits to prevent unnecessarily long responses. Most assistant interactions don't need 4000-token responses. Limiting output tokens reduces costs while maintaining useful response quality.

Caching Strategies

For repeated queries (common questions, system information), caching responses avoids redundant API calls. While OpenClaw doesn't include built-in caching, skills can implement caching logic for frequently accessed information.
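
As a rough illustration of the pattern rather than an OpenClaw API, a skill-style script could cache responses on disk keyed by a hash of the prompt; the cache directory here is an assumption:

Terminal
# File-based cache sketch: reuse a stored answer for a repeated prompt
PROMPT="What time zone is the office server set to?"
KEY=$(printf '%s' "$PROMPT" | shasum -a 256 | cut -d' ' -f1)   # use sha256sum on Linux if needed
CACHE=~/.openclaw/cache/$KEY.json                              # cache location is an assumption
mkdir -p ~/.openclaw/cache
if [ -f "$CACHE" ]; then
  cat "$CACHE"        # cache hit: no API call
else
  curl -s https://openrouter.ai/api/v1/chat/completions \
    -H "Authorization: Bearer $OPENROUTER_API_KEY" \
    -H "Content-Type: application/json" \
    -d "{\"model\": \"anthropic/claude-3-haiku\", \"messages\": [{\"role\": \"user\", \"content\": \"$PROMPT\"}]}" \
    | tee "$CACHE"    # cache miss: call the API and store the result
fi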

Usage Monitoring

Regularly review OpenRouter usage statistics to identify cost patterns. Look for unexpected usage spikes, inefficient prompts generating long responses, or opportunities to use cheaper models. The dashboard provides detailed breakdowns by model and time period.
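
Usage can also be checked from the command line; the key-info endpoint below is OpenRouter's documented route as we understand it, so confirm against their current API reference:

Terminal
# Show usage and limits for the current API key
curl -s https://openrouter.ai/api/v1/auth/key \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" | jq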

Troubleshooting OpenRouter Issues

This section addresses common issues when using OpenRouter with OpenClaw.

Authentication Errors

401 errors indicate API key issues. Verify the key is correctly set and accessible to the OpenClaw process. Check for whitespace or special characters accidentally included. Regenerate the key if issues persist.
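
Two quick shell checks, unrelated to OpenClaw itself, help isolate key problems:

Terminal
# Reveal stray whitespace around the key (only the key should appear between the brackets)
printf '[%s]\n' "$OPENROUTER_API_KEY"

# Minimal authenticated request; 200 means the key works, 401 means it does not
curl -s -o /dev/null -w "%{http_code}\n" \
  https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "anthropic/claude-3-haiku", "messages": [{"role": "user", "content": "ping"}]}'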

Model Not Found

404 errors for model requests usually indicate incorrect model identifiers. OpenRouter model IDs include a provider prefix (e.g., anthropic/claude-3-opus). Check the exact identifier on OpenRouter's model page and ensure it matches your configuration exactly.

Rate Limiting

429 errors indicate rate limiting. OpenRouter imposes rate limits based on your account tier. Reduce request frequency, implement retry logic with exponential backoff, or upgrade your account tier for higher limits.
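
Retry logic is a client-side pattern rather than an OpenRouter feature; a generic shell sketch with doubling delays looks like this:

Terminal
# Retry up to 5 times, doubling the wait after each 429 response
delay=1
for attempt in 1 2 3 4 5; do
  status=$(curl -s -o /tmp/openrouter_response.json -w "%{http_code}" \
    https://openrouter.ai/api/v1/chat/completions \
    -H "Authorization: Bearer $OPENROUTER_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"model": "anthropic/claude-3-haiku", "messages": [{"role": "user", "content": "Hello!"}]}')
  [ "$status" != "429" ] && break
  echo "Rate limited (attempt $attempt); retrying in ${delay}s" >&2
  sleep "$delay"
  delay=$((delay * 2))
done
cat /tmp/openrouter_response.json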

Timeout Errors

Long requests may timeout, especially with complex prompts or slow models. Increase timeout configuration in OpenClaw. Consider using faster models for time-sensitive interactions.

Billing Issues

If requests fail with billing-related errors, check your OpenRouter account balance and payment method. Add funds or update payment information to restore service. Set up low-balance alerts to avoid unexpected service interruptions.

Response Quality Issues

If response quality seems poor, verify you're reaching the intended model. Check OpenRouter logs to confirm which model handled requests. Some model identifiers route to different versions; verify you're using current, capable models.

Code Examples

Terminal
# Set OpenRouter API key as environment variable
export OPENROUTER_API_KEY="your-openrouter-api-key"

# Or use OPENAI_API_KEY for compatibility
export OPENAI_API_KEY="your-openrouter-api-key"

# Add to shell profile for persistence
echo 'export OPENROUTER_API_KEY="your-key"' >> ~/.zshrc
~/.openclaw/openclaw.json
{
  "agent": {
    "model": "anthropic/claude-3.5-sonnet",
    "maxTokens": 4096,
    "temperature": 0.7,
    "provider": {
      "baseUrl": "https://openrouter.ai/api/v1",
      "apiKey": "${OPENROUTER_API_KEY}"
    }
  },
  "gateway": {
    "port": 18789
  }
}
~/.openclaw/openclaw.json (with failover)
{
  "agent": {
    "model": "anthropic/claude-3.5-sonnet",
    "fallbackModels": [
      "openai/gpt-4-turbo",
      "anthropic/claude-3-haiku",
      "google/gemini-pro"
    ],
    "provider": {
      "baseUrl": "https://openrouter.ai/api/v1",
      "headers": {
        "X-Title": "OpenClaw Assistant",
        "HTTP-Referer": "https://your-site.com"
      }
    }
  }
}
Terminal
# Test OpenRouter connection directly
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -d '{
    "model": "anthropic/claude-3-haiku",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
Terminal
# Restart OpenClaw to apply configuration
openclaw gateway restart

# Check status
openclaw gateway status

# View logs for connection verification
openclaw gateway logs

Frequently Asked Questions

What are the advantages of OpenRouter over direct API access?

OpenRouter provides single-key access to 100+ models, automatic failover capabilities, unified billing, and easy model switching. You avoid managing multiple provider accounts and API keys. The slight per-token markup is offset by convenience and reliability features, especially for users wanting access to multiple model providers.

Which OpenRouter model is recommended for OpenClaw?

Anthropic's Claude 3.5 Sonnet (anthropic/claude-3.5-sonnet) offers the best balance of capability, context length, and cost for most OpenClaw use cases. For budget-conscious deployments, Claude Haiku provides good results at lower cost. For maximum capability, Claude Opus handles the most complex tasks.

How do I control costs when using OpenRouter?

Set spending limits in your OpenRouter dashboard. Use efficient models for routine tasks and reserve premium models for complex queries. Configure max_tokens to prevent unnecessarily long responses. Monitor usage regularly and adjust model selection based on actual needs. Token-efficient prompting reduces costs across all models.

Can I switch between models easily with OpenRouter?

Yes, switching models requires only changing the model identifier in your configuration. No authentication changes needed. This makes it easy to experiment with different models, implement model routing based on task type, or quickly switch if a provider has issues.

Does OpenRouter add latency to requests?

OpenRouter adds minimal latency (typically 50-100ms) for routing and processing. For most conversational use cases, this is imperceptible. The reliability benefits of automatic failover and the convenience of unified access typically outweigh this small latency addition.

How does failover work with OpenRouter?

Configure fallbackModels in your OpenClaw configuration to list alternative models. If the primary model fails (rate limit, outage, error), OpenClaw automatically retries with the next model in the list. This ensures continuity even during provider issues. OpenRouter's own routing can also handle some failover scenarios automatically.

Is OpenRouter suitable for production deployments?

Yes, OpenRouter is production-ready and used by many applications. Its reliability features (failover, multiple providers) actually enhance production stability compared to single-provider setups. Set appropriate spending limits and monitor usage for production deployments. Consider premium account tiers for higher rate limits if needed.

Professional Services

Need Help with OpenClaw?

Let our experts handle the setup, configuration, and ongoing management so you can focus on your business.

Free assessment • No commitment required

Don't Want to Configure OpenRouter Yourself?

Our team sets up OpenClaw with OpenRouter for businesses every day. We handle model selection, failover strategies, and cost optimization so you don't have to. Book a free consultation and we'll take care of everything.