Configure OpenAI for OpenClaw
Connect OpenClaw to OpenAI's GPT models, including GPT-4-turbo and GPT-3.5-turbo. This guide covers API setup, model selection, configuration options, and optimization strategies. Follow this guide to set it up yourself, or let our team handle it for you.
Prerequisites
- OpenClaw installed and running
- OpenAI account with API access
- OpenAI API key
- Payment method configured for API usage
OpenAI Integration Overview
OpenAI provides some of the most capable AI models available, including the GPT-4 family known for strong reasoning, coding abilities, and general knowledge. While OpenClaw recommends Anthropic's Claude models as the default choice, OpenAI integration offers an excellent alternative or complement, especially for users already in the OpenAI ecosystem or with specific use cases where GPT models excel.
This guide covers configuring OpenClaw to use OpenAI's API directly. For access to OpenAI models alongside other providers through a single API, consider our OpenRouter configuration guide, which provides additional flexibility and failover options.
Available Models
OpenAI offers several model tiers. GPT-4-turbo represents the current flagship, balancing capability with reasonable cost and speed. GPT-4o, the multimodal "omni" model, responds faster and costs less than GPT-4-turbo for many workloads. GPT-3.5-turbo offers a cost-effective option for simpler tasks, running faster and cheaper than the GPT-4 variants.
Each model has different context windows, pricing, and capabilities. GPT-4-turbo supports 128K context, suitable for long conversations and document analysis. GPT-3.5-turbo's 16K context handles most conversational use cases. Consider your typical conversation length and complexity when selecting models.
OpenAI vs Anthropic
Both providers offer excellent models. OpenAI's GPT-4 excels at certain coding tasks and has broad general knowledge. Anthropic's Claude offers longer context windows (up to 200K), strong reasoning, and is specifically recommended by OpenClaw for its prompt-injection resistance. Many users maintain access to both, selecting based on task requirements.
OpenAI Account and API Setup
Before configuring OpenClaw, ensure your OpenAI account is properly set up for API access. This section covers account requirements and API key generation.
Account Requirements
OpenAI API access requires a separate billing relationship from ChatGPT subscriptions. Having a ChatGPT Plus subscription doesn't provide API access. Navigate to platform.openai.com to access the API platform, which has its own billing separate from consumer products.
Add a payment method to your API account. OpenAI uses pay-as-you-go pricing with no monthly minimums. Set usage limits to prevent unexpected charges during development or if your assistant becomes heavily used.
Creating API Keys
In the OpenAI Platform dashboard, navigate to API Keys and create a new key. Give it a descriptive name like "OpenClaw Production" for easy identification. Copy the key immediately; OpenAI doesn't show keys again after creation.
Consider creating separate keys for development and production environments. This enables tracking usage separately and revoking specific keys if needed without affecting other deployments.
Usage Limits and Billing
Configure usage limits in the OpenAI dashboard to prevent runaway costs. Set hard limits that cap spending and soft limits that send alerts. Monitor usage regularly, especially during initial deployment when usage patterns aren't yet established.
New accounts may have reduced rate limits. If you encounter rate limiting, check your account tier and consider requesting increased limits for production use.
Configuring OpenClaw for OpenAI
Configure OpenClaw to use OpenAI as its AI provider through environment variables and configuration file settings.
Setting the API Key
Set your OpenAI API key as an environment variable. The standard variable name is OPENAI_API_KEY. Add this to your shell profile for persistence, and ensure it's available to the OpenClaw daemon process.
On macOS with launchd, use launchctl setenv to make the variable available to GUI processes and daemons. On Linux with systemd, add it to the service file's Environment directive or use an EnvironmentFile.
Model Configuration
Specify the OpenAI model in your openclaw.json configuration. OpenAI model identifiers follow the format gpt-4-turbo, gpt-3.5-turbo, etc. Check OpenAI's documentation for current model names, as they periodically update model versions.
The configuration includes the model identifier and optional parameters like temperature and max tokens. Temperature affects response randomness; lower values (0.3-0.5) produce more focused responses, while higher values (0.7-0.9) increase creativity.
Verifying the Connection
After configuring, restart the OpenClaw daemon and send a test message. Verify responses arrive and check the OpenAI dashboard for recorded API calls. This confirms the integration is working correctly.
Choosing OpenAI Models
OpenAI provides multiple models suited to different use cases and budgets. This section helps you select the appropriate model for your needs.
GPT-4-turbo
GPT-4-turbo is OpenAI's flagship model, offering the best reasoning capabilities, largest knowledge base, and 128K context window. It handles complex multi-step tasks, coding assistance, and nuanced conversations well. Cost is moderate; suitable for personal assistants and professional use.
Use GPT-4-turbo when quality matters most and for complex tasks requiring careful reasoning or extensive context. The 128K context handles long conversations and document analysis effectively.
GPT-4o
GPT-4o is a multimodal variant that accepts text and image input and generally responds faster and at lower cost than GPT-4-turbo, with comparable capability on most tasks. Check current OpenAI documentation for the latest capabilities and pricing.
GPT-3.5-turbo
GPT-3.5-turbo provides fast, cost-effective responses suitable for simpler tasks. It handles basic questions, routine assistance, and straightforward conversations well. The 16K context version supports moderate conversation lengths.
Use GPT-3.5-turbo for cost-sensitive deployments, high-volume simple queries, or as a fast-response option for routine tasks. Consider pairing with GPT-4 for complex tasks in a tiered approach.
Model Comparison Considerations
When choosing, consider response quality requirements, acceptable latency, cost constraints, and typical conversation complexity. Many users start with GPT-4-turbo for quality, then optimize costs by moving simpler workloads to GPT-3.5-turbo once usage patterns are understood.
Configuration Parameters
Fine-tune OpenAI model behavior through configuration parameters. These settings affect response characteristics, length, and style.
Temperature
Temperature controls response randomness. Range is 0-2, with lower values producing more deterministic, focused responses and higher values increasing variety and creativity. For assistants, 0.5-0.7 typically works well, balancing helpfulness with consistent behavior.
Use lower temperature (0.2-0.4) for factual queries, code generation, and tasks requiring precision. Use higher temperature (0.7-0.9) for creative tasks, brainstorming, or more conversational responses.
Max Tokens
max_tokens limits response length. Setting this prevents unnecessarily long responses and controls costs. For conversational assistants, 1000-2000 tokens usually suffices. Increase for tasks requiring detailed explanations or code generation.
Note that this limits output only; input tokens (your messages and context) aren't affected. Total token usage (input + output) determines cost.
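To make the input-plus-output billing concrete, here is a small sketch of the arithmetic. The per-1K-token rates below are placeholders for illustration, not current OpenAI prices; check the pricing page for real numbers.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float, output_rate: float) -> float:
    """Cost of one request; rates are USD per 1K tokens."""
    return (input_tokens / 1000) * input_rate + (output_tokens / 1000) * output_rate

# Illustrative rates only -- not OpenAI's actual pricing.
cost = request_cost(input_tokens=3000, output_tokens=800,
                    input_rate=0.01, output_rate=0.03)
print(f"${cost:.4f}")  # 3.0 * 0.01 + 0.8 * 0.03 = $0.0540
```

Note how a long accumulated context (the 3,000 input tokens here) can dominate the bill even when the response itself is short.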
Top P (Nucleus Sampling)
top_p provides an alternative to temperature for controlling randomness. Values closer to 1 consider more token options; lower values focus on higher-probability tokens. Generally, adjust either temperature or top_p, not both simultaneously.
Frequency and Presence Penalties
These parameters reduce repetition. Frequency penalty decreases likelihood of repeating tokens based on how often they've appeared. Presence penalty decreases likelihood based on whether tokens have appeared at all. Both accept values from -2.0 to 2.0; positive values of 0.5-1.0 typically reduce repetition without overly constraining responses.
Optimization and Cost Management
Optimize your OpenAI usage for better performance and cost efficiency through configuration and usage patterns.
Context Management
Long conversations accumulate context that's sent with each request, increasing costs. Configure OpenClaw's conversation pruning to limit context length. Summarize older messages or maintain a sliding window of recent messages to control context size while preserving conversation coherence.
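OpenClaw's pruning settings are product-specific, but the sliding-window idea itself is simple. As a generic sketch over a chat-completions-style message list (the function name and cutoff are illustrative, not OpenClaw's API):

```python
def prune_messages(messages, max_recent=10):
    """Keep the system prompt plus only the most recent max_recent messages."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_recent:]

# A long-running conversation: one system prompt plus 50 turns.
history = [{"role": "system", "content": "You are a helpful assistant."}]
history += [{"role": "user", "content": f"message {i}"} for i in range(50)]

pruned = prune_messages(history, max_recent=10)
print(len(pruned))  # 11: system prompt + 10 most recent messages
```

A production variant would typically replace the dropped turns with a short summary message rather than discarding them outright, preserving coherence at lower token cost.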
Response Length Control
Set appropriate max_tokens limits. Many queries don't need lengthy responses. For quick answers, 500-1000 tokens suffice. Reserve longer limits for explanations, coding, or content generation tasks.
Model Tiering Strategy
Use GPT-3.5-turbo for routine queries and GPT-4-turbo for complex tasks. While OpenClaw doesn't have built-in routing, you can configure different models for different use cases or switch based on current needs.
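Since OpenClaw has no built-in router, any tiering logic lives in your own glue code. A minimal heuristic sketch (the keyword list and length threshold are arbitrary assumptions for illustration):

```python
# Illustrative triggers for routing a query to the stronger model.
COMPLEX_HINTS = ("code", "debug", "analyze", "explain", "refactor")

def pick_model(query: str) -> str:
    """Route long or code-related queries to GPT-4-turbo, the rest to GPT-3.5-turbo."""
    if len(query) > 500 or any(hint in query.lower() for hint in COMPLEX_HINTS):
        return "gpt-4-turbo"
    return "gpt-3.5-turbo"

print(pick_model("What time is it in Tokyo?"))      # gpt-3.5-turbo
print(pick_model("Debug this stack trace for me"))  # gpt-4-turbo
```

Even a crude heuristic like this can cut costs noticeably if most traffic is routine; refine the triggers once real usage patterns emerge.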
Caching Considerations
For frequently asked questions or repeated queries, consider implementing caching at the skill level. This avoids redundant API calls for identical questions, though implementation requires custom skill development.
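Since this requires custom skill development, the details depend on OpenClaw's skill API; as a generic sketch, an in-memory cache keyed on the normalized question might look like this (class and method names are hypothetical):

```python
import hashlib

class ResponseCache:
    """Illustrative in-memory cache; a real skill might persist entries to disk."""
    def __init__(self):
        self._store = {}

    def _key(self, question: str) -> str:
        # Normalize so trivially different phrasings hit the same entry.
        return hashlib.sha256(question.strip().lower().encode()).hexdigest()

    def get_or_call(self, question: str, fetch):
        k = self._key(question)
        if k not in self._store:
            self._store[k] = fetch(question)  # only hits the API on a cache miss
        return self._store[k]

calls = []
def fake_api(q):  # stand-in for a real OpenAI call
    calls.append(q)
    return f"answer to: {q}"

cache = ResponseCache()
cache.get_or_call("What are your hours?", fake_api)
cache.get_or_call("what are your hours?  ", fake_api)  # normalized -> cache hit
print(len(calls))  # 1
```

Add an expiry policy before using anything like this in practice; cached answers to time-sensitive questions go stale.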
Monitoring Usage
Regularly review OpenAI dashboard usage statistics. Identify high-cost conversations, optimize prompts that generate excessive tokens, and verify usage matches expectations. Set up billing alerts to catch unusual patterns early.
Troubleshooting OpenAI Issues
This section addresses common issues when using OpenAI with OpenClaw.
Authentication Errors
401 Unauthorized errors indicate API key issues. Verify the OPENAI_API_KEY environment variable is set correctly and accessible to OpenClaw. Check for extra whitespace or characters. Ensure the key is active in your OpenAI dashboard.
Rate Limiting
429 errors indicate rate limiting. OpenAI limits requests per minute and tokens per minute based on account tier. Implement retry logic with exponential backoff. For persistent issues, request higher limits from OpenAI or spread requests across time.
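A minimal sketch of the retry pattern, using a generic `RuntimeError` as a stand-in for whatever rate-limit exception your OpenAI client raises:

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry call() on rate-limit errors, doubling the delay each attempt."""
    for attempt in range(max_retries):
        try:
            return call()
        except RuntimeError:  # stand-in for the client's 429/rate-limit exception
            if attempt == max_retries - 1:
                raise
            # Exponential backoff plus a little jitter to avoid thundering herds.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

attempts = {"n": 0}
def flaky():  # simulates two 429s before succeeding
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

print(with_backoff(flaky, base_delay=0.01))  # ok (after two retries)
```

In a real integration, catch only the client library's rate-limit exception and honor any `Retry-After` header the API returns rather than relying on the fixed schedule alone.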
Model Not Found
Invalid model errors occur when using incorrect model names. OpenAI periodically updates model identifiers. Check current documentation for exact model names. Legacy model names may stop working when OpenAI deprecates old versions.
Timeout Errors
Complex requests may timeout, especially with GPT-4. Increase timeout settings in OpenClaw configuration. Consider using streaming responses to get incremental output rather than waiting for complete generation.
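With `"stream": true`, OpenAI's chat completions endpoint returns server-sent events: `data:` lines carrying JSON chunks with content deltas, terminated by `data: [DONE]`. A sketch of consuming that format (the sample lines below are abbreviated to the fields used):

```python
import json

# Abbreviated example of the SSE lines a streamed chat completion produces.
raw_stream = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo!"}}]}',
    'data: [DONE]',
]

def collect_stream(lines):
    """Concatenate content deltas from 'data:' lines until the [DONE] sentinel."""
    text = []
    for line in lines:
        payload = line.removeprefix("data: ")
        if payload == "[DONE]":
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        text.append(delta.get("content", ""))
    return "".join(text)

print(collect_stream(raw_stream))  # Hello!
```

Because each chunk arrives as soon as it is generated, the connection stays active throughout a long response instead of sitting idle until a single large reply completes.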
Billing and Quota Issues
Requests fail if your account has insufficient balance or has hit spending limits. Check your OpenAI billing page for balance and limits. Add funds or adjust limits to restore service.
Quality Issues
If responses seem lower quality than expected, verify you're using the intended model (GPT-4 vs GPT-3.5). Check temperature settings; too high may cause incoherent responses. Ensure system prompts properly set context and expectations.
Code Examples
# Set OpenAI API key as environment variable
export OPENAI_API_KEY="sk-your-api-key-here"
# Add to shell profile for persistence
echo 'export OPENAI_API_KEY="sk-your-api-key-here"' >> ~/.zshrc
source ~/.zshrc
# For macOS launchd (daemon access)
launchctl setenv OPENAI_API_KEY "sk-your-api-key-here"

Basic openclaw.json configuration:

{
  "agent": {
    "model": "openai/gpt-4-turbo",
    "maxTokens": 4096,
    "temperature": 0.7
  },
  "gateway": {
    "port": 18789,
    "host": "127.0.0.1"
  }
}

openclaw.json with sampling parameters tuned:

{
  "agent": {
    "model": "openai/gpt-4-turbo",
    "maxTokens": 2048,
    "temperature": 0.5,
    "topP": 1,
    "frequencyPenalty": 0.5,
    "presencePenalty": 0.5
  }
}

# Test OpenAI connection directly
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 50
  }'

# Restart OpenClaw to apply configuration
openclaw gateway restart

# Check status
openclaw gateway status

# View logs for connection verification
openclaw gateway logs

# Linux systemd service environment configuration
[Service]
Environment="OPENAI_API_KEY=sk-your-api-key-here"

# Or use an environment file
EnvironmentFile=%h/.openclaw/env

Related Content
Configure Anthropic Claude for OpenClaw
Configure OpenClaw with Anthropic's Claude models - the officially recommended AI provider. Learn API setup, model selection, OAuth configuration, and optimization for Claude Opus, Sonnet, and Haiku.
Configure OpenRouter for OpenClaw
Configure OpenClaw with OpenRouter for access to 100+ AI models including Claude, GPT-4, Llama, and more through a single API. Learn setup, model selection, cost optimization, and failover configuration.
Deploy OpenClaw to the Cloud
Deploy OpenClaw to major cloud providers including AWS, Google Cloud, and Azure. Learn infrastructure setup, security configurations, scaling strategies, and production best practices.
Install OpenClaw with Docker
Step-by-step Docker installation guide for OpenClaw AI assistant. Learn how to deploy OpenClaw using Docker containers with optimized configurations, persistent storage, and production-ready settings.