# Using Kimi 2.5 with OpenClaw on Deep Layer

Complete guide for integrating the Kimi 2.5 AI model with OpenClaw running on Deep Layer cloud services.
This guide provides detailed instructions for integrating Kimi 2.5 (hosted on Groq) with OpenClaw running on Deep Layer cloud services.
## Overview

Kimi 2.5 is available through Groq's API as `moonshotai/kimi-k2-instruct-0905` with the following specifications:

- **Speed:** ~200 tokens/second
- **Pricing:** $1.00 input / $3.00 output per 1M tokens
- **Context Window:** 262,144 tokens
- **Max Completion:** 16,384 tokens
- **Rate Limits:** 250K TPM, 1K RPM (Developer plan)
## Prerequisites

- **Groq API Key:** Sign up at console.groq.com and generate an API key
- **OpenClaw Instance:** Running on Deep Layer cloud services
- **Access to OpenClaw Configuration:** Ability to modify the gateway configuration
## Configuration Steps

### Step 1: Add Kimi 2.5 to the OpenClaw Model Catalog

Add the following configuration to your OpenClaw gateway config under `agents.defaults.models`:

```yaml
agents:
  defaults:
    models:
      custom-provider/moonshotai/kimi-k2-instruct-0905:
        alias: "kimi-2.5"
        params:
          baseUrl: "https://api.groq.com/openai/v1"
          apiKey: "${GROQ_API_KEY}"  # Use an environment variable
          model: "moonshotai/kimi-k2-instruct-0905"
          streaming: true
```
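The `${GROQ_API_KEY}` placeholder above is resolved from the environment rather than stored in the file. As a rough illustration of how such `${VAR}` substitution typically works (OpenClaw's actual expansion mechanism may differ), here is a minimal sketch:

```python
# Sketch of ${VAR} placeholder expansion from the environment.
# This illustrates the general pattern; it is NOT OpenClaw's internal code.
import os
import re


def expand_env(value: str) -> str:
    """Replace ${NAME} with os.environ[NAME], leaving unknown names intact."""
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: os.environ.get(m.group(1), m.group(0)),
        value,
    )


os.environ["GROQ_API_KEY"] = "gsk_example"
print(expand_env("${GROQ_API_KEY}"))      # gsk_example
print(expand_env("${UNSET_VARIABLE}"))    # ${UNSET_VARIABLE}
```

The practical takeaway: the key never appears in the config file itself, so the file can be committed or shared without leaking the secret.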
### Step 2: Configure Environment Variables

Set the Groq API key as an environment variable on your Deep Layer instance:

```bash
# Add to your environment (via the Deep Layer dashboard or SSH)
export GROQ_API_KEY="gsk_your_actual_api_key_here"

# Make it persistent
echo 'export GROQ_API_KEY=gsk_your_actual_api_key_here' >> ~/.bashrc
```
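After setting the variable, it is worth checking that it is actually visible to processes and plausibly well-formed. A small sketch of such a check (the `gsk_` prefix is an assumption based on the key format shown above):

```python
# Sanity-check helper for the Groq API key. On the instance you would
# pass os.environ.get("GROQ_API_KEY"); literals here are for illustration.
def check_groq_key(key) -> bool:
    """Return True if the value looks like a Groq API key (assumed gsk_ prefix)."""
    return bool(key) and key.startswith("gsk_")


print(check_groq_key("gsk_example_key"))  # True
print(check_groq_key(None))               # False  (variable not set)
print(check_groq_key("sk-other-format"))  # False  (wrong prefix)
```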
### Step 3: Set Kimi 2.5 as the Default Model (Optional)

To use Kimi 2.5 as your default model, update the agent defaults:

```yaml
agents:
  defaults:
    model: "custom-provider/moonshotai/kimi-k2-instruct-0905"
```
### Step 4: Apply Configuration

Apply the configuration changes to your OpenClaw gateway:

```bash
# Apply the new configuration
openclaw gateway config.apply

# Or restart the gateway
openclaw gateway restart
```
## Usage Examples

### Basic Usage

Once configured, you can use Kimi 2.5 in your OpenClaw sessions:

```bash
# Use a specific model with the --model flag
openclaw --model custom-provider/moonshotai/kimi-k2-instruct-0905

# Or, if you've set it as the default, just run normally
openclaw
```
### In Configuration Files

Reference Kimi 2.5 in your agent configurations:

```yaml
agents:
  list:
    my-agent:
      model: "custom-provider/moonshotai/kimi-k2-instruct-0905"
      # ... other agent settings
```
### Via API

When using OpenClaw's API, specify the model in the request body:

```json
{
  "model": "custom-provider/moonshotai/kimi-k2-instruct-0905",
  "messages": [
    {"role": "user", "content": "Explain quantum computing"}
  ]
}
```
## Deep Layer Specific Considerations

### Environment Variable Management

On Deep Layer, you can set environment variables through:

- **Dashboard:** the Environment Variables section in your instance settings
- **SSH:** direct access to configure persistent environment variables
- **Startup Scripts:** add variables to your instance's startup configuration
### Network Configuration

Deep Layer instances have outbound internet access by default, so connecting to Groq's API (api.groq.com) should work without additional firewall rules.
### Monitoring and Logging

Monitor your Kimi 2.5 usage through:

- **Groq Console:** track API usage and costs
- **OpenClaw Logs:** check gateway logs for model interactions
- **Deep Layer Monitoring:** monitor instance performance
## Troubleshooting

### Common Issues

**1. Authentication Errors**

```
Error: 401 Unauthorized
```

**Solution:** Verify that `GROQ_API_KEY` is set correctly on the instance and that the key is still active in the Groq console.

**2. Model Not Found**

```
Error: Model not found: moonshotai/kimi-k2-instruct-0905
```

**Solution:** Check that the exact model ID is used: `moonshotai/kimi-k2-instruct-0905`.

**3. Rate Limiting**

```
Error: Rate limit exceeded
```

**Solution:** You're hitting Groq's rate limits. The Developer plan allows 250K tokens per minute and 1K requests per minute; back off and retry, or reduce request frequency.
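If rate-limit errors are frequent, a common client-side mitigation is exponential backoff. This is a generic sketch, not OpenClaw's built-in behavior; `send` stands for any callable that raises on a 429 response:

```python
# Generic exponential-backoff retry for rate-limited calls.
# RateLimitError and `send` are placeholders for your client's equivalents.
import time


class RateLimitError(Exception):
    """Raised when the API returns a 429 (rate limit exceeded)."""


def with_backoff(send, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call `send()`, retrying on RateLimitError with delays of 1s, 2s, 4s, ..."""
    for attempt in range(max_retries):
        try:
            return send()
        except RateLimitError:
            sleep(base_delay * (2 ** attempt))
    raise RateLimitError(f"still rate-limited after {max_retries} retries")
```

Injecting `sleep` as a parameter keeps the helper testable without real delays.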
### Verification Steps

Test API connectivity:

```bash
curl https://api.groq.com/openai/v1/models \
  -H "Authorization: Bearer $GROQ_API_KEY"
```

Check the OpenClaw model list:

```bash
openclaw models list
```

Test the model directly:

```bash
echo "Hello, how are you?" | openclaw --model custom-provider/moonshotai/kimi-k2-instruct-0905
```
## Cost Optimization

### Token Usage Monitoring

Monitor your token consumption:

- **Input tokens:** $1.00 per 1M tokens
- **Output tokens:** $3.00 per 1M tokens
- **Typical conversation:** ~1-5K tokens per exchange
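A back-of-envelope estimate using the prices and speed listed above makes the per-request cost concrete:

```python
# Rough per-request cost/latency estimate from the published specs:
# $1.00 / 1M input tokens, $3.00 / 1M output tokens, ~200 tokens/second.
SPEED_TPS = 200
PRICE_IN = 1.00 / 1_000_000    # dollars per input token
PRICE_OUT = 3.00 / 1_000_000   # dollars per output token


def estimate(input_tokens: int, output_tokens: int):
    """Return (approx_seconds, approx_dollars) for a single request."""
    seconds = output_tokens / SPEED_TPS
    dollars = input_tokens * PRICE_IN + output_tokens * PRICE_OUT
    return seconds, dollars


secs, cost = estimate(input_tokens=2_000, output_tokens=1_000)
print(f"~{secs:.1f}s, ~${cost:.4f}")  # ~5.0s, ~$0.0050
```

So a typical exchange of a few thousand tokens costs well under a cent; sustained high-volume usage is where the pricing starts to matter.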
### Usage Patterns

For cost-effective usage:

1. **Batch requests** when possible
2. **Use appropriate context length** - Kimi 2.5 supports up to 262K tokens of context
3. **Monitor usage** in the Groq console
4. **Set up alerts** for unusual spending patterns
## Advanced Configuration

### Custom Parameters

You can add additional OpenAI-compatible parameters:

```yaml
agents:
  defaults:
    models:
      custom-provider/moonshotai/kimi-k2-instruct-0905:
        alias: "kimi-2.5"
        params:
          baseUrl: "https://api.groq.com/openai/v1"
          apiKey: "${GROQ_API_KEY}"
          model: "moonshotai/kimi-k2-instruct-0905"
          temperature: 0.7
          max_tokens: 4000
          top_p: 0.9
          streaming: true
```
### Multiple Model Setup

Configure multiple Kimi variants or other models:

```yaml
agents:
  defaults:
    models:
      custom-provider/moonshotai/kimi-k2-instruct-0905:
        alias: "kimi-2.5"
        params:
          baseUrl: "https://api.groq.com/openai/v1"
          apiKey: "${GROQ_API_KEY}"
          model: "moonshotai/kimi-k2-instruct-0905"
          streaming: true
      custom-provider/llama-3.3-70b:
        alias: "llama-3.3"
        params:
          baseUrl: "https://api.groq.com/openai/v1"
          apiKey: "${GROQ_API_KEY}"
          model: "llama-3.3-70b-versatile"
          streaming: true
```
## Support and Resources

- **Groq Documentation:** console.groq.com/docs
- **OpenClaw Documentation:** docs.openclaw.ai
- **Deep Layer Support:** through your Deep Layer dashboard
- **Model Status:** check status.groq.com
## Security Best Practices

- **API Key Storage:** use environment variables, not hardcoded values
- **Key Rotation:** regularly rotate your Groq API keys
- **Access Control:** limit who can access your OpenClaw configuration
- **Monitoring:** set up alerts for unusual usage patterns
- **Backup:** keep backups of your configuration files
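One small, concrete habit that supports these practices: never write the raw key to logs or error reports. A minimal masking helper sketch:

```python
# Mask a secret before logging it; only the first few characters remain
# visible so the key can still be identified without being leaked.
def mask_key(key: str, visible: int = 4) -> str:
    """Show only the first `visible` characters of a secret."""
    if len(key) <= visible:
        return "*" * len(key)
    return key[:visible] + "*" * (len(key) - visible)


print(mask_key("gsk_abcdef123456"))  # gsk_************
```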
This guide is specific to Deep Layer's OpenClaw deployment. For other hosting providers, adjust the network and environment configuration accordingly.