# Provider Configuration
gcop-rs supports multiple LLM providers. You can use built-in providers or add custom ones.
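For orientation, here is a minimal sketch that combines the two pieces this page covers: a `default_provider` selection and one provider block. The key names are taken from the examples below; see the Configuration Guide for where the file lives.

```toml
# Minimal config sketch: pick a default provider and configure it.
# Keys follow the per-provider examples later on this page.
[llm]
default_provider = "claude"

[llm.providers.claude]
api_key = "sk-ant-your-key"
model = "claude-sonnet-4-5-20250929"
```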
## Built-in Providers
### Claude (Anthropic)
```toml
[llm.providers.claude]
api_key = "sk-ant-your-key"
model = "claude-sonnet-4-5-20250929"
temperature = 0.3
max_tokens = 2000
```

Get API Key: https://console.anthropic.com/
Available Models:
- `claude-sonnet-4-5-20250929` (recommended)
- `claude-opus-4-5-20251101` (most powerful)
- `claude-3-5-sonnet-20241022` (older version)
### OpenAI
```toml
[llm.providers.openai]
api_key = "sk-your-openai-key"
model = "gpt-4-turbo"
temperature = 0.3
```

Get API Key: https://platform.openai.com/
Available Models:
- `gpt-4-turbo`
- `gpt-4`
- `gpt-3.5-turbo`
### Ollama (Local)
```toml
[llm.providers.ollama]
endpoint = "http://localhost:11434/api/generate"
model = "codellama:13b"
```

Setup:
```bash
# Install Ollama
curl https://ollama.ai/install.sh | sh

# Pull a model
ollama pull codellama:13b

# Start the server
ollama serve
```

Available Models: Any model available in Ollama (codellama, llama2, mistral, etc.)
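To sanity-check that the server is reachable at the endpoint configured above, you can send a one-off request with curl. The request shape below follows Ollama's generate API; swap in whichever model you pulled.

```bash
# One-off, non-streaming completion; a JSON response means the
# endpoint in config.toml is reachable.
curl http://localhost:11434/api/generate \
  -d '{"model": "codellama:13b", "prompt": "Say hello", "stream": false}'
```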
### Gemini (Google)
```toml
[llm.providers.gemini]
api_key = "AIza-your-gemini-key"
model = "gemini-3-flash-preview"
temperature = 0.3
```

Get API Key: https://ai.google.dev/
Available Models:
- `gemini-3-flash-preview` (recommended default)
- `gemini-2.5-flash`
- `gemini-2.5-pro`
## Custom Providers
You can add OpenAI-, Claude-, or Gemini-compatible APIs using the `api_style` parameter.
### DeepSeek
```toml
[llm.providers.deepseek]
api_style = "openai"
api_key = "sk-your-deepseek-key"
endpoint = "https://api.deepseek.com/v1/chat/completions"
model = "deepseek-chat"
temperature = 0.3
```

Get API Key: https://platform.deepseek.com/
### Qwen (通义千问)
```toml
[llm.providers.qwen]
api_style = "openai"
api_key = "sk-your-qwen-key"
endpoint = "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions"
model = "qwen-max"
```

### Claude Proxy/Mirror
```toml
[llm.providers.claude-code-hub]
api_style = "claude"
api_key = "your-key"
endpoint = "https://your-claude-code-hub.com/v1/messages"
model = "claude-sonnet-4-5-20250929"
```

### Custom OpenAI-Compatible Service
```toml
[llm.providers.my-llm]
api_style = "openai"
api_key = "your-key"
endpoint = "https://api.example.com/v1/chat/completions"
model = "custom-model"
```

## API Style Parameter
The `api_style` parameter determines which API implementation to use:
| Value | Description | Compatible Services |
|---|---|---|
| `"openai"` | OpenAI Chat Completions API | OpenAI, DeepSeek, Qwen, most custom services |
| `"claude"` | Anthropic Messages API | Claude, Claude proxies/mirrors |
| `"ollama"` | Ollama Generate API | Local Ollama only |
| `"gemini"` | Google Gemini GenerateContent API | Gemini and Gemini-compatible endpoints |
If `api_style` is not specified, it defaults to the provider name (for backward compatibility with built-in providers).
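As a sketch of what that means in practice: a block named after a built-in provider can omit `api_style`, while a custom-named block presumably needs it spelled out, since there is no valid style to infer from the name. The proxy endpoint below is a placeholder.

```toml
# Built-in name: api_style is inferred as "claude".
[llm.providers.claude]
api_key = "sk-ant-your-key"
model = "claude-sonnet-4-5-20250929"

# Custom name: nothing to infer, so api_style is set explicitly.
[llm.providers.my-claude-proxy]
api_style = "claude"
api_key = "your-key"
endpoint = "https://your-claude-code-hub.com/v1/messages"
model = "claude-sonnet-4-5-20250929"
```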
## Switching Providers
### Using the Command Line
```bash
# Use a different provider for a single command
gcop-rs --provider openai commit
gcop-rs --provider deepseek review changes
```

### Changing the Default
Edit your platform-specific config file (see Configuration Guide):
```toml
[llm]
default_provider = "deepseek"  # Change this
```

## API Key Management
### Config File
A provider's `api_key` is configured in `config.toml`:
```toml
[llm.providers.claude]
api_key = "sk-ant-..."
```

### CI Mode Environment Variables
In CI mode (`CI=1`), use environment variables instead of the config file:
- `GCOP_CI_PROVIDER` - Provider type: `claude`, `openai`, `ollama`, or `gemini`
- `GCOP_CI_API_KEY` - API key
- `GCOP_CI_MODEL` - Optional; has defaults
- `GCOP_CI_ENDPOINT` - Optional
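As a rough sketch, a CI job might wire these up as below. The `OPENAI_API_KEY` secret name is illustrative; only the `GCOP_CI_*` variables and the `commit` subcommand come from this page.

```bash
# Run gcop-rs in CI mode with provider settings from the environment.
export CI=1
export GCOP_CI_PROVIDER=openai
export GCOP_CI_API_KEY="$OPENAI_API_KEY"   # hypothetical CI secret name
gcop-rs commit
```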
## See Also
- Configuration Reference - All configuration options
- Provider Health Checks - How `gcop-rs config validate` works
- Custom Prompts - Customize AI behavior
- Troubleshooting - Provider connection issues