# Release v0.5.0
## New Features
### Provider Fallback Support
gcop-rs now supports automatic failover between LLM providers. When your primary provider fails (network error, rate limit, etc.), it will automatically try the next provider in your fallback list.
Configuration:

```toml
[llm]
default_provider = "claude"
fallback_providers = ["openai", "ollama"]
```

Behavior:
- Tries primary provider first
- On failure, shows warning and switches to next provider
- Works for both commit message generation and code review
- Supports streaming mode (falls back to non-streaming if all streaming providers fail)
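The fallback loop behaves roughly like the sketch below (illustrative Rust, not the actual gcop-rs implementation; `try_provider` is a hypothetical stand-in for a real LLM request, here simulating the primary provider being rate-limited):

```rust
// Illustrative sketch only; names and signatures are not gcop-rs's API.
fn try_provider(name: &str) -> Result<String, String> {
    match name {
        "claude" => Err("rate limited".to_string()), // simulate primary failing
        other => Ok(format!("commit message from {other}")),
    }
}

fn generate_with_fallback(providers: &[&str]) -> Result<String, String> {
    let mut last_err = "no providers configured".to_string();
    for p in providers {
        match try_provider(p) {
            Ok(msg) => return Ok(msg),
            Err(e) => {
                // Matches the documented behavior: warn, then try the next provider.
                eprintln!("warning: provider '{p}' failed ({e}); trying next");
                last_err = e;
            }
        }
    }
    Err(last_err)
}
```

With the configuration above, a rate-limited primary would fall through to `openai` and return its result.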
### Claude Streaming Support
Claude provider now supports streaming responses with real-time typing effect, just like ChatGPT.
- Uses SSE (Server-Sent Events) for efficient streaming
- Shows characters as they're generated
- Graceful fallback to spinner mode if streaming fails
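The render-as-you-go behavior with graceful fallback can be sketched as below (a simplified illustration, not the actual gcop-rs code; `fetch_stream` and `fetch_blocking` are hypothetical stand-ins for an SSE request and a plain blocking request):

```rust
use std::io::{self, Write};

// Hypothetical stand-ins for a streaming request that yields text chunks
// (e.g. parsed from SSE events) and a blocking fallback request.
fn fetch_stream() -> Result<Vec<&'static str>, String> {
    Ok(vec!["feat: ", "support provider ", "fallback"])
}

fn fetch_blocking() -> Result<String, String> {
    Ok("feat: support provider fallback".to_string())
}

// Print chunks as they arrive for the typing effect; if streaming fails,
// degrade gracefully to a single blocking request (spinner mode).
fn render_response() -> Result<String, String> {
    match fetch_stream() {
        Ok(chunks) => {
            let mut full = String::new();
            for c in chunks {
                print!("{c}");
                io::stdout().flush().ok();
                full.push_str(c);
            }
            println!();
            Ok(full)
        }
        Err(_) => fetch_blocking(),
    }
}
```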
### Enhanced Retry Mechanism
The API retry system now supports:
- `Retry-After` header: respects rate-limit hints from APIs (HTTP 429 responses)
- Configurable max delay: new `max_retry_delay_ms` option to cap wait time
```toml
[network]
max_retries = 3
retry_delay_ms = 1000
max_retry_delay_ms = 60000 # Maximum 60 seconds
```

### Colored Provider Output
LLM providers now display colored warning and info messages when:
- Switching to fallback provider
- Retrying after rate limit
- Encountering recoverable errors
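Putting the retry settings above together, the delay calculation described under "Enhanced Retry Mechanism" might look like this (an illustrative sketch, not gcop-rs's actual code; the function name and signature are hypothetical):

```rust
// Capped exponential backoff that honors a Retry-After hint (in seconds).
fn next_delay_ms(attempt: u32, base_ms: u64, max_ms: u64, retry_after_s: Option<u64>) -> u64 {
    if let Some(secs) = retry_after_s {
        // An explicit rate-limit hint from the API wins,
        // still capped by max_retry_delay_ms.
        return secs.saturating_mul(1000).min(max_ms);
    }
    // Otherwise exponential backoff: base * 2^attempt, capped.
    base_ms.saturating_mul(1u64 << attempt.min(20)).min(max_ms)
}
```

With `retry_delay_ms = 1000` and `max_retry_delay_ms = 60000`, delays grow 1s, 2s, 4s, ... and never exceed 60s, even if the server asks for a longer wait.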
## Improvements
### Better User Experience
- Improved commit command interaction flow
- Better review command output formatting
- Clearer progress indicators
### Error Handling
- Restructured error types for clarity
- More user-friendly error messages
- Better suggestions for common issues
### Code Quality
- Extracted common prompt building logic
- Unified response processing across providers
- Reduced code duplication in LLM module
## Bug Fixes
- Streaming Error Handling: Fixed error handling and log levels for streaming responses
## Documentation
- Updated streaming output documentation
- Added Claude configuration examples
- Added installation update/uninstall instructions for Homebrew, pipx, cargo-binstall, and cargo
## Upgrade Notes
Upgrading from v0.4.x requires no action; this release is fully backward compatible.
Optional new configuration options:

```toml
[llm]
fallback_providers = ["openai", "ollama"] # Optional failover list

[network]
max_retry_delay_ms = 60000 # Optional max wait time
```

## Installation
```bash
# Homebrew (macOS/Linux)
brew tap AptS-1547/gcop-rs
brew install gcop-rs

# pipx (Python users, recommended)
pipx install gcop-rs

# cargo-binstall (no compilation)
cargo binstall gcop-rs

# cargo install (compile from source)
cargo install gcop-rs
```

Or download pre-built binaries from Releases.
## Feedback
If you have any issues or suggestions, please submit an Issue.