# Release v0.6.1

**Verbose Mode & Prompt Architecture Improvements**
This release enhances the debugging experience with improved verbose output and refactors the internal prompt architecture for better LLM provider compatibility.
## Verbose Mode Now Shows Complete Prompts
The `-v` flag now displays the complete prompt sent to LLM providers, including both the system and user messages.
**Before v0.6.1:**

```bash
$ gcop-rs -v commit
# Only showed basic debug info
```

**After v0.6.1:**
```bash
$ gcop-rs -v commit
[DEBUG] System message:
You are a professional Git commit message generator...
[DEBUG] User message:
Generate a commit message for the following changes:
...
```

**Benefits:**
- Better debugging when commit messages aren't as expected
- Understand exactly what context is sent to the LLM
- Useful for custom prompt development and testing
Security Note: Verbose output may contain code snippets from your diff. Avoid sharing verbose logs publicly.
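A minimal sketch of the pattern behind this kind of gated debug output, assuming the system/user prompt split described in the next section; the function and struct here are illustrative, not taken from the gcop-rs source:

```rust
// Illustrative sketch only; gcop-rs's real verbose handling may differ.
struct PromptParts {
    system: String,
    user: String,
}

fn debug_print_prompt(parts: &PromptParts, verbose: bool) {
    // With -v enabled, emit the full prompt on stderr so stdout
    // (the generated commit message) stays clean.
    if verbose {
        eprintln!("[DEBUG] System message:\n{}", parts.system);
        eprintln!("[DEBUG] User message:\n{}", parts.user);
    }
}
```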
## LLM Prompt Architecture Refactored
The prompt building system has been refactored to properly support system/user message separation:
**How it works now:**
| Provider | System Message | User Message |
|---|---|---|
| Claude | Native `system` field | `messages[0].content` |
| OpenAI | `messages[0].role = "system"` | `messages[1].role = "user"` |
| Ollama | Merged into a single prompt (API limitation) | Merged into a single prompt |
**New `PromptParts` structure:**

```rust
pub struct PromptParts {
    pub system: String, // Context, role, format instructions
    pub user: String,   // Actual diff and request
}
```

**Why this matters:**
- Claude and OpenAI can now leverage their native system message handling
- Better context separation leads to more consistent outputs
- Ollama continues to work with merged prompts (no breaking changes)
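For concreteness, here is a minimal sketch of how a `PromptParts` value could be mapped onto each provider's request shape, following the table above; the function names and JSON assembly are illustrative, not gcop-rs's actual request builders.

```rust
use serde_json::{json, Value};

// Illustrative sketch only; gcop-rs's real implementation may differ.
pub struct PromptParts {
    pub system: String, // context, role, format instructions
    pub user: String,   // actual diff and request
}

// Claude: the system prompt goes in the native top-level `system` field.
fn claude_body(p: &PromptParts) -> Value {
    json!({
        "system": p.system,
        "messages": [{ "role": "user", "content": p.user }],
    })
}

// OpenAI: the system prompt is simply the first entry in `messages`.
fn openai_body(p: &PromptParts) -> Value {
    json!({
        "messages": [
            { "role": "system", "content": p.system },
            { "role": "user",   "content": p.user },
        ],
    })
}

// Ollama: no separate system slot is used here, so both parts are merged
// into one prompt string (matching the "API limitation" row above).
fn ollama_body(p: &PromptParts) -> Value {
    json!({ "prompt": format!("{}\n\n{}", p.system, p.user) })
}
```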
## Documentation Updates
- New About Page: Added project information and attribution
  - `docs/guide/about.md` (English) and `docs/zh/guide/about.md` (Chinese)
- Updated Links: Documentation now points to the new domain
- Homebrew Installation: Updated tap repository name
## Code Quality Improvements
- Unified code formatting across all source files
- Simplified test code structure for better maintainability
- Added `#[allow(clippy::too_many_arguments)]` annotations where appropriate (see the illustrative example below)
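As a purely illustrative example of that last annotation (the function below is hypothetical, not an actual gcop-rs signature), the attribute suppresses clippy's warning for functions that take more than seven arguments:

```rust
// Hypothetical function, shown only to illustrate the annotation.
#[allow(clippy::too_many_arguments)]
fn build_commit_prompt(
    repo: &str,
    branch: &str,
    diff: &str,
    language: &str,
    style: &str,
    max_length: usize,
    include_body: bool,
    verbose: bool,
) -> String {
    // Without the attribute, clippy::too_many_arguments fires here
    // (the default threshold is seven arguments).
    format!(
        "{repo}@{branch} [{language}/{style}] max={max_length} body={include_body} verbose={verbose}\n{diff}"
    )
}
```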
## Upgrade Notes
Upgrading from v0.6.0 requires no action. This release is fully backward compatible.
**What stays the same:**
- All public APIs unchanged
- Configuration file format unchanged
- Command-line interface unchanged
**What improves:**
- Better debugging with the `-v` flag
- More consistent LLM outputs (especially for Claude/OpenAI)
## Breaking Changes
None. This is a patch release with improvements only.
## Installation
```bash
# Homebrew (macOS/Linux)
brew tap AptS-1547/gcop-rs
brew upgrade gcop-rs

# pipx (Python users, recommended)
pipx upgrade gcop-rs

# cargo-binstall (no compilation)
cargo binstall gcop-rs

# cargo install (compile from source)
cargo install gcop-rs --force
```

Or download pre-built binaries from Releases.
## Statistics
- Files changed: 23
- Lines added: +552
- Lines removed: -505
- Commits: 7
## Contributors
- AptS-1547 (Yuhan Bian / 卞雨涵)
- AptS-1548 (48)
## Feedback
If you have any issues or suggestions, please submit an Issue.
Full Changelog: v0.6.0...v0.6.1