
Release v0.3.0

✨ New Features

OpenAI Streaming Output Support

Finally here! Commit message generation now supports real-time streaming output, similar to ChatGPT's typing effect.

Effect: No more staring at a spinner - watch the text appear character by character.

Configuration: Streaming is enabled by default and can be disabled in the config:

```toml
[ui]
streaming = true  # Enabled by default
```

Support Status:

| Provider | Streaming Support |
| --- | --- |
| OpenAI / Compatible API | ✅ Full support |
| Claude | ⏳ Fallback to spinner |
| Ollama | ⏳ Fallback to spinner |

Providers that don't support streaming automatically fall back to spinner mode; no manual configuration is needed.
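A minimal sketch of that decision at the call site (all names here are illustrative assumptions, not the actual gcop-rs code):

```rust
// Hypothetical call site; `config`, `provider`, `diff`, `render_stream`,
// and `run_with_spinner` are illustrative names, not gcop-rs APIs.
if config.ui.streaming && provider.supports_streaming() {
    let chunks = provider.generate_commit_message_streaming(&diff).await?;
    render_stream(chunks).await?; // print chunks as they arrive
} else {
    // Show a spinner while awaiting the complete response.
    let message = run_with_spinner(provider.generate_commit_message(&diff)).await?;
    println!("{message}");
}
```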

Technical Implementation:

  • New SSE (Server-Sent Events) parsing module (see the parsing sketch below)
  • LLMProvider trait gains supports_streaming() and generate_commit_message_streaming() methods (see the trait sketch below)
  • Async stream processing with real-time rendering to the terminal
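For a rough sense of what the SSE module handles: OpenAI's streaming API sends events as `data: {json}` lines, ending with `data: [DONE]`. A minimal sketch of the per-line parsing (the real module also has to reassemble lines split across network reads):

```rust
/// Minimal sketch of SSE data-line parsing, assuming the OpenAI-style
/// `data: ...` / `data: [DONE]` wire format.
fn parse_sse_line(line: &str) -> Option<&str> {
    let payload = line.strip_prefix("data:")?.trim();
    if payload == "[DONE]" {
        None // end-of-stream marker
    } else {
        Some(payload) // a JSON delta to decode and render
    }
}
```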
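And a hedged sketch of the extended trait: the two method names come from this release, but the exact signatures, return types, and the async_trait/anyhow usage here are assumptions for illustration.

```rust
use futures::stream::BoxStream;

#[async_trait::async_trait]
pub trait LLMProvider {
    /// Existing non-streaming path (signature assumed for illustration).
    async fn generate_commit_message(&self, diff: &str) -> anyhow::Result<String>;

    /// New in v0.3.0: providers report whether they can stream.
    fn supports_streaming(&self) -> bool {
        false
    }

    /// New in v0.3.0: yields message chunks as they arrive so the caller
    /// can render them to the terminal in real time.
    async fn generate_commit_message_streaming(
        &self,
        diff: &str,
    ) -> anyhow::Result<BoxStream<'static, anyhow::Result<String>>>;
}
```

A caller can then drive the returned stream with futures::StreamExt, flushing stdout after each chunk to produce the typing effect.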

🎨 UI Improvements

Colored Feedback Prompt

The "Retry with feedback" option now shows colored prompt text, guiding user input more clearly.

Simplified Option Description

The retry option's description was shortened from "Retry with feedback - Regenerate with instructions" to "Retry with feedback - Add instructions".

📦 Dependency Changes

  • Added bytes = "1.10" - streaming response byte handling
  • Added futures = "0.3" - async stream processing
  • reqwest gains the stream feature
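In Cargo.toml, these changes would look roughly like the following (the reqwest version is assumed for illustration; keep whatever version the project already pins):

```toml
[dependencies]
bytes = "1.10"    # streaming response byte handling
futures = "0.3"   # async stream processing
reqwest = { version = "0.12", features = ["stream"] }  # version assumed
```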

Configuration Changes

New [ui] configuration option:

```toml
[ui]
streaming = true  # Enable streaming output (default true)
```

Upgrade Notes

Upgrading from v0.2.1 requires no action; this release is fully backward compatible.

Streaming output is enabled by default. If you prefer the spinner-style wait, set streaming = false.

📦 Installation

```bash
cargo install gcop-rs
```

Statistics

15 files changed
+413 insertions
-30 deletions

Feedback

If you have any issues or suggestions, please submit an Issue.