qqqa is now stable!
What's Changed
- Claude CLI LLM backend: use Haiku 4.5 by default for faster answers
- Reduce LLM inference time by instructing the model to think less
- Fix: reduce output verbosity and remove repeated text in answers
- Fix: incorrect tool-call payload with llama3.1
- Pass the skip-git-repo-check option through to codex