- (2/4 issues closed) For the MVP, build a benchmark set to verify task completion rate, output quality, and whether the LLM's behavior aligns with the rules, then use it to improve rule sets. For self-improving rule sets, tool use, etc., tasks run by real users need to be evaluated. A minimal sketch of such a benchmark record follows this list.
- Extend this repo into a platform for AI coding IDEs' rules? Collect evaluations of rules and share them with the rules' authors.
- (1/2 issues closed) Get more feedback from users.
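
To make the benchmark item above concrete, here is a minimal sketch of how one benchmark task and its evaluation record could be represented and aggregated into the proposed metrics. All names (`BenchmarkTask`, `EvalResult`, `completion_rate`, `rule_alignment`) are hypothetical and not part of this repo yet.

```python
# Hypothetical data model for the planned rule-set benchmark; a sketch only,
# not an existing API in this repository.
from dataclasses import dataclass, field


@dataclass
class BenchmarkTask:
    """One coding task used to evaluate a rule set."""
    task_id: str
    prompt: str                       # instruction given to the LLM
    rule_files: list[str]             # rule files the IDE loads for this task
    checks: list[str] = field(default_factory=list)  # rule behaviors to verify


@dataclass
class EvalResult:
    """Outcome of running one task against one rule set."""
    task_id: str
    completed: bool                   # did the task finish successfully?
    quality_score: float              # e.g. 0.0-1.0 from a grader or reviewer
    rules_followed: dict[str, bool]   # per check: did the LLM obey the rule?


def completion_rate(results: list[EvalResult]) -> float:
    """Fraction of tasks completed (the 'task completion rate' metric)."""
    return sum(r.completed for r in results) / len(results) if results else 0.0


def rule_alignment(results: list[EvalResult]) -> float:
    """Fraction of rule checks the LLM satisfied across all results."""
    checks = [ok for r in results for ok in r.rules_followed.values()]
    return sum(checks) / len(checks) if checks else 0.0
```

The same `EvalResult` records could later carry the user-run task evaluations mentioned above, and be shared back to rule authors as part of the proposed platform.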