Releases: CTLab-ITMO/CoolPrompt
v1.2.1: Fix task_detector (patch for 1.2.0)
v1.2.0: LLM-as-a-judge Update
v1.1.0
What's new?
New Core Functions:
- Synthetic Data Generator:
- an auxiliary module for synthetic data generation when no input dataset is provided
- Task Detector:
- an automated task classification component for scenarios without explicit user-defined task specifications
- PromptAssistant:
- an LLM-based component that helps interpret prompt optimization results
- can be assigned separately from the target LLM via the `system_model` argument
- also helps to create a synthetic dataset
Upgrades:
- Boosted HyPE:
- New meta-prompt for optimizer HyPE that provides stronger instructive prompts
- New default LLM:
- We chose a small LLM, Qwen3-4B-Instruct, launched natively via Hugging Face
- New metrics:
- BERTScore (with a multilingual model) and G-Eval (experimental)
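As background on the new experimental metric: G-Eval-style LLM-as-a-judge evaluation typically aggregates a rubric score as a weighted sum over the judge model's rating-token probabilities. The sketch below illustrates only that weighting idea; the function name and input format are hypothetical and not CoolPrompt's actual API.

```python
def judge_weighted_score(score_probs):
    """Probability-weighted rating, G-Eval style.

    score_probs: mapping from a candidate rating (e.g. 1-5) to the
    judge model's probability of emitting that rating token.
    Returns the expected rating under that distribution.
    """
    total = sum(score_probs.values())
    return sum(rating * p for rating, p in score_probs.items()) / total


# Example: the judge leans toward rating 4, so the expected score is 3.75.
expected = judge_weighted_score({1: 0.05, 2: 0.05, 3: 0.2, 4: 0.5, 5: 0.2})
```

Normalizing by the total probability mass keeps the score well-defined even when only the top-k rating tokens are available from the judge model.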
And more small improvements
Check out our notebook with new run examples
v1.0.2
- Fixed the number-of-candidates setting for ReflectivePrompt
v1.0.1
- Fixed minor bugs at the prompt evaluation stage
- Added logging of optimization steps, with logging completeness configurable via the `verbose` argument
- Expanded README documentation
- Fixed dependency versions in toml file
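For context on the `verbose` bullet above: such flags conventionally map an integer verbosity to standard logging levels. This is a generic Python sketch of that convention, not CoolPrompt's actual implementation; the logger name and mapping are illustrative assumptions.

```python
import logging

# Hypothetical mapping: higher verbosity reveals more of the
# optimization trace (warnings only -> per-step info -> full debug).
LEVELS = {0: logging.WARNING, 1: logging.INFO, 2: logging.DEBUG}


def configure_logging(verbose: int) -> logging.Logger:
    """Return a logger whose level reflects the requested verbosity."""
    logger = logging.getLogger("prompt_optimizer")
    logger.setLevel(LEVELS.get(verbose, logging.DEBUG))
    return logger
```

With this pattern, `verbose=0` keeps the optimization run quiet, while `verbose=2` surfaces every intermediate step.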
v1.0.0
First Release!
We added the main assistant interface - PromptTuner, which includes:
- Task Selection: Choosing the task type: classification or generation (default)
- Metric Selection: Choosing 1 out of 3 metrics for each task type to evaluate the prompt
- Evaluation Stage: Implemented a partial prompt evaluation feature by allowing the user to submit a dataset
- LLM Customization: Selecting a custom model initialized via LangChain. By default, the built-in model configuration is used
- Two Auto-Prompting Algorithms:
- ReflectivePrompt – Based on evolutionary optimization methods
- DistillPrompt – Based on prompt distillation and Tree-of-Thoughts methods
- One Fast Adapter-Prompt: HyPE – for extending the base prompt
