Releases: CTLab-ITMO/CoolPrompt

Fix task_detector v1.2.0

09 Dec 11:47

1.2.0: LLM-as-a-judge Update

23 Nov 21:09

v1.2.0

upd 1.2.0

Release 1.1.0

15 Oct 12:08
2c13cb0

1.1.0

What is new?

New Core Functions:

  • Synthetic Data Generator:
    • an auxiliary module for synthetic data generation when no input dataset is provided
  • Task Detector:
    • an automated task classification component for scenarios without explicit user-defined task specifications
  • PromptAssistant:
    • an LLM-based component that helps interpret prompt optimization results
    • can be assigned separately from the target LLM via the system_model argument (see the sketch after this list)
    • also helps to create a synthetic dataset
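
The snippet below is a minimal sketch of assigning a separate assistant model via the system_model argument; the import path, the ChatOpenAI choice, and the model keyword are assumptions for illustration, so check the README for the exact interface.

```python
from langchain_openai import ChatOpenAI  # any LangChain chat model works here

from coolprompt.assistant import PromptTuner  # assumed import path

# Target LLM whose prompts are being optimized (hypothetical choice of model).
target_llm = ChatOpenAI(model="gpt-4o-mini")

# A separate model for the PromptAssistant (result interpretation and
# synthetic data generation), passed via the system_model argument
# mentioned in these notes.
assistant_llm = ChatOpenAI(model="gpt-4o-mini")

tuner = PromptTuner(model=target_llm, system_model=assistant_llm)
```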

Upgrades:

  • Boosted HyPE:
    • A new meta-prompt for the HyPE optimizer that produces stronger instructive prompts
  • New default LLM:
    • We chose a small LLM, Qwen3-4B-Instruct, launched natively via Hugging Face
  • New metrics:
    • BERTScore (with a multilingual model) and G-Eval (experimental); a metric sketch follows this list
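
As a quick illustration of the new metric, the snippet below computes BERTScore with a multilingual backbone using the standalone bert-score package; it shows the metric itself, not necessarily how CoolPrompt wires it up internally.

```python
# Standalone illustration of BERTScore with a multilingual model.
from bert_score import score

candidates = ["The model answered the question correctly."]
references = ["The model gave the correct answer to the question."]

# bert-base-multilingual-cased also handles non-English text.
P, R, F1 = score(
    candidates,
    references,
    model_type="bert-base-multilingual-cased",
)
print(f"BERTScore F1: {F1.mean().item():.3f}")
```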

And more minor features

Check out our notebook with new run examples

1.0.2

15 Jul 11:09

  • Fixed the number-of-candidates setting for ReflectivePrompt

Release 1.0.1

14 Jul 10:46
b412834

  • Fixed minor bugs at the prompt evaluation stage
  • Added logging of optimization steps, with logging verbosity controlled via the verbose argument (see the sketch below)
  • Expanded the README documentation
  • Fixed dependency versions in the TOML file
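
A minimal sketch of the verbose argument; the run() call shape shown here is an assumption, so consult the README for the actual signature.

```python
from coolprompt.assistant import PromptTuner  # assumed import path

tuner = PromptTuner()  # built-in default model configuration

# Higher verbose values emit more of the optimization log;
# the start_prompt keyword and run() method are hypothetical names.
result = tuner.run(start_prompt="Summarize the following text.", verbose=1)
```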

v1.0.0

01 Jul 20:03

First Release!

We added the main assistant interface, PromptTuner, which includes (a usage sketch follows the list):

  • Task Selection: Configuring the task type: classification or generation (default)
  • Metric Selection: Choosing 1 of 3 metrics for each task type to evaluate the prompt
  • Evaluation Stage: Partial prompt evaluation, enabled when the user submits a dataset
  • LLM Customization: Selecting a custom model initialized via LangChain; by default, the built-in model configuration is used
  • Two Auto-Prompting Algorithms:
    1. ReflectivePrompt – based on evolutionary optimization methods
    2. DistillPrompt – based on prompt distillation and Tree-of-Thoughts methods
  • One Fast Adapter-Prompt: HyPE – for extending the base prompt
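
The sketch below shows how these pieces could fit together in one call; apart from PromptTuner and the concepts named above (task type, metric, dataset, LangChain model), every keyword and value is a hypothetical placeholder, so refer to the README for the real signature.

```python
from langchain_openai import ChatOpenAI  # any model initialized via LangChain

from coolprompt.assistant import PromptTuner  # assumed import path

# Custom LLM via LangChain; omit it to fall back to the built-in
# default model configuration.
llm = ChatOpenAI(model="gpt-4o-mini")

tuner = PromptTuner(model=llm)  # the "model" keyword is an assumption

# Hypothetical call combining the features listed above.
result = tuner.run(
    start_prompt="Classify the sentiment of the review.",
    task="classification",                  # or "generation" (default)
    metric="f1",                             # 1 of 3 metrics per task type
    dataset=["great movie", "awful plot"],   # optional evaluation dataset
    target=["positive", "negative"],
    method="reflective",                     # ReflectivePrompt / DistillPrompt / HyPE
)
print(result)
```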
