Hi, thanks for your great work on WriteHERE.
I'm trying to use the story generation mode, but encountered an unexpected error.
Below is the full report.
❗ KeyError: user_question when running --mode story
I encountered a KeyError when running WriteHERE in story mode.
Command
python engine.py --filename ../test_data/meta_fiction.jsonl \
  --output-filename ./project/story/output.jsonl \
  --done-flag-file ./project/story/done.txt \
  --model gemini-2.0-flash \
  --mode story
Input file (from repository examples)
{
"id": "example",
"field": "inputs",
"value": "Please write a metafictional literary short story about AI and grief, around 1000 words.",
"ori": {
"example_id": "example",
"inputs": "Please write a metafictional literary short story about AI and grief, around 1000 words.",
"subset": "train"
}
}
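For reference, here is a small snippet I used to confirm what keys the input record actually contains (nothing WriteHERE-specific is assumed here; it just loads the JSONL and prints the top-level and nested "ori" keys so they can be compared against whatever field name story mode expects, e.g. "user_question"):

import json

# Print the top-level and nested "ori" keys of each JSONL record.
with open("../test_data/meta_fiction.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        print("top-level keys:", sorted(record.keys()))
        print("ori keys:      ", sorted(record.get("ori", {}).keys()))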
Error message
KeyError: 'user_question'
Traceback (tail)
... executor/actions/base_action.py", line 178, in _parse_tool
type=args_doc[param.name]["type"],
KeyError: 'user_question'
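To illustrate what I think the failure mode is (this is only a standalone sketch, not WriteHERE's actual code; describe_params, example_tool, and the args_doc contents are hypothetical names I made up): the traceback suggests _parse_tool looks up each declared parameter of a tool in a docs dict, and the dict has no entry for user_question, e.g.:

import inspect

def describe_params(func, args_doc):
    # Hypothetical stand-in for the parsing step: look up each declared
    # parameter in a docs dict. A parameter that is missing from the dict
    # raises KeyError, matching the traceback above.
    specs = []
    for param in inspect.signature(func).parameters.values():
        specs.append({
            "name": param.name,
            "type": args_doc[param.name]["type"],  # KeyError if the parameter is undocumented
        })
    return specs

def example_tool(user_question: str) -> str:
    return user_question

# args_doc has no "user_question" entry, so this raises KeyError: 'user_question'
describe_params(example_tool, args_doc={"query": {"type": "string"}})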
Additional notes
- The input JSONL file is directly copied from the official repository.
- The command works with --mode auto, but fails with --mode story.
- I also tested with OpenAI GPT models and Gemini models; the same error occurs.
- The error seems to occur before any LLM call, during action/tool parsing.
Please advise what might be missing or misconfigured in story mode.
Thanks again for the great project!