While generating descriptions by calling Llama 3 through vLLM, I ran into an issue after 648 entries had been produced:

This appears to be a problem with the vLLM framework itself. Have you encountered similar issues in your own use?