More evaluation methods including CIDEr, METEOR and ROUGE besides BLEU1-4 #62
yolo615 wants to merge 9 commits into sgrvinod:master from
Conversation
@ruizhao1997 thanks! I'm reviewing this PR. In the meantime, would it also be possible for you to incorporate SPICE as an additional eval metric? :)
@sgrvinod Yes, of course. But I have been busy with other things recently; I will add SPICE in a few days.
@ruizhao1997 that'd be great! You could also have a look at coco-caption, specifically pycocoevalcap, where a host of common eval metrics are available.
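(For orientation, the scorers in pycocoevalcap each live in their own sub-package and share a common compute_score(gts, res) interface. The listing below is just an illustrative sketch of that layout, not code from this PR.)

```python
# Sketch of the scorer modules available in pycocoevalcap (coco-caption);
# each exposes compute_score(gts, res) on dicts of image id -> captions.
from pycocoevalcap.tokenizer.ptbtokenizer import PTBTokenizer
from pycocoevalcap.bleu.bleu import Bleu        # BLEU-1..4
from pycocoevalcap.meteor.meteor import Meteor  # METEOR (shells out to a Java jar)
from pycocoevalcap.rouge.rouge import Rouge     # ROUGE-L
from pycocoevalcap.cider.cider import Cider     # CIDEr
from pycocoevalcap.spice.spice import Spice     # SPICE (also Java-based)
```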
@ruizhao1997 Hi, I tried to run evaluation with the CIDEr, METEOR, and ROUGE metrics, but something went wrong with METEOR. Traceback (most recent call last): How can I fix this problem? Thanks!
I met this error too. Have you fixed it?
I used the COCO API and changed eval.py a little to run evaluation with CIDEr, METEOR, ROUGE, and BLEU.
The packages for all of the evaluation metrics are in the evalfunc folder.
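For reference, here is a minimal sketch of what such a change to eval.py could look like, assuming the pycocoevalcap scorers are used and that the references and hypotheses have already been converted from word indices back into plain caption strings. The conversion step and the function name coco_eval are assumptions for illustration, not code from this PR.

```python
# Minimal sketch: scoring generated captions with the coco-caption scorers.
# Assumes captions are plain strings; METEOR additionally needs Java on the
# PATH, which is a common cause of the error reported above.
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.cider.cider import Cider
from pycocoevalcap.meteor.meteor import Meteor
from pycocoevalcap.rouge.rouge import Rouge


def coco_eval(references, hypotheses):
    """references: list of lists of reference strings (one list per image);
    hypotheses: list of generated caption strings, aligned by index."""
    # The scorers expect dicts keyed by image id, mapping to lists of strings.
    gts = {i: refs for i, refs in enumerate(references)}
    res = {i: [hyp] for i, hyp in enumerate(hypotheses)}

    scorers = [
        (Bleu(4), ["BLEU-1", "BLEU-2", "BLEU-3", "BLEU-4"]),
        (Meteor(), "METEOR"),
        (Rouge(), "ROUGE-L"),
        (Cider(), "CIDEr"),
    ]

    results = {}
    for scorer, name in scorers:
        score, _ = scorer.compute_score(gts, res)
        if isinstance(name, list):  # Bleu returns one score per n-gram order
            results.update(dict(zip(name, score)))
        else:
            results[name] = score
    return results
```

Note that the METEOR (and SPICE) scorers shell out to Java jars, so a missing or incompatible Java installation is often why they fail while BLEU, ROUGE, and CIDEr still work.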