HallusionBench is a comprehensive benchmark designed to evaluate language hallucination and visual illusion in large vision-language models. It presents challenging image-context reasoning tasks to assess model robustness and accuracy.
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
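As a rough illustration, a reproduction script would typically end by reporting question-level accuracy over yes/no judgments. The sketch below shows that final step only; the data format and the `score` function are assumptions for illustration, not HallusionBench's official evaluation harness.

```python
# Hypothetical sketch: compute question-level accuracy from
# (predicted, gold) yes/no answer pairs. The structure of
# `predictions` is an assumption, not the benchmark's real format.

def score(predictions):
    """Return the fraction of yes/no predictions matching the gold label."""
    if not predictions:
        return 0.0
    correct = sum(
        1 for pred, gold in predictions
        if pred.strip().lower() == gold.strip().lower()
    )
    return correct / len(predictions)

# Example: three answers, two of which match the gold label.
pairs = [("yes", "yes"), ("no", "yes"), ("no", "no")]
print(f"accuracy: {score(pairs):.3f}")
```

A real submission would load the benchmark's released question file and query the checkpoint for each image-question pair before this scoring step.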