The dataset "ChartMimic_v2_Direct" is used for evaluating vision-language models. More specifically, it evaluates Large Multimodal Models' (LMMs) cross-modal reasoning capabilities through chart-to-code generation, a task that requires visual understanding, code generation, and reasoning across both modalities. It is available on Hugging Face.
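For quick inspection, the dataset can be pulled with the `datasets` library. A minimal sketch follows; the repository id and split name are assumptions, so check the dataset card for the canonical values.

```python
from datasets import load_dataset

# Hypothetical repository id and split; confirm both on the dataset card.
ds = load_dataset("ChartMimic/ChartMimic_v2_Direct", split="test")

print(ds)            # summary: features and number of rows
print(ds[0].keys())  # field names of a single example
```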
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
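A reproduction script typically pairs the checkpoint with the dataset and writes one generated code sample per chart. The outline below is a sketch, not the official harness; the dataset repository id, the field names (`id`, `image`), the output format, and the `generate_code` helper are all assumptions to be replaced with your checkpoint's actual inference call.

```python
import json
from datasets import load_dataset

def generate_code(model, image):
    """Placeholder: replace with your checkpoint's chart-to-code inference."""
    raise NotImplementedError

def run(model, out_path="predictions.json"):
    # Hypothetical repository id, split, and field names; verify on the dataset card.
    ds = load_dataset("ChartMimic/ChartMimic_v2_Direct", split="test")
    preds = [{"id": ex["id"], "code": generate_code(model, ex["image"])} for ex in ds]
    with open(out_path, "w") as f:
        json.dump(preds, f)  # one generated code sample per chart
```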