Vision-Language Models · benchmark dataset · EN

MMNeedle (MultiModal Needle-in-a-haystack)

MMNeedle (MultiModal Needle-in-a-haystack) is a benchmark for evaluating the long-context capabilities of multimodal large language models (MLLMs). It stresses sub-image-level retrieval and understanding: models must locate a target "needle" (a sub-image or region) inside a large "haystack" built from many individual images or from images stitched into grids, which creates very long visual contexts. The benchmark includes a protocol for generating sub-image retrieval labels and supports both multi-image and stitched-image inputs to scale context length; evaluation measures whether the model can find the correct sub-image given textual instructions and the visual context. The dataset, code, and leaderboard are linked from the project page and the GitHub repository.

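To make the stitched-image setup concrete, here is a minimal sketch of how a haystack and its needle label could be assembled. It assumes square input images of equal size; the grid size, tile size, and label format are illustrative assumptions, not the official MMNeedle generation pipeline (see the GitHub repository for that).

# Minimal sketch: stitch images into an N x N haystack and label one needle.
# Grid size, tile size, and the label dict are illustrative assumptions.
import random
from PIL import Image

def build_haystack(image_paths, grid=4, tile_size=256, seed=0):
    """Stitch grid*grid images into one canvas and pick one cell as the needle.

    Returns the stitched image plus a label giving the needle's ground-truth
    (row, col) cell -- the location the model must retrieve from the context.
    """
    rng = random.Random(seed)
    chosen = rng.sample(image_paths, grid * grid)
    canvas = Image.new("RGB", (grid * tile_size, grid * tile_size))
    for i, path in enumerate(chosen):
        tile = Image.open(path).convert("RGB").resize((tile_size, tile_size))
        row, col = divmod(i, grid)
        canvas.paste(tile, (col * tile_size, row * tile_size))
    needle_idx = rng.randrange(grid * grid)
    return canvas, {
        "needle_path": chosen[needle_idx],   # sub-image the text will describe
        "row": needle_idx // grid,           # ground-truth cell row
        "col": needle_idx % grid,            # ground-truth cell column
    }

Larger grids, or multiple stitched images per sample, lengthen the visual context without changing the task, which is how the benchmark scales difficulty.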
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and — if it takes the top — annotate the step on the progress chart with your name.

What a submission needs (a script sketch follows this list)
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
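
As a rough sketch of items 02 and 03, a reproduction script can pin a commit, fix the seed, and dump the environment before running the evaluation. The skeleton below assumes a Python-based harness; the commit hash, seed, and file names are hypothetical placeholders to be replaced with your submission's actual values.

# Hypothetical reproduction-script skeleton; commit hash, seed, and file
# names are placeholders, not values required by this site.
import json
import platform
import random
import subprocess
import sys

COMMIT = "<frozen-commit-hash>"  # pin the exact code revision you evaluated
SEED = 42                        # declared seed for any sampling

def main():
    random.seed(SEED)
    # Record the evaluation environment (checklist item 03).
    env = {
        "python": sys.version,
        "platform": platform.platform(),
        "commit": COMMIT,
        "seed": SEED,
        "pip_freeze": subprocess.run(
            [sys.executable, "-m", "pip", "freeze"],
            capture_output=True, text=True,
        ).stdout.splitlines(),
    }
    with open("environment.json", "w") as f:
        json.dump(env, f, indent=2)
    # ... run the MMNeedle evaluation here and write one row per metric ...

if __name__ == "__main__":
    main()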