Vision-Language Models · benchmark dataset

SO100 (real-world: Pick-Place, Stacking, Sorting).

Three small real-world robot manipulation datasets collected with the SO-100 (SO100) robot arm and released on the Hugging Face Hub, covering three tasks: Pick-Place, Stacking, and Sorting. According to the SmolVLA paper (arXiv:2506.01844), each dataset contains 10 trajectories from each of 5 starting positions (50 demonstrations in total), and evaluation is scored with fine-grained subtask completion. The released data uses the LeRobot dataset format (Parquet tables plus video frames), is provided under an Apache-2.0 compatible license, and is intended for training and evaluating vision-language-action and other robotics models. Representative Hugging Face dataset pages include fracapuano/so100_test and related so100 repositories.
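For loading, a minimal sketch, assuming the open-source lerobot package (the import path varies across lerobot releases) and the fracapuano/so100_test repository mentioned above:

    # Minimal sketch: load an SO100 dataset in the LeRobot format from the Hub.
    # Requires `pip install lerobot`; the import path below may differ in
    # other lerobot releases.
    from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

    # One of the public SO100 repositories on the Hugging Face Hub.
    dataset = LeRobotDataset("fracapuano/so100_test")

    print(len(dataset))    # total frames across all demonstrations
    sample = dataset[0]    # dict of tensors: camera frames, robot state, action, ...
    print(sorted(sample))  # exact feature keys vary per dataset

Because LeRobotDataset behaves like a standard PyTorch dataset, it can be passed directly to torch.utils.data.DataLoader for training.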

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed (see the sketch after this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
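
As a sketch of the kind of reproduction script items 02–04 ask for (the commit hash and seed are placeholders, and the model-evaluation logic is elided; only the seeding and environment reporting are meant literally):

    # repro.py -- hypothetical reproduction-script skeleton (placeholders throughout).
    import json
    import platform
    import random

    import numpy as np
    import torch

    COMMIT = "abc1234"  # placeholder: the frozen git commit of your eval code
    SEED = 42           # placeholder: the exact seed behind the reported score

    def set_seed(seed: int) -> None:
        # Seed every RNG the evaluation touches so reruns are deterministic.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)

    def main() -> None:
        set_seed(SEED)
        # Declared evaluation environment (item 03).
        print(json.dumps({
            "python": platform.python_version(),
            "torch": torch.__version__,
            "commit": COMMIT,
            "seed": SEED,
        }))
        # Your rollout/evaluation code goes here. It should emit one row per
        # metric declared by this dataset (item 04), e.g.
        # {"task": "Pick-Place", "metric": "success_rate", "value": 0.82}

    if __name__ == "__main__":
        main()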