LVD-142M is the curated pretraining dataset used to train the DINOv2 family of self-supervised ViT models. According to the DINOv2 paper (arXiv:2304.07193), it is a deduplicated, automatically assembled collection of roughly 142 million images, built by retrieving and filtering images from multiple curated and uncurated sources; the paper reports the exact composition in Appendix Table 15. The dataset was created to provide a diverse, high-quality corpus for large-scale self-supervised visual pretraining, namely training a 1B-parameter ViT and distilling it into smaller variants.

LVD-142M itself has not been published as a standalone dataset. The authors were asked about releasing the dataset and curation code in the facebookresearch/dinov2 GitHub issues, but no public release is available, and no official Hugging Face dataset page for LVD-142M could be found. Many model cards on Hugging Face list "Pretrain Dataset: LVD-142M" to indicate that a model was pretrained on it (e.g., the Meta and timm DINOv2 model cards), but the dataset files are not publicly hosted by the authors.
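Since neither the dataset nor the official curation code is public, the following is only a schematic sketch of the retrieval step the paper describes: embed a curated seed set and an uncurated web pool, remove near-duplicates from the pool, then keep pool images whose embeddings are nearest neighbours of seed images. The embeddings below are random stand-ins, and the function names, greedy dedup, and thresholds are illustrative choices, not the authors' pipeline (the paper uses a dedicated copy-detection model and Faiss-based similarity search at billion-image scale).

```python
# Schematic sketch of retrieval-based curation in the spirit of the DINOv2
# paper (arXiv:2304.07193). All embeddings are random stand-ins; a real
# pipeline would use a pretrained ViT and approximate nearest-neighbour search.
import numpy as np

def l2_normalize(x: np.ndarray) -> np.ndarray:
    """Normalize rows so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def deduplicate(pool: np.ndarray, threshold: float = 0.95) -> np.ndarray:
    """Greedy near-duplicate removal: drop an image whose embedding is too
    similar to one already kept (a crude stand-in for the paper's
    copy-detection-based dedup)."""
    kept: list[int] = []
    for i, emb in enumerate(pool):
        if all(emb @ pool[j] < threshold for j in kept):
            kept.append(i)
    return np.array(kept)

def retrieve(seed: np.ndarray, pool: np.ndarray, k: int = 4) -> np.ndarray:
    """For each curated seed image, keep the k most similar pool images."""
    sims = seed @ pool.T                      # cosine similarity matrix
    topk = np.argsort(-sims, axis=1)[:, :k]   # k nearest pool images per seed
    return np.unique(topk)                    # union over all seed images

rng = np.random.default_rng(0)
seed_emb = l2_normalize(rng.normal(size=(100, 64)))    # curated seed set
pool_emb = l2_normalize(rng.normal(size=(5000, 64)))   # uncurated web pool

unique_idx = deduplicate(pool_emb)
selected = retrieve(seed_emb, pool_emb[unique_idx])
print(f"selected {selected.size} of {unique_idx.size} deduplicated pool images")
```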
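Although the data itself is not downloadable, the checkpoints pretrained on it are. As a minimal example, the facebookresearch/dinov2 repository documents torch.hub entry points for the backbones (the `dinov2_vitb14` name below is taken from that repository; the dummy tensor stands in for properly preprocessed images):

```python
# Load a DINOv2 ViT-B/14 backbone pretrained on LVD-142M via the official
# torch.hub entry point and extract a global image embedding.
import torch

model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")
model.eval()

# The backbone expects 3-channel inputs with H and W divisible by the
# patch size (14); this random batch stands in for preprocessed images.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    features = model(x)  # (1, 768) CLS embedding for ViT-B/14
print(features.shape)
```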
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
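The expected submission format is not specified above, so the script below is a purely hypothetical sketch of what a reproduction script might look like: the `--checkpoint` flag, the k-NN evaluation protocol, and the CIFAR-10 stand-in benchmark are all illustrative choices, not requirements.

```python
# Hypothetical reproduction script: load a (possibly submitted) DINOv2
# checkpoint, extract features, and report a k-NN classification score.
import argparse

import torch
import torchvision
import torchvision.transforms as T
from sklearn.neighbors import KNeighborsClassifier

def extract_features(model, loader, device):
    """Run the frozen backbone over a loader and collect embeddings."""
    feats, labels = [], []
    with torch.no_grad():
        for images, targets in loader:
            feats.append(model(images.to(device)).cpu())
            labels.append(targets)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--checkpoint", default=None,
                        help="optional path to submitted backbone weights")
    parser.add_argument("--batch-size", type=int, default=64)
    args = parser.parse_args()

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
    if args.checkpoint:
        model.load_state_dict(torch.load(args.checkpoint, map_location="cpu"))
    model = model.to(device).eval()

    # Resize to a multiple of the 14-pixel patch size expected by the backbone.
    transform = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    ])
    train = torchvision.datasets.CIFAR10("data", train=True, download=True,
                                         transform=transform)
    test = torchvision.datasets.CIFAR10("data", train=False, download=True,
                                        transform=transform)
    # Small subsets keep the sketch fast; a real run would use the full splits.
    train = torch.utils.data.Subset(train, range(2000))
    test = torch.utils.data.Subset(test, range(500))

    train_loader = torch.utils.data.DataLoader(train, batch_size=args.batch_size)
    test_loader = torch.utils.data.DataLoader(test, batch_size=args.batch_size)

    train_x, train_y = extract_features(model, train_loader, device)
    test_x, test_y = extract_features(model, test_loader, device)

    knn = KNeighborsClassifier(n_neighbors=20, metric="cosine")
    knn.fit(train_x, train_y)
    print(f"k-NN top-1 accuracy: {knn.score(test_x, test_y):.4f}")

if __name__ == "__main__":
    main()
```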