MTEB (Massive Text Embedding Benchmark) is a large-scale benchmark for measuring the performance of text embedding models across diverse embedding tasks. It spans 8 task types across 56 datasets and supports over 112 languages, and it is easy to use and extend, so new datasets can be added. By testing how well embedding models handle different kinds of text and tasks, MTEB gives a full picture of each model's strengths and weaknesses.
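To make the scoring concrete, here is a minimal, self-contained sketch of how one MTEB task family (semantic textual similarity) is typically scored: embed each sentence pair, take the cosine similarity of the embeddings, and correlate the predicted similarities with human judgments via Spearman rank correlation. The toy `embed` function and the three sentence pairs below are purely illustrative assumptions, not MTEB code or data; MTEB itself wraps real datasets and real models behind the same idea.

```python
import math

def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def ranks(values):
    # Rank positions of each value (no tie handling, fine for a sketch).
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(xs, ys):
    # Spearman rank correlation via the classic sum-of-squared-rank-differences formula.
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

def embed(text):
    # Toy "model": a sentence becomes a vector of crude surface statistics.
    words = text.lower().split()
    return [len(words), sum(map(len, words)) / len(words), text.count("e") + 1]

# Hypothetical (sentence1, sentence2, human similarity score) triples.
pairs = [
    ("a man is playing guitar", "a person plays a guitar", 4.8),
    ("a dog runs in the park", "a cat sleeps on the couch", 1.2),
    ("the stock market fell today", "shares dropped sharply today", 4.1),
]

predicted = [cosine(embed(s1), embed(s2)) for s1, s2, _ in pairs]
gold = [score for _, _, score in pairs]
print(round(spearman(predicted, gold), 3))
```

A real MTEB run replaces `embed` with a trained model's encoder and aggregates such per-task scores across all datasets; the correlation with human labels is what ends up on the leaderboard for STS tasks.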
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.