Benchmarking and Evaluation Framework
embeddings-benchmark/mteb
MTEB is a comprehensive benchmark and evaluation framework designed to assess the performance of text embedding models and retrieval systems across a wide range of tasks.
Core Features
Massive benchmark for text embeddings and retrieval systems
Supports evaluation of various pre-trained and custom models
Offers a diverse set of tasks and benchmarks for comprehensive testing
Provides both Python API and CLI for flexible evaluation workflows
Features an interactive leaderboard to track model performance
Quick Start
pip install mteb
Detailed Introduction
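After installation, an evaluation can be launched in a few lines of Python. The snippet below is a minimal sketch based on MTEB's documented usage; the model checkpoint and the task name are example choices (running it downloads the model and dataset, so it is illustrative rather than instant):

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Any model exposing an encode() method works; this checkpoint is an example choice.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Pick a single lightweight task and run the benchmark, writing JSON results to disk.
evaluation = MTEB(tasks=["Banking77Classification"])
results = evaluation.run(model, output_folder="results")
```

The same evaluation can be driven from the CLI, which is convenient for batch jobs over many models and tasks.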
MTEB (Massive Text Embedding Benchmark) is an open-source framework dedicated to standardizing and simplifying the evaluation of text embedding models and retrieval systems. It addresses the critical need for robust and comparable performance metrics across diverse natural language processing tasks. By offering a unified platform for benchmarking, MTEB enables researchers and developers to objectively assess model capabilities, identify state-of-the-art solutions, and accelerate advancements in the field of semantic search and information retrieval.
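The retrieval-style evaluation described above can be illustrated with a small, library-free sketch: embed queries and documents, rank documents by cosine similarity, and score how often the top-ranked document is the relevant one. The helper names `cosine` and `retrieval_accuracy` are hypothetical, for illustration only, not MTEB API:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieval_accuracy(query_embs, doc_embs, relevant):
    # Fraction of queries whose top-ranked document (by cosine
    # similarity) is the one labeled relevant for that query.
    hits = 0
    for qi, q in enumerate(query_embs):
        scores = [cosine(q, d) for d in doc_embs]
        best = max(range(len(scores)), key=scores.__getitem__)
        hits += int(best == relevant[qi])
    return hits / len(query_embs)

# Toy example: two queries, two documents, each query's nearest
# document is its relevant one, so accuracy is 1.0.
queries = [[1.0, 0.0], [0.0, 1.0]]
docs = [[0.9, 0.1], [0.1, 0.9]]
print(retrieval_accuracy(queries, docs, relevant=[0, 1]))  # → 1.0
```

MTEB generalizes this idea across many task families (classification, clustering, reranking, retrieval, STS) and reports standardized metrics so that scores are comparable across models.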