AgentBench is a comprehensive benchmark for evaluating Large Language Models (LLMs) as agents. It surfaces the strengths and limitations of current models and serves as a standardized platform for research and development in AI agent technologies. It also includes a trajectory dataset for behavior-cloning training.
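Behavior cloning here means supervised fine-tuning on recorded trajectories: the model learns to reproduce the agent's actions given the preceding environment context, with environment turns excluded from the loss. The sketch below is illustrative only; the JSONL path, the record schema (`turns` with `env`/`agent` roles), and the base model are assumed placeholders, not the released dataset's actual format.

```python
import json

import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "gpt2"               # placeholder; any causal LM works
DATA_PATH = "trajectories.jsonl"  # hypothetical file, one trajectory per line

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token


class TrajectoryDataset(Dataset):
    """Assumed record shape: {"turns": [{"role": "env"|"agent", "text": ...}]}."""

    def __init__(self, path, max_len=1024):
        self.rows = [json.loads(line) for line in open(path)]
        self.max_len = max_len

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, i):
        ids, labels = [], []
        for turn in self.rows[i]["turns"]:
            tok = tokenizer(turn["text"] + tokenizer.eos_token).input_ids
            ids += tok
            # Behavior cloning: supervise only the agent's actions;
            # environment observations are masked out with -100.
            labels += tok if turn["role"] == "agent" else [-100] * len(tok)
        return (torch.tensor(ids[: self.max_len]),
                torch.tensor(labels[: self.max_len]))


def collate(batch):
    ids, labels = zip(*batch)
    ids = torch.nn.utils.rnn.pad_sequence(
        ids, batch_first=True, padding_value=tokenizer.pad_token_id)
    labels = torch.nn.utils.rnn.pad_sequence(
        labels, batch_first=True, padding_value=-100)
    return ids, labels


model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
optim = torch.optim.AdamW(model.parameters(), lr=2e-5)
loader = DataLoader(TrajectoryDataset(DATA_PATH), batch_size=2,
                    shuffle=True, collate_fn=collate)

model.train()
for ids, labels in loader:
    # Standard LM cross-entropy; padded label positions (-100) are ignored.
    # Attention masking is omitted here for brevity.
    loss = model(input_ids=ids, labels=labels).loss
    loss.backward()
    optim.step()
    optim.zero_grad()
```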
No results indexed yet; be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run the script, publish the score, and, if it takes the top spot, annotate that step on the progress chart with your name.
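As a sketch of the expected shape of such a script: a thin CLI that takes the checkpoint, runs each task through the evaluation harness, and emits a machine-readable score file. Everything named below (the task ids, the `run_task` hook, the `scores.json` schema) is a hypothetical placeholder, not AgentBench's actual interface.

```python
#!/usr/bin/env python3
"""Hypothetical reproduction script; wire run_task() to the real harness."""
import argparse
import json
from pathlib import Path


def run_task(task: str, checkpoint: str) -> float:
    """Placeholder for the real per-task harness call.

    Replace the body with the actual evaluation (e.g. one harness
    invocation per task); should return a success rate in [0, 1].
    """
    return 0.0  # dummy value so the script runs end to end


def main() -> None:
    parser = argparse.ArgumentParser(description="Reproduce a submitted score")
    parser.add_argument("checkpoint", help="path or hub id of the model")
    parser.add_argument("--out", default="scores.json")
    args = parser.parse_args()

    tasks = ["os", "db", "web"]  # illustrative task ids only
    scores = {task: run_task(task, args.checkpoint) for task in tasks}
    scores["mean"] = sum(scores.values()) / len(scores)

    Path(args.out).write_text(json.dumps(scores, indent=2))
    print(f"wrote {args.out}: {scores}")


if __name__ == "__main__":
    main()
```

Keeping the output in a single JSON file with a `mean` field makes a run easy to verify and diff against the published score; the per-task breakdown stays available for debugging discrepancies.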