HammerBench is a benchmark for evaluating agents in real mobile assistant scenarios, focusing on fine-grained function-calling and slot-filling tasks in interactive dialogues. It tests agents across multiple domains with diverse tools and query types, capturing varied user behaviors such as detailed vs. vague queries and single-turn vs. multi-turn interactions. It also supports evaluating LLM performance under conditions such as imperfect instructions. The sketch below gives a rough sense of what such a turn can look like.
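The following is a minimal, hypothetical illustration of a multi-turn function-calling exchange with slot-level scoring; the tool schema, field names, and scoring shown here are assumptions for illustration only and do not reflect HammerBench's actual data format or metrics.

```python
# Hypothetical example of a fine-grained function-calling turn with slot filling.
# The tool name, argument names, and scoring rule are illustrative assumptions,
# NOT HammerBench's actual schema.
import json

tool_schema = {
    "name": "create_alarm",
    "parameters": {
        "time": {"type": "string", "required": True},
        "label": {"type": "string", "required": False},
    },
}

# A vague first turn leaves a required slot unfilled; the agent should ask a
# follow-up question rather than invent a value.
dialogue = [
    {"role": "user", "content": "Set an alarm for tomorrow morning."},
    {"role": "assistant", "content": "What time tomorrow morning?"},
    {"role": "user", "content": "6:30 am, call it 'gym'."},
]

predicted_call = {"name": "create_alarm",
                  "arguments": {"time": "06:30", "label": "gym"}}
gold_call = {"name": "create_alarm",
             "arguments": {"time": "06:30", "label": "gym"}}

# Slot-level comparison: count predicted arguments that match the gold values.
slot_hits = sum(
    predicted_call["arguments"].get(key) == value
    for key, value in gold_call["arguments"].items()
)

print(json.dumps(predicted_call, indent=2))
print(f"slot accuracy: {slot_hits}/{len(gold_call['arguments'])}")
```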
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
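A reproduction script might look roughly like the sketch below, which loads a submitted checkpoint and writes predicted function calls for each dialogue. The CLI flags, file names, and JSONL fields are assumptions for illustration; they are not a prescribed submission interface.

```python
# Hypothetical reproduction-script skeleton. Flags, file names, and the
# "prompt"/"id" fields are illustrative assumptions, not a required format.
import argparse
import json

from transformers import AutoModelForCausalLM, AutoTokenizer


def main() -> None:
    parser = argparse.ArgumentParser(description="Reproduce a benchmark score from a checkpoint")
    parser.add_argument("--checkpoint", required=True, help="path or hub id of the submitted model")
    parser.add_argument("--dialogues", default="dialogues.jsonl", help="benchmark dialogues (assumed JSONL)")
    parser.add_argument("--out", default="predictions.jsonl", help="where to write predicted function calls")
    args = parser.parse_args()

    tokenizer = AutoTokenizer.from_pretrained(args.checkpoint)
    model = AutoModelForCausalLM.from_pretrained(args.checkpoint)

    with open(args.dialogues) as src, open(args.out, "w") as dst:
        for line in src:
            example = json.loads(line)  # assumed to contain a "prompt" field
            inputs = tokenizer(example["prompt"], return_tensors="pt")
            output = model.generate(**inputs, max_new_tokens=256)
            # Keep only the newly generated tokens after the prompt.
            completion = tokenizer.decode(
                output[0][inputs["input_ids"].shape[1]:],
                skip_special_tokens=True,
            )
            dst.write(json.dumps({"id": example.get("id"), "prediction": completion}) + "\n")


if __name__ == "__main__":
    main()
```

Keeping the script self-contained, with the checkpoint path and data files passed as arguments, makes it straightforward to rerun on our side and verify the reported score.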