A comprehensive benchmark for evaluating the creative writing capabilities of large language models, combining rubric scoring with Elo-style pairwise ratings. The evaluation uses 32 distinct writing prompts across 3 iterations (96 items total), sampled at temperature 0.7 with min_p 0.1. Each generated piece is first assessed by a judge model (Claude 3.7 Sonnet) against a comprehensive rubric, then entered into pairwise matchups scored with the Glicko-2 rating system, which accounts for win margins. The benchmark is designed for sharper discrimination at the top end of model performance and includes prompts that challenge models on humor, romance, spatial awareness, and unique perspectives. It also implements bias mitigation strategies for length, position, verbosity, and poetic incoherence. These scores power the official Creative Writing leaderboard on EQ-Bench.com.
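To illustrate the idea of margin-aware pairwise ratings, here is a minimal sketch using a plain Elo update with a margin-of-victory multiplier. This is a simplified stand-in, not the benchmark's actual Glicko-2 implementation (Glicko-2 additionally tracks rating deviation and volatility); the function names and the `k` factor are illustrative assumptions.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the logistic Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a: float, r_b: float, outcome: float,
           margin: float, k: float = 32.0) -> tuple[float, float]:
    """One rating update after a single matchup.

    outcome: 1.0 = A wins, 0.5 = draw, 0.0 = B wins.
    margin:  in [0, 1]; larger judged win margins scale the
             update so decisive wins move ratings more.
    """
    e_a = expected_score(r_a, r_b)
    delta = k * (1.0 + margin) * (outcome - e_a)
    return r_a + delta, r_b - delta

# Example: A beats B decisively from equal ratings.
r_a, r_b = update(1500.0, 1500.0, outcome=1.0, margin=0.5)
```

A narrow win (small `margin`) nudges ratings slightly; a decisive win moves them further, which is the property that lets the system separate models clustered near the top.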
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate that step on the progress chart with your name.