Codesota · General · Retrieval · StackOverflow-QA (StackQA)
Retrieval · benchmark dataset · EN

StackOverflow-QA (StackQA)

StackOverflow-QA (aka StackQA) is a retrieval benchmark built from Stack Overflow question/answer posts, where both queries and candidate documents can be long passages mixing natural language and code. It is distributed in a standard retrieval format (queries, corpus, and qrels with relevance scores) and is intended for code+text information-retrieval evaluation (e.g., dense single-vector retrieval). The Hugging Face mirror (mteb/stackoverflow-qa) lists the splits, the typical qrels fields (query-id, corpus-id, score), and the sizes: ~15.9k default rows (train: ~14k, test: ~1.99k) with corpus/queries subsets (~19.9k). The dataset is used in recent code-IR benchmarks such as CoIR and is typically scored with nDCG@10 for single-vector retrieval.
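Scoring against this format reduces to grouping the flat (query-id, corpus-id, score) qrels rows by query, then computing nDCG@10 per query over each system's ranked results. A minimal sketch of one common nDCG formulation (linear gain, log2 discount); the helper names and the example rows are illustrative, not part of the dataset's tooling:

```python
import math
from collections import defaultdict

def rows_to_qrels(rows):
    """Group flat qrels rows (dicts with 'query-id', 'corpus-id', 'score'),
    as exposed by the HF mirror, into {query_id: {doc_id: score}}."""
    qrels = defaultdict(dict)
    for r in rows:
        qrels[r["query-id"]][r["corpus-id"]] = r["score"]
    return dict(qrels)

def dcg_at_k(gains, k=10):
    """Discounted cumulative gain over the top-k gains (log2 discount)."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))

def ndcg_at_k(ranked_doc_ids, query_qrels, k=10):
    """nDCG@k for one query: DCG of the retrieved ranking divided by the
    DCG of the ideal ranking built from the relevance judgments."""
    gains = [query_qrels.get(d, 0) for d in ranked_doc_ids]
    ideal = sorted(query_qrels.values(), reverse=True)
    idcg = dcg_at_k(ideal, k)
    return dcg_at_k(gains, k) / idcg if idcg > 0 else 0.0

# Hypothetical qrels rows for a single query with two relevant documents.
rows = [
    {"query-id": "q1", "corpus-id": "a", "score": 1},
    {"query-id": "q1", "corpus-id": "b", "score": 1},
]
qrels = rows_to_qrels(rows)
print(ndcg_at_k(["a", "b"], qrels["q1"]))       # perfect ranking -> 1.0
print(ndcg_at_k(["a", "x", "b"], qrels["q1"]))  # one miss interleaved
```

The benchmark score is then the mean of these per-query values over the test split. Note that nDCG variants differ (some use exponential gain, 2^rel − 1); match the evaluator used by the benchmark (e.g., the MTEB/BEIR toolchain) when submitting.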

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit + seed
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies