Research · Benchmarks · December 2025

Do Prompting Frameworks Actually Work?

RTF, TAG, RACE, COAST... The internet is full of magic frameworks for writing prompts. We decided to test them with data.

This guide is for you if you're just starting with AI, want to write better prompts, or are wondering whether all those LinkedIn acronyms are worth learning. Spoiler: the results may surprise you.

Before we start - what are these frameworks?

Prompting frameworks are acronyms meant to help you remember how to write good prompts:

RTF = Role, Task, Format
TAG = Task, Action, Goal
RACE = Role, Action, Context, Expectation
COAST = Context, Objective, Actions, Scenario, Task

Sounds professional, right? Let's check whether any of it actually works better than... just clearly describing what you want.
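To make the comparison concrete, here is the same request written both ways. This is a minimal illustration in Python, with wording we made up ourselves; it is not an example from the benchmark:

```python
# The same request, written as a plain prompt and as an RTF prompt.
# Both strings are our own illustration, not taken from the study.

simple_prompt = (
    "Summarize the attached quarterly report in five bullet points "
    "for a non-technical audience."
)

# RTF (Role, Task, Format): identical content, just labeled sections.
rtf_prompt = (
    "Role: You are a financial analyst.\n"
    "Task: Summarize the attached quarterly report for a non-technical audience.\n"
    "Format: Five bullet points."
)

print(rtf_prompt)
```

Notice that the RTF version carries no extra information; it only adds labels (and, as the numbers below show, extra tokens).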

Results - What Did the Data Show?

| Approach | Accuracy | vs. simple prompt | Token usage |
|---|---|---|---|
| Simple prompt (baseline) | 97% | baseline | 93 |
| APE | 97% | ±0% | 108 |
| RACE | 97% | ±0% | 123 |
| TRACE | 97% | ±0% | 122 |
| COAST | 95% | -2% | 121 |
| ROSES | 95% | -2% | 118 |
| RTF | 94% | -3% | 119 |
| STAR | 80% | -17% | 132 |
| TAG | 78% | -19% | 132 |

Surprise: a simple, clear prompt achieved 97% accuracy. No framework improved on that, and some (STAR, TAG) made results worse by 17-19 percentage points.
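The article doesn't publish its test harness, so the sketch below is our own reconstruction of how this kind of comparison is typically run. The `ask_llm` stub, the `TASKS` set, and the token accounting are placeholders, not the authors' setup:

```python
# Minimal sketch of an accuracy/token comparison between prompt styles.
# Everything here is illustrative: swap ask_llm() for a real client call
# and TASKS for your own labeled question set.

TASKS = [
    {"question": "What is 17 * 24?", "answer": "408"},
    # ... more tasks with known answers
]

PROMPT_STYLES = {
    "simple": lambda q: q,
    "rtf": lambda q: f"Role: You are a careful assistant.\nTask: {q}\nFormat: Answer only.",
}

def ask_llm(prompt: str) -> tuple[str, int]:
    # Placeholder: replace with a real LLM client call (OpenAI, Anthropic, etc.).
    # Here we fake a correct answer and approximate tokens by word count
    # so the harness runs end to end.
    return "408", len(prompt.split())

def evaluate(style_name: str, build_prompt) -> None:
    correct, tokens = 0, 0
    for task in TASKS:
        answer, used = ask_llm(build_prompt(task["question"]))
        correct += task["answer"] in answer
        tokens += used
    print(f"{style_name}: {correct / len(TASKS):.0%} accuracy, "
          f"{tokens / len(TASKS):.0f} tokens/task")

for name, builder in PROMPT_STYLES.items():
    evaluate(name, builder)
```

Counting tokens alongside accuracy matters here: per the table, every framework made prompts longer (93 tokens for the simple prompt vs. 108-132 for the frameworks) without ever improving the result.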

Practical Advice

If you are just getting started with LLMs:

  1. Start with a simple description of what you want. Don't overcomplicate it.
  2. If the result isn't perfect, ask follow-up questions, add specifics, give examples.
  3. For logical or mathematical tasks, add "explain step by step" (see the example after this list).
  4. Don't waste time learning RTF/TAG/RACE - it's marketing, not science.
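As promised in point 3, the whole trick is one extra sentence. The question below is our own example, not one from the benchmark:

```python
# Point 3 in practice: for reasoning tasks, append a step-by-step request.
# The question is our own example; the added sentence is the whole trick.

question = "A train leaves at 9:40 and arrives at 12:05. How long is the trip?"

plain_prompt = question
reasoning_prompt = question + " Explain your reasoning step by step, then give the final answer."

print(reasoning_prompt)
```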

The best prompting skill is not knowing acronyms, but the ability to clearly communicate what you need.

Sources and Materials