Codesota · Models · TimeLLM · 10 results · 5 benchmarks
Model card

TimeLLM

Tongji University / Ant Group · open-source · 7B params · LLM Reprogramming (Llama-7B backbone)

Reprograms frozen LLMs for time-series forecasting via learned input patches + natural language prompts.
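The "learned input patches" step can be illustrated with a minimal sketch: the series is instance-normalized and sliced into overlapping windows before being mapped into the frozen LLM's embedding space. The `patch_len`/`stride` values and the function name below are illustrative assumptions, not the model's published configuration.

```python
import numpy as np

def patchify(series, patch_len=16, stride=8):
    """Sketch of the patching step: normalize, then slice into
    overlapping windows (patch_len/stride here are illustrative)."""
    # Instance-normalize so the frozen backbone sees scale-free inputs.
    mu, sigma = series.mean(), series.std() + 1e-8
    z = (series - mu) / sigma
    n = 1 + (len(z) - patch_len) // stride
    patches = np.stack([z[i * stride : i * stride + patch_len] for i in range(n)])
    return patches, (mu, sigma)

rng = np.random.default_rng(0)
x = rng.standard_normal(96)      # a 96-step input window
patches, stats = patchify(x)
print(patches.shape)             # (11, 16)
```

In the full method, each patch would then pass through a learned linear embedding and be aligned with text prototypes before reaching the frozen Llama-7B backbone; only the small adapter layers are trained.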

§ 01 · Benchmarks

Every benchmark TimeLLM has a recorded score for.

| #  | Benchmark | Area · Task                            | Metric | Value | Rank | Date       | Source   |
|----|-----------|----------------------------------------|--------|-------|------|------------|----------|
| 01 | ETTh1     | Time Series · Time Series Forecasting  | MAE    | 0.5%  | #2/3 | 2024-05-07 | source ↗ |
| 02 | ETTh1     | Time Series · Time Series Forecasting  | MSE    | 0.4%  | #3/3 | 2024-05-07 | source ↗ |
| 03 | ETTh2     | Time Series · Time Series Forecasting  | MAE    | 0.4%  | #3/3 | 2024-05-07 | source ↗ |
| 04 | ETTh2     | Time Series · Time Series Forecasting  | MSE    | 0.4%  | #3/3 | 2024-05-07 | source ↗ |
| 05 | ETTm1     | Time Series · Time Series Forecasting  | MAE    | 0.4%  | #3/3 | 2024-05-07 | source ↗ |
| 06 | ETTm1     | Time Series · Time Series Forecasting  | MSE    | 0.4%  | #3/3 | 2024-05-07 | source ↗ |
| 07 | ETTm2     | Time Series · Time Series Forecasting  | MAE    | 0.3%  | #3/3 | 2024-05-07 | source ↗ |
| 08 | ETTm2     | Time Series · Time Series Forecasting  | MSE    | 0.3%  | #3/3 | 2024-05-07 | source ↗ |
| 09 | Weather   | Time Series · Time Series Forecasting  | MAE    | 0.3%  | #6/6 | 2024-05-07 | source ↗ |
| 10 | Weather   | Time Series · Time Series Forecasting  | MSE    | 0.2%  | #6/6 | 2024-05-07 | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark + metric (total model count after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
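The two metrics in the table are standard forecasting errors; a quick sketch of how they are computed (the toy arrays are illustrative, not benchmark data):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error: average |error| per forecast step."""
    return float(np.mean(np.abs(y_true - y_pred)))

def mse(y_true, y_pred):
    """Mean squared error: penalizes large errors quadratically."""
    return float(np.mean((y_true - y_pred) ** 2))

y_true = np.array([0.0, 1.0, 2.0])
y_pred = np.array([0.5, 1.0, 1.5])
print(mae(y_true, y_pred))  # ≈ 0.333
print(mse(y_true, y_pred))  # ≈ 0.167
```

Lower is better for both, which is why ranks here run from #1 (best) downward.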
§ 02 · Strengths by area

Where TimeLLM actually performs.

Time Series · 5 benchmarks · avg rank #3.5
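The avg rank figure can be reproduced from the Rank column of the benchmarks table, assuming it is a simple mean over all 10 result rows:

```python
# Rank column from the benchmarks table above (one entry per result row).
ranks = [2, 3, 3, 3, 3, 3, 3, 3, 6, 6]
avg_rank = sum(ranks) / len(ranks)
print(avg_rank)  # 3.5
```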
§ 03 · Papers

1 paper with results for TimeLLM.

  1. 2024-05-07 · Time Series · 10 results

    Time-LLM: Time Series Forecasting by Reprogramming Large Language Models

    Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu et al.
§ 04 · Sources & freshness

Where these numbers come from.

TimeLLM Table 11 (avg) · 10 results · 10 of 10 rows marked verified.