Methodology

Few-Shot Learning

Learning from very few examples.


Few-shot learning enables models to classify new categories from just 1-5 examples per class. Meta-learning approaches (MAML, Prototypical Networks) established the field, but in-context learning in large language models and foundation model adapters (CLIP, DINOv2) have largely superseded specialized few-shot methods for practical use.

History

2016: Matching Networks learn to compare query images to a few labeled support examples

2017: MAML (Model-Agnostic Meta-Learning, Finn et al.) learns an initialization for fast adaptation

2017: Prototypical Networks compute class prototypes in embedding space for nearest-centroid classification

2019: "A Closer Look at Few-Shot Classification" shows simple fine-tuning baselines are competitive

2020: GPT-3 demonstrates few-shot learning via in-context learning, with no gradient updates needed

2021: CLIP enables zero-shot and few-shot visual classification via text-image alignment

2022: Linear probing on frozen pretrained features rivals meta-learning on miniImageNet

2023: DINOv2 features with kNN classification achieve strong few-shot results without any fine-tuning

2024: Foundation model adapters (LoRA, prefix tuning) enable efficient few-shot adaptation

2025: In-context learning in multimodal models largely replaces specialized few-shot methods

How Few-Shot Learning Works

1. Support Set: A small set of labeled examples (1-5 per class) defines the new classification task.

2. Feature Extraction: A pretrained backbone (DINOv2, CLIP, a foundation LLM) extracts rich representations of both support and query examples.

3. Similarity Computation: Query examples are compared to support examples via cosine similarity, Euclidean distance to prototypes, or learned metrics.

4. Classification: The query is classified based on nearest neighbors, prototype matching, or a fine-tuned linear head.

5. In-Context Alternative: For LLMs, examples are placed in the prompt and the model classifies new inputs by analogy, with no parameter updates.
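Steps 1-4 above can be sketched as a nearest-centroid (Prototypical Networks-style) classifier. In this minimal sketch, random vectors stand in for backbone embeddings; the class count, shot count, and dimensions are illustrative only.

```python
import numpy as np

def prototypes(support_feats, support_labels, n_classes):
    """Mean embedding per class (the Prototypical Networks centroid)."""
    return np.stack([support_feats[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_feats, protos):
    """Assign each query to the nearest prototype by Euclidean distance."""
    dists = np.linalg.norm(query_feats[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy 2-way 3-shot task; 4-d vectors stand in for backbone features.
rng = np.random.default_rng(0)
support = np.concatenate([rng.normal(0.0, 0.1, (3, 4)),    # class 0 support
                          rng.normal(1.0, 0.1, (3, 4))])   # class 1 support
labels = np.array([0, 0, 0, 1, 1, 1])
protos = prototypes(support, labels, n_classes=2)

queries = np.array([[0.05, 0.0, 0.1, -0.05],   # near class 0
                    [0.95, 1.0, 1.05, 0.9]])   # near class 1
print(classify(queries, protos))  # -> [0 1]
```

With a strong pretrained backbone, this distance-to-centroid rule is often all the "learning" a new 5-shot task needs.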

Current Landscape

Few-shot learning in 2025 has been largely absorbed by the foundation model paradigm. The specialized meta-learning approaches that defined the field (MAML, Prototypical Networks) are increasingly niche, as pretrained features from CLIP, DINOv2, and large LLMs enable effective few-shot classification without task-specific training. In-context learning in LLMs has become the dominant few-shot method for text. The remaining role for specialized few-shot methods is in domains poorly covered by foundation models (rare industrial applications, specialized scientific imaging).

Key Challenges

Foundation model displacement — specialized few-shot methods are being superseded by general-purpose foundation models

Domain gap — pretrained features work poorly when the few-shot domain is far from pretraining data (medical, industrial)

Cross-domain evaluation — miniImageNet and tieredImageNet benchmarks are saturated and not representative of real-world challenges

Few-shot stability — performance varies significantly depending on which specific examples are chosen as the support set

Practical relevance — with foundation models, the question shifts from 'can we classify with 5 examples?' to 'can we adapt with 0 examples?'
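The stability issue above is easy to demonstrate: resampling which k examples form the support set changes nearest-centroid accuracy from episode to episode. A minimal simulation on synthetic 2-d clusters (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two overlapping synthetic "feature" clusters, 100 points per class.
feats = np.concatenate([rng.normal(0.0, 1.0, (100, 2)),
                        rng.normal(1.5, 1.0, (100, 2))])
labels = np.repeat([0, 1], 100)

def episode_accuracy(k, rng):
    """Sample a k-shot support set per class, classify the held-out points
    by nearest class centroid, and return accuracy for this episode."""
    idx0 = rng.choice(100, size=k, replace=False)
    idx1 = 100 + rng.choice(100, size=k, replace=False)
    protos = np.stack([feats[idx0].mean(axis=0), feats[idx1].mean(axis=0)])
    held_out = np.setdiff1d(np.arange(200), np.concatenate([idx0, idx1]))
    dists = np.linalg.norm(feats[held_out][:, None, :] - protos[None, :, :],
                           axis=-1)
    return float((dists.argmin(axis=1) == labels[held_out]).mean())

accs = [episode_accuracy(5, rng) for _ in range(200)]
print(f"accuracy over 200 episodes: mean={np.mean(accs):.3f}, "
      f"std={np.std(accs):.3f}, range=[{min(accs):.3f}, {max(accs):.3f}]")
```

The spread between the best and worst episodes is why serious few-shot evaluations report mean and confidence interval over many sampled episodes, not a single split.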

Quick Recommendations

Visual few-shot classification: CLIP / DINOv2 features + nearest centroid. Foundation model features make specialized few-shot methods unnecessary for most domains.

Text few-shot classification: GPT-4 / Claude in-context learning. In-context learning with 3-5 examples is the most practical approach.
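As a concrete sketch of the in-context approach, a few-shot prompt is just labeled examples concatenated ahead of the query. The sentiment task, labels, and formatting below are hypothetical; the resulting string would be sent to whichever LLM API you use.

```python
def build_icl_prompt(examples, query,
                     instruction="Classify the sentiment as positive or negative."):
    """Format labeled examples into a few-shot prompt. The model infers the
    labeling pattern by analogy, with no parameter updates."""
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Text: {text}\nLabel: {label}\n")
    lines.append(f"Text: {query}\nLabel:")   # model completes the final label
    return "\n".join(lines)

# Hypothetical 3-shot sentiment task.
shots = [("Great battery life.", "positive"),
         ("Screen cracked on day one.", "negative"),
         ("Setup was effortless.", "positive")]
prompt = build_icl_prompt(shots, "The hinge broke after a week.")
print(prompt)
```

Example ordering, label wording, and formatting all measurably affect accuracy, which is the in-context analogue of the support-set stability problem noted above.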

Domain-specific few-shot: MAML or Prototypical Networks with domain pretraining. Still valuable when foundation models lack domain coverage.

Efficient adaptation: LoRA fine-tuning on 5-50 examples. Parameter-efficient fine-tuning bridges few-shot and full fine-tuning.
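The LoRA idea behind that last recommendation fits in a few lines: freeze the pretrained weight W and learn only a rank-r update B @ A, scaled by alpha/r. This is a minimal numpy illustration of the math, not the API of any adapter library; all dimensions are illustrative.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=4):
    """Frozen weight W plus trainable low-rank update B @ A (scaled by alpha/r).
    During few-shot tuning only A and B receive gradients; W stays fixed."""
    delta = (alpha / r) * (B @ A)            # rank-r update, shape (d_out, d_in)
    return x @ (W + delta).T

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4
W = rng.normal(size=(d_out, d_in))           # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d_in))   # trainable down-projection
B = np.zeros((d_out, r))                     # zero-init: adapter starts as a no-op
x = rng.normal(size=(2, d_in))

out = lora_forward(x, W, A, B, r=r)
assert np.allclose(out, x @ W.T)             # B = 0, so output matches the frozen layer
# Trainable parameters: r*(d_in + d_out) = 512 vs. d_in*d_out = 4096 for full fine-tuning.
```

The zero-initialized B matrix is the key design choice: adaptation starts exactly at the pretrained model, and with only r*(d_in + d_out) trainable parameters, 5-50 examples are enough to fit the update without catastrophic overfitting.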

What's Next

The frontier is one-shot and zero-shot learning in specialized domains — leveraging foundation models pretrained on diverse data to handle novel categories with minimal or no examples. Expect test-time adaptation methods that refine predictions on-the-fly, and active learning strategies that select the most informative few-shot examples.

