Few-Shot Learning
Learning from very few examples.
Few-shot learning enables models to classify new categories from just 1-5 examples per class. Meta-learning approaches (MAML, Prototypical Networks) established the field, but in-context learning in large language models and foundation model adapters (CLIP, DINOv2) have largely superseded specialized few-shot methods for practical use.
History
Matching Networks learn to compare query images to a few labeled support examples
MAML (Model-Agnostic Meta-Learning) — Finn et al. learn an initialization that adapts to new tasks in a few gradient steps
Prototypical Networks compute class prototypes in embedding space for nearest-centroid classification
A Closer Look at Few-Shot Classification shows simple baselines (fine-tuning) are competitive
GPT-3 demonstrates few-shot learning via in-context learning — no gradient updates needed
CLIP enables zero-shot and few-shot visual classification via text-image alignment
Frozen pretrained features + linear probing rival meta-learning methods on Mini-ImageNet
DINOv2 features + kNN classification achieves strong few-shot results without any fine-tuning
Foundation model adapters (LoRA, prefix tuning) enable efficient few-shot adaptation
In-context learning in multimodal models largely replaces specialized few-shot methods
How Few-Shot Learning Works
Support Set
A small set of labeled examples (1-5 per class) defines the new classification task.
Feature Extraction
A pretrained backbone (DINOv2, CLIP, foundation LLM) extracts rich representations of both support and query examples.
Similarity Computation
Query examples are compared to support examples — via cosine similarity, Euclidean distance to prototypes, or learned metrics.
Classification
The query is classified based on nearest neighbors, prototype matching, or a fine-tuned linear head.
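The feature-extraction, similarity, and classification steps above can be sketched with plain NumPy. This is a minimal nearest-centroid (Prototypical Networks-style) classifier; the feature vectors here are random stand-ins for what a pretrained backbone would produce, and all names are illustrative:

```python
import numpy as np

def prototype_classify(support_feats, support_labels, query_feats):
    """Nearest-centroid classification over precomputed features.

    support_feats: (n_support, d) features from a pretrained backbone
    support_labels: (n_support,) integer class labels
    query_feats: (n_query, d) features for unlabeled queries
    Returns one predicted class label per query.
    """
    classes = np.unique(support_labels)
    # One prototype per class: the mean of its support embeddings.
    prototypes = np.stack(
        [support_feats[support_labels == c].mean(axis=0) for c in classes]
    )
    # Euclidean distance from every query to every prototype; pick the nearest.
    dists = np.linalg.norm(
        query_feats[:, None, :] - prototypes[None, :, :], axis=-1
    )
    return classes[dists.argmin(axis=1)]

# Toy 2-way 3-shot episode with well-separated synthetic clusters.
rng = np.random.default_rng(0)
support = np.concatenate([rng.normal(0, 1, (3, 8)), rng.normal(5, 1, (3, 8))])
labels = np.array([0, 0, 0, 1, 1, 1])
queries = np.concatenate([rng.normal(0, 1, (2, 8)), rng.normal(5, 1, (2, 8))])
print(prototype_classify(support, labels, queries))  # → [0 0 1 1]
```

Swapping the Euclidean distance for cosine similarity, or replacing the centroid with a kNN vote over individual support embeddings, gives the other variants mentioned above.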
In-Context Alternative
For LLMs: examples are placed in the prompt, and the model classifies new inputs by analogy — no parameter updates.
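A minimal sketch of the in-context alternative: the few labeled examples are serialized into the prompt, and the model completes the label for the new input. The template and label names below are illustrative, not tied to any specific API:

```python
def build_few_shot_prompt(examples, query, task="Classify the sentiment"):
    """Serialize labeled examples into an in-context learning prompt.

    examples: list of (text, label) pairs — the "support set"
    query: the unlabeled input the model should classify
    """
    lines = [f"{task}. Answer with the label only.", ""]
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    # The model is expected to continue the pattern after "Label:".
    lines.append(f"Input: {query}")
    lines.append("Label:")
    return "\n".join(lines)

support = [
    ("The food was amazing", "positive"),
    ("Terrible service, never again", "negative"),
    ("Loved every minute of it", "positive"),
]
prompt = build_few_shot_prompt(support, "Not worth the price")
print(prompt)  # Send this string to any chat/completions API.
```

No parameters change anywhere: the "learning" happens entirely in the forward pass, conditioned on the examples in the prompt.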
Current Landscape
Few-shot learning in 2025 has been largely absorbed by the foundation model paradigm. The specialized meta-learning approaches that defined the field (MAML, Prototypical Networks) are increasingly niche, as pretrained features from CLIP, DINOv2, and large LLMs enable effective few-shot classification without task-specific training. In-context learning in LLMs has become the dominant few-shot method for text. The remaining role for specialized few-shot methods is in domains poorly covered by foundation models (rare industrial applications, specialized scientific imaging).
Key Challenges
Foundation model displacement — specialized few-shot methods are being superseded by general-purpose foundation models
Domain gap — pretrained features work poorly when the few-shot domain is far from pretraining data (medical, industrial)
Cross-domain evaluation — the Mini-ImageNet and tieredImageNet benchmarks are saturated and not representative of real-world challenges
Few-shot stability — performance varies significantly depending on which specific examples are chosen as the support set
Practical relevance — with foundation models, the question shifts from 'can we classify with 5 examples?' to 'can we adapt with 0 examples?'
Quick Recommendations
Visual few-shot classification
CLIP / DINOv2 features + nearest centroid
Foundation model features make specialized few-shot methods unnecessary for most domains
Text few-shot classification
GPT-4 / Claude in-context learning
In-context learning with 3-5 examples is the most practical approach
Domain-specific few-shot
MAML or Prototypical Networks with domain pretraining
Still valuable when foundation models lack domain coverage
Efficient adaptation
LoRA fine-tuning on 5-50 examples
Parameter-efficient fine-tuning bridges few-shot and full fine-tuning
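To make the LoRA recommendation concrete, here is a toy NumPy sketch of the core idea (not any library's API): a frozen weight W is augmented with a low-rank update B·A, and only A and B would be trained on the few labeled examples:

```python
import numpy as np

class LoRALinear:
    """Toy LoRA layer: y = x @ (W + scale * B @ A)^T with W frozen.

    Only A (rank x d_in) and B (d_out x rank) are trainable, so a
    d_out x d_in weight is adapted with rank * (d_in + d_out)
    parameters instead of d_in * d_out.
    """
    def __init__(self, W, rank=4, alpha=8.0, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = W.shape
        self.W = W                                  # frozen pretrained weight
        self.A = rng.normal(0, 0.01, (rank, d_in))  # trainable down-projection
        self.B = np.zeros((d_out, rank))            # trainable up-projection
        self.scale = alpha / rank

    def __call__(self, x):
        # Frozen path plus low-rank adapter path.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

W = np.random.default_rng(1).normal(size=(16, 32))
layer = LoRALinear(W, rank=4)
x = np.ones((2, 32))
# With B initialized to zero, the adapter starts as an exact no-op,
# so adaptation begins from the pretrained model's behavior.
assert np.allclose(layer(x), x @ W.T)
```

The zero-initialized B is the standard trick: training starts from the pretrained model exactly, which matters when only 5-50 labeled examples constrain the update.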
What's Next
The frontier is one-shot and zero-shot learning in specialized domains — leveraging foundation models pretrained on diverse data to handle novel categories with minimal or no examples. Expect test-time adaptation methods that refine predictions on-the-fly, and active learning strategies that select the most informative few-shot examples.
Benchmarks & SOTA
No datasets indexed for this task yet.