RTX 5090. Specs, benchmarks, $/hr.
The first consumer NVIDIA card with 32 GB of VRAM. 1.27× the FP16 of a 4090, 1.78× the bandwidth, and enough headroom to run a 32B at INT4 with a long context, without leaving the house.
RTX 5090, specified.
Dense FP16 from the NVIDIA datasheet. Bandwidth is peak; sustained will be lower. Price reflects MSRP, street price, or the used market as of the date stamped at the top.
| Spec | RTX 5090 |
|---|---|
| Vendor | NVIDIA |
| Tier | Consumer |
| Generation | Blackwell |
| VRAM | 32 GB · GDDR7 |
| Memory bandwidth | 1,792 GB/s |
| FP16 dense | 209.5 TFLOPS |
| TDP | 575 W |
| Released | 2025 |
| Price | $1,999 MSRP |
| Status | Available |
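Datasheet numbers are worth checking against what the runtime actually reports. A minimal sketch, assuming a CUDA build of PyTorch and the 5090 at device index 0:

```python
import torch

# Ask the CUDA runtime what it sees; assumes a PyTorch build with CUDA
# support and that the 5090 is device 0.
props = torch.cuda.get_device_properties(0)

print(f"Name: {props.name}")
print(f"VRAM: {props.total_memory / 1024**3:.1f} GiB")
print(f"SMs:  {props.multi_processor_count}")
print(f"CC:   {props.major}.{props.minor}")
```

`nvidia-smi` reports the same VRAM and driver figures without Python in the loop.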
Eleven workloads, one card.
Throughput on the same set of repeatable workloads we use across the register. Quantisation is held constant across cards in a given row; p95 latency is reported in the methodology notes. A minimal single-stream timing sketch follows the table.
Numbers without a measurement on this chip are marked "—". Cross-card comparisons live on the head-to-head pages.
| Category | Workload | Metric | RTX 5090 | Notes |
|---|---|---|---|---|
| LLM Inference | Llama 3.1 8B | tok/s | 140 | tokens per second · single-stream · FP16 |
| LLM Inference | Llama 3.1 70B · 4-bit | tok/s | 38 | tokens per second · single-stream · INT4 GPTQ |
| LLM Inference | Qwen 2.5 32B · 4-bit | tok/s | 48 | tokens per second · single-stream · INT4 |
| LLM Inference | Mistral 7B | tok/s | 165 | tokens per second · single-stream · FP16 |
| Image Generation | SDXL 1024×1024 | it/s | 6.5 | iterations per second · 30 steps · FP16 |
| Image Generation | Flux.1 Dev | it/s | 3.4 | iterations per second · 28 steps · FP16 |
| Training | Fine-tune Llama 3.1 8B LoRA | samples/s | 12.5 | samples per second · seq 2k · BF16 |
| Training | ResNet-50 · ImageNet | img/s | 2,800 | images per second · BS=256 · BF16 |
| Computer Vision | YOLOv8x · inference | FPS | 320 | frames per second · BS=1 · FP16 |
| Computer Vision | SAM ViT-H | masks/s | 9.2 | masks per second · 1024×1024 · FP16 |
| Audio/Video | Whisper Large v3 | × RT | 28 | multiples of real-time · CPU offload off |
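For context on how a single-stream tok/s number like those above can be reproduced at home, here is a minimal sketch. It is not the register's actual harness; it assumes the transformers library, a CUDA build of PyTorch, access to the gated Llama 3.1 8B weights, and an arbitrary short prompt with 256 new tokens:

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # assumption: any FP16 causal LM works here

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16).to("cuda")

inputs = tok("The quick brown fox", return_tensors="pt").to("cuda")

# Warm-up generation so kernel compilation and cache setup stay out of the timing.
model.generate(**inputs, max_new_tokens=16)

torch.cuda.synchronize()
start = time.perf_counter()
out = model.generate(**inputs, max_new_tokens=256)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

new_tokens = out.shape[1] - inputs["input_ids"].shape[1]
print(f"{new_tokens / elapsed:.1f} tok/s · single-stream · FP16")
```

Single-stream decode at this model size is bandwidth-bound, which is why the 1,792 GB/s figure matters more than the TFLOPS number for the tok/s rows.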
What fits in 32 GB, really.
FP16 weights = 2 bytes × parameters. INT4 cuts that 4× with small quality loss. Fine-tuning needs 3–4× more memory for gradients, optimiser state, and activations. The arithmetic is sketched in code after the table.
| Model | Params | FP16 | INT8 | INT4 | Fits on RTX 5090? |
|---|---|---|---|---|---|
| Llama 3.1 8B | 8B | 16 GB | 8 GB | 4 GB | FP16, INT8 and INT4 |
| Qwen 2.5 14B | 14B | 28 GB | 14 GB | 7 GB | FP16, INT8 and INT4 |
| Qwen 2.5 32B | 32B | 64 GB | 32 GB | 16 GB | INT4 only · INT8 weights alone fill the card |
| Llama 3.1 70B | 70B | 140 GB | 70 GB | 36 GB | No |
| DeepSeek V3 | 671B MoE | 1.3 TB | 671 GB | 336 GB | No |
| Llama 3.1 405B | 405B | 810 GB | 405 GB | 203 GB | No |
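The fit column is weights-only arithmetic with a little padding for overhead. A sketch of that rule of thumb; the strict comparison against a flat 32 GB is an assumption, and the register's exact padding may differ:

```python
# Weights-only VRAM estimate: bytes per parameter by precision. Real
# deployments also need KV cache, activations, and CUDA context, so a
# fit within a gigabyte or two of the ceiling is effectively a non-fit.
BYTES_PER_PARAM = {"FP16": 2.0, "INT8": 1.0, "INT4": 0.5}
VRAM_GB = 32  # RTX 5090

def weights_gb(params_b: float, precision: str) -> float:
    """Weight footprint in GB for params_b billion parameters."""
    return params_b * BYTES_PER_PARAM[precision]

for name, params_b in [("Llama 3.1 8B", 8), ("Qwen 2.5 32B", 32), ("Llama 3.1 70B", 70)]:
    fits = [p for p in BYTES_PER_PARAM if weights_gb(params_b, p) < VRAM_GB]
    sizes = " · ".join(f"{p} {weights_gb(params_b, p):.0f} GB" for p in BYTES_PER_PARAM)
    print(f"{name}: {sizes} -> fits: {', '.join(fits) or 'none'}")
```

For a full fine-tune, multiply the weight figure by roughly 3–4× per the note above; LoRA sidesteps most of that by freezing the base weights, so only the small adapter gradients and optimiser state are added.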