Continuous Control
Continuous control — learning smooth motor commands in simulated physics — was transformed by MuJoCo and the OpenAI Gym suite in the mid-2010s. SAC (2018) and TD3 became reliable baselines, but the field shifted toward harder locomotion (humanoid parkour, dexterous hands) and sim-to-real transfer after DeepMind's dm_control and NVIDIA's Isaac Gym raised the bar. DreamerV3 (2023) showed that world-model approaches can match or beat model-free methods across dozens of control tasks with a single hyperparameter set, signaling a move toward generalist RL agents.
MuJoCo
Physics-based continuous control benchmark. Models are evaluated on 15 DeepMind Control (DMControl) tasks; the metric is the mean normalized score (0 = random policy, 1000 = expert) at 1M environment steps.
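The metric above can be sketched as a per-task linear rescaling of raw episode returns, averaged across the suite. This is a minimal illustration, not the benchmark's official scoring code; the task returns and random/expert baselines below are hypothetical placeholders.

```python
def normalized_score(raw_return, random_return, expert_return):
    """Map a raw episode return onto a 0-1000 scale,
    where 0 = random-policy return and 1000 = expert return."""
    frac = (raw_return - random_return) / (expert_return - random_return)
    return 1000.0 * max(0.0, min(1.0, frac))  # clip to the [0, 1000] range

def mean_normalized_score(results):
    """results: list of (raw, random_baseline, expert_baseline), one per task."""
    scores = [normalized_score(raw, lo, hi) for raw, lo, hi in results]
    return sum(scores) / len(scores)

# Illustrative returns for three hypothetical tasks at 1M environment steps.
results = [
    (850.0, 5.0, 1000.0),   # near-expert performance
    (400.0, 0.0, 1000.0),   # mid-training performance
    (20.0, 10.0, 1000.0),   # barely above random
]
print(round(mean_normalized_score(results), 1))
```

Clipping keeps a below-random policy from dragging the suite mean negative, a common convention in normalized-score reporting.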
Top 10
Leading models on MuJoCo.
All datasets
1 dataset tracked for this task.
Related tasks
Other tasks in Reinforcement Learning.