Paper | Code | Score | Method | Date |
--- | --- | --- | --- | --- |
Agent57: Outperforming the Atari Human Benchmark | ✓ Link | 2623.71 | Agent57 | 2020-03-30 |
First return, then explore | ✓ Link | 2281 | Go-Explore | 2020-04-27 |
Self-supervised network distillation: an effective approach to exploration in sparse reward environments | ✓ Link | 2188 | SND-VIC | 2023-02-22 |
Self-supervised network distillation: an effective approach to exploration in sparse reward environments | ✓ Link | 2138 | SND-STD | 2023-02-22 |
Generalized Data Distribution Iteration | | 2035 | GDI-I3 | 2022-06-07 |
Generalized Data Distribution Iteration | | 2000 | GDI-H3(200M frames) | 2022-06-07 |
Generalized Data Distribution Iteration | | 2000 | GDI-H3 | 2022-06-07 |
Recurrent Experience Replay in Distributed Reinforcement Learning | ✓ Link | 1970.7 | R2D2 | 2019-05-01 |
Exploration by Random Network Distillation | ✓ Link | 1859 | RND | 2018-10-30 |
Distributed Prioritized Experience Replay | ✓ Link | 1813 | Ape-X | 2018-03-02 |
Self-supervised network distillation: an effective approach to exploration in sparse reward environments | ✓ Link | 1787 | SND-V | 2023-02-22 |
Online and Offline Reinforcement Learning by Planning with a Learned Model | ✓ Link | 1731.47 | MuZero (Res2 Adam) | 2021-04-13 |
A Distributional Perspective on Reinforcement Learning | ✓ Link | 1520.0 | C51 noop | 2017-07-21 |
RUDDER: Return Decomposition for Delayed Rewards | ✓ Link | 1350 | RUDDER | 2018-06-20 |
Implicit Quantile Networks for Distributional Reinforcement Learning | ✓ Link | 1318 | IQN | 2018-06-14 |
Count-Based Exploration with the Successor Representation | ✓ Link | 1241.8 | DQNMMCe+SR | 2018-07-31 |
Learning values across many orders of magnitude | | 1172.0 | DDQN+Pop-Art noop | 2016-02-24 |
Count-Based Exploration in Feature Space for Reinforcement Learning | ✓ Link | 1169.2 | Sarsa-φ-EB | 2017-06-25 |
Noisy Networks for Exploration | ✓ Link | 815 | NoisyNet-Dueling | 2017-06-30 |
Evolution Strategies as a Scalable Alternative to Reinforcement Learning | ✓ Link | 760.0 | ES FF (1 hour) noop | 2017-03-10 |
Massively Parallel Methods for Deep Reinforcement Learning | ✓ Link | 523.4 | Gorila | 2015-07-15 |
Dueling Network Architectures for Deep Reinforcement Learning | ✓ Link | 497.0 | Duel noop | 2015-11-20 |
#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning | ✓ Link | 445.0 | TRPO-hash | 2016-11-15 |
Large-Scale Study of Curiosity-Driven Learning | ✓ Link | 416 | Intrinsic Reward Agent | 2018-08-13 |
Human-level control through deep reinforcement learning | ✓ Link | 380.0 | Nature DQN | 2015-02-25 |
Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity | ✓ Link | 291 | ASL DDQN | 2023-05-07 |
Deep Exploration via Bootstrapped DQN | ✓ Link | 212.5 | Bootstrapped DQN | 2016-02-15 |
Dueling Network Architectures for Deep Reinforcement Learning | ✓ Link | 200.0 | Duel hs | 2015-11-20 |
Increasing the Action Gap: New Operators for Reinforcement Learning | ✓ Link | 198.69 | Advantage Learning | 2015-12-15 |
Deep Reinforcement Learning with Double Q-learning | ✓ Link | 163.0 | DQN noop | 2015-09-22 |
Deep Reinforcement Learning with Double Q-learning | ✓ Link | 136.0 | DQN hs | 2015-09-22 |
Dueling Network Architectures for Deep Reinforcement Learning | ✓ Link | 98.0 | DDQN (tuned) noop | 2015-11-20 |
Prioritized Experience Replay | ✓ Link | 94.0 | Prior hs | 2015-11-18 |
Count-Based Exploration with Neural Density Models | ✓ Link | 82.2 | DQN-PixelCNN | 2017-03-03 |
The Arcade Learning Environment: An Evaluation Platform for General Agents | ✓ Link | 66 | Best Learner | 2012-07-19 |
Prioritized Experience Replay | ✓ Link | 54.0 | Prior noop | 2015-11-18 |
Dueling Network Architectures for Deep Reinforcement Learning | ✓ Link | 48.0 | Prior+Duel noop | 2015-11-20 |
Count-Based Exploration with Neural Density Models | ✓ Link | 48.0 | DQN-CTS | 2017-03-03 |
Distributional Reinforcement Learning with Quantile Regression | ✓ Link | 43.9 | QR-DQN-1 | 2017-10-27 |
Policy Optimization With Penalized Point Probability Distance: An Alternative To Proximal Policy Optimization | ✓ Link | 36.33 | POP3D | 2018-07-02 |
Deep Reinforcement Learning with Double Q-learning | ✓ Link | 29.0 | Prior+Duel hs | 2015-09-22 |
Asynchronous Methods for Deep Reinforcement Learning | ✓ Link | 25.0 | A3C LSTM hs | 2016-02-04 |
Asynchronous Methods for Deep Reinforcement Learning | ✓ Link | 23.0 | A3C FF hs | 2016-02-04 |
Deep Reinforcement Learning with Double Q-learning | ✓ Link | 21.0 | DDQN (tuned) hs | 2015-09-22 |
Asynchronous Methods for Deep Reinforcement Learning | ✓ Link | 19.0 | A3C FF (1 day) hs | 2016-02-04 |
Mastering Atari with Discrete World Models | ✓ Link | 2 | DreamerV2 | 2020-10-05 |
| | 0.6 | SARSA | |
Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model | ✓ Link | 0.40 | MuZero | 2019-11-19 |
Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models | ✓ Link | 0.0 | MP-EB | 2015-07-03 |
Unifying Count-Based Exploration and Intrinsic Motivation | ✓ Link | 0.0 | A3C-CTS | 2016-06-06 |
Count-Based Exploration in Feature Space for Reinforcement Learning | ✓ Link | 0.0 | Sarsa-ε | 2017-06-25 |
IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures | ✓ Link | 0.00 | IMPALA (deep) | 2018-02-05 |
Self-Imitation Learning | ✓ Link | 0 | A2C + SIL | 2018-06-14 |
Evolving simple programs for playing Atari games | ✓ Link | 0 | CGP | 2018-06-14 |
DNA: Proximal Policy Optimization with a Dual Network Architecture | ✓ Link | 0 | DNA | 2022-06-20 |