Paper | Code | Permuted Accuracy | Unpermuted Accuracy | Model | Release Date |
--- | --- | --- | --- | --- | --- |
SMPConv: Self-moving Point Representations for Continuous Convolution | ✓ Link | 99.10% | 99.75% | SMPConv | 2023-04-05 |
Combining Recurrent, Convolutional, and Continuous-time Models with Linear State-Space Layers | ✓ Link | 98.76% | 99.53% | LSSL | 2021-10-26 |
FlexConv: Continuous Kernel Convolutions with Differentiable Kernel Sizes | ✓ Link | 98.72% | | FlexTCN-4 | 2021-10-15 |
Efficiently Modeling Long Sequences with Structured State Spaces | ✓ Link | 98.70% | 99.63% | S4 | 2021-10-31 |
CKConv: Continuous Kernel Convolution For Sequential Data | ✓ Link | 98.54% | 99.32% | CKCNN (1M) | 2021-02-04 |
Parallelizing Legendre Memory Unit Training | ✓ Link | 98.49% | | Modified LMU (165k) | 2021-02-22 |
UnICORNN: A recurrent model for learning very long time dependencies | ✓ Link | 98.4% | | UnICORNN | 2021-03-09 |
HiPPO: Recurrent Memory with Optimal Polynomial Projections | ✓ Link | 98.3% | | HiPPO-LegS | 2020-08-17 |
CKConv: Continuous Kernel Convolution For Sequential Data | ✓ Link | 98% | 99.31% | CKCNN (100k) | 2021-02-04 |
Learning Long-Term Dependencies in Irregularly-Sampled Time Series | ✓ Link | 97.83% | | ODE-LSTM | 2020-06-08 |
Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies | ✓ Link | 97.34% | 99.4% | coRNN | 2020-10-02 |
Deep Independently Recurrent Neural Network (IndRNN) | ✓ Link | 97.2% | 99.48% | Dense IndRNN | 2019-10-11 |
An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling | ✓ Link | 97.2% | 99.0% | Temporal Convolutional Network | 2018-03-04 |
Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks | ✓ Link | 97.2% | | LMU | 2019-12-01 |
Adaptive-saturated RNN: Remember more with less instability | ✓ Link | 96.96% | 99.3% | Adaptive-saturated RNN | 2023-04-24 |
RNNs of RNNs: Recursive Construction of Stable Assemblies of Recurrent Neural Networks | ✓ Link | 96.94% | | Sparse Combo Net | 2021-06-16 |
Recurrent Highway Networks with Grouped Auxiliary Memory | ✓ Link | 96.8% | | GAM-RHN-1 | 2019-12-13 |
Long Expressive Memory for Sequence Modeling | ✓ Link | 96.6% | 99.5% | LEM | 2021-10-10 |
Lipschitz Recurrent Neural Networks | ✓ Link | 96.3% | 99.4% | LipschitzRNN | 2020-06-22 |
Learning to Remember More with Less Memorization | ✓ Link | 96.3% | 99.1% | DNC+CUW | 2019-01-05 |
Independently Recurrent Neural Network (IndRNN): Building A Longer and Deeper RNN | ✓ Link | 96% | 99% | IndRNN | 2018-03-13 |
Recurrent Batch Normalization | ✓ Link | 95.4% | 99% | BN LSTM | 2016-03-30 |
Efficient recurrent architectures through activity sparsity and sparse back-propagation through time | ✓ Link | 95.1% | 98.3% | EGRU | 2022-06-13 |
Dilated Recurrent Neural Networks | ✓ Link | 94.6% | 99.2% | Dilated GRU | 2017-10-05 |
Full-Capacity Unitary Recurrent Neural Networks | ✓ Link | 94.1% | 96.9% | Full-capacity uRNN | 2016-10-31 |
Unitary Evolution Recurrent Neural Networks | ✓ Link | 88% | 98.2% | LSTM | 2015-11-20 |
A Simple Way to Initialize Recurrent Networks of Rectified Linear Units | ✓ Link | 82% | 97% | iRNN | 2015-04-03 |
FlexConv: Continuous Kernel Convolutions with Differentiable Kernel Sizes | ✓ Link | | 99.62% | FlexTCN-6 | 2021-10-15 |
Gating Revisited: Deep Multi-layer RNNs That Can Be Trained | ✓ Link | | 99.4% | STAR | 2019-11-25 |
R-Transformer: Recurrent Neural Network Enhanced Transformer | ✓ Link | | 99.1% | R-Transformer | 2019-07-12 |
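
The two accuracy columns correspond to the permuted (psMNIST) and unpermuted (sMNIST) variants of the benchmark, which differ only in the order in which the 784 pixels of each 28×28 MNIST image are fed to the model as a sequence. A minimal NumPy sketch of the two input constructions follows; the random image and seed here are placeholders for illustration, not part of any listed method:

```python
import numpy as np

# Both variants present each 28x28 MNIST image as a 784-step
# sequence of single pixel values (one image shown here).
rng = np.random.default_rng(0)
image = rng.random((28, 28))      # stand-in for one MNIST image

# Unpermuted (sMNIST): pixels are fed in row-major scan order.
smnist_seq = image.reshape(784, 1)

# Permuted (psMNIST): a single fixed permutation, drawn once and
# reused for every image in the dataset, reorders the pixels and
# destroys spatial locality, making the task considerably harder.
perm = rng.permutation(784)
psmnist_seq = smnist_seq[perm]

assert smnist_seq.shape == psmnist_seq.shape == (784, 1)
```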