| Paper | Code | R² | RMSE | Model | Release Date |
|---|---|---|---|---|---|
| An end-to-end attention-based approach for learning on graphs | ✓ Link | 0.697±0.000 | 0.486 | ESA (Edge set attention, no positional encodings) | 2024-02-16 |
| Principal Neighbourhood Aggregation for Graph Nets | ✓ Link | 0.696±0.000 | 0.486 | PNA | 2020-04-12 |
| DropGNN: Random Dropouts Increase the Expressiveness of Graph Neural Networks | ✓ Link | 0.675±0.000 | 0.503 | DropGIN | 2021-11-11 |
| How Powerful are Graph Neural Networks? | ✓ Link | 0.668±0.000 | 0.509 | GIN | 2018-10-01 |
| Graph Attention Networks | ✓ Link | 0.666±0.000 | 0.510 | GAT | 2017-10-30 |
| How Attentive are Graph Attention Networks? | ✓ Link | 0.655±0.000 | 0.518 | GATv2 | 2021-05-30 |
| Semi-Supervised Classification with Graph Convolutional Networks | ✓ Link | 0.642±0.000 | 0.528 | GCN | 2016-09-09 |
| Pure Transformers are Powerful Graph Learners | ✓ Link | 0.641±0.000 | 0.529 | TokenGT | 2022-07-06 |
| Do Transformers Really Perform Bad for Graph Representation? | ✓ Link | OOM | OOM | Graphormer | 2021-06-09 |
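The two metrics reported above are the coefficient of determination (R², higher is better) and root mean squared error (RMSE, lower is better); OOM marks a model that ran out of memory on this benchmark. A minimal sketch of how these metrics are computed, using NumPy (the helper names here are illustrative, not taken from any of the listed codebases):

```python
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean squared error: sqrt of the mean squared residual."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2_score(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination: R² = 1 - SS_res / SS_tot,
    where SS_tot is the total variation of y_true around its mean."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Example with hypothetical regression targets and model predictions
y_true = np.array([0.5, 1.2, -0.3, 2.0])
y_pred = np.array([0.4, 1.0, -0.1, 2.3])
print(f"R²:   {r2_score(y_true, y_pred):.3f}")
print(f"RMSE: {rmse(y_true, y_pred):.3f}")
```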