Paper | Code | R² | RMSE | Model | Release Date |
---|---|---|---|---|---|
Principal Neighbourhood Aggregation for Graph Nets | ✓ Link | 0.843±0.000 | 0.430 | PNA | 2020-04-12 |
An end-to-end attention-based approach for learning on graphs | ✓ Link | 0.841±0.000 | 0.433 | ESA (edge set attention, no positional encodings) | 2024-02-16 |
DropGNN: Random Dropouts Increase the Expressiveness of Graph Neural Networks | ✓ Link | 0.835±0.000 | 0.441 | GINDrop | 2021-11-11 |
Graph Attention Networks | ✓ Link | 0.833±0.000 | 0.443 | GAT | 2017-10-30 |
How Powerful are Graph Neural Networks? | ✓ Link | 0.833±0.000 | 0.444 | GIN | 2018-10-01 |
How Attentive are Graph Attention Networks? | ✓ Link | 0.826±0.000 | 0.453 | GATv2 | 2021-05-30 |
Semi-Supervised Classification with Graph Convolutional Networks | ✓ Link | 0.814±0.000 | 0.469 | GCN | 2016-09-09 |
Pure Transformers are Powerful Graph Learners | ✓ Link | 0.800±0.000 | 0.486 | TokenGT | 2022-07-06 |
Do Transformers Really Perform Bad for Graph Representation? | ✓ Link | OOM | OOM | Graphormer | 2021-06-09 |
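For context on the two metric columns: R² (coefficient of determination) and RMSE (root mean squared error) are standard regression metrics computed between predicted and true target values; higher R² and lower RMSE are better, and "OOM" marks runs that ran out of memory. Below is a minimal NumPy sketch of how these metrics are typically computed; it is a generic illustration, not the evaluation code used to produce this table.

```python
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean squared error: sqrt of the mean squared residual."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Toy example: a near-perfect fit gives R² close to 1 and a small RMSE.
y_true = np.array([0.1, 0.5, 0.9, 1.3])
y_pred = np.array([0.2, 0.4, 1.0, 1.2])
print(f"RMSE = {rmse(y_true, y_pred):.3f}, R² = {r2(y_true, y_pred):.3f}")
```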