Paper | Code | R² | RMSE | Model | Release Date |
---|---|---|---|---|---|
An end-to-end attention-based approach for learning on graphs | ✓ Link | 0.925±0.000 | 0.343 | ESA (Edge set attention, no positional encodings) | 2024-02-16 |
Principal Neighbourhood Aggregation for Graph Nets | ✓ Link | 0.924±0.000 | 0.346 | PNA | 2020-04-12 |
How Powerful are Graph Neural Networks? | ✓ Link | 0.922±0.000 | 0.349 | GIN | 2018-10-01 |
Graph Attention Networks | ✓ Link | 0.921±0.000 | 0.353 | GAT | 2017-10-30 |
DropGNN: Random Dropouts Increase the Expressiveness of Graph Neural Networks | ✓ Link | 0.920±0.000 | 0.354 | DropGIN | 2021-11-11 |
How Attentive are Graph Attention Networks? | ✓ Link | 0.919±0.000 | 0.356 | GATv2 | 2021-05-30 |
Semi-Supervised Classification with Graph Convolutional Networks | ✓ Link | 0.912±0.000 | 0.372 | GCN | 2016-09-09 |
Pure Transformers are Powerful Graph Learners | ✓ Link | 0.907±0.000 | 0.383 | TokenGT | 2022-07-06 |
Do Transformers Really Perform Bad for Graph Representation? | ✓ Link | OOM | OOM | Graphormer | 2021-06-09 |
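OOM indicates the run exceeded available memory, so no score is reported. For reference, below is a minimal sketch of how the two table metrics (R² and RMSE) and the mean±std aggregation over seeds could be computed; the synthetic data, seed count, and aggregation protocol are illustrative assumptions, not the benchmark's actual evaluation script.

```python
import numpy as np

def r2_score(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean squared error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Synthetic example: score several runs and report mean±std in the
# same format as the table (number of seeds is an assumption).
y_true = np.random.default_rng(0).normal(size=1000)
r2s, rmses = [], []
for seed in range(5):
    noise = np.random.default_rng(seed).normal(scale=0.3, size=y_true.shape)
    y_pred = y_true + noise  # stand-in for a trained model's predictions
    r2s.append(r2_score(y_true, y_pred))
    rmses.append(rmse(y_true, y_pred))
print(f"R²: {np.mean(r2s):.3f}±{np.std(r2s):.3f}")
print(f"RMSE: {np.mean(rmses):.3f}±{np.std(rmses):.3f}")
```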