Paper | Code | R² | RMSE | Model | Release Date |
---|---|---|---|---|---|
An end-to-end attention-based approach for learning on graphs | ✓ Link | 0.891±0.000 | 0.335 | ESA (Edge Set Attention, no positional encodings) | 2024-02-16 |
Principal Neighbourhood Aggregation for Graph Nets | ✓ Link | 0.891±0.000 | 0.336 | PNA | 2020-04-12 |
How Powerful are Graph Neural Networks? | ✓ Link | 0.887±0.000 | 0.342 | GIN | 2018-10-01 |
Graph Attention Networks | ✓ Link | 0.886±0.000 | 0.343 | GAT | 2017-10-30 |
DropGNN: Random Dropouts Increase the Expressiveness of Graph Neural Networks | ✓ Link | 0.886±0.000 | 0.343 | DropGIN | 2021-11-11 |
How Attentive are Graph Attention Networks? | ✓ Link | 0.885±0.000 | 0.344 | GATv2 | 2021-05-30 |
Semi-Supervised Classification with Graph Convolutional Networks | ✓ Link | 0.878±0.000 | 0.355 | GCN | 2016-09-09 |
Pure Transformers are Powerful Graph Learners | ✓ Link | 0.872±0.000 | 0.363 | TokenGT | 2022-07-06 |
Do Transformers Really Perform Bad for Graph Representation? | ✓ Link | OOM | OOM | Graphormer | 2021-06-09 |
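For reference, R² and RMSE are the standard regression metrics reported per run and then aggregated across random seeds into the mean±std format used in the table. Below is a minimal sketch of that computation; the function names `regression_metrics` and `aggregate` are illustrative and not taken from any of the listed codebases.

```python
import numpy as np

def regression_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> tuple[float, float]:
    """Compute R² and RMSE for a single run (one random seed)."""
    residual = y_true - y_pred
    ss_res = float(np.sum(residual ** 2))                  # residual sum of squares
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    rmse = float(np.sqrt(np.mean(residual ** 2)))
    return r2, rmse

def aggregate(scores: list[float]) -> str:
    """Format per-seed scores as mean±std, the convention used in the table."""
    return f"{np.mean(scores):.3f}±{np.std(scores):.3f}"
```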