| Paper | Code | R² | RMSE | Model | Release Date |
|---|---|---|---|---|---|
| An end-to-end attention-based approach for learning on graphs | ✓ Link | 0.725±0.000 | 0.507 | ESA (Edge set attention, no positional encodings) | 2024-02-16 |
| Principal Neighbourhood Aggregation for Graph Nets | ✓ Link | 0.717±0.000 | 0.514 | PNA | 2020-04-12 |
| DropGNN: Random Dropouts Increase the Expressiveness of Graph Neural Networks | ✓ Link | 0.702±0.000 | 0.527 | GINDrop | 2021-11-11 |
| How Powerful are Graph Neural Networks? | ✓ Link | 0.696±0.000 | 0.532 | GIN | 2018-10-01 |
| Pure Transformers are Powerful Graph Learners | ✓ Link | 0.684±0.000 | 0.543 | TokenGT | 2022-07-06 |
| Graph Attention Networks | ✓ Link | 0.681±0.000 | 0.546 | GAT | 2017-10-30 |
| How Attentive are Graph Attention Networks? | ✓ Link | 0.666±0.000 | 0.558 | GATv2 | 2021-05-30 |
| Semi-Supervised Classification with Graph Convolutional Networks | ✓ Link | 0.658±0.000 | 0.565 | GCN | 2016-09-09 |
| Do Transformers Really Perform Bad for Graph Representation? | ✓ Link | OOM | OOM | Graphormer | 2021-06-09 |
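
R² (coefficient of determination, higher is better) and RMSE (root mean squared error, lower is better) are the standard regression metrics; the ± value reports the spread across runs, and OOM marks a model that ran out of memory on this benchmark. Below is a minimal sketch of how the two scores are computed; the NumPy-based functions and example values are illustrative, not the evaluation code used by any of these papers.

```python
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean squared error: square root of the mean squared residual."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return float(1.0 - ss_res / ss_tot)

# Hypothetical predictions from one model on a held-out test split.
y_true = np.array([1.2, 0.5, -0.3, 2.0])
y_pred = np.array([1.0, 0.6, -0.1, 1.7])
print(f"R2 = {r2(y_true, y_pred):.3f}, RMSE = {rmse(y_true, y_pred):.3f}")
```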