Paper | Code | Test Accuracy | Ext. Data | Validation Accuracy | #Params | Method | Date |
--- | --- | --- | --- | --- | --- | --- | --- |
Learning on Large-scale Text-attributed Graphs via Variational Inference | ✓ Link | 0.9014 ± 0.0012 | Yes | 0.9370 ± 0.0004 | 139633805 | GLEM+EnGCN | 2022-10-26 |
A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking | ✓ Link | 0.8798 ± 0.0004 | No | 0.9241 ± 0.0003 | 653918 | EnGCN | 2022-10-14 |
Learning on Large-scale Text-attributed Graphs via Variational Inference | ✓ Link | 0.8737 ± 0.0006 | Yes | 0.9400 ± 0.0003 | 139792525 | GLEM+GIANT+SAGN+SCR | 2022-10-26 |
Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias | ✓ Link | 0.8718 ± 0.0004 | Yes | 0.9399 ± 0.0002 | 110636896 | LD+GIANT+SAGN+SCR | 2023-09-26 |
N/A | | 0.8692 ± 0.0007 | Yes | 0.9371 ± 0.0003 | 1154654 | GraDBERT+GIANT & SAGN+SLE+C&S | |
[]() | | 0.8684 ± 0.0005 | Yes | 0.9365 ± 0.0003 | 1154142 | GIANT-XRT+R-SAGN+SCR+C&S | |
SCR: Training Graph Neural Networks with Consistency Regularization | ✓ Link | 0.8680 ± 0.0007 | Yes | 0.9357 ± 0.0004 | 1154654 | GIANT-XRT+SAGN+SCR+C&S | 2021-12-08 |
SCR: Training Graph Neural Networks with Consistency Regularization | ✓ Link | 0.8673 ± 0.0008 | Yes | 0.9387 ± 0.0002 | 1154654 | GIANT-XRT+SAGN+MCR+C&S | 2021-12-08 |
SCR: Training Graph Neural Networks with Consistency Regularization | ✓ Link | 0.8667 ± 0.0009 | Yes | 0.9364 ± 0.0005 | 1154654 | GIANT-XRT+SAGN+SCR | 2021-12-08 |
SCR: Training Graph Neural Networks with Consistency Regularization | ✓ Link | 0.8651 ± 0.0009 | Yes | 0.9389 ± 0.0002 | 1154654 | GIANT-XRT+SAGN+MCR | 2021-12-08 |
Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias | ✓ Link | 0.8645 ± 0.0012 | Yes | 0.9415 ± 0.0003 | 144331677 | LD+GAMLP | 2023-09-26 |
Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction | ✓ Link | 0.8643 ± 0.0020 | Yes | 0.9352 ± 0.0005 | 1548382 | GIANT-XRT+SAGN+SLE+C&S (use raw text) | 2021-10-29 |
Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction | ✓ Link | 0.8622 ± 0.0022 | Yes | 0.9363 ± 0.0005 | 1548382 | GIANT-XRT+SAGN+SLE (use raw text) | 2021-10-29 |
SCR: Training Graph Neural Networks with Consistency Regularization | ✓ Link | 0.8591 ± 0.0008 | Yes | 0.9402 ± 0.0004 | 2144151 | GIANT-XRT+GAMLP+MCR | 2021-12-08 |
SCR: Training Graph Neural Networks with Consistency Regularization | ✓ Link | 0.8520 ± 0.0008 | No | 0.9304 ± 0.0005 | 3335831 | GAMLP+RLU+SCR+C&S | 2021-12-08 |
SCR: Training Graph Neural Networks with Consistency Regularization | ✓ Link | 0.8505 ± 0.0009 | No | 0.9292 ± 0.0005 | 3335831 | GAMLP+RLU+SCR | 2021-12-08 |
Scalable and Adaptive Graph Neural Networks with Self-Label-Enhanced training | ✓ Link | 0.8485 ± 0.0010 | No | 0.9302 ± 0.0003 | 2179678 | SAGN+SLE (4 stages)+C&S | 2021-04-19 |
Scalable and Adaptive Graph Neural Networks with Self-Label-Enhanced training | ✓ Link | 0.8468 ± 0.0012 | No | 0.9309 ± 0.0007 | 2179678 | SAGN+SLE (4 stages) | 2021-04-19 |
SCR: Training Graph Neural Networks with Consistency Regularization | ✓ Link | 0.8462 ± 0.0003 | No | 0.9319 ± 0.0003 | 3335831 | GAMLP+MCR | 2021-12-08 |
N/A | | 0.8459 ± 0.0010 | No | 0.9324 ± 0.0005 | 3335831 | GAMLP+RLU | |
Combining Label Propagation and Simple Models Out-performs Graph Neural Networks | ✓ Link | 0.8451 ± 0.0006 | No | 0.9132 ± 0.0010 | 406063 | Spec-MLP-Wide + C&S | 2020-10-27 |
SCR: Training Graph Neural Networks with Consistency Regularization | ✓ Link | 0.8441 ± 0.0005 | No | 0.9325 ± 0.0004 | 2179678 | SAGN+MCR | 2021-12-08 |
Scalable and Adaptive Graph Neural Networks with Self-Label-Enhanced training | ✓ Link | 0.8428 ± 0.0014 | No | 0.9287 ± 0.0003 | 2179678 | SAGN+SLE | 2021-04-19 |
Combining Label Propagation and Simple Models Out-performs Graph Neural Networks | ✓ Link | 0.8418 ± 0.0007 | No | 0.9147 ± 0.0009 | 96247 | MLP + C&S | 2020-10-27 |
Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction | ✓ Link | 0.8415 ± 0.0022 | Yes | 0.9318 ± 0.0004 | 417583 | GIANT-XRT+GraphSAINT(use raw text) | 2021-10-29 |
Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | ✓ Link | 0.8389 ± 0.0036 | No | 0.9242 ± 0.0029 | 433047 | GraphSAGE | 2024-06-13 |
Polynormer: Polynomial-Expressive Graph Transformer in Linear Time | ✓ Link | 0.8382 ± 0.0011 | No | 0.9239 ± 0.0005 | 2383654 | Polynormer | 2024-03-02 |
N/A | | 0.8354 ± 0.0009 | No | 0.9312 ± 0.0003 | 3335831 | GAMLP | |
Adaptive Graph Diffusion Networks | ✓ Link | 0.8334 ± 0.0027 | No | 0.9229 ± 0.0010 | 1544047 | AGDN | 2020-12-30 |
Training Graph Neural Networks with 1000 Layers | ✓ Link | 0.8307 ± 0.0030 | No | 0.9290 ± 0.0007 | 2945007 | RevGNN-112 | 2021-06-14 |
Combining Label Propagation and Simple Models Out-performs Graph Neural Networks | ✓ Link | 0.8301 ± 0.0001 | No | 0.9134 ± 0.0001 | 10763 | Linear + C&S | 2020-10-27 |
Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification | ✓ Link | 0.8256 ± 0.0031 | No | 0.9308 ± 0.0017 | 1475605 | UniMP | 2020-09-08 |
Combining Label Propagation and Simple Models Out-performs Graph Neural Networks | ✓ Link | 0.8254 ± 0.0003 | No | 0.9103 ± 0.0001 | 4747 | Plain Linear + C&S | 2020-10-27 |
Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | ✓ Link | 0.8233 ± 0.0019 | No | 0.9224 ± 0.0036 | 233047 | GCN | 2024-06-13 |
Robust Optimization as Data Augmentation for Large-scale Graphs | ✓ Link | 0.8193 ± 0.0031 | No | 0.9221 ± 0.0037 | 253743 | DeeperGCN+FLAG | 2020-10-19 |
Robust Optimization as Data Augmentation for Large-scale Graphs | ✓ Link | 0.8176 ± 0.0045 | No | 0.9251 ± 0.0006 | 751574 | GAT+FLAG | 2020-10-19 |
Inductive Representation Learning on Large Graphs | ✓ Link | 0.8154 ± 0.0050 | No | 0.9238 ± 0.0006 | 103983 | GraphSAGE + C&S + node2vec | 2017-06-07 |
Scalable and Adaptive Graph Neural Networks with Self-Label-Enhanced training | ✓ Link | 0.8120 ± 0.0007 | No | 0.9309 ± 0.0004 | 2233391 | SAGN | 2021-04-19 |
Dimensionality Reduction Meets Message Passing for Graph Node Embeddings | ✓ Link | 0.8115 ± 0.0002 | No | 0.9200 ± 0.0005 | 0 | PCAPass + XGBoost | 2022-02-01 |
DeeperGCN: All You Need to Train Deeper GCNs | ✓ Link | 0.8098 ± 0.0020 | No | 0.9238 ± 0.0009 | 253743 | DeeperGCN | 2020-06-13 |
E2EG: End-to-End Node Classification Using Graph Topology and Text-based Node Attributes | ✓ Link | 0.8098 ± 0.0040 | Yes | 0.9234 ± 0.0009 | 66793520 | E2EG (use raw text) | 2022-08-09 |
Combining Label Propagation and Simple Models Out-performs Graph Neural Networks | ✓ Link | 0.8092 ± 0.0037 | No | 0.9263 ± 0.0008 | 753622 | GAT w/NS + C&S | 2020-10-27 |
SIGN: Scalable Inception Graph Neural Networks | ✓ Link | 0.8052 ± 0.0016 | No | 0.9299 ± 0.0004 | 3483703 | SIGN | 2020-04-23 |
Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction | ✓ Link | 0.8049 ± 0.0028 | Yes | 0.9210 ± 0.0009 | 275759 | GIANT-XRT+MLP (use raw text) | 2021-10-29 |
Combining Label Propagation and Simple Models Out-performs Graph Neural Networks | ✓ Link | 0.8041 ± 0.0022 | No | 0.9238 ± 0.0007 | 207919 | GraphSAGE w/NS + C&S | 2020-10-27 |
GraphSAINT: Graph Sampling Based Inductive Learning Method | ✓ Link | 0.8027 ± 0.0026 | No | N/A | 331661 | GraphSAINT-inductive | 2019-07-10 |
Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks | ✓ Link | 0.7971 ± 0.0042 | No | 0.9188 ± 0.0008 | 456034 | ClusterGCN+residual+3 layers | 2019-05-20 |
Graph Attention Networks | ✓ Link | 0.7945 ± 0.0059 | No | N/A | 751574 | GAT with NeighborSampling | 2017-10-30 |
Robust Optimization as Data Augmentation for Large-scale Graphs | ✓ Link | 0.7936 ± 0.0057 | No | 0.9205 ± 0.0007 | 206895 | GraphSAGE+FLAG | 2020-10-19 |
Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks | ✓ Link | 0.7923 ± 0.0078 | No | 0.8985 ± 0.0022 | 1540848 | Cluster-GAT | 2019-05-20 |
GraphSAINT: Graph Sampling Based Inductive Learning Method | ✓ Link | 0.7908 ± 0.0024 | No | 0.9162 ± 0.0008 | 206895 | GraphSAINT (SAGE aggr) | 2019-07-10 |
Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks | ✓ Link | 0.7897 ± 0.0033 | No | 0.9212 ± 0.0009 | 206895 | ClusterGCN (SAGE aggr) | 2019-05-20 |
Inductive Representation Learning on Large Graphs | ✓ Link | 0.7870 ± 0.0036 | No | 0.9170 ± 0.0009 | 206895 | NeighborSampling (SAGE aggr) | 2017-06-07 |
Inductive Representation Learning on Large Graphs | ✓ Link | 0.7850 ± 0.0014 | No | 0.9224 ± 0.0007 | 206895 | Full-batch GraphSAGE | 2017-06-07 |
Inductive Representation Learning on Large Graphs | ✓ Link | 0.7829 ± 0.0016 | No | N/A | N/A | GraphSAGE | 2017-06-07 |
N/A | | 0.7606 ± 0.0037 | No | 0.8991 ± 0.0011 | 22624 | TCNN | |
Semi-Supervised Classification with Graph Convolutional Networks | ✓ Link | 0.7564 ± 0.0021 | No | 0.9200 ± 0.0003 | 103727 | Full-batch GCN | 2016-09-09 |
N/A | | 0.7434 ± 0.0000 | No | 0.9091 ± 0.0000 | 0 | Label Propagation | |
N/A | | 0.7406 ± 0.0026 | No | 0.9066 ± 0.0011 | 120251183 | GraphZoom (Node2vec) | |
node2vec: Scalable Feature Learning for Networks | ✓ Link | 0.7249 ± 0.0010 | No | 0.9032 ± 0.0006 | 313612207 | Node2vec | 2016-07-03 |
Graph-less Neural Networks: Teaching Old MLPs New Tricks via Distillation | ✓ Link | 0.6886 ± 0.0046 | | | | GLNN | 2021-10-17 |
Distilling Self-Knowledge From Contrastive Links to Classify Graph Nodes Without Passing Messages | ✓ Link | 0.6259 ± 0.0010 | No | 0.7721 ± 0.0015 | 115806 | CoLinkDistMLP | 2021-06-16 |
Robust Optimization as Data Augmentation for Large-scale Graphs | ✓ Link | 0.6241 ± 0.0016 | No | 0.7688 ± 0.0014 | 103727 | MLP+FLAG | 2020-10-19 |
Open Graph Benchmark: Datasets for Machine Learning on Graphs | ✓ Link | 0.6106 ± 0.0008 | No | 0.7554 ± 0.0014 | 103727 | MLP | 2020-05-02 |
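Several entries above (the parameter-free Label Propagation baseline, and every pipeline ending in C&S) rest on the same primitive: diffusing label distributions over a normalized adjacency matrix while clamping the known training labels. A minimal NumPy sketch of that primitive, for illustration only (the function name, hyperparameters, and normalization choice here are assumptions, not any leaderboard implementation):

```python
import numpy as np

def label_propagation(adj, labels, train_mask, num_iters=50, alpha=0.9):
    """Clamped label propagation: diffuse one-hot training labels over the
    symmetrically normalized adjacency, re-fixing known labels each step."""
    adj = adj.astype(float)
    n = adj.shape[0]
    num_classes = int(labels.max()) + 1

    # One-hot seed matrix; rows for unlabeled nodes start at zero.
    y = np.zeros((n, num_classes))
    y[train_mask, labels[train_mask]] = 1.0

    # Symmetric normalization D^{-1/2} A D^{-1/2} (GCN-style propagation).
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros(n)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    s = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

    f = y.copy()
    for _ in range(num_iters):
        f = alpha * (s @ f) + (1.0 - alpha) * y
        f[train_mask] = y[train_mask]  # clamp the known training labels
    return f.argmax(axis=1)
```

C&S (Correct & Smooth) extends this idea: it first propagates the residual errors of a cheap base predictor such as an MLP ("correct"), then propagates the corrected predictions themselves ("smooth"), which is why simple MLP + C&S rows sit well above plain MLP in the table.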