OpenCodePapers

Node Property Prediction on ogbn-products

Task: Node Property Prediction
Dataset: ogbn-products
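
Both accuracy columns in the leaderboard below follow the official OGB evaluation protocol (multi-class accuracy over the standard split). Below is a minimal sketch of loading ogbn-products and scoring predictions with the `ogb` Evaluator; it assumes the `ogb` Python package is installed, and the majority-class predictions are placeholders only, not a method from the table.

```python
# Minimal sketch: load ogbn-products and score predictions with the official
# OGB Evaluator (this is how the Test / Validation Accuracy columns are computed).
# Assumes the `ogb` package; the majority-class predictions are placeholders only.
import numpy as np
from ogb.nodeproppred import NodePropPredDataset, Evaluator

dataset = NodePropPredDataset(name="ogbn-products")
split_idx = dataset.get_idx_split()              # {"train", "valid", "test"} node indices
graph, labels = dataset[0]                       # graph dict (edge_index, node_feat, ...) + labels

test_idx = split_idx["test"]
y_true = labels[test_idx]                        # shape (num_test, 1)
majority = np.bincount(labels[split_idx["train"]].ravel().astype(int)).argmax()
y_pred = np.full_like(y_true, majority)          # predict the majority training class everywhere

evaluator = Evaluator(name="ogbn-products")
print(evaluator.eval({"y_true": y_true, "y_pred": y_pred}))   # -> {"acc": ...}
```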
Results over time
Leaderboard
| Paper | Code | Test Accuracy | Ext. data | Validation Accuracy | Number of params | Model | Release Date |
|---|---|---|---|---|---|---|---|
| Learning on Large-scale Text-attributed Graphs via Variational Inference | ✓ | 0.9014 ± 0.0012 | Yes | 0.9370 ± 0.0004 | 139633805 | GLEM+EnGCN | 2022-10-26 |
| A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking | ✓ | 0.8798 ± 0.0004 | No | 0.9241 ± 0.0003 | 653918 | EnGCN | 2022-10-14 |
| Learning on Large-scale Text-attributed Graphs via Variational Inference | ✓ | 0.8737 ± 0.0006 | Yes | 0.9400 ± 0.0003 | 139792525 | GLEM+GIANT+SAGN+SCR | 2022-10-26 |
| Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias | ✓ | 0.8718 ± 0.0004 | Yes | 0.9399 ± 0.0002 | 110636896 | LD+GIANT+SAGN+SCR | 2023-09-26 |
| – | – | 0.8692 ± 0.0007 | Yes | 0.9371 ± 0.0003 | 1154654 | GraDBERT+GIANT & SAGN+SLE+CnS | – |
| – | – | 0.8684 ± 0.0005 | Yes | 0.9365 ± 0.0003 | 1154142 | GIANT-XRT+R-SAGN+SCR+C&S | – |
| SCR: Training Graph Neural Networks with Consistency Regularization | ✓ | 0.8680 ± 0.0007 | Yes | 0.9357 ± 0.0004 | 1154654 | GIANT-XRT+SAGN+SCR+C&S | 2021-12-08 |
| SCR: Training Graph Neural Networks with Consistency Regularization | ✓ | 0.8673 ± 0.0008 | Yes | 0.9387 ± 0.0002 | 1154654 | GIANT-XRT+SAGN+MCR+C&S | 2021-12-08 |
| SCR: Training Graph Neural Networks with Consistency Regularization | ✓ | 0.8667 ± 0.0009 | Yes | 0.9364 ± 0.0005 | 1154654 | GIANT-XRT+SAGN+SCR | 2021-12-08 |
| SCR: Training Graph Neural Networks with Consistency Regularization | ✓ | 0.8651 ± 0.0009 | Yes | 0.9389 ± 0.0002 | 1154654 | GIANT-XRT+SAGN+MCR | 2021-12-08 |
| Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias | ✓ | 0.8645 ± 0.0012 | Yes | 0.9415 ± 0.0003 | 144331677 | LD+GAMLP | 2023-09-26 |
| Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction | ✓ | 0.8643 ± 0.0020 | Yes | 0.9352 ± 0.0005 | 1548382 | GIANT-XRT+SAGN+SLE+C&S (use raw text) | 2021-10-29 |
| Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction | ✓ | 0.8622 ± 0.0022 | Yes | 0.9363 ± 0.0005 | 1548382 | GIANT-XRT+SAGN+SLE (use raw text) | 2021-10-29 |
| SCR: Training Graph Neural Networks with Consistency Regularization | ✓ | 0.8591 ± 0.0008 | Yes | 0.9402 ± 0.0004 | 2144151 | GIANT-XRT+GAMLP+MCR | 2021-12-08 |
| SCR: Training Graph Neural Networks with Consistency Regularization | ✓ | 0.8520 ± 0.0008 | No | 0.9304 ± 0.0005 | 3335831 | GAMLP+RLU+SCR+C&S | 2021-12-08 |
| SCR: Training Graph Neural Networks with Consistency Regularization | ✓ | 0.8505 ± 0.0009 | No | 0.9292 ± 0.0005 | 3335831 | GAMLP+RLU+SCR | 2021-12-08 |
| Scalable and Adaptive Graph Neural Networks with Self-Label-Enhanced training | ✓ | 0.8485 ± 0.0010 | No | 0.9302 ± 0.0003 | 2179678 | SAGN+SLE (4 stages)+C&S | 2021-04-19 |
| Scalable and Adaptive Graph Neural Networks with Self-Label-Enhanced training | ✓ | 0.8468 ± 0.0012 | No | 0.9309 ± 0.0007 | 2179678 | SAGN+SLE (4 stages) | 2021-04-19 |
| SCR: Training Graph Neural Networks with Consistency Regularization | ✓ | 0.8462 ± 0.0003 | No | 0.9319 ± 0.0003 | 3335831 | GAMLP+MCR | 2021-12-08 |
| – | – | 0.8459 ± 0.0010 | No | 0.9324 ± 0.0005 | 3335831 | GAMLP+RLU | – |
| Combining Label Propagation and Simple Models Out-performs Graph Neural Networks | ✓ | 0.8451 ± 0.0006 | No | 0.9132 ± 0.0010 | 406063 | Spec-MLP-Wide + C&S | 2020-10-27 |
| SCR: Training Graph Neural Networks with Consistency Regularization | ✓ | 0.8441 ± 0.0005 | No | 0.9325 ± 0.0004 | 2179678 | SAGN+MCR | 2021-12-08 |
| Scalable and Adaptive Graph Neural Networks with Self-Label-Enhanced training | ✓ | 0.8428 ± 0.0014 | No | 0.9287 ± 0.0003 | 2179678 | SAGN+SLE | 2021-04-19 |
| Combining Label Propagation and Simple Models Out-performs Graph Neural Networks | ✓ | 0.8418 ± 0.0007 | No | 0.9147 ± 0.0009 | 96247 | MLP + C&S | 2020-10-27 |
| Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction | ✓ | 0.8415 ± 0.0022 | Yes | 0.9318 ± 0.0004 | 417583 | GIANT-XRT+GraphSAINT (use raw text) | 2021-10-29 |
| Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | ✓ | 0.8389 ± 0.0036 | No | 0.9242 ± 0.0029 | 433047 | GraphSAGE | 2024-06-13 |
| Polynormer: Polynomial-Expressive Graph Transformer in Linear Time | ✓ | 0.8382 ± 0.0011 | No | 0.9239 ± 0.0005 | 2383654 | Polynormer | 2024-03-02 |
| – | – | 0.8354 ± 0.0009 | No | 0.9312 ± 0.0003 | 3335831 | GAMLP | – |
| Adaptive Graph Diffusion Networks | ✓ | 0.8334 ± 0.0027 | No | 0.9229 ± 0.0010 | 1544047 | AGDN | 2020-12-30 |
| Training Graph Neural Networks with 1000 Layers | ✓ | 0.8307 ± 0.0030 | No | 0.9290 ± 0.0007 | 2945007 | RevGNN-112 | 2021-06-14 |
| Combining Label Propagation and Simple Models Out-performs Graph Neural Networks | ✓ | 0.8301 ± 0.0001 | No | 0.9134 ± 0.0001 | 10763 | Linear + C&S | 2020-10-27 |
| Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification | ✓ | 0.8256 ± 0.0031 | No | 0.9308 ± 0.0017 | 1475605 | UniMP | 2020-09-08 |
| Combining Label Propagation and Simple Models Out-performs Graph Neural Networks | ✓ | 0.8254 ± 0.0003 | No | 0.9103 ± 0.0001 | 4747 | Plain Linear + C&S | 2020-10-27 |
| Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification | ✓ | 0.8233 ± 0.0019 | No | 0.9224 ± 0.0036 | 233047 | GCN | 2024-06-13 |
| Robust Optimization as Data Augmentation for Large-scale Graphs | ✓ | 0.8193 ± 0.0031 | No | 0.9221 ± 0.0037 | 253743 | DeeperGCN+FLAG | 2020-10-19 |
| Robust Optimization as Data Augmentation for Large-scale Graphs | ✓ | 0.8176 ± 0.0045 | No | 0.9251 ± 0.0006 | 751574 | GAT+FLAG | 2020-10-19 |
| Inductive Representation Learning on Large Graphs | ✓ | 0.8154 ± 0.0050 | No | 0.9238 ± 0.0006 | 103983 | GraphSAGE + C&S + node2vec | 2017-06-07 |
| Scalable and Adaptive Graph Neural Networks with Self-Label-Enhanced training | ✓ | 0.8120 ± 0.0007 | No | 0.9309 ± 0.0004 | 2233391 | SAGN | 2021-04-19 |
| Dimensionality Reduction Meets Message Passing for Graph Node Embeddings | ✓ | 0.8115 ± 0.0002 | No | 0.9200 ± 0.0005 | 0 | PCAPass + XGBoost | 2022-02-01 |
| DeeperGCN: All You Need to Train Deeper GCNs | ✓ | 0.8098 ± 0.0020 | No | 0.9238 ± 0.0009 | 253743 | DeeperGCN | 2020-06-13 |
| E2EG: End-to-End Node Classification Using Graph Topology and Text-based Node Attributes | ✓ | 0.8098 ± 0.0040 | Yes | 0.9234 ± 0.0009 | 66793520 | E2EG (use raw text) | 2022-08-09 |
| Combining Label Propagation and Simple Models Out-performs Graph Neural Networks | ✓ | 0.8092 ± 0.0037 | No | 0.9263 ± 0.0008 | 753622 | GAT w/NS + C&S | 2020-10-27 |
| SIGN: Scalable Inception Graph Neural Networks | ✓ | 0.8052 ± 0.0016 | No | 0.9299 ± 0.0004 | 3483703 | SIGN | 2020-04-23 |
| Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction | ✓ | 0.8049 ± 0.0028 | Yes | 0.9210 ± 0.0009 | 275759 | GIANT-XRT+MLP (use raw text) | 2021-10-29 |
| Combining Label Propagation and Simple Models Out-performs Graph Neural Networks | ✓ | 0.8041 ± 0.0022 | No | 0.9238 ± 0.0007 | 207919 | GraphSAGE w/NS + C&S | 2020-10-27 |
| GraphSAINT: Graph Sampling Based Inductive Learning Method | ✓ | 0.8027 ± 0.0026 | No | – | 331661 | GraphSAINT-inductive | 2019-07-10 |
| Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks | ✓ | 0.7971 ± 0.0042 | No | 0.9188 ± 0.0008 | 456034 | ClusterGCN+residual+3 layers | 2019-05-20 |
| Graph Attention Networks | ✓ | 0.7945 ± 0.0059 | No | – | 751574 | GAT with NeighborSampling | 2017-10-30 |
| Robust Optimization as Data Augmentation for Large-scale Graphs | ✓ | 0.7936 ± 0.0057 | No | 0.9205 ± 0.0007 | 206895 | GraphSAGE+FLAG | 2020-10-19 |
| Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks | ✓ | 0.7923 ± 0.0078 | No | 0.8985 ± 0.0022 | 1540848 | Cluster-GAT | 2019-05-20 |
| GraphSAINT: Graph Sampling Based Inductive Learning Method | ✓ | 0.7908 ± 0.0024 | No | 0.9162 ± 0.0008 | 206895 | GraphSAINT (SAGE aggr) | 2019-07-10 |
| Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks | ✓ | 0.7897 ± 0.0033 | No | 0.9212 ± 0.0009 | 206895 | ClusterGCN (SAGE aggr) | 2019-05-20 |
| Inductive Representation Learning on Large Graphs | ✓ | 0.7870 ± 0.0036 | No | 0.9170 ± 0.0009 | 206895 | NeighborSampling (SAGE aggr) | 2017-06-07 |
| Inductive Representation Learning on Large Graphs | ✓ | 0.7850 ± 0.0014 | No | 0.9224 ± 0.0007 | 206895 | Full-batch GraphSAGE | 2017-06-07 |
| Inductive Representation Learning on Large Graphs | ✓ | 0.7829 ± 0.0016 | No | – | – | GraphSAGE | 2017-06-07 |
| – | – | 0.7606 ± 0.0037 | No | 0.8991 ± 0.0011 | 22624 | TCNN | – |
| Semi-Supervised Classification with Graph Convolutional Networks | ✓ | 0.7564 ± 0.0021 | No | 0.9200 ± 0.0003 | 103727 | Full-batch GCN | 2016-09-09 |
| – | – | 0.7434 ± 0.0000 | No | 0.9091 ± 0.0000 | 0 | Label Propagation | – |
| – | – | 0.7406 ± 0.0026 | No | 0.9066 ± 0.0011 | 120251183 | GraphZoom (Node2vec) | – |
| node2vec: Scalable Feature Learning for Networks | ✓ | 0.7249 ± 0.0010 | No | 0.9032 ± 0.0006 | 313612207 | Node2vec | 2016-07-03 |
| Graph-less Neural Networks: Teaching Old MLPs New Tricks via Distillation | ✓ | 0.6886 ± 0.0046 | – | – | – | GLNN | 2021-10-17 |
| Distilling Self-Knowledge From Contrastive Links to Classify Graph Nodes Without Passing Messages | ✓ | 0.6259 ± 0.0010 | No | 0.7721 ± 0.0015 | 115806 | CoLinkDistMLP | 2021-06-16 |
| Robust Optimization as Data Augmentation for Large-scale Graphs | ✓ | 0.6241 ± 0.0016 | No | 0.7688 ± 0.0014 | 103727 | MLP+FLAG | 2020-10-19 |
| Open Graph Benchmark: Datasets for Machine Learning on Graphs | ✓ | 0.6106 ± 0.0008 | No | 0.7554 ± 0.0014 | 103727 | MLP | 2020-05-02 |
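
For reference, the lowest-ranked entry ("MLP", from the Open Graph Benchmark paper) ignores the graph structure entirely and classifies each node from its feature vector alone. The sketch below is an illustrative features-only baseline of that kind; the hyperparameters (hidden size 256, 100 full-batch epochs, Adam with lr 1e-3) are assumptions for illustration, not the reference implementation behind the numbers above.

```python
# Illustrative features-only MLP baseline in the spirit of the "MLP" row above.
# Hyperparameters (hidden size, epochs, lr) are assumptions, not the paper's settings.
import torch
import torch.nn.functional as F
from ogb.nodeproppred import NodePropPredDataset, Evaluator

dataset = NodePropPredDataset(name="ogbn-products")
split_idx = dataset.get_idx_split()
graph, labels = dataset[0]

x = torch.tensor(graph["node_feat"], dtype=torch.float)        # (num_nodes, 100) node features
y = torch.tensor(labels.ravel(), dtype=torch.long)             # 47 product categories
train_idx = torch.as_tensor(split_idx["train"], dtype=torch.long)
test_idx = torch.as_tensor(split_idx["test"], dtype=torch.long)

model = torch.nn.Sequential(
    torch.nn.Linear(x.size(1), 256), torch.nn.ReLU(), torch.nn.Dropout(0.5),
    torch.nn.Linear(256, int(y.max()) + 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(100):                                        # full-batch training, for brevity
    model.train()
    opt.zero_grad()
    loss = F.cross_entropy(model(x[train_idx]), y[train_idx])
    loss.backward()
    opt.step()

model.eval()
with torch.no_grad():
    pred = model(x[test_idx]).argmax(dim=-1, keepdim=True)      # shape (num_test, 1)
acc = Evaluator(name="ogbn-products").eval(
    {"y_true": labels[split_idx["test"]], "y_pred": pred.numpy()}
)["acc"]
print(f"test accuracy: {acc:.4f}")
```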