Cora
Citation network of scientific papers. 2708 nodes, 5429 edges, 7 classes. Classic GNN benchmark.
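A quick sanity check on the graph's sparsity from the counts above. Note that loaders differ on whether Cora's 5429 edges are stored as directed or undirected pairs; this sketch assumes undirected edges.

```python
# Cora statistics from the description above.
num_nodes = 2708
num_edges = 5429  # assumed undirected

# Each undirected edge contributes to the degree of both endpoints.
avg_degree = 2 * num_edges / num_nodes
print(f"average degree = {avg_degree:.2f}")  # about 4 citations per paper
```

The low average degree is why shallow 2-layer GNNs already cover most of each node's useful neighborhood on this benchmark.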
Metric: accuracy (higher is better)
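All scores in the table below are plain multiclass accuracy on the test split, i.e. the fraction of test nodes whose predicted class matches the label. A minimal reference implementation:

```python
def accuracy(y_pred, y_true):
    """Fraction of predictions that exactly match the labels."""
    assert len(y_pred) == len(y_true), "prediction/label length mismatch"
    correct = sum(p == t for p, t in zip(y_pred, y_true))
    return correct / len(y_true)

# Toy example with class ids 0-6, as in Cora's 7 classes.
print(accuracy([0, 3, 6, 2], [0, 3, 5, 2]))  # 0.75
```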
| Rank | Model | Source | Score | Year | Paper |
|---|---|---|---|---|---|
| 1 | TAPE + RevGAT TAPE (LLM-to-LM Interpreter) + RevGAT backbone. ICLR 2024. He et al. Uses LLM-generated explanations as node features. Supervised split. | Community | 92.9 | 2024 | Source |
| 2 | AuGLM (T5-large) AuGLM with T5-large backbone. Text-output node classifier. Xu et al. 2024 "How to Make LMs Strong Node Classifiers?" Table 1. | Community | 91.51 | 2024 | Source |
| 3 | ENGINE ENGINE vector-output model. Result from AuGLM comparison table (Table 1) in Xu et al. 2024. | Community | 91.48 | 2024 | Source |
| 4 | InstructGLM InstructGLM text-output model. Result from AuGLM comparison table (Table 1) in Xu et al. 2024. | Community | 90.77 | 2024 | Source |
| 5 | GLEM + RevGAT GLEM (Graph-LM EM framework) + RevGAT backbone. From AuGLM comparison table (Table 1) in Xu et al. 2024. | Community | 88.56 | 2024 | Source |
| 6 | GCNLLMEmb GCN with LLM-generated embeddings, supervised setting. From comprehensive LLM-based node classification analysis, Feb 2025. | Community | 88.15 | 2025 | Source |
| 7 | LLaGA (Mistral-7B) LLaGA with Mistral-7B backbone, supervised setting. Xu et al. 2024 Table 6. | Community | 87.55 | 2024 | Source |
| 8 | SDGAT Sparse-graph dynamic attention network. Reports a ~3% improvement over baselines on Cora. Published in PMC, Dec 2024. | Community | 85.29 | 2024 | Source |
| 9 | GCN* (tuned) GCN with proper hyperparameter tuning. Best model in NeurIPS 2024 "Classic GNNs are Strong Baselines" (Table 2). Luo et al. | Community | 85.08 | 2024 | Source |
| 10 | GAT* (tuned) GAT with proper hyperparameter tuning. NeurIPS 2024 "Classic GNNs are Strong Baselines" (Table 2). | Community | 84.64 | 2024 | Source |
| 11 | SGFormer SGFormer result from NeurIPS 2024 "Classic GNNs are Strong Baselines" (Table 2). Wu et al. | Community | 84.5 | 2024 | Source |
| 12 | GraphSAGE* (tuned) GraphSAGE with proper hyperparameter tuning. NeurIPS 2024 "Classic GNNs are Strong Baselines" (Table 2). | Community | 84.18 | 2024 | Source |
| 13 | Polynormer Polynormer result from NeurIPS 2024 "Classic GNNs are Strong Baselines" (Table 2). | Community | 83.25 | 2024 | Source |
| 14 | GOAT GOAT (Graph Transformer) result from NeurIPS 2024 "Classic GNNs are Strong Baselines" (Table 2). | Community | 83.18 | 2024 | Source |
| 15 | GAT Graph Attention Network. Veličković et al., ICLR 2018. | Community | 83 | 2018 | Source |
| 16 | GraphGPS GraphGPS result from NeurIPS 2024 "Classic GNNs are Strong Baselines" (Table 2). Rampasek et al. original model. | Community | 82.84 | 2024 | Source |
| 17 | Exphormer Exphormer result from NeurIPS 2024 "Classic GNNs are Strong Baselines" (Table 2). | Community | 82.77 | 2024 | Source |
| 18 | GraphSAGE GraphSAGE result from NeurIPS 2024 "Classic GNNs are Strong Baselines" paper (Table 2). Luo et al. | Community | 82.68 | 2024 | Source |
| 19 | NodeFormer NodeFormer result from NeurIPS 2024 "Classic GNNs are Strong Baselines" (Table 2). | Community | 82.2 | 2024 | Source |
| 20 | NAGphormer NAGphormer result from NeurIPS 2024 "Classic GNNs are Strong Baselines" (Table 2). | Community | 82.12 | 2024 | Source |
| 21 | GCN Graph Convolutional Network. Kipf & Welling, ICLR 2017. Standard semi-supervised split. | Community | 81.5 | 2017 | Source |
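The GCN baseline in the last row propagates node features through the symmetrically normalized adjacency Â = D^(-1/2)(A + I)D^(-1/2) (Kipf & Welling, ICLR 2017). A minimal single-layer sketch in NumPy, using a toy 4-node graph and random weights rather than the tuned benchmark configuration:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN step: ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    deg = A_hat.sum(axis=1)                  # degrees incl. self-loops
    D_inv_sqrt = np.diag(deg ** -0.5)        # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt # normalized adjacency
    return np.maximum(A_norm @ X @ W, 0.0)   # ReLU activation

# Toy 4-node path graph, 3-dim input features, 2 output channels.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.standard_normal((4, 3))
W = rng.standard_normal((3, 2))
H = gcn_layer(A, X, W)
print(H.shape)  # (4, 2): one 2-dim representation per node
```

Stacking two such layers (the second with a softmax instead of ReLU over the 7 classes) and training with cross-entropy on the labeled nodes reproduces the standard Cora setup; the ~3.5-point gap between row 21 (81.5) and row 9 (85.08) comes from hyperparameter tuning alone.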