Graphs

Link Prediction

Link prediction — inferring missing or future edges in a graph — underpins knowledge graph completion, drug-target discovery, and social network recommendation. TransE (2013) launched the knowledge graph embedding era, and the field matured through DistMult, RotatE, and CompGCN, benchmarked on FB15k-237 and WN18RR. The current frontier is inductive link prediction (generalizing to unseen entities), where GNN-based methods like NBFNet and foundation models like ULTRA (2024) show that a single model can transfer across entirely different knowledge graphs without retraining.


Link prediction estimates the likelihood of edges between nodes in a graph — critical for recommendation systems, knowledge graph completion, and biological interaction prediction. GNN-based methods (SEAL, Neo-GNN) and knowledge graph embeddings (TransE, RotatE) are the two dominant paradigms.

History

2013

TransE learns knowledge graph embeddings by modeling relations as translations in embedding space

2014

DeepWalk embeddings applied to link prediction via dot-product similarity

2016

Node2Vec extends DeepWalk with biased random walks for better structural capture

2018

SEAL (Zhang & Chen) frames link prediction as subgraph classification around target node pairs

2019

RotatE models relations as rotations in complex space, handling symmetry/antisymmetry/composition

2020

OGB link prediction benchmarks (ogbl-collab, ogbl-ddi) provide large-scale evaluation

2021

PLNLP achieves strong results on OGB by combining pairwise learning with node features

Neo-GNN (NeurIPS 2021) exploits common-neighbor structural features for link prediction

2023

NCN/NCNC (Neural Common Neighbors) explicitly models neighborhood overlap

2024

LLM-enhanced link prediction uses text descriptions to improve prediction in text-rich graphs

How Link Prediction Works

Link Prediction Pipeline
1. Node Representation Learning

GNN layers compute embeddings for each node based on its features and local graph structure.
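As a minimal sketch of this step, one mean-aggregation message-passing layer can be written in plain NumPy (the graph, feature dimension, and weights below are toy assumptions, not from any specific model):

```python
import numpy as np

# Hypothetical toy graph: 4 nodes, undirected edges as (u, v) pairs.
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
n, d = 4, 8

rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))          # initial node features
W = rng.normal(size=(d, d)) * 0.1    # one layer's weight matrix

# Symmetric adjacency with self-loops, row-normalized for mean aggregation.
A = np.eye(n)
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
A = A / A.sum(axis=1, keepdims=True)

# One message-passing layer: aggregate neighbor features, transform, ReLU.
H = np.maximum(A @ X @ W, 0.0)
print(H.shape)  # (4, 8): one embedding per node
```

Stacking several such layers (with learned weights) yields the node embeddings used for pair scoring.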

2. Structural Feature Extraction

Heuristic features like common neighbors, Adamic-Adar index, and Katz centrality capture the structural likelihood of a link.
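Two of these heuristics are easy to compute directly from neighbor sets; the graph below is a toy example chosen for illustration:

```python
from collections import defaultdict
import math

# Assumed toy graph; adjacency stored as neighbor sets.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
nbrs = defaultdict(set)
for u, v in edges:
    nbrs[u].add(v)
    nbrs[v].add(u)

def common_neighbors(u, v):
    return len(nbrs[u] & nbrs[v])

def adamic_adar(u, v):
    # Shared neighbors weighted by inverse log-degree: low-degree
    # shared neighbors are stronger evidence than hubs.
    return sum(1.0 / math.log(len(nbrs[w]))
               for w in nbrs[u] & nbrs[v] if len(nbrs[w]) > 1)

print(common_neighbors(0, 3))          # 2: nodes 0 and 3 share neighbors 1 and 2
print(round(adamic_adar(0, 3), 3))
```

Katz centrality additionally counts longer paths with exponential decay, which requires a matrix power series rather than a single set intersection.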

3. Pair Scoring

For a candidate edge (u,v), node embeddings are combined — via dot product, concatenation+MLP, or subgraph classification (SEAL) — to produce a link probability.
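The two simplest scoring heads can be sketched as follows (random embeddings and untrained weights stand in for a real GNN's output):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16
z_u, z_v = rng.normal(size=d), rng.normal(size=d)  # node embeddings (assumed)

# (a) Dot-product scoring: cheap and symmetric in (u, v).
def dot_score(zu, zv):
    return float(zu @ zv)

# (b) Concatenation + MLP: one hidden layer as a sketch; can learn
# asymmetric interaction patterns that a dot product cannot.
W1 = rng.normal(size=(2 * d, d)) * 0.1
w2 = rng.normal(size=d) * 0.1
def mlp_score(zu, zv):
    h = np.maximum(np.concatenate([zu, zv]) @ W1, 0.0)
    return float(h @ w2)

# Either raw score is squashed to a link probability with a sigmoid.
p = 1.0 / (1.0 + np.exp(-dot_score(z_u, z_v)))
```

SEAL instead classifies the enclosing subgraph around (u,v), trading this per-pair simplicity for richer structural context.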

4. Negative Sampling

Random non-edges are sampled as negative examples, and the model is trained to rank true edges above negatives.
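A common recipe is rejection sampling of non-edges plus a binary cross-entropy objective; the edge set and scores below are placeholders, not a real trained model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
pos_edges = {(0, 1), (1, 2), (2, 3)}  # observed edges (toy set)

def sample_negatives(k):
    # Rejection-sample node pairs that are not observed edges.
    negs = []
    while len(negs) < k:
        u, v = int(rng.integers(n)), int(rng.integers(n))
        if u != v and (u, v) not in pos_edges and (v, u) not in pos_edges:
            negs.append((u, v))
    return negs

def bce_loss(pos_scores, neg_scores):
    # Push positive scores up and sampled-negative scores down.
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=float)))
    pos = -np.log(sigmoid(pos_scores))
    neg = -np.log(1.0 - sigmoid(neg_scores))
    return float(np.concatenate([pos, neg]).mean())

negs = sample_negatives(3)
loss = bce_loss([2.0, 1.5], [-1.0, -0.5, 0.1])
```

On sparse graphs uniform random pairs are almost always easy negatives, which is one reason the sampling strategy matters so much (see the challenges below).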

5. Ranking Evaluation

Performance is measured by ranking metrics such as Hits@K — how often the true link ranks in the top K predictions — and MRR, the mean reciprocal rank of the true link among scored candidates.
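Both metrics reduce to simple functions of the true link's 1-based rank; the ranks below are an assumed example:

```python
def hits_at_k(rank, k):
    # 1 if the true edge ranks within the top K candidates, else 0.
    return 1.0 if rank <= k else 0.0

def mrr(ranks):
    # Mean reciprocal rank over a list of 1-based ranks.
    return sum(1.0 / r for r in ranks) / len(ranks)

ranks = [1, 3, 10, 50]  # assumed ranks of four true links
print(sum(hits_at_k(r, 10) for r in ranks) / len(ranks))  # Hits@10 = 0.75
print(round(mrr(ranks), 3))                               # 0.363
```

Note that both metrics depend on which negatives the true edge is ranked against, so OGB fixes the candidate sets per dataset to keep results comparable.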

Current Landscape

Link prediction in 2025 is split between two communities: (1) general graph learning, where GNN-based methods like SEAL and NCN dominate OGB leaderboards by explicitly modeling local structural patterns, and (2) knowledge graphs, where embedding methods (TransE, RotatE) and their successors handle typed multi-relational edges. The key insight is that structural heuristics (common neighbors) are extremely strong baselines — GNN methods that don't capture these features underperform. Temporal link prediction and dynamic graphs are the growing frontier.
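The knowledge-graph embedding side reduces to simple scoring functions over (head, relation, tail) triples. A sketch of the TransE and RotatE scores, with randomly initialized embeddings standing in for trained ones:

```python
import numpy as np

d = 32
rng = np.random.default_rng(3)
h, t = rng.normal(size=d), rng.normal(size=d)  # head/tail entity embeddings
r = rng.normal(size=d)                         # relation embedding (real-valued)

# TransE: a relation is a translation; plausible triples have h + r close to t.
def transe_score(h, r, t):
    return -np.linalg.norm(h + r - t, ord=1)

# RotatE: entities are complex vectors, relations are elementwise
# unit-modulus rotations e^{i*theta}; this handles symmetry,
# antisymmetry, and composition of relations.
theta = rng.uniform(0, 2 * np.pi, size=d)
hc = rng.normal(size=d) + 1j * rng.normal(size=d)
tc = rng.normal(size=d) + 1j * rng.normal(size=d)
def rotate_score(hc, theta, tc):
    return -np.linalg.norm(hc * np.exp(1j * theta) - tc, ord=1)

# A perfect translation attains the maximum TransE score of 0.
print(abs(transe_score(h, r, h + r)))  # 0.0 up to float error
```

Higher (less negative) scores mean more plausible triples; training ranks true triples above corrupted ones, exactly as in the negative-sampling step above.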

Key Challenges

Scalability — SEAL-style subgraph methods are expensive, extracting subgraphs for every candidate pair

Cold start — predicting links for new nodes with no existing connections is fundamentally difficult

Temporal dynamics — real networks evolve; static link prediction ignores when edges form and dissolve

Negative sampling bias — the choice of negative samples dramatically affects model training and evaluation

Evaluation leakage — improper train/test splitting (not respecting time or graph structure) inflates reported results

Quick Recommendations

Standard link prediction

SEAL / NCN

Best performance on OGB benchmarks by explicitly modeling subgraph structure

Knowledge graph completion

RotatE / ComplEx

Well-understood, scalable embeddings for typed relations

Scalable production

GraphSAGE + dot product scoring

Efficient enough for million-node graphs in real-time recommendation

Text-rich graphs

LLM node embeddings + GNN link predictor

Leverages rich text features for citation and social network link prediction

What's Next

The frontier is temporal and dynamic link prediction — predicting not just whether a link will form, but when. Expect advances in continuous-time dynamic graph networks, and integration with LLMs for text-rich social and citation networks where node content contains predictive information about future connections.

Benchmarks & SOTA

No datasets indexed for this task yet.


Related Tasks

Node Classification

Node classification — assigning labels to vertices in a graph using both node features and neighborhood structure — is the flagship task for Graph Neural Networks. GCN (Kipf & Welling, 2017) established the Cora/Citeseer/PubMed benchmark trinity, but these datasets are tiny by modern standards and results have saturated well above 85% accuracy. The field has moved toward large-scale heterogeneous graphs (ogbn-arxiv, ogbn-products from OGB) and the unsettled debate over whether simple MLPs with neighborhood features can match GNNs, as shown by SIGN and SGC ablations.

Graph Classification

Graph classification — predicting a label for an entire graph, not individual nodes — matters for molecular screening, social network analysis, and program verification. GIN (Xu et al., 2019) formalized the connection between GNN expressiveness and the Weisfeiler-Leman graph isomorphism test, and the TU datasets became standard benchmarks. Recent work on graph transformers (GPS, Exphormer) and higher-order GNNs pushes beyond WL limits, while OGB's ogbg-molhiv and ogbg-molpcba provide more rigorous large-scale evaluation than the classic small-graph benchmarks.

Molecular Property Prediction

Molecular property prediction — estimating toxicity, solubility, binding affinity, or other properties from molecular structure — is the workhorse task of AI-driven drug discovery. GNNs operate on molecular graphs while transformer approaches (ChemBERTa, Uni-Mol) use SMILES strings or 3D coordinates. MoleculeNet (2018) and the Therapeutic Data Commons (TDC) provide standardized benchmarks, but the real bottleneck is distribution shift: models trained on known chemical space struggle with novel scaffolds, and the gap between leaderboard accuracy and actual wet-lab utility remains the field's central challenge.

Something wrong or missing?

Help keep Link Prediction benchmarks accurate. Report outdated results, missing benchmarks, or errors.
