Fig. 1: The overall framework of ANEBR. Network augmentation is first performed for positive sampling, extracting and filtering useful information from the sparse network by augmenting the adjacency matrix.
entmax==1.3
matplotlib==3.10.0
networkx==3.4.2
numpy==2.2.2
scikit_learn==1.6.1
scipy==1.15.1
torch==2.5.1+cu124
torch_geometric==2.6.1
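As a quick sanity check that the listed dependencies are available, the following sketch prints each package's installed version; it is only a convenience snippet, not part of the repository (note that scikit_learn is imported as `sklearn`):

```python
# Optional environment check: print the installed version of each dependency
# listed above. scikit_learn is imported under the name "sklearn".
import importlib

for pkg in ["entmax", "matplotlib", "networkx", "numpy", "sklearn",
            "scipy", "torch", "torch_geometric"]:
    module = importlib.import_module(pkg)
    print(f"{pkg}: {getattr(module, '__version__', 'unknown')}")
```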
In `./data/`, the Adjnoun, 20-Newsgroups, PPI, Wikipedia and BlogCatalog datasets are provided, along with the corresponding processed versions for link prediction.
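The exact on-disk format of these datasets is not described here; as an illustration only, a graph stored as an edge list could be loaded with networkx and converted to a sparse adjacency matrix as follows (the filename `data/adjnoun.edgelist` is hypothetical and should be replaced with an actual file from `./data/`):

```python
# Illustrative sketch: load an edge-list graph and build its sparse adjacency
# matrix. The filename below is hypothetical; adapt it to the files in ./data/.
import networkx as nx

G = nx.read_edgelist("data/adjnoun.edgelist", nodetype=int)
A = nx.to_scipy_sparse_array(G, format="csr")  # sparse adjacency matrix
print(G.number_of_nodes(), G.number_of_edges(), A.shape)
```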
Detailed training results and configs for network reconstruction, node classification, link prediction and network visualization are provided in `./Results.ipynb`.
Alternatively, it is easy to run `./reconstruction.py` directly to perform network reconstruction, and the other graph learning tasks can be run in the same way.
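For example, network reconstruction can be started from the repository root with `python reconstruction.py`. The small sketch below does the same from Python; the script names for the remaining tasks are not listed here and would need to be looked up in the repository:

```python
# Run the network reconstruction experiment (equivalent to
# "python reconstruction.py" from the repository root). The other task
# scripts can be invoked in the same way.
import subprocess

subprocess.run(["python", "reconstruction.py"], check=True)
```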
If you find the code useful for your research, please consider citing our work:
@article{wu2025adversarial,
title={Adversarial network embedding with bootstrapped representations for sparse networks},
author={Wu, Zelong and Wang, Yidan and Lin, Guoliang and Liu, Junlong},
journal={Applied Intelligence},
volume={55},
number={6},
pages={498},
year={2025},
publisher={Springer}
}