Robustness Prediction of Complex Networks Based on CNN Improved by Graph Representation Learning Operators

Junhua Li, Changxin Xi, Yun Ke

Abstract


Network robustness prediction evaluates the stability and reliability of network systems. To address the low computational efficiency and insufficient prediction accuracy of existing network robustness methods, a complex network robustness prediction method based on graph representation learning and an improved convolutional neural network is proposed. The method introduces prior knowledge of network topology and uses adjacency matrices to extract features of complex networks, improving the efficiency and accuracy of robustness prediction. To further raise prediction accuracy, a convolutional neural network algorithm improved by graph representation learning operators is proposed, which increases both the prediction accuracy and the generalization ability for complex network robustness. The results show that the proposed algorithm reduced the prediction errors of undirected network robustness under random attacks by 10.94%, 23.41%, and 13.86%, and reduced the prediction errors of weighted network robustness by 0.0041, 0.0043, and 0.0105, respectively. In the robustness prediction of the scale-free network and the Q-recovery network, the prediction errors of the proposed algorithm were 0.1843 and 0.0278, reductions of 47.76% and 22.90%, respectively. On four real networks (MovieLens-user, Grid Yeast, C. elegans, and Polbooks), the connectivity robustness prediction errors of the proposed algorithm were 0.0906, 0.1106, 0.0715, and 0.1052, and the controllability robustness prediction errors were 0.5155, 0.1882, 0.0458, and 0.1456, respectively, all superior to existing methods. The proposed algorithm therefore has practical value in the design and optimization of network systems.
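The quantity such a predictor learns to approximate is the connectivity-robustness curve: the fraction of nodes remaining in the largest connected component as nodes are removed one by one under a random attack. The following minimal, stdlib-only sketch computes that ground-truth curve for an illustrative graph; the graph, attack order, and function names are assumptions for illustration, not taken from the paper.

```python
# Hedged sketch: the connectivity-robustness curve that a CNN-based
# predictor would be trained to approximate. Robustness is recorded as
# the fraction of the original nodes that remain in the largest
# connected component after each node removal (random attack).
import random
from collections import deque

def largest_cc_size(adj, alive):
    """Size of the largest connected component among 'alive' nodes (BFS)."""
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        comp, queue = 0, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp += 1
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, comp)
    return best

def connectivity_robustness(adj, order):
    """Largest-component fraction (of the original size) after each removal."""
    n = len(adj)
    alive = set(adj)
    curve = []
    for node in order:
        alive.discard(node)
        curve.append(largest_cc_size(adj, alive) / n if alive else 0.0)
    return curve

# Illustrative graph: a 6-node ring; attack order is a random permutation.
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
order = list(adj)
random.Random(0).shuffle(order)
curve = connectivity_robustness(adj, order)
print([round(x, 3) for x in curve])
```

A data-driven predictor maps the adjacency matrix directly to this curve, avoiding the repeated component searches that make exhaustive attack simulation expensive on large networks.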


Full Text:

PDF


DOI: https://doi.org/10.31449/inf.v49i21.7316

Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 License.