Multimodal Image Fusion and Classification of Power Equipment Using Non-Subsampled Contourlet Transform and Adaptive Pulse-Coupled Neural Network
Abstract
This paper presents a multimodal image fusion and classification method for power equipment based on the Non-Subsampled Contourlet Transform (NSCT) and an Adaptive Pulse-Coupled Neural Network (APCNN). The approach begins with image normalization, geometric alignment, and adaptive noise filtering as preprocessing steps. The NSCT then decomposes the input images into low- and high-frequency subbands. Low-frequency components are fused with phase-congruency weighting to retain energy features, while high-frequency subbands carrying structural details are selectively fused by the APCNN for precise edge and contour extraction. For efficiency, subbands beyond the fifth decomposition level are fused by local energy maximization. Experiments were conducted on a dataset of 3,000 images of transformers, current transformers, and disconnectors collected by inspection robots. The model achieved maximum recognition accuracies of 99.39% for transformers, 99.57% for current transformers, and 98.74% for disconnectors, with an average classification time of 2.36 seconds per image. Compared with APCNN, PCNN, LeNet, AlexNet, and SVM baselines, the proposed NSCT-APCNN model demonstrated superior accuracy, F1-score, and processing speed. This work provides an effective and scalable solution for real-time multimodal image classification in substation inspection scenarios, with potential for extension to fault detection in smart grids.
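To make the fusion-rule selection concrete, the following minimal Python sketch mirrors the pipeline outlined above. It is an illustration under stated assumptions, not the authors' implementation: nsct_decompose, nsct_reconstruct, and phase_congruency are assumed helper functions rather than calls to any standard library, each decomposition level is treated as a single high-frequency subband for brevity, the PCNN loop is the classical simplified model standing in for the paper's APCNN, and all parameter values are illustrative.

# Sketch of the NSCT-APCNN fusion rules described in the abstract.
# Assumed helpers (not standard-library calls): nsct_decompose, nsct_reconstruct,
# phase_congruency. The PCNN below is a simplified classical model, used here
# as a stand-in for the paper's adaptive PCNN.
import numpy as np
from scipy.ndimage import uniform_filter

def local_energy(band, size=3):
    """Mean of squared coefficients over a local window."""
    return uniform_filter(band ** 2, size=size)

def pcnn_firing_map(band, iterations=110, alpha_f=0.1, alpha_theta=0.2,
                    beta=0.5, v_theta=20.0):
    """Simplified pulse-coupled neural network; returns per-pixel firing counts."""
    stimulus = np.abs(band)
    stimulus = stimulus / (stimulus.max() + 1e-12)   # normalize to [0, 1]
    F = np.zeros_like(stimulus)            # feeding input
    Y = np.zeros_like(stimulus)            # pulse output
    theta = np.ones_like(stimulus)         # dynamic threshold
    fire_count = np.zeros_like(stimulus)
    for _ in range(iterations):
        F = np.exp(-alpha_f) * F + stimulus
        L = uniform_filter(Y, size=3)      # linking via neighbouring pulses
        U = F * (1.0 + beta * L)           # internal activity
        Y = (U > theta).astype(float)
        theta = np.exp(-alpha_theta) * theta + v_theta * Y
        fire_count += Y
    return fire_count

def fuse_nsct_apcnn(img_a, img_b, levels=6):
    low_a, highs_a = nsct_decompose(img_a, levels)   # assumed NSCT analysis
    low_b, highs_b = nsct_decompose(img_b, levels)

    # Low-frequency rule: phase-congruency weighted average (assumed helper).
    pc_a, pc_b = phase_congruency(img_a), phase_congruency(img_b)
    w = pc_a / (pc_a + pc_b + 1e-12)
    low_f = w * low_a + (1.0 - w) * low_b

    highs_f = []
    for level, (ha, hb) in enumerate(zip(highs_a, highs_b), start=1):
        if level <= 5:
            # First five detail levels: keep the coefficient whose
            # (A)PCNN neuron fired more often.
            mask = pcnn_firing_map(ha) >= pcnn_firing_map(hb)
        else:
            # Deeper levels: cheaper local-energy maximization.
            mask = local_energy(ha) >= local_energy(hb)
        highs_f.append(np.where(mask, ha, hb))

    return nsct_reconstruct(low_f, highs_f)          # assumed NSCT synthesis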
DOI: https://doi.org/10.31449/inf.v49i26.8729







