Dual-Channel Convolutional Neural Network with Mask Guidance for Digital Media Image Super-Resolution

Abstract

To address low resolution and detail loss in digital media image reconstruction, this paper proposes a super-resolution reconstruction algorithm based on a dual-channel, mask-guided convolutional neural network (CNN). The algorithm introduces a parallel guidance channel that uses edge masks generated from the input image as prior knowledge, strengthening the main channel's extraction of high-frequency structural features. Through multi-layer feature fusion and an improved Adam optimizer, the model balances reconstruction quality and efficiency. Evaluated on the CIFAR-10 and ILSVRC2020 datasets with standard image quality assessment metrics, the proposed method improved peak signal-to-noise ratio by about 1.1 dB on average, reaching a maximum of 37.12 dB, and improved the structural similarity index by about 2.1% on average, reaching a maximum of 0.9682. The algorithm also showed clear efficiency advantages while maintaining high performance: model parameters were reduced to 1.2M, only 1/35 of comparable state-of-the-art models, and average inference time per image dropped to about 20 ms, roughly 45% faster than the baseline method. The improved algorithm thus delivers higher reconstruction quality and balances quality against efficiency under a limited parameter budget. Enhancing CNNs through dual-channel collaborative design and feature-reuse mechanisms offers a new technical solution for digital media image processing.
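The abstract describes edge masks generated from the input image serving as prior knowledge for the guidance channel. The paper's abstract does not name the edge operator, so the following is only a minimal illustrative sketch of how such a guidance-channel input could be produced, assuming a Sobel gradient operator with threshold-based binarization (both the function name `sobel_edge_mask` and the threshold value are hypothetical choices, not from the paper):

```python
def sobel_edge_mask(img, threshold=0.2):
    """Binary edge mask from a grayscale image (list of rows, values in [0, 1]).

    Illustrative sketch only: the paper uses edge masks as guidance priors
    but does not specify the operator; Sobel is one common choice.
    """
    # Sobel kernels for horizontal (kx) and vertical (ky) gradients
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])

    def px(i, j):
        # Replicate-pad at the image borders
        return img[min(max(i, 0), h - 1)][min(max(j, 0), w - 1)]

    mags = [[0.0] * w for _ in range(h)]
    peak = 1e-8
    for i in range(h):
        for j in range(w):
            gx = gy = 0.0
            for di in range(3):
                for dj in range(3):
                    v = px(i + di - 1, j + dj - 1)
                    gx += v * kx[di][dj]
                    gy += v * ky[di][dj]
            mags[i][j] = (gx * gx + gy * gy) ** 0.5
            peak = max(peak, mags[i][j])
    # Normalize gradient magnitudes to [0, 1], then threshold to a binary mask
    return [[1.0 if m / peak > threshold else 0.0 for m in row] for row in mags]

# Example: an 8x8 image with a vertical step edge; the mask fires at the boundary
img = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
mask = sobel_edge_mask(img)
```

In a dual-channel architecture of the kind the abstract outlines, a mask like this would be fed to the parallel guidance channel alongside the low-resolution input, letting the main channel concentrate its capacity on the high-frequency regions the mask flags.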

Authors

  • Qian Song

DOI:

https://doi.org/10.31449/inf.v50i5.10264

Published

02/02/2026

How to Cite

Song, Q. (2026). Dual-Channel Convolutional Neural Network with Mask Guidance for Digital Media Image Super-Resolution. Informatica, 50(5). https://doi.org/10.31449/inf.v50i5.10264