Image Super-Resolution via CNN-Guided Prior Integration in PDE-Based Reconstruction Framework

Tianfu Ji

Abstract


Traditional image super-resolution reconstruction methods suffer from dependence on hardware devices, high computational cost, and poor interpretability and generalization. To improve the efficiency of image super-resolution reconstruction, this study proposes a partial differential equation (PDE) reconstruction method that incorporates prior information. Prior information is first extracted from the image with a convolutional neural network (CNN) and then fused into the PDE model. The CNN is built on a ResNet-18 backbone; by removing the batch normalization layers and introducing channel attention modules and a gradient-guided branch, it strengthens the discriminative representation of feature channels and the precise capture of edge features. Experiments were conducted on the Flickr2K dataset and evaluated against competing methods using metrics such as the structural similarity index and peak signal-to-noise ratio. The results indicated that, compared with other methods, extracting the image's gradient information with the CNN yielded the highest feature similarity and edge preservation rate, reaching 0.88 and 88.7%, respectively, at 500 iterations. Edge pixel accuracy and gradient feature correlation were likewise best with CNN-based gradient extraction, reaching 0.92 and 0.87, respectively, at 500 iterations. The proposed prior-informed PDE super-resolution method delivers superior performance and can provide technical support for image super-resolution reconstruction.
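
The abstract describes two steps: extracting a gradient prior with a modified ResNet-18 CNN (batch normalization removed, channel attention and a gradient-guided branch added) and fusing that prior into a PDE-based reconstruction. The following is a minimal PyTorch sketch of how such a pipeline might be organized; the module names (ChannelAttention, ResBlockNoBN, GradientPriorNet, pde_reconstruct), the single-channel input, the diffusion-plus-fidelity update rule, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a CNN gradient-prior extractor combined with a
# PDE-style reconstruction loop. All names and settings are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed form)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)


class ResBlockNoBN(nn.Module):
    """ResNet-18-style basic block with batch normalization removed."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.att = ChannelAttention(channels)

    def forward(self, x):
        out = self.conv2(F.relu(self.conv1(x)))
        return x + self.att(out)


class GradientPriorNet(nn.Module):
    """Predicts a gradient (edge) prior map from a single-channel input."""
    def __init__(self, channels=64, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.body = nn.Sequential(*[ResBlockNoBN(channels) for _ in range(n_blocks)])
        # Gradient-guided branch: finite-difference gradients of the input are
        # concatenated with deep features before predicting the prior map.
        self.tail = nn.Conv2d(channels + 2, 1, 3, padding=1)

    def forward(self, x):
        gx = F.pad(x[:, :, :, 1:] - x[:, :, :, :-1], (0, 1, 0, 0))  # horizontal gradient
        gy = F.pad(x[:, :, 1:, :] - x[:, :, :-1, :], (0, 0, 0, 1))  # vertical gradient
        feat = self.body(self.head(x))
        return self.tail(torch.cat([feat, gx, gy], dim=1))


def pde_reconstruct(lr, prior_net, scale=2, n_iters=500, dt=0.1, lam=0.05):
    """Iterates a diffusion-plus-fidelity update, with the CNN prior acting as
    an edge-preserving guidance term (illustrative formulation)."""
    hr = F.interpolate(lr, scale_factor=scale, mode="bicubic", align_corners=False)
    with torch.no_grad():
        prior = prior_net(hr)  # CNN-predicted gradient prior on the HR grid
    for _ in range(n_iters):
        p = F.pad(hr, (1, 1, 1, 1), mode="replicate")
        # Discrete Laplacian: smoothing (diffusion) term of the PDE.
        lap = (p[:, :, 1:-1, :-2] + p[:, :, 1:-1, 2:]
               + p[:, :, :-2, 1:-1] + p[:, :, 2:, 1:-1] - 4 * hr)
        # Data-fidelity term: keep the downsampled estimate close to the LR input.
        fidelity = F.interpolate(F.avg_pool2d(hr, scale) - lr,
                                 scale_factor=scale, mode="nearest")
        hr = hr + dt * (lap - fidelity + lam * prior)
    return hr


if __name__ == "__main__":
    net = GradientPriorNet()
    lr = torch.rand(1, 1, 64, 64)              # toy single-channel LR image
    sr = pde_reconstruct(lr, net, scale=2, n_iters=50)
    print(sr.shape)                            # torch.Size([1, 1, 128, 128])
```

In this sketch the learned prior simply enters the update as an additive guidance term weighted by lam; the paper's actual fusion of the CNN prior with the PDE model may take a different form.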




DOI: https://doi.org/10.31449/inf.v49i6.9325

This work is licensed under a Creative Commons Attribution 3.0 License.