MFFCN-GAN: Multi-Scale Feature Fusion CNN with GAN for Automated Artistic Scene Generation in Film Animation

Abstract

Amid the digital transformation of the film industry, traditional animation scene production suffers from low efficiency, high cost, and difficulty meeting audience demand for high-quality scenes. To address this, this paper applies convolutional neural networks to the automatic generation of film animation scenes. A multi-scale feature fusion convolutional network (MFFCN) is constructed that extracts features in parallel with convolution kernels of multiple sizes, and an attention mechanism is combined with a generative adversarial network (GAN) for scene generation. Experiments use Kaggle's Anime Images Dataset, which includes fantastical landscapes and futuristic cityscapes. The proposed MFFCN model, with three convolutional branches and two attention modules, is compared against four baselines, including a geometric rule-based model and a support vector machine. Results show that MFFCN improves PSNR by 15 dB and SSIM by over 40% relative to the geometric model, and it also excels in scene richness and visual style. This research advances the use of computer graphics and deep learning in art generation, providing a practical, intelligent solution for animation scene development that can improve production efficiency and stylistic quality in the film industry.
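The abstract describes the core MFFCN idea: several convolutional branches with different kernel sizes run in parallel, and their outputs are fused under an attention weighting. The paper's actual layer configuration is not given here, so the following is only a minimal single-channel sketch of that pattern; the kernel sizes (3, 5, 7), the mean-filter kernels, and the softmax-energy attention are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def conv2d_same(img, kernel):
    """Plain 2-D convolution with zero 'same' padding (illustrative, not optimized)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def mffcn_fuse(img, sizes=(3, 5, 7)):
    """Hypothetical multi-scale fusion: one branch per kernel size,
    combined with a softmax attention weight over the branches."""
    # Parallel branches at three receptive-field scales (mean filters as stand-ins
    # for learned kernels).
    branches = [conv2d_same(img, np.ones((s, s)) / (s * s)) for s in sizes]
    stacked = np.stack(branches, axis=0)                 # (n_scales, H, W)
    # Toy attention: weight each branch by the softmax of its mean response.
    energy = stacked.reshape(len(sizes), -1).mean(axis=1)
    weights = np.exp(energy) / np.exp(energy).sum()
    return np.tensordot(weights, stacked, axes=1)        # attention-weighted fusion
```

In a trained network the branch kernels and attention weights would be learned end-to-end inside the GAN generator; this sketch only shows how parallel multi-scale extraction and weighted fusion compose.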

Authors

  • Xianrui Liu Nanjing University of the Arts, Nanjing 210013, Jiangsu, China
  • Hang Zhao Nanjing University of the Arts, Nanjing 210013, China

DOI:

https://doi.org/10.31449/inf.v49i9.8903

Published

10/29/2025

How to Cite

Liu, X., & Zhao, H. (2025). MFFCN-GAN: Multi-Scale Feature Fusion CNN with GAN for Automated Artistic Scene Generation in Film Animation. Informatica, 49(9). https://doi.org/10.31449/inf.v49i9.8903