Optimizing Animation Style Transfer with LS-GAN and GCN-ECANet for Increased Accuracy and Efficiency

Abstract

Animation is a comprehensive art form that integrates painting, film, music, and other artistic styles. As the art form continues to evolve, animation style transfer has become a key research focus. However, existing transfer methods suffer from low accuracy, poor transfer quality, and long processing times. To address these issues, this study proposes an improved animation style transfer model. The model introduces the least squares method into the generative adversarial network to optimize its loss function, enhancing model stability and improving output quality. It also combines graph convolutional networks with an efficient channel attention network (GCN-ECANet) to improve adaptability to complex scenes. The results show that the proposed model achieves an accuracy of 95.1% and a success rate of 86.9% in high-noise environments, with an average accuracy error as low as 0.0076%, significantly outperforming the comparison models. In addition, the model achieves a style transfer similarity of 0.971, a transferred-image clarity of 94.8%, and a loss rate of only 0.195%, demonstrating strong performance. These results indicate that the proposed model is robust and stable while efficiently and accurately generating high-quality transferred images. It effectively addresses the shortcomings of traditional animation style transfer models and provides a new method for animation style transfer research.
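The least-squares modification of the GAN objective mentioned in the abstract can be illustrated with a minimal sketch. The snippet below shows the standard LSGAN loss formulation (labels a = 0 for fake, b = c = 1 for real/target); the function names and score lists are hypothetical illustrations, not the paper's actual implementation, which may use a different label scheme.

```python
# Minimal least-squares GAN (LSGAN) loss sketch in pure Python.
# d_real / d_fake stand in for discriminator scores on real and
# generated (style-transferred) images.

def _mean_sq(values, target):
    """Mean squared distance of discriminator scores from a target label."""
    return sum((v - target) ** 2 for v in values) / len(values)

def discriminator_loss(d_real, d_fake, a=0.0, b=1.0):
    # Push real scores toward b and fake scores toward a.
    return 0.5 * _mean_sq(d_real, b) + 0.5 * _mean_sq(d_fake, a)

def generator_loss(d_fake, c=1.0):
    # Push the discriminator's scores on generated images toward c.
    return 0.5 * _mean_sq(d_fake, c)

# A perfectly fooled discriminator yields zero generator loss:
print(generator_loss([1.0, 1.0]))        # 0.0
print(discriminator_loss([1.0], [0.0]))  # 0.0
```

Unlike the sigmoid cross-entropy loss of a vanilla GAN, this quadratic penalty still produces gradients for samples that are classified correctly but lie far from the decision boundary, which is the stability property the abstract attributes to the least-squares formulation.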

Authors

  • Xiaolin Zhao College of Culture and Media, Yantai Institute of Science and Technology, Yantai, 264000, China

DOI:

https://doi.org/10.31449/inf.v50i11.10110

Published

04/23/2026

How to Cite

Zhao, X. (2026). Optimizing Animation Style Transfer with LS-GAN and GCN-ECANet for Increased Accuracy and Efficiency. Informatica, 50(11). https://doi.org/10.31449/inf.v50i11.10110