Multimodal Deep Learning Framework for Machine Translation Quality Assessment Using Bilingual Corpora
Abstract
With the acceleration of globalization, machine translation has become increasingly important in cross-language communication. However, traditional methods for evaluating machine translation quality have notable limitations: manual evaluation is costly, and reference-based automatic metrics depend heavily on the quality of the reference translations. To address these problems, this study proposes a machine translation quality evaluation model based on a multimodal bilingual corpus. The model uses deep learning techniques to fuse features from multiple modalities, including text, images, and speech, in order to evaluate translation quality more comprehensively and objectively. For the experiments, we constructed a multimodal corpus of 10,000 bilingual sentence pairs spanning several domains, including news and forums. Experimental results show that, compared with traditional reference-based evaluation methods, our model improves consistency with human evaluation by 15% and semantic accuracy by 12%. Across different types of translated text, the model's comprehensive evaluation index also outperforms other evaluation methods, with an average improvement of 8%. These results verify the effectiveness and generality of the model and offer a new approach to evaluating machine translation quality.
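The abstract mentions fusing text, image, and speech features through deep learning but does not specify the architecture. A minimal sketch of one common fusion strategy, concatenating per-modality embeddings and applying a learned projection, is shown below; all dimensions, weights, and the function name are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_modalities(text_emb, image_emb, speech_emb, w, b):
    """Concatenate per-modality embeddings and apply a linear projection
    with a tanh nonlinearity -- one simple late-fusion strategy.
    (Assumption: the paper's exact fusion method is not given in the abstract.)"""
    fused = np.concatenate([text_emb, image_emb, speech_emb], axis=-1)
    return np.tanh(fused @ w + b)  # joint multimodal representation

# Hypothetical embedding sizes, chosen only for illustration.
d_text, d_img, d_speech, d_out = 128, 64, 64, 32
w = rng.standard_normal((d_text + d_img + d_speech, d_out)) * 0.01
b = np.zeros(d_out)

joint = fuse_modalities(
    rng.standard_normal(d_text),
    rng.standard_normal(d_img),
    rng.standard_normal(d_speech),
    w, b,
)
print(joint.shape)  # (32,)
```

In practice the joint representation would feed a regression or classification head that predicts a quality score, trained against human judgments.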
DOI: https://doi.org/10.31449/inf.v49i27.9901
This work is licensed under a Creative Commons Attribution 3.0 License.