Structured Linguistic Augmentation for Large Language Models in Complex Machine Translation

Hongjuan Li

Abstract


Traditional machine translation (MT) systems, and even Large Language Models (LLMs), often struggle with complex language structures, a weakness that stems from a limited grasp of intricate syntactic dependencies and subtle semantic nuances. While LLMs possess powerful generative capabilities, their over-reliance on surface-level patterns can lead to suboptimal translations of complex sentences. To address this, we propose a novel framework, Structured Linguistic Augmentation (SLA), designed to equip LLMs with a deep and explicit understanding of linguistic structure. The SLA framework integrates three synergistic components: (1) Contextual Linguistic Dependency Graph Construction and Pre-training (CLDG-PT), which injects fine-grained syntactic knowledge into the LLM; (2) Common Sense-Driven Relational Augmentation (CSRA), which externalizes the LLM’s implicit knowledge to identify high-level semantic relations; and (3) Latent Structural Relation Discovery (LSRD), which uncovers subtle, implicit connections between linguistic components via a self-supervised objective. We conduct comprehensive experiments on the general-domain WMT En-De dataset and on CS-Trans, a new, challenging dataset curated to contain complex sentences. Evaluations show that SLA significantly improves both the fluency and the accuracy of complex-sentence translation. Notably, on the CS-Trans test set, our model achieves a COMET score of 75.0, substantially outperforming a strong fine-tuned LLM baseline (70.0) and demonstrating superior linguistic comprehension. These results, together with strong performance on logical-reasoning translation tasks, validate SLA’s effectiveness in enhancing LLMs for high-fidelity complex machine translation.
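The abstract does not disclose implementation details, but the CLDG-PT component presupposes that each source sentence is first converted into an explicit dependency graph. The sketch below illustrates what that preprocessing step could look like; it uses spaCy as an assumed stand-in parser, and the function names and linearized-triple format are hypothetical, not the authors' published method.

```python
# Illustrative sketch only: the paper does not publish its implementation.
# Shows the kind of contextual dependency graph a component like CLDG-PT
# could construct before injecting syntactic structure into an LLM.
# spaCy is an assumed stand-in parser; the serialization is hypothetical.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline with a dependency parser

def build_dependency_graph(sentence: str) -> dict:
    """Parse a sentence and return its dependency graph as node and edge lists."""
    doc = nlp(sentence)
    nodes = [{"id": tok.i, "text": tok.text, "pos": tok.pos_} for tok in doc]
    # One directed edge per token, from syntactic head to dependent,
    # labeled with the dependency relation (e.g. nsubj, dobj, relcl).
    edges = [
        {"head": tok.head.i, "dep": tok.i, "label": tok.dep_}
        for tok in doc
        if tok.head.i != tok.i  # skip the root token's self-loop
    ]
    return {"nodes": nodes, "edges": edges}

def graph_to_prompt(graph: dict) -> str:
    """Linearize the graph into text that can precede a translation prompt."""
    triples = [
        f"({graph['nodes'][e['head']]['text']} -{e['label']}-> "
        f"{graph['nodes'][e['dep']]['text']})"
        for e in graph["edges"]
    ]
    return "Syntactic dependencies: " + " ".join(triples)

if __name__ == "__main__":
    g = build_dependency_graph(
        "The report that the committee rejected was written hastily."
    )
    print(graph_to_prompt(g))
```

In a setup like this, the linearized triples make center-embedded and long-distance dependencies (e.g. relating "report" to "written" across the relative clause) explicit to the model, rather than leaving them to surface-pattern inference; whether SLA injects the graph via the prompt or via a pre-training objective is not specified in the abstract.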




DOI: https://doi.org/10.31449/inf.v49i25.10014

This work is licensed under a Creative Commons Attribution 3.0 License.