A Lightweight Translation Architecture for Embedded Devices via Multilingual BERT Distillation and Quantization
Abstract
To address the deployment difficulty and high computational overhead of multilingual BERT models on embedded devices, this paper proposes a lightweight translation model based on knowledge distillation. The teacher-student knowledge transfer mechanism combines layer-wise and attention-based distillation strategies with optimization techniques such as pruning and 8-bit quantization, so that model compression and inference speed improve together. Translation quality was evaluated by comparing the BLEU scores of the student model against the teacher model and baseline systems, showing competitive results. The experiments show that the model reaches a BLEU score of 28.7 on the WMT-14 English-German task, only 1.4 points below the teacher model, retaining about 95.3% of its translation quality; its accuracy on the XNLI cross-lingual inference dataset reaches 78.3%, only 3.1 percentage points below the teacher model. On the Jetson Nano embedded device, the inference latency of the distilled student model drops from the teacher model's 1280 ms to 195 ms with the help of hardware acceleration, a speedup of approximately 6.56×. The proposed model also achieves substantial compression, reducing the model size from the original 4.2 GB to 650 MB, a reduction of 86.4%. This size reduction comes with minimal quality loss: the BLEU drop is no more than 1.8 points, so the compressed model retains most of the performance of the original. The compressed model has been successfully deployed on edge platforms, including the Raspberry Pi 4B, making it well suited to resource-constrained environments. In terms of parameter count, the original mBERT has about 1100M parameters and the distilled model 350M; after combining pruning and 8-bit quantization, only 137.5M remain, and inference speed increases to 8× that of the original model. In addition, introducing the attention distillation mechanism in low-resource scenarios improves the model's BLEU score by 4.2%, demonstrating its effectiveness in strengthening semantic alignment for languages with limited resources. Power tests show that the student model draws 4-6 W on average, about 35% less than the original model. Additionally, the student model's memory footprint during inference on the Raspberry Pi 4B is 320 MB, a significant reduction from the 1.5 GB required by the original mBERT. These optimizations not only improve translation efficiency and energy efficiency but also provide a highly feasible solution for future deployment of multilingual smart devices.
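The combined distillation objective described in the abstract can be sketched as follows. This is a minimal illustration, assuming a PyTorch setup with HuggingFace-style model outputs (logits, hidden_states, attentions), a student that shares the teacher's hidden size and attention-head count, and a uniform student-to-teacher layer mapping; the temperature and loss weights are illustrative placeholders, not the paper's reported settings.

```python
# Minimal sketch of a combined layer-wise + attention distillation loss.
# Assumptions (illustrative, not from the paper): HuggingFace-style outputs,
# matching hidden sizes and head counts, uniform layer mapping, and
# placeholder values for the temperature T and the weights alpha/beta/gamma.
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, T=2.0,
                      alpha=0.5, beta=0.3, gamma=0.2):
    """Weighted sum of logit KL, hidden-state MSE, and attention-map MSE."""
    # Soft-label loss: temperature-scaled KL between output distributions.
    kl = F.kl_div(
        F.log_softmax(student_out.logits / T, dim=-1),
        F.softmax(teacher_out.logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    # Layer-wise loss: map each student layer to every k-th teacher layer
    # (hidden_states[0] is the embedding output, so it is skipped).
    s_hid, t_hid = student_out.hidden_states, teacher_out.hidden_states
    k = (len(t_hid) - 1) // (len(s_hid) - 1)
    layer_mse = sum(
        F.mse_loss(s_hid[i], t_hid[i * k]) for i in range(1, len(s_hid))
    ) / (len(s_hid) - 1)

    # Attention-based loss: match attention maps of the mapped layers.
    s_att, t_att = student_out.attentions, teacher_out.attentions
    attn_mse = sum(
        F.mse_loss(s_att[i], t_att[(i + 1) * k - 1]) for i in range(len(s_att))
    ) / len(s_att)

    return alpha * kl + beta * layer_mse + gamma * attn_mse
```

In training, this loss would be computed per batch with output_hidden_states=True and output_attentions=True set on both models, and backpropagated through the student only.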
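The 8-bit quantization step can likewise be sketched with PyTorch's standard dynamic quantization API. The function and checkpoint names below are hypothetical, and the pruning and hardware-acceleration stages of the pipeline are not reproduced here.

```python
# Minimal sketch of post-distillation 8-bit quantization using PyTorch's
# dynamic quantization API. The checkpoint path is a hypothetical example.
import os
import torch

def quantize_student(student_model, path="student_int8.pt"):
    # Convert Linear-layer weights to int8; activations are quantized
    # on the fly at inference time.
    quantized = torch.quantization.quantize_dynamic(
        student_model, {torch.nn.Linear}, dtype=torch.qint8
    )
    torch.save(quantized.state_dict(), path)
    print(f"int8 checkpoint size: {os.path.getsize(path) / 1e6:.1f} MB")
    return quantized
```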
DOI: https://doi.org/10.31449/inf.v49i28.9992