Hypotheses re-ranking in translation models using human markup
- Authors: Vorontsov K.V., Skachkov N.A.
- Affiliations: CC FRC CSC RAS
- Issue: No 4 (2024)
- Pages: 121-128
- Section: ARTIFICIAL INTELLIGENCE
- URL: https://kazanmedjournal.ru/0002-3388/article/view/676401
- DOI: https://doi.org/10.31857/S0002338824040074
- EDN: https://elibrary.ru/UEFMST
- ID: 676401
Abstract
Modern machine translation systems are trained on large volumes of parallel data obtained by heuristic web crawling. The poor quality of such data leads to systematic translation errors that can be quite noticeable to a human reader. To fix such errors, this work introduces a re-ranking of model hypotheses based on human markup. We show that human markup can be used not only to increase the overall quality of translation but also to significantly reduce the number of systematic translation errors. In addition, the relative simplicity of collecting human markup and integrating it into the model training process opens up new opportunities for adapting translation models to new domains such as online retail.
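As a rough illustration of the general idea described in the abstract (not the authors' exact procedure), the sketch below re-ranks an n-best list produced by beam search, mixing the translation model's own score with a quality score from a scorer assumed to be trained on human markup. The scorer, the mixing weight alpha, and all names here are illustrative assumptions.

```python
# Minimal sketch of hypothesis re-ranking with a human-markup-based scorer.
# The scorer and the mixing weight are illustrative assumptions, not the
# authors' exact setup.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Hypothesis:
    text: str
    model_logprob: float  # log-probability assigned by the translation model


def rerank(hypotheses: List[Hypothesis],
           markup_scorer: Callable[[str], float],
           alpha: float = 0.5) -> List[Hypothesis]:
    """Re-rank beam-search hypotheses by combining the model score with a
    quality score learned from human markup (higher is better)."""
    def combined(h: Hypothesis) -> float:
        return (1.0 - alpha) * h.model_logprob + alpha * markup_scorer(h.text)
    return sorted(hypotheses, key=combined, reverse=True)


if __name__ == "__main__":
    # n-best list as it might come out of beam search (toy values).
    nbest = [
        Hypothesis("translation candidate A", model_logprob=-1.2),
        Hypothesis("translation candidate B", model_logprob=-1.5),
    ]
    # Trivial stand-in for a scorer trained on human markup.
    dummy_scorer = lambda text: -0.01 * len(text)
    best = rerank(nbest, dummy_scorer)[0]
    print(best.text)
```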
About the authors
K. V. Vorontsov
CC FRC CSC RAS
Author for correspondence.
Email: vokov@forecsys.ru
Russian Federation, Moscow
N. A. Skachkov
CC FRC CSC RAS
Email: nikolaj-skachkov@ya.ru
Russian Federation, Moscow