Suggested Reading List for the Machine Translation Blog

Please select one paper from the list below. Newer papers are preferred.
Artetxe, Mikel, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. “Unsupervised Neural Machine Translation.” ArXiv:1710.11041 [Cs], February 26, 2018. http://arxiv.org/abs/1710.11041.
Arthur, Philip, Dongwon Ryu, and Gholamreza Haffari. “Multilingual Simultaneous Neural Machine Translation.” In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 4758–66. Online: Association for Computational Linguistics, 2021. https://doi.org/10.18653/v1/2021.findings-acl.420.
Baevski, Alexei, Henry Zhou, Abdelrahman Mohamed, and Michael Auli. “Wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations.” ArXiv:2006.11477 [Cs, Eess], October 22, 2020. http://arxiv.org/abs/2006.11477.
Bentivogli, Luisa, Mauro Cettolo, Marco Gaido, Alina Karakanta, Alberto Martinelli, Matteo Negri, and Marco Turchi. “Cascade versus Direct Speech Translation: Do the Differences Still Make a Difference?” In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2873–87. Online: Association for Computational Linguistics, 2021. https://doi.org/10.18653/v1/2021.acl-long.224.
Bhattacharyya, Sumanta, Amirmohammad Rooshenas, Subhajit Naskar, Simeng Sun, Mohit Iyyer, and Andrew McCallum. “Energy-Based Reranking: Improving Neural Machine Translation Using Energy-Based Models.” In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 4528–37. Online: Association for Computational Linguistics, 2021. https://doi.org/10.18653/v1/2021.acl-long.349.
Cai, Deng, Yan Wang, Huayang Li, Wai Lam, and Lemao Liu. “Neural Machine Translation with Monolingual Translation Memory.” In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 7307–18. Online: Association for Computational Linguistics, 2021. https://doi.org/10.18653/v1/2021.acl-long.567.
Conneau, Alexis, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, and Michael Auli. “Unsupervised Cross-Lingual Representation Learning for Speech Recognition.” In Interspeech 2021, 2426–30. ISCA, 2021. https://doi.org/10.21437/Interspeech.2021-329.
Dong, Qianqian, Mingxuan Wang, Hao Zhou, Shuang Xu, Bo Xu, and Lei Li. “Consecutive Decoding for Speech-to-Text Translation.” In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2021. https://dqqcasia.github.io/projects/COSTT/.
Dong, Qianqian, Rong Ye, Mingxuan Wang, Hao Zhou, Shuang Xu, Bo Xu, and Lei Li. “Listen, Understand and Translate: Triple Supervision Decouples End-to-End Speech-to-Text Translation.” In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2021. https://dqqcasia.github.io/projects/LUT/.
Edunov, Sergey, Myle Ott, Michael Auli, and David Grangier. “Understanding Back-Translation at Scale.” In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 489–500. Brussels, Belgium: Association for Computational Linguistics, 2018. https://doi.org/10.18653/v1/D18-1045.
Fang, Jiarui, Yang Yu, Chengduo Zhao, and Jie Zhou. “TurboTransformers: An Efficient GPU Serving System For Transformer Models.” ArXiv:2010.05680 [Cs], February 20, 2021. http://arxiv.org/abs/2010.05680.
Freitag, Markus, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. “Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation.” ArXiv:2104.14478 [Cs], April 29, 2021. http://arxiv.org/abs/2104.14478.
Ghazvininejad, Marjan, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. “Mask-Predict: Parallel Decoding of Conditional Masked Language Models.” In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 6112–21. Hong Kong, China: Association for Computational Linguistics, 2019. https://doi.org/10.18653/v1/D19-1633.
Han, Chi, Mingxuan Wang, Heng Ji, and Lei Li. “Learning Shared Semantic Space for Speech-to-Text Translation.” In The 59th Annual Meeting of the Association for Computational Linguistics (ACL) - Findings, 2021.
Jiang, Qingnan, Mingxuan Wang, Jun Cao, Shanbo Cheng, Shujian Huang, and Lei Li. “Learning Kernel-Smoothed Machine Translation with Retrieved Examples.” In The Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
Kudo, Taku, and John Richardson. “SentencePiece: A Simple and Language Independent Subword Tokenizer and Detokenizer for Neural Text Processing.” In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 66–71. Brussels, Belgium: Association for Computational Linguistics, 2018. https://doi.org/10.18653/v1/D18-2012.
Lample, Guillaume, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. “Unsupervised Machine Translation Using Monolingual Corpora Only.” In International Conference on Learning Representations (ICLR), 2018. https://openreview.net/forum?id=rkYTTf-AZ.
Lee, Ann, Michael Auli, and Marc’Aurelio Ranzato. “Discriminative Reranking for Neural Machine Translation.” In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 7250–64. Online: Association for Computational Linguistics, 2021. https://doi.org/10.18653/v1/2021.acl-long.563.
Li, Xian, Changhan Wang, Yun Tang, Chau Tran, Yuqing Tang, Juan Pino, Alexei Baevski, Alexis Conneau, and Michael Auli. “Multilingual Speech Translation from Efficient Finetuning of Pretrained Models.” In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 827–38. Online: Association for Computational Linguistics, 2021. https://doi.org/10.18653/v1/2021.acl-long.68.
Liang, Jianze, Chengqi Zhao, Mingxuan Wang, Xipeng Qiu, and Lei Li. “Finding Sparse Structure for Domain Specific Neural Machine Translation.” In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2021. https://ohlionel.github.io/project/Prune-Tune/.
Lin, Zehui, Xiao Pan, Mingxuan Wang, Xipeng Qiu, Jiangtao Feng, Hao Zhou, and Lei Li. “Pre-Training Multilingual Neural Machine Translation by Leveraging Alignment Information.” In The Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
Lin, Zehui, Liwei Wu, Mingxuan Wang, and Lei Li. “Learning Language Specific Sub-Network for Multilingual Machine Translation.” In The 59th Annual Meeting of the Association for Computational Linguistics (ACL), 2021.
Liu, Yinhan, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. “Multilingual Denoising Pre-Training for Neural Machine Translation.” Transactions of the Association for Computational Linguistics 8 (November 1, 2020): 726–42. https://doi.org/10.1162/tacl_a_00343.
Liu, Zihan, Genta Indra Winata, and Pascale Fung. “Continual Mixed-Language Pre-Training for Extremely Low-Resource Neural Machine Translation.” In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 2706–18. Online: Association for Computational Linguistics, 2021. https://doi.org/10.18653/v1/2021.findings-acl.239.
Long, Quanyu, Mingxuan Wang, and Lei Li. “Generative Imagination Elevates Machine Translation.” In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), 5738–48. Online: Association for Computational Linguistics, 2021.
Miao, Ning, Hao Zhou, Chengqi Zhao, Wenxian Shi, and Lei Li. “Kernelized Bayesian Softmax for Text Generation.” In The 33rd Conference on Neural Information Processing Systems (NeurIPS), 2019.
Müller, Mathias, and Rico Sennrich. “Understanding the Properties of Minimum Bayes Risk Decoding in Neural Machine Translation.” In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 259–72. Online: Association for Computational Linguistics, 2021. https://doi.org/10.18653/v1/2021.acl-long.22.
Pan, Xiao, Liwei Wu, Mingxuan Wang, and Lei Li. “Contrastive Learning for Many-to-Many Multilingual Neural Machine Translation.” In The 59th Annual Meeting of the Association for Computational Linguistics (ACL), 2021. https://medium.com/@panxiao1994/mrasp2-multilingual-nmt-advances-via-contrastive-learning-ac8c4c35d63.
Qian, Lihua, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Weinan Zhang, Yong Yu, and Lei Li. “Glancing Transformer for Non-Autoregressive Neural Machine Translation.” In The 59th Annual Meeting of the Association for Computational Linguistics (ACL), 2021.
Rei, Ricardo, Craig Stewart, Ana C. Farinha, and Alon Lavie. “COMET: A Neural Framework for MT Evaluation.” In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2685–2702. Online: Association for Computational Linguistics, 2020. https://doi.org/10.18653/v1/2020.emnlp-main.213.
Shah, Harshil, and David Barber. “Generative Neural Machine Translation.” In Advances in Neural Information Processing Systems, Vol. 31. Curran Associates, Inc., 2018. https://papers.nips.cc/paper/2018/hash/e4bb4c5173c2ce17fd8fcd40041c068f-Abstract.html.
Sun, Zewei, Mingxuan Wang, and Lei Li. “Multilingual Translation via Grafting Pre-Trained Language Models.” In The Conference on Empirical Methods in Natural Language Processing (EMNLP) - Findings, 2021.
Tang, Yun, Juan Pino, Xian Li, Changhan Wang, and Dmitriy Genzel. “Improving Speech Translation by Understanding and Learning from the Auxiliary Text Translation Task.” In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 4252–61. Online: Association for Computational Linguistics, 2021. https://doi.org/10.18653/v1/2021.acl-long.328.
Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. “Attention Is All You Need.” In Advances in Neural Information Processing Systems, Vol. 30. Curran Associates, Inc., 2017. https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html.
Wang, Mingxuan, Hongxiao Bai, Hai Zhao, and Lei Li. “Cross-Lingual Supervision Improves Unsupervised Neural Machine Translation.” In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers (NAACL-HLT), 89–96. Online: Association for Computational Linguistics, 2021.
Wang, Mingxuan, Jun Xie, Zhixing Tan, Jinsong Su, Deyi Xiong, and Lei Li. “Towards Linear Time Neural Machine Translation with Capsule Networks.” In The Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019.
Wang, Qiang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao. “Learning Deep Transformer Models for Machine Translation.” In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 1810–22. Florence, Italy: Association for Computational Linguistics, 2019. https://doi.org/10.18653/v1/P19-1176.
Wang, Tao, Chengqi Zhao, Mingxuan Wang, Lei Li, Hang Li, and Deyi Xiong. “Secoco: Self-Correcting Encoding for Neural Machine Translation.” In The Conference on Empirical Methods in Natural Language Processing (EMNLP) - Findings, 2021.
Wang, Xiaohui, Ying Xiong, Yang Wei, Mingxuan Wang, and Lei Li. “LightSeq: A High Performance Inference Library for Transformers.” In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers (NAACL-HLT), 113–20. Online: Association for Computational Linguistics, 2021.
Weng, Rongxiang, Hao Zhou, Shujian Huang, Yifan Xia, Lei Li, and Jiajun Chen. “Correct-and-Memorize: Learning to Translate from Interactive Revisions.” In The 28th International Joint Conference on Artificial Intelligence (IJCAI), 2019.
Xu, Chen, Bojie Hu, Yanyang Li, Yuhao Zhang, Shen Huang, Qi Ju, Tong Xiao, and Jingbo Zhu. “Stacked Acoustic-and-Textual Encoding: Integrating the Pre-Trained Models into Speech Translation Encoders.” In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2619–30. Online: Association for Computational Linguistics, 2021. https://doi.org/10.18653/v1/2021.acl-long.204.
Xu, Jingjing, Hao Zhou, Chun Gan, Zaixiang Zheng, and Lei Li. “Vocabulary Learning via Optimal Transport for Neural Machine Translation.” In The 59th Annual Meeting of the Association for Computational Linguistics (ACL), 2021. https://jingjing-nlp.github.io/volt-blog/.
Xu, Jitao, Josep Crego, and Jean Senellart. “Boosting Neural Machine Translation with Similar Translations.” In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 1580–90. Online: Association for Computational Linguistics, 2020. https://doi.org/10.18653/v1/2020.acl-main.144.
Yang, Jiacheng, Mingxuan Wang, Hao Zhou, Chengqi Zhao, Weinan Zhang, Yong Yu, and Lei Li. “Towards Making the Most of BERT in Neural Machine Translation.” In The 34th AAAI Conference on Artificial Intelligence (AAAI), 2020.
Ye, Rong, Mingxuan Wang, and Lei Li. “End-to-End Speech Translation via Cross-Modal Progressive Training.” In Proceedings of INTERSPEECH, 2021.
Zeng, Xingshan, Liangyou Li, and Qun Liu. “RealTranS: End-to-End Simultaneous Speech Translation with Convolutional Weighted-Shrinking Transformer.” In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 2461–74. Online: Association for Computational Linguistics, 2021. https://doi.org/10.18653/v1/2021.findings-acl.218.
Zhang, Biao, Ivan Titov, Barry Haddow, and Rico Sennrich. “Beyond Sentence-Level End-to-End Speech Translation: Context Helps.” In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2566–78. Online: Association for Computational Linguistics, 2021. https://doi.org/10.18653/v1/2021.acl-long.200.
Zhang, Tianyi, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. “BERTScore: Evaluating Text Generation with BERT.” In International Conference on Learning Representations (ICLR), 2020. https://openreview.net/forum?id=SkeHuCVFDr.
Zheng, Renjie, Junkun Chen, Mingbo Ma, and Liang Huang. “Fused Acoustic and Text Encoding for Multimodal Bilingual Pretraining and Speech Translation.” In Proceedings of the 38th International Conference on Machine Learning, 12736–46. PMLR, 2021. https://proceedings.mlr.press/v139/zheng21a.html.
Zheng, Zaixiang, Hao Zhou, Shujian Huang, Jiajun Chen, Jingjing Xu, and Lei Li. “Duplex Sequence-to-Sequence Learning for Reversible Machine Translation.” In The 35th Conference on Neural Information Processing Systems (NeurIPS), 2021.
Zheng, Zaixiang, Hao Zhou, Shujian Huang, Lei Li, Xinyu Dai, and Jiajun Chen. “Mirror Generative Models for Neural Machine Translation.” In International Conference on Learning Representations (ICLR), 2020. https://openreview.net/forum?id=HkxQRTNYPH.
Zhu, Yaoming, Jiangtao Feng, Chengqi Zhao, Mingxuan Wang, and Lei Li. “Counter-Interference Adapter for Multilingual Machine Translation.” In The Conference on Empirical Methods in Natural Language Processing (EMNLP) - Findings, 2021.