The 15th China Conference on Machine Translation
Live stream: September 27-29
(No replay will be available)
The 15th China Conference on Machine Translation (CCMT 2019) will be held in Nanchang, Jiangxi, from September 27 to 29, 2019. The conference is hosted by the Chinese Information Processing Society of China and organized by Jiangxi Normal University.
CCMT aims to provide a platform for machine translation researchers at home and abroad to strengthen academic exchange, bringing together experts and scholars for in-depth discussion of key issues in machine translation, including theories and methods, applied technologies, and evaluation campaigns, thereby actively promoting the development of machine translation in China. The conference has been held successfully fourteen times (formerly as the China Workshop on Machine Translation, CWMT), during which eight machine translation evaluation campaigns, one open-source system module development effort (2006), and two strategic workshops (2010, 2012) were organized. These activities have had a positive and far-reaching impact on the research and development of machine translation technology in China, making CCMT one of the most influential academic events in the Chinese natural language processing community.
In addition to academic paper presentations, the conference will feature invited talks by well-known experts from home and abroad, tutorials for students and young scholars, panel discussions with experts from academia and industry, and system demonstrations for researchers and users, engaging attendees with the hottest research topics and the frontiers of machine translation through a rich variety of formats. CCMT 2019 also continues to organize machine translation evaluation campaigns, covering bilingual translation (Chinese-English, English-Chinese, Uyghur-Chinese, Tibetan-Chinese, and Mongolian-Chinese), multilingual translation (Chinese, Japanese, English), speech translation (Chinese-English), and automatic translation quality estimation (Chinese-English, English-Chinese). Academic exchange and special discussions on the evaluation results will also take place at the conference.
The conference will last three days. Universities, research institutes, and IT companies are warmly welcome to participate. May colleagues in machine translation join hands and contribute to its research together, and may academia and industry enjoy ever closer exchange and cooperation!
Conference Program
2019/9/27
The 18th session of the Advanced Technology Tutorial (ATT) of the Chinese Information Processing Society of China
09:00-11:00 Tutorial 1 Speech Translation
14:00-16:00 Tutorial 2 Domain Adaptation for Neural Machine Translation
2019/9/28
08:30-09:00 Opening ceremony
09:00-10:00 Keynote 1: Multimodal natural language processing: when text is not enough
10:00-11:00 Coffee break (poster session 1)
11:00-12:00 Paper session 1
- Improving Bilingual Lexicon Induction on Distant Language Pairs (ID: 56)
- Improving Quality Estimation of Machine Translation by Using Pre-trained Language Representation (ID: 65)
- Neural Machine Translation with Attention Based on A New Syntactic Branch Distance (ID: 76)
- Research on Neural Machine Translation with Document-Level Context (ID: 47)
14:00-15:00 Evaluation reports and discussion
- CCMT 2019 evaluation report
- The University of Science and Technology of China machine translation system for CCMT 2019
- Tencent Minority-Mandarin Translation System (ID: 43)
- Neural machine translation with data augmentation and domain adaptation (ID: 58)
- The OPPO machine translation system (ID: 63)
- Xiamen University evaluation system report
- NiuTrans Submission for CCMT19 Quality Estimation Task (ID: 50)
15:15-16:00 Coffee break (evaluation technique exchange)
16:00-17:20 Panel 1: Data augmentation for machine translation
2019/9/29
09:00-10:00 特邀报告2 Neural Machine Translation with Monolingual Data
10:00-11:00 Coffee break (poster session 2)
11:00-12:00 Paper session 2
- Translation quality estimation based on multilingual pre-trained language models (ID: 71)
- Subword-based sentence-level quality estimation for neural machine translation (ID: 35)
- A hierarchical multi-feature fusion model for Uyghur-Chinese machine translation (ID: 42)
- Coarse-to-fine inference acceleration for neural machine translation (ID: 57)
14:00-15:20 Panel 2: Applications of machine translation technology
15:20-15:40 Coffee break
15:40-16:50 Panel 3: PhD training in machine translation
16:50-17:10 Closing ceremony
Keynote 1: Multimodal natural language processing: when text is not enough
Abstract: In this talk I will provide an overview of work on multimodal machine learning, where images are used to build richer context models for natural language tasks. Most of the talk will be focused on approaches to machine translation that exploit both textual and visual information to deal with complex linguistic ambiguities as well as common linguistic biases. I will cover state-of-the-art approaches and their limitations and describe studies on when and how images can be beneficial to the task.
Speaker bio: Lucia Specia is Professor of Natural Language Processing at Imperial College London and the University of Sheffield. Her research focuses on various aspects of data-driven approaches to language processing, with a particular interest in multimodal and multilingual context models and work at the intersection of language and vision. Her work can be applied to various tasks such as machine translation, image captioning, quality estimation and text adaptation. She is the recipient of the MultiMT ERC Starting Grant on Multimodal Machine Translation (2016-2021) and is currently involved in other funded research projects on machine translation, multilingual video captioning and text adaptation. In the past she worked as Senior Lecturer at the University of Wolverhampton (2010-2011), and research engineer at the Xerox Research Centre, France (2008-2009, now Naver Labs). She received a PhD in Computer Science from the University of São Paulo, Brazil, in 2008.
Keynote 2: Neural Machine Translation with Monolingual Data
Abstract: Powered by deep learning, Neural Machine Translation (NMT) has made great progress in the past five years. In addition to bilingual data, monolingual data also plays an important role in NMT. In this talk, we will introduce several of the latest techniques for using monolingual data in NMT: (1) dual learning, which helped us win 4 top places in the recent machine translation challenge organized by the Fourth Conference on Machine Translation (WMT19), leverages the structural duality of forward translation and back translation to learn from monolingual data; (2) MASS, which helped us win 2 top places in WMT19, is a pre-training method for sequence-to-sequence generation; and (3) BERT-fuse, a fine-tuning method that leverages the pre-trained BERT model in a carefully designed way to boost NMT.
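As a rough illustration of the back-translation idea behind learning from monolingual data, the sketch below pairs monolingual target sentences with synthetic sources produced by a reverse model. The `translate_back` function here is a hypothetical stub, not any of the systems described in the talk:

```python
# Sketch of back-translation style data augmentation: a stub target-to-source
# model turns monolingual target sentences into synthetic source sentences,
# producing extra "parallel" pairs for training a forward model.

def translate_back(target_sentence: str) -> str:
    """Stand-in for a target->source NMT model (hypothetical stub)."""
    # A real system would run beam search with a trained reverse model.
    return "<synthetic-src> " + target_sentence

def augment_with_back_translation(parallel, monolingual_target):
    """Append synthetic pairs built from monolingual target-side data."""
    synthetic = [(translate_back(t), t) for t in monolingual_target]
    return list(parallel) + synthetic

bitext = [("你好", "hello")]                       # genuine parallel data
mono = ["good morning", "good night"]             # monolingual target data
augmented = augment_with_back_translation(bitext, mono)
print(len(augmented))  # 3
```

Dual learning extends this idea by training the forward and reverse models jointly, letting each direction supervise the other.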
Speaker bio: Dr. Tao Qin is a Principal Researcher/Manager at Microsoft Research Asia, an adjunct professor and PhD advisor at the University of Science and Technology of China, and a senior member of IEEE and ACM. He received his bachelor's and PhD degrees from the Department of Electronic Engineering, Tsinghua University. His main research areas include machine learning and artificial intelligence (with a focus on algorithm design for deep learning and reinforcement learning and their applications to real-world problems), machine translation, web search and computational advertising, game theory, and multi-agent systems; he has published more than 100 papers in international conferences and journals. He has served as area chair for AAAI, SIGIR, AAMAS, and ACML, workshop chair for WWW 2020, industry forum chair for DAI 2019, program committee member for many international conferences, and co-chair of several international workshops. His team won eight first places in the 2019 international machine translation competition.
Tutorial Program
September 27, 2019
The 18th session of the Advanced Technology Tutorial (ATT) of the Chinese Information Processing Society of China
Tutorial 1 Speech Translation
Time: 2019/9/27 09:00-11:00
Abstract: We will start with an overview of the different use cases and difficulties of speech translation. Due to the wide range of possible applications, these systems differ in data, language difficulty, and spontaneous speech effects. Furthermore, interaction with humans has an important influence. In the main part of the tutorial, we will review state-of-the-art methods for building speech translation systems. We will start by reviewing the traditional approach to spoken language translation: a cascade of an automatic speech recognition system and a machine translation system. We will highlight the challenges of combining both systems; in particular, techniques to adapt the system to the scenario will be reviewed. With the success of neural models in both areas, we see rising research interest in end-to-end speech translation. While this approach shows promising results, international evaluation campaigns such as the shared task of the International Workshop on Spoken Language Translation (IWSLT) have shown that cascaded systems currently still often achieve better translation performance. We will highlight the main challenges of end-to-end speech translation. In the final part of the tutorial, we will review techniques that address key challenges of speech translation, e.g. latency, spontaneous speech effects, sentence segmentation, and stream decoding.
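The cascaded architecture described above can be sketched as a simple pipeline. All three components below are hypothetical stubs standing in for real models; the sketch only illustrates how ASR output must be segmented and punctuated before being handed to a text-based MT system:

```python
# Minimal sketch of a cascaded speech translation pipeline:
# audio -> ASR -> segmentation/punctuation -> MT, with stub components.

def asr(audio: bytes) -> str:
    """Stand-in for an automatic speech recognition model."""
    return "hello world how are you"  # typical ASR output: no punctuation/case

def segment_and_punctuate(transcript: str) -> list:
    """Stand-in for sentence segmentation, one key cascade challenge."""
    return ["Hello world.", "How are you?"]

def mt(sentence: str) -> str:
    """Stand-in for a text-based machine translation model."""
    return "<translated> " + sentence

def cascade_speech_translation(audio: bytes) -> list:
    """Run the full cascade: recognize, segment, then translate."""
    return [mt(s) for s in segment_and_punctuate(asr(audio))]

print(cascade_speech_translation(b"..."))
```

An end-to-end system would replace all three stages with a single model mapping audio directly to target-language text, avoiding error propagation between the components.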
Bio of Dr. Jan Niehues: Jan Niehues is an assistant professor at Maastricht University. He received his doctoral degree from the Karlsruhe Institute of Technology in 2014 on the topic of "Domain Adaptation in Machine Translation". He has conducted research at Carnegie Mellon University and LIMSI/CNRS, Paris. His research has covered different aspects of machine translation and spoken language translation. He has been involved in several international projects on spoken language translation, e.g. the German-French project Quaero, the H2020 EU project QT21, EU-Bridge, and ELITR. Currently, he is one of the main organizers of the spoken language track of the IWSLT shared task.
Tutorial 2 Domain Adaptation for Neural Machine Translation
Time: 2019/9/27 14:00-16:00
Abstract: Neural machine translation (NMT) is a deep learning based approach to machine translation that yields state-of-the-art translation performance in scenarios where large-scale parallel corpora are available. Although high-quality, domain-specific translation is crucial in the real world, domain-specific corpora are usually scarce or nonexistent, and thus vanilla NMT performs poorly in such scenarios. Domain adaptation, which leverages both out-of-domain parallel corpora and monolingual corpora for in-domain translation, is therefore very important for domain-specific translation. In this tutorial, we give a comprehensive review of state-of-the-art domain adaptation techniques for NMT. We hope that this tutorial will be both a starting point and a source of new ideas for researchers and engineers who are interested in domain adaptation for NMT.
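One common domain-adaptation recipe in this family is to mix a large out-of-domain corpus with an oversampled in-domain corpus and train (or continue training) on the mixture. The corpus names and the oversampling factor below are illustrative assumptions, not part of the tutorial:

```python
# Hypothetical sketch of data mixing for NMT domain adaptation: scarce
# in-domain sentence pairs are upweighted by replication so the training
# distribution does not drown them in out-of-domain data.

def mix_corpora(out_of_domain, in_domain, oversample=4):
    """Combine corpora of (source, target) pairs, replicating in-domain ones."""
    return list(out_of_domain) + list(in_domain) * oversample

general = [("src%d" % i, "tgt%d" % i) for i in range(1000)]  # out-of-domain
medical = [("med-src", "med-tgt")] * 10                      # scarce in-domain
mixed = mix_corpora(general, medical)
print(len(mixed))  # 1040
```

In practice the oversampling ratio is tuned on an in-domain development set, and replication is often replaced by sampling weights or continued training (fine-tuning) on the in-domain data alone.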
Bio of Dr. Chenhui Chu: Chenhui Chu received his B.S. in Software Engineering from Chongqing University in 2008, and his M.S. and Ph.D. in Informatics from Kyoto University in 2012 and 2015, respectively. He is currently a research assistant professor at Osaka University. His research won the MSRA Collaborative Research 2019 grant award, the 2018 AAMT Nagao Award, and the CICLing 2014 best student paper award. He is on the editorial boards of the Journal of Natural Language Processing and the Journal of Information Processing, and is a steering committee member of the Young Researcher Association for NLP Studies. His research interests center on natural language processing, particularly machine translation and language and vision understanding.
Bio of Dr. Rui Wang: Rui Wang is a tenure-track researcher at NICT. His research focuses on machine translation, a classic task in NLP (and in AI more broadly). He has published (as first or corresponding author) more than 20 MT papers in top-tier NLP conferences and journals such as ACL, EMNLP, COLING, AAAI, IJCAI, TASLP, and TALLIP. He has also won first places in several language pairs of the WMT shared tasks, such as the unsupervised Czech<->German task in 2019 and the supervised Finnish/Estonian<->English tasks in 2018. He served as area co-chair of CCL-2018/2019 and organization co-chair of PACLIC-29 and YCCL-2012.
Conference Committees
General Chair
黄河燕 (Beijing Institute of Technology)
Program Co-chairs
Kevin Knight (DiDi Labs)
黄书剑 (Nanjing University)
Evaluation Chair
杨沐昀 (Harbin Institute of Technology)
Organizing Chair
王明文 (Jiangxi Normal University)
Tutorial Co-chairs
陈博兴 (Alibaba)
段湘煜 (Soochow University)
Workshop Co-chairs
刘树杰 (Microsoft Research Asia)
冯洋 (Institute of Computing Technology, Chinese Academy of Sciences)
Publication Co-chairs
曹海龙 (Harbin Institute of Technology)
陈毅东 (Xiamen University)
Sponsorship Co-chairs
冯冲 (Beijing Institute of Technology)
肖桐 (Northeastern University)
Publicity Co-chairs
李茂西 (Jiangxi Normal University)
毛存礼 (Kunming University of Science and Technology)
Sponsors
Diamond: Kingsoft AI (金山AI)
Platinum: Global Tone Communication Technology Co., Ltd. (中译语通)
Gold: Sogou Translate, NiuTrans
Silver: Tencent Translator (腾讯翻译君)