Neural Machine Translation

Quantifying the Plausibility of Context Reliance in Neural Machine Translation

This talk presents the PECoRe framework for quantifying the plausibility of context reliance in neural machine translation. The framework is applied to a case study of how context affects the translation of gendered pronouns and other contextual phenomena in English-to-French translation. Finally, an online demo that lets users try PECoRe with any generative language model is presented.
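
As a rough intuition for the context reliance PECoRe quantifies, the sketch below (not the PECoRe implementation itself) translates a sentence with and without its preceding context and compares the outputs; the model name and example sentences are illustrative assumptions, and the actual behavior depends on the chosen model.

```python
# Minimal sketch of the with/without-context contrast that PECoRe builds on.
# This is NOT the PECoRe procedure; model name and sentences are assumptions.
from transformers import pipeline

# Generic Hugging Face translation pipeline with an English-to-French model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

context = "I bought a new car yesterday."  # disambiguates the referent of "it"
current = "It is red."                     # French requires a gendered pronoun here

# Translate the current sentence alone and with the context prepended.
without_ctx = translator(current)[0]["translation_text"]
with_ctx = translator(f"{context} {current}")[0]["translation_text"]

# Output tokens that change when context is added signal context reliance;
# PECoRe then attributes such tokens back to the specific context cues.
print("Without context:", without_ctx)
print("With context:   ", with_ctx)
```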

Introducing Inseq: An Interpretability Toolkit for Sequence Generation Models

After motivating the use of interpretability methods in NLP, this talk introduces the Inseq toolkit for interpreting sequence generation models. The usage of Inseq is illustrated through two case studies: gender bias in machine translation and locating factual knowledge within GPT-2 representations.
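
For reference, a minimal sketch of the kind of usage the talk illustrates, assuming Inseq is installed (`pip install inseq`); the model, attribution method, and example sentence are illustrative choices, not necessarily those from the talk.

```python
# Minimal Inseq usage sketch: attribute a translation model's output with a
# gradient-based method. Model name and sentence are illustrative assumptions.
import inseq

# Wrap a Hugging Face translation model with a saliency attribution method.
model = inseq.load_model("Helsinki-NLP/opus-mt-en-fr", "saliency")

# Generate a translation and attribute it back to the source tokens,
# e.g. to inspect which words drive the gender of the translated pronoun.
out = model.attribute("The developer argued with the designer because she did not like the design.")

# Display token-level source-target importance scores.
out.show()
```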

Towards User-centric Interpretability of Machine Translation Models

With the astounding advances in artificial intelligence in recent years, interpretability research has emerged as a fundamental effort to ensure the development of robust and transparent AI systems aligned with human needs. This talk will focus on user-centric interpretability applications aimed at improving our understanding of machine translation systems, with the ultimate goal of making post-editing more productive and enjoyable.

Towards User-centric Interpretability of NLP Models

With the astounding advances in artificial intelligence in recent years, the field of interpretability research has emerged as a fundamental effort to ensure the development of robust AI systems aligned with human values. In this talk, two perspectives on AI interpretability will be presented alongside two case studies in natural language processing. The first study leverages behavioral data and probing tasks to examine how linguistic complexity is perceived by humans and encoded in language models. The second introduces a user-centric interpretability perspective for neural machine translation, aimed at improving post-editing productivity and enjoyability. The need for such application-driven approaches will be emphasized in light of current challenges in faithfully evaluating advances in this field.

Empowering Human Translators via Interpretable Interactive Neural Machine Translation

This talk discusses potential applications of interpretability research to the field of neural machine translation.