Natural Language Processing

RAMP: Retrieval and Attribute-Marking Enhanced Prompting for Attribute-Controlled Translation

We introduce Retrieval and Attribute-Marking enhanced Prompting (RAMP) to perform attribute-controlled MT with multilingual LLMs.

Post-hoc Interpretability for Neural Language Models

In recent years, Transformer-based language models have achieved remarkable progress in most language generation and understanding tasks. However, the internal computations of these models are hard to interpret due to their highly nonlinear structure, hindering their use in mission-critical applications requiring trustworthiness and transparency guarantees. This presentation will introduce interpretability methods for tracing the predictions of language models back to their inputs and discuss how these can be used to gain insights into model biases and behaviors. Throughout the presentation, several concrete examples of language model attributions will be presented using the Inseq interpretability library.
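As a concrete flavor of such analyses, below is a minimal sketch using Inseq's public API to attribute a generation back to its input tokens. The model and prompt are illustrative choices, not taken from the presentation:

```python
import inseq

# Load a Hugging Face model together with a gradient-based attribution method
model = inseq.load_model("gpt2", "integrated_gradients")

# Generate a continuation and trace it back to the input tokens
out = model.attribute("The capital of France is")

# Visualize per-token attribution scores
out.show()
```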

Inseq: An Interpretability Toolkit for Sequence Generation Models

This talk introduces the Inseq toolkit for interpreting sequence generation models. Its usage is illustrated with examples of state-of-the-art interpretability approaches for language models, such as contrastive attribution, tuned lenses, and causal mediation analysis.
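For instance, contrastive attribution explains why a model prefers one continuation over an alternative. The sketch below assumes the contrastive attribution interface of recent Inseq versions (`attributed_fn="contrast_prob_diff"` with a `contrast_targets` argument); the model and sentences are illustrative:

```python
import inseq

model = inseq.load_model("gpt2", "saliency")

# Attribute the probability *difference* between the observed continuation
# and a contrastive alternative, highlighting inputs driving the choice
out = model.attribute(
    "Can you stop the dog from",
    "Can you stop the dog from barking",                   # observed target
    attributed_fn="contrast_prob_diff",                    # contrastive step function
    contrast_targets="Can you stop the dog from crying",   # alternative target
)
out.show()
```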

Advanced XAI Techniques and Inseq: An Interpretability Toolkit for Sequence Generation Models

This talk introduces the Inseq toolkit for interpreting sequence generation models. Its usage is illustrated with examples of state-of-the-art interpretability approaches for language models, such as contrastive attribution, tuned lenses, and causal mediation analysis.

Introducing Inseq: An Interpretability Toolkit for Sequence Generation Models

After motivating the usage of interpretability methods in NLP, this talk introduces the Inseq toolkit for interpreting sequence generation models. Its usage is illustrated through two case studies: gender bias in machine translation and locating factual knowledge within GPT-2 representations.
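As an illustration of the first case study, the sketch below attributes an English-to-Italian translation back to its source tokens. The model choice and example sentence are assumptions for illustration, not the materials used in the talk:

```python
import inseq

# Wrap an English-to-Italian MT model with a gradient attribution method
model = inseq.load_model("Helsinki-NLP/opus-mt-en-it", "input_x_gradient")

# Italian marks gender on professions, so attributions can reveal which
# source tokens the model relies on when choosing gendered target forms
out = model.attribute("The nurse said that the doctor was late.")
out.show()
```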

Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5 for Machine Translation

We analyze input contributions of character-level MT models and show how they modulate word- and character-level information.

Inseq: An Interpretability Toolkit for Sequence Generation Models

We present Inseq, a Python library to democratize access to interpretability analyses of sequence generation models.

Attributing Context Usage in Language Models

An interpretability framework to detect and attribute context usage in language models' generations.

Inseq: An Interpretability Toolkit for Sequence Generation Models

An open-source library to democratize access to model interpretability for sequence generation models.

Towards User-centric Interpretability of Machine Translation Models

With the astounding advances of artificial intelligence in recent years, interpretability research has emerged as a fundamental effort to ensure the development of robust and transparent AI systems aligned with human needs. This talk will focus on user-centric interpretability applications aimed at improving our understanding of machine translation systems, with the ultimate goal of making post-editing more productive and enjoyable.