
Interpretability

Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5 for Machine Translation

We analyze the input contributions of character-level MT models and show how they modulate word-level and character-level information.

Inseq: An Interpretability Toolkit for Sequence Generation Models

We present Inseq, a Python library to democratize access to interpretability analyses of sequence generation models.
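
As a flavor of what the library enables, here is a minimal usage sketch: loading a Hugging Face translation model, attributing one of its generations, and visualizing the result. The checkpoint and attribution method shown are illustrative choices, not prescriptions from the paper.

```python
# Minimal sketch of attributing a model generation with Inseq.
# The model checkpoint and attribution method are illustrative choices.
import inseq

# Wrap a Hugging Face seq2seq model with an attribution method
model = inseq.load_model("Helsinki-NLP/opus-mt-en-it", "integrated_gradients")

# Generate a translation and attribute it to the source tokens
out = model.attribute("Hello everyone, welcome to the interpretability workshop!")

# Visualize the source-to-target attribution scores
out.show()
```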

Attributing Context Usage in Language Models

An interpretability framework to detect and attribute context usage in language models' generations

Inseq: An Interpretability Toolkit for Sequence Generation Models

An open-source library to democratize access to model interpretability for sequence generation models

Towards User-centric Interpretability of Machine Translation Models

With the astounding advances of artificial intelligence in recent years, interpretability research has emerged as a fundamental effort to ensure the development of robust and transparent AI systems aligned with human needs. This talk will focus on user-centric interpretability applications aimed at improving our understanding of machine translation systems, with the ultimate goal of improving post-editing productivity and enjoyability.

Probing Linguistic Knowledge in Italian Neural Language Models across Language Varieties

We investigate whether and how different probing model architectures affect the performance of Italian transformers in encoding a wide spectrum of linguistic features.
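
To make the probing setup concrete, the sketch below trains a simple linear probe on frozen sentence representations to predict a linguistic property. The Italian checkpoint, the toy sentences, and the binary feature are illustrative assumptions, not the exact setup of the paper.

```python
# Sketch of a probing experiment: fit a linear classifier on frozen
# sentence representations to test whether a linguistic feature is encoded.
# Model name, sentences, and labels are illustrative, not from the paper.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-cased")

sentences = ["Il gatto dorme.", "I gatti dormono sul divano rosso."]
labels = [0, 1]  # hypothetical binary linguistic property of each sentence

with torch.no_grad():
    enc = tokenizer(sentences, padding=True, return_tensors="pt")
    hidden = model(**enc).last_hidden_state  # (batch, seq_len, dim)
    # Mean-pool token representations into one vector per sentence
    mask = enc["attention_mask"].unsqueeze(-1)
    reprs = (hidden * mask).sum(1) / mask.sum(1)

# Linear probe: high accuracy suggests the feature is linearly decodable
probe = LogisticRegression(max_iter=1000).fit(reprs.numpy(), labels)
print(probe.score(reprs.numpy(), labels))
```

Comparing such a linear probe against more expressive probe architectures (e.g., a small MLP) on the same representations is what the study refers to as varying the probing model architecture.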

Towards User-centric Interpretability of NLP Models

With the astounding advances of artificial intelligence in recent years, the field of interpretability research has emerged as a fundamental effort to ensure the development of robust AI systems aligned with human values. In this talk, two perspectives on AI interpretability will be presented alongside two case studies in natural language processing. The first study leverages behavioral data and probing tasks to study the perception and encoding of linguistic complexity in humans and language models. The second introduces a user-centric interpretability perspective for neural machine translation to improve post-editing productivity and enjoyability. The need for such application-driven approaches will be emphasized in light of current challenges in faithfully evaluating advances in this field of study.

Empowering Human Translators via Interpretable Interactive Neural Machine Translation

Discussing the potential applications of interpretability research to the field of neural machine translation.

Characterizing Linguistic Complexity in Humans and Language Models

Presenting my work on different metrics of linguistic complexity and how they correlate with linguistic phenomena and with the representations learned by neural language models.

That Looks Hard: Characterizing Linguistic Complexity in Humans and Language Models

This paper investigates the relationship between two complementary perspectives on human assessments of sentence complexity and how they are modeled by a neural language model (NLM), highlighting how the linguistic information encoded in the model's representations changes as it learns to predict complexity.