Academic

Inseq: An Interpretability Toolkit for Sequence Generation Models

This talk introduces the Inseq toolkit for interpreting sequence generation models. The usage of Inseq is illustrated with examples introducing state-of-the-art approaches for interpreting language models such as contrastive attribution, tuned lenses and causal mediation analysis.
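
As a hedged illustration of the workflow described above, the sketch below loads a Hugging Face model with an attribution method attached, attributes a generation, and visualizes the result. The checkpoint and the choice of integrated gradients are illustrative assumptions, not the specific setup used in the talk.

```python
import inseq

# Load GPT-2 with an attribution method attached (illustrative choices).
model = inseq.load_model("gpt2", "integrated_gradients")

# Generate a continuation and attribute it to the input tokens.
out = model.attribute("The developer argued with the designer because")

# Render the attribution heatmap.
out.show()
```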

Advanced XAI Techniques and Inseq: An Interpretability Toolkit for Sequence Generation Models

This talk introduces the Inseq toolkit for interpreting sequence generation models. The usage of Inseq is illustrated with examples introducing state-of-the-art approaches for interpreting language models such as contrastive attribution, tuned lenses and causal mediation analysis.
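
Since contrastive attribution is among the approaches covered, here is a hedged sketch of how such an analysis might look in Inseq: attributing the probability difference between two candidate continuations. The parameter names (attributed_fn, contrast_targets) follow my reading of the Inseq documentation and may differ across versions; the example sentences are illustrative.

```python
import inseq

# Load GPT-2 with a gradient-based attribution method (illustrative choice).
model = inseq.load_model("gpt2", "input_x_gradient")

# Attribute the probability difference between "he" and "she" as
# continuations, asking which input tokens drive the contrast.
out = model.attribute(
    "The manager told the employee that",
    "The manager told the employee that he",
    attributed_fn="contrast_prob_diff",
    contrast_targets="The manager told the employee that she",
)
out.show()
```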

Introducing Inseq: An Interpretability Toolkit for Sequence Generation Models

After motivating the use of interpretability methods in NLP, this talk introduces the Inseq toolkit for interpreting sequence generation models. The usage of Inseq is illustrated through two case studies: gender bias in machine translation and locating factual knowledge within GPT-2 representations.
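
For the machine translation case study, a minimal sketch along these lines is shown below; the opus-mt checkpoint and the example sentence are assumptions chosen to surface gendered agreement in the target language, not the exact materials from the talk.

```python
import inseq

# Load an English-to-Italian translation model with attribution support
# (illustrative checkpoint; Italian marks grammatical gender on "nurse").
model = inseq.load_model("Helsinki-NLP/opus-mt-en-it", "integrated_gradients")

# Attribute the translation to see which source tokens drive the
# gendered form chosen in the output.
out = model.attribute("The nurse said that she would arrive late.")
out.show()
```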

Towards User-centric Interpretability of Machine Translation Models

With the rapid advances in artificial intelligence in recent years, interpretability research has emerged as a fundamental effort to ensure the development of robust, transparent AI systems aligned with human needs. This talk focuses on user-centric interpretability applications aimed at improving our understanding of machine translation systems, with the ultimate goal of making post-editing more productive and enjoyable.

Empowering Human Translators via Interpretable Interactive Neural Machine Translation

Discussing the potential applications of interpretability research to the field of neural machine translation.

Characterizing Linguistic Complexity in Humans and Language Models

Presenting my work on different metrics of linguistic complexity and how they correlate with linguistic phenomena and with the representations learned by neural language models.
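
One common complexity metric in this line of work is per-token surprisal under a language model. The sketch below computes it with GPT-2 via Hugging Face transformers; the metric choice and the example sentence are my own illustrative assumptions.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Load GPT-2 (standard Hugging Face identifiers).
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# A classic garden-path sentence, where surprisal is expected to spike.
sentence = "The horse raced past the barn fell."
ids = tokenizer(sentence, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits

# Each position predicts the next token: shift logits and targets accordingly.
log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
targets = ids[:, 1:]
# Surprisal = -log2 p(token | context), converted from nats to bits.
surprisal = -log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1) / torch.log(torch.tensor(2.0))

for tok, s in zip(tokenizer.convert_ids_to_tokens(targets[0].tolist()), surprisal[0]):
    print(f"{tok:>12s}  {s.item():.2f} bits")
```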

Neural Language Models: the New Frontier of Natural Language Understanding

An overview of the latest advances in the field of NLP, with a focus on neural models and language understanding.

Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

Is it possible to induce sparsity in neural networks while preserving their performance? An overview of the latest advances in making neural approaches more parsimonious.
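
As a hedged sketch of the core procedure behind the lottery ticket hypothesis (Frankle & Carbin, 2019), the toy loop below performs iterative magnitude pruning with weight rewinding in PyTorch. The architecture, pruning rate, and the train() placeholder are illustrative assumptions.

```python
import copy
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy MLP; save the initial weights to rewind to after each pruning round.
model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
initial_state = copy.deepcopy(model.state_dict())

def train(model):
    # Hypothetical placeholder for a full training loop on real data.
    pass

for round_idx in range(3):
    train(model)
    # Remove the 20% smallest-magnitude remaining weights per linear layer.
    for module in model:
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.2)
    # Rewind surviving weights to their initial values; pruning masks persist.
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            module.weight_orig.data.copy_(initial_state[f"{name}.weight"])
```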