Recent & Upcoming Talks | Gabriele Sarti

2024

Interpretability for Language Models: Current Trends and Applications

In this presentation, I will provide an overview of the interpretability research landscape and describe various promising methods for …

Interpreting Context Usage in Generative Language Models with Inseq, PECoRe and MIRAGE

This presentation focuses on applying post-hoc interpretability techniques to analyze how language models (LMs) use input information …

Interpreting Context Usage in Generative Language Models with Inseq and PECoRe

This talk discusses the challenges and opportunities in conducting interpretability analyses of generative language models. We begin by …

Quantifying the Plausibility of Context Reliance in Neural Machine Translation

This talk presents the PECoRe framework for quantifying the plausibility of context reliance in neural machine translation. The …

Post-hoc Interpretability for Generative Language Models: Explaining Context Usage in Transformers

This talk discusses the challenges of interpreting generative language models and presents Inseq, a toolkit for interpreting sequence …

2023

Explaining Language Models with Inseq

In recent years, Transformer-based language models have achieved remarkable progress in most language generation and understanding …

Post-hoc Interpretability for Language Models

This talk discusses the challenges of interpreting generative language models and presents Inseq, a toolkit for interpreting sequence …

Post-hoc Interpretability for NLG & Inseq: an Interpretability Toolkit for Sequence Generation Models

In recent years, Transformer-based language models have achieved remarkable progress in most language generation and understanding …

Post-hoc Interpretability for Neural Language Models

In recent years, Transformer-based language models have achieved remarkable progress in most language generation and understanding …

Explaining Neural Language Models from Internal Representations to Model Predictions

As language models become increasingly complex and sophisticated, the processes leading to their predictions are growing increasingly …

Post-hoc Interpretability for Neural Language Models

In recent years, Transformer-based language models have achieved remarkable progress in most language generation and understanding …

Inseq: An Interpretability Toolkit for Sequence Generation Models

This talk introduces the Inseq toolkit for interpreting sequence generation models. The usage of Inseq is illustrated with examples …

Advanced XAI Techniques and Inseq: An Interpretability Toolkit for Sequence Generation Models

This talk introduces the Inseq toolkit for interpreting sequence generation models. The usage of Inseq is illustrated with examples …

Introducing Inseq: An Interpretability Toolkit for Sequence Generation Models

After motivating the usage of interpretability methods in NLP, this talk introduces the Inseq toolkit for interpreting sequence …

2022

Towards User-centric Interpretability of Machine Translation Models

With the astounding advances of artificial intelligence in recent years, interpretability research has emerged as a fundamental effort …

Towards User-centric Interpretability of NLP Models

With the astounding advances of artificial intelligence in recent years, the field of interpretability research has emerged as a …

2021

Empowering Human Translators via Interpretable Interactive Neural Machine Translation

Discussing the potential applications of interpretability research to the field of neural machine translation.

Characterizing Linguistic Complexity in Humans and Language Models

Presenting my work on studying different metrics of linguistic complexity and how they correlate with linguistic phenomena and learned …

2019

Neural Language Models: the New Frontier of Natural Language Understanding

An overview of the latest advances in the field of NLP, with a focus on neural models and language understanding.

The Literary Ordnance: When the Writer is an AI

Discussing the applications of AI and NLP in the fields of literature and digital humanities.

Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

Is it possible to induce sparsity in neural networks while preserving their performance? An overview of the latest advances in making …