Recent & Upcoming Talks

2024

Quantifying the Plausibility of Context Reliance in Neural Machine Translation

This talk presents the PECoRe framework for quantifying the plausibility of context reliance in neural machine translation. The …

Post-hoc Interpretability for Generative Language Models: Explaining Context Usage in Transformers

This talk discusses the challenges of interpreting generative language models and presents Inseq, a toolkit for interpreting sequence …

2023

Explaining Language Models with Inseq

In recent years, Transformer-based language models have achieved remarkable progress in most language generation and understanding …

Post-hoc Interpretability for Language Models

This talk discusses the challenges of interpreting generative language models and presents Inseq, a toolkit for interpreting sequence …

Post-hoc Interpretability for NLG & Inseq: an Interpretability Toolkit for Sequence Generation Models

In recent years, Transformer-based language models have achieved remarkable progress in most language generation and understanding …

Post-hoc Interpretability for Neural Language Models

In recent years, Transformer-based language models have achieved remarkable progress in most language generation and understanding …

Explaining Neural Language Models from Internal Representations to Model Predictions

As language models become increasingly complex and sophisticated, the processes leading to their predictions are growing increasingly …

Post-hoc Interpretability for Neural Language Models

In recent years, Transformer-based language models have achieved remarkable progress in most language generation and understanding …

Inseq: An Interpretability Toolkit for Sequence Generation Models

This talk introduces the Inseq toolkit for interpreting sequence generation models. The usage of Inseq is illustrated with examples …
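As a taste of the kind of usage examples the talk illustrates, here is a minimal sketch of Inseq's core attribute-and-visualize workflow (the model name and attribution method below are arbitrary illustrative choices, not necessarily the ones shown in the talk):

```python
import inseq

# Load a Hugging Face model wrapped with an attribution method
# (integrated gradients here; Inseq supports several alternatives).
model = inseq.load_model("Helsinki-NLP/opus-mt-en-fr", "integrated_gradients")

# Attribute a generation: scores estimate how much each input token
# contributed to each generated output token.
out = model.attribute("Hello everyone, welcome to the talk!")

# Visualize the resulting attribution map in the console or a notebook.
out.show()
```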

Advanced XAI Techniques and Inseq: An Interpretability Toolkit for Sequence Generation Models

This talk introduces the Inseq toolkit for interpreting sequence generation models. The usage of Inseq is illustrated with examples …

Introducing Inseq: An Interpretability Toolkit for Sequence Generation Models

After motivating the use of interpretability methods in NLP, this talk introduces the Inseq toolkit for interpreting sequence …

2022

Towards User-centric Interpretability of Machine Translation Models

With the astounding advances of artificial intelligence in recent years, interpretability research has emerged as a fundamental effort …

Towards User-centric Interpretability of NLP Models

With the astounding advances of artificial intelligence in recent years, the field of interpretability research has emerged as a …

2021

Empowering Human Translators via Interpretable Interactive Neural Machine Translation

Discussing the potential applications of interpretability research to the field of neural machine translation.

Characterizing Linguistic Complexity in Humans and Language Models

Presenting my work studying different metrics of linguistic complexity and how they correlate with linguistic phenomena and learned …

2019

Neural Language Models: The New Frontier of Natural Language Understanding

An overview of the latest advances in the field of NLP, with a focus on neural models and language understanding.

The Literary Ordnance: When the Writer Is an AI

Discussing the applications of AI and NLP in the fields of literature and digital humanities.

Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

Is it possible to induce sparsity in neural networks while preserving their performance? An overview of the latest advances in making …