Gabriele Sarti

PhD Candidate in Natural Language Processing

CLCG, University of Groningen

About me

Welcome to my website! 👋 I am a PhD student at the Computational Linguistics Group of the University of Groningen and a member of the InDeep consortium, working on user-centric interpretability for neural machine translation. I am also the main developer of the Inseq library. My supervisors are Arianna Bisazza, Malvina Nissim and Grzegorz Chrupała.

Previously, I was a research intern at Amazon Translate NYC, a research scientist at Aindo, a Data Science MSc student at the University of Trieste and a co-founder of the AI Student Society.

My research focuses on interpretability for generative language models, with a particular interest in end-user benefits and the use of human behavioral signals. I am also interested in causality and open-source collaboration.

Your (anonymous) feedback is always welcome! 🙂

Interests

  • Conditional Text Generation
  • Interpretability for Deep Learning
  • Behavioral Data for NLP
  • Causality and Uncertainty Estimation

Education

Experience

🗞️ News

 

Selected Publications

 

Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses

We evaluate the rebus-solving capabilities of large language models on a new Italian dataset.

Multi-property Steering of Large Language Models with Dynamic Activation Composition

We propose Dynamic Activation Composition, an adaptive approach for multi-property activation steering of LLMs.

Model Internals-based Answer Attribution for Trustworthy Retrieval-Augmented Generation

MIRAGE uses model internals for faithful answer attribution in retrieval-augmented generation applications.

A Primer on the Inner Workings of Transformer-based Language Models

This primer provides a concise technical introduction to the current techniques used to interpret the inner workings of Transformer-based language models.

Quantifying the Plausibility of Context Reliance in Neural Machine Translation

We introduce PECoRe, an interpretability framework for identifying context dependence in language model generations.

RAMP: Retrieval and Attribute-Marking Enhanced Prompting for Attribute-Controlled Translation

We introduce Retrieval and Attribute-Marking enhanced Prompting (RAMP) to perform attribute-controlled MT with multilingual LLMs.

Blog posts

 

ICLR 2020 Trends: Better & Faster Transformers for Natural Language Processing

A summary of promising directions from ICLR 2020 for better and faster pretrained Transformer language models.

Recent & Upcoming Talks

Interpreting Context Usage in Generative Language Models with Inseq, PECoRe and MIRAGE
Interpreting Context Usage in Generative Language Models with Inseq and PECoRe
Quantifying the Plausibility of Context Reliance in Neural Machine Translation

Projects

 

Attributing Context Usage in Language Models

An interpretability framework to detect and attribute context usage in language models’ generations.

Inseq: An Interpretability Toolkit for Sequence Generation Models

An open-source library to democratize access to model interpretability for sequence generation models.
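As a quick illustration, here is a minimal Inseq usage sketch; the model, attribution method, and prompt are illustrative choices, assuming a recent Inseq release and a Hugging Face checkpoint available locally:

```python
import inseq

# Load a Hugging Face model together with an attribution method
# ("gpt2" and "integrated_gradients" are illustrative choices)
model = inseq.load_model("gpt2", "integrated_gradients")

# Attribute a short generation and display token-level importance scores
out = model.attribute("Hello world, my name is")
out.show()
```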

Contrastive Image-Text Pretraining for Italian

The first CLIP model pretrained on the Italian language.

Covid-19 Semantic Browser

A semantic browser for SARS-CoV-2 and COVID-19 powered by neural language models.

AItalo Svevo: Letters from an Artificial Intelligence

Generating letters in the style of Italo Svevo, a famous Italian writer of the 20th century, with a neural language model.

Histopathologic Cancer Detection with Neural Networks

A journey into the state of the art of histopathologic cancer detection approaches.