Inseq: An Interpretability Toolkit for Sequence Generation Models
Inseq is a hackable PyTorch-based toolkit to democratize the study of interpretability for sequence generation models. Inseq supports a wide set of models from the 🤗 Transformers library and an ever-growing set of feature attribution methods, leveraging in part the widely-used Captum library. For a quick introduction to common use cases, see the Getting started with Inseq page.
Using Inseq, feature attribution maps can be saved, reloaded, aggregated, and visualized either as HTML (with Jupyter notebook support) or directly in the console using rich. Besides simple attribution, Inseq also supports features like step score extraction, attribution aggregation, and attributed function customization for more advanced use cases.
Related
- Attributing Context Usage in Language Models
- Probing Linguistic Knowledge in Italian Neural Language Models across Language Varieties
- Characterizing Linguistic Complexity in Humans and Language Models
- Contrastive Language-Image Pre-training for the Italian Language
- Contrastive Image-Text Pretraining for Italian
Publications
Inseq: An Interpretability Toolkit for Sequence Generation Models
We present Inseq, a Python library to democratize access to interpretability analyses of sequence generation models.
Published in: ACL Demo 2023
Talks
Interpretability for Language Models: Current Trends and Applications
In this presentation, I will provide an overview of the interpretability research landscape and describe various promising methods for …
Interpreting Context Usage in Generative Language Models with Inseq, PECoRe and MIRAGE
This presentation focuses on applying post-hoc interpretability techniques to analyze how language models (LMs) use input information …
Interpreting Context Usage in Generative Language Models with Inseq and PECoRe
This talk discusses the challenges and opportunities in conducting interpretability analyses of generative language models. We begin by …
May 20, 2024
Politecnico di Torino, Piedmont, Italy
Politecnico di Torino Invited Talk
Post-hoc Interpretability for Generative Language Models: Explaining Context Usage in Transformers
This talk discusses the challenges of interpreting generative language models and presents Inseq, a toolkit for interpreting sequence …
Explaining Language Models with Inseq
In recent years, Transformer-based language models have achieved remarkable progress in most language generation and understanding …
Nov 2, 2023
University of Amsterdam, Amsterdam
InDeep Masterclass - Explaining Foundation Models
Post-hoc Interpretability for Language Models
This talk discusses the challenges of interpreting generative language models and presents Inseq, a toolkit for interpreting sequence …
Post-hoc Interpretability for NLG & Inseq: an Interpretability Toolkit for Sequence Generation Models
In recent years, Transformer-based language models have achieved remarkable progress in most language generation and understanding …
Post-hoc Interpretability for Neural Language Models
In recent years, Transformer-based language models have achieved remarkable progress in most language generation and understanding …
Jun 1, 2023
University of Trieste, Italy
Invited Talk at COSMO Seminars, AI-Lab UniTS
Explaining Neural Language Models from Internal Representations to Model Predictions
As language models become increasingly complex and sophisticated, the processes leading to their predictions are growing increasingly …
May 31, 2023
University of Pisa, Italy
Lab at AILC Lectures on Computational Linguistics 2023
Post-hoc Interpretability for Neural Language Models
In recent years, Transformer-based language models have achieved remarkable progress in most language generation and understanding …
Inseq: An Interpretability Toolkit for Sequence Generation Models
This talk introduces the Inseq toolkit for interpreting sequence generation models. The usage of Inseq is illustrated with examples …
Advanced XAI Techniques and Inseq: An Interpretability Toolkit for Sequence Generation Models
This talk introduces the Inseq toolkit for interpreting sequence generation models. The usage of Inseq is illustrated with examples …
Introducing Inseq: An Interpretability Toolkit for Sequence Generation Models
After motivating the usage of interpretability methods in NLP, this talk introduces the Inseq toolkit for interpreting sequence …