Post-hoc Interpretability for Neural Language Models
Gabriele Sarti
Categories: Natural Language Processing, Academic
Materials: Code · Project · Slides
Date: May 23, 2023
Event: AILo Talk at RUG Bernoulli Institute
Location: University of Groningen, Groningen
Tags: Natural Language Processing, Interpretability, Sequence-to-sequence, Language Modeling, Feature Attribution
Related
Advanced XAI Techniques and Inseq: An Interpretability Toolkit for Sequence Generation Models
Inseq: An Interpretability Toolkit for Sequence Generation Models
Introducing Inseq: An Interpretability Toolkit for Sequence Generation Models
Towards User-centric Interpretability of Machine Translation Models
Towards User-centric Interpretability of NLP Models