Explaining Neural Language Models from Internal Representations to Model Predictions
Gabriele Sarti, Alessio Miaschi
Natural Language Processing, Academic
Project, Lab Materials
Date: May 31, 2023
Event: Lab at AILC Lectures on Computational Linguistics 2023
Location: University of Pisa, Italy
Tags: Natural Language Processing, Interpretability, Sequence-to-sequence, Language Modeling, Feature Attribution, Probing Classifiers
Related
Post-hoc Interpretability for Neural Language Models
Advanced XAI Techniques and Inseq: An Interpretability Toolkit for Sequence Generation Models
Explaining Language Models with Inseq
Inseq: An Interpretability Toolkit for Sequence Generation Models