Post-hoc Interpretability for Neural Language Models
Gabriele Sarti
Categories: Natural Language Processing, Academic
Resources: Code · Project · Slides
Date: Jun 1, 2023
Event: Invited Talk at COSMO Seminars, AI-Lab UniTS
Location: University of Trieste, Italy
Tags: Natural Language Processing, Interpretability, Sequence-to-sequence, Language Modeling, Feature Attribution
Related:
- Explaining Neural Language Models from Internal Representations to Model Predictions
- Advanced XAI Techniques and Inseq: An Interpretability Toolkit for Sequence Generation Models
- Inseq: An Interpretability Toolkit for Sequence Generation Models
- Are Character-level Translations Worth the Wait? Comparing Character- and Subword-level Models for Machine Translation