Gabriele Sarti
Latest
Interpreting Context Usage in Generative Language Models
Interpretability for Language Models: Trends and Applications
From Insights to Impact: Actionable Interpretability for Neural Machine Translation
Interpreting and Understanding LLMs and Other Deep Learning Models
Interpretability for Language Models: Current Trends and Applications
Interpreting Context Usage in Generative Language Models
Unsupervised Word-level Quality Estimation for Machine Translation Through the Lens of Annotators (Dis)agreement
Interpreting Latent Features in Large Language Models
QE4PE: Word-level Quality Estimation for Human Post-Editing
Interpretability for Language Models: Current Trends and Applications
Interpreting Context Usage in Generative Language Models
QE4PE: Word-level Quality Estimation for Human Post-Editing
Opening the Black Box of Language Models: Risks and Opportunities
Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses
Interpretability for Language Models: Current Trends and Applications
Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses
Interpreting Context Usage in Generative Language Models with Inseq, PECoRe and MIRAGE
Multi-property Steering of Large Language Models with Dynamic Activation Composition
Interpreting Context Usage in Generative Language Models with Inseq and PECoRe
IT5: Text-to-text Pretraining for Italian Language Understanding and Generation
Quantifying the Plausibility of Context Reliance in Neural Machine Translation
A Primer on the Inner Workings of Transformer-based Language Models
Quantifying the Plausibility of Context Reliance in Neural Machine Translation
Post-hoc Interpretability for Generative Language Models: Explaining Context Usage in Transformers
Explaining Language Models with Inseq
Post-hoc Interpretability for Language Models
DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers
Quantifying the Plausibility of Context Reliance in Neural Machine Translation
Post-hoc Interpretability for NLG & Inseq: an Interpretability Toolkit for Sequence Generation Models
Post-hoc Interpretability for Neural Language Models
Explaining Neural Language Models from Internal Representations to Model Predictions
RAMP: Retrieval and Attribute-Marking Enhanced Prompting for Attribute-Controlled Translation
Post-hoc Interpretability for Neural Language Models
Inseq: An Interpretability Toolkit for Sequence Generation Models
Advanced XAI Techniques and Inseq: An Interpretability Toolkit for Sequence Generation Models
Introducing Inseq: An Interpretability Toolkit for Sequence Generation Models
Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5 for Machine Translation
Inseq: An Interpretability Toolkit for Sequence Generation Models
Attributing Context Usage in Language Models
Inseq: An Interpretability Toolkit for Sequence Generation Models
Towards User-centric Interpretability of Machine Translation Models
Probing Linguistic Knowledge in Italian Neural Language Models across Language Varieties
DivEMT: Neural Machine Translation Post-Editing Effort Across Typologically Diverse Languages
Towards User-centric Interpretability of NLP Models
Empowering Human Translators via Interpretable Interactive Neural Machine Translation
Characterizing Linguistic Complexity in Humans and Language Models
Contrastive Language-Image Pre-training for the Italian Language
Contrastive Image-Text Pretraining for Italian
Teaching NLP with Bracelets and Restaurant Menus: An Interactive Workshop for Italian Students
That Looks Hard: Characterizing Linguistic Complexity in Humans and Language Models
Interpreting Neural Language Models for Linguistic Complexity Assessment
UmBERTo-MTSA @ AcCompl-It: Improving Complexity and Acceptability Prediction with Multi-task Learning on Self-Supervised Annotations
ETC-NLG: End-to-end Topic-Conditioned Natural Language Generation
ICLR 2020 Trends: Better & Faster Transformers for Natural Language Processing
Covid-19 Semantic Browser
Neural Language Models: the New Frontier of Natural Language Understanding
The Literary Ordnance: When the Writer is an AI
AItalo Svevo: Letters from an Artificial Intelligence
Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Histopathologic Cancer Detection with Neural Networks