Welcome to my website! 👋 I am a PhD student in the InCLoW team within the Natural Language Processing group (GroNLP 🐮) at the University of Groningen. I’m also a member of the InDeep consortium, working on user-centric interpretability for generative language models. My supervisors are Arianna Bisazza, Malvina Nissim and Grzegorz Chrupała.
Previously, I was an applied scientist intern at Amazon Translate NYC, a research scientist at Aindo, and a Data Science MSc student at the University of Trieste, where I helped found the AI Student Society.
My research aims to translate theoretical advances in language model interpretability into actionable insights for improving trustworthiness and human-AI collaboration. To this end, I lead the development of open-source interpretability software projects that enable reproducible analyses of model behaviors. I am also excited about the potential of human behavioral signals such as keylogging, gaze, and brain recordings to improve the usability and personalization of AI solutions.
Your (anonymous) constructive feedback is always welcome! 🙂
PhD in Natural Language Processing
University of Groningen (NL), 2021 - Ongoing
MSc. in Data Science and Scientific Computing
University of Trieste & SISSA (IT), 2018 - 2020
DEC in Software Management
Cégep de Saint-Hyacinthe (CA), 2015 - 2018
Applied Scientist Intern
Amazon Web Services (US), 2022
Research Scientist
Aindo (IT), 2020 - 2021
Visiting Research Assistant
ILC-CNR ItaliaNLP Lab (IT), 2019
Our paper QE4PE: Word-level Quality Estimation for Human Post-Editing was accepted to TACL, and Unsupervised Word-level Quality Estimation for Machine Translation Through the Lens of Annotators (Dis)agreement was accepted to EMNLP Main! I will present both at EMNLP in Suzhou, China 🇨🇳
I am co-organizing the BlackboxNLP Workshop at EMNLP 2025! Test your localization methods in our shared task! 🔍
I am visiting the IRT Saint-Exupéry in Toulouse, France, to collaborate on an interpretability project with the DEEL team! 🇫🇷
Model Internals-based Answer Attribution for Trustworthy Retrieval-Augmented Generation is accepted to EMNLP 2024, and Multi-property Steering of Large Language Models with Dynamic Activation Composition is accepted to BlackboxNLP 2024! See you in Miami! 🌴
PECoRe was accepted to ICLR 2024, and I presented it in Vienna! 🎉 I also co-organized the first Mechanistic Interpretability social at ICLR together with Nikhil Prakash, and we had more than 100 attendees!
An interpretability framework to detect and attribute context usage in language models’ generations
An open-source library to democratize access to model interpretability for sequence generation models
The first CLIP model pretrained on the Italian language.
A semantic browser for SARS-CoV-2 and COVID-19 powered by neural language models.
Generating letters with a neural language model in the style of Italo Svevo, a famous Italian writer of the 20th century.
A journey into the state of the art of histopathologic cancer detection approaches.