Welcome to my website! 👋 I am a PhD student in the Computational Linguistics Group at the University of Groningen and a member of the InDeep consortium, working on user-centric interpretability for neural machine translation. I am also the main developer of the Inseq library. My supervisors are Arianna Bisazza, Malvina Nissim and Grzegorz Chrupała.
Previously, I was a research intern at Amazon Translate NYC, a research scientist at Aindo, a Data Science MSc student at the University of Trieste, and a co-founder of the AI Student Society.
My research focuses on interpretability for generative language models, with a particular interest in end-user benefits and the use of human behavioral signals. I am also interested in causality and open-source collaboration.
Your (anonymous) feedback is always welcome! 🙂
PhD in Natural Language Processing
University of Groningen (NL), 2021 - Ongoing
MSc in Data Science and Scientific Computing
University of Trieste & SISSA (IT), 2018 - 2020
DEC in Software Management
Cégep de Saint-Hyacinthe (CA), 2015 - 2018
Applied Scientist Intern
Amazon Web Services (US), 2022
Research Scientist
Aindo (IT), 2020 - 2021
Visiting Research Assistant
ILC-CNR ItaliaNLP Lab (IT), 2019
Model Internals-based Answer Attribution for Trustworthy Retrieval-Augmented Generation has been accepted to EMNLP 2024, and Multi-property Steering of Large Language Models with Dynamic Activation Composition has been accepted to BlackboxNLP 2024! See you in Miami! 🌴
Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses has been accepted to CLiC-it 2024! See you in Pisa! 🎉
PECoRe has been accepted to ICLR 2024, and I presented it in Vienna! 🎉 I also co-organized the first Mechanistic Interpretability social at ICLR together with Nikhil Prakash, and we had more than 100 attendees!
I was awarded two research grants from the Imminent Research Center and the Amsterdam eScience Center to fund the development of the Inseq library and my future research on machine translation.
An interpretability framework to detect and attribute context usage in language models’ generations
An open-source library to democratize access to model interpretability for sequence generation models
The first CLIP model pretrained on the Italian language.
A semantic browser for SARS-CoV-2 and COVID-19 powered by neural language models.
Generating letters with a neural language model in the style of Italo Svevo, a famous Italian writer of the 20th century.
A journey through state-of-the-art approaches to histopathologic cancer detection.