We propose DecoderLens, a method to evaluate the iterative refinement of representations in encoder-decoder Transformer models.
We introduce PECoRe, an interpretability framework for identifying context dependence in language model generations.
We introduce Retrieval and Attribute-Marking enhanced Prompting (RAMP) to perform attribute-controlled MT with multilingual LLMs.
We analyze input contributions of character-level MT models and show how they modulate word-level and character-level information.
We present Inseq, a Python library to democratize access to interpretability analyses of sequence generation models.
An open-source library to democratize access to model interpretability for sequence generation models.
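As a rough illustration of what Inseq offers, here is a minimal usage sketch along the lines of the library's quickstart; the model identifier Helsinki-NLP/opus-mt-en-it and the integrated_gradients method are illustrative choices among the options the library supports, not the only ones.

```python
import inseq

# Load a Hugging Face sequence generation model together with an
# attribution method (here: integrated gradients).
model = inseq.load_model("Helsinki-NLP/opus-mt-en-it", "integrated_gradients")

# Attribute the model's generation to its input tokens.
out = model.attribute("Hello world, this is a test sentence.")

# Visualize the token-level attribution scores.
out.show()
```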
IT5 models are the first encoder-decoder Transformers pretrained on more than 40 billion Italian words.
I present my work on metrics of linguistic complexity and how they correlate with linguistic phenomena and with representations learned by neural language models.
We present the first CLIP model for the Italian language (CLIP-Italian), trained on more than 1.4 million image-text pairs.
The first CLIP model pretrained on the Italian language.