Interpretability

Interpreting Neural Language Models for Linguistic Complexity Assessment

This thesis presents a model-driven study of multiple phenomena associated with linguistic complexity and of how they are encoded in the learned representations of neural language models.

Italian Transformers Under the Linguistic Lens

We investigate whether and how the choice of probing-model architecture affects the assessment of how well Italian transformers encode a wide spectrum of linguistic features.
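The comparison of probe architectures can be sketched as follows. This is an illustrative example only, not the study's actual setup: synthetic vectors stand in for frozen transformer representations, a synthetic binary label stands in for a linguistic feature, and two scikit-learn classifiers (linear vs. small MLP) stand in for the probing models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for frozen sentence embeddings (hypothetical: in practice these
# would be extracted from a pretrained Italian transformer).
n, d = 400, 64
X = rng.normal(size=(n, d))

# Synthetic "linguistic feature" label, made linearly recoverable from the
# representations so that both probes have something to find.
w = rng.normal(size=d)
y = (X @ w > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Two probing architectures trained on the same frozen features:
# a linear probe and a one-hidden-layer MLP probe.
probes = {
    "linear": LogisticRegression(max_iter=1000),
    "mlp": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
}
scores = {name: p.fit(X_tr, y_tr).score(X_te, y_te) for name, p in probes.items()}
print(scores)
```

Comparing the two accuracies illustrates the methodological point: if a more expressive probe scores much higher than a linear one, it is unclear how much of the gain reflects information in the representations versus the probe's own capacity.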