Neural Language Models

Italian Transformers Under the Linguistic Lens

We investigate whether and how the choice of probing model architecture affects the measured ability of Italian transformers to encode a wide spectrum of linguistic features.
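
As context for the probing setup the abstract refers to, here is a minimal sketch of one common variant: a linear probe trained on frozen sentence representations. Assumptions are mine, not the paper's: the checkpoint name (dbmdz/bert-base-italian-cased), mean pooling, the Ridge probe, and the toy target feature (sentence length) are illustrative choices; the paper itself compares several probe architectures over many linguistic features.

import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import Ridge

MODEL_NAME = "dbmdz/bert-base-italian-cased"  # example checkpoint, not necessarily the one used in the paper
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(sentences):
    # Mean-pool the last hidden layer into one fixed-size vector per sentence.
    with torch.no_grad():
        batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
        hidden = model(**batch).last_hidden_state      # (batch, seq, dim)
        mask = batch["attention_mask"].unsqueeze(-1)   # (batch, seq, 1)
        return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

sentences = ["Il gatto dorme.", "I ragazzi che ho visto ieri giocavano a calcio."]
labels = [len(s.split()) for s in sentences]  # toy linguistic feature: sentence length

# Linear probe on frozen embeddings; the probe's accuracy is read as a
# measure of how much of the feature the representations encode.
probe = Ridge().fit(embed(sentences), labels)

Swapping Ridge for a small MLP is the kind of architecture change whose effect on probing results the paper studies.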

UmBERTo-MTSA @ AcCompl-It: Improving Complexity and Acceptability Prediction with Multi-task Learning on Self-Supervised Annotations

This work describes a self-supervised data augmentation approach used to improve the performance of learning models when only a moderate amount of labeled data is available.
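
To make the augmentation idea concrete, below is a hedged sketch of generic pseudo-labeling: a model trained on the small gold set annotates unlabeled text, and its confident predictions are kept as silver training data. This is only an illustration of the underlying principle, not the paper's pipeline, which fine-tunes transformers with multi-task learning on the self-supervised annotations; the classifier, features, threshold, and toy data below are all invented for the example.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_texts = ["frase molto semplice", "periodo ipotattico assai complesso"]
labels = np.array([0, 1])  # toy targets: 0 = low complexity, 1 = high
unlabeled_texts = ["una frase breve", "subordinate annidate e incisi multipli"]

# Train an initial model on the small gold set.
vec = TfidfVectorizer().fit(labeled_texts + unlabeled_texts)
clf = LogisticRegression().fit(vec.transform(labeled_texts), labels)

# Pseudo-label the unlabeled pool, keeping only confident predictions.
probs = clf.predict_proba(vec.transform(unlabeled_texts))
confident = probs.max(axis=1) >= 0.8  # illustrative threshold
silver_labels = probs.argmax(axis=1)[confident]

# Retrain on gold + silver data: the augmented training set.
aug_texts = labeled_texts + [t for t, keep in zip(unlabeled_texts, confident) if keep]
aug_labels = np.concatenate([labels, silver_labels])
clf = LogisticRegression().fit(vec.transform(aug_texts), aug_labels)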