
Neural Language Models

Probing Linguistic Knowledge in Italian Neural Language Models across Language Varieties

We investigate whether and how different probing model architectures affect the performance of Italian transformers in encoding a wide spectrum of linguistic features.
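
As a rough illustration of the probing setup, the sketch below trains a simple linear probe on frozen transformer representations to predict a surface linguistic feature. The model identifier, toy sentences, and the choice of sentence length as the probed feature are assumptions for illustration, not the paper's actual experimental configuration.

```python
# Illustrative sketch: linear probe over frozen Italian transformer representations.
# Model name, sentences, and the probed feature (sentence length) are placeholders.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

model_name = "dbmdz/bert-base-italian-xxl-cased"  # hypothetical choice of Italian encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name)
encoder.eval()

sentences = ["Il gatto dorme sul divano.", "Domani andremo tutti insieme al mare."]  # toy data
feature_values = [len(s.split()) for s in sentences]  # toy linguistic feature: token count

def embed(sentence):
    # Mean-pool the last hidden layer to get a fixed-size sentence representation.
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state
    return hidden.mean(dim=1).squeeze(0).numpy()

X = [embed(s) for s in sentences]
probe = LinearRegression().fit(X, feature_values)  # the probing model itself
print("train R^2:", r2_score(feature_values, probe.predict(X)))
```

Varying the probe (e.g. swapping the linear regressor for a small nonlinear model) while keeping the frozen encoder fixed is what lets one ask how much of the measured "linguistic knowledge" comes from the representations versus the probe's own capacity.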

UmBERTo-MTSA @ AcCompl-It: Improving Complexity and Acceptability Prediction with Multi-task Learning on Self-Supervised Annotations

This work describes a self-supervised data augmentation approach used to improve the performance of learning models when only a moderate amount of labeled data is available.
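
A minimal sketch of the multi-task side of this setup is shown below: a shared encoder with two regression heads, one per task, trained with a summed loss where the targets may be model-generated (self-supervised) annotations on unlabeled sentences. The class names, the UmBERTo checkpoint identifier, and the use of the first-token representation are assumptions for illustration, not the paper's actual implementation.

```python
# Sketch of a shared encoder with two regression heads trained jointly on
# complexity and acceptability; labels may be self-supervised annotations.
import torch
import torch.nn as nn
from transformers import AutoModel

class MultiTaskRegressor(nn.Module):
    def __init__(self, encoder_name="Musixmatch/umberto-commoncrawl-cased-v1"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.complexity_head = nn.Linear(hidden, 1)     # predicts complexity score
        self.acceptability_head = nn.Linear(hidden, 1)  # predicts acceptability score

    def forward(self, input_ids, attention_mask):
        # First-token representation shared across both task heads.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]
        return self.complexity_head(cls), self.acceptability_head(cls)

def multitask_loss(pred_c, pred_a, gold_c, gold_a):
    # Summed per-task MSE; gold_c and gold_a can be gold labels or
    # annotations produced by single-task models on unlabeled data.
    mse = nn.MSELoss()
    return mse(pred_c.squeeze(-1), gold_c) + mse(pred_a.squeeze(-1), gold_a)
```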