Natural Language Processing

Interpreting Neural Language Models for Linguistic Complexity Assessment

This thesis presents a model-driven study of multiple phenomena associated with linguistic complexity, and how they are encoded in the learned representations of neural language models.

Italian Transformers Under the Linguistic Lens

We investigate whether and how different probing model architectures affect the performance of Italian transformers in encoding a wide spectrum of linguistic features.

UmBERTo-MTSA @ AcCompl-It: Improving Complexity and Acceptability Prediction with Multi-task Learning on Self-Supervised Annotations

This work describes a self-supervised data augmentation approach used to improve model performance when only a moderate amount of labeled data is available.

ETC-NLG: End-to-end Topic-Conditioned Natural Language Generation

We present ETC-NLG, an approach leveraging topic modeling annotations to enable fully-unsupervised End-to-end Topic-Conditioned Natural Language Generation over emergent topics in unlabeled document collections.

ICLR 2020 Trends: Better & Faster Transformers for Natural Language Processing

A summary of promising directions from ICLR 2020 for better and faster pretrained transformer language models.

Covid-19 Semantic Browser

A semantic browser for SARS-CoV-2 and COVID-19 powered by neural language models.

Neural Language Models: the New Frontier of Natural Language Understanding

An overview of the latest advances in the field of NLP, with a focus on neural models and language understanding.

The Literary Ordnance: When the Writer is an AI

A discussion of the applications of AI and NLP in the fields of literature and digital humanities.