This thesis presents a model-driven study of multiple phenomena associated with linguistic complexity, and how they are encoded in the learned representations of neural language models.
We investigate whether and how the choice of probing model architecture affects the performance of Italian transformers in encoding a wide spectrum of linguistic features.
This work describes a self-supervised data augmentation approach that improves learning models' performance when only a moderate amount of labeled data is available.
We present ETC-NLG, an approach leveraging topic modeling annotations to enable fully unsupervised End-to-end Topic-Conditioned Natural Language Generation over emergent topics in unlabeled document collections.
A summary of promising directions from ICLR 2020 for better and faster pretrained transformer language models.
A semantic browser for SARS-CoV-2 and COVID-19 powered by neural language models.
An overview of the latest advances in the field of NLP, with a focus on neural models and language understanding.
Is it possible to induce sparsity in neural networks while preserving their performance? An overview of the latest advances in making neural approaches more parsimonious.