This primer provides a concise technical introduction to the current techniques used to interpret the inner workings of Transformer-based language models, focusing on the generative decoder-only architecture.
We present Inseq, a Python library to democratize access to interpretability analyses of sequence generation models.
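As a flavor of the kind of analysis Inseq enables, here is a minimal sketch of a feature attribution run based on the library's documented usage; the choice of GPT-2, the integrated gradients method, and the prompt are illustrative assumptions rather than a prescribed setup.

```python
# Minimal Inseq sketch: attribute a decoder-only model's own generation
# back to its input tokens (model, method, and prompt are illustrative).
import inseq

# Load a generative model together with a gradient-based attribution method.
model = inseq.load_model("gpt2", "integrated_gradients")

# Generate a continuation for the prompt and compute token-level attributions.
out = model.attribute("The interpretability of language models")

# Display the attribution scores as a token-level heatmap.
out.show()
```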
We investigate whether and how the choice of probing model architecture affects the assessment of how well Italian Transformer models encode a wide spectrum of linguistic features.
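To make the probing setup concrete, the following is a generic sketch, not the study's exact protocol: frozen sentence embeddings from an Italian Transformer are used to predict a linguistic feature with probes of different architectures (a linear model versus a small MLP). The model name, the toy sentences, and the target feature are placeholder assumptions.

```python
# Generic probing sketch: compare probe architectures on frozen embeddings.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor

MODEL_NAME = "dbmdz/bert-base-italian-cased"  # assumed Italian encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME).eval()

sentences = [
    "Il gatto dorme sul divano.",
    "Domani andremo al mare con gli amici.",
    "La riunione è stata rimandata alla prossima settimana.",
    "Piove.",
]
# Toy target: sentence length in words, standing in for a linguistic feature.
targets = [len(s.split()) for s in sentences]

with torch.no_grad():
    enc = tokenizer(sentences, padding=True, return_tensors="pt")
    hidden = encoder(**enc).last_hidden_state        # (batch, seq_len, dim)
    mask = enc["attention_mask"].unsqueeze(-1)       # ignore padding tokens
    embeddings = ((hidden * mask).sum(1) / mask.sum(1)).numpy()

# Probes with different architectures, trained on the frozen embeddings.
probes = [("linear", Ridge()),
          ("mlp", MLPRegressor(hidden_layer_sizes=(64,), max_iter=500))]
for name, probe in probes:
    probe.fit(embeddings, targets)
    print(name, "train R^2:", probe.score(embeddings, targets))
```

In practice, such probes would be trained and evaluated on large annotated corpora with proper held-out splits; the point of the sketch is only to show where the probe architecture enters the pipeline.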
This thesis presents a model-driven study of multiple phenomena associated with linguistic complexity and of how these are encoded in the learned representations of neural language models.
This work describes a self-supervised data augmentation approach used to improve the performance of learning models when only a moderate amount of labeled data is available.
We present ETC-NLG, an approach that leverages topic modeling annotations to enable fully unsupervised End-to-end Topic-Conditioned Natural Language Generation over emergent topics in unlabeled document collections.
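As a rough illustration of topic conditioning over unlabeled text, and not of ETC-NLG's exact pipeline, the sketch below uses an unsupervised topic model to annotate documents with their dominant emergent topic and attaches the topic labels as control prefixes that a conditioned generator could later be trained on; the corpus, topic count, and prefix format are placeholders.

```python
# Generic topic-conditioning sketch: annotate unlabeled documents with an
# unsupervised topic model and build topic-prefixed examples for a
# conditioned language model (corpus and hyperparameters are illustrative).
from gensim.corpora import Dictionary
from gensim.models import LdaModel

documents = [
    "the central bank raised interest rates to curb inflation",
    "the team won the championship after a dramatic final match",
    "new vaccine trials show promising results against the virus",
    "stock markets rallied as investors welcomed the earnings reports",
]
tokenized = [doc.split() for doc in documents]

dictionary = Dictionary(tokenized)
corpus = [dictionary.doc2bow(tokens) for tokens in tokenized]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, random_state=0)

# Label each document with its dominant emergent topic and prepend it
# as a control prefix for downstream conditioned generation.
conditioned_examples = []
for tokens, bow in zip(tokenized, corpus):
    topic_id = max(lda.get_document_topics(bow), key=lambda pair: pair[1])[0]
    conditioned_examples.append(f"<topic_{topic_id}> " + " ".join(tokens))

print(conditioned_examples)
```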
A summary of promising directions from ICLR 2020 for better and faster pretrained Transformer language models.