This talk introduces the Inseq toolkit for interpreting sequence generation models. Its usage is illustrated with examples covering state-of-the-art approaches to language model interpretability, such as contrastive attribution, tuned lenses and causal mediation analysis.
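As a flavor of the toolkit's API, here is a minimal attribution sketch using Inseq's public interface; the model and attribution method below are illustrative placeholders, not necessarily the ones used in the talk.

```python
import inseq

# Load a Hugging Face model wrapped with an attribution method
# ("gpt2" and "integrated_gradients" are placeholder choices)
model = inseq.load_model("gpt2", "integrated_gradients")

# Attribute the model's generation for a prompt and visualize the scores
out = model.attribute("Hello everyone, welcome to")
out.show()
```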
This thesis presents a model-driven study of several phenomena associated with linguistic complexity, and of how they are encoded in the learned representations of neural language models.
A summary of promising directions from ICLR 2020 for better and faster pretrained Transformer language models.
An overview of the latest advances in the field of NLP, with a focus on neural models and language understanding.
Generating letters with a neural language model in the style of Italo Svevo, a famous Italian writer of the 20th century.
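A minimal sketch of this kind of style-conditioned generation with a causal language model via Hugging Face transformers; "gpt2" below stands in for a model fine-tuned on Svevo's letters, whose actual checkpoint is not specified here.

```python
from transformers import pipeline

# Placeholder checkpoint: a real setup would load a model fine-tuned
# on Svevo's correspondence instead of base "gpt2"
generator = pipeline("text-generation", model="gpt2")

# Sample a continuation from a letter-like opening
letter = generator("Carissimo amico,", max_new_tokens=60, do_sample=True)
print(letter[0]["generated_text"])
```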