With the astounding advances of artificial intelligence in recent years, interpretability research has emerged as a fundamental effort to ensure the development of robust and transparent AI systems aligned with human needs. This talk will focus on user-centric interpretability applications aimed at improving our understanding of machine translation systems, with the ultimate goal of increasing post-editing productivity and enjoyability.
With the astounding advances of artificial intelligence in recent years, the field of interpretability research has emerged as a fundamental effort to ensure the development of robust AI systems aligned with human values. In this talk, two perspectives on AI interpretability will be presented alongside two case studies in natural language processing. The first leverages behavioral data and probing tasks to study the perception and encoding of linguistic complexity in humans and language models. The second introduces a user-centric interpretability perspective for neural machine translation to improve post-editing productivity and enjoyability. The need for such application-driven approaches will be emphasized in light of current challenges in faithfully evaluating advances in this field of study.
We present IT5, the first family of encoder-decoder transformer models pretrained specifically on more than 40 billion words of Italian text, reaching state-of-the-art performance on most Italian conditional language generation tasks.
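As a usage illustration, encoder-decoder checkpoints of this kind can typically be loaded through the Hugging Face transformers library and used for conditional generation. The sketch below is a minimal example under assumed names: the checkpoint identifier "gsarti/it5-base" and the Italian prompt are illustrative assumptions, not details stated in the abstract.

```python
# Minimal sketch: loading an IT5-style encoder-decoder model for Italian
# conditional generation. The checkpoint name is an assumption.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("gsarti/it5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("gsarti/it5-base")

# A summarization-style prompt in Italian (illustrative; the base
# checkpoint would need task-specific fine-tuning for best results).
inputs = tokenizer(
    "riassumi: L'Italia è una repubblica parlamentare fondata sul lavoro.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```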
We developed an interactive workshop designed to illustrate the basic principles of NLP and computational linguistics to Italian high school students aged between 13 and 18. The workshop takes the form of a game in which participants play the role of machines that must solve some of the most common problems a computer faces in understanding human language.
This paper investigates the relationship between two complementary perspectives on the human assessment of sentence complexity and how they are modeled in a neural language model (NLM), highlighting how the linguistic information encoded in the model's representations changes when it learns to predict complexity.
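For concreteness, probing setups of the kind mentioned here commonly fit a simple regressor on frozen model representations to predict human complexity judgments. The sketch below is a minimal illustration under assumed inputs: the checkpoint name, the sentences, and the complexity scores are all hypothetical stand-ins, not data from the paper.

```python
# Minimal probing sketch: can a linear model read sentence complexity
# off frozen NLM representations? All names and data are illustrative.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import Ridge

sentences = [
    "Il gatto dorme.",
    "Il libro che mi hai prestato era interessante.",
    "La frase che il linguista che conoscevi analizzò era molto lunga.",
]
scores = [1.0, 3.0, 5.5]  # hypothetical human complexity judgments

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

with torch.no_grad():
    enc = tokenizer(sentences, padding=True, return_tensors="pt")
    hidden = model(**enc).last_hidden_state
    # Mean-pool final hidden states over non-padding tokens to get one
    # fixed-size vector per sentence.
    mask = enc["attention_mask"].unsqueeze(-1)
    embeddings = (hidden * mask).sum(1) / mask.sum(1)

# A linear probe: good predictions suggest the representations encode
# complexity in a linearly accessible way.
probe = Ridge().fit(embeddings.numpy(), scores)
print(probe.predict(embeddings.numpy()))
```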