With the astounding advances of artificial intelligence in recent years, interpretability research has emerged as a fundamental effort to ensure the development of robust and transparent AI systems aligned with human needs. This talk will focus on user-centric interpretability applications aimed at improving our understanding of machine translation systems, with the ultimate goal of making post-editing more productive and enjoyable.
A presentation of my work studying different metrics of linguistic complexity and how they correlate with linguistic phenomena and learned representations in neural language models.
Is it possible to induce sparsity in neural networks while preserving their performance? An overview of the latest advances in making neural approaches more parsimonious.