We investigate the impact of word-level quality estimation on MT post-editing with 42 professional post-editors.
We analyze the input contributions of character-level MT models and show how they modulate word- and character-level information.
We investigate whether and how the choice of probing-model architecture affects the performance of Italian transformers in encoding a wide spectrum of linguistic features.
We present ETC-NLG, an approach that leverages topic-modeling annotations to enable fully unsupervised End-to-end Topic-Conditioned Natural Language Generation over emergent topics in unlabeled document collections.