Contrastive Image-Text Pretraining for Italian
CLIP is a multimodal model that learns to represent images and text jointly in the same embedding space. In this project, we propose the first CLIP model trained on Italian data, which in this context can be considered a low-resource language. Using a few techniques, we were able to fine-tune a SOTA Italian CLIP model with only 1.4 million training samples.
For more information, refer to our demo.
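The project's training code is not shown here, but the contrastive objective at the core of CLIP-style pretraining can be sketched as a symmetric InfoNCE loss: each image is matched against every caption in the batch (and vice versa), and the model is trained so the correct pairing wins. The NumPy sketch below is illustrative; the function name, shapes, and temperature value are assumptions, not the project's actual implementation.

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    Row i of image_emb is paired with row i of text_emb; every other
    row in the batch serves as an in-batch negative.
    """
    # L2-normalize so the dot product is cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # Pairwise similarity matrix, scaled by the temperature
    logits = image_emb @ text_emb.T / temperature

    def cross_entropy(logits):
        # Numerically stable log-softmax over each row
        logits = logits - logits.max(axis=1, keepdims=True)
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        # Correct image/text pairs sit on the diagonal
        return -np.mean(np.diag(log_probs))

    # Average the image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Perfectly aligned pairs drive the loss toward zero, while unrelated pairs keep it near the log of the batch size, which is what makes the objective usable for fine-tuning on a relatively small corpus.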
- Teaching NLP with Bracelets and Restaurant Menus: An Interactive Workshop for Italian Students
- That Looks Hard: Characterizing Linguistic Complexity in Humans and Language Models
- UmBERTo-MTSA @ AcCompl-It: Improving Complexity and Acceptability Prediction with Multi-task Learning on Self-Supervised Annotations
- ETC-NLG: End-to-end Topic-Conditioned Natural Language Generation
- Interpreting Neural Language Models for Linguistic Complexity Assessment