HuggingFace | Gabriele Sarti
IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation
We present IT5, the first family of encoder-decoder transformer models pretrained specifically on Italian, using more than 40 billion words and reaching state-of-the-art performance on most Italian conditional language generation tasks.
Contrastive Language-Image Pre-training for the Italian Language
We present the first CLIP model for the Italian language (CLIP-Italian), trained on more than 1.4 million image-text pairs.
Contrastive Image-Text Pretraining for Italian
The first CLIP model pretrained on Italian-language data.