Oral presentation at CLiC-it 2024

We evaluate the rebus-solving capabilities of large language models on a new Italian dataset.

The IT5 models are the first encoder-decoder Transformers pretrained on more than 40 billion Italian words.

We present the first CLIP model for the Italian language (CLIP-Italian), trained on more than 1.4 million image-text pairs.

The first CLIP model pretrained on the Italian language.

We developed an interactive workshop designed to introduce NLP and computational linguistics to Italian high school students.