The study of linguistic complexity is a deeply multidisciplinary area, ranging from the investigation of cognitive processing in human readers to the characterization of the structural complexity of natural language expressions. In recent years, computational methods for language processing and analysis have led to important advances in our understanding of multiple phenomena associated with linguistic complexity. In line with the current state of the art, this thesis presents a model-driven study of several such phenomena. First, the relationships between various extrinsic complexity metrics (perceived linguistic complexity, readability, cognitive processing effort, and predictability) are empirically explored, highlighting their similarities and differences from a linguistically and cognitively motivated perspective. Then, the extent to which the information underlying these complexity metrics is acquired by neural network-based language models, at various levels of abstraction and granularity, is investigated using interpretability techniques from the natural language processing literature. Finally, the ability of various computational models of complexity to predict the cognitive processing difficulties associated with atypical syntactic constructions, such as garden-path sentences, is evaluated. The experimental results provide convergent evidence for the limited abstraction and generalization abilities of state-of-the-art neural language models in predicting linguistic complexity, and encourage research directions that integrate symbolic and interpretable information into this area. Code is available at https://github.com/gsarti/interpreting-complexity.