The field of Natural Language Processing has undergone a significant paradigm shift over the last few years, moving rapidly from grammars and rule sets to neural networks. In this talk, I will focus on the most significant recent advances in the field, namely contextual representations and the transformer architecture for neural language models, and show how they relate to natural language inference and understanding.