Lecture 14 (21/04/2023, 4.5 hours): more on the Transformer, pre-trained language models; introduction to lexical semantics
More on the Transformer architecture. Pre-trained language models: BERT, GPT, RoBERTa, XLM. Introduction to lexical semantics: meaning representations, WordNet, BabelNet. Neurosymbolic NLP.
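
At the heart of the Transformer is scaled dot-product attention: queries are compared against keys, the scores are normalised with a softmax, and the values are averaged accordingly. Below is a minimal PyTorch sketch of that operation; the tensor shapes are illustrative assumptions, not code from the lecture.

    import math
    import torch

    def scaled_dot_product_attention(q, k, v):
        """q, k, v: tensors of shape (batch, heads, seq_len, d_k)."""
        d_k = q.size(-1)
        scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # pairwise query-key similarities
        weights = torch.softmax(scores, dim=-1)            # attention distribution over tokens
        return weights @ v                                 # weighted sum of the values

    # Toy example: 1 sentence, 8 heads, 10 tokens, 64-dimensional heads.
    q = k = v = torch.randn(1, 8, 10, 64)
    out = scaled_dot_product_attention(q, k, v)
    print(out.shape)  # torch.Size([1, 8, 10, 64])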
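Pre-trained language models such as BERT are commonly loaded through the Hugging Face transformers library. The sketch below assumes the public bert-base-uncased checkpoint purely as an example; it shows how a sentence is tokenized and mapped to contextual embeddings.

    from transformers import AutoModel, AutoTokenizer

    # Assumed example checkpoint; GPT-2, RoBERTa, or XLM checkpoints load the same way.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    inputs = tokenizer("The Transformer changed NLP.", return_tensors="pt")
    outputs = model(**inputs)
    # One contextual vector per token: (1, num_tokens, 768) for bert-base.
    print(outputs.last_hidden_state.shape)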
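On the lexical-semantics side, WordNet can be queried directly through NLTK. The sketch below uses "bank" as an assumed example of a polysemous lemma; BabelNet is served through its own web API and key, so it is not sketched here.

    import nltk
    nltk.download("wordnet", quiet=True)  # fetch the WordNet data on first run
    from nltk.corpus import wordnet as wn

    # Each synset is one sense of the word, with its own gloss.
    for synset in wn.synsets("bank"):
        print(synset.name(), "-", synset.definition())
    # e.g. bank.n.01 - sloping land (especially the slope beside a body of water)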