Home Page and Blog of the Multilingual NLP course @ Sapienza University of Rome

Friday, March 27, 2015

Lecture 5: syntax and parsing

Introduction to syntax. Context-free grammars and languages. Treebanks. Normal forms. Dependency grammars. Syntactic parsing: top-down and bottom-up. Structural ambiguity. Backtracking vs. dynamic programming for parsing. The CKY algorithm. The Earley algorithm. Probabilistic CFGs (PCFGs). PCFGs for disambiguation: the probabilistic CKY algorithm. PCFGs for language modeling.
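To make the dynamic-programming idea behind CKY concrete, here is a minimal sketch of a CKY recognizer for a grammar in Chomsky normal form; the toy grammar, lexicon, and example sentence are illustrative, not taken from the lecture:

    from itertools import product

    # Toy grammar in Chomsky normal form (illustrative): binary rules are
    # stored as (B, C) -> set of A with A -> B C; lexical rules as
    # word -> set of A with A -> word.
    grammar = {
        ("Det", "N"): {"NP"},
        ("V", "NP"): {"VP"},
        ("NP", "VP"): {"S"},
    }
    lexicon = {
        "the": {"Det"},
        "dog": {"N"},
        "cat": {"N"},
        "saw": {"V"},
    }

    def cky_recognize(words):
        n = len(words)
        # table[i][j] holds every nonterminal that derives words[i:j]
        table = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
        for i, w in enumerate(words):
            table[i][i + 1] = set(lexicon.get(w, ()))
        for width in range(2, n + 1):          # span width, bottom-up
            for i in range(n - width + 1):     # left edge of the span
                j = i + width
                for k in range(i + 1, j):      # split point
                    for B, C in product(table[i][k], table[k][j]):
                        table[i][j] |= grammar.get((B, C), set())
        return "S" in table[0][n]

    print(cky_recognize("the dog saw the cat".split()))  # True

The same table, extended with back-pointers (or with rule probabilities, for the probabilistic CKY variant), yields the parse trees themselves rather than a yes/no answer.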
Friday, March 20, 2015
Lecture 4: Part-of-Speech Tagging
Introduction to part-of-speech (POS) tagging. POS tagsets: the Penn Treebank tagset and the Google Universal Tagset. Rule-based POS tagging. Stochastic POS tagging. Hidden Markov models (HMMs). Deleted interpolation. Linear and logistic regression: Maximum Entropy models. Transformation-based POS tagging. Handling out-of-vocabulary words.
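As a concrete sketch of stochastic tagging with an HMM, the following Viterbi decoder finds the most probable tag sequence; all tag names and probabilities below are made up for illustration, not taken from the lecture:

    import math

    # Made-up HMM parameters for illustration only.
    tags = ["DET", "NOUN", "VERB"]
    start = {"DET": 0.6, "NOUN": 0.3, "VERB": 0.1}   # P(tag | sentence start)
    trans = {                                        # P(tag_i | tag_{i-1})
        "DET":  {"DET": 0.05, "NOUN": 0.90, "VERB": 0.05},
        "NOUN": {"DET": 0.10, "NOUN": 0.30, "VERB": 0.60},
        "VERB": {"DET": 0.50, "NOUN": 0.40, "VERB": 0.10},
    }
    emit = {                                         # P(word | tag)
        "DET":  {"the": 0.7, "a": 0.3},
        "NOUN": {"dog": 0.4, "barks": 0.1},
        "VERB": {"barks": 0.5},
    }
    UNK = 1e-10  # crude floor for unseen (tag, word) pairs

    def viterbi(words):
        # v[t]: log-probability of the best tag sequence ending in tag t
        v = {t: math.log(start[t] * emit[t].get(words[0], UNK)) for t in tags}
        backptrs = []
        for w in words[1:]:
            new_v, ptr = {}, {}
            for t in tags:
                best = max(tags, key=lambda p: v[p] + math.log(trans[p][t]))
                new_v[t] = v[best] + math.log(trans[best][t] * emit[t].get(w, UNK))
                ptr[t] = best
            v = new_v
            backptrs.append(ptr)
        # Recover the sequence by following back-pointers from the best last tag
        seq = [max(tags, key=v.get)]
        for ptr in reversed(backptrs):
            seq.append(ptr[seq[-1]])
        return seq[::-1]

    print(viterbi("the dog barks".split()))  # ['DET', 'NOUN', 'VERB']

In practice the transition probabilities would be estimated from a treebank and smoothed (e.g., with deleted interpolation), and out-of-vocabulary words would need proper handling rather than the small probability floor used here.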
Saturday, March 14, 2015
Lecture 3: language modeling
We introduced N-gram models (unigrams, bigrams, trigrams), how their probabilities are estimated, and the issues that arise. We discussed perplexity and its close relationship with entropy, and we introduced smoothing and interpolation techniques to deal with data sparsity.
We also discussed homework 1a in detail (see the slides on the class group).
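As a minimal illustration of these ideas (not code from the lecture), here is a bigram model with add-one (Laplace) smoothing and the corresponding perplexity computation on a toy corpus:

    import math
    from collections import Counter

    # Tiny illustrative corpus, padded with sentence-boundary markers.
    corpus = [["<s>", "i", "like", "nlp", "</s>"],
              ["<s>", "i", "like", "cats", "</s>"]]

    unigrams = Counter(w for sent in corpus for w in sent)
    bigrams = Counter((a, b) for sent in corpus for a, b in zip(sent, sent[1:]))
    V = len(unigrams)  # vocabulary size, used by add-one smoothing

    def p(b, a):
        # Add-one smoothed bigram probability P(b | a)
        return (bigrams[(a, b)] + 1) / (unigrams[a] + V)

    def perplexity(sent):
        # Perplexity = 2 ** (minus the average log2-probability per predicted token)
        logp = sum(math.log2(p(b, a)) for a, b in zip(sent, sent[1:]))
        return 2 ** (-logp / (len(sent) - 1))

    print(perplexity(["<s>", "i", "like", "nlp", "</s>"]))

Without the add-one term, any unseen bigram would receive probability zero and make the perplexity of the whole sentence infinite, which is exactly the data-sparsity problem that smoothing addresses.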
Friday, March 6, 2015
Lecture 2: morphological analysis + homework 1b
We introduced words and morphemes. Before delving into morphology and morphological analysis, we presented regular expressions as a powerful tool for dealing with the different forms of a word, and finite state transducers for encoding the lexicon and orthographic rules. We assigned homework 1b on Wiktionary-based morphological analysis (which covers two of the three homeworks).
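A small sketch of the regular-expression idea, with a few illustrative suffix-stripping rules; a real morphological analyzer would combine a lexicon with orthographic rules, e.g., encoded as finite state transducers:

    import re

    # One regular expression matching several inflected forms of "walk".
    walk_re = re.compile(r"\bwalk(?:s|ed|ing)?\b")
    print(walk_re.findall("She walks; they walked; he is walking."))

    # Naive suffix-stripping rules (illustrative only): (pattern, replacement)
    # pairs tried in order; the first full match wins.
    rules = [
        (re.compile(r"(.+?)ies$"), r"\1y+s"),    # flies   -> fly + plural
        (re.compile(r"(.+?)s$"),   r"\1+s"),     # cats    -> cat + plural
        (re.compile(r"(.+?)ing$"), r"\1+ing"),   # walking -> walk + gerund
        (re.compile(r"(.+?)ed$"),  r"\1+ed"),    # walked  -> walk + past
    ]

    def analyze(word):
        for pattern, repl in rules:
            if pattern.fullmatch(word):
                return pattern.sub(repl, word)
        return word  # no rule applied: assume base form

    for w in ["flies", "cats", "walking", "walked", "dog"]:
        print(w, "->", analyze(w))

Such rules quickly break down (e.g., "ties" or "sing" would be mis-analyzed here), which is why pairing the rules with a lexicon, as a finite state transducer does, matters.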
Sunday, March 1, 2015
Lecture 1: introduction
We gave an introduction to the course and its field, Natural Language Processing, starting from the Turing Test as a tool to understand whether "machines can think". We also discussed the pitfalls of the test, including Searle's Chinese Room argument. We then provided examples of tasks in desperate need of accurate NLP: computer-assisted and machine translation, text summarization, personal assistants, text understanding, machine reading, question answering, and information retrieval.