Thursday, May 30, 2013

Lecture 10: Statistical Machine Translation

Introduction to Machine Translation. Rule-based vs. Statistical MT. Statistical MT: the noisy channel model. The language model and the translation model. The phrase-based translation model. Learning a translation model from training data. Phrase-translation tables. Parallel corpora. Extracting phrases from word alignments. Word alignments.
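
For quick reference, the noisy channel formulation of statistical MT can be written as follows (standard textbook notation, not copied from the slides):

\hat{e} = \arg\max_{e} P(e \mid f) = \arg\max_{e} P(f \mid e)\, P(e)

where f is the source ("foreign") sentence, e ranges over candidate target (e.g. English) sentences, P(f|e) is the translation model and P(e) is the language model.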

IBM models for word alignment. Many-to-one and many-to-many alignments. IBM model 1 and the HMM alignment model. Training the alignment models: the Expectation Maximization (EM) algorithm. Symmetrizing alignments for phrase-based MT: symmetrizing by intersection; the growing heuristic. Calculating the phrase translation table. Decoding: stack decoding. Evaluation of MT systems. BLEU. Log-linear models for MT.
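
For those who want to experiment, here is a minimal sketch of IBM Model 1 training with EM, written in Python for illustration (the toy corpus, the variable names and the omission of the NULL word are my own simplifications, not code from the lecture):

# Minimal sketch of IBM Model 1 EM training; toy data, illustration only.
from collections import defaultdict

# Toy sentence-aligned corpus: (foreign sentence, English sentence) pairs.
corpus = [
    ("la casa".split(),  "the house".split()),
    ("la porta".split(), "the door".split()),
    ("casa".split(),     "house".split()),
]

# Initialise t(f|e) uniformly over the English vocabulary.
e_vocab = {e for _, es in corpus for e in es}
t = defaultdict(lambda: 1.0 / len(e_vocab))

for _ in range(10):                      # a few EM iterations
    count = defaultdict(float)           # expected counts c(f, e)
    total = defaultdict(float)           # expected counts c(e)
    # E-step: distribute each foreign word's count over the English words.
    for fs, es in corpus:
        for f in fs:
            norm = sum(t[(f, e)] for e in es)
            for e in es:
                delta = t[(f, e)] / norm
                count[(f, e)] += delta
                total[e] += delta
    # M-step: re-estimate t(f|e) from the expected counts.
    for (f, e), c in count.items():
        t[(f, e)] = c / total[e]

print(round(t[("casa", "house")], 3))    # converges towards 1.0

After a few iterations t(casa|house) approaches 1: EM concentrates the probability mass on consistently co-occurring word pairs, which is the basis for the word alignments that phrase extraction builds on.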

Friday, May 24, 2013

Lecture 9: Semantic Role Labeling, Discourse and Advanced Topics

Semantic fields. The semantics of events. Semantic roles and thematic roles. FrameNet. Selectional restrictions and preferences. Semantic Role Labeling. Features for semantic role labeling. The state of the art.

Computational discourse. Motivating examples. Unsupervised vs. supervised linear segmentation. Text coherence. Automatic coherence assignment. Reference resolution. Pronominal anaphora resolution.

Advanced topic: multilingual unsupervised part-of-speech tagging.

Lecture 8: NLP Research at Sapienza

Maud Ehrmann: acronym extraction. Stefano Faralli: ontology learning from scratch. Tiziano Flati: automatic harvesting of semantic predicates. Marc Franco Salvador: plagiarism detection. David Jurgens: gathering annotated data and relational similarity. Andrea Moro: semantically-enhanced Open Information Extraction. Taher Pilehvar: textual similarity. Daniele Vannella: word sense induction.

Thursday, May 9, 2013

Lecture 7: Word Sense Disambiguation

Introduction to Word Sense Disambiguation (WSD). Motivation. The typical WSD framework. Lexical sample vs. all-words WSD. WSD viewed as lexical substitution and cross-lingual lexical substitution. Knowledge resources. Representation of context: flat and structured representations. Main approaches to WSD: supervised, unsupervised and knowledge-based WSD. Two important dimensions: supervision and knowledge.

Supervised Word Sense Disambiguation: pros and cons. Vector representation of context. Main supervised disambiguation paradigms: decision trees, neural networks, instance-based learning, Support Vector Machines. Unsupervised Word Sense Disambiguation: Word Sense Induction. Context-based clustering. Co-occurrence graphs: curvature clustering, HyperLex. Knowledge-based Word Sense Disambiguation. The Lesk and Extended Lesk algorithms. Structural approaches: similarity measures and graph algorithms. Conceptual density. Structural Semantic Interconnections.

Evaluation: precision, recall, F1, accuracy. Baselines. The Senseval and SemEval evaluation competitions. Applications of Word Sense Disambiguation. Open issues: representation of word senses, domain WSD, the knowledge acquisition bottleneck.
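
As a taste of knowledge-based WSD, here is a minimal sketch of the simplified Lesk algorithm in Python (the two glosses and the sense labels are invented for the example and do not come from WordNet or the lecture material):

# Simplified Lesk: pick the sense whose gloss overlaps most with the context.
def simplified_lesk(context_words, glosses):
    context = set(context_words)
    best_sense, best_overlap = None, -1
    for sense, gloss in glosses.items():
        overlap = len(context & set(gloss.split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

glosses = {
    "bank#1": "a financial institution that accepts deposits and lends money",
    "bank#2": "sloping land beside a body of water such as a river",
}
context = "he sat on the river bank near the water".split()
print(simplified_lesk(context, glosses))   # -> bank#2

The sense whose gloss shares the most words with the context ("river", "water") wins; the Extended Lesk variant enlarges each gloss with the glosses of related senses before computing the overlap.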
 

Friday, May 3, 2013

Lecture 6: semantics

Introduction to computational semantics. Syntax-driven semantic analysis. Semantic attachments. First-Order Logic. Lambda notation and the lambda calculus for semantic representation. Lexicon, lemmas and word forms. Word senses: monosemy vs. polysemy. Special kinds of polysemy. Computational sense representations: enumeration vs. generation. Graded word sense assignment. Encoding word senses: paper dictionaries, thesauri, machine-readable dictionaries, computational lexicons. WordNet. Wordnets in other languages. BabelNet.
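
A tiny worked example of the kind of lambda-calculus composition involved, in standard textbook notation (the exact notation used on the slides may differ):

\textit{loves} \Rightarrow \lambda x.\,\lambda y.\,\mathit{Loves}(y, x)
\textit{loves John} \Rightarrow (\lambda x.\,\lambda y.\,\mathit{Loves}(y, x))(\mathit{John}) = \lambda y.\,\mathit{Loves}(y, \mathit{John})
\textit{Mary loves John} \Rightarrow \mathit{Loves}(\mathit{Mary}, \mathit{John})

Each application step (beta reduction) fills one argument slot, so the meaning of the sentence is built up compositionally from the meanings of its parts.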

Thursday, April 11, 2013

Lecture 5: syntax

Introduction to syntax. Context-free grammars and languages. Treebanks. Normal forms. Dependency grammars. Syntactic parsing: top-down and bottom-up. Structural ambiguity. Backtracking vs. dynamic programming for parsing. The CKY algorithm. The Earley algorithm. Probabilistic CFGs (PCFGs). PCFGs for disambiguation: the probabilistic CKY algorithm. PCFGs for language modeling.
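
To make the CKY idea concrete, here is a toy CKY recognizer for a grammar in Chomsky normal form, sketched in Python (the grammar and the example sentence are mine, not the ones used in class):

# Toy CKY recognizer for a CNF grammar.
from itertools import product

# CNF rules: binary A -> B C, and lexical A -> word.
binary = {("NP", "VP"): "S", ("Det", "N"): "NP", ("V", "NP"): "VP"}
lexical = {"the": {"Det"}, "dog": {"N"}, "cat": {"N"}, "chased": {"V"}}

def cky_recognize(words):
    n = len(words)
    # table[i][j] holds the nonterminals spanning words[i:j].
    table = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        table[i][i + 1] = set(lexical.get(w, ()))
    for span in range(2, n + 1):
        for i in range(0, n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for b, c in product(table[i][k], table[k][j]):
                    if (b, c) in binary:
                        table[i][j].add(binary[(b, c)])
    return "S" in table[0][n]

print(cky_recognize("the dog chased the cat".split()))   # True

The probabilistic version is the same dynamic program, except that each cell stores, for every nonterminal, the probability of its best derivation (rule probability times the probabilities of the two children) rather than a plain set.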

Friday, April 5, 2013

Lecture 4: part-of-speech tagging

Introduction to part-of-speech (POS) tagging. POS tagsets: the Penn Treebank tagset and the Google Universal Tagset. Rule-based POS tagging. Stochastic part-of-speech tagging. Hidden Markov models. Deleted interpolation. Linear and logistic regression: Maximum Entropy models. Transformation-based POS tagging. Handling out-of-vocabulary words.
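
A minimal sketch of Viterbi decoding for an HMM tagger, in Python (all probabilities below are invented for illustration; a real tagger estimates them from a treebank):

import math

states = ["DT", "NN", "VB"]
start_p = {"DT": 0.6, "NN": 0.3, "VB": 0.1}
trans_p = {"DT": {"DT": 0.05, "NN": 0.9, "VB": 0.05},
           "NN": {"DT": 0.1, "NN": 0.3, "VB": 0.6},
           "VB": {"DT": 0.5, "NN": 0.4, "VB": 0.1}}
emit_p = {"DT": {"the": 0.9, "dog": 0.05, "barks": 0.05},
          "NN": {"the": 0.05, "dog": 0.8, "barks": 0.15},
          "VB": {"the": 0.05, "dog": 0.15, "barks": 0.8}}

def viterbi(words):
    # V[t][s] = best log-probability of a tag sequence ending in s at time t.
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][words[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(words)):
        V.append({})
        back.append({})
        for s in states:
            prev, score = max(
                ((p, V[t - 1][p] + math.log(trans_p[p][s])) for p in states),
                key=lambda x: x[1])
            V[t][s] = score + math.log(emit_p[s][words[t]])
            back[t][s] = prev
    # Follow back-pointers from the best final state.
    last = max(V[-1], key=V[-1].get)
    tags = [last]
    for t in range(len(words) - 1, 0, -1):
        tags.append(back[t][tags[-1]])
    return list(reversed(tags))

print(viterbi("the dog barks".split()))   # ['DT', 'NN', 'VB']

A real tagger would smooth these probabilities (for example with deleted interpolation) and handle out-of-vocabulary words instead of assuming a closed vocabulary.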

Lecture 3: language models and smoothing

The third lecture was about language models: how important they are and how we can approximate real language with them. We discussed N-gram models (unigrams, bigrams, trigrams), together with their probability modeling and issues. We then covered perplexity and its close relationship with entropy, and introduced smoothing and interpolation techniques to deal with data sparsity.
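
A tiny illustration of a bigram model with add-one (Laplace) smoothing and the resulting perplexity, in Python (the two-sentence corpus is made up for the example):

from collections import Counter
import math

sentences = [["<s>", "i", "like", "nlp", "</s>"],
             ["<s>", "i", "like", "pizza", "</s>"]]
unigrams = Counter(w for s in sentences for w in s)
bigrams = Counter((s[i], s[i + 1]) for s in sentences for i in range(len(s) - 1))
V = len(unigrams)                          # vocabulary size

def p_laplace(w_prev, w):
    # Add-one smoothed bigram probability P(w | w_prev).
    return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + V)

def perplexity(sentence):
    logp = sum(math.log2(p_laplace(sentence[i], sentence[i + 1]))
               for i in range(len(sentence) - 1))
    return 2 ** (-logp / (len(sentence) - 1))

print(round(p_laplace("i", "like"), 3))                        # 0.375
print(round(perplexity(["<s>", "i", "like", "nlp", "</s>"]), 2))

Lower perplexity on held-out text means the model is less "surprised" by it, which is why perplexity is so closely tied to cross-entropy.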

Friday, March 15, 2013

Lecture 2: morphological analysis and language models

We continued our introduction to regular expressions in Perl, and introduced finite-state transducers for encoding the lexicon and orthographic rules. The second part of the lecture was about language models: we discussed their importance and how we can approximate real language with them, and introduced N-gram models (unigrams, bigrams, trigrams), together with their probability modeling and issues.



Thursday, March 7, 2013

Lecture 1: introduction and morphology (1)

We gave an introduction to the course and the field it covers, Natural Language Processing, focusing on the Turing Test as a tool to understand whether "machines can think". We also discussed the pitfalls of the test, including Searle's Chinese Room argument.

In the second part, we introduced words and morphemes. Before delving into morphology and morphological analysis, we introduced regular expressions as a powerful tool to deal with different forms of a word.
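
As a toy example of what regular expressions buy us here, the following pattern matches several inflected forms of a word at once (the lecture used Perl; this sketch uses Python's re module, and the pattern is mine):

import re

# Match "work", "works", "worked" or "working" as whole words.
pattern = re.compile(r"\bwork(s|ed|ing)?\b", re.IGNORECASE)
text = "She works hard, worked all night, and is still working."
print([m.group(0) for m in pattern.finditer(text)])   # ['works', 'worked', 'working']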

Homework 1: watch 2001: A Space Odyssey!
Homework 2: read The Hitchhiker's Guide to the Galaxy!

So, you see, you already have two homework assignments...

IMPORTANT: Each lecture will start around 12.55pm and end around 4pm (with a 10-minute break after about an hour and a half). No lectures on Fridays.

Monday, March 4, 2013

Final course schedule: Thursday 12.45-15.45

Phew! Done! The final course schedule is Thursday, 12.45-15.45, in Via Salaria 113, third floor, seminar room (aula seminari)! We will not start before 1pm, so if you are coming from Via Ariosto, take your time. We will discuss the 15-20 minute slack together!

Saturday, February 23, 2013

Please sign up!


The new Google group for the NLP course is now open:

http://groups.google.com/group/naviglinlp2013


IMPORTANT - PLEASE READ: When signing up, please provide the following in the "You can send additional information to the manager by filling in the text box below." text box:

First Name, Last Name, Email, Matricola (student ID number)

IMPORTANT 2 - PLEASE READ: I have received several emails from students of the Artificial Intelligence & Robotics M.Sc. degree, so I am changing the course schedule. To receive further communications about this, please sign up and fill in the Doodle poll with your course schedule preferences! This will also help me understand how many students will attend, and better organize the course, its exam and the project.

Saturday, February 16, 2013

Ready to start?

Are you ready to start? The course will most likely start on March 5 with exciting news and a new pragmatic structure (more information soon).