Wednesday, June 3, 2015

Lecture 11: multilingual POS tagging, Open Information Extraction and research in Rome!

Multilingual part-of-speech tagging. (Open) Information Extraction.

NLP research at Sapienza.

Monday, May 25, 2015

Lecture 10: semantic similarity and relatedness / Natural Language Generation

What is semantic relatedness? String-based similarity measures. Longest common substring/subsequence; n-gram overlap. Knowledge-based approaches: Lesk; Leacock & Chodorow; Wu & Palmer. Corpus-based approaches: Vector-space models, Explicit Semantic Analysis (ESA). Align, Disambiguate and Walk. Cross-level semantic similarity.
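
To make the string-based measures above concrete, here is a minimal Python sketch of longest common subsequence length and character n-gram overlap (a Dice coefficient); the function names and example strings are mine, not from the lecture.

```python
from collections import Counter

def lcs_length(a, b):
    """Length of the longest common subsequence of strings a and b (dynamic programming)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def ngram_dice(a, b, n=3):
    """Dice coefficient over character n-grams: 2*|A and B| / (|A| + |B|)."""
    grams_a = Counter(a[i:i + n] for i in range(len(a) - n + 1))
    grams_b = Counter(b[i:i + n] for i in range(len(b) - n + 1))
    overlap = sum((grams_a & grams_b).values())
    total = sum(grams_a.values()) + sum(grams_b.values())
    return 2 * overlap / total if total else 0.0

print(lcs_length("relatedness", "relativeness"))
print(ngram_dice("semantic similarity", "semantic relatedness"))
```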

Introduction to Natural Language Generation, by Prof. Michael Zock.

Friday, May 15, 2015

Lecture 9: statistical machine translation

Introduction to Machine Translation. Rule-based vs. Statistical MT. Statistical MT: the noisy channel model. The language model and the translation model. The phrase-based translation model. Learning the model from training data. Phrase-translation tables. Parallel corpora. Extracting phrases from word alignments. Word alignments.

IBM models for word alignment. Many-to-one and many-to-many alignments. IBM model 1 and the HMM alignment model. Training the alignment models: the Expectation Maximization (EM) algorithm. Symmetrizing alignments for phrase-based MT: symmetrizing by intersection; the growing heuristic. Calculating the phrase translation table. Decoding: stack decoding. Evaluation of MT systems. BLEU.
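
To make EM training for IBM Model 1 concrete, here is a toy Python sketch that estimates word-translation probabilities t(f|e) from a three-sentence parallel corpus; the corpus is invented and the NULL word is omitted for brevity, so this is an illustration rather than a full implementation.

```python
from collections import defaultdict

# Toy parallel corpus: (foreign sentence, English sentence) pairs.
corpus = [
    ("la casa".split(), "the house".split()),
    ("la casa verde".split(), "the green house".split()),
    ("casa".split(), "house".split()),
]

# Uniform initialization of t(f|e) over the foreign vocabulary.
f_vocab = {f for fs, _ in corpus for f in fs}
t = defaultdict(lambda: 1.0 / len(f_vocab))

for _ in range(10):                      # a few EM iterations
    count = defaultdict(float)           # expected counts c(f, e)
    total = defaultdict(float)           # expected counts c(e)
    for fs, es in corpus:
        for f in fs:
            z = sum(t[(f, e)] for e in es)        # normalization over possible alignments
            for e in es:
                delta = t[(f, e)] / z             # E-step: posterior of aligning f to e
                count[(f, e)] += delta
                total[e] += delta
    for (f, e), c in count.items():               # M-step: re-estimate t(f|e)
        t[(f, e)] = c / total[e]

print(round(t[("casa", "house")], 3), round(t[("verde", "green")], 3))
```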

Presentation of the NLP projects.

Friday, May 8, 2015

Lecture 8: Neural networks, word embeddings and deep learning

Motivation. The perceptron. Input encoding, sum and activation functions; objective function. Linearity of the perceptron. Neural networks. Training. Backpropagation. Connection to Maximum Entropy. Connection to language. Vector representations. NN for the bigram language model. Word2vec: CBOW and skip-gram. Word embeddings. Deep learning. Language modeling with NN. The big picture.
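
As a small illustration of the perceptron update discussed at the start of the lecture, here is a minimal Python sketch that learns the (linearly separable) AND function; the data, learning rate and epoch count are chosen just for the example.

```python
# Minimal perceptron learning the AND function (toy illustration).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    # Weighted sum of the inputs followed by a step activation.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                      # a few epochs are enough for this toy task
    for x, y in data:
        error = y - predict(x)           # perceptron rule: w += lr * error * x
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])     # expected: [0, 0, 0, 1]
```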

Friday, April 24, 2015

Lecture 7: Word Sense Disambiguation and Entity Linking

Introduction to Word Sense Disambiguation (WSD). Motivation. The typical WSD framework. Lexical sample vs. all-words. WSD viewed as lexical substitution and cross-lingual lexical substitution. Knowledge resources. Representation of context: flat and structured representations. Main approaches to WSD: Supervised, unsupervised and knowledge-based WSD. Two important dimensions: supervision and knowledge. Supervised Word Sense Disambiguation: pros and cons. Vector representation of context. Main supervised disambiguation paradigms: decision trees, neural networks, instance-based learning, Support Vector Machines. Unsupervised Word Sense Disambiguation: Word Sense Induction. Context-based clustering. Co-occurrence graphs: curvature clustering, HyperLex. Knowledge-based Word Sense Disambiguation. The Lesk and Extended Lesk algorithm. Structural approaches: similarity measures and graph algorithms. Conceptual density. Structural Semantic Interconnections. Evaluation: precision, recall, F1, accuracy. Baselines. Entity Linking. Main approaches. Babelfy.
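
As an illustration of the simplified Lesk idea (pick the sense whose gloss overlaps most with the context), here is a Python sketch using an invented two-sense mini-inventory for "bank" and a small stopword list; it only shows the shape of the algorithm, not a real sense inventory.

```python
# Toy simplified Lesk: choose the sense whose gloss shares the most content words with the context.
SENSES = {  # invented mini sense inventory for "bank"
    "bank#1 (financial institution)": "a financial institution that accepts deposits and lends money",
    "bank#2 (river bank)": "the sloping land beside a body of water such as a river",
}
STOPWORDS = {"a", "an", "the", "of", "at", "he", "they", "and", "that", "such", "as"}

def content_words(text):
    return set(text.lower().split()) - STOPWORDS

def simplified_lesk(context, senses=SENSES):
    ctx = content_words(context)
    # Pick the sense whose gloss has the largest overlap with the context.
    return max(senses, key=lambda s: len(ctx & content_words(senses[s])))

print(simplified_lesk("he deposited money at the bank yesterday"))   # financial sense
print(simplified_lesk("they walked along the river bank"))           # river sense
```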


Friday, April 10, 2015

Lecture 6: semantics

Introduction to computational semantics. Syntax-driven semantic analysis. Semantic attachments. First-Order Logic. Lambda notation and lambda calculus for semantic representation. Lexicon, lemmas and word forms. Word senses: monosemy vs. polysemy. Special kinds of polysemy. Computational sense representations: enumeration vs. generation. Graded word sense assignment. Encoding word senses: paper dictionaries, thesauri, machine-readable dictionaries, computational lexicons. WordNet. Wordnets in other languages. BabelNet.
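
To see polysemy in a computational lexicon first-hand, WordNet can be inspected through NLTK (assuming the wordnet data has been downloaded); a quick sketch:

```python
import nltk
# nltk.download("wordnet")   # one-time download of the WordNet data
from nltk.corpus import wordnet as wn

# A polysemous word has many synsets, one per sense; a monosemous word has exactly one.
for synset in wn.synsets("bank")[:3]:
    print(synset.name(), "-", synset.definition())

# A synset groups the word forms (lemmas) that share that sense.
print(wn.synsets("bank")[0].lemma_names())
```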

Friday, March 27, 2015

Lecture 5: syntax

Introduction to syntax. Context-free grammars and languages. Treebanks. Normal forms. Dependency grammars. Syntactic parsing: top-down and bottom-up. Structural ambiguity. Backtracking vs. dynamic programming for parsing. The CKY algorithm. The Earley algorithm. Probabilistic CFGs (PCFGs). PCFGs for disambiguation: the probabilistic CKY algorithm. PCFGs for language modeling.
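
Here is a minimal CKY recognizer over a toy grammar in Chomsky Normal Form, to make the dynamic-programming idea concrete; the grammar and sentence are invented for the example.

```python
# Toy CNF grammar: (A, (B, C)) binary rules and (A, word) lexical rules.
binary = {("S", ("NP", "VP")), ("VP", ("V", "NP")), ("NP", ("Det", "N"))}
lexical = {("Det", "the"), ("N", "dog"), ("N", "cat"), ("V", "chased")}

def cky_recognize(words):
    n = len(words)
    # chart[i][j] holds the nonterminals that can span words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = {A for A, word in lexical if word == w}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):                     # split point
                for A, (B, C) in binary:
                    if B in chart[i][k] and C in chart[k][j]:
                        chart[i][j].add(A)
    return "S" in chart[0][n]

print(cky_recognize("the dog chased the cat".split()))   # True
```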

Friday, March 20, 2015

Lecture 4: Part-of-Speech Tagging

Introduction to part-of-speech (POS) tagging. POS tagsets: the Penn Treebank tagset and the Google Universal Tagset. Rule-based POS tagging. Stochastic part-of-speech tagging. Hidden Markov Models. Deleted interpolation. Linear and logistic regression: Maximum Entropy models. Transformation-based POS tagging. Handling out-of-vocabulary words.
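
To make HMM tagging concrete, here is a toy Viterbi decoder in Python; the tagset, transition and emission probabilities are all invented for illustration.

```python
# Toy HMM POS tagger: Viterbi decoding over invented probabilities.
states = ["DT", "NN", "VB"]
start_p = {"DT": 0.6, "NN": 0.3, "VB": 0.1}
trans_p = {"DT": {"DT": 0.05, "NN": 0.9, "VB": 0.05},
           "NN": {"DT": 0.1, "NN": 0.3, "VB": 0.6},
           "VB": {"DT": 0.5, "NN": 0.4, "VB": 0.1}}
emit_p = {"DT": {"the": 0.9, "dog": 0.0, "barks": 0.0},
          "NN": {"the": 0.0, "dog": 0.7, "barks": 0.3},
          "VB": {"the": 0.0, "dog": 0.1, "barks": 0.9}}

def viterbi(words):
    # V[t][s] = probability of the best tag sequence ending in state s at position t
    V = [{s: start_p[s] * emit_p[s][words[0]] for s in states}]
    back = [{}]
    for t in range(1, len(words)):
        V.append({}); back.append({})
        for s in states:
            prev, score = max(((p, V[t - 1][p] * trans_p[p][s]) for p in states),
                              key=lambda x: x[1])
            V[t][s] = score * emit_p[s][words[t]]
            back[t][s] = prev
    # Follow back-pointers from the best final state.
    best = max(V[-1], key=V[-1].get)
    tags = [best]
    for t in range(len(words) - 1, 0, -1):
        tags.append(back[t][tags[-1]])
    return list(reversed(tags))

print(viterbi("the dog barks".split()))   # expected: ['DT', 'NN', 'VB']
```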

Saturday, March 14, 2015

Lecture 3: language modeling

We introduced N-gram models (unigrams, bigrams, trigrams), how their probabilities are estimated, and the issues that arise. We discussed perplexity and its close relationship with entropy, and introduced smoothing and interpolation techniques to deal with data sparsity.
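
As a companion to the lecture, here is a tiny bigram language model with add-one (Laplace) smoothing and a perplexity computation, sketched in Python on an invented two-sentence corpus.

```python
import math
from collections import Counter

# Tiny training corpus with sentence boundary markers.
corpus = [["<s>", "i", "like", "nlp", "</s>"],
          ["<s>", "i", "like", "cats", "</s>"]]

unigrams = Counter(w for sent in corpus for w in sent)
bigrams = Counter((sent[i], sent[i + 1]) for sent in corpus for i in range(len(sent) - 1))
V = len(unigrams)                       # vocabulary size for add-one smoothing

def prob(w_prev, w):
    # Add-one smoothed bigram probability P(w | w_prev).
    return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + V)

def perplexity(sentence):
    logp = sum(math.log(prob(sentence[i], sentence[i + 1])) for i in range(len(sentence) - 1))
    return math.exp(-logp / (len(sentence) - 1))

print(perplexity(["<s>", "i", "like", "nlp", "</s>"]))
print(perplexity(["<s>", "i", "like", "dogs", "</s>"]))   # unseen word -> higher perplexity
```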

We also discussed homework 1a in detail (see the slides on the class group).

Friday, March 6, 2015

Lecture 2: morphological analysis + homework 1b

We introduced words and morphemes. Before delving into morphology and morphological analysis, we introduced regular expressions as a powerful tool for dealing with the different forms of a word. We also introduced finite-state transducers for encoding the lexicon and orthographic rules. We assigned homework 1b on Wiktionary-based morphological analysis (which covers two of the three homeworks).
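
A small regular-expression illustration of matching different forms of a word, in the spirit of the lecture; the pattern and example sentence are invented.

```python
import re

# Match simple inflected forms of "analyze": analyzes, analyzed, analyzing, analyze.
pattern = re.compile(r"\banalyz(?:e|es|ed|ing)\b")

text = "She analyzed the corpus; now she analyzes new data and keeps analyzing."
print(pattern.findall(text))   # ['analyzed', 'analyzes', 'analyzing']
```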


Sunday, March 1, 2015

Lecture 1: introduction

We gave an introduction to the course and its field, Natural Language Processing, with a focus on the Turing Test as a tool to understand whether "machines can think". We also discussed the pitfalls of the test, including Searle's Chinese Room argument. We then provided examples of tasks in desperate need of accurate NLP: computer-assisted and machine translation, text summarization, personal assistants, text understanding, machine reading, question answering, and information retrieval.


Friday, February 13, 2015

SIGN UP NOW!

IMPORTANT: The 2015 class schedule is Fridays, 2:30pm-5:45pm. BUT: we will discuss tomorrow whether we can move it to 4:00pm-7:00pm.
Please sign up for the NLP class!