Friday, May 19, 2017

Lecture 12: Entity linking; semantic similarity; sense embeddings; semantic parsing; project presentation

Entity Linking. Main approaches. AIDA, TagMe, Wikifier, DBpedia Spotlight, Babelfy. The MASC annotated corpus. Semantic similarity. Sense embeddings. Semantic parsing and Abstract Meaning Representations. Project presentation.
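
For reference, a minimal sketch of REST-based entity linking in the style of DBpedia Spotlight. The endpoint URL, parameters and JSON keys below reflect the public demo service as an assumption and may differ from the current API:

```python
import requests

# Assumed public DBpedia Spotlight demo endpoint; check the official
# documentation before relying on it.
SPOTLIGHT_URL = "https://api.dbpedia-spotlight.org/en/annotate"

text = "Napoleon was defeated at Waterloo by Wellington."

response = requests.get(
    SPOTLIGHT_URL,
    params={"text": text, "confidence": 0.5},
    headers={"Accept": "application/json"},
)
response.raise_for_status()

# Each resource pairs a surface-form mention with a linked DBpedia URI.
for resource in response.json().get("Resources", []):
    print(resource["@surfaceForm"], "->", resource["@URI"])
```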

Friday, May 12, 2017

Lecture 11: Word Sense Disambiguation

Introduction to Word Sense Disambiguation (WSD). Motivation. The typical WSD framework. Lexical sample vs. all-words. WSD viewed as lexical substitution and cross-lingual lexical substitution. Knowledge resources. Representation of context: flat and structured representations. Main approaches to WSD: supervised, unsupervised and knowledge-based WSD. Two important dimensions: supervision and knowledge. Supervised Word Sense Disambiguation: pros and cons. Vector representation of context. Main supervised disambiguation paradigms: decision trees, neural networks, instance-based learning, Support Vector Machines, IMS with embeddings, neural approaches to WSD. Unsupervised Word Sense Disambiguation: Word Sense Induction. Context-based clustering. Co-occurrence graphs: curvature clustering, HyperLex. Knowledge-based Word Sense Disambiguation. The Lesk and Extended Lesk algorithms. Structural approaches: similarity measures and graph algorithms. Conceptual density. Structural Semantic Interconnections. Evaluation: precision, recall, F1, accuracy. Baselines. Entity Linking.
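
For reference, a minimal sketch of the simplified Lesk algorithm, assuming NLTK's WordNet interface is installed; the sense whose gloss and examples share the most words with the context wins:

```python
from nltk.corpus import wordnet as wn

def simplified_lesk(word, context_words):
    """Pick the WordNet sense of `word` whose gloss (plus example
    sentences) overlaps most with the surrounding context words."""
    context = set(w.lower() for w in context_words)
    best_sense, best_overlap = None, -1
    for sense in wn.synsets(word):
        signature = set(sense.definition().lower().split())
        for example in sense.examples():
            signature |= set(example.lower().split())
        overlap = len(signature & context)
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

# Example: disambiguating "bank" in a financial context.
print(simplified_lesk("bank", "I deposited money into my savings account".split()))
```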

Sunday, May 7, 2017

Lecture 10: computational semantics (2/2)

Encoding word senses: paper dictionaries, thesauri, machine-readable dictionaries, computational lexicons. WordNet. Wordnets in other languages. Problems of wordnets. BabelNet. Presentation of the third homework: question-answer pair extraction.
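
As a quick pointer, WordNet can be browsed as a computational lexicon through NLTK (assuming the wordnet corpus data has been downloaded); the same interface also exposes the Open Multilingual WordNet for other languages:

```python
from nltk.corpus import wordnet as wn

# Each sense of a lemma is a synset with a gloss and lexical relations.
for synset in wn.synsets("plant"):
    print(synset.name(), "-", synset.definition())

# Structural relations such as hypernymy organize synsets into a taxonomy.
dog = wn.synset("dog.n.01")
print(dog.hypernyms())    # e.g. the 'canine' synset
print(dog.lemma_names())  # lemmas attached to this synset
```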



Friday, April 28, 2017

Lecture 9: syntactic parsing (2/2); intro to computational semantics (1/2)

The Earley algorithm. Probabilistic CFGs (PCFGs). PCFGs for disambiguation: the probabilistic CKY algorithm. PCFGs for language modeling. Introduction to computational semantics. Syntax-driven semantic analysis. Semantic attachments. First-Order Logic. Lambda notation and lambda calculus for semantic representation. Lexicon, lemmas and word forms. Word senses: monosemy vs. polysemy. Special kinds of polysemy. Computational sense representations: enumeration vs. generation. Graded word sense assignment.
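
For reference, a compact sketch of probabilistic CKY over a toy PCFG in Chomsky normal form; the grammar and probabilities are invented for illustration, and each chart cell keeps the best score for a non-terminal spanning that substring:

```python
from collections import defaultdict

# Toy PCFG in Chomsky normal form: rule -> probability (made-up numbers).
lexical = {("D", "the"): 1.0,
           ("N", "people"): 0.6, ("V", "people"): 0.4,
           ("N", "fish"): 0.4, ("V", "fish"): 0.6}
binary = {("NP", ("D", "N")): 1.0,
          ("VP", ("V", "NP")): 1.0,
          ("S", ("NP", "VP")): 1.0}

def pcky(words):
    n = len(words)
    # chart[i][j] maps a non-terminal to its best probability over words[i:j].
    chart = [[defaultdict(float) for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for (tag, word), p in lexical.items():
            if word == w and p > chart[i][i + 1][tag]:
                chart[i][i + 1][tag] = p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (lhs, (b, c)), p in binary.items():
                    score = p * chart[i][k][b] * chart[k][j][c]
                    if score > chart[i][j][lhs]:
                        chart[i][j][lhs] = score
    return chart[0][n].get("S", 0.0)

print(pcky("the people fish the fish".split()))  # best-parse probability for S
```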

Friday, April 21, 2017

Lecture 8: syntactic parsing (1/2)

Introduction to syntax. Context-free grammars and languages. Treebanks. Normal forms. Dependency grammars. Syntactic parsing: top-down and bottom-up. Structural ambiguity. Backtracking vs. dynamic programming for parsing. The CKY algorithm. Neural transition-based dependency parsing.
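
As a small illustration of structural ambiguity, here is a toy CFG with a classic PP-attachment ambiguity parsed with NLTK's chart parser (the grammar is invented for illustration):

```python
import nltk

# Toy grammar: the PP "with the telescope" can attach to the VP or the NP.
grammar = nltk.CFG.fromstring("""
S   -> NP VP
NP  -> Det N | NP PP | 'I'
VP  -> V NP | VP PP
PP  -> P NP
Det -> 'the'
N   -> 'man' | 'telescope'
V   -> 'saw'
P   -> 'with'
""")

parser = nltk.ChartParser(grammar)
sentence = "I saw the man with the telescope".split()

# Two trees are produced: verb attachment vs. noun attachment.
for tree in parser.parse(sentence):
    print(tree)
```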

Friday, April 7, 2017

Lecture 7: part-of-speech tagging

Introduction to part-of-speech (POS) tagging. POS tagsets: the Penn Treebank tagset and the Google Universal Tagset. Rule-based POS tagging. Stochastic part-of-speech tagging. Hidden Markov Models. Deleted interpolation. Linear and logistic regression: Maximum Entropy models. Transformation-based POS tagging. Handling out-of-vocabulary words. The Stanford POS tagger. Neural POS tagging with bidirectional LSTMs. Presentation of homework 2.
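
For reference, a small sketch of Viterbi decoding for HMM tagging over a toy tagset; the transition and emission probabilities are invented for illustration:

```python
# Toy HMM: P(tag | previous tag) and P(word | tag); numbers are made up.
tags = ["DT", "NN", "VB"]
start = {"DT": 0.6, "NN": 0.3, "VB": 0.1}
trans = {"DT": {"DT": 0.05, "NN": 0.9, "VB": 0.05},
         "NN": {"DT": 0.1, "NN": 0.3, "VB": 0.6},
         "VB": {"DT": 0.5, "NN": 0.4, "VB": 0.1}}
emit = {"DT": {"the": 0.9, "dog": 0.0, "barks": 0.0},
        "NN": {"the": 0.0, "dog": 0.8, "barks": 0.2},
        "VB": {"the": 0.0, "dog": 0.1, "barks": 0.9}}

def viterbi(words):
    # best[t][tag]: probability of the best tag sequence ending in `tag` at position t.
    best = [{t: start[t] * emit[t][words[0]] for t in tags}]
    back = [{}]
    for w in words[1:]:
        scores, pointers = {}, {}
        for t in tags:
            prev, score = max(((p, best[-1][p] * trans[p][t]) for p in tags),
                              key=lambda x: x[1])
            scores[t] = score * emit[t][w]
            pointers[t] = prev
        best.append(scores)
        back.append(pointers)
    # Follow backpointers from the best final tag.
    last = max(tags, key=lambda t: best[-1][t])
    path = [last]
    for pointers in reversed(back[1:]):
        path.append(pointers[path[-1]])
    return list(reversed(path))

print(viterbi(["the", "dog", "barks"]))  # expected: ['DT', 'NN', 'VB']
```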


Friday, March 31, 2017

Lecture 6: deep learning; intro to part-of-speech tagging

Recurrent Neural Networks and Long Short-Term Memory networks. Practical session on character-based LSTMs with Keras. Introduction to part-of-speech tagging.
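
In the spirit of the practical session, a minimal sketch of a character-level next-character LSTM in Keras; the corpus, window size and hyperparameters are placeholders rather than the ones used in class:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Placeholder corpus; in practice this would be a real text file.
text = "hello world, hello keras"
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

# Build one-hot windows of `maxlen` characters predicting the next character.
maxlen = 5
X, y = [], []
for i in range(len(text) - maxlen):
    X.append([char_to_idx[c] for c in text[i:i + maxlen]])
    y.append(char_to_idx[text[i + maxlen]])
X_onehot = np.zeros((len(X), maxlen, len(chars)), dtype=np.float32)
for i, seq in enumerate(X):
    for t, idx in enumerate(seq):
        X_onehot[i, t, idx] = 1.0
y_onehot = np.zeros((len(y), len(chars)), dtype=np.float32)
for i, idx in enumerate(y):
    y_onehot[i, idx] = 1.0

# A single LSTM layer followed by a softmax over the character vocabulary.
model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
model.add(Dense(len(chars), activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X_onehot, y_onehot, batch_size=32, epochs=1)
```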


Monday, March 27, 2017

Lecture 5: practical session on Keras; more on NNs for NLP; word embeddings

Practical session on Keras. More on NNs for NLP: hierarchical softmax; negative sampling. Vector representations. Word2vec. Word embeddings and their properties.
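
For reference, a short sketch of training skip-gram word2vec with negative sampling via gensim (assuming gensim is available; argument names vary across versions, e.g. size later became vector_size):

```python
from gensim.models import Word2Vec

# Tiny placeholder corpus: a list of tokenized sentences.
sentences = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "dogs and cats are animals".split(),
]

# sg=1 selects skip-gram; negative=5 uses negative sampling with 5 noise words.
model = Word2Vec(sentences, size=50, window=2, min_count=1, sg=1, negative=5)

# Embeddings live in model.wv; nearby vectors should reflect similar contexts.
print(model.wv.most_similar("cat", topn=3))
print(model.wv["dog"][:5])  # first few dimensions of the "dog" vector
```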


Friday, March 17, 2017

Lecture 4: language modeling (2); neural networks and NLP

We discussed perplexity and its close relationship with entropy, and introduced smoothing and interpolation techniques to deal with the issue of data sparsity. Practical session on language modeling with Python and the Berkeley LM toolkit.
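
For reference, a compact sketch of an add-one-smoothed bigram model and its perplexity on a held-out sentence, tying together smoothing and the entropy/perplexity relationship (toy corpus, not the one from the practical session):

```python
import math
from collections import Counter

# Toy training corpus with sentence boundary markers.
train = [["<s>", "the", "cat", "sat", "</s>"],
         ["<s>", "the", "dog", "sat", "</s>"]]
unigrams = Counter(w for sent in train for w in sent)
bigrams = Counter((sent[i], sent[i + 1]) for sent in train for i in range(len(sent) - 1))
V = len(unigrams)

def prob(prev, word):
    # Add-one (Laplace) smoothed bigram probability.
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + V)

def perplexity(sentence):
    # Perplexity = 2 ** cross-entropy, averaged over the bigram transitions.
    log_prob = sum(math.log2(prob(sentence[i], sentence[i + 1]))
                   for i in range(len(sentence) - 1))
    return 2 ** (-log_prob / (len(sentence) - 1))

print(perplexity(["<s>", "the", "cat", "sat", "</s>"]))
print(perplexity(["<s>", "the", "dog", "barks", "</s>"]))  # unseen bigrams -> higher perplexity
```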

Friday, March 10, 2017

Lecture 3: morphological analysis: practical session; homework 1; language modeling (1)

We had a practical session on morphological analysis in Python and Java. We reviewed basic probability concepts and introduced N-gram models (unigrams, bigrams, trigrams), together with their probability modeling and related issues.
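
As a small pointer for the practical session, NLTK's stemmer and WordNet lemmatizer illustrate two flavours of morphological analysis (assuming the relevant NLTK data is installed; this is not the exact class exercise):

```python
from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for word in ["running", "flies", "studies", "better"]:
    # Stemming chops suffixes heuristically; lemmatization maps to a dictionary form.
    print(word,
          "-> stem:", stemmer.stem(word),
          "| lemma (verb):", lemmatizer.lemmatize(word, pos="v"))
```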

We also discussed homework 1 (see post on the class group).