CS/EE 559/659 Machine Learning
This course will provide theoretical foundations and practical experience in statistical learning methods, or machine learning. Both supervised and unsupervised learning methods will be covered. Specific topics include linear models for regression and classification, decision trees, neural networks, support vector machines, kernel methods, Gaussian processes, dimension reduction, density estimation, and clustering.
Prerequisites: MATH 530/630 Probability & Statistical Inference for Scientists and Engineers (or an equivalent probability and statistics course), calculus, linear algebra, and proficiency in at least one high-level programming language.
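As a taste of the material, here is a minimal sketch of the first topic on the list, a linear model for regression fit by ordinary least squares in NumPy; the synthetic data and coefficients are invented for illustration and are not course material.

```python
import numpy as np

# Toy data: fit y = w0 + w1 * x by ordinary least squares.
# The true coefficients (2.0 and 3.0) are invented for this sketch.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 3.0 * x + rng.normal(scale=1.0, size=50)

# Design matrix with a bias column; solve the least-squares problem
# directly rather than forming the normal equations by hand.
X = np.column_stack([np.ones_like(x), x])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"estimated intercept {w[0]:.2f}, slope {w[1]:.2f}")
```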
CS/EE 555/655 Analyzing Sequences
Many types of data occur sequentially. A conversation between two people can be thought of as a sequence of speech turns; a written sentence, as a sequence of words and punctuation symbols; and a spoken sentence, as a sequence of phonemes. Time-series data (weather observations, stock tickers, etc.) is inherently sequential, and, of course, molecular biology is replete with examples of sequential data (sequences of nucleotide bases in DNA and RNA, amino acids in protein strands, etc.).
This course, Analyzing Sequences, will concern itself with methods for analyzing these kinds of data, with a particular emphasis on sequences derived from linguistic data. Topics will include:
- Applications of finite state automata, particularly to language modeling;
- Discrete and continuous Hidden Markov Models, in the context of tagging (see the sketch after this list);
- Gaussian Mixture Models & the Expectation-Maximization algorithm;
- Conditional Random Fields;
- Suffix Arrays & Trees/Tries;
- Sequence alignment & matching.
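As a taste of the HMM item above, here is a minimal Viterbi decoder for a discrete hidden Markov model, sketched in NumPy; the toy tagset, vocabulary, and probabilities are invented for illustration and are not course material.

```python
import numpy as np

# Minimal Viterbi decoding for a discrete HMM (toy part-of-speech tagger).
# All states, words, and probabilities below are invented for this sketch.
states = ["DET", "NOUN", "VERB"]
obs_vocab = {"the": 0, "dog": 1, "barks": 2}

start = np.log(np.array([0.6, 0.3, 0.1]))    # P(state_0)
trans = np.log(np.array([[0.1, 0.8, 0.1],     # P(state_t | state_{t-1})
                         [0.1, 0.2, 0.7],
                         [0.4, 0.5, 0.1]]))
emit = np.log(np.array([[0.9, 0.05, 0.05],    # P(word | state)
                        [0.1, 0.7, 0.2],
                        [0.1, 0.2, 0.7]]))

def viterbi(words):
    obs = [obs_vocab[w] for w in words]
    T, N = len(obs), len(states)
    # score[t, j]: best log-probability of any path ending in state j at time t.
    score = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    score[0] = start + emit[:, obs[0]]
    for t in range(1, T):
        for j in range(N):
            cand = score[t - 1] + trans[:, j]
            back[t, j] = np.argmax(cand)
            score[t, j] = cand[back[t, j]] + emit[j, obs[t]]
    # Follow back-pointers from the best final state.
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [states[i] for i in reversed(path)]

print(viterbi(["the", "dog", "barks"]))  # ['DET', 'NOUN', 'VERB']
```

Working in log space, as above, avoids numerical underflow on longer sequences.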
While we will focus primarily on written language, we will also occasionally explore problems from other areas (such as speech recognition).
CS 560/660 Knowledge Representation and Reasoning
This course surveys the foundations and applications of symbolic approaches to artificial intelligence. Knowledge is expressed declaratively in a formalism, and a formal semantics specifies the conditions under which a formula is true. A number of knowledge representation formalisms are introduced that allow an agent to reach new conclusions, make consistent assumptions, plan actions, or reason about belief. For each formalism, a reasoning procedure is given and shown to produce only results that are correct according to the formal semantics. These reasoning procedures are at the heart of building intelligent agent-based systems. The theory in this course is balanced by implementing the reasoning procedures and building working programs in the logic-based programming language Prolog. The material in this course is also useful for expressing semantics in natural language processing.
Course Website
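The course itself works in Prolog; as a rough illustration of the declarative style in Python, here is a minimal forward-chaining reasoning procedure over propositional Horn clauses, with an invented knowledge base. Forward chaining is sound for this fragment: it derives only facts that are true under the formal semantics whenever the rules and initial facts are.

```python
# Minimal forward chaining over propositional Horn clauses.
# Each rule is (body, head): if every atom in `body` is known, conclude `head`.
# The rules and facts below are invented for illustration.
rules = [
    ({"bird", "not_abnormal"}, "flies"),
    ({"penguin"}, "bird"),
    ({"robin"}, "bird"),
    ({"robin"}, "not_abnormal"),
]
facts = {"robin"}

# Repeatedly apply every rule whose body is satisfied until nothing new is derived.
changed = True
while changed:
    changed = False
    for body, head in rules:
        if body <= facts and head not in facts:  # subset test: body entailed so far
            facts.add(head)
            changed = True

print(sorted(facts))  # ['bird', 'flies', 'not_abnormal', 'robin']
```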
CS/EE 623 Deep Learning
This course covers a number of topics in machine learning. Topics include neural networks and the multi-layer perceptron; sampling techniques such as Gibbs sampling and Metropolis-Hastings; learning energy-based models such as restricted Boltzmann machines (RBMs); an overview of optimization techniques; and sparse autoencoders. The final topic is deep neural networks (DNNs), which have recently been demonstrated to outperform other machine learning techniques on a variety of tasks ranging from speech recognition and natural language processing to computer vision. In fact, the earlier topics were purposely chosen to cover all of the background material needed for DNNs. Students will learn how to train DNNs effectively through unsupervised and supervised techniques, enabling them to employ DNNs in their own research problems. The course will also draw from applications in speech and language processing. Recommended background for this course includes programming proficiency in Python or MATLAB and working knowledge of calculus, linear algebra, and probability theory.
Prerequisites: MATH 530/630 Probability & Statistical Inference for Scientists and Engineers, and CS/EE 559/659 Machine Learning.
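To make the first topic concrete, here is a minimal multi-layer perceptron trained by backpropagation on the XOR problem, sketched in NumPy; the architecture, hyperparameters, and data are arbitrary illustrative choices, not course material.

```python
import numpy as np

# A two-layer perceptron trained by backpropagation on XOR.
# Hidden width, learning rate, and step count are arbitrary for this sketch.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass for cross-entropy loss with sigmoid outputs:
    # the output-layer error simplifies to (p - y).
    dp = (p - y) / len(X)
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0)
    dh = dp @ W2.T * h * (1 - h)  # chain rule through the hidden sigmoid
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    # Gradient-descent update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(np.round(p.ravel(), 2))  # should approach [0, 1, 1, 0]
```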