Course Summary
This graduate-level course combines lectures with paper readings. It covers several foundational topics in learning theory and explores how these ideas inspire and connect with modern research in machine learning and AI. Topics include PAC learning, uncertainty quantification, no-regret online learning, game theory, reinforcement learning theory, and the theory of in-context learning.
Administrative Information
Lectures: Mon, Wed, 16:40-17:55
Location: inc-Rm FB007
Instructor: Zhun Deng
Teaching Assistant: Xiaowei Yin
Office hours: 3-4pm ET every Tuesday
Location: Common area at SN Hall (temporary)
Requirements: This is a graduate-level course that requires some mathematical background.
Required background: probability, discrete math, calculus, analysis, linear algebra, algorithms and data structures
Forms
1. Groups for final projects
2. Paper list
3. Paper presentation registration
Grading and Collaboration
Grading: Attendance (15%), paper presentation (35%), and a final course project (50%).
Collaboration: Collaboration on presentations and course projects is allowed.
Textbook and Readings
1. Understanding Machine Learning: From Theory to Algorithms, by Shai Shalev-Shwartz and Shai Ben-David, available here
2. Online Learning and Online Convex Optimization, by Shai Shalev-Shwartz, available here
3. (Draft) Learning in Games (and Games in Learning), by Aaron Roth, available here
4. (Draft) Reinforcement Learning: Theory and Algorithms, by Alekh Agarwal, Nan Jiang, Sham M. Kakade, and Wen Sun, available here
5. Foundations of Machine Learning, by Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar
6. High-Dimensional Statistics, by Martin Wainwright
Additional papers for reading will be posted as the course progresses.
Schedule
Lecture 1 (08/18): Course introduction + basic tail bounds (related reading: Book 6, Chapters 2-3)
Lecture 2 (08/20): Introduction to PAC learning (related reading: TBD)
Lecture 3 (08/25): Uniform convergence and VC dimension
TBD