PENN CIS 625, FALL 2020: THEORY OF MACHINE LEARNING
Prof. Michael Kearns
Tuesdays and Thursdays 10:30AM-Noon
Location: Virtual lectures via Zoom at this URL. All students are strongly encouraged to attend lectures "live" and with video on to increase engagement and interaction. However, all lectures will be recorded and notes provided to accommodate those in other time zones.
Office Hours: Prof. Kearns will hold office hours from noon to 1PM after each Tuesday's lecture at this URL.
URL for this page:
Previous incarnations of this course:
www.cis.upenn.edu/~mkearns/teaching/COLT/colt15.html (with Grigory Yaroslavtsev)
www.cis.upenn.edu/~mkearns/teaching/COLT/colt12.html (with Jake Abernethy)
www.cis.upenn.edu/~mkearns/teaching/COLT/colt08.html (with Koby Crammer)
This course is an introduction to the theory of machine learning, and provides mathematical, algorithmic, complexity-theoretic, and probabilistic/statistical foundations for modern machine learning and related topics.
The first part of the course will follow portions of An Introduction to Computational Learning Theory, by M. Kearns and U. Vazirani (MIT Press). We will cover perhaps 6 or 7 of the chapters in K&V over (approximately) the first half of the course, often supplementing with additional readings and materials. Copies of K&V will be available at the Penn bookstore; I'm also looking into providing electronic access, as I do for the first chapter in the schedule below. The second portion of the course will focus on a number of models and topics in learning theory and related areas not covered in K&V.
The course will give a broad overview of the kinds of theoretical problems and techniques typically studied and used in machine learning, and provide a basic arsenal of powerful mathematical tools for analyzing machine learning problems.
Topics likely to be covered include:
The PAC learning model
Consistency, compression, and learning
The VC dimension and uniform convergence
Learning with classification noise and the statistical query model
Computational hardness of learning
Boosting
No-regret learning, multiplicative weights, and connections to game theory
Fairness in machine learning
COURSE FORMAT, REQUIREMENTS, AND PREREQUISITES
Much of the course will be in fairly traditional "chalk talk" lecture format (virtually on Zoom), but with ample opportunity for discussion, participation, and critique. The course will meet Tuesdays and Thursdays from 10:30 to noon, with the first meeting on Tuesday September 1.
The course will involve advanced mathematical material, will be taught at the doctoral level, and will examine detailed formal proofs throughout. While there are no specific formal prerequisites, background or coursework in algorithms, complexity theory, discrete math, combinatorics, convex optimization, probability theory, and statistics will prove helpful, as will "mathematical maturity" in general. If you have questions about the desired background, please ask. Auditors and occasional participants are welcome.
The course requirements for registered students will be a mixture of active in-class participation, problem sets, possibly leading a class discussion, and a final project. The final projects can range from actual research work, to a literature survey, to solving some additional problems.
The exact timing and set of topics below will depend on our progress and will be updated as we proceed.

Tue Sep 1: Course overview, topics, and mechanics.
Thu Sep 3: Model, algorithm and analysis for the rectangle learning problem.
Tue Sep 8: Introduction to the PAC model.
Thu Sep 10: Learning Boolean conjunctions in the PAC learning model.
Tue Sep 15: Hardness of PAC-learning 3-term DNF.
Thu Sep 17: PAC learnability of 3-term DNF by 3CNF. Introduction to consistency, compression and learning.
Tue Sep 22: Consistency, compression and learning, continued.
Thu Sep 24: Consistency, compression and learning, continued. Discussion of Problem Set 1.
Tue Sep 29: Introduction to the VC dimension.
Thu Oct 1: VC dimension continued: Sauer's Lemma.
Tue Oct 6: VC dimension continued: probabilistic analysis, two-sample trick.
Thu Oct 8: VC dimension continued: sample size lower bounds; structural risk minimization; distribution-dependent improvements.
Tue Oct 13: VC dimension finished (finally): data-dependent improvements; other loss functions. Musings on Problem Set 1.
Thu Oct 22: More musings on Problem Set 1; introduction to learning with classification noise (CN).
Tue Oct 27: Learning with CN continued; the statistical query (SQ) model.
Thu Oct 29: The SQ model continued: SQ learning implies CN learning; hardness of parity functions in SQ.
Tue Nov 3: The SQ model continued: the SQ dimension; hardness of learning decision trees and DNF in the SQ model.
Thu Nov 5: The PAC Solar System: containments and separations; cryptographic hardness; PAC with membership queries. Preview of Problem Set 2.
Tue Nov 10: Boosting: model and problem formulation; Schapire's ternary majority construction.
Thu Nov 12: AdaBoost: algorithm and analysis; connection to weak and strong PAC learning.
Tue Nov 17: Introduction to no-regret learning; Multiplicative Weights algorithm and analysis.
Thu Nov 19: MW analysis continued; discussion and connection to PAC; lower bounds on regret.
Tue Nov 24: No-regret learning and game theory; the minimax theorem.
Tue Dec 1: Fairness in ML: introduction; group fairness definitions; where ML can go wrong.
Thu Dec 3: Fairness continued: a post-processing approach and algorithm; error-unfairness Pareto frontiers.
Tue Dec 8: Fairness continued: endogenizing fairness into training; game-theoretic/no-regret approach; individual/metric fairness; fine-grained subgroup fairness.