PENN CIS 625, SPRING 2015: COMPUTATIONAL LEARNING THEORY

Prof. Michael Kearns
mkearns@cis.upenn.edu

Dr. Grigory Yaroslavtsev
grigory.yaroslavtsev@gmail.com

Time: Mondays 12-3 PM
Location: 303 Towne and 307 Levine (we'll use both locations, depending on the week, so that lunch can be served)

IMPORTANT NOTE: The first course meeting will be on Monday Jan 26.

URL for this page:
www.cis.upenn.edu/~mkearns/teaching/COLT

Previous incarnations of this course:
www.cis.upenn.edu/~mkearns/teaching/COLT/colt12.html (with Jake Abernethy)
www.cis.upenn.edu/~mkearns/teaching/COLT/colt08.html (with Koby Crammer)


COURSE DESCRIPTION

This course is an introduction to Computational Learning Theory, a field which attempts to provide algorithmic, complexity-theoretic and probabilistic foundations to modern machine learning and related topics.

The first part of the course will closely follow portions of An Introduction to Computational Learning Theory, by M. Kearns and U. Vazirani (MIT Press). We will cover perhaps 6 or 7 of the chapters in K&V over (approximately) the first half of the course, often supplementing with additional readings and materials. Copies of K&V will be available at the Penn bookstore. The second portion of the course will focus on a number of models and topics in learning theory and related areas not covered in K&V.

The course will give a broad overview of the kinds of problems and techniques typically studied in Computational Learning Theory, and provide a basic arsenal of powerful mathematical tools for analyzing machine learning problems.

Topics likely to be covered include:

  • Basics of the Probably Approximately Correct (PAC) Learning Model
  • Occam's Razor, Compression and Learning
  • Uniform Convergence and the Vapnik-Chervonenkis Dimension
  • Learning in the Presence of Noise and Statistical Query Learning
  • Learning and Cryptography
  • Query Models
  • Boosting
  • Online and No-Regret Learning
  • Discrete Fourier Methods in Learning
  • Agnostic Learning
  • Property Testing and Learning

    Additional topics we may cover include:

  • Active Learning
  • Learning Theory and Algorithmic Mechanism Design
  • Learning Theory and Differential Privacy
  • Learning Theory and Computational Models of Evolution
  • PAC-style Analyses in Reinforcement Learning
  • PAC-style Analyses in Probabilistic Inference


    COURSE FORMAT, REQUIREMENTS, AND PREREQUISITES

    Much of the course will be in fairly traditional "chalk talk" lecture format, but with ample opportunity for discussion, participation, and critique. The course will meet once a week on Mondays from 12 to 3 (but see note above about the first meeting). Lunch will be served.

    The course will be taught at the doctoral level, will involve advanced mathematical material, and will examine detailed formal proofs throughout. While there are no specific formal prerequisites, background or coursework in algorithms, complexity theory, discrete math, combinatorics, probability theory, and statistics will prove helpful, as will "mathematical maturity" in general. If you have questions about the desired background, please ask. Auditors and occasional participants are welcome.

    The course requirements for registered students will be some to-be-determined mixture of active in-class participation, problem sets, possibly leading a class discussion, and a final project. The final projects can range from actual research work, to a literature survey, to solving some additional problems.


    DETAILED MEETING/TOPIC SCHEDULE
    The exact timing and set of topics below will depend on our progress and will be updated as we proceed. "[MK]" indicates lectures by Prof. Kearns; "[GY]" by Dr. Yaroslavtsev.

    Mon Jan 26
    [MK]

    In the first meeting, we will go over course mechanics and present a course overview, then immediately begin investigating the Probably Approximately Correct (PAC) model of learning. Detailed topics covered: learning rectangles in the real plane; definition of the PAC model; PAC-learnability of rectangles in d dimensions; PAC-learnability of conjunctions of Boolean variables.

    READING: K&V Chapter 1.
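
    As a concrete companion to the rectangle example above, here is a minimal sketch (my own illustration in Python, not part of the course materials or K&V) of the "tightest-fit" hypothesis: output the smallest axis-aligned rectangle containing all of the positive examples. Because the target is itself an axis-aligned rectangle, this hypothesis can only err on a thin border region inside the target, which is the key step in its PAC analysis.

        # Tightest-fit rectangle learner (illustrative sketch; names are my own).
        def learn_rectangle(samples):
            """samples: list of ((x, y), label) pairs, label True for positive examples."""
            positives = [p for p, label in samples if label]
            if not positives:
                return None  # with no positive examples, predict negative everywhere
            xs = [x for x, _ in positives]
            ys = [y for _, y in positives]
            return (min(xs), max(xs), min(ys), max(ys))

        def predict(rect, point):
            if rect is None:
                return False
            x_lo, x_hi, y_lo, y_hi = rect
            x, y = point
            return x_lo <= x <= x_hi and y_lo <= y <= y_hi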

    Mon Feb 2
    [MK; meeting in Levine 307]

    Intractability of PAC-learning 3-term DNF. Learning with a larger hypothesis class. General sample complexity bounds via consistency and cardinality.

    READING: K&V Chapters 1,2.
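
    To make the "consistency plus cardinality" bound concrete, here is a tiny helper (my own, with a hypothetical name) that evaluates the standard sample-size bound m >= (1/epsilon) * (ln|H| + ln(1/delta)) for a finite hypothesis class H, under which any hypothesis consistent with m examples is epsilon-accurate with probability at least 1 - delta.

        import math

        # Cardinality-based sample bound for consistent learners (illustrative sketch).
        def occam_sample_bound(hypothesis_class_size, epsilon, delta):
            return math.ceil((math.log(hypothesis_class_size) + math.log(1.0 / delta)) / epsilon)

        # Example: conjunctions over n = 20 Boolean variables, |H| = 3^n
        # (each variable appears positively, negated, or not at all).
        print(occam_sample_bound(3 ** 20, epsilon=0.1, delta=0.05))  # roughly 250 examples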

    Homework 1 --- Due in hardcopy format at the start of class, Monday Feb 9: Solve the following problems from K&V: 1.1, 1.3, 2.3, 2.4. You are free to collaborate on the homework assignments, but please turn in separate write-ups and acknowledge your collaborations. Please do not attempt to look up solutions in the literature.

    Mon Feb 9
    [MK; meeting in Levine 307]

    Occam's Razor, learning and compression. Sample complexity bounds via the Vapnik-Chervonenkis dimension.

    READING: K&V Chapters 2,3.
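
    As a small illustration of shattering, the combinatorial notion behind the VC dimension, here is a brute-force sketch (my own, not from K&V) that checks whether closed intervals on the real line shatter a given point set, i.e. realize every one of the 2^m possible labelings. It confirms the familiar fact that intervals have VC dimension 2.

        from itertools import product

        # Brute-force shattering check for intervals on the real line (illustrative sketch).
        def intervals_shatter(points):
            points = sorted(points)
            for labeling in product([False, True], repeat=len(points)):
                positives = [p for p, lab in zip(points, labeling) if lab]
                if not positives:
                    continue  # the all-negative labeling is realized by an interval missing every point
                lo, hi = min(positives), max(positives)
                # An interval realizes this labeling iff no negative point lies between two positives.
                if any(lo <= p <= hi and not lab for p, lab in zip(points, labeling)):
                    return False
            return True

        print(intervals_shatter([0.0, 1.0]))       # True: any 2 distinct points are shattered
        print(intervals_shatter([0.0, 1.0, 2.0]))  # False: the labeling (+, -, +) cannot be realized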

    Mon Feb 16
    [MK; meeting in Towne 303]

    Sample complexity bounds via the Vapnik-Chervonenkis dimension, continued.

    READING: K&V Chapter 3.

    Mon Feb 23
    [MK; meeting in Levine 307]

    VC dimension wrap-up. Learning with classification noise.

    READING: K&V Chapter 5.

    Homework 2 --- Due in hardcopy format at the start of class, Monday Mar 2: Solve the following problems from K&V: 3.1, 3.2, 3.5, 3.7. You are free to collaborate on the homework assignments, but please turn in separate write-ups and acknowledge your collaborations. Please do not attempt to look up solutions in the literature.

    Mon Mar 2
    [MK; meeting in Levine 307]

    Classification noise and the statistical query model.

    READING: K&V Chapter 5.
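
    One way to see why statistical queries tolerate classification noise: if +/-1 labels are flipped independently with a known rate eta < 1/2, the correlation of any fixed predicate with the observed label is attenuated by exactly (1 - 2*eta), so dividing the empirical estimate by that factor recovers the noise-free statistic. A toy sketch (my own, not the construction in K&V Chapter 5):

        import random

        # Estimate E[chi(x) * true_label] from examples corrupted by classification noise.
        def corrected_correlation(chi, noisy_examples, eta):
            """noisy_examples: (x, observed_label) pairs, labels in {-1, +1}; eta: known noise rate."""
            raw = sum(chi(x) * y for x, y in noisy_examples) / len(noisy_examples)
            return raw / (1.0 - 2.0 * eta)  # undo the (1 - 2*eta) attenuation

        # Toy demo: the true label is the first coordinate; noise rate eta = 0.3.
        eta = 0.3
        def make_example():
            x = tuple(random.choice([-1, 1]) for _ in range(5))
            observed = -x[0] if random.random() < eta else x[0]
            return x, observed

        data = [make_example() for _ in range(20000)]
        print(corrected_correlation(lambda x: x[0], data, eta))  # close to 1.0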

    Mon Mar 9
    Spring break, no meeting

    Mon Mar 16
    [MK; meeting in Levine 307]

    Boosting.

    READING: K&V Chapter 4.
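
    For concreteness, here is a compact sketch of AdaBoost with decision stumps (the Freund-Schapire algorithm, a close relative of, but not identical to, the boosting constructions in K&V Chapter 4; all names below are my own illustrative choices). Each round fits the best stump under the current weighting, then re-weights the data to emphasize the examples that stump gets wrong.

        import math

        def stump_error(data, weights, feature, threshold, sign):
            # A stump predicts `sign` when x[feature] > threshold, and -sign otherwise.
            return sum(w for (x, y), w in zip(data, weights)
                       if (sign if x[feature] > threshold else -sign) != y)

        def best_stump(data, weights):
            best = None
            for feature in range(len(data[0][0])):
                for threshold in sorted({x[feature] for x, _ in data}):
                    for sign in (-1, 1):
                        err = stump_error(data, weights, feature, threshold, sign)
                        if best is None or err < best[0]:
                            best = (err, feature, threshold, sign)
            return best

        def adaboost(data, rounds=20):
            """data: list of (x, y) with x a tuple of numbers and y in {-1, +1}."""
            n = len(data)
            weights = [1.0 / n] * n
            ensemble = []
            for _ in range(rounds):
                err, feature, threshold, sign = best_stump(data, weights)
                err = max(err, 1e-12)                    # guard against a perfect stump
                alpha = 0.5 * math.log((1 - err) / err)  # weight of this weak hypothesis
                ensemble.append((alpha, feature, threshold, sign))
                # Re-weight: boost the weight of misclassified examples.
                for i, (x, y) in enumerate(data):
                    pred = sign if x[feature] > threshold else -sign
                    weights[i] *= math.exp(-alpha * y * pred)
                total = sum(weights)
                weights = [w / total for w in weights]
            return ensemble

        def ensemble_predict(ensemble, x):
            score = sum(alpha * (sign if x[feature] > threshold else -sign)
                        for alpha, feature, threshold, sign in ensemble)
            return 1 if score >= 0 else -1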

    Mon Mar 23
    [MK; meeting in Towne 303]

    Boosting continued; learning and cryptography; learning finite automata from examples and membership queries.

    READING: K&V Chapters 4, 6, and 8.

    Mon Apr 6
    [GY; meeting in Towne 303]

    Basic Fourier analysis: Fourier expansion; basic Fourier formulas; Parseval/Plancherel; convolution
    Approximate linearity; the Blum-Luby-Rubinfeld (BLR) test and its analysis
    Spectral structure and learning: low-degree spectral concentration; subspaces and decision trees; restrictions; the Goldreich-Levin/Kushilevitz-Mansour algorithm

    Much of this material is covered in Ryan O'Donnell's book "Analysis of Boolean Functions", which is freely available online.

    Here are slides for these lectures.
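
    As one concrete example from the topics above, here is a minimal sketch (my own, written over F_2^n with 0/1 values rather than the +/-1 convention) of the Blum-Luby-Rubinfeld linearity test: pick random x and y and check that f(x) + f(y) = f(x + y) mod 2. Parities pass every check, while functions far from every parity fail a constant fraction of them.

        import random

        def blr_test(f, n, trials=10000):
            """f maps a tuple of n bits to a bit; returns the fraction of checks passed."""
            passed = 0
            for _ in range(trials):
                x = tuple(random.randint(0, 1) for _ in range(n))
                y = tuple(random.randint(0, 1) for _ in range(n))
                x_plus_y = tuple((a + b) % 2 for a, b in zip(x, y))
                if (f(x) + f(y)) % 2 == f(x_plus_y):
                    passed += 1
            return passed / trials

        print(blr_test(lambda x: (x[0] + x[2]) % 2, n=3))        # exactly 1.0: a parity always passes
        print(blr_test(lambda x: 1 if sum(x) >= 2 else 0, n=3))  # well below 1.0: majority is far from linear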

    Mon Apr 13
    [GY; meeting in Levine 307]

    More Fourier Fun. Here are slides for these lectures.

    Mon Apr 20
    [GY; meeting in Levine 307]

    Learning and Testing Submodular Functions. Here are slides for these lectures.

    Mon Apr 27
    [GY; meeting in Levine 307]

    L_p testing. Here are slides for these lectures.


    INFORMATION ON COURSE PROJECTS

    For those of you registered for credit in the course, your final assignment will be a course project, for which you have three choices:

  • Research Paper. For a research paper, you should either choose an open problem related to the models we have examined in the class and write up your attempts to solve it; or you should write a paper introducing a new model of learning that extends or modifies those we examined in class, and ideally motivate and illustrate your model with some initial results. For those going the "open problem" route, you must both choose your own problem (don't ask me for problems; that's part of doing research), and do enough of a literature search to make sure it hasn't been solved already. Ditto for finding and researching your own new model.
  • Final Problem Set. Here you can choose and solve any 8 problems from K&V that were not already assigned in the homeworks. Try to pick a mix of difficulty levels.
  • Literature Survey/Synthesis. Here you may pick a few related papers from the learning theory literature, and do a write-up surveying their models and results, and relating the papers to each other. Try not to simply regurgitate what's in the papers themselves.

    For your final projects, you should work alone. You are welcome to send email to Prof. Kearns proposing what your final project will be in order to get feedback and advice.

    The deadline for final projects will be May 10, via email to Prof. Kearns.