CIS 620: ADVANCED TOPICS IN ARTIFICIAL INTELLIGENCE

URL for this page: http://www.cis.upenn.edu/~cis620/home.html

COURSE COORDINATES:
Wednesday 10:30 - 1:30 in Moore 222. Some meetings may run shorter than the full three hours. Special guest lectures may be given at other times and locations.

INSTRUCTOR:
Dr. Michael Kearns
AT&T Labs, AI Principles Research Department
Telephone: (973)360-8322
Fax: (973)360-8970
Local office: Moore 561A
Local phone: (215)573-2821
Email: mkearns@linc.cis.upenn.edu or mkearns@research.att.com
Office hours: I will generally try to be around for a few hours following class each Wednesday.


COURSE DESCRIPTION:
The foundations of artificial intelligence have shifted dramatically in the last decade, with probabilistic and statistical frameworks for classical problems resulting in new algorithms, analyses and applications. As a direct result of this "probabilistic revolution", there is increased coherence between the various subfields of AI. This weekly seminar course will examine a sampling of models and methods in modern AI, including probabilistic reasoning (Bayesian networks and graphical models), Markov decision processes and reinforcement learning, machine learning and neural networks, visual processing, and computational neuroscience.

Students should have a firm grasp of basic probability theory and statistics; some background in the analysis of algorithms and the theory of computation is useful but not required.


COURSE FORMAT:
The course will be run in an informal manner, as a mixture of ``seminar'' and ``reading group'' formats. I will give some lectures, and we will also read and discuss papers together; we may have some outside speakers on particular topics as well. Requirements for registered students are to be determined, but might include periodically being responsible for leading the discussion on a paper or presenting certain portions of it to the class; there may be an occasional exercise as well.


COURSE CONTENTS:
In keeping with the informal nature of the course, this listing will grow and evolve depending on the interests of the participants and our rate of progress. We will certainly study the first two topics below (graphical models and reinforcement learning), however, and I've already listed some of the material we may examine. I'll try to keep the web page updated to at least reflect what we've covered and what we're about to cover.

PART I: GRAPHICAL MODELS AND PROBABILISTIC INFERENCE

WEEK 1 (Jan 14): Representing probability distributions by Bayesian networks (directed graphical models); subtleties of inference in Bayesian networks (``explaining away''); potential simplifications from hidden variables; the inference and learning problems; the d-separation criterion for the conditional independence P(X,Y|E) = P(X|E)P(Y|E).

  • Copies of the slides I used in the first meeting, plus some additional material. Unfortunately, they do not contain the diagrams, which were taken from the Russell and Norvig AI textbook for those who have it.

    The following paper has some tutorial material on Bayesian networks, although it is more oriented towards learning than inference:

  • "A Tutorial on Learning with Bayesian Networks", D. Heckerman

WEEK 2 (Jan 21): Review of d-separation; an efficient algorithm for exact inference in polytrees; variational methods for approximate inference. Please start reading the Jordan et al. paper below in preparation.

  • "An Introduction to Variational Methods for Graphical Models", M. Jordan, Z. Ghahramani, T. Jaakkola, L. Saul

WEEK 3 (Jan 28): We'll continue our examination of variational methods for approximate inference, especially as applied to two-layer noisy-OR networks. I recommend taking a look at the relevant sections of the Jordan et al. paper above and the Jaakkola and Jordan paper below. We'll also look at an experimental evaluation of the use of such networks for medical diagnosis; this is the Middleton et al. paper below.

  • "Variational Methods and the QMR-DT Database", T. Jaakkola, M. Jordan

  • "Probabilistic Diagnosis Using a Reformulation of the INTERNIST-1/QMR Knowledge Base II: Evaluation of Diagnostic Performance", B. Middleton, M. Shwe, D. Heckerman, M. Henrion, E. Horvitz, H. Lehmann, G. Cooper

WEEK 4 (Feb 4): We'll continue examining algorithms for approximate inference in Bayesian networks. In particular, we'll take a look at a number of sampling-based approaches, and also cover some basic material on Markov chains and their convergence times that will prove beneficial when we later study reinforcement learning. I may also describe some recent work I have been doing with Larry Saul that proves performance guarantees for algorithms related to the variational methods, and make some connections with sampling approaches.

Much of the material for this week will be drawn from the two Radford Neal publications below; the first is a long review, and I'll draw mainly on Chapters 3 and 4.

  • "Probabilistic Inference Using Markov Chain Monte Carlo Methods", R. Neal.

  • "Markov Chain Monte Carlo Methods Based on `Slicing' the Density Function", R. Neal.

WEEK 5 (Feb 11): We will wrap up our study of inference in Bayesian networks with a group discussion of the two papers below. The first defines a stochastic functional programming language that generalizes the models we have been studying, and gives a procedure for inference in distributions defined by programs in this language. The second draws connections between inference procedures in Bayesian networks and the problem of decoding various classical and recent codes for noisy channels.

Please take a look at them in advance, and bring hard copies with you to class.

  • "Effective Bayesian Inference for Stochastic Programs", D. Koller, D. McAllester, A. Pfeffer

  • "A Revolution: Belief Propagation in Graphs with Cycles", B. Frey, D. MacKay

ADDITIONAL MATERIAL ON GRAPHICAL MODELS:

  • "An Introduction to Graphical Models", M. Jordan

A SIMULATOR FOR BAYESIAN NETWORKS (color monitor preferable):

  • Link to Fabio Cozman's JavaBayes Simulator

PART II: COMPUTATIONAL NEUROSCIENCE

WEEK 6 (Feb 18): NO CLASS

WEEK 7 (Feb 23): We will dive right in with a paper that touches on many relevant and current topics in Computational Neuroscience, including the specialization of neurons, problems of measurement, reconstruction of stimuli from spike train data, and modeling.

Please bring a hard copy to class with you.

  • "Interpreting Neuronal Population Activity by Reconstruction: A Unified Framework with Applications to Hippocampal Place Cells", K. Zhang, I. Ginzburg, B. McNaughton, T. Sejnowski

WEEK 8 (Mar 4): Continuing our readings in Computational Neuroscience, we'll cover the following three papers; please ``peruse'' and bring copies to class.

  • "Coding of Naturalistic Stimuli by Auditory Midbrian Neurons", H. Attias, C.E. Schreiner

  • "Using Helmholtz Machines to Analyze Multi-Channel Neuronal Recordings", V. de Sa, R.C. deCharms, M.M. Merzenich

  • "A Non-Linear Information Maximisation Algorithm that Performs Blind Separation", A.J. Bell, T.J. Sejnowski

WEEK 9 (Mar 11): Spring break; no class

WEEK 10 (Mar 18): We'll finish our studies in computational neuroscience with the following paper.

  • "Hidden Markov Modeling of Simultaneously Recorded Cells in the Associative Cortex of Behaving Monkeys", I. Gat, N. Tishby, M. Abeles

PART III: MARKOV DECISION PROCESSES AND REINFORCEMENT LEARNING

WEEK 11 (Mar 25): SPECIAL GUEST LECTURE BY PROF. SATINDER SINGH, U. COLORADO

WEEK 12 (Apr 1): I'll describe the new E^3 algorithm for reinforcement learning.
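
    As background for Part III, here is a minimal value iteration sketch on a made-up two-state MDP (standard dynamic programming with a known model, not the E^3 algorithm itself, which addresses learning when the MDP is unknown).

    import numpy as np

    # A made-up MDP with 2 states and 2 actions.
    # P[a, s, s'] = transition probability, R[s, a] = expected reward.
    P = np.array([[[0.9, 0.1],
                   [0.4, 0.6]],
                  [[0.2, 0.8],
                   [0.1, 0.9]]])
    R = np.array([[1.0, 0.0],
                  [0.0, 2.0]])
    gamma = 0.9

    # Value iteration: repeat the Bellman optimality backup
    #   V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ].
    V = np.zeros(2)
    for _ in range(1000):
        Q = R + gamma * (P @ V).T      # Q[s, a]
        V = Q.max(axis=1)
    print("optimal values:", V)
    print("greedy policy :", Q.argmax(axis=1))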