CIS 700: Explainable AI

Lyle Ungar

Artificial intelligence is seeing a flourishing of methods for generating explanations of complex machine-learned models. These are useful for debugging machine learning algorithms, which often give the right predictions for the wrong reasons and thus fail to generalize, and for detecting bias in models. They are also critical for applications in medicine, where doctors want to know the logic behind any given recommendation. Explanations take many forms, from simplified rule-based models to visualizations of which pixels drive a given neural network node or prediction. We will read both classical papers on topics such as variable importance and more recent papers describing popular methods such as LIME and SHAP and their application in NLP, machine vision, and medicine. In addition to reading and discussing papers, students will implement a novel explainable AI algorithm and test it on real data.
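
For a taste of the tooling involved, here is a minimal sketch of generating SHAP explanations for a tree-ensemble model (assuming the open-source shap and scikit-learn packages; the dataset, model, and sample size are illustrative choices, not course requirements):

    # Minimal SHAP sketch (illustrative; assumes shap and scikit-learn are installed).
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    # Fit a simple model whose predictions we want to explain.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer computes Shapley-value feature attributions for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X.iloc[:100])

    # Summarize which features drive the model's predictions on these samples.
    shap.summary_plot(shap_values, X.iloc[:100])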

Prerequisites: CIS 519, CIS 520, or equivalent
Evaluation: class participation 20%, short paper comments 30%, final project 50%
Meeting: Wednesdays 2-5; DRLB 3C6; attendance mandatory
Enrollment: Apply via the waitlist (by permission only)
Format (subject to change): Papers will be assigned each week to be read in advance of class. Students will submit a one-page write-up with observations and questions about the papers before class, and we will discuss the papers in class. Students will also complete a final project, which they will present at the end of the semester.

Contact: ungar@cis.upenn.edu