**COMPUTATIONAL GAME THEORY: A TUTORIAL**

Neural Information Processing Systems (NIPS) 2002

December 9, 2002

Vancouver, Canada

**MICHAEL KEARNS**

**Computer and Information Sciences**

**Institute for Research in Cognitive Science**

**University of Pennsylvania**

Research web site: **http://www.cis.upenn.edu/~mkearns**

Email: **mkearns@cis.upenn.edu**

**TUTORIAL DESCRIPTION**

Recently there has been renewed interest in game theory in several research
disciplines, with its uses ranging from the modeling of evolution to the design of
distributed protocols. In the AI community, game theory is emerging as the
dominant formalism for studying strategic and cooperative interaction in
multi-agent systems.
Classical work provides rich mathematical foundations and equilibrium concepts,
but relatively little in the way of computational and representational insights that
would allow game theory to scale up to large, complex systems. The rapidly
emerging field of computational game theory is addressing such algorithmic
issues, and this tutorial will provide a survey of developments so far. As the NIPS
community is well-poised to make significant contributions to this area, special
emphasis will be placed on connections to more familiar topics. The tentative outline is:

Examples of Strategic Conflict as Matrix Games
Basic Definitions of (Matrix) Game Theory
Notions of Equilibrium: Overview
Definition and Existence of Nash Equilibria
Computing Nash Equilibria for Matrix Games
Graphical Models for Multiplayer Game Theory
Computing Nash Equilibria in Graphical Games
Other Equilibrium Concepts: Correlated Equilibria, Correlated Equilibria and Graphical Games, Evolutionarily Stable Strategies, Nash's Bargaining Problem, Cooperative Equilibria
Learning in Repeated Games: Classical Approaches; Regret-Minimizing Algorithms
Games with State: Connections to Reinforcement Learning
Other Directions and Conclusions
The tutorial will be self-contained, assuming no prior knowledge of game theory.
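As a concrete taste of the "Computing Nash Equilibria for Matrix Games" item above, here is a small sketch (an illustration for this page, not part of the tutorial materials) that finds the fully mixed Nash equilibrium of a 2x2 matrix game from the standard indifference conditions: each player mixes so that the opponent is indifferent between her two actions.

```python
# Mixed Nash equilibrium of a 2x2 matrix game via indifference conditions.
# A[i][j]: row player's payoff, B[i][j]: column player's payoff, when the
# row player plays action i and the column player plays action j.

def mixed_nash_2x2(A, B):
    """Return (p, q): p = Pr[row plays action 0], q = Pr[col plays action 0].

    Assumes the game has a fully mixed equilibrium (nonzero denominators),
    as in Matching Pennies; degenerate games are not handled.
    """
    # p makes the column player indifferent between her two columns:
    # p*B[0][0] + (1-p)*B[1][0] = p*B[0][1] + (1-p)*B[1][1]
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    # q makes the row player indifferent between his two rows:
    # q*A[0][0] + (1-q)*A[0][1] = q*A[1][0] + (1-q)*A[1][1]
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Matching Pennies: the unique Nash equilibrium is uniform play by both,
# so p == q == 0.5.
A = [[1, -1], [-1, 1]]   # row player (the "matcher")
B = [[-1, 1], [1, -1]]   # column player (zero-sum: B = -A)
p, q = mixed_nash_2x2(A, B)
```

For games with more than two players or actions this closed form no longer applies, which is exactly where the algorithmic questions surveyed in the tutorial begin.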

**PRESENTATION MATERIAL**

Here are **postscript slides** for the main body of the tutorial; here they are in **PDF**. Here are **powerpoint slides** on the topic of graphical models and game theory.

**REFERENCES**

Here I will try to collect some pointers to articles discussed in, or related to,
the tutorial presentation.

**Basics of Game Theory:**

[McKelvey and McLennan, 1996] Richard D. McKelvey and Andrew McLennan. Computation of equilibria in finite games. In Handbook of Computational Economics, volume I, pages 87-142, 1996.

[Nash, 1951] J. F. Nash. Non-cooperative games. Annals of Mathematics, 54:286-295, 1951.

[Owen, 1995] G. Owen. Game Theory. Academic Press, UK, 1995.

**Graphical Models for Game Theory:**

[Kakade et al., 2002] S. Kakade, M. Kearns, J. Langford, and L. Ortiz. Correlated equilibria and graphical games. Preprint, 2002.

[Kearns et al., 2001] M. Kearns, M. Littman, and S. Singh. Graphical models for game theory. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, pages 253-260, 2001.

[Koller and Milch, 2001] Daphne Koller and Brian Milch. Multi-agent influence diagrams for representing and solving games. Submitted, 2001.

[La Mura, 2000] P. La Mura. Game networks. In Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence (UAI), pages 335-342, 2000.

[Littman et al., 2002] M. Littman, M. Kearns, and S. Singh. An efficient exact algorithm for singly connected graphical games. In Neural Information Processing Systems, 2002.

[Ortiz and Kearns, 2003] L. Ortiz and M. Kearns. Nash propagation for loopy graphical games. In Neural Information Processing Systems, 2003. To appear.

[Vickrey and Koller, 2002] D. Vickrey and D. Koller. Multi-agent algorithms for solving graphical games. In Proceedings of the National Conference on Artificial Intelligence (AAAI), 2002. To appear.

**Other Compact Game Representations:**

[Kearns and Mansour, 2002] M. Kearns and Y. Mansour. Efficient Nash computation in large population games with bounded influence. In Proceedings of UAI, 2002.

[Monderer and Shapley, 1996] D. Monderer and L. Shapley. Potential games. Games and Economic Behavior, 14:124-143, 1996.

[Rosenthal, 1973] R. Rosenthal. A class of games possessing pure-strategy Nash equilibria. International Journal of Game Theory, 2:65-67, 1973.

[Voorneveld et al., 1999] M. Voorneveld, P. Borm, F. Van Megen, S. Tijs, and G. Facchini. Congestion games and potentials reconsidered. International Game Theory Review, 1(3-4):283-299, 1999.

**Correlated Equilibria:**

[Aumann, 1974] R. J. Aumann. Subjectivity and correlation in randomized strategies. Journal of Mathematical Economics, 1, 1974.

[Aumann, 1987] R. J. Aumann. Correlated equilibrium as an expression of Bayesian rationality. Econometrica, 55, 1987.

[Foster and Vohra, 1997] D. Foster and R. Vohra. Calibrated learning and correlated equilibrium. Games and Economic Behavior, 1997.

[Gilboa and Zemel, 1989] I. Gilboa and E. Zemel. Nash and correlated equilibria: some complexity considerations. Games and Economic Behavior, 1:80-93, 1989.

**Evolutionary Game Theory:**

[Maynard Smith, 1982] John Maynard Smith. Evolution and the Theory of Games. Cambridge University Press, 1982.

[Weibull, 1995] J. Weibull. Evolutionary Game Theory. The MIT Press, 1995.

**Learning in Repeated Games:**

[Foster and Vohra, 1999] D. Foster and R. Vohra. Regret in the on-line decision problem. Games and Economic Behavior, pages 7-36, 1999.

[Fudenberg and Levine, 1999] D. Fudenberg and D. Levine. The Theory of Learning in Games. MIT Press, 1999.

**Stochastic Games and Reinforcement Learning:**

[Boutilier et al., 1999] Craig Boutilier, Moisés Goldszmidt, and Bikash Sabata. Continuous value function approximation for sequential bidding policies. In Proceedings of the 15th Conference on Uncertainty in Artificial Intelligence, 1999.

[Brafman and Tennenholtz, 2000] Ronen I. Brafman and Moshe Tennenholtz. A near-optimal polynomial time algorithm for learning in certain classes of stochastic games. Artificial Intelligence, 121(1-2):31-47, 2000.

[Hu and Wellman, 1998] Junling Hu and Michael P. Wellman. Multiagent reinforcement learning: Theoretical framework and an algorithm. In Proceedings of the Fifteenth International Conference on Machine Learning, pages 242-250, 1998.

[Kearns et al., 2000] M. Kearns, Y. Mansour, and S. Singh. Fast planning in stochastic games. In Proceedings of UAI, pages 309-316, 2000.

[Shapley, 1953] L. S. Shapley. Stochastic games. Proceedings of the National Academy of Sciences of the United States of America, 39:1095-1100, 1953.