SMART ANIMATED AGENTS

SIGGRAPH 2000 Course #24

NORMAN I. BADLER
University of Pennsylvania

JOHN FUNGE
Sony Computer Entertainment America

=============================================================================

As real-time characters become almost commonplace, we begin to face the next challenge of making those characters interact with real people. Interactions with these characters should be through the modalities that we share among real people: especially, language, gesture, and shared perceptions of the world. This course will explore several ways that real-time, animated, embodied characters can be given more intelligence and communication skills so that they can act, react, make decisions, and take initiatives. Applications to collaborative groups, interactive training, and smarter games will be addressed.

Actions required for animated agents (faces, arms, legs, and eyes). Knowledge and action representation. Commonsense and logical reasoning. Agent architectures. Learning. Smart conversations. Agents for pedagogical interaction. Managing multi-agent interactions. Language and gesture as control modalities.

Some experience with graphical modeling and animating human-like characters would be an asset but is not strictly essential.

=============================================================================

Tentative course syllabus:

8:30 - 8:35    Badler      (5 min.)

  Welcome and Overview

8:35 - 10:00   Badler      (85 min.)

  Action Primitives
   - Attribute Taxonomy for smart embodied agents
   - Application Domains
  Action Representation
   - Parallel Transition Networks
   - Parameterized Action Representation (PAR)
  Agent Models
   - Components
   - Construction
  Natural Language Interfaces
   - Action dictionaries
   - Standing Orders
  Cognitive and Empirical Models of Behavior
   - Visual Attention
   - Agent manner via the EMOTE model
   - Building PARs by demonstration

10:00 - 10:15  (break)     (15 min.)

10:15 - 11:45  Cassell     (90 min.)

  Conversational Agents
  What It Means to be "Smart" about Conversation
   - Conversation is composed of propositional & interactional smarts
   - Conversation is advanced by verbal and nonverbal means
  How Humans are Smart about Conversation
   - We pick up on very tiny cues in both the speech and non-verbal channels
  The Role of Conversational Smarts in Animated Agents
   - Increased smoothness of interaction with humans
   - Less disfluency
   - Allows both system & human to take the initiative in the interaction
  Agent Integration
   - How to incorporate conversational smarts into agent architectures
   - KQML frames for verbal & nonverbal, propositional & interactional data
   - Maintaining verbal and non-verbal focus throughout the architecture
   - Modeling social and goal-oriented behaviors
   - Some examples of smart conversational agents

11:45 - 12:00  Questions and Issues (15 min.)

12:00 - 1:30   (lunch)     (90 min.)

1:30 - 2:30    Funge       (60 min.)

  Introduction to Cognitive Modeling
  Case Study 1: Prehistoric World
   - Knowledge Representation
   - Planning
   - Goal-directed Behavior Specification
  Case Study 2: Cinematography
  Case Study 3: Undersea World
   - System Architecture
   - Uncertainty
   - IVE Fluents
  Conclusion


2:30 - 3:00    Rickel      (30 min.)

  Task-Oriented Collaboration 
   - Plan construction, revision and execution
   - Plan recognition
   - Task-oriented dialogue
   - Teams

3:00 - 3:15    (break)     (15 min.)

3:15 - 3:45    Rickel (continued) (30 min.)

3:45 - 4:45    Blumberg    (60 min.)

  Learning the Consequences of Behavior and Learning as Behavior
  Why should characters learn?
  What sorts of things should they learn?
  How can they learn the things they should?
  Using animal learning and training as a model
  Types of learning: Context, Consequences, Control
  What animals learn: consequences of action
  Terms of the trade: Reinforcement, Behavior, Context
  If it is so hard, how do you train an animal to do anything?
  The secrets of great animal trainers
   - Event markers
   - Shaping
   - Behavior, then context
   - Generalization
   - Training for variability vs. consistency
  Examples of computational learning inspired by learning in animals
  Lessons and Caveats


4:45 - 5:00    Questions and Issues (15 min.)

============================================================================= 

Course presenter biographies.


NORMAN I. BADLER
Director, Center for Human Modeling and Simulation
Professor, Computer and Information Science Department
200 South 33rd St.
University of Pennsylvania
Philadelphia, PA 19104-6389

Tel: 1-215-898-5862
Fax: 1-215-573-7453
badler@central.cis.upenn.edu
http://www.cis.upenn.edu/~badler

Dr. Norman I. Badler is Professor of Computer and Information Science and
Director of the Center for Human Modeling and Simulation at the University of
Pennsylvania.  He has been active in computer graphics since 1968; his
research focuses on human figure modeling, manipulation, and animation.  He is
the originator of the "Jack" software system (now a commercial product from
Engineering Animation, Inc.). Badler received the BA degree in Creative
Studies Mathematics from the University of California at Santa Barbara in
1970, and the MSc in Mathematics in 1971 and the Ph.D. in Computer Science in
1975, both from the University of Toronto.


JOHN FUNGE
Research Scientist 
Sony Computer Entertainment America 
919 East Hillsdale Boulevard 
Foster City, California 94404-2175 

Tel: 1-650-655-5658  
Fax: 1-650-655-8180  
john_funge@playstation.sony.com  
http://www.cs.toronto.edu/~funge/  

John Funge recently joined Sony Computer Entertainment America (SCEA),
where he works in a group that performs advanced research into
technology for future computer games.  Previously, John was a member of
Intel's Microcomputer Research Lab. He received a B.Sc. in Mathematics
from King's College London in 1990, an M.Sc. in Computer Science from
Oxford University in 1991, and a Ph.D. in Computer Science from the
University of Toronto in 1997. For his Ph.D., John developed a new
approach to high-level control of characters in games and animation.
John is the author of numerous technical papers, and his new book,
"AI for Games and Animation: A Cognitive Modeling Approach," is one of
the first to take an academic look at AI techniques in the context of
computer games and animation. His current research interests include
computer animation, computer games, smart networked devices, interval
arithmetic, and knowledge representation.


BRUCE BLUMBERG
Assistant Professor
Synthetic Characters Group
The Media Lab
Massachusetts Institute of Technology
E15-311, 20 Ames St.
Cambridge MA 02139

Tel: 1-617-253-9832
Fax: 1-617-253-6205
bruce@media.mit.edu
http://www.media.mit.edu/~bruce

Bruce Blumberg is an assistant professor and head of the Synthetic Characters
Group at the MIT Media Lab. Bruce is a well-known researcher in the area of
autonomous animated characters, focusing on the development of computational
models of behavior, motivation, perception, emotion, and adaptation inspired
by work in animal behavior, psychology, and artificial intelligence. His group
is a frequent contributor to the interactive venues of SIGGRAPH, including
(void *): A Cast of Characters at SIGGRAPH '99, SWAMPED at SIGGRAPH '98, and
ALIVE at SIGGRAPH '95 and '93.  He has a Master's degree from the Sloan School
at MIT and a B.A. from Amherst College. Prior to coming to the lab, he held
positions at Apple Computer, Inc. and NeXT, Inc.


JUSTINE CASSELL
MIT Media Lab, E15-315
20 Ames Street
Cambridge, MA 02139

Tel: 1-617-253-4899
Fax: 1-617-253-6215
justine@media.mit.edu
http://www.media.mit.edu/~justine/

Justine Cassell is a faculty member at MIT's Media Laboratory.  After ten
years studying human communication through microanalysis of videotaped data,
Cassell began to bring her knowledge of human conversation to the design of
computational systems, co-designing the first autonomous animated agent with
speech, gesture, intonation, and facial expression in 1994.  She is currently
implementing the third generation of embodied conversational characters.  The
architecture for this new agent is based on conversational functions, allowing
the system to exploit users' natural speech, gesture, and head movement in the
input to organize the conversation, and to respond with appropriate autonomous
verbal and nonverbal behaviors of its own.


JEFF RICKEL
USC Information Sciences Institute
4676 Admiralty Way, Suite 1001
Marina del Rey, CA 90292-6695

Tel: 1-310-448-9124
Fax: 1-310-822-0751
rickel@isi.edu
http://www.isi.edu/isd/rickel

Jeff Rickel is a Project Leader at the Information Sciences Institute
and a Research Assistant Professor in the Department of Computer
Science at the University of Southern California.  He has been active
in artificial intelligence research since 1985, when he joined Texas
Instruments (TI) to study the use of artificial intelligence in
industrial automation. During his years at TI, he published on topics
ranging from knowledge-based planning and simulation to automated
production scheduling and intelligent tutoring.  Dr. Rickel received
his Ph.D. in Computer Science from the University of Texas in 1995 for
his work on automated modeling of physical systems.  Since then, his 
research has focused on animated, intelligent agents for training in 
virtual reality.  This work has resulted in STEVE, a virtual human that 
has been featured in academic publications as well as on CNN, the 
Discovery Channel, and the BBC, and in magazines and newspapers around the world.