Smart(er) Animated Agents

Full Day Course #27, Monday, August 9, 1999
ACM SIGGRAPH '99

NORMAN I. BADLER (Course Organizer)
Computer and Information Science Department
200 South 33rd St.
University of Pennsylvania
Philadelphia, PA 19104-6389
215-898-5862
215-573-7453 or 215-898-0587 fax
badler@central.cis.upenn.edu
http://www.cis.upenn.edu/~badler

Dr. Norman I. Badler is Professor of Computer and Information Science and Director of the Center for Human Modeling and Simulation at the University of Pennsylvania. He has been active in computer graphics since 1968, and his research focuses on human figure modeling, manipulation, and animation. He is the originator of the "Jack" software system (now a commercial product from Engineering Animation, Inc.). Badler received the BA degree in Creative Studies Mathematics from the University of California at Santa Barbara in 1970, the MSc in Mathematics in 1971, and the Ph.D. in Computer Science in 1975, the latter two from the University of Toronto.

JUSTINE CASSELL
MIT Media Lab, E15-315
20 Ames Street
Cambridge, MA 02139
Tel: 1-617-253-4899
Fax: 1-617-253-6215
justine@media.mit.edu
http://www.media.mit.edu/~justine/

Justine Cassell is a faculty member at the MIT Media Laboratory. After ten years studying human communication through microanalysis of videotaped data, Cassell began to bring her knowledge of human conversation to the design of computational systems, co-designing in 1994 the first autonomous animated agent with speech, gesture, intonation, and facial expression. She is currently implementing the third generation of embodied conversational characters. The architecture for this new agent is based on conversational functions, allowing the system to exploit users' natural speech, gesture, and head movements as input to organize conversation, and to respond with appropriate autonomous verbal and nonverbal behaviors of its own.

BARBARA HAYES-ROTH
Computer Science Department
Gates Building
Stanford University
Stanford, CA 94305
Tel: 1-650-723-0506
bhr@cs.stanford.edu
http://www-ksl.stanford.edu/people/bhr
http://www.extempo.com

Barbara Hayes-Roth is a Senior Research Scientist in the Computer Science Department at Stanford University, where since 1982 she has directed the Adaptive Agents Project and, more recently, the Virtual Theater Project. Her current research focuses on interactive characters in applications designed to support learning through play and artistic self-expression. In September 1995, Dr. Hayes-Roth also founded Extempo Systems, Inc., which makes interactive characters for commercial applications in adaptive learning, electronic commerce, and interactive entertainment.

W. LEWIS JOHNSON
Director, Center for Advanced Research in Technology for Education (CARTE)
USC / ISI
4676 Admiralty Way
Marina del Rey, CA 90292
Tel: 1-310-822-1511
Fax: 1-310-823-6714
johnson@ISI.EDU
http://www.isi.edu/isd/johnson.html

Dr. Johnson is a Project Leader at USC / Information Sciences Institute and Research Associate Professor of Computer Science at the University of Southern California (USC). He received his A.B. degree in Linguistics from Princeton University in 1978, and his M.Phil. and Ph.D. degrees in Computer Science from Yale University in 1980 and 1985, respectively. He is co-editor of the journal Automated Software Engineering, President-Elect of the Artificial Intelligence in Education Society, a member of the governing board of the Autonomous Agents Conferences, Chair of SIGART, and a member of the ACM SIG Board.

JAMES LESTER
Department of Computer Science
North Carolina State University
Engineering Graduate Research Center
Raleigh, NC 27695-7534
Tel: 1-919-515-7534
Fax: 1-919-515-7925
lester@csc.ncsu.edu
http://multimedia.ncsu.edu/imedia

James Lester is Director of the IntelliMedia Initiative at North Carolina State University, where he is also an Assistant Professor of Computer Science. Lester has lectured widely on lifelike animated agents that provide real-time problem-solving advice to students and on virtual cinematography for 3D self-explaining learning environments. He earned his PhD, MSCS, and BA in Computer Sciences from the University of Texas at Austin, and also holds a BA in History from Baylor University. He received the Best Paper Award at the 1997 International Conference on AI in Education and a CAREER Award from the National Science Foundation.

JEFF RICKEL
USC Information Sciences Institute
4676 Admiralty Way, Suite 1001
Marina del Rey, CA 90292-6695
Tel: 1-310-822-1511 x124
Fax: 1-310-822-0751
rickel@isi.edu
http://www.isi.edu/isd/rickel

Jeff Rickel is a Research Computer Scientist at the Information Sciences Institute and a Research Assistant Professor in the Department of Computer Science at the University of Southern California. He has been active in artificial intelligence research since 1985, when he joined Texas Instruments to study the use of artificial intelligence in industrial automation. He received his Ph.D. in Computer Science from the University of Texas in 1995 for his work on automated modeling of physical systems. Since then, Dr. Rickel's research has focused on animated, intelligent agents for training in virtual reality.

=============================================================================

Short Description

Interactions with animated characters should take place through the modalities that we share as real people, such as language, gesture, and shared perceptions of the world. This course will explore several ways that real-time, animated, embodied characters can be given more human-like intelligence and communication skills so that they can act, react, make decisions, and take the initiative.

=============================================================================

Expanded Statement

As real-time characters become almost commonplace, we begin to face the next challenge of making those characters interact with real people. Interactions with these characters should take place through the modalities that we share as real people: especially language, gesture, and shared perceptions of the world. This course will explore several ways that real-time, animated, embodied characters can be given more intelligence and communication skills so that they can act, react, make decisions, and take the initiative. Applications to collaborative groups, interactive training, and smarter games will be addressed.

=============================================================================

Course Prerequisites

Some experience with graphical modeling and animation of human-like characters would be an asset but is not strictly essential.

=============================================================================

List of Topics

Actions required for animated agents (faces, arms, legs, and eyes).
Knowledge and action representation.
Agent architectures.
Smart conversations.
Agents for pedagogical interaction.
Managing multi-agent interactions.
Language and gesture as control modalities.

=============================================================================

Course Syllabus

8:30 - 8:40 Badler (10 min.)
Welcome and Overview

8:40 - 9:30 Badler (50 min.)
Action Primitives
- Computational requirements for smarter embodied agents
- Faces, arms, legs, and eyes
Action Representation
- Non-linear animation
- Parallel Transition Networks
- Planning
- Control, interruption, surprise, and opportunism

9:30 - 10:00 Cassell (30 min.)
Conversational Agents
What It Means to be "Smart" about Conversation
- Conversation is composed of propositional & interactional smarts
- Conversation is advanced by verbal and nonverbal means
How Humans are Smart about Conversation
- We pick up on very tiny cues in both the speech and non-verbal channels
The Role of Conversational Smarts in Animated Agents
- Increased smoothness of interaction with humans
- Less disfluency
- Allows both system & human to take the initiative in the interaction

10:00 - 10:15 (break) (15 min.)

10:15 - 11:00 Cassell (45 min.)
Agent Integration
- How to incorporate conversational smarts into agent architectures
- KQML frames for verbal & nonverbal, propositional & interactional data
- Maintaining verbal and non-verbal focus throughout the architecture
- Modeling social and goal-oriented behaviors
- Some examples of smart conversational agents

11:00 - 12:00 Hayes-Roth (60 min.)
Communicative Agents
- Agent architectures: Blackboards
- Virtual actors and improvisation
- Natural language dialogue
- Entertainment applications

12:00 - 1:30 (lunch) (90 min.)

1:30 - 2:15 Johnson (45 min.)
Pedagogical Agents
- The SOAR agent architecture
- Education and training applications

2:15 - 3:00 Rickel (45 min.)
Task-Oriented Collaboration
- Plan construction, revision and execution
- Plan recognition
- Task-oriented dialogue
- Teams

3:00 - 3:15 (break) (15 min.)

3:15 - 4:00 Lester (45 min.)
Personality-Rich Pedagogical Agents
- Situated emotive communication
- The emotive-kinesthetic behavior sequencing framework
- Designing emotive behavior spaces
- Structuring emotive behavior spaces - pedagogical speech acts
- Deictic believability
- Ambiguity appraisal
- Gesture & locomotion planning
- Utterance planning
- Coordinating deictic gesture, locomotion, and speech

4:00 - 4:30 Badler (30 min.)
Natural Language Interfaces
- Parsers and semantic tagging
- Space and motion references
- Agent manner
- Action dictionaries

4:30 - 5:00 Panel (all) (30 min.)
Questions and Issues