
Multi-agent task allocation

By encapsulating virtual human activities in PaT-Nets, we can interactively control the assignment of tasks to agents. A menu or program binds actions to individuals, who then execute the PaT-Net processes. Since the processes can query the environment and other agents before they begin executing, multi-agent synchronization and coordination can be modeled. Thus an agent can start a task when another signals that the situation is ready, or one agent can lead another in a shared task. The latter is especially useful when an avatar works with a simulated agent to perform a two-person task. One virtual human is designated the ``leader'' (typically the avatar, so that the live participant remains in control) and the other the ``follower.'' The follower's motion is computed after each time-stepped motion of the leader. (The reverse situation, where the agent leads the avatar, may be needed for training and educational applications.) These are the first steps toward a virtual social architecture.
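A minimal sketch of this leader/follower coupling, in Python for illustration only: the Agent class and the motion_step and mirror_step names are hypothetical stand-ins, not the PaT-Net interface. The essential point is the ordering: the follower's update is computed only after each time step of the leader's motion.

    # Hypothetical sketch of leader/follower coordination for a shared
    # two-person task; names and motion model are invented placeholders.
    class Agent:
        def __init__(self, name):
            self.name = name
            self.pose = 0.0            # stand-in for a full body posture

        def motion_step(self, dt):
            """Advance this agent's own motion by one time step."""
            self.pose += dt            # placeholder motion update

        def mirror_step(self, leader):
            """Derive this agent's posture from the leader's state."""
            self.pose = leader.pose    # placeholder coupling to the leader

    def run_shared_task(leader, follower, steps, dt=0.033):
        for _ in range(steps):
            leader.motion_step(dt)         # the leader (avatar) moves first ...
            follower.mirror_step(leader)   # ... then the follower is updated

    run_shared_task(Agent("avatar"), Agent("agent"), steps=100)

Swapping which virtual human plays the leader role covers the reverse case, where the agent leads the avatar.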

We developed a prototype system for agent task assignment to evaluate a multi-function aircraft maintenance equipment cart (``MASS''). The user specifies tasks for an agent, and the agent accepts only those tasks for which it is both qualified and responsible. Tasks can be queued in advance and are executed as prior tasks complete or as other agent or environment conditions are satisfied.
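The acceptance policy might be sketched as follows, again in illustrative Python; Task, TaskAgent, and the skill/duty fields are assumed names, not the MASS system's actual interface. An agent accepts only tasks matching its qualifications and responsibilities, queues them, and runs each one when its preconditions hold.

    # Sketch of the task-assignment policy described above; all names
    # are illustrative, not the prototype's real interface.
    from collections import deque

    class Task:
        def __init__(self, skill, duty, precondition):
            self.skill, self.duty = skill, duty
            self.precondition = precondition   # callable: world -> bool

        def execute(self, world):
            print("executing task:", self.skill)

    class TaskAgent:
        def __init__(self, skills, duties):
            self.skills, self.duties = set(skills), set(duties)
            self.queue = deque()

        def assign(self, task):
            """Accept a task only if qualified and responsible for it."""
            if task.skill in self.skills and task.duty in self.duties:
                self.queue.append(task)
                return True
            return False   # reject tasks outside this agent's role

        def step(self, world):
            """Run the next queued task once its preconditions hold."""
            if self.queue and self.queue[0].precondition(world):
                self.queue.popleft().execute(world)

    agent = TaskAgent(skills={"refuel"}, duties={"maintenance"})
    agent.assign(Task("refuel", "maintenance", lambda w: w["cart_ready"]))
    agent.step({"cart_ready": True})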

Once we can generate and control multiple agents and avatars, many social and community issues arise, including authentication of identity, capabilities, permissions, social customs, transfer of object control, shared behaviors, and coordination of group tasks. The underlying technology for sharing interactive experience will depend on distributed system protocols, communication technology, client workstation performance, avatar graphics, and so on. Many of these issues are being addressed by other ad hoc groups, such as Living Worlds, Open Community, and Universal Avatars. Having two avatars ``shake hands'' may be considered the first stage of a social encounter, one requiring significant attention to the details of avatar interaction, body representation, and action synchronization. Assuming the communications can be done fast enough (a big assumption), our avatars should be able to reach for each other's hands, detect a collision/connection, and then let the follower avatar position his or her hand according to the leader's spatial position. Indeed, such a demonstration has already been constructed by Stansfield at Sandia National Labs with Jack avatars, in-house network communication software, head-mounted displays, and end-effector position/orientation sensors on the participants.
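As a rough sketch of the handshake step under these assumptions (Python for illustration; the function names and the 5 cm contact threshold are invented, and a real system would solve arm inverse kinematics rather than placing the hand directly):

    # Hypothetical per-frame handshake update: once the hands come within
    # a contact threshold, the leader's hand drives the shared grasp.
    import math

    CONTACT_DIST = 0.05   # meters; assumed contact threshold

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def handshake_step(leader_hand, follower_hand):
        """Return the follower's new hand position for this frame."""
        if dist(leader_hand, follower_hand) < CONTACT_DIST:
            return leader_hand     # connected: follow the leader's hand
        return follower_hand       # not yet in contact: keep reaching

Before contact each participant reaches independently; after the collision/connection is detected, the follower's hand is re-positioned each frame from the leader's.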





