Research Directions of the Center:
The Center for Human Modeling and Simulation (HMS) of the Department
of Computer and Information Science exists to promote first-rate research
of international stature. Our mission may be broadly defined as the study
of multi-modal communication with computers; as such, it encompasses the
generation of, and human interaction with, visual images, video, sound,
and touch. Dr. Badler, the Center's Director, has been actively involved
in the national and international computer graphics community since 1975.
The Center has produced dozens of Ph.D. graduates and numerous Master's
degrees, and its research is well represented in the mainstream computer
graphics literature.
The major foci of the HMS Center are: parameterized action representation,
embodied agent models, behavior-based animation of human movement,
real-time simulation, articulated and deformable object modeling
through physics-based techniques, computer vision techniques
for dynamic and deformable objects, biomedical modeling coordinating
anatomy and physiology, applications of control theory techniques
to dynamic models, and understanding the bi-directional relationships
between human movement, natural languages, and communication.
AF: AVIS-MS: Advanced Visual and Instruction Systems for Maintenance Support (N. Badler)
Complexity, customization, and packaging of military platforms and systems increase maintenance difficulty at the same time as the available pool of skilled technical personnel may be shrinking. In this environment, maintenance training, technical order presentation, and flight-line operational practice may need to adopt “just-in-time” procedural aids. Moreover, the realities of real-world maintenance may not permit the hardware indulgences and rigid controls of laboratory settings for visualization and training systems, and the actual activities of maintainers will challenge the requirements for portable or wearable devices. This project investigates technologies that may be used in the maintenance of Air Force equipment.
NASA: "RIVET: Rapid Interactive Visualization for Extensible Training" (N. Badler)
The new NASA mandate calls for missions of unprecedented remoteness
and duration. Challenges include high system complexity and
low training time and tolerance for error. Human capabilities
remain relatively fixed, and current training and instruction
tools are inadequate. Our mandate is to provide computer-based,
integrated training and instruction tools that are visually
intuitive and adaptable to user skill level and context.
NSF: American Sign Language Natural Language Generation and Machine Translation. (N. Badler, M. Marcus)
The goal of this project is to develop new technologies that enable the machine translation of English text into animations of American Sign Language. This research will make more information and services available to the majority of Deaf Americans who face English literacy challenges. Because signed languages, like ASL, contain phenomena not seen in traditional written/spoken languages, they are particularly challenging to process using standard MT approaches. Exploring the computational linguistics of ASL can help us understand the limitations of current MT technologies and motivate the development of new ones.
Improving the Realism of Agent Movement for High Density Crowd Simulation (N. Badler, N. Pelechano)
The simulation of realistic, large, dense crowds of autonomous agents is still a
challenge for the computer graphics community. Typical approaches either look like
particle simulations (where agents ‘vibrate’ back and forth) or are conservative in the
range of motion possible (agents aren’t allowed to ‘push’ each other). Our HiDAC
system (High Density Autonomous Crowds) focuses on the problem of simulating the
local motion behaviors of crowds moving in a natural manner within dynamically
changing virtual environments.
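The flavor of such a local-motion model can be conveyed with a minimal social-forces-style update (an illustrative sketch under stated assumptions, not the actual HiDAC algorithm): each agent steers toward its goal while nearby agents push it away, so contact and pushing are possible rather than forbidden.

```python
import math

def step_agent(pos, goal, neighbors, dt=0.1, max_speed=1.3,
               personal_radius=0.4, repulsion=2.0):
    """One local-motion update for a single agent (toy sketch).

    The agent moves toward `goal` at up to `max_speed`; each neighbor
    inside `personal_radius` adds a repulsive push, allowing dense
    groups to compress instead of hard-blocking each other.
    """
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(gx, gy) or 1e-9
    # Desired velocity: straight toward the goal at maximum speed.
    vx, vy = max_speed * gx / dist, max_speed * gy / dist
    # Repulsive contribution from each neighbor inside personal space.
    for nx, ny in neighbors:
        dx, dy = pos[0] - nx, pos[1] - ny
        d = math.hypot(dx, dy) or 1e-9
        if d < personal_radius:
            push = repulsion * (personal_radius - d) / d
            vx += push * dx
            vy += push * dy
    return (pos[0] + vx * dt, pos[1] + vy * dt)
```

Because the repulsion term only activates inside an agent's personal radius, a model of this shape lets crowds squeeze through bottlenecks rather than oscillate in place, which is the failure mode the paragraph above describes.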
"… and Analysis of Communicative Gesture" (N. Badler)
Grounded in the principles of movement observation science,
specifically Laban Movement Analysis (LMA) and its Effort and
Shape components, our synthesis system has the power and flexibility
to procedurally synthesize gestures from key pose and timing
information plus Effort and Shape qualities. The acquisition
system extracts the Effort and Shape qualities from live
performance and correlates them with observations validated by LMA.
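As a toy illustration of how an Effort quality might modulate procedural synthesis (a hypothetical sketch, not the Center's actual system), the LMA Time factor can be mapped to a timing warp on key-pose interpolation: sudden movements front-load the transition, sustained ones delay it.

```python
def effort_interp(pose_a, pose_b, t, time_effort=0.0):
    """Interpolate joint values between two key poses (toy sketch).

    `time_effort` in [-1, 1] stands in for the LMA Effort "Time"
    factor (an assumption for illustration): positive values (sudden)
    warp the interpolation parameter to move early, negative values
    (sustained) delay the motion, and 0 gives plain linear timing.
    """
    # Map effort to a warp exponent: sudden -> exponent < 1 (early),
    # sustained -> exponent > 1 (late).
    exponent = 2.0 ** (-time_effort)
    w = t ** exponent
    return [a + w * (b - a) for a, b in zip(pose_a, pose_b)]
```

The key poses and endpoints are untouched by the warp; only the timing profile between them changes, which is the spirit of layering Effort qualities over keyframe data.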
"Virtual Human Testbed"
Highly controllable virtual humans are essential for simulations
that involve interactions with highly detailed virtual environments.
These virtual environments could represent a power plant, an
airport, a ship, or a factory; in most cases they are created
from CAD systems and represent conceptual or detailed designs.
These simulations should include real-world operations and
activities for the virtual humans, helping ensure that the design
is safer, more useful, more maintainable, and more comfortable
for the targeted user population.
ONR: "Virtual Technologies and Environments" (N. Badler)
The Virtual Technologies and Environments (VIRTE) project from
the Office of Naval Research is developing a virtual reality
based training system for Marine Corps fire teams in close quarters
battle (CQB). HMS has the task of creating smarter, real-time,
reactive computer generated forces (CGFs). As in many games,
current virtual opponents simply replay stored motions (key-framed
or motion captured). This means their actions are not tailored
to context, often resulting in large motion datasets or unacceptable
behaviors, such as limbs passing through walls or stereotyped
(un)emotional reactions. Virtual opponents also often lack appropriate
responses to environmental stimuli, such as low illumination,
obstacles, light flashes, other agents, and explosions. HMS
is beginning work on this project by investigating extending
the capabilities of the underlying game engine, Gamebryo, by
adding finite state machine controllers and inverse kinematics.
Other areas of focus include synthetic vision and attention
models, which will allow smart agents with specific cognitive
tasks and internal states to generate appropriate eye, face,
head, and body behaviors in dynamically changing environments.
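A finite state machine controller of the kind mentioned above can be sketched as a transition table keyed on (state, stimulus) pairs. The state and stimulus names here are illustrative assumptions, not VIRTE's actual design or the Gamebryo API.

```python
# Hypothetical transition table for a reactive computer-generated
# force: (current state, stimulus) -> next state.
TRANSITIONS = {
    ("patrol", "see_opponent"): "engage",
    ("patrol", "explosion"): "take_cover",
    ("engage", "low_light"): "take_cover",
    ("engage", "opponent_down"): "patrol",
    ("take_cover", "all_clear"): "patrol",
}

class FSMController:
    """Minimal finite-state-machine controller for one agent."""

    def __init__(self, start="patrol"):
        self.state = start

    def handle(self, stimulus):
        # Look up a transition; unknown stimuli leave the state unchanged,
        # so the agent never enters an undefined behavior.
        self.state = TRANSITIONS.get((self.state, stimulus), self.state)
        return self.state
```

A table-driven design like this keeps the behavior data separate from the update loop, so new stimuli (light flashes, obstacles, other agents) can be added without touching the controller code.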
The LiveActor project has the overall objective of providing
real-time, 3D, multi-modal interaction between live and virtual
embodied agents. Crucial to this interaction is the computational
modeling of agents with empirical attributes based on known
testing instruments, physiological performance models, and psychosocial
behaviors. Agent construction methodologies are under-studied,
and we address this issue through both graphical and language
user interfaces. We hypothesize that realistic agent behaviors
are even more important than visual appearance in experiential
veracity of a live simulation, and this can be tested in the
LiveActor environment.
"ACUMEN: Amplifying Control and Understanding of Multiple ENtities"
This project involves the synthesis and recognition of aggregate
movements in a virtual environment with a high-level (natural
language) interface. The principal components include:
an interactive interface for aggregate control based on a collection
of parameters extending an existing movement quality model;
a feature analysis of aggregate motion verbs; and recognizers
to detect occurrences of those features in a collection of simulated
movements.
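A toy recognizer for one aggregate-motion feature might compare the group's spatial spread across two frames (an illustrative assumption, not the project's actual feature analysis), labeling the movement as dispersing, converging, or holding.

```python
def mean_spread(positions):
    """Average distance of agents from their centroid."""
    n = len(positions)
    cx = sum(x for x, _ in positions) / n
    cy = sum(y for _, y in positions) / n
    return sum(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
               for x, y in positions) / n

def classify_aggregate(frame_a, frame_b):
    """Label the aggregate movement between two frames (toy sketch).

    'disperse' if the group spread grows noticeably, 'converge' if it
    shrinks, 'hold' otherwise; the 5% band is an arbitrary threshold
    chosen for illustration.
    """
    before, after = mean_spread(frame_a), mean_spread(frame_b)
    if after > before * 1.05:
        return "disperse"
    if after < before * 0.95:
        return "converge"
    return "hold"
```

Verbs like "scatter" or "gather" could then be grounded in sequences of such per-frame labels, which is the spirit of mapping a feature analysis of motion verbs onto recognizers.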
The Cultural Heritage Project involves the 3D reconstruction of Godin Tepe, an archaeological site in Iran. Excavation began there in 1975; however, due to erosion, the mud-brick reconstruction of the architecture has degraded. This project is an effort to recreate the site as well as to aid in visualizing how the site was used. AutoCAD drawings of Godin Tepe were brought into Maya and served as the blueprint for the architecture. Human figures were also modeled in Maya and placed in Room 18, speculated to have been a distribution center for weaponry and slingballs. The pottery is modeled from drawings of vessels found in Period V of Godin Tepe. The final product is a movie walkthrough. This digital reconstruction is guided by archaeology Ph.D. student Virginia Badler.