Mixed-Initiative Systems for Dynamic Autonomy
Effective collaboration between robots and humans in accomplishing complex
tasks requires an efficient interface through which a human can communicate
and interact with a robot almost as naturally as they would with another
human. This level of interaction requires a number of capabilities
not often found in deployed robotic systems today. These include
voice recognition with integrated natural language understanding, recognition
of human gestures (such as pointing to objects), and built-in behaviors
for sequencing and executing tasks requiring various levels of control
by, and interaction with, a human supervisor (we refer to this as
dynamically adjustable autonomy, or dynamic autonomy). Use of cognitive
models aboard the robots may further enhance the human-robot interaction
through use of a common set of representations, process steps and process
times for processing sensory data, and expectations shared by both human
and robot.

The goal of this effort is to enhance
human-robot interaction for mobile, humanoid, social, and other robots
through use of cognitively plausible behaviors and human-centric interface
capabilities on-board the robots. Achieving this goal will reduce the
cognitive load associated with humans working with autonomous systems,
and allow a higher ratio of robots to humans in autonomous vehicle operations.
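To make the idea of dynamically adjustable autonomy concrete, here is a minimal sketch in Python. All names (`AutonomyLevel`, `TaskExecutor`) are hypothetical illustrations, not part of the project's actual architecture: a task executor whose level of autonomy a human supervisor can raise or lower at run time, shifting control between human and robot.

```python
# Hypothetical sketch of dynamically adjustable autonomy: a supervisor
# changes the robot's autonomy level while a task is in progress.
from enum import Enum

class AutonomyLevel(Enum):
    TELEOPERATED = 1  # human controls every action directly
    SUPERVISED = 2    # robot proposes each step; human approves or vetoes
    AUTONOMOUS = 3    # robot executes the task plan on its own

class TaskExecutor:
    def __init__(self, level=AutonomyLevel.SUPERVISED):
        self.level = level

    def set_level(self, level):
        """The human supervisor raises or lowers autonomy at run time."""
        self.level = level

    def execute(self, step, approve=lambda s: True):
        """Carry out one task step under the current autonomy level."""
        if self.level is AutonomyLevel.TELEOPERATED:
            return f"awaiting direct control for '{step}'"
        if self.level is AutonomyLevel.SUPERVISED and not approve(step):
            return f"'{step}' vetoed by supervisor"
        return f"executing '{step}'"

executor = TaskExecutor()
executor.set_level(AutonomyLevel.AUTONOMOUS)
print(executor.execute("move to waypoint"))  # executing 'move to waypoint'
```

The key design point, under these assumptions, is that the autonomy level is mutable state consulted on every step, so the human can intervene mid-task without restarting the robot's plan.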
In this project we design and implement a robotic system architecture
for a robot which can be used to collaborate with a human. The capabilities
required of the robot include voice recognition, natural language understanding,
gesture recognition, spatial reasoning, and cognitive modeling with perspective-taking.
These represent a small subset of the capabilities humans use
with one another when collaborating on a task in a complex environment,
and barely scratch the surface of what we might want to build
into an intelligent, collaborative robot.
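One piece of the spatial reasoning and perspective-taking capability mentioned above can be sketched as a frame-of-reference transform. The function name and conventions below are hypothetical, not taken from the project: a point a human indicates relative to their own body (e.g., "the cone in front of me") is converted into the robot's coordinate frame, given the human's pose as seen by the robot.

```python
# Hypothetical sketch of geometric perspective-taking: re-express a
# point given in the human's egocentric frame in the robot's frame.
import math

def human_to_robot_frame(point_h, human_pose):
    """point_h: (x, y) in the human's frame (x forward, y to the left).
    human_pose: (x, y, heading_rad) of the human in the robot's frame."""
    hx, hy, heading = human_pose
    px, py = point_h
    # Rotate the point by the human's heading, then translate by the
    # human's position, yielding coordinates in the robot's frame.
    rx = hx + px * math.cos(heading) - py * math.sin(heading)
    ry = hy + px * math.sin(heading) + py * math.cos(heading)
    return (rx, ry)
```

For example, a human standing 2 m in front of the robot and facing it (heading π in the robot's frame) who indicates a point 1 m ahead of themselves is referring to the robot-frame point (1, 0). Full perspective-taking in a cognitive model goes well beyond geometry, of course; this captures only the coordinate-frame component.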
Use of a cognitive model aboard
the robot further facilitates communication and interaction between
the human and the robot through use of a common representational framework
for the environment and objects within it, processing of sensor information,
and joint problem solving involving both humans and robots. Current
development efforts focus on enhancing the use of cognitive models. We
are also extending the architecture and methodology to include and study
collaboration between teams of robots and humans.
Results of this project are reported to DARPA for evaluation. The results
are also shared with other academic and government research groups
through presentation and publication at robotics and other technical conferences,
and publication in technical journals. Parts of this architecture
are also being extended to several robots designed specifically for enhanced
human interaction, namely NASA's humanoid robot Robonaut and MIT's clearly
non-humanoid robot Leonardo.
This effort is supported by the Defense Advanced Research Projects Agency
(DARPA) under the Mobile Autonomous Robot Software program managed by
Dr. Douglas Gage.
A full publication list for this project is available.
C. Schultz, Principal Investigator