Our approach to Human-Robot Interaction is through cognitive robotics: understanding how and why people act the way they do. More capable and intelligent robots and autonomous systems will require more human-like cognitive abilities.
Our hypothesis is that robots and autonomous systems that use human-like representations, strategies, and knowledge will collaborate and interact better with the people who use them. Similar representations and reasoning mechanisms make it easier for people to work with these autonomous systems. An autonomous system must be able to explain its decisions in a way that people understand, which should lead to better trust in and acceptance of the system. If an autonomous system can predict a person's needs, even in the very short term, it can prepare for them and act appropriately.
In this line of research, we use computational cognitive models to build process models of human cognitive skills, and those models then serve as reasoning mechanisms on the robots and autonomous systems. We build computational cognitive models of people -- their perception, memory, attention, reasoning, spatial abilities, and thinking. We use an embodied version of ACT-R (Anderson et al., 2007) that we call ACT-R/E (Trafton et al., 2013). ACT-R (and ACT-R/E) are computational systems based on theories of how human reasoning works; they capture known facts and constraints about how the mind works and connect well with psychological data (experiments) and neuroscience data (fMRI).
We have two primary scientific goals:
- To understand the embodied nature of cognition: how people work in the physical world.
- To improve human-robot interaction through high-fidelity models of individuals, so that we can provide assistance to them. For example, our models understand that people do not have perfect memories and cannot see behind their heads. This knowledge allows our model to remind a person what they were doing if they forgot, or to show them something in the environment they did not see.
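The "imperfect memory" point above can be made concrete with ACT-R's standard base-level activation equation, which our models inherit: a memory's activation is the log of a sum of decaying traces of its past uses, and recall probability falls off as activation drops. The sketch below is illustrative only (function names and parameter defaults are ours, not code from ACT-R/E), but the equations are the conventional ACT-R ones.

```python
import math

def base_level_activation(presentation_ages, d=0.5):
    """ACT-R base-level activation: B = ln(sum over uses of t_j^-d).

    presentation_ages: seconds since each past use/rehearsal of the memory.
    d: decay parameter (0.5 is the conventional ACT-R default).
    """
    return math.log(sum(t ** -d for t in presentation_ages))

def retrieval_probability(activation, tau=0.0, s=0.4):
    """Probability of successful recall, given a retrieval threshold tau
    and activation noise s (the logistic form used in ACT-R)."""
    return 1.0 / (1.0 + math.exp(-(activation - tau) / s))

# A task step used 10 seconds ago is still fairly retrievable...
recent = retrieval_probability(base_level_activation([10.0]))
# ...but the same step, last used 10 minutes ago, has largely decayed --
# exactly the situation in which a reminder from the robot is useful.
old = retrieval_probability(base_level_activation([600.0]))
print(recent > old)  # True
```

Because the model tracks which steps a person has recently rehearsed, it can predict which step is likely to have fallen below the retrieval threshold and offer a reminder for that step specifically.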
Some of the cognitive models we have developed and used in various research projects include:
- Gaze following: The ability, emerging in infancy (around 18 months), to follow another person's gaze to objects in the environment (such as a toy).
- Level 1 Perspective Taking: The ability to understand what another person is pointing at, which develops around the age of two years.
- Visual, Spatial Perspective Taking via mental simulation: Around the age of 4-5 years, a child can mentally simulate how the world looks from someone else's point of view.
- Conversation tracking: Being able to follow several people engaged in conversation and knowing where to look, and when, during the conversation.
- Teaming via a model of one's self: Allows the robot to decide what a teammate will do by modeling the teammate as one's self.
- Theory of Mind: The ability to infer the beliefs, desires and intentions of others, which develops around the age of 5.
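To give a flavor of the simplest of these abilities, Level 1 perspective taking reduces to a geometric question: can the other person see a given object from where they stand? A minimal sketch, assuming a 2-D world with a field-of-view check and circular occluders (all function and parameter names here are hypothetical, not the ACT-R/E implementation):

```python
import math

def can_see(observer_pos, observer_heading, target_pos, fov_deg=120.0,
            occluders=()):
    """Level 1 perspective taking (illustrative): is the target inside the
    observer's field of view, with an unblocked line of sight?

    observer_heading is in degrees; occluders are ((cx, cy), radius) pairs.
    """
    dx = target_pos[0] - observer_pos[0]
    dy = target_pos[1] - observer_pos[1]
    # Bearing to target relative to the observer's heading, wrapped to [-180, 180).
    angle = math.degrees(math.atan2(dy, dx)) - observer_heading
    angle = (angle + 180.0) % 360.0 - 180.0
    if abs(angle) > fov_deg / 2.0:
        return False  # outside the field of view (e.g., behind the observer)
    # Check whether any occluder intersects the observer->target segment.
    length_sq = dx * dx + dy * dy
    for (cx, cy), r in occluders:
        # Closest point on the segment to the occluder's center.
        t = max(0.0, min(1.0, ((cx - observer_pos[0]) * dx +
                               (cy - observer_pos[1]) * dy) / length_sq))
        px = observer_pos[0] + t * dx
        py = observer_pos[1] + t * dy
        if math.hypot(cx - px, cy - py) < r:
            return False  # line of sight blocked
    return True

# A toy directly ahead is visible; the same toy behind a wall is not.
print(can_see((0, 0), 0.0, (5, 0)))                               # True
print(can_see((0, 0), 0.0, (5, 0), occluders=[((2.5, 0), 1.0)]))  # False
```

The developmentally later abilities in the list build on this kind of computation: Level 2 perspective taking mentally simulates the full scene from the other viewpoint, and theory of mind additionally tracks what the other person believes as a result of what they have and have not seen.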
- Robotic Secrets Revealed, Episode 001 (Publication Approval: 09-1226-1952)
- A Naval Research Laboratory (NRL) scientist shows a magic trick to a Mobile-Dextrous-Social Robot, demonstrating the robot's use and interpretation of gestures. The video highlights recent gesture recognition work and NRL's novel cognitive architecture, ACT-R/E. While set in a popular game of skill, this video illustrates several Navy-relevant issues, including: a computational cognitive architecture that allows autonomous function and integrates perceptual information with higher-level cognitive reasoning; gesture recognition for shoulder-to-shoulder human-robot interaction; and anticipation and learning on a robotic system. Such abilities will be critical for future naval autonomous systems, including persistent surveillance, tactical mobile robots, and other autonomous platforms. Researchers at NRL's Navy Center for Applied Research in Artificial Intelligence (NCARAI), within the laboratory's Information Technology Division, received the "Most Informative Video" award at the 21st International Joint Conference on Artificial Intelligence held in California.
- Robotic Secrets Revealed, Episode 002 (Publication Approval: 11-1226-2182)
Episode 2 of Robotic Secrets Revealed demonstrates research on robot perception (including object recognition and multi-modal person identification) and embodied cognition (including theory of mind, or the ability to reason about what others believe). The video features two people interacting with two robots. Naval Research Laboratory (NRL) scientists won the "Best Educational Video" award at the Association for the Advancement of Artificial Intelligence's annual conference in San Francisco on August 8, 2011.
Dr. J Gregory Trafton
Intelligent Systems, Code 5515
Naval Research Laboratory
Washington DC 20375
Email: W5515 "at" itd.nrl.navy.mil