Our approach to Human-Robot Interaction is through cognitive robotics: understanding how and why people act the way they do. More capable and intelligent robots and autonomous systems will require more human-like cognitive abilities.
Our hypothesis is that robots and autonomous systems that use human-like representations, strategies, and knowledge will enable better collaboration and interaction with the people who use them. Similar representations and reasoning mechanisms make it easier for people to work with these autonomous systems. An autonomous system must be able to explain its decisions in a way that people understand, which should lead to better trust and acceptance of the system. If an autonomous system can predict a person's needs, even in the very short term, it can prepare for them and act appropriately.
In this line of research, we use computational cognitive models to build process models of human cognitive skills, and then use those models as reasoning mechanisms on robots and autonomous systems. We build computational cognitive models of people -- their perception, their memory, their attention, their reasoning, their spatial abilities, and their thinking. We use an embodied version of ACT-R (Anderson et al., 2007) that we call ACT-R/E (Trafton et al., 2013). ACT-R (and ACT-R/E) are computational systems based on theories of how human reasoning works; they capture known facts and constraints about how the mind works and connect well with psychological data (experiments) and neuroscience data (fMRI).
We have two primary scientific goals:
- To understand the embodied nature of cognition: how people work in the physical world.
- To improve human-robot interaction by building high-fidelity models of individuals so that we can provide assistance to them. For example, our models understand that people do not have perfect memories and cannot see behind their heads. This knowledge allows a model to remind a person of what they were doing if they have forgotten, or to point out something in the environment they did not see.
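As an illustration of the kind of memory limitation such models capture, ACT-R's base-level learning equation makes an item's activation decay with the time since each past use, so recently rehearsed items are easy to retrieve and stale ones are likely forgotten. The following is a minimal sketch of that equation; the parameter values and the reminder scenario are illustrative assumptions, not details from the projects described here.

```python
import math

def base_level_activation(use_times, now, decay=0.5):
    """ACT-R base-level activation: B = ln(sum over uses of t_j^(-d)),
    where t_j is the time since use j and d is the decay rate.
    Activation falls as the uses recede into the past, modeling forgetting."""
    return math.log(sum((now - t) ** -decay for t in use_times))

# A task step last rehearsed 2 s ago is far more active than one from
# 100 s ago, so a model can predict which step a person may have forgotten
# and offer a reminder for the low-activation one.
recent = base_level_activation([58.0], now=60.0)   # used 2 s ago
stale = base_level_activation([-40.0], now=60.0)   # used 100 s ago
print(recent > stale)  # True
```

With the conventional decay of d = 0.5, a single use 100 s in the past yields an activation of ln(100^-0.5) = -2.3, well below the -0.35 of a use 2 s ago.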
Some of the cognitive models that have been developed and used in various research projects include:
- Gaze following: The ability, developing in infants around 18 months of age, to follow another person's gaze to objects in the environment (such as a toy).
- Level 1 Perspective Taking: The ability to understand what another person is pointing at, which develops around the age of two years.
- Visual, Spatial Perspective Taking via mental simulation: Around the age of 4-5 years, a child can mentally simulate how the world looks from someone else's point of view.
- Conversation tracking: The ability to follow several people engaged in conversation and to know where to look, and when, during the conversation.
- Teaming via a model of one's self: Predicting what a teammate will do by modeling the teammate as one's self.
- Theory of Mind: The ability to infer the beliefs, desires and intentions of others, which develops around the age of 5.
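The perspective-taking models above ultimately rest on a geometric question: can the other person see a given object from where they stand? Below is a minimal sketch of that level-1 computation; the 2D scene, the circular occluders, and the function names are illustrative assumptions rather than the actual model, which reasons within ACT-R/E.

```python
import math
from dataclasses import dataclass

@dataclass
class Circle:
    """A circular occluder (e.g. a piece of furniture) in a 2D scene."""
    x: float
    y: float
    r: float

def can_see(observer, target, occluders):
    """Level-1 perspective check: the target is visible to the observer
    if no occluder intersects the straight line between them."""
    ox, oy = observer
    tx, ty = target
    dx, dy = tx - ox, ty - oy
    seg_len_sq = dx * dx + dy * dy
    for c in occluders:
        # Project the occluder centre onto the sight line, clamped to the segment.
        t = max(0.0, min(1.0, ((c.x - ox) * dx + (c.y - oy) * dy) / seg_len_sq))
        px, py = ox + t * dx, oy + t * dy
        if math.hypot(c.x - px, c.y - py) < c.r:
            return False  # sight line passes through this occluder
    return True

wall = Circle(5.0, 0.0, 1.0)
print(can_see((0, 0), (10, 0), [wall]))  # False: the wall blocks the line
print(can_see((0, 5), (10, 0), [wall]))  # True: this vantage clears the wall
```

A robot running such a check from a person's position, rather than its own, can infer that an object visible to itself may be hidden from the person, and act on that difference.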
Dr. J Gregory Trafton
Navy Center for Applied Research in Artificial Intelligence
Information Technology Division
Naval Research Laboratory
Washington DC 20375
Email: W5515 "at" itd.nrl.navy.mil