Octavia depicted in a maintenance task where good cognitive skills are required.

Our approach to Human-Robot Interaction is through cognitive robotics: understanding how and why people act the way they do. More capable and intelligent robots and autonomous systems will require more human-like cognitive abilities.

Our hypothesis is that robots and autonomous systems that use human-like representations, strategies, and knowledge will collaborate and interact better with the people who use them. Similar representations and reasoning mechanisms make it easier for people to work with these autonomous systems. An autonomous system must be able to explain its decisions in a way that people understand, which should lead to better trust and acceptance of the system. If an autonomous system can predict a person's needs, even in the very short term, it can prepare for them and act appropriately.

In this line of research, computational cognitive models are used to build process models of human cognitive skills, and those models are then used as reasoning mechanisms on the robots and autonomous systems. We build computational cognitive models of people -- their perception, their memory, their attention, their reasoning, their spatial abilities, and their thinking. We use an embodied version of ACT-R (Anderson et al., 2007) that we call ACT-R/E (Trafton et al., 2013). ACT-R (and ACT-R/E) are computational systems based on theories of how human reasoning works; they capture known facts and constraints about how the mind works, and connect well with psychological data (experiments) and neuroscience data (fMRI).
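At its core, an ACT-R-style architecture runs a recognize-act cycle: production rules test the contents of buffers (visual, retrieval, motor, and so on) and fire to update them. The following is a minimal sketch of that cycle; all rule, buffer, and slot names are invented for illustration, and this is not the actual ACT-R/E API.

```python
# Minimal sketch of a production-system cycle in the spirit of ACT-R.
# (All names here are illustrative, not the real ACT-R/E interface.)

def match(rule_condition, buffers):
    """A rule matches when every (buffer, slot, value) test holds."""
    return all(buffers.get(buf, {}).get(slot) == value
               for buf, slot, value in rule_condition)

def cognitive_cycle(rules, buffers):
    """One recognize-act cycle: fire the first matching production."""
    for condition, action in rules:
        if match(condition, buffers):
            action(buffers)   # actions update buffers (memory, motor, ...)
            return True
    return False              # no rule matched; the model is idle

# Toy model: if the robot sees a person pointing, shift gaze to the target.
buffers = {"visual": {"object": "person", "gesture": "pointing",
                      "target": "toolbox"}}

def attend_target(b):
    b["motor"] = {"action": "look-at", "target": b["visual"]["target"]}

rules = [([("visual", "gesture", "pointing")], attend_target)]
cognitive_cycle(rules, buffers)
print(buffers["motor"])   # -> {'action': 'look-at', 'target': 'toolbox'}
```

The real architecture adds subsymbolic machinery on top of this symbolic cycle -- activation-based memory retrieval and utility-based conflict resolution among matching rules -- which is what lets the models fit human timing and error data.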

We have two primary scientific goals:

  • To understand the embodied nature of cognition: how people work in the physical world.
  • To improve human-robot interaction through high-fidelity models of individuals, so that we can provide assistance to them. For example, our models understand that people do not have perfect memories and cannot see behind their heads. This knowledge allows our model to remind a person what they were doing if they forgot, or to show them something in the environment they did not see.
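The "imperfect memory" assumption can be made concrete with ACT-R's base-level learning equation, B = ln(sum_j t_j^-d), where t_j is the time since the j-th rehearsal of an item and d is the decay rate (conventionally 0.5). The sketch below uses an illustrative retrieval threshold, not a fitted one, to show how a model of this kind could decide when a person has likely forgotten a task step and should be reminded.

```python
import math

# Sketch of ACT-R's base-level activation: B = ln(sum_j t_j ** -d),
# where t_j is the time (s) since the j-th rehearsal and d is the decay
# rate (conventionally 0.5). The threshold below is illustrative only.

def base_level_activation(rehearsal_times, now, d=0.5):
    return math.log(sum((now - t) ** -d for t in rehearsal_times))

def should_remind(rehearsal_times, now, threshold=0.0):
    """Remind the person when the modeled memory falls below threshold."""
    return base_level_activation(rehearsal_times, now) < threshold

# A step rehearsed at t=0 s and t=10 s is still retrievable right away...
print(should_remind([0, 10], now=11))    # -> False
# ...but has likely been forgotten ten minutes later.
print(should_remind([0, 10], now=600))   # -> True
```

Because activation decays smoothly, the same computation also ranks *which* forgotten step to remind the person about first: the one whose activation has dropped furthest below threshold.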

Some of the cognitive models that have been developed and used in various research projects include:

  • Gaze following: The ability, which develops by around 18 months of age, to follow another person's gaze to objects in the environment (such as a toy).
  • Level 1 Perspective Taking: The ability, which develops around the age of two years, to understand what another person is pointing at.
  • Visual, Spatial Perspective Taking via mental simulation: Around the age of 4-5 years, a child can mentally simulate how the world looks from someone else's point of view.
  • Conversation tracking: Being able to follow several people engaged in conversation and knowing where to look during the conversation.
  • Teaming via a model of one's self: Deciding what a teammate will do by modeling the teammate as one's self.
  • Theory of Mind: The ability to infer the beliefs, desires, and intentions of others, which develops around the age of 5.
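As one illustration of how a perspective-taking skill can be computed, the sketch below decides whether a teammate can see an object by combining a field-of-view test with a line-of-sight occlusion test on a 2-D map. All geometry, parameter values, and names are invented for illustration and are not drawn from the models above.

```python
import math

# Illustrative Level-1 visual perspective taking on a 2-D map:
# can a viewer at `viewer`, facing `heading` (radians), see `obj`?
# Obstacles are modeled as (x, y, radius) discs.

def can_see(viewer, heading, obj, obstacles, fov=math.radians(120)):
    dx, dy = obj[0] - viewer[0], obj[1] - viewer[1]
    # Outside the field of view?
    angle = math.atan2(dy, dx)
    diff = (angle - heading + math.pi) % (2 * math.pi) - math.pi
    if abs(diff) > fov / 2:
        return False
    # Occluded? Check each obstacle's distance to the sight-line segment.
    length = math.hypot(dx, dy)
    for (ox, oy, r) in obstacles:
        t = max(0.0, min(1.0, ((ox - viewer[0]) * dx + (oy - viewer[1]) * dy)
                          / (length ** 2)))
        px, py = viewer[0] + t * dx, viewer[1] + t * dy
        if math.hypot(ox - px, oy - py) < r:
            return False
    return True

# A teammate at the origin facing +x can see a toolbox at (4, 0)...
print(can_see((0, 0), 0.0, (4, 0), obstacles=[]))            # -> True
# ...but not when a crate of radius 1 sits between them.
print(can_see((0, 0), 0.0, (4, 0), obstacles=[(2, 0, 1)]))   # -> False
```

A robot that runs this check from its teammate's pose rather than its own can notice that something visible to the robot is hidden from the person, and point it out.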


Robotic Secrets Revealed, Episode 001 (Publication Approval: 09-1226-1952)
A Naval Research Laboratory (NRL) scientist shows a magic trick to a Mobile-Dextrous-Social Robot, demonstrating the robot's use and interpretation of gestures. The video highlights recent gesture recognition work and NRL's novel cognitive architecture, ACT-R/E. While set in a popular game of skill, this video illustrates several Navy-relevant issues, including: a computational cognitive architecture that allows autonomous function and integrates perceptual information with higher-level cognitive reasoning; gesture recognition for shoulder-to-shoulder human-robot interaction; and anticipation and learning on a robotic system. Such abilities will be critical for future naval autonomous systems for persistent surveillance, tactical mobile robots, and other autonomous platforms. Researchers at NRL's Navy Center for Applied Research in Artificial Intelligence (NCARAI), within the laboratory's Information Technology Division, received the "Most Informative Video" award at the 21st International Joint Conference on Artificial Intelligence held in California.

Robotic Secrets Revealed, Episode 002 (Publication Approval: 11-1226-2182)
Episode 2 of Robotic Secrets Revealed demonstrates research on robot perception (including object recognition and multi-modal person identification) and embodied cognition (including theory of mind, or the ability to reason about what others believe). The video features two people interacting with two robots. Naval Research Laboratory (NRL) scientists won the "Best Educational Video" award at the Association for the Advancement of Artificial Intelligence's annual conference in San Francisco on August 8, 2011.

Dr. J Gregory Trafton
Intelligent Systems, Code 5515
Naval Research Laboratory
Washington DC 20375
Email: W5515 "at" itd.nrl.navy.mil

Key Publications

L. Moshkina, Trickett, S. B., and Trafton, J. G., Social Engagement in Public Places: A Tale of One Robot, Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction. ACM, Bielefeld, Germany, pp. 382-389, 2014.
J. G. Trafton, Hiatt, L. M., Harrison, A. M., Tamborello, F., Khemlani, S. S., and Schultz, A. C., ACT-R/E: An Embodied Cognitive Architecture for Human-Robot Interaction, Journal of Human-Robot Interaction, vol. 2, no. 1, pp. 30-55, 2013.
W. Lawson, Trafton, J. G., and Martinson, E., Complexion as a Soft Biometric in Human-Robot Interaction, IEEE Sixth International Conference on Biometrics: Theory, Applications, and Systems. IEEE Press, 2013.
W. Lawson and Trafton, J. G., Unposed Object Recognition using an Active Approach, International Conference on Computer Vision Theory and Applications. Barcelona, Spain, pp. 309-314, 2013.
L. M. Hiatt, Harrison, A. M., and Trafton, J. G., Accommodating human variability in human-robot teams through theory of mind, in Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Three, 2011.

Selected Publications

A. M. Harrison and Trafton, J. G., Gaze-following and awareness of visual perspective in chimpanzees, 9th International Conference on Cognitive Modeling - ICCM 2009. 2009.
W. G. Kennedy, Bugajska, M. D., Harrison, A. M., and Trafton, J. G., "Like-Me" simulation as an effective and cognitively plausible basis for social robotics, International Journal of Social Robotics, vol. 1, pp. 181-194, 2009.
W. G. Kennedy, Bugajska, M. D., Adams, W., Schultz, A. C., and Trafton, J. G., Incorporating Mental Simulation for a More Effective Robotic Teammate, in Proceedings of the Twenty-Third Conference on Artificial Intelligence (AAAI 2008), Vancouver, 2008.
J. G. Trafton, Bugajska, M. D., Fransen, B. R., and Ratwani, R. M., Integrating vision and audition within a cognitive architecture to track conversations, in Proceedings of the Third International Conference on Human Robot Interaction (HRI 2008), Amsterdam, 2008.
W. G. Kennedy, Bugajska, M. D., Marge, M., Adams, W., Fransen, B. R., Perzanowski, D., Schultz, A. C., and Trafton, J. G., Spatial Representation and Reasoning for Human-Robot Collaboration, in Proceedings of the Twenty-Second Conference on Artificial Intelligence, Vancouver, 2007.
J. G. Trafton, Schultz, A. C., Perzanowski, D., Adams, W., Bugajska, M. D., Cassimatis, N., and Brock, D. P., Children and robots learning to play hide and seek, in Proceedings of the 2006 ACM conference on human-robot interaction, 2006.
J. G. Trafton, Cassimatis, N., Bugajska, M. D., Brock, D. P., Mintz, F. E., and Schultz, A. C., Enabling effective human-robot interaction using perspective-taking in robots, IEEE Transactions on Systems, Man, and Cybernetics - Part A, vol. 35, no. 4, pp. 460-470, 2005.
T. W. Fong, Nourbakhsh, I., Ambrose, R., Simmons, R., Schultz, A. C., and Scholtz, J., The Peer-to-Peer Human-Robot Interaction Project, in Proceedings of the AIAA Space 2005, Long Beach, 2005.
N. Cassimatis, Trafton, J. G., Bugajska, M. D., and Schultz, A. C., Integrating Cognition, Perception and Action through Mental Simulation in Robots, Journal of Robotics and Autonomous Systems, vol. 49, no. 1-2, pp. 12-23, 2004.
M. D. Bugajska, Schultz, A. C., Trafton, J. G., Taylor, M., and Mintz, F. E., A hybrid cognitive-reactive multi-agent controller, in Proceedings of 2002 IEEE/RSJ International conference on Intelligent Robots and Systems (IROS 2002), Switzerland, 2002, pp. 2807-2812.