Title: Using vision, acoustics, and natural language for disambiguation
Publication Type: Conference Paper
Year of Publication: 2007
Authors: Fransen, BR, Morariu, V, Martinson, E, Blisard, S, Marge, M, Thomas, S, Schultz, AC, Perzanowski, D
Conference Name: Proceedings of the ACM/IEEE international conference on Human-robot interaction
Keywords: acoustics, artificial intelligence, auditory perspective-taking, dialog, human-robot interaction, natural language understanding, spatial reasoning, vision
Creating a human-robot interface is a daunting task. The capabilities and functionality of the interface depend on the robustness of many different sensor and input modalities. For example, object recognition poses problems for state-of-the-art vision systems. Speech recognition in noisy environments remains problematic for acoustic systems. Natural language understanding and dialog are often limited to specific domains and baffled by ambiguous or novel utterances. Plans based on domain-specific tasks limit the applicability of dialog managers. The types of sensors used limit spatial knowledge and understanding, and constrain cognitive abilities such as perspective-taking.

In this research, we integrate several modalities, namely vision, audition, and natural language understanding, to leverage the existing strengths of each modality and to overcome individual weaknesses. We use visual, acoustic, and linguistic inputs in various combinations to solve such problems as disambiguating referents (objects in the environment), localizing human speakers, and determining the source of utterances and the appropriateness of responses when humans and robots interact. For this research, we limit our consideration to the interaction of two humans and one robot in a retrieval scenario. This paper describes the system and the integration of the various modules prior to future testing.
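To make the fusion idea concrete, here is a minimal illustrative sketch, not taken from the paper: each modality assigns a confidence score to every candidate object, and a weighted combination picks the referent. All names (`disambiguate`, the candidate objects, the weights) are hypothetical.

```python
def disambiguate(candidates, vision_scores, language_scores,
                 w_vision=0.5, w_language=0.5):
    """Return the candidate with the highest weighted multimodal score.

    A missing score from a modality is treated as 0.0, so a weak
    modality cannot veto a candidate that another modality supports.
    """
    best, best_score = None, float("-inf")
    for obj in candidates:
        score = (w_vision * vision_scores.get(obj, 0.0)
                 + w_language * language_scores.get(obj, 0.0))
        if score > best_score:
            best, best_score = obj, score
    return best

# Hypothetical example: the utterance "the red box" is scored by a
# vision module (color match) and a language module (category match).
candidates = ["red_box", "blue_box", "red_ball"]
vision = {"red_box": 0.8, "blue_box": 0.2, "red_ball": 0.7}
language = {"red_box": 0.9, "blue_box": 0.9, "red_ball": 0.1}
print(disambiguate(candidates, vision, language))  # -> red_box
```

The point of the sketch is only that neither modality alone resolves the referent (vision also favors `red_ball`, language also favors `blue_box`), while their combination does, which mirrors the paper's motivation for integrating modalities.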