The ability to carry out deliberative inference appears to be uniquely human. For more than five decades, researchers in artificial intelligence (AI) have studied, formalized, and encoded aspects of this cognitive ability, bringing us closer to both a better understanding of the human intellect and intelligent machines that can augment our everyday experience. The traditional approach to building intelligent systems has emphasized selecting one or more facets of human inference and encoding those facets in programs that carry out complex activities such as planning, diagnosis, problem solving, and learning. Most systems of this sort share the characteristic of single-layered inference. That is, their procedures are often fixed and lack compositionality, leaving the systems unable to learn new modes of inference and new reasoning strategies.
This research program has as its goal the development of a cognitive system that acquires strategies for controlling inference. Much as humans can learn to solve mathematical equations, prove logical theorems, analyze filmic metaphor, and construct legal arguments, a broadly intelligent system must be able to develop new forms of reasoning about the world. Even commonplace activities such as walking down the street while composing a grocery list require this capacity. Although planning and navigation may rely on their own inference routines, learning to switch attention between the two tasks is, itself, another inference strategy. To make progress on this front, this research program emphasizes two principal ideas: the centrality of attention and the dual representation of inference.
Throughout the day our attention shifts constantly, moving from thought to thought, sometimes attracted by a sudden noise or directed by an involved storyline. Deliberative reasoning, in particular, involves the conscious control of attention from one focal point to another to solve a challenging problem or to pursue a compelling belief. Extending this view to inference in general, whether conscious or beneath awareness, introduces a simplifying, powerful idea: the direction of attention controls the mechanisms of inference. By creating strategies for attending to thoughts and perceptions, one can express recipes for different kinds of reasoning. And, by extension, learning these strategies involves attending to trains of thought that make sense of the world.
Identifying attention strategies with the control of inference does not imply that a cognitive system should cast all its reasoning as an interplay between attention and inference. AI researchers have made tremendous advances on a myriad of tasks, including object recognition, temporal and causal reasoning, and decision making. Attempting to recast these successes in terms of a theory of attention is premature and introduces unnecessary obstacles on the road to developing functioning intelligent agents. Instead, a focus of attention combined with an interlingua can provide a route to integrating these systems into a larger whole. Importantly, this dual representation of inference opens the way to learning strategies from the combined interaction of existing components; strategies from which the system can construct new components whose semantics are defined by, but not identified with, their parts.
Developing a system that engages in and adapts its deliberative inference while leveraging existing AI technologies represents a powerful new approach to constructing autonomous agents that can improve our everyday lives.
POC: Dr. Will Bridewell
Interactive Systems, Code 5512
Naval Research Laboratory
Washington DC 20375
Email: w5512 "at" itd.nrl.navy.mil