Suppose an agent-controlled unmanned autonomous vehicle, during a logistics supply mission, encounters an injured person. Suppose further that its engineers did not specify how it should respond to such situations. Should it change its goal, albeit temporarily, to attend to this person, such as by contacting its operator or a medic? Non-embodied agents, such as embedded interactive decision aids, may encounter similar conundrums. How should agents act intelligently in these and other situations in which self-initiated behavior is desirable?
Our work on goal reasoning focuses on these questions, where our hypothesis is that agents endowed with this ability are preferred by their users/operators, at least under some conditions (e.g., after they have gained their user's trust). To date, our goal reasoning research has centered on a simple model called Goal-Driven Autonomy (GDA), which focuses on identifying, explaining, and responding to unexpected situations that arise in the environment, independent of whether they imply an impending plan execution failure or a new opportunity to achieve goals of interest. This gives rise to further questions, such as: what methods should be used to recognize surprises; whether models of the environment that explain observations can be accurately inferred; when, and what type of, additional information is needed when low-confidence explanations are generated; when goals should be formulated in response; and how an agent should select which goals to pursue when multiple goals have high expected utility.
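The GDA cycle described above (detect a surprise, explain it, formulate candidate goals, and select among them) can be sketched as a minimal agent loop. All names and the utility-based selection rule here are hypothetical illustrations, not the authors' implementation:

```python
# Minimal sketch of a Goal-Driven Autonomy (GDA) cycle.
# All class/method names and the utility values are illustrative
# assumptions, not an actual GDA system.

from dataclasses import dataclass, field


@dataclass
class Goal:
    name: str
    utility: float  # expected utility, used for goal selection


@dataclass
class GDAAgent:
    goals: list = field(default_factory=list)

    def detect_discrepancy(self, expected, observed):
        # Discrepancy detection: compare expected vs. observed state.
        return {k: v for k, v in observed.items() if expected.get(k) != v}

    def explain(self, discrepancy):
        # Explanation generation: map each surprise to a cause (stubbed).
        return {k: f"unexplained change in {k}" for k in discrepancy}

    def formulate_goals(self, explanation):
        # Goal formulation: propose new goals in response to explanations.
        return [Goal(name=f"respond-to-{k}", utility=1.0) for k in explanation]

    def select_goal(self, candidates):
        # Goal management: pursue the goal with the highest expected utility.
        return max(candidates, key=lambda g: g.utility, default=None)

    def step(self, expected, observed):
        discrepancy = self.detect_discrepancy(expected, observed)
        if not discrepancy:
            return None  # no surprise; continue the current mission
        explanation = self.explain(discrepancy)
        self.goals.extend(self.formulate_goals(explanation))
        return self.select_goal(self.goals)


agent = GDAAgent(goals=[Goal("deliver-supplies", utility=0.8)])
chosen = agent.step(expected={"route": "clear"},
                    observed={"route": "clear", "injured_person": True})
print(chosen.name)  # the newly formulated goal outranks the mission goal
```

In the vehicle scenario from the opening paragraph, the observed injured person produces a discrepancy, which is explained, turned into a candidate goal, and selected over the original delivery goal because of its higher expected utility.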
We are exploring solutions to these questions while designing, developing, and testing increasingly mature goal reasoning agents in increasingly challenging environments.