Swarm intelligence is characterized by collective capabilities that emerge from local interactions among simple autonomous agents and between those agents and their environment. Natural examples of swarm intelligence (e.g., colonies of ants) have inspired a number of distributed approaches to controlling agents. Swarm intelligence has several advantages over more traditional control approaches, including robustness to uncertainty and change, the ability to self-organize, and a decentralized nature that makes these systems less vulnerable to single points of failure. However, the emergent nature of swarm intelligence poses its own challenges. Because it is difficult to predict the behavior that will emerge from such a system, novel methods are needed for designing and programming these systems to perform useful tasks, and for enabling a human to dynamically direct and constrain swarm behavior as needed.
While swarm intelligence has certainly been shown to produce "interesting" emergent behavior, it is much less clear how swarm-based systems can be designed to perform useful tasks. From this challenge has emerged the new field of swarm engineering. Much of the previous work in swarm engineering involves mathematical analysis that becomes intractable in more complex environments. Furthermore, these mathematical approaches address only design-time decisions, while little work has focused on real-time control of swarms. We have begun to address the complexity issue of swarm engineering by developing a generalized graph-based method for engineering swarm solutions, and we will address the real-time control of swarms that is critical to applying this important technology to problems of interest to the Navy and Marine Corps.
The method of swarm control we will use in this project is called physicomimetics. This method is based on an artificial physics representation in which agents behave as point-mass particles and respond to artificial forces generated by local interactions with nearby particles. We have developed a generalized form of physicomimetics that supports heterogeneous agents through multiple particle types and multiple force laws. While our previous work has primarily used Newtonian-like force laws, other physical laws such as Hooke's law and the Lennard-Jones potential could be incorporated into the system, as well as various types of social laws. This gives us the flexibility to build systems that exhibit a wide variety of behaviors.
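As a rough illustration of the physicomimetic idea, the Python sketch below shows how Newtonian-like, Hooke-like, and Lennard-Jones force laws might be expressed, and how a particle sums pairwise forces from neighbors within its sensing range. All constants and function names here are invented for this example, not the project's actual parameterization.

```python
import numpy as np

# Illustrative constants -- not taken from the project's parameterization.
G = 100.0     # strength of the Newtonian-like law
K = 0.5       # spring constant of the Hooke-like law
REST = 10.0   # desired inter-particle spacing
EPS = 1.0     # Lennard-Jones well depth
SIGMA = 8.0   # Lennard-Jones length scale

# Convention: a positive return value attracts the particle toward its
# neighbor; a negative value repels it.

def newtonian(r):
    """Newtonian-like law: inverse-square attraction beyond REST, repulsion inside."""
    return G / r**2 if r > REST else -G / r**2

def hooke(r):
    """Hooke-like spring toward the rest distance."""
    return K * (r - REST)

def lennard_jones(r):
    """Lennard-Jones force: strongly repulsive up close, weakly attractive farther out."""
    s6 = (SIGMA / r) ** 6
    return 24.0 * EPS * (s6 - 2.0 * s6**2) / r

def net_force(positions, i, law, sense_range=20.0):
    """Sum pairwise forces on particle i from all neighbors within sensing range."""
    f = np.zeros(2)
    for j, pj in enumerate(positions):
        if j == i:
            continue
        d = pj - positions[i]
        r = np.linalg.norm(d)
        if 0.0 < r < sense_range:
            f += law(r) * d / r   # unit vector toward neighbor, scaled by the law
    return f
```

In a physicomimetic update loop, each agent would apply its net force to its own velocity or position; a heterogeneous swarm could select a different law for each pair of particle types.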
We will take a multi-tiered approach to the design and real-time control of physicomimetics swarms that includes the following components: a graph-based method for performing the initial design of the swarm; machine learning techniques for the acquisition of swarm behavior modes; and a human-swarm interface enabling an operator to dynamically influence the behavior of the swarm.
Graph-Based Method: The graph-based method we use in the initial swarm design allows the inclusion of engineer-provided knowledge through explicit design decisions pertaining to specialization, heterogeneity, and modularity. The method also significantly reduces the growth of the swarm parameter space as the size and complexity of the problem increase.
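The summary does not spell out the graph representation, but one plausible reading is a design graph whose nodes are particle types and whose edges carry the interaction parameters between types. A minimal sketch under that assumption (all class, type, and parameter names here are hypothetical):

```python
# Hypothetical sketch: nodes are particle types, edges hold the force-law
# parameters for each pair of types.  Because parameters attach to type
# pairs rather than to individual agents, the parameter space grows with
# the number of types, not with the size of the swarm.

class SwarmDesignGraph:
    def __init__(self):
        self.types = {}           # type name -> number of agents of that type
        self.interactions = {}    # frozenset of type names -> parameter dict

    def add_type(self, name, count):
        self.types[name] = count

    def set_interaction(self, a, b, **params):
        # Unordered key: scout-carrier and carrier-scout share one edge.
        self.interactions[frozenset((a, b))] = params

    def num_parameters(self):
        return sum(len(p) for p in self.interactions.values())

# A small heterogeneous design: many scouts loosely bound to a few carriers.
g = SwarmDesignGraph()
g.add_type("scout", 20)
g.add_type("carrier", 5)
g.set_interaction("scout", "scout", law="newtonian", strength=100.0, rest=10.0)
g.set_interaction("scout", "carrier", law="hooke", k=0.5, rest=15.0)
```

Scaling the design to hundreds of agents would add no parameters here; only introducing a new particle type (a new node and its edges) enlarges the search space.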
Machine Learning: While appropriate parameter settings for simple swarms may be determined empirically or through the use of a formal approach to swarm engineering, more complex swarms require the use of machine learning to determine these settings. We will investigate top-down and bottom-up approaches. In the top-down approach, global swarm characteristics are defined and the parameter settings of the individual agents are optimized to achieve those characteristics. Instance-based learning can be used to sample the space of possibilities with respect to the characteristics and store solutions in the form of parameter settings. We then construct a library of swarm settings tuned to specific modes of behavior useful in solving the types of tasks to which the agents are being applied. Transition dynamics also need to be analyzed to ensure system stability when moving from one behavior mode to another. In contrast, the bottom-up approach does not modify the behavior of the current agents, but instead uses virtual agents that do not exist in the environment but which interact with the real agents via the same force law mechanisms and can therefore influence swarm behavior. We use evolutionary computation and other techniques to learn the parameterization of virtual agents for performing useful functions such as leading, blocking, and so on.
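The virtual-agent mechanism can be illustrated with a short Python sketch. The function names, the Hooke-like law, and the simple kinematic update are assumptions made for this example: real agents feel forces from both real and virtual neighbors through the same force law, while virtual agents are never themselves moved by forces.

```python
import numpy as np

def attract(r, k=0.5, rest=10.0):
    # Hooke-like law; a positive value attracts toward the neighbor.
    return k * (r - rest)

def step(real, virtual, law=attract, dt=0.1, sense=25.0):
    """Advance the real agents one tick.  Virtual agents enter the force
    sum exactly like real neighbors but move only as scripted/directed."""
    updated = real.copy()
    everyone = np.vstack([real, virtual]) if len(virtual) else real
    for i, p in enumerate(real):
        f = np.zeros(2)
        for q in everyone:
            d = q - p
            r = np.linalg.norm(d)
            if 0.0 < r < sense:
                f += law(r) * d / r
        updated[i] = p + dt * f      # simple kinematic update, no inertia
    return updated

# One virtual "leader" placed ahead of a lone real agent pulls it forward.
real = np.array([[0.0, 0.0]])
virtual = np.array([[15.0, 0.0]])
real = step(real, virtual)
```

An evolutionary algorithm would then search over the virtual agents' placements, trajectories, and force-law parameters to realize functions such as leading or blocking, without ever touching the real agents' control laws.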
Human-Swarm Interface: Once we have a library of swarm settings for producing a variety of behavior modes, they can be adjusted, combined, and sequenced in useful ways by a human operator. In addition, the operator may influence the behavior of the swarm through the manipulation of the previously designed virtual agents. This includes taking direct control of the movement of one or more virtual agents, and introducing and graphically positioning other static ones that indirectly constrain the movement of the real agents. This can be facilitated by a human-swarm interface that mimics the way humans observe, manipulate, and anticipate the movement of objects in the physical world.
Dr. Mitchell Potter
Adaptive Systems, Code 5514
Naval Research Laboratory
Washington DC 20375
Email: w5514 "at" itd.nrl.navy.mil