Military operations are becoming increasingly diverse in their nature. To cope with new and more demanding tasks, the military has researched new tools for use during operations and during training for these operations. There have been numerous goals driving this research over the past several decades. Many of the military requirements and capabilities have specifically driven development of augmented reality (AR) systems.
The overall goal of the Battlefield Augmented Reality System (BARS) was to do for the dismounted warfighter what the Super Cockpit and its successors had done for the pilot. Initial funding came from the Office of Naval Research. The challenges associated with urban environments were a particular concern: a complex 3D environment, a dynamic situation, and loss of line-of-sight contact among team members. Unambiguously referencing landmarks in the terrain and integrating unmanned systems into an operation can also be difficult for distributed users. All of these examples show how situation awareness (SA) is impaired in military operations in urban terrain (MOUT). We believed that the equivalent of a head-up display would help address these problems. By networking the mobile users together and with a command center, BARS could assist a dispersed team in establishing collaborative situation awareness.
This raises numerous issues in system configuration. We chose commercially-available hardware components so that we could easily upgrade BARS as improved hardware became available, and we investigated typical AR system issues such as calibration. Specific research efforts within BARS addressed the following UI and human factors aspects:
- X-ray vision and depth perception
- Basic perception
- Information filtering
- Object selection
- Embedded training
Our initial domain analysis indicated that one potential advantage of AR for dismounted troops was the ability to show where distributed troops were in an urban area of operations. Later, client interest included the ability to communicate points of interest in the environment to distributed team members (without the benefit of line-of-sight contact between team members). Both of these goals require the AR system to identify objects that are occluded from the user. This became a central focus of the BARS research program.
The metaphor of "Superman's X-ray vision" has long been applied to the capability of AR to depict a graphical object that is occluded by real objects. There are three aspects to the problem of displaying cues that correspond to occluded virtual objects. First, the alignment or registration of the graphics on the display must be accurate; this is a defining aspect of AR. Second, the ordinal depth between the real and virtual objects must be conveyed correctly to the user. Because we selected an optical see-through head-worn display (HWD) for operational reasons, we needed to replace the natural occlusion cue for depth ordering. Third, the metric distance of the virtual object must be understood to within sufficient accuracy that the user can accomplish the task; the cues provided must therefore be accurate enough to support distance estimation. Further, each successive aspect depends on the previous ones.
We began our investigation with a study that identified graphical cues that helped convey the ordinal depth of graphical objects. We found that changing the drawing style, decreasing the opacity with increasing distance, and decreasing intensity with increasing distance helped users properly order graphical depictions of buildings that corresponded to occluded buildings on our campus. This task was detached from the real world, however, so our next experiment used a depth-matching task with the same graphical cues. This enabled us to measure metric distance judgments and forced our users to compare the depth of graphical objects to the real environment.
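The attenuation cues above can be sketched as a simple mapping from an object's ordinal occlusion depth to rendering parameters. This is an illustrative sketch, not the BARS implementation; the falloff constants and style names are assumptions:

```python
def occlusion_cues(ordinal_depth, max_depth=4):
    """Map ordinal occlusion depth (0 = unoccluded, 1 = behind one
    surface, ...) to illustrative rendering parameters: opacity and
    intensity decrease with depth, and the drawing style shifts from
    filled toward wireframe. All constants here are assumed values."""
    t = min(ordinal_depth, max_depth) / max_depth
    opacity = 1.0 - 0.8 * t      # attenuate, but never fully invisible
    intensity = 1.0 - 0.6 * t    # dimmer with greater occlusion depth
    if ordinal_depth == 0:
        style = "filled"
    elif ordinal_depth >= 2:
        style = "dashed-wire"
    else:
        style = "wire"
    return opacity, intensity, style
```

For example, an unoccluded object is drawn filled at full opacity, while an object behind several buildings is drawn as a dim, semi-transparent dashed wireframe.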
In our most recent experiment, we made one more important change in the experimental design. We used military standard map icons and applied the drawing styles discovered early in our sequence of experiments to these icons. We compared this to several other techniques for displaying occluded information that had appeared in the AR literature. The opacity and drawing style techniques were not as effective as newer techniques. A virtual tunnel built by drawing virtual holes in known occluding infrastructure led to the lowest error in interpreting the ordinal depth of a virtual squad icon amongst real buildings. The next best technique was one we devised for this study, a virtual wall metaphor with the number of edges increasing with ordinal depth. However, both of these techniques led users to perceive the icons as closer than they were intended to be. A ground grid technique which drew concentric circles on the ground plane resulted in the signed error that was closest to zero, even though users made more errors in this condition.
One of the problems encountered in our urban skills evaluation was that users had extreme difficulty seeing clearly through the video AR display we had selected to overcome the occlusion issue. This prompted an investigation into exactly how well users could see through AR displays. We began to consider several aspects of basic perception in AR displays: contrast sensitivity, color perception, and stereo perception.
Contrast sensitivity describes the level of contrast a target must have to be visible as a function of its size (spatial frequency). Such a contrast sensitivity function for AR has two forms: one can measure the ability of the user to see the graphics presented on the AR display, or the ability to see the real environment through the AR display. Some optical see-through displays inhibited the user's ability to see the real world. Some graphical presentations were interpreted at the visual acuity one would expect from the geometric resolution of the display device. Thus it is fair to ask whether poor visual quality of the display devices could be blamed for difficulties in any of the applications or evaluations we conducted.
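For a concrete baseline, the contrast of a simple periodic target is commonly expressed as Michelson contrast; a minimal helper (the function name is ours, not from BARS):

```python
def michelson_contrast(l_max, l_min):
    """Michelson contrast of a periodic target with peak luminance
    l_max and trough luminance l_min: (Lmax - Lmin) / (Lmax + Lmin).
    Ranges from 0 (uniform field) to 1 (full contrast)."""
    if l_max < l_min:
        raise ValueError("l_max must be >= l_min")
    if l_max + l_min == 0:
        return 0.0
    return (l_max - l_min) / (l_max + l_min)
```

A contrast sensitivity function plots the reciprocal of the threshold contrast (the smallest detectable value of this quantity) against target size, which is why both the graphics channel and the see-through channel of an AR display can be characterized with the same measure.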
Color perception can also be a key display property for military applications and a particularly novel hazard for optical see-through displays. Black in a rendering buffer becomes transparent on an optical see-through display, allowing the real world to be seen. So one can easily imagine that dark colors will be perceived improperly. But even bright colors do not fully occlude the real-world background, and thus they too are subject to incorrect perception depending on the color (or mix of colors) that appears behind them in the real environment. We measured the color distortion seen in AR displays, two optical see-through and one video. We found that all three displays distorted colors that were seen on white or black backgrounds, and that this occurred with both graphics presented on the displays and real-world targets seen through the displays.
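Because an optical combiner adds display light to light from the scene rather than overwriting it, the perceived color is, to first approximation, a sum of the two. A minimal sketch of this additive model (the linear-RGB assumption and clamping are ours):

```python
def perceived_color(display_rgb, background_rgb):
    """First-order model of optical see-through blending: the combiner
    adds the display's light to the real background's light, so dark
    display pixels are dominated by the background, and even bright
    pixels shift toward the background color. Values are linear RGB
    components in [0, 1]; clamping stands in for saturation."""
    return tuple(min(1.0, d + b) for d, b in zip(display_rgb, background_rgb))
```

The two failure modes from the text fall out directly: black graphics over a white wall are invisible (the result is white), and a saturated red graphic over green foliage is perceived as yellowish.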
Stereo presentation of graphical images has often been considered a requirement for AR displays. The belief is that in order for the user to perceive graphics as representing 3D objects existing in the surrounding 3D environment, the graphics must be in stereo. One limiting factor in whether a user is able to fuse the two images for the left and right eyes is vertical alignment. Using nonius lines, we detected errors in alignment ranging from a few hundredths of a degree (well within the tolerance of the human visual system) to four tenths of a degree (an amount that would likely cause eye fatigue or headaches if the user were to force the images to fuse). Simple geometric corrections applied to one eye were sufficient to alleviate these errors. We were then able to measure the stereo acuity that users experience with AR displays, again finding that the differences in depth that a user could detect between real and graphical imagery were well above the thresholds of normal human vision of real objects.
The issue of information overload, as noted above, can become a primary difficulty in MOUT. The physical environment is complex, 3D, and dynamic, with people and vehicles moving throughout. Thus one may think that more information would be of obvious assistance to the military personnel engaged in such operations. But the amount of information can become too much to process at the dynamic pace of military operations, to the point where it inhibits the personnel's ability to complete their assigned tasks. We therefore developed algorithms for restricting the information that is displayed to users. Our filtering algorithm evolved from a region-based filter to a hybrid of the spatial model of interaction, rule-based filtering, and the military concept of an area of operations.
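The shape of such a hybrid filter can be sketched as the conjunction of three tests: a spatial region around the user (here a simple radius, standing in for the spatial model of interaction), containment in the area of operations, and a set of rules. The thresholds, object fields, and rule form are assumptions for this sketch, not the BARS algorithm:

```python
import math

def filter_objects(objects, user_pos, ao_polygon=None, max_dist=200.0, rules=()):
    """Keep an object only if it is within max_dist of the user,
    inside the area-of-operations polygon (if one is given), and
    passes every rule. Objects are dicts with a 2D "pos" field."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def in_ao(p):
        if ao_polygon is None:
            return True
        inside = False  # even-odd ray-casting point-in-polygon test
        n = len(ao_polygon)
        for i in range(n):
            (x1, y1), (x2, y2) = ao_polygon[i], ao_polygon[(i + 1) % n]
            if (y1 > p[1]) != (y2 > p[1]):
                x_cross = x1 + (p[1] - y1) * (x2 - x1) / (y2 - y1)
                if p[0] < x_cross:
                    inside = not inside
        return inside

    return [o for o in objects
            if dist(o["pos"], user_pos) <= max_dist
            and in_ao(o["pos"])
            and all(rule(o) for rule in rules)]
```

Rules capture non-spatial relevance (for example, suppressing object categories irrelevant to the current task), while the spatial tests bound the displayed information to the user's region of interest.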
In order to query, manipulate, or act upon objects, the user must first select them. BARS allows a user to select objects by combining gaze direction (using tracking of the head) with relative pointing within the field of view using a 2D or 3D mouse or an eye tracker. The complex nature of the selection operation makes it susceptible to error. In BARS, with the "X-ray vision" paradigm, occlusion relationships complicate matters more than in many applications. To mitigate these errors, we designed a multimodal (speech and gesture) probabilistic selection algorithm. This algorithm incorporates an object hierarchy, several gaze and pointing algorithms, and speech recognition; their outputs are combined using a weighted voting scheme.
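The weighted voting step can be sketched as follows: each modality scores every candidate object, and the winner maximizes the weighted sum of scores. The modality names, score ranges, and weights are illustrative assumptions, not the BARS parameters:

```python
def select_object(candidates, modality_scores, weights):
    """Illustrative weighted-voting combiner for multimodal selection.

    candidates       -- list of selectable object identifiers
    modality_scores  -- {modality: {object: score in [0, 1]}}, where a
                        missing object is treated as scoring 0.0
    weights          -- {modality: weight}
    Returns the candidate with the highest weighted total score."""
    def total(obj):
        return sum(weights[m] * modality_scores[m].get(obj, 0.0)
                   for m in weights)
    return max(candidates, key=total)
```

With this scheme, a confident speech recognition result ("the window") can override a gaze estimate that slightly favors a neighboring object, while ambiguous speech leaves the decision to gaze and pointing.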
BARS includes an information database, which can be updated by any user. Sharing information across the area of an operation is a critical component of team SA. We designed an information distribution system so that these updates would be sent across the network according to a priority scheme. We enabled BARS to communicate with semi-automated forces (SAF) software to address the training issues. We built UI components so that routes could be drawn on the terrain in the command center application and assigned to mobile users, or drawn by mobile users and suggested to the commander or directly to other mobile users. Virtual globe applications provide a platform for a command-and-control application; we found Google Earth suitable due to its 3D building layer and an API that enabled rapid prototyping of environments and an application. We simulated having sensors in the environment by merging live camera views onto this simple 3D terrain, computing the projection of each camera's image onto known geometry to approximate a live view of the environment.
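The core of that projection step is per-vertex pinhole projection: each vertex of the known geometry, expressed in the camera's coordinate frame, is mapped to pixel coordinates in the camera image, which then serve as texture coordinates for draping the live image over the model. A minimal sketch under a simple pinhole model (the parameter values in the example are illustrative):

```python
def project_to_image(pt_cam, f_px, cx, cy):
    """Project a 3D point given in the camera's coordinate frame
    (x right, y down, z forward) to pixel coordinates with a pinhole
    model; f_px is the focal length in pixels and (cx, cy) the
    principal point. Returns None for points at or behind the camera.
    Applied per vertex, this yields texture coordinates for mapping
    a live camera image onto known building geometry."""
    x, y, z = pt_cam
    if z <= 0:
        return None
    return (cx + f_px * x / z, cy + f_px * y / z)
```

A real implementation would also account for lens distortion and for geometry outside the camera's frustum; this sketch shows only the geometric core of the mapping.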
MOUT training requires that trainees operate in urban structures and against other live trainees. Often the training uses simulated small-arms munitions and pits instructors against students in several scenarios. Many military bases have "towns" for training that consist of concrete block buildings with multiple levels and architectural configurations. AR and MR can enhance this training by providing synthetic opposing forces and non-combatants. Using AR for MOUT training is a difficult undertaking. Once one has cleared acceptance and logistics issues, there are many technical challenges to face. Many of these challenges are the same as those described earlier when AR is used for operations: wearable form factor, accurate tracking indoors and outdoors, and so on. One unique challenge of using AR for training is that the simulated forces must appear on the user's display in a way that gives the illusion that they exist in the real world.