Success in network-centric warfare requires information superiority to obtain dominant battlespace awareness. The time available to make a decision has been reduced by orders of magnitude, while the volume of accessible data has increased exponentially. When this volume is displayed to an operator, the risk of reaching a state of information overload is real, and great care must be taken to ensure that what is provided is actually information and not noise. In this paper we propose a novel interaction environment that leverages augmented reality technology to provide a digitally enhanced view of a real command and control table. The operator, equipped with an optical see-through head-mounted display, controls the virtual context, a synthetic view of the common operational picture, while remaining connected to the real world. Technical details of the system are described together with the evaluation method. The results showed the effectiveness of the proposed system in terms of understanding perception, depth impression, and level of immersion. A significant reduction in the reaction time and in the number of errors made during the execution of complex tasks was obtained.


Keywords:   Augmented reality, head-mounted display, human-computer interaction, network-centric operations, military decision support system

Network-centric warfare (NCW)1,2 is the term that best describes the changes in the operating environment of military organisations, and the emerging capabilities that affect the ability to understand and influence this competitive space in the information age. The power of NCW, and consequently of network-centric operations (NCO), derives from the effective linking of forces that are geographically or hierarchically distributed. The networking of knowledgeable entities allows them to share information and collaborate to develop a shared awareness.

Figure 1 shows an example of a simple network-centric scenario (NCS): different platforms have the capability of sensing some limited areas, and each one has its own limited awareness of its proximity; (a) each platform sends the collected data to a specific platform, known as command and control (C2), which has the special task of fusing the data, in a manual or automatic way; then, (b) the C2 sends the fused data back to the platforms, so that they all share the same enhanced situational awareness. This process is continuously repeated during the military operation.

As described in the example, the C2 holds an important role in these networks: it is the system devoted to the decision-making process for the operational aspects of the warfare. Commanders operate such systems by means of a human-computer interface (HCI) to get access to the common operational picture (COP) and to issue decisions (e.g., plans, orders).

As the number of commanded platforms increases, a C2 operator can easily reach a state of information overload3, in which the information flow rate is greater than the operator's processing rate; this situation can lead to a wrong mental model of the mission scenario and, consequently, to wrong decisions that could result in catastrophic situations.

Thus, the human-computer interface becomes a key factor when developing the architecture of a C2. The focus of this paper is on improving the reading, presentation, and understanding of the common operational picture. In particular, a novel interaction environment that leverages augmented reality (AR) technology to provide a digitally enhanced view of a real C2 table is proposed. The great advantage of augmented reality is that it helps overcome information overload by providing only relevant data (e.g., related to the user's role in the team) and displaying these data at convenient and more intuitive locations. Augmented reality therefore accelerates the user's ability to achieve higher situational awareness, which leads to better and faster decision-making. This is very important in military operations, considering the fast decision rate required. Furthermore, the enhanced experience provided by augmented reality makes it possible to create new perceptual or cognitive cues that go beyond typical human sensory experience. The proposed approach has not been seen in the literature in the context of NCW, and the experimentation setup and assessed technologies are also uncommon.

Figure 1. Example of a network-centric scenario.

Gaining a detailed understanding of the modern battle space is essential for the success of any military operation. In these applications, the main function of a human-computer interface is to display the common operational picture: a single identical display of relevant information concerning friendly, enemy, and neutral forces. Examples of information that can be integrated in a common operational picture include location (e.g., current positions, rate of movements), environment (e.g., current weather conditions, terrain features), status (e.g., capabilities of offensive and defensive enemy weapons systems).

Several research groups have focused their activities on the design and development of new display paradigms and technologies for advanced information visualisation.

Dragon4 was one of the first research projects to formalise requirements for systems that need to visualise a huge amount of information on tactical maps. A real-time virtual environment adaptable to different types of interaction devices, display platforms, and information sources was realised.

Pettersson5, et al. have proposed a visualisation environment based on the projection of four independent stereoscopic image pairs at full resolution onto a custom-designed optical screen. This system suffers from apparent crosstalk between the stereo image pairs.

Kapler6 and Wright have developed a novel visualisation technique for displaying and tracking events, objects, and activities within a combined temporal and geospatial display. The events are represented within an X, Y, T coordinate space, in which the X-Y plane shows flat geographic space and the T-axis represents time into the future and the past. This technique is not adequate for an immersive 3-D virtual environment because it uses an axis to describe the time evolution, reducing the spatial representation to a flat surface; the altitude information, which is important in avionic scenarios, cannot be displayed. However, it is remarkable that the splitting of geographical and logical information (e.g., health of a platform) can enhance the usability of the system.

Other research groups have focused on the development of completely immersive visualisation systems. In NextVC27 the main idea is to leverage virtual reality (VR) technology to create a shared collaborative environment with a customised view of real-world objects and events. Hodicky8 and Frantis have conducted a research programme to investigate ways to increase the level and quality of information about the battlefield by means of VR devices. The commander, equipped with a head-mounted display (HMD), can operate the virtual environment by head and body movements: a left head rotation moves the current scene into the left area of interest, a right head rotation works the other way, and the movement of the whole body creates the effect of flying over the virtual terrain. The commander can also use a data glove to communicate with a new presentation layer. The use of a system based only on VR has the disadvantage that the virtual world isolates the operator, making the connection with the real world difficult.

Adithya9 has presented a paper-based augmented map for the military domain that uses the ARToolKit marker-based tracking approach and a video see-through HMD to manage interaction and visualisation. The dynamic information of the terrain, the placing of virtual objects, and the interaction are features of the digital world that are superimposed onto the paper map. The main limitation of this type of system is that the field of action is restricted to a specific area.

The aim of this research is to propose an innovative technology for modern battle space visualisation based on AR to create the impression that the virtual environment is part of the real world. This facility is part of Loki, an advanced C2 system for electronic warfare (EW) that coordinates a set of heterogeneous platforms (air, surface, subsurface) having onboard sensors and actuators in the domain of electronic defence.

3.1 High-level Architecture of Loki

Figure 2 shows the high-level architectural view of the Loki system. The core component continuously executes an advanced multi-sensor data fusion process on the data retrieved from the cooperating systems. Once these data are properly fused, the system is capable of inferring new important information, such as a better localisation of emitters and a countermeasure strategy. This information is transferred to the presentation layer using a communication middleware based on the data distribution service (DDS) paradigm.
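To make the interface between the core and the presentation layer more concrete, the sketch below models the fused data as a plain C++ value type with a stand-in publish function; the FusedTrack type, its field names, and publishTrack are illustrative assumptions rather than elements of the Loki code base, where the sample would be handed to the DDS data writer.

```cpp
// Illustrative only: a plain C++ value type for a fused track and a stand-in
// publish function; in the real system the sample would be written to a DDS
// topic by the communication middleware.
#include <cstdint>
#include <iostream>
#include <string>

struct FusedTrack {
    std::uint32_t trackId;   // identifier assigned by the fusion core
    double latitudeDeg;      // estimated emitter latitude (WGS-84)
    double longitudeDeg;     // estimated emitter longitude (WGS-84)
    double altitudeM;        // altitude above mean sea level, in metres
    double doaErrorDeg;      // direction-of-arrival uncertainty half-angle
    std::string symbolCode;  // MIL-STD-2525C symbol identification code
};

void publishTrack(const FusedTrack& t) {
    // Placeholder for the middleware call: print the sample instead of
    // writing it to a DDS topic.
    std::cout << "track " << t.trackId << " at (" << t.latitudeDeg << ", "
              << t.longitudeDeg << ", " << t.altitudeM << ")\n";
}
```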

Figure 2. Loki architecture in the large.

The augmented environment (AE) component is responsible for configuring the visual appearance of the COP and accepting and validating user input. Moreover, it provides a persistence mechanism to decouple the data-access logic from the core logic.

3.2 Overview of the Augmented Environment

The main virtual world component consists of a geo-referenced 3D map of the mission area on which the EW scenario is positioned (Fig. 3). The localised platforms (characterised by latitude, longitude, and altitude) are placed on the scene faithfully reflecting their geographic coordinates and are represented according to the common warfighting symbology of the MIL-STD-2525C10 standard. If the direction of arrival (DOA) of a threat is known with a margin of error, the uncertainty volume is shown as a pyramid with its vertex at the platform that performed the detection.
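The uncertainty pyramid can be pictured with the minimal geometric sketch below, which computes the four base corners from the platform position, the DOA direction, and the angular error at a chosen display range; the Vec3 type, the fixed world up vector, and the helper names are assumptions made only for illustration and are not the Loki implementation.

```cpp
// Geometric sketch (illustrative): base corners of the DOA uncertainty
// pyramid whose apex is the detecting platform. Assumes z is up and that
// the DOA is not vertical.
#include <array>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalise(Vec3 v) {
    const double n = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / n, v.y / n, v.z / n};
}

std::array<Vec3, 4> uncertaintyPyramidBase(Vec3 platform, Vec3 doaDir,
                                           double errorRad, double rangeM) {
    const Vec3 d = normalise(doaDir);
    const Vec3 right = normalise(cross(d, Vec3{0.0, 0.0, 1.0}));  // horizontal base axis
    const Vec3 up = normalise(cross(right, d));                   // vertical base axis
    const double half = std::tan(errorRad) * rangeM;              // half-width of the base
    const Vec3 centre = platform + d * rangeM;                    // centre of the base
    return {{centre + right * half + up * half,
             centre + right * half + up * (-half),
             centre + right * (-half) + up * (-half),
             centre + right * (-half) + up * half}};
}
```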

Figure 3. Geo-referenced 3-D map.

The so-called logical view of the scenario is provided through several vertical layers. In Fig. 4, the intersection of the uncertainty areas relating to a specific threat is shown by a top view of the 3D model. The views are linked and synchronised so that changes in one are reflected in the others. A small viewport, placed in the upper left corner of the display, alerts the operator to critical information (e.g., warning emitter detections).

Figure 4. Activation of layers normal to the C2 table.

The precise alignment between the real and virtual worlds is obtained through an electromagnetic tracker (Polhemus Patriot). It includes a system electronics unit, one source, and two sensors. The source, which is also the system's reference frame for sensor measurements, is fixed under the real table. The virtual world coordinate system is matched with the source and shifted up by the table-top thickness. This configuration has several advantages: it is not subject to flickering of the rendered model and it allows the source to be near the user without cluttering the table-top.
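Under this convention, converting a sensor measurement from the tracker source frame into virtual world coordinates reduces to a vertical offset, as in the short sketch below; the Position type, the function name, and the thickness value are assumptions used only for illustration.

```cpp
// Frame convention sketch: the tracker source under the table-top is the
// reference frame; the virtual world origin is the same frame translated up
// by the table-top thickness.
struct Position { double x, y, z; };          // metres, z pointing up

constexpr double kTableTopThicknessM = 0.04;  // assumed value, not from the paper

// Expresses a measurement given in the tracker source frame in the virtual
// world frame used by the rendering engine.
Position trackerToWorld(const Position& inSourceFrame) {
    return {inSourceFrame.x, inSourceFrame.y,
            inSourceFrame.z - kTableTopThicknessM};
}
```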

As shown in Fig. 5, the commander sees the virtual world superimposed on the real world by looking through the lenses of an optical see-through HMD (NVIS nVisor ST50).

The tracking system detects in real time the position and orientation of both the observer's head and the dominant hand through two sensors fixed, respectively, at the back of the HMD and on the top of the index finger. The head tracking is essential for consistently moving the virtual camera and for keeping the main vertical layer oriented perpendicular to the operator's direction of view. The operator can visualise portions of the AE that are not included in the actual view frustum by moving around the C2 table and rotating or inclining the head. For example, he can walk up to a specific area of interest to see it larger and more clearly. The interaction with the virtual context (e.g., threat selection, command execution) is performed using the tracked finger. The purpose is to replace keyboard and mouse with touch.
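A per-frame update of this kind can be sketched in OGRE-style code as below, where the virtual camera follows the tracked head pose and the main vertical layer is rotated to face the operator; the function name, the node passed in, and the choice of local axis are assumptions rather than the actual Loki code.

```cpp
// Illustrative OGRE-style per-frame update: drive the virtual camera with the
// tracked head pose and keep the main vertical layer facing the operator.
#include <OgreCamera.h>
#include <OgreSceneNode.h>

void updateView(Ogre::Camera* camera, Ogre::SceneNode* verticalLayerNode,
                const Ogre::Vector3& headPosition,
                const Ogre::Quaternion& headOrientation) {
    // The rendered frustum follows what the operator is actually looking at.
    camera->setPosition(headPosition);
    camera->setOrientation(headOrientation);

    // Rotate the layer node so that its local -Z axis points at the head,
    // keeping the layer perpendicular to the viewing direction.
    verticalLayerNode->lookAt(headPosition, Ogre::Node::TS_WORLD,
                              Ogre::Vector3::NEGATIVE_UNIT_Z);
}
```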

The AR environment can be observed simultaneously by several users, and each one can interact with his or her own virtual context. This system provides multiple views of the mission area distributed between team members within the C2 centre. It is expected that this feature encourages collaboration and reduces information overload.


3.2.1 SW Design and Implementation

The AE has been designed with high modularity by applying user interface design patterns (UIDP). These patterns help to ensure that key human factors concepts are quickly and correctly implemented.

Figure 5. Operator equipped with a see-through HMD.

The SW has been developed in C++ using OGRE11, an open-source 3D engine that abstracts the details of the underlying system libraries (e.g., OpenGL, DirectX) and provides an interface based on high-level classes. The Qt Core API has been integrated to take advantage of its powerful mechanism for object communication, called signals and slots, and to handle, by means of the Qt serial port add-on module, the serial port to which the electromagnetic tracker is attached.
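As an illustration of how the tracker reports could reach the rest of the application through signals and slots, the following sketch reads the serial port with QSerialPort and emits a signal for each complete report; the TrackerReader class, the baud rate, and the assumed line-delimited report format are illustrative and are not taken from the Loki implementation.

```cpp
// Minimal sketch: read the electromagnetic tracker over the serial port with
// the Qt serial port module and forward complete reports via a signal.
#include <QObject>
#include <QSerialPort>

class TrackerReader : public QObject {
    Q_OBJECT
public:
    explicit TrackerReader(const QString& portName, QObject* parent = nullptr)
        : QObject(parent) {
        m_port.setPortName(portName);
        m_port.setBaudRate(QSerialPort::Baud115200);  // assumed rate
        connect(&m_port, &QSerialPort::readyRead,
                this, &TrackerReader::onReadyRead);
        m_port.open(QIODevice::ReadOnly);
    }

signals:
    void poseReceived(const QByteArray& rawReport);  // one complete tracker report

private slots:
    void onReadyRead() {
        m_buffer.append(m_port.readAll());
        // Assumed line-delimited reports: emit one signal per complete line.
        auto end = m_buffer.indexOf('\n');
        while (end != -1) {
            emit poseReceived(m_buffer.left(end));
            m_buffer.remove(0, end + 1);
            end = m_buffer.indexOf('\n');
        }
    }

private:
    QSerialPort m_port;
    QByteArray m_buffer;
};
```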

The high-definition map of the mission area has been generated through several steps. First, the 3D model was generated starting from digital terrain elevation data (DTED), using the Autodesk Infrastructure Design Suite. After that, Autodesk 3ds Max was used to add textures, details, and colours. Finally, the resulting model was converted into a format supported by OGRE. An important aspect that must be considered is that in this process the geo-referencing information is lost, so a mapping algorithm12 has been implemented to associate a specific point inside the map with each pair of longitude and latitude values.
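The idea of such a mapping can be conveyed with the simplified equirectangular projection sketched below; the origin constants and the Earth radius are illustrative assumptions, while the algorithm actually adopted follows the cited reference12.

```cpp
// Simplified equirectangular mapping from geographic coordinates to local
// map coordinates (illustrative; the adopted algorithm follows reference 12).
#include <cmath>

struct MapPoint { double x, y; };              // metres in the local map frame

constexpr double kOriginLatDeg = 41.9;         // assumed map origin latitude
constexpr double kOriginLonDeg = 12.5;         // assumed map origin longitude
constexpr double kEarthRadiusM = 6371000.0;    // mean Earth radius

MapPoint geoToMap(double latDeg, double lonDeg) {
    const double degToRad = std::acos(-1.0) / 180.0;
    const double dLat = (latDeg - kOriginLatDeg) * degToRad;
    const double dLon = (lonDeg - kOriginLonDeg) * degToRad;
    // East-west distances shrink with the cosine of the latitude.
    return {kEarthRadiusM * dLon * std::cos(latDeg * degToRad),
            kEarthRadiusM * dLat};
}
```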

To avoid a spatial perception of the virtual context through the HMD that differs from that of the real world, the camera view frustum has been calibrated to the display view frustum. The adopted calibration method13 requires that each user aligns tracked real-world markers with virtual target positions and can be completed in approximately one minute for both eyes.

3.3 Evaluation Setup

The purpose of the evaluation was to assess the usability of the proposed AR system. The formal usability study was carried out at the facilities of ELT in Rome and involved 12 members of the Armed Forces of different countries. The case study required about three hours per participant to complete.

The test procedure started with a brief presentation of the system and the aim of the investigation. Each operator was involved in the understanding of complex NCW scenarios using three display systems: the proposed AE, a wall-sized stereoscopic HCI14,15, and a multi-screen system. The following visual detection tasks of increasing complexity were assigned to the participants:

T1. Identification: the operator shall detect and select specific warning emitters in a dense electromagnetic environment;

T2. Correlation: the operator shall detect the situations where multiple sensors pick up the same threat and shall perform the correlation of the redundant information (Fig. 6);

T3. Triangulation: the operator shall determine a threat location performing a triangulation (Fig. 7).

The simulator, where the EW scenarios were developed and executed, is an integration of Commercial-Off-The-Shelf (COTS) products and proprietary software and is based on the principles of distributed and live simulation16.

To measure the operational behaviours, the completion time and the number of failures were automatically acquired for each task. In addition, at the end of the test, the participants filled in a questionnaire to express their preference among the display technologies by evaluating the following qualitative aspects: depth impression, level of immersion, understanding perception, viewing comfort, and sense of isolation.
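The automatic acquisition of these measures can be made concrete with a small bookkeeping structure such as the one below; the TaskMetrics type and its members are hypothetical and are shown only to illustrate how completion time and failures per task could be recorded.

```cpp
// Hypothetical per-task bookkeeping: completion time and number of failures.
#include <chrono>
#include <cstdint>

struct TaskMetrics {
    std::chrono::steady_clock::time_point start;
    std::uint32_t failures = 0;

    void begin() { start = std::chrono::steady_clock::now(); }
    void recordFailure() { ++failures; }
    double completionSeconds() const {
        return std::chrono::duration<double>(
                   std::chrono::steady_clock::now() - start).count();
    }
};
```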

Figure 6. (a) Two detections of the same threat by the same platform, (b) Result of the correlation operation: a single threat marked by the correlation symbol ‘C’.

Figure 7. (a) Two detections of the same threat by different platforms, (b) Result of the triangulation operation: a new threat placed at the intersection of the uncertainty volumes and marked by the triangulation symbol ‘T’.

The histograms in Fig. 8 show the results for the quantitative parameters. Regarding the identification (Task 1), only minor differences were detected between the considered display systems. The impact of AR becomes more evident as the level of task difficulty increases. The correlation (Task 2) and the triangulation (Task 3) were performed in considerably less time using the AR system. This result is especially important because a key element in NCW is the speed of command, i.e., the time it takes to recognise and understand a situation (or a change in the situation), identify and assess options, select an appropriate course of action, and translate it into actionable orders. The reduction in the number of failures under AR conditions is impressive for Task 2 and Task 3. The highest number of errors was made performing the triangulation (Task 3) using the multi-screen system. Many users commented that the abstraction of VR systems increases agitation and reduces operative capacity.

Figure 8. Quantitative data related to users’ performance. (a) Each bar represents the mean time employed to complete a task using a specific display technology, (b) Each bar represents the mean number of errors committed during the execution of a task using a specific display technology.

The above results represent the major outcome of our experiments. The ability to activate/deactivate multiple views of the scenario (perfectly embedded in the real world) and the capability to ‘touch’ the virtual objects make it possible to reduce information overload and to increase situational awareness. This leads to a reduction of the reaction time and of the number of errors made during the execution of complex tasks.

Figure 9. Qualitative data related to users’ experiences.

The pie charts in Fig. 9 depict the results of the qualitative evaluation. The largest share of the participants (46%) had no doubt that the depth impression is higher in the case of AR. This result can be explained by considering that the optical see-through approach allows the depth of real objects to be perceived in a natural way by looking through the semi-transparent lenses. This helps to maintain a good accuracy in the estimation of spatial relations. Regarding the level of immersion, the AR system received 50% of the preferences. This means that the accurate and stable registration between the real and synthetic worlds makes it possible to create a mixed reality in which the user is fully immersed. According to many users, the contextualisation of the virtual world on the real table increases the level of focus and attention. The sense of isolation is approximately equal between the AR system and the wall-sized stereo system. In the AR system, the perception of the virtual world as part of the real world relieves the seclusion that derives from the use of an HMD. The majority of users (59%) noted a clear and satisfying understanding perception in the case of the AR system. The AE provides advantages in mental model creation and significantly improves the perception and comprehension of the situation. Moreover, it leaves the view of the real world nearly intact and enhances it by overlapping the virtual context. There are no significant differences in viewing comfort between the AR system and the wall-sized stereo system. Many operators preferred the multi-screen system because no wearable device (e.g., shutter glasses, HMD) is needed.

In the design and development of C2 systems for NCO, a key element is the visualisation system. Wrong assumptions may lead the operator to a state of information overload.

In this paper we evaluated the use of a novel interaction environment that leverages AR technology to provide a digitally enhanced view of a real C2 table.

The formal usability study showed the relevant role played by the proposed system and its advantages in terms of understanding perception, depth impression, and level of immersion. A significant reduction in the number of failures and in the completion time has been obtained. The results presented represent the initial experimentation phase of continuing research into user interaction for military purposes. The lack of a comparative evaluation with respect to other works specifically addressing NCW is due to the complexity of this domain.

In the near future, a comparison between optical and video see-through HMDs will be conducted, and gesture and motion control by means of low-cost devices (e.g., Leap Motion, Myo) will be explored.

1. Alberts, D.S.; Garstka, J.J. & Stein, F.P. Network-centric warfare: Developing and leveraging information superiority. CCRP Publication Series, USA, 1999.

2. Braulinger, T.K. Network-centric warfare implementation and assessment. U.S. Army Command and General Staff College, USA, 2005. (Master’s Thesis.)

3. Shanker, T. & Richtel, M. In new military, data overload can be deadly. The New York Times, 2011. http://www.nytimes.com/2011/01/17/technology/17brain.html [Accessed on 04 January 2015].

4. Julier, S.; King, R.; Colbert, B.; Durbin, J. & Rosenblum, L. The software architecture of a real-time battlefield visualisation virtual environment. In Proceedings of the IEEE Virtual Reality Conference, Houston, Texas, USA, 1999.

5. Pettersson, L.W.; Spak, U. & Seipel, S. Collaborative 3-D visualisations of geo-spatial information for command and control. In Proceedings of the Swedish Chapter of Eurographics, Gävle, Sweden, 2004.

6. Kapler, T. & Wright, W. Geo time information visualisation. In Proceedings of the IEEE Symposium on information visualisation, Austin, Texas, USA, 2004.

7. Carvalho, M.M. & Ford, R. NextVC2 - A next generation virtual world command and control. In Proceedings of the IEEE Military Communications Conference, Orlando, Florida, USA, 2012.

8. Hodicky, J. & Frantis, P. Decision support system for a commander at the operational level. In Proceedings of the International Conference on Knowledge Engineering and Ontology Development, Madeira, Portugal, 2009.

9. Adithya, C.K.K. Augmented reality approach for paper map visualisation. In Proceedings of the International Conference on Communication and Computational Intelligence, Erode, India, 2010.

10. MIL-STD-2525C: http://www.mapsymbs.com/ms2525c.pdf [Accessed on 04 January 2015].

11. Rocha, R.V.; Rocha, R.V. & Araújo, R.B. Selecting the best open source 3D games engines. In Proceedings of the Brazilian Symposium on Games and Digital Entertainment, Florianópolis, Santa Catarina, Brazil, 2010.

12. Bowditch, N. The American practical navigator. National Imagery and Mapping Agency, USA, 2012.

13. Kellner, F.; Bolte, B.; Bruder, G.; Rautenberg, U.; Steinicke, F.; Lappe, M. & Koch, R. Geometric calibration of head-mounted displays and its effects on distance estimation. IEEE Trans. Visual. Comput. Graph., 2012, 18(4), 589-596. doi: 10.1109/TVCG.2012.45

14. Zocco, A.; Cannone, D. & De Paolis, L.T. Effects of stereoscopy on a human-computer interface for network-centric operations. In Proceedings of the International Conference on Computer Vision Theory and Applications. Lisbon, Portugal, 2014.

15. Zocco, A.; Livatino, S. & De Paolis, L.T. Stereoscopic-3D vision to improve situational awareness in military operations. In Lecture Notes in Computer Science, 2014, 8853, 351-362.

16. Sindico, A.; Cannone, D.; Tortora, S.; Italiano, G.F. & Naldi, M. Distributed simulation of electronic warfare command and control scenarios. In Proceedings of the International Defence and Homeland Security Simulation Workshop, Rome, Italy, 2011.

Mr Alessandro Zocco is a PhD Scholar at the Department of Engineering for Innovation of the University of Salento and a SW Engineer at Elettronica S.p.A. His research covers virtual reality, augmented reality and human-computer interaction applied to electronic warfare.

Prof. Lucio Tommaso De Paolis received the MSc (Electronic Engineering) from the University of Pisa in 1994. He is Assistant Professor of Information Processing Systems and has the scientific responsibility of the AVR laboratory of the Department of Engineering for Innovation of the University of Salento. His research interests concern computer-aided surgery and human-computer interaction.