Integrated Enhanced and Synthetic Vision System for Transport Aircraft

A new avionics concept called the integrated enhanced and synthetic vision system (IESVS) is being developed to enable flight operations during adverse weather/visibility conditions, even at non-precision airfields. This paper presents the latest trends in IESVS, the design concept of the system, and the work being carried out at National Aerospace Laboratories, Bangalore, towards indigenous development of the system for transport aircraft.


Keywords:    Enhanced vision system, synthetic vision system, image fusion

In order to accommodate increasing air transportation demands in a safe, efficient and reliable manner, equivalent visual operations (EVO) is envisioned as the concept for the next generation air transportation system. EVO aims to achieve the safety and pace of existing visual flight rules (VFR) operations irrespective of the weather and visibility conditions1. The instrument landing system (ILS) is currently the predominant navigation aid enabling low-visibility/ceiling approach and take-off operations, but it is expensive and economically infeasible to provide at all airports. To minimize cost, aircraft-based technologies are being envisaged to provide EVO capability. Synthetic vision system (SVS) and enhanced vision system (EVS), or a combination of the two known as the integrated enhanced and synthetic vision system (IESVS), as well as the global positioning system (GPS) with augmentation systems, are the key technologies being considered. These new aircraft-based enhanced flight vision data, in combination with an accurate airport database, will allow greater access and throughput at airports that would otherwise be unavailable due to insufficient ground infrastructure.


Reduced visibility and reduced situational awareness are the main causes of controlled flight into terrain (CFIT) accidents2, and IESVS is the suggested technology to bring down such accidents3. The National Research Council (NRC) report on the 'decadal survey of civil aeronautics'4 lists SVS and EVS among the top fifty research and technology challenges for NASA in the next decade. NASA and many other leading avionics research teams the world over are currently involved in research, development, testing, certification, and commercialization of IESVS5-12.


Numerous analytical, simulator and flight test studies comparing IESVS to conventional displays have documented the potential of IESVS displays for providing improved aviation safety, enhanced pilot-vehicle performance, and increased operational capacity13-19.


The integrated enhanced and synthetic vision system (IESVS) is functionally a combination of EVS and SVS. EVS generates images in real time from a combination of weather-penetrating multispectral infrared (IR) imaging sensors, such as short wave infrared (SWIR), medium wave infrared (MWIR) and long wave infrared (LWIR), and millimeter wave radar (MMWR). SVS generates a rendered image of the external scene topography from the perspective of the flight deck, derived from aircraft attitude and high-precision navigation data, using an onboard database of terrain, obstacles and relevant cultural features2. In principle, SVS generated from a high-precision onboard terrain database is sufficient to enable the pilot to land the aircraft under all weather conditions. However, under low visibility conditions there is no way the pilot can verify whether this information is correct, or whether there are errors in either the navigation data or the airport database. For the high-precision tasks of approach and landing, very high integrity of airport databases and of navigation data derived from on-board sensors has to be guaranteed2. Further, GPS could be unavailable due to jamming, or the database could be inaccurate and may not include obstacles and incursions. Hence, the weather-penetrating imaging sensors are used to extract significant structures like the runway and other obstacles in real time, acting as a separate-thread integrity monitor and providing 'enhanced vision' to the pilot. The concept of IESVS2,20 is illustrated in Fig. 1.



Figure 1. IESVS concept.


The IESVS is conceived as a system of sensors, databases, computers, displays, and controls that will present visual representations of the environment outside the cockpit. Figure 2 shows the subsystem components of IESVS and the other supporting avionic systems required for full functionality/operational capability.


Figure 2. IESVS subsystem components with other avionic systems.


Design aspects of some of the key system components recommended by NASA21 and other research teams, and the technologies available from avionics companies, are presented below:

4.1 Enhanced Vision System Sensors

The enhanced vision system (EVS) is designed to provide improved visibility of the outside environment in real time at night and during adverse atmospheric conditions such as fog, rain, haze, dust, or smog. The EVS will be equipped with multispectral infrared (IR) sensors, which sense the runway lights and other important runway features, and millimeter wave radar (MMWR), an active sensor used for runway obstacle detection. Three types of infrared sensors working in three different infrared bands are commercially available for EVS applications22-24.


Notwithstanding the high sensitivities that are now available, IR-based EVS is not a solution for moderate to heavy fog and rain, and the natural choice to complement the IR sensors is a MMW system. MMW penetrates fog/rain quite well, but with limited resolution; an image-fusion system therefore needs to be used to produce the composite image. With ongoing research, imaging MMW systems continue to improve in performance, physical size, and cost. MMWR operating at 35 GHz or 94 GHz can be used for EVS applications25,26.

4.2 Synthetic Vision System Elements

The main synthetic vision system (SVS) elements are the terrain database and the image generating engine (hardware/software to render the SV image). The digital elevation model (DEM) resolution is one factor that determines how well the SVS terrain depiction will match the actual terrain environment. For its SVS applications, NASA has used 1 and 3 arc-s DEMs for approach, landing, and take-off/departure operations27. The most critical factor affecting the accuracy and reliability of SVS is the quality of the terrain database used, since database errors could lead to 'hazardously misleading terrain information (HMTI)'. Thus, there is a definite requirement for some mechanism to monitor the terrain database in real time using other instruments on board the aircraft. The required level of terrain database integrity depends upon the SVS application (whether advisory or flight critical) and the importance of the terrain database within the application. To mitigate the potential risk of HMTI, NASA's best practices recommend the use of active database integrity monitoring equipment (DIME) such as radar altimeters/millimeter wave radars21.

4.3 Flight Displays

Flight displays play a crucial role in the effective implementation of IESVS concepts. The information provided on the displays should integrate the tactical and strategic information necessary for flight operations as well as surface operations, including runway incursion prevention, along with the information from SVS and EVS. The IESVS imagery could be presented on any of the head-up display (HUD), head-down display (HDD), primary flight display (PFD), navigation display (ND) or synthetic vision auxiliary display (SV-AD). Display analysis and design should cover the full range of performance parameters, such as field of view (FOV), luminance, contrast, and resolution, to generate matrices of performance against environmental conditions. Human factors evaluations must be integrated with evaluations of display approaches and technologies.

The Multi Sensor Data Fusion (MSDF) group at CSIR-NAL has embarked on developing an indigenous IESVS and integrating it into the avionics suite of Indian transport aircraft. The indigenous IESVS is expected to give aircraft the capability to operate from all Indian regional airports with minimal infrastructure and instrumentation facilities, by day and night, under adverse weather conditions including rain, fog, smoke and other low-visibility conditions. The GPS-aided geo-augmented navigation (GAGAN) programme of the Airports Authority of India and ISRO, likely to become operational by the year 2014, is expected to provide CAT I landing capability at all airports within India. With IESVS, it is expected that CAT II and possibly CAT IIIa approach and landing can be achieved without any additional infrastructure at most Indian airports. As part of the initial work, technology analysis and requirement specification were carried out and a roadmap for the technology development has been evolved. The following is a description of the prototype development and testing carried out at CSIR-NAL.


5.1 EVS Prototype Development and Testing

A scaled-down version of the EVS prototype hardware, with an 8-12 µm long wave infrared (LWIR) sensor and an electro-optic (EO) colour camera, was designed and developed. The prototype EVS unit was field tested on a ground vehicle on the HAL airport runway to collect data for studying issues related to sensor latency and FOV, in addition to generating data for the evaluation of different sensor/image fusion algorithms. Fig. 3 shows the EVS prototype experimental setup and Fig. 4 shows the EVS prototype unit mounted on the test vehicle.


Figure 3. EVS prototype experimental setup.



Figure 4. EVS Prototype unit on ground test vehicle.


Experiments were conducted on the HAL airport runway both during daytime and after sunset. On both occasions, the test vehicle with the EVS unit mounted was driven along the runway from one end to the other. During each run, video from both cameras was captured and recorded on a laptop and subsequently analyzed. To evaluate the sensor characteristics across different lighting conditions during day and night, the EVS prototype unit was mounted on the National Flight Test Center (NFTC) tower at HAL airport to capture images of aircraft landing, taking off, and taxiing, and data was collected over a period of two weeks. The data gathered was also used to evaluate the image fusion algorithms developed in-house and for obstacle detection from the images.


5.2 Image Enhancement, Registration and Fusion for EVS

Generally, the display devices in an aircraft cockpit are of low dynamic range, whereas the vision sensors used in EVS acquire high dynamic range (HDR) images. When such HDR images are displayed on low dynamic range devices, low-intensity areas are underexposed and appear black, while high-intensity areas are overexposed and washed out. To overcome this problem, histogram equalization (HE) and RETINEX algorithms28 have been implemented and applied to the data collected in the field tests; a minimal sketch of the histogram equalization step is given below.
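As an illustration only (not the implementation used in these field tests), the following Python sketch shows how a high dynamic range sensor frame can be compressed to an 8-bit display image using global histogram equalization; the min-max normalization strategy and 16-bit input assumption are illustrative.

    import cv2
    import numpy as np

    def render_for_display(frame_16bit: np.ndarray) -> np.ndarray:
        """Map a 16-bit HDR sensor frame to an 8-bit display image."""
        # Normalize the full dynamic range to 0-255 before equalization.
        frame_8bit = cv2.normalize(frame_16bit, None, 0, 255,
                                   cv2.NORM_MINMAX).astype(np.uint8)
        # Histogram equalization spreads the intensities so that dark
        # (underexposed) and bright (overexposed) regions both become visible.
        return cv2.equalizeHist(frame_8bit)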

The images acquired by multiple vision sensors have to be fused to produce a single image for display on the HUD or HDD. Before fusing, the images from the different sensors must be registered to align them in position and orientation. An image registration algorithm using the point-mapping procedure has been implemented: a number of control point pairs are selected from the reference and input images, an affine transform is computed from these pairs, and the transform is applied to the input image to align it with the reference image. For pixel-level fusion of the registered LWIR and EO images/video, wavelet transform (WT) and Laplacian pyramid (LP) algorithms have been developed and evaluated29,30. The WT and LP algorithms were selected for fusing EVS images/video because they are computationally simple and suitable for real-time applications; in these algorithms, fusion is performed by decomposing the images into multiple resolution levels. The performance of these fusion algorithms is evaluated in terms of root mean square error (RMSE), peak signal-to-noise ratio (PSNR), spatial frequency and standard deviation. The fusion quality evaluation metrics are shown in Table 1. It is observed that fusion with higher levels of decomposition provides better results, but requires more computation time; the level of decomposition should therefore be chosen based on the performance requirement and the application. Wavelet-based image fusion gives slightly better results on the objective evaluation metrics shown in Table 1. Subjective evaluation of these algorithms will be carried out to select the better image fusion algorithm. A sketch of the registration and LP fusion steps is given below.
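The following Python sketch, using standard OpenCV and NumPy calls, illustrates the two steps just described: control-point-based affine registration and Laplacian pyramid fusion. The pyramid depth and the maximum-absolute-value selection rule for the detail coefficients are illustrative assumptions, not the exact in-house implementation.

    import cv2
    import numpy as np

    def register_affine(input_img, ref_pts, in_pts, out_shape):
        """Warp input_img so its control points land on the reference points."""
        # getAffineTransform needs exactly three point pairs; with more pairs,
        # cv2.estimateAffine2D would give a least-squares estimate instead.
        M = cv2.getAffineTransform(np.float32(in_pts[:3]), np.float32(ref_pts[:3]))
        return cv2.warpAffine(input_img, M, (out_shape[1], out_shape[0]))

    def laplacian_pyramid(img, levels):
        """Decompose an image into band-pass levels plus a low-pass residual."""
        pyr, cur = [], img.astype(np.float32)
        for _ in range(levels):
            down = cv2.pyrDown(cur)
            up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
            pyr.append(cur - up)          # detail (band-pass) at this level
            cur = down
        pyr.append(cur)                   # low-pass residual
        return pyr

    def fuse_lp(img_a, img_b, levels=3):
        """Fuse two registered grayscale images with a Laplacian pyramid."""
        pa = laplacian_pyramid(img_a, levels)
        pb = laplacian_pyramid(img_b, levels)
        # Keep the stronger detail coefficient per pixel; average the residuals.
        fused = [np.where(np.abs(a) >= np.abs(b), a, b)
                 for a, b in zip(pa[:-1], pb[:-1])]
        fused.append(0.5 * (pa[-1] + pb[-1]))
        # Reconstruct from coarse to fine by upsampling and adding detail back.
        out = fused[-1]
        for detail in reversed(fused[:-1]):
            out = cv2.pyrUp(out, dstsize=(detail.shape[1], detail.shape[0])) + detail
        return np.clip(out, 0, 255).astype(np.uint8)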


Table 1. Image fusion quality evaluation metrics


Figures 5 and 6 show the images recorded by the EO and LWIR cameras, and the fused image, during daytime and after sunset with the runway lights on. It can be observed that only the runway lights are visible in the EO image, and not the other features of the runway, whereas in the LWIR image the runway markings are clearly visible but not the runway lights. When the two images are fused after proper registration, the fused image contains all the necessary runway information, giving the pilot better situational awareness.

Figure 5. Images of HAL runway taken during daytime.


Figure 6. Images of HAL runway taken after sunset with runway lights on.


In these experiments, along with the EO and LWIR cameras, a GPS receiver was used to record the camera position; corrections from the GPS ground station facility at the Aircraft and Systems Testing Establishment (ASTE), Bangalore, were applied to the GPS data to generate accurate differential GPS (DGPS) position data. The GPS heading was also recorded, and the position and heading data were subsequently used for rendering synthetic runway imagery from the synthetic terrain database, which includes an SRTM DTED Level 1 database of the HAL airport, integrated in the distributed engineer-in-the-loop simulator (DELS) at NAL. The GPS data in the WGS84 coordinate system were transformed to a local ENV frame (East, North and Vertical with respect to the database) for rendering the terrain data; a sketch of this conversion is given below. Fig. 7 shows the images captured by the EO and LWIR cameras, the fused image after registration, and the corresponding synthetic images retrieved from DELS using the recorded GPS camera position data.
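As an illustration of this coordinate conversion (the database origin below is a placeholder, not the actual DELS origin), a geodetic WGS84 position can be converted to Earth-centred Earth-fixed (ECEF) coordinates and then rotated into the local East/North/Vertical frame:

    import numpy as np

    A = 6378137.0                 # WGS84 semi-major axis (m)
    E2 = 6.69437999014e-3         # WGS84 first eccentricity squared

    def geodetic_to_ecef(lat_deg, lon_deg, h):
        """Convert geodetic latitude/longitude/height to ECEF coordinates."""
        lat, lon = np.radians(lat_deg), np.radians(lon_deg)
        n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)   # prime vertical radius
        return np.array([(n + h) * np.cos(lat) * np.cos(lon),
                         (n + h) * np.cos(lat) * np.sin(lon),
                         (n * (1.0 - E2) + h) * np.sin(lat)])

    def ecef_to_env(p_ecef, lat0_deg, lon0_deg, h0):
        """Rotate an ECEF offset from the database origin into East/North/Vertical."""
        lat0, lon0 = np.radians(lat0_deg), np.radians(lon0_deg)
        d = p_ecef - geodetic_to_ecef(lat0_deg, lon0_deg, h0)
        rot = np.array([
            [-np.sin(lon0),               np.cos(lon0),              0.0],
            [-np.sin(lat0)*np.cos(lon0), -np.sin(lat0)*np.sin(lon0), np.cos(lat0)],
            [ np.cos(lat0)*np.cos(lon0),  np.cos(lat0)*np.sin(lon0), np.sin(lat0)]])
        return rot @ d    # [east, north, vertical] in metres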


Figure 7. Recorded EO/LWIR, fused and corresponding synthetic image.


5.3 Database Integrity Check

The NAL team has developed terrain database integrity monitoring strategies using downward-looking (DWL) sensors and forward-looking (FWL) sensors. The DWL sensor (radar altimeter) based approach is straightforward and requires minimal retrofit of existing aircraft, whereas the FWL sensor (weather radar) based approach is statistically more complex and may require the installation or modification of sensors on board the aircraft. The advantage of the FWL sensor is that it alerts the pilot well in advance, since it can see the terrain ahead of the aircraft. The DWL sensor based terrain integrity check algorithm has been validated using radar altimeter and GPS data of a high performance fighter aircraft obtained from flight trials conducted at different geographical locations in India31. A much-simplified sketch of the DWL check is given below.
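In essence, the DWL check compares the radar-altimeter height above ground with the height predicted from GPS altitude minus the database terrain elevation; a sustained disagreement flags the database (or the navigation solution) as suspect. The threshold and window length in this Python sketch are illustrative assumptions, not the validated values.

    import numpy as np

    def dwl_integrity_check(radalt_agl, gps_alt, dem_elev,
                            threshold_m=30.0, window=20):
        """Return True while the terrain database is consistent with the sensors."""
        # AGL height predicted from navigation data and the terrain database.
        predicted_agl = np.asarray(gps_alt) - np.asarray(dem_elev)
        disparity = np.abs(np.asarray(radalt_agl) - predicted_agl)
        # Average over a sliding window so single-sample noise does not
        # trip the monitor; a sustained disparity indicates HMTI risk.
        means = np.convolve(disparity, np.ones(window) / window, mode="valid")
        return bool(np.all(means < threshold_m))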


5.4 Flight Simulator and Avionics Integration Plans for IESVS Development and Testing

Development and testing of IESVS is planned with the coordinated use of various techniques: component tests in the laboratory, hardware-in-loop simulations on the desktop, flight simulator tests and flight tests. Desktop simulation studies are primarily intended to validate candidate technologies for the IESVS components, the fusion algorithms and the physical properties of the environments of concern. Experiments on the flight simulator are meant to develop operational scenarios, study the optimum human-machine interface with different display strategies and study system integration issues. Simulator testing involves validation of a far more constrained set of models in the context of realistic operational scenarios flown by qualified pilots. The key requirement on the flight simulator from the IESVS development perspective is high-fidelity simulation of the cockpit environment, various flight scenarios, sensor visuals and weather conditions. To meet this demand, a reconfigurable flight simulator is being set up at NAL with the necessary flight displays, cockpit environment, sensors and weather simulation models.

It is expected that NextGen civil transport aircraft will be equipped with integrated modular avionics (IMA) architecture. Hence, it is planned to integrate the IESVS with other avionics components in an IMA architecture. Fig. 8 shows the proposed integration plan.


Figure 8. Proposed plan for integration of IESVS with the avionics suite in IMA architecture.


The integrated enhanced and synthetic vision system (IESVS) is currently a focus of advanced-display avionics research aimed at reducing commercial aviation accidents caused by controlled flight into terrain (CFIT). NASA and other leading avionics research teams the world over have carried out extensive research on IESVS and have evolved several best-practice concepts for its development. This paper has highlighted the technology and the design concept for realizing IESVS. Indigenous development of IESVS for transport aircraft applications has been initiated at National Aerospace Laboratories. As part of this effort, the key operational requirements and the technologies required for realizing EVS for transport aircraft have been identified, together with the functional, operational and system requirements and a developmental road map for IESVS. A scaled-down version of the EVS hardware prototype, with electro-optical and infrared cameras, has been developed and field trials on a ground test vehicle have been conducted. Algorithms for image enhancement, registration, fusion and terrain database integrity monitoring have been developed, and the display requirements for IESVS have been identified. It is planned to carry out the development in a phased manner, with human factors studies on a research simulator, and then to integrate the system into the avionics suite for transport aircraft applications.

This work was initiated at National Aerospace Laboratories (NAL) by Dr Kota Harinarayana, who currently occupies the 'DS Kothari DRDO Chair'. The authors express their gratitude to Dr Kota Harinarayana, Dr Upadhya, former Director, NAL, and Mr Shyam Chetty, Acting Director, NAL, for their encouragement and support.

1. Kramer, Lynda J.; Bailey, Randall E.; Ellis, Kyle K.E.; Norman, R. Michael; Williams, Steven P.; Arthur III, Jarvis J.; Shelton, Kevin J. & Prinzel III, Lawrence J. Enhanced and synthetic vision for terminal maneuvering area NextGen operations. In Proceedings of SPIE, 2011, 8042.

2. Parrish, Russell V. Aspects of synthetic vision display systems and the best practices of the NASA's SVS project. NASA/TP-2008-215130, May 2008.

3. Strategic Research Agenda, Advisory Council for Aeronautics Research in Europe (ACARE), Oct 2004, 1.

4. Decadal Survey of Civil Aeronautics: Foundation for the Future. National Research Council (NRC). http://www.nap.edu/catalog/11664.html [Accessed on 27 Aug 2012]

5. EBACE: Superman vision in sight for future cockpits. Flight International, May 12, 2011. http://www.flightglobal.com/articles/2011/05/10/356173/ebace-supermanvision-in-sight-for-future-cockpits.html [Accessed on 27 Aug 2012]

6. Professional Pilot. Queensmith Communications Corp., Alexandria, VA, USA, Feb 06, 2012 issue. http://www.propilotmag.com/archives/2011/Nov%2011/A3_zero_p1.html [Accessed on 27 Aug 2012]

7. Kerr, J.R. EVS technology offers improved situational awareness around airports. ICAO J., 2004, 59(2), 15-17.

8. http://www.honeywell.com/sites/portal?page=ipfd_primus&smap=aerospace&theme=T5 [Accessed on 27 Aug 2012]

9. Bailey, R.E. Awareness and detection of traffic and obstacles using synthetic and enhanced vision systems. NASA TM-2012-217324, Jan 2012.

10. Professional Pilot. Queensmith Communications Corp., Alexandria, VA, USA, Aug 22, 2012. http://www.propilotmag.com/archives/2010/Oct%2010/A3_Combined_vision_p2.html [Accessed on 27 Aug 2012]

11. Combined vision systems. Avionics News, May 2011. http://www.jetcraft.com/wp-content/uploads/2011/03/CVS_KenElliott_Avionics-News_May2011.pdf [Accessed on 27 Aug 2012]

12. Honeywell moves forward on head-down EVS/SVS combo. NBAA Convention News, October 10, 2011. www.ainonline.com [Accessed on 27 Aug 2012]

13. Arthur III, J.J.; Prinzel III, L.J.; Kramer, L.J.; Parrish, R.V. & Bailey, R.E. Flight simulator evaluation of synthetic vision display concepts to prevent controlled flight into terrain (CFIT). NASA TP-2004-213008, April 2004.

14. Bailey, R.E.; Parrish, R.V.; Arthur III, J.J. & Norman, R.M. Flight test evaluation of tactical synthetic vision display concepts in a terrain - challenged operating environment. In the SPIE 16th Annual International Symposium on Aerospace/Defense Sensing, Simulation, and Controls - AeroSense, April 2002.

15. Jones, D.R.; Quach, C.C. & Young, S.D. Runway incursion prevention system - demonstration and testing at the Dallas/Fort Worth International Airport. In Proceedings of the 20th Digital Avionics Systems Conference, Oct. 2001.

16. Jones, D.R. Runway incursion prevention system simulation evaluation. In Proceedings of the 21st Digital Avionics Systems Conference, Oct. 2002.

17. Prinzel III, L.J.; Kramer, L.J.; Bailey, R.E.; Arthur III, J.J.; Williams, S.P. & McNabb, J. Augmentation of cognition and perception through advanced synthetic vision technology. In the 1st International Conference on Augmented Cognition, July 22-27, 2005.

18. Hemm, R.; Lee, D.; Stouffer, V. & Gardner, A. Additional benefits of synthetic vision technology. Logistics Management Institute NS014S1, June, 2001.

19. Williams, D.; Waller, M.; Koelling, J.; Burdette, D.W.; Doyle, T.; Capron, W.; Barry, J. & Gifford, R. Concept of operations for commercial and business aircraft synthetic vision systems. Version 1.0. NASA/TM-2001- 211058, Dec. 2001.

20. Hecker, Peter; Doehler, Hans-Ullrich & Suikat, Reiner. Enhanced vision meets pilot assistance. In Proceedings of SPIE, 1999, 3691.

21. Parrish, R.V. Aspects of synthetic vision display systems and the best practices of the NASA's SVS project. NASA/TP-2008-215130, May 2008.

22. http://www.esterline.com/Portals/17/Documents/en-us/SureSight_4pager.pdf [Accessed on 27 Aug 2012]

23. http://www.elbitsystems-us.com/commercial-aviation/products/enhanced-flight-vision-system-evs/system-components [Accessed on 27 Aug 2012]

24. www.max-viz.com [Accessed on 27 Aug 2012]

25. Compans, E. & Hellemann, K. A MM-wave radar sensor with proven capabilities for enhanced vision. In Proceedings of SPIE, 2001, 4363.

26. Aviation Week, Nov 28, 2005.

27. User requirements for terrain and obstacle data. Document DO-276A/ED-98A, RTCA/EUROCAE, July 2005.

28. Naidu, V.P.S.; Madhuri, Abhilash & Girija, G. Algorithm for high dynamic range image rendition. In Proceedings of the Second International Conference on Intelligent Human Computer Interaction (IHCI 2010), IIIT Allahabad, January 15-17, 2010, pp. 178-183.

29. Naidu, V.P.S.; Narayana Rao, P.; Kashyap, Sudesh K.; Shanthakumar, N. & Girija, G. Experimental study with enhanced vision system prototype unit. In Proceedings of the International Conference on Image Information Processing (ICIIP 2011), IEEE, Shimla, November 3-5, 2011.

30. Naidu, V.P.S. & Raol, J.R. Pixel-level image fusion using wavelets and principal component analysis – A comparative analysis. Def. Sc. J., 2008, 58(3), 338-352.

31. Kashyap, S.K. & Girija, G. Terrain database integrity monitoring using radar altimeter and GPS data. In Proceedings of the International Radar Symposium India (IRSI), NIMHANS Convention Centre, Bangalore, 8-11 December, 2009.

Mr N. Shantha Kumar obtained his MTech (Aerospace Engineering) from IIT Bombay in 1987. He is presently working in the Flight Mechanics and Control Division of CSIR-National Aerospace Laboratories (NAL), where he heads the Multi Sensor Data Fusion Group. His areas of interest are: multi sensor data fusion, Kalman filtering, target tracking and integrated enhanced and synthetic vision systems. He is a life member of the Aeronautical Society of India.

 

Dr Sudesh K. Kashyap obtained his ME (Electrical Engineering) from M.S. University of Baroda, Gujarat, and PhD (Electrical and Electronics Engineering) from the University of Mysore, Karnataka. He is presently working at CSIR-National Aerospace Laboratories, Bengaluru, as a principal scientist. His areas of interest are: Kalman filtering, multi sensor data fusion, fuzzy logic, Bayesian theory, neural networks and image processing.

Dr V.P.S. Naidu obtained his ME (Medical Electronics) from Anna University, Chennai, and PhD (Electronics) from the University of Mysore, Mysore. He is presently working as a scientist in the Multi Sensor Data Fusion Group, National Aerospace Laboratories, Bangalore. His areas of interest are: multi sensor data fusion and enhanced flight vision systems.

Dr (Mrs) Girija Gopalratnam obtained her PhD from Bangalore University in 1996. She is working as a scientist at the National Aerospace Laboratories. She received the NAL Outstanding Performance Award for Research in 1996, and has led teams that received NAL Outstanding Performance Awards for design, development and project execution in the areas of parameter estimation and multi sensor data fusion. She has over 60 research publications.