Registration of Airborne Infrared Images using Platform Attitude Information

In the current warfare scenario, stealth and passive threat detection capabilities are considered prime requirements for fighter aircraft to accomplish their missions. To improve the stealth of an aircraft, the trend is towards detecting threats with the help of passive sensors (electro-optic or infrared). Current systems of this kind include infrared search and track (IRST) and passive missile warning systems (PMWS). Both IRST and PMWS detect targets of interest by processing IR images acquired in the mid-IR region. The prime challenge in an IRST system or PMWS is detecting a moving target of size typically 1~2 pixels in the acquired image sequences. The temporal change caused by a moving target in consecutive frames is one important cue for detecting it, and is identified through absolute frame differencing of successive frames. This principle has limited applicability in IRST and PMWS because the imaging sensor moves with the aircraft, and this motion also imparts temporal change to the acquired images. In this paper, the authors propose a method for removing the temporal change caused by platform motion between two consecutively acquired frames using a registration process. The proposed method uses the platform attitude information at frame sampling times. The authors have also analyzed the sensitivity of the registration process to noisy platform attitude information.


Keywords:    Missile warning systems, infrared search and track, image registration

1.   Introduction

In the current warfare scenario, detection of threats by passive systems is of great importance for fighters to accomplish their intended mission. These capabilities can be incorporated into aircraft using systems like infrared search and track (IRST)1 and passive missile warning system (PMWS)2. The two systems are similar as far as the target or threat detection phase is concerned: both exploit the thermal radiation of the scene for detecting the target of interest, and both process acquired infrared (IR) imagery to identify targets of size 1~2 pixels in a wide field-of-view sensor against background clutter. Such a low-spatial-extent, low signal-to-noise ratio (SNR) target is called a dim target in the research community. In this paper the authors refer to both IRST and PMWS by the common name 'passive target detection system (PTDS)'.
Commonly, a PTDS consists of three data processing stages3: clutter rejection, tracking and association, and discrimination.
(a)   The clutter rejection stage reduces the large volume of pixel data to a small number of candidate pixels that may contain targets.
(b)   The tracking and association stage associates detections from the same spatial source into a track.
(c)   The discrimination stage classifies tracks as true target tracks or false target tracks by extracting various features.

The data processing in the clutter rejection stage is elaborated further below, as the registration algorithm proposed by the authors is part of it.

1.1   Clutter Rejection

The clutter rejection stage operates on preprocessed images to reject clutter (radiation from background, natural, or man-made sources). The main goal of the clutter rejection stage is to attenuate the clutter level towards zero without affecting the true target signal. The observation of a random image embedded with a dim small target can be modeled as4,5,

I(i,j,k) = S(i,j,k) + Ib(i,j,k) + N(i,j,k)                               (1)
k = 0,1,2….

where I denotes an image embedded with a small target, S represents the signal intensity of the target, and Ib is the clutter background. N is the noise induced by the detector elements present in the sensor. An image consisting of target and noise can be obtained by subtracting the clutter background from the original image. This can be modeled as,

It (i,j,k) = S(i,j,k) + N(i,j,k)                                              (2)
k = 0,1,2….

where It denotes an image containing the target signal intensity and the noise induced by the detector elements. The mathematical model in Eqn. (2) indicates that, ideally, the target could be detected correctly if the image background could be estimated accurately and the noise in the image excluded.
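To make the model in Eqns (1) and (2) concrete, the following minimal numpy sketch synthesizes an observation I = S + Ib + N and recovers the target-plus-noise image by subtracting a (here, perfectly known) background. All values and array names are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols = 128, 128

# Ib: smooth clutter background (a low-frequency ramp as a stand-in)
Ib = np.outer(np.linspace(80.0, 120.0, rows), np.linspace(0.9, 1.1, cols))

# S: a dim point target of 1~2 pixel spatial extent
S = np.zeros((rows, cols))
S[64, 64] = 12.0

# N: detector noise
N = rng.normal(0.0, 2.0, (rows, cols))

I = S + Ib + N        # Eqn (1): observed image
It = I - Ib           # Eqn (2): target + noise, assuming a perfect background estimate
```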
Limited literature is available on the problem of detecting weak point targets in infrared clutter. Background estimation algorithms for IR images can be categorized into spatial filtering6,7, temporal filtering8,9, spatio-temporal filtering10,11, hypothesis testing12,13, track-before-detect14,15, image morphology based16, multi-spectral image fusion17, wavelet decomposition18,19, and super-resolution reconstruction methods20.
Two primary challenges in developing an algorithm for dim target detection are:
(i)     The algorithm should reliably distinguish the target from clutter.
(ii)    For fast detection, the algorithm should be simple and implementable in real time.
Of the various methods referred for background estimation, temporal filtering methods are gaining more attention in PTDS owing to their simple and real-time implementable nature. PTDS use imaging sensors with typical frame rates in the range 60 Hz to 120 Hz. Temporal filtering methods exploit the moving property of the target. The principle is: 'if two successive images are generated at slightly different times from the same point in space, they should exhibit almost exactly the same backgrounds, pixel for pixel'. If a moving target is present in the field of view, its signature will be different in the two images. Subtraction of one image from the other should therefore result in nearly complete cancellation of background clutter with much less cancellation of the target. The main challenge in temporal filtering is that the two consecutively sampled images must be aligned before frame differencing in order to nullify the platform motion. Consider the two images presented in Fig. 1, which were captured at two time instants 30 ms apart. Upon comparison one can observe that the background content is the same in both images, but it appears at different pixel positions due to platform motion.
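The temporal-differencing principle can be stated in a few lines of numpy. This is a sketch that assumes the two frames have already been registered; the function name and the optional threshold are illustrative.

```python
import numpy as np

def temporal_difference(current, previous_registered, threshold=None):
    """Absolute frame differencing of two aligned frames: static
    background cancels, while a moving dim target leaves a residual."""
    diff = np.abs(current.astype(np.float64)
                  - previous_registered.astype(np.float64))
    if threshold is not None:
        diff[diff < threshold] = 0.0
    return diff
```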

2.   Image Registration

The registration process geometrically aligns the previously sampled image (previous image) with the currently sampled reference image (current image). In PTDS, the current image is the one sampled most recently by the imaging sensor, whereas the previous image is the one sampled immediately before it. Registration is achieved by maximizing the normalized cross-correlation of the two images, given by



$$\operatorname{corr}(f,g)=\frac{\sum_{x,y}\left(f(x,y)-\bar{f}\right)\left(g(x,y)-\bar{g}\right)}{\sqrt{\sum_{x,y}\left(f(x,y)-\bar{f}\right)^{2}\;\sum_{x,y}\left(g(x,y)-\bar{g}\right)^{2}}}$$

where $f(x, y)$ and $g(x, y)$ are the two images to be registered, and $\bar{f}$, $\bar{g}$ are the mean intensity values of the respective images. Based on the assumption that the ground is approximately flat (or that the aircraft is sufficiently high above the ground that any terrain effects are negligible), a three-dimensional affine or projective transformation can be used to describe the transformation from one frame to the next21. The registration process then finds the transformation that maximizes the cross-correlation.
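A direct numpy implementation of this measure might look as follows (a sketch computing a single whole-frame coefficient, not the sliding-window form used in template matching):

```python
import numpy as np

def normalized_cross_correlation(f, g):
    """Normalized cross-correlation coefficient of two equal-size
    images, as defined above; returns a value in [-1, 1]."""
    f = f.astype(np.float64)
    g = g.astype(np.float64)
    fz = f - f.mean()
    gz = g - g.mean()
    return (fz * gz).sum() / np.sqrt((fz ** 2).sum() * (gz ** 2).sum())
```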
Nevertheless, the majority of registration methods consist of four primary stages, presented in Fig. 2 (a minimal sketch of this conventional pipeline is given after the list).
(i)    The feature detection stage identifies salient and distinctive features like closed boundary regions, edges, contours, line intersections, and corners in the current and previous images. These feature points are called control points in the literature22. The important consideration in selecting feature or control points is their invariance under scaling, rotation, and translation.
(ii)   In the feature matching step, the correspondence between the features detected in the current image and those detected in the previous image is established.
(iii)  In transformation model estimation, the transformation (or its parameters) that best describes the established feature correspondence is estimated. The transformation model that describes the correspondence can be local or global. Typically, in airborne imaging systems the assumed transformation model is the affine transform, which is global.
(iv)   After the transformation model and its parameters are obtained, they are applied to the base image to align it to the current image. Image values at non-integer coordinates are found by appropriate interpolation techniques23.
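For reference, the four stages could be realized with OpenCV as sketched below. This is the conventional baseline whose cost the paper argues against for real-time PTDS use, not the proposed method; the choice of ORB features and a brute-force Hamming matcher is an assumption for illustration.

```python
import cv2
import numpy as np

def register_conventional(previous, current):
    """Sketch of the conventional four-stage registration pipeline."""
    orb = cv2.ORB_create()
    kp_p, des_p = orb.detectAndCompute(previous, None)      # (i) feature detection
    kp_c, des_c = orb.detectAndCompute(current, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_p, des_c)                   # (ii) feature matching
    src = np.float32([kp_p[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_c[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffine2D(src, dst,                   # (iii) transform estimation
                                method=cv2.RANSAC)
    h, w = current.shape[:2]
    return cv2.warpAffine(previous, M, (w, h),              # (iv) bilinear resampling
                          flags=cv2.INTER_LINEAR)
```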



The difficulties involved in applying this conventional registration process to IR images are:
  • PTDS requires completion of the registration process on subsequent frames at the frame rate for real-time operation. However, the conventional registration process involves computation-intensive tasks like feature extraction, feature matching, and transform model estimation, which may limit the real-time performance of PTDS.
  • For good registration accuracy, enough ground features should be common between the current image and the previous image. The low resolution and noisy nature of IR imagery pose additional problems for conventional registration algorithms.
  • IR imagers tend to generate considerable fixed pattern noise (dead and saturated pixels) and other non-uniformities, which can cause false correlations and other registration difficulties.

To overcome these difficulties, the authors propose a method to find the registration transformation between the current frame and the previous frame using platform attitude information. The details of the proposed approach are presented in the next section.


3.   Proposed Registration Method

The proposed registration method uses the following information:

  • Sensor installation angles to the platform.
  • Pixel’s line of sight (LOS) vectors.
  • Platform attitude information (Yaw, Pitch, Roll, velocities in inertial coordinate system) available at each frame sampling time.

The inertial coordinate system, also known as the NED (north-east-down) coordinate system, is a geographical coordinate system for representing state vectors that is commonly used in aviation24. It consists of three numbers: one representing position along the northern axis, one along the eastern axis, and one representing vertical position. Down is chosen as opposed to up in order to comply with the right-hand rule. The origin of this coordinate system is usually chosen to be the aircraft's center of gravity.
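Where rotations between the body and NED frames are needed, they can be composed from yaw, pitch, and roll in the standard aerospace Z-Y-X sequence24. A minimal sketch, assuming angles in radians:

```python
import numpy as np

def body_to_ned(yaw, pitch, roll):
    """Direction cosine matrix taking body-frame vectors into the NED
    frame, using the aerospace yaw-pitch-roll (Z-Y-X) rotation order."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx
```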
The pictorial representation of the registration process for aligning two frames sampled at two different time instants is presented in Fig. 3.


Figure 3. Process for identifying the registration transformation for a pixel in the current image to the previous image.


In the proposed approach, registration of a pixel in the current image into the previous image is carried out in three steps.
1. The inertial (NED) coordinates of the point imaged into the current image pixel are obtained using the pixel's unit inertial LOS vector and the range to the 3-D point in space. The range to the 3-D point is obtained, under the assumption that the ground is approximately flat, by

$$R = \frac{H}{\cos(el)}$$

where H is the platform altitude above ground level and el is the inertial elevation angle of the 3-D world point. The steps involved in identifying the line-of-sight vector of a pixel in the inertial coordinate system are presented in Fig. 4.


Figure 4. Transformations involved for representing the pixel’s line of sight in inertial coordinates.


2. The LOS vector $\bar{C}$ of the 3-D world point in the previous image coordinates is obtained by triangulation with the platform translational vector $\bar{B}$ and the current image pixel LOS vector $\bar{A}$. From Fig. 3,

$$\bar{C} = \bar{A} + \bar{B}$$

The platform translational vector $\bar{B}$ is obtained by $[X_N \;\; Y_E \;\; Z_D] = [V_N \;\; V_E \;\; V_D] \cdot T$, where $[V_N \;\; V_E \;\; V_D]$ is the inertial velocity vector of the platform and $T$ is the time gap between the frames. Using the LOS vector $\bar{C}$, the pixel location in the previous image is obtained through the consecutive rotations depicted in Fig. 5.
3. The registration map from the current image to the previous image consists of non-integer locations. Intensity values at the non-integer locations of the previous image are obtained through interpolation. A condensed sketch of these three steps is given after Fig. 5.



Figure 5. Transformations involved for reprojecting the 3-D world point into the previous frame.
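The following sketch condenses the three steps for a single pixel. It assumes the flat-ground model, takes el as the LOS angle measured from the local vertical (so the down component of the unit LOS vector equals cos(el)), and folds the sensor installation angles and the Fig. 5 rotations into a supplied callable; all helper names are illustrative.

```python
import numpy as np

def map_pixel_to_previous(los_ned_unit, H, v_ned, T, ned_to_prev_pixel):
    """Map one current-image pixel into (non-integer) previous-image
    coordinates using platform attitude information.

    los_ned_unit      : unit LOS vector of the pixel in NED
    H                 : platform altitude above ground level
    v_ned             : inertial velocity [VN, VE, VD] of the platform
    T                 : time gap between the two frames (seconds)
    ned_to_prev_pixel : callable applying the Fig. 5 rotations, mapping
                        an NED LOS vector to (row, col) in the previous image
    """
    # Step 1: range to the 3-D world point, R = H / cos(el); under this
    # convention the down component of the unit LOS equals cos(el).
    R = H / los_ned_unit[2]
    A = R * np.asarray(los_ned_unit)      # LOS vector from current position

    # Step 2: triangulate with the platform translation B = v * T,
    # giving C = A + B, the LOS vector from the previous position.
    C = A + np.asarray(v_ned) * T

    # Step 3: reproject C into the previous image; the location is
    # generally non-integer, so intensities are read off by interpolation.
    return ned_to_prev_pixel(C)
```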


4.   Experimental Evaluation

The proposed approach is evaluated on video recordings obtained from a sensor mounted on an airborne platform.

4.1  Experimental Data Set

The proposed approach is evaluated on three different scenarios of video recordings: NR01, NR02, and NR03. These video recordings were collected from an IR sensor (operating in the mid-IR region of the electromagnetic spectrum) mounted on a commercial aircraft. The IR sensor samples the thermal scene at 30 frames per second. Platform attitude information was recorded from an inertial navigation sensor and synchronized to the frame sampling times using interpolation or extrapolation (a minimal sketch follows). Details of the video recordings are presented in Table 1 and Fig. 6.
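Synchronizing recorded attitude samples to frame sampling times can be done per channel with linear interpolation, as sketched below; np.interp holds the end values when a frame time falls outside the recorded span, a simple stand-in for the extrapolation mentioned above.

```python
import numpy as np

def attitude_at_frame_times(att_times, att_values, frame_times):
    """Resample one attitude channel (e.g. yaw) onto frame timestamps.
    att_times must be increasing; att_values is the recorded channel."""
    return np.interp(frame_times, att_times, att_values)
```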



Table 1. Details of video recordings used for evaluation of the proposed approach.




Figure 6. Background features present in each of the video recordings: (a) NR01, (b) NR02, and (c) NR03.


4.2  Performance Analysis

The goal of the registration process is to maximize the normalized cross-correlation between two images, so the performance measure used for evaluating the proposed registration process is the normalized cross-correlation coefficient. The proposed approach is evaluated on the three video recordings, and the results are presented in Table 2. The correlation coefficient for a video recording in Table 2 is obtained by averaging the correlation coefficient of every frame with its corresponding previous frame. One can observe from Table 2 that the correlation coefficient values increase with registration. Video recordings NR01 and NR02 exhibit high correlation coefficient values even without registration of frames; the reason is the homogeneous background content present in these video frames. With registration, the correlation between frames in NR01 and NR02 improved by 2 %, which is a significant improvement in PTDS for effectively removing the clutter. In the case of NR03, the correlation between frames without registration is much lower because of the heterogeneous background content; with registration, it improved by 20 %. Even with registration, the correlation between frames in NR03 remains considerably lower than in NR01 and NR02. This is mainly due to registration inaccuracies contributed by height measurement errors: in NR03 the platform is flying at low altitude, so the assumption of approximately flat ground fails and further contributes to registration errors.
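The Table 2 metric can be computed as below, reusing normalized_cross_correlation from the earlier sketch; register is any callable that warps the previous frame onto the current one, or None for the no-registration baseline.

```python
import numpy as np

def mean_correlation(frames, register=None):
    """Average NCC of every frame with its (optionally registered)
    previous frame over one video recording."""
    scores = []
    for prev, cur in zip(frames[:-1], frames[1:]):
        if register is not None:
            prev = register(prev, cur)
        scores.append(normalized_cross_correlation(cur, prev))
    return float(np.mean(scores))
```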


Table 2. Performance comparison of the registration process on three experimental data sets.



4.3  Effect of Noisy Attitude Information on Registration Accuracy

The proposed registration process identifies the registration map using platform attitude information, so the accuracy of the registration process is analyzed with respect to noisy attitude information. Known amounts of additive Gaussian noise are added to the platform attitude variables (yaw, pitch, roll) and the NED plane velocities. The registration performance, in terms of the cross-correlation coefficient for the three video recordings, is analyzed while varying the noise levels in the attitude information. Results are presented in Fig. 7 and Fig. 8. Registration performance decreases rapidly with increasing noise in the yaw, pitch, and roll variables, whereas the performance drop is minimal with increasing noise in the NED plane velocities. The reason for the lower sensitivity of the registration process to noisy NED plane velocities is the small time gap between frames (of the order of milliseconds); due to this small time gap, the error contributed by the NED plane velocities in finding the platform translational vector ($\bar{B}$) is negligible.
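The sensitivity study of Figs. 7 and 8 can be organized as a simple sweep, as sketched below; register_with_attitude stands in for the attitude-based registration of the previous section and, like the attitude dictionary keys, is an illustrative name.

```python
import numpy as np

def attitude_noise_sweep(frames, attitudes, register_with_attitude,
                         sigmas, field='yaw', seed=0):
    """Mean NCC versus the sigma of zero-mean Gaussian noise added to
    one attitude variable (e.g. 'yaw', 'pitch', 'roll', or 'VN')."""
    rng = np.random.default_rng(seed)
    results = {}
    for sigma in sigmas:
        scores = []
        for i in range(1, len(frames)):
            noisy = dict(attitudes[i])
            noisy[field] += rng.normal(0.0, sigma)   # perturb one variable
            warped = register_with_attitude(frames[i - 1], frames[i], noisy)
            scores.append(normalized_cross_correlation(frames[i], warped))
        results[sigma] = float(np.mean(scores))
    return results
```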




Figure 7. Sensitivity of registration process with respect to noise in inertial velocities.



Figure 8. Sensitivity of registration process with respect to noise in platform’s Yaw, Pitch, Roll measurements.


5.   Conclusions

A novel method for registering airborne images using the platform's ownship motion is presented. The performance of the proposed approach is evaluated on video recordings with the correlation coefficient as the performance metric, and its sensitivity to noisy platform attitude information is also evaluated. It is observed that the proposed approach is more sensitive to noise in the yaw, pitch, and roll parameters than in the inertial plane velocities. The performance of the proposed approach can be improved by accurately estimating the range to the 3-D world point under low-altitude flying scenarios.


References

1.     Srivastava, H.B.; Limbu, Y.B.; Saran, R. & Kumar, A. Airborne infrared search and track systems. Def. Sci. J., 2007, 57(5), 739-53.

2.     Neele, F. Two-color infrared missile warning sensors. In Proceeding of the SPIE 5787: Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications II, 134 (June 03, 2005). [Full text via CrossRef]

3.     Tidhar, G. & Schlisselberg, R.  Evolution path of MWS technologies: RF, IR, and UV. In Proceeding of the  SPIE 5783: Infrared Technology and Applications XXXI, 662 (June 03, 2005) [Full text via CrossRef]

4.     Zhang, F.; Li, C. & Shi, L. Detecting and tracking dim moving point target in IR image sequences. Infrared Phys. Technol., 2005, 46(4), 323-28. [Full text via CrossRef]

5.     Xu, B.; Zheng, L. & Wang, Y.X. Survey of dim target detection and tracking in infrared image sequences. Infrared and Laser Engineering, 2004, 33, 482-87.

6.     Deshpande, S.D.; Er, M.H.; Venkateswarlu, R. & Chan, P. Max-mean and max-median filters for detection of small targets. In Proceeding of the SPIE 3809: Signal and Data Processing of Small Targets, 74 (October 04, 1999). [Full text via CrossRef]

7.     Soni, T.; Zeidler, J.R. & Ku, W.H. Adaptive whitening filters for small target detection. In Proceeding of the SPIE 1698: Signal and Data Processing of Small Targets, 21 (August 25, 1992) [Full text via CrossRef]

8.     Ralph, J.F.; Smith, M.I. & Heather, J.P. Motion-based detection, identification, and tracking for missile warning system applications. In Proceeding of the SPIE 5809: Signal Processing, Sensor Fusion, and Target Recognition XIV, 53 (May 31, 2005), [Full text via CrossRef]

9.     Silverman, J.; Caefer, C.E.; DiSalvo, S. & Vickers, V.E. Temporal filtering for point target detection in staring IR imagery: II. Recursive variance filter. In Proceeding of the SPIE 3373: Signal and Data Processing of Small Targets 1998, 44 (September 3, 1998). [Full text via CrossRef]

10.  Tartakovsky, A.G. & Blazek, R.B. Effective adaptive spatial-temporal technique for clutter rejection in IRST. In Proceeding of the SPIE 4048: Signal and Data Processing of Small Targets 2000, 85 (July 13, 2000), [Full text via CrossRef]

11.  Aridgides, T.; Fernandez, M.F.; Randolph, D. & Bray, D.  Adaptive three-dimensional spatio-temporal filtering techniques for infrared clutter suppression. In Proceeding of the  SPIE 1305: Signal and Data Processing of Small Targets 1990, 63 (October 1, 1990), [Full text via CrossRef]

12.  Tzannes, A.P. & Mooney, J.M. Point target detection in IR image sequences using spatio-temporal hypothesis testing. Air Force Research Laboratory, Hanscom AFB, MA, 1999.

13.  Blostein, S.D. Detecting small moving objects in image sequences using sequential hypothesis testing. IEEE Trans. Signal Process., 1991, 39(7), 1611-29. [Full text via CrossRef]

14.  Kligys, S.; Rozovsky, B. & Tartakovsky, A. Detection algorithm and track-before-detect architecture based on non-linear filtering for infrared search and track systems. Centre for Applied Mathematical Sciences, University of Southern California, Los Angeles, CA, 1998.

15.  Warren, R.C. A Bayesian track-before-detect algorithm for IR point target detection. Weapons Systems Division, Aeronautical and Maritime Research Laboratory, Victoria, Australia, 2002. [Full text via CrossRef]

16.  Tom, V.T.; Peli, T.; Leung, M. & Bondaryk, J.E. Morphology-based algorithm for point target detection in infrared backgrounds. In Proceeding of the SPIE 1954: Signal and Data Processing of Small Targets 1993, 2-11. [Full text via CrossRef]

17.  Toet, A. Detection of dim point targets in cluttered maritime backgrounds through multisensor image fusion. In Proceeding of the SPIE 4718: Targets and Backgrounds VIII: Characterization and Representation, 118 (August 6, 2002). [Full text via CrossRef]

18.  Wang, Y. & Zhang, J. Dim target detection system based on DSP. In Proceeding of the Second International Symposium on Computer Science and Computational Technology (ISCSCT'09), Academy Publisher, 2009, pp. 321-24.

19.  Del Marco, S. & Agaian, S. The design of wavelets for image enhancement and target detection. In Proceeding of the SPIE 7351: Mobile Multimedia/Image Processing, Security, and Applications, 2009, pp. 12-26. [Full text via CrossRef]

20.  Dijk, J.; van Eekeren, A.W.M.; Schutte, K.; de Lange, D.-J.J. & van Vliet, L.J. Performance study on point target detection using super-resolution reconstruction. In Proceeding of the SPIE 7335: Automatic Target Recognition XIX, 73350M (May 04, 2009). [Full text via CrossRef]

21.  Ralph, J.F.; Smith, M.I. & Heather, J.P. Identification of missile guidance laws for missile warning systems applications. In Proceeding of the SPIE 6236: Signal and Data Processing of Small Targets 2006, 62360B (May 19, 2006). [Full text via CrossRef]

22.  Zitová, B. & Flusser, J. Image registration methods: A survey. Image Vision Comput., 2003, 21, 977-1000. [Full text via CrossRef]

23.  Amanatiadis, A. & Andreadis, I. A survey on evaluation methods for image interpolation. Meas. Sci. Technol., 2009, 20. [Full text via CrossRef]

24.  Koks, D. Using rotations to build aerospace coordinate systems. Electronic Warfare and Radar Division, Systems Sciences Laboratory, Edinburgh, SA, DSTO-TN-0640, 2006.

Mr Ravi Shankar Chekuri obtained his BE (Electronics & Communication) from Andhra University, in 2008 and ME (Computer Vision & Embedded System Design) from IIT Kharagpur, in 2010. He is currently working at the Defence Avionics Research Establishment, Bangalore. His areas of interest include image processing, pattern classification, deep belief networks, hyperspectral image processing, automatic target recognition, automatic topographic map understanding, and geographical information systems.

Ms R. Anand Raji obtained her BE (Computer Science & Engineering) from Madras University. She is currently working as Team Leader of the Algorithm Group in the Avionics Division, Defence Avionics Research Establishment, Bangalore. She is currently working on the mission management system for UAVs and on algorithm development for a dual-color missile approach warning system. Her contributions have mainly been in software development for the mission computer, display processor, and EW systems.