Sensor Non-Uniformity Correction Algorithms and their Real-Time Implementation for Infrared Focal Plane Array-based Thermal Imaging Systems

The advancement of infrared (IR) detector technologies from the 1st to the 3rd generation and beyond has improved infrared imaging systems through the availability of IR detectors with a large number of pixels, smaller pitch, higher sensitivity and large F-number. However, it has also introduced several problems, the most serious of which is sensor non-uniformity, attributed mainly to the difference in the photo-response of each detector in the infrared focal plane array. These spatial and temporal non-uniformities produce a slowly varying pattern on the image, usually called fixed pattern noise, and considerably degrade the temperature-resolving capability of a thermal imaging system. This paper describes two types of non-uniformity correction methodologies. The first type corrects sensor non-uniformities using a calibration-based method. The second type corrects sensor non-uniformities using the scene information present in the acquired images. The proposed algorithms correct both additive and multiplicative non-uniformities. These algorithms are evaluated using simulated and actual infrared data, and the results of the implementations are presented. Furthermore, the proposed algorithms are implemented on field programmable gate array based embedded hardware.



Keywords: Infrared focal plane array, infrared imaging, field programmable gate array, non-uniformities


Infrared (IR) imaging systems are used for various applications, extending from surveillance and reconnaissance to long-range acquisition and engagement of targets1-3. Most present infrared systems use infrared focal plane arrays (IRFPAs), which consist of a mosaic of photo detectors placed at the focal plane of the imaging system1-4. IRFPA technology has advanced immensely in recent years, resulting in the development of focal plane arrays (FPAs) with smaller pitch and improved noise equivalent temperature difference (NETD), thus improving the performance of the system considerably5,6. However, it has also resulted in several problems, the most serious of which is sensor non-uniformity. Non-uniformity arises from the fact that each individual detector in the array has a different photo-response from its neighboring detector even when the two detectors are illuminated by the same radiance. The pixel-to-pixel fluctuations can be attributed to a number of factors, such as the 1/f noise associated with the detector and the corresponding readout integrated circuit (ROIC), and the nonlinear dependence of the detector gain on the photon flux incident on it. All these factors result in spatial and temporal non-uniformities, also called fixed pattern noise (FPN), thereby degrading the image quality significantly5-8. The FPN is not fixed but drifts slowly in time due to variation in FPA temperature, unstable bias voltages and changes in scene irradiance. This temporal drift manifests as a slowly varying pattern superimposed on the acquired infrared image, which degrades the spatial resolution and radiometric accuracy and, therefore, reduces the temperature-resolving capability of the FPA. Thus, a one-time laboratory calibration of the system does not guarantee optimum performance under all conditions, and recalibration is required to account for the temporal drift in the detector response.


There are mainly two types of non-uniformity correction (NUC) techniques: (a) calibration-based12,13 and (b) scene-based6-11. The most common calibration-based technique is the two-point calibration method, performed using a blackbody or by varying the integration time of the FPA. In this method, the normal operation of the thermal imaging system is interrupted while the camera images a uniform calibrated target at two distinct and known temperatures. The gain and offset of each detector are then calibrated across the array so that all the detectors produce a radiometrically accurate and uniform readout at the two reference temperatures. Scene-based non-uniformity correction techniques do not interrupt the operation of the infrared system during calibration. These techniques are generally algorithmic in nature: they use image sequences and exploit the motion or change in the actual scene to provide diversity in the scene temperature seen by each detector, and thus remove both gain and offset non-uniformities.


In this paper, an IR sensor model relating the number of photoelectrons generated to the incident flux, the integration time of the FPA and the ambient temperature is described. The gain and offset non-uniformities present in the infrared sensor are also described. Different methodologies for correcting the sensor non-uniformities, based on calibration as well as scene-based NUC techniques, are presented. In the calibration-based technique, tables of gain and offset coefficients of the infrared sensor are computed at different temperatures or at different integration times, and the coefficients are stored in the on-board flash memory of the video processing electronics. In the scene-based technique, scene information is used to compute the appropriate gain and offset coefficients adaptively. These gain and offset coefficients are used to perform non-uniformity correction on the incoming video data, resulting in a radiometrically calibrated output under all environmental conditions.

2.1   Infrared Sensor Model


An ideal infrared staring-mode sensor model, which maps the focal plane array onto the object space, is illustrated in Fig. 1. Each cell of the object space is mapped, through geometric optics, into a corresponding pixel of the focal plane. Diffraction and other forms of crosstalk are neglected. The image of the scene generates a signal at each pixel that is proportional to the local image irradiance14.

Figure 1. Schematic of the IR camera model.


This model assumes that the IRFPA is exposed to a uniform source of infrared (IR) radiation. The total current generated by a sensor element consists of the photon current, the dark current and the stray current. The stray current is due to dewar stray emission and window stray emission and is generally negligible. The model expresses the output of an arbitrary (i,j)th pixel in terms of the number of electrons accumulated at the pixel over the integration time: $N_t$ is the total number of electrons accumulated at the pixel, $N_p$ is the number of electrons due to photons and $N_d$ is the number of electrons due to dark current.


The number of photoelectrons accumulated at the (i,j)th pixel during the integration time is given by14

$$N_{ij} = \tau_o\, T_{int} \int_{\lambda_1}^{\lambda_2} \eta_{ij}(\lambda)\,\big[\varepsilon_{ij}(\lambda)\, L(\lambda, T_{ij}) + \{1-\varepsilon_{ij}(\lambda)\}\, L(\lambda, T_b)\big]\, A_{ij}\, \Omega_{ij}\, d\lambda \qquad (1)$$

where $L(\lambda, T_{ij})$ is the photon radiance as a function of wavelength for the (i,j)th cell in the object space and $L(\lambda, T_b)$ is the ambient photon radiance of the background. Planck's law gives the photon radiance of the object as a function of wavelength and temperature and is defined as

$$L(\lambda, T) = \frac{2\pi c}{\lambda^4\left[\exp\!\left(\dfrac{hc}{\lambda k T}\right) - 1\right]} \qquad (2)$$

The term $\Omega_{ij}$ in Eqn. (1) is the projected solid angle as viewed from the FPA and is given as

$$\Omega_{ij} = \frac{\pi \cos^4\theta_{ij}}{1 + 4\,(F/\#)^2} \qquad (3)$$

$\varepsilon_{ij}(\lambda)$ is the emissivity of the scene as a function of wavelength averaged over the pixel, and $\eta_{ij}(\lambda)$ is the quantum efficiency of the (i,j)th pixel as a function of wavelength. The term $[1 - \varepsilon_{ij}(\lambda)]$ is the reflectivity of the scene as a function of wavelength averaged over the pixel; it is included to account for the fact that each cell of the object space is not only emitting but also reflecting. The remaining terms in the above equations are defined as follows: $F/\#$ is the f-number of the optics, $A_{ij}$ is the pixel active area, $\tau_o$ is the effective transmittance of the optical system, $\theta_{ij}$ is the pixel angular displacement from the optical axis, $\lambda_1$ and $\lambda_2$ are the lower and upper cutoff wavelengths of the optical system respectively, $T_{int}$ is the integration time and $c$ is the speed of light ($= 3\times 10^8$ m/s). The cosine term represents the systematic variation of pixel illumination with position on the focal plane.


Dark charge will also accumulate at each pixel. The dark charge is proportional to $e^{-E_g/2kT}$, where $E_g$ is the band gap of the sensor material and $T$ is the absolute temperature in K. In general, the amount of dark charge is also non-uniform from pixel to pixel. Thus the total number of electrons accumulated at the (i,j)th pixel is


$$N_{ij} = \tau_o\, T_{int} \int_{\lambda_1}^{\lambda_2} \eta_{ij}(\lambda)\,\big[\varepsilon_{ij}(\lambda)\, L(\lambda, T_{ij}) + \{1-\varepsilon_{ij}(\lambda)\}\, L(\lambda, T_b)\big]\, A_{ij}\, \Omega_{ij}\, d\lambda + N^d_{ij} \qquad (4)$$

                                                                                                          
Defining the response coefficient $R_{ij}$ by

$$R_{ij} = \tau_o\, T_{int}\, A_{ij}\, \Omega_{ij} = \tau_o\, T_{int} \left[\frac{\pi \cos^4\theta_{ij}}{1 + 4\,(F/\#)^2}\right] A_{ij} \qquad (5)$$

Equation (4) may be written as

$$N_{ij} = R_{ij} \int_{\lambda_1}^{\lambda_2} \eta_{ij}(\lambda)\,\big[\varepsilon_{ij}(\lambda)\, L(\lambda, T_{ij}) + \{1-\varepsilon_{ij}(\lambda)\}\, L(\lambda, T_b)\big]\, d\lambda + N^d_{ij} \qquad (6)$$

$N^d_{ij}$ is the dark charge accumulated during the integration time. Figure 2 shows the variation of the number of accumulated photoelectrons with ambient temperature at different integration times, and Fig. 3 shows its variation with integration time at different ambient temperatures. In these calculations the well-fill capacity of the sensor is taken as 7 Me- and the dark current as 15 pA. Eqns. (4) and (6) describe the conversion of the incident infrared radiation into the detector output signal.
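The sensor model of Eqns. (1)-(6) can be exercised numerically to reproduce trends of the kind shown in Figs. 2 and 3. The following Python sketch uses illustrative, assumed values for quantum efficiency, optics transmittance, pixel pitch and F-number (they are not the parameters of the actual sensor used here) and evaluates the photoelectron count for a uniform, unit-emissivity scene.

```python
import numpy as np

# Minimal numerical sketch of the sensor model in Eqns. (1)-(6).
# All parameter values below are illustrative assumptions, not the
# actual sensor data used in the paper.

H = 6.626e-34     # Planck constant (J.s)
C = 3.0e8         # speed of light (m/s)
K = 1.381e-23     # Boltzmann constant (J/K)

def photon_radiance(wavelength, temperature):
    """Planck photon radiance, Eqn. (2): photons / (s.m^2.sr.m)."""
    return (2.0 * np.pi * C / wavelength**4) / (
        np.exp(H * C / (wavelength * K * temperature)) - 1.0)

def photoelectrons(t_scene, t_int, f_number=3.0, pitch=30e-6,
                   eta=0.7, tau_o=0.8, emissivity=1.0,
                   band=(3e-6, 5e-6), theta=0.0):
    """Photoelectrons accumulated in one pixel per Eqns. (1), (3), (5)."""
    wl = np.linspace(band[0], band[1], 2000)                        # wavelength grid
    omega = np.pi * np.cos(theta)**4 / (1.0 + 4.0 * f_number**2)    # Eqn. (3)
    area = pitch**2                                                 # pixel active area
    radiance = emissivity * photon_radiance(wl, t_scene)
    # Eqn. (1) with uniform emissivity (reflected background term omitted)
    return tau_o * t_int * eta * area * omega * np.trapz(radiance, wl)

if __name__ == "__main__":
    for t_amb in (290.0, 300.0, 310.0):
        n = photoelectrons(t_scene=t_amb, t_int=1.8e-3)
        print(f"T = {t_amb:.0f} K, Tint = 1.8 ms -> {n:.3e} electrons")
```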


2.2   Model for Sensor Non-Uniformities


Figure 2. Variation of number of photoelectrons with ambient temperature.

Figure 3. Variation of number of photoelectrons with integration time.

To analyze the impact of these non-uniformities, the output of a specific pixel is now described in terms of its unique non-uniformities. Assuming that the source is uniform, and taking spatial and temporal averages of the total number of accumulated electrons across the array, we obtain

$$\langle \overline{N_{ij}} \rangle = \left\langle R_{ij} \int_{\lambda_1}^{\lambda_2} \eta_{ij}(\lambda)\,\left\{ \varepsilon(\lambda)\, \overline{L}(\lambda, T_s) + [1-\varepsilon(\lambda)]\, \overline{L}(\lambda, T_b) \right\} d\lambda \right\rangle + \left\langle \overline{N^d_{ij}} \right\rangle \qquad (7)$$

where $T_s$ is the source temperature. We define

$$R_{ij} = \langle R_{ij} \rangle + r_{ij} \qquad (8)$$

$$\overline{N^d_{ij}} = \langle \overline{N^d_{ij}} \rangle + \overline{n^d_{ij}} \qquad (9)$$

$$\eta_{ij}(\lambda) = \langle \eta_{ij}(\lambda) \rangle + \kappa_{ij}(\lambda) \qquad (10)$$

$\langle R_{ij} \rangle$, $\langle \overline{N^d_{ij}} \rangle$ and $\langle \eta_{ij}(\lambda) \rangle$ represent the spatial averages of the response coefficient, dark charge and quantum efficiency respectively, taken over all pixels in the detector array. The quantities $r_{ij}$, $\overline{n^d_{ij}}$ and $\kappa_{ij}(\lambda)$ represent the incremental deviations of the (i,j)th pixel from the mean value in response coefficient, dark charge and quantum efficiency respectively. Substituting Eqns. (8)-(10) in Eqn. (6) and taking the temporal average, we arrive at

$$\overline{N_{ij}} = \langle \overline{N_{ij}} \rangle + r_{ij} \int_{\lambda_1}^{\lambda_2} \langle \eta_{ij}(\lambda) \rangle \left\{ \varepsilon(\lambda)\, \overline{L}(\lambda, T_s) + [1-\varepsilon(\lambda)]\, \overline{L}(\lambda, T_b) \right\} d\lambda + \overline{n^d_{ij}} + \langle R_{ij} \rangle \int_{\lambda_1}^{\lambda_2} \kappa_{ij}(\lambda) \left\{ \varepsilon(\lambda)\, \overline{L}(\lambda, T_s) + [1-\varepsilon(\lambda)]\, \overline{L}(\lambda, T_b) \right\} d\lambda + r_{ij} \int_{\lambda_1}^{\lambda_2} \kappa_{ij}(\lambda) \left\{ \varepsilon(\lambda)\, \overline{L}(\lambda, T_s) + [1-\varepsilon(\lambda)]\, \overline{L}(\lambda, T_b) \right\} d\lambda \qquad (11)$$
The last integral in the above equation can be neglected, as it is the product of the two small terms $r_{ij}$ and $\kappa_{ij}(\lambda)$. Thus, Eqn. (11) can be approximated by

$$\overline{N_{ij}} = \langle \overline{N_{ij}} \rangle + \frac{r_{ij}}{\langle R_{ij} \rangle} \left( \langle \overline{N_{ij}} \rangle - \langle \overline{N^d_{ij}} \rangle \right) + \overline{n^d_{ij}} + \langle R_{ij} \rangle \int_{\lambda_1}^{\lambda_2} \kappa_{ij}(\lambda) \left\{ \varepsilon(\lambda)\, \overline{L}(\lambda, T_s) + [1-\varepsilon(\lambda)]\, \overline{L}(\lambda, T_b) \right\} d\lambda \qquad (12)$$

It can be seen from Eqn. (12) that if the spectral response of the pixel is uniform, i.e. $\kappa_{ij}(\lambda) = 0$, then a function involving only a multiplication and an addition can be used to correct the infrared imagery.

3.1   Calibration Based NUC


The output $Y_{ij}$ of the (i,j)th pixel is proportional to the number of photoelectrons accumulated at the pixel over the integration time, as given by Eqn. (1). To perform the non-uniformity correction, the sensor output is acquired at two different temperatures, or at two different integration times, by exposing the system to a uniform, high-emissivity source such as a blackbody. To achieve this, a first set of image data I1 is recorded at a lower blackbody temperature T1 and a second set of image data I2 is recorded at a higher blackbody temperature T2. Thirty-two image frames at each temperature are acquired and averaged to reduce the temporal noise.
For the (i,j)th pixel in the focal plane array, the measured signal (detector response) is given by the following linear relationship:

$$Y_{ij} = a_{ij}\, X_{ij} + b_{ij} \qquad (13)$$

where $a_{ij}$ and $b_{ij}$ are the gain and offset non-uniformities associated with the (i,j)th pixel respectively, and $X_{ij}$ is the irradiance received by the (i,j)th detector pixel. Thus, after NUC, the above equation can be expressed as

$$X_{ij} = a'_{ij}\,( Y_{ij} - b_{ij} ) \qquad (14)$$

where

$$a'_{ij} = \frac{1}{a_{ij}} \qquad (15)$$

Defining

$$a'_{ij} = \frac{\langle I_2 \rangle - \langle I_1 \rangle}{I_{2ij} - I_{1ij}} \qquad (16)$$

$$b_{ij} = I_{1ij} \qquad (17)$$

$I_{1ij}$ and $I_{2ij}$ are the (i,j)th pixel intensities at the lower and higher temperatures respectively. $\langle I_1 \rangle$ and $\langle I_2 \rangle$ are the spatial averages of the image frames at the lower and higher temperature respectively, defined as


$$\langle I_1 \rangle = \frac{1}{n_1\, n_2} \sum_{i=1,\,j=1}^{n_1,\,n_2} I_{1ij} \qquad (18)$$

$$\langle I_2 \rangle = \frac{1}{n_1\, n_2} \sum_{i=1,\,j=1}^{n_1,\,n_2} I_{2ij} \qquad (19)$$

$n_1 \cdot n_2$ is the total number of pixels in a frame, where $n_1$ and $n_2$ are the numbers of rows and columns respectively. Thus, from Eqns. (14)-(19), the corrected output of the (i,j)th pixel is given as

$$X_{ij} = \frac{\langle I_2 \rangle - \langle I_1 \rangle}{I_{2ij} - I_{1ij}}\,( Y_{ij} - I_{1ij} ) \qquad (20)$$
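As a concrete illustration, the two-point correction of Eqns. (16), (17) and (20) can be expressed in a few lines. The sketch below assumes the two calibration frames have already been frame-averaged; the function and variable names are illustrative and not taken from the actual implementation.

```python
import numpy as np

def two_point_nuc(frame, cal_low, cal_high):
    """Two-point calibration NUC, Eqns. (16)-(20).

    frame    : raw frame to be corrected, shape (rows, cols)
    cal_low  : frame-averaged response to the uniform source at T1 (I1)
    cal_high : frame-averaged response to the uniform source at T2 (I2)
    """
    gain = (cal_high.mean() - cal_low.mean()) / (cal_high - cal_low)  # Eqn. (16)
    offset = cal_low                                                  # Eqn. (17)
    return gain * (frame - offset)                                    # Eqn. (20)

# Usage with simulated non-uniformities (illustrative only):
rng = np.random.default_rng(0)
true_gain = 1.0 + 0.05 * rng.standard_normal((256, 320))   # per-pixel gain
true_offset = 100.0 + 15.0 * rng.standard_normal((256, 320))  # per-pixel offset
scene = rng.uniform(1000.0, 4000.0, (256, 320))             # true irradiance
raw = true_gain * scene + true_offset                        # Eqn. (13)
cal1 = true_gain * 1000.0 + true_offset   # uniform source, lower flux
cal2 = true_gain * 4000.0 + true_offset   # uniform source, higher flux
corrected = two_point_nuc(raw, cal1, cal2)
```

Acquiring the two calibration points at two integration times instead of two blackbody temperatures follows the same arithmetic, with I1 and I2 replaced by the frame-averaged responses at the lower and higher integration times.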


3.2   Scene-based Non-Uniformity Correction


3.2.1  Adaptive Non-Uniformity Correction using Scene Statistics


The IRFPA sensor response is modeled as a first-order linear relationship between the input irradiance and the detector output, as given in Eqn. (13). For the (i,j)th detector element in the FPA, the readout signal corresponding to that pixel in the nth frame is given as8-11

$$Y_{ij}(n) = a_{ij}(n)\, X_{ij}(n) + b_{ij}(n) + \eta_{ij}(n) \qquad (21)$$

where $a_{ij}(n)$ and $b_{ij}(n)$ are respectively the gain and offset non-uniformities associated with the (i,j)th pixel in the nth frame, and $X_{ij}(n)$ is the irradiance received by the (i,j)th detector pixel in the nth frame. The term $\eta_{ij}(n)$ represents the additive readout noise associated with the (i,j)th pixel in the nth frame. This noise is assumed to be Gaussian with zero mean and unity variance, and the scene statistics are used to calculate the gain and offset non-uniformities.


This algorithm assumes that, due to motion in the infrared scene or camera panning, all the detectors in the IRFPA are exposed to similar scene intensity statistics over a period of time. Thus, the mean flux incident on the IRFPA over a period of time, as well as the standard deviation of the flux, should be the same for every detector in the FPA. Under this assumption, the offset and gain components of each detector element are computed as the first- and second-order statistics of its output over a period of time. The mean and the variance of the measured FPA output can be written as (the frame index is omitted for ease of notation)
$$E[Y_{ij}] = E[\,a_{ij} X_{ij} + b_{ij}(n) + \eta_{ij}(n)\,] = a_{ij}\, E[X_{ij}] + E[b_{ij}(n)] + E[\eta_{ij}(n)] = a_{ij}\, E[X_{ij}] + b_{ij}(n) \qquad (22)$$


and

$$\sigma^2_{Y_{ij}} = \mathrm{Var}[Y_{ij}] = a_{ij}^2\, \sigma^2_{X_{ij}} \qquad (23)$$

respectively.


Without loss of generality, we can assume that the variable $X$ has zero mean and unity variance, i.e. $E[X] = 0$ and $\sigma_X^2 = 1$. This is because, if the variable $X$ has a non-zero mean and a non-unity variance, the non-zero mean can be absorbed into the additive offset non-uniformity and the non-unity variance can be absorbed into the multiplicative gain non-uniformity. Thus, by simplifying the above equations, we can write

$$a_{ij}^2 = \sigma^2_{Y_{ij}} \qquad (24)$$

$$b_{ij} = E[Y_{ij}] \qquad (25)$$

where $\sigma^2_{Y_{ij}}$ is given by

$$\sigma^2_{Y_{ij}}(n) = \frac{1}{n} \sum_{k=1}^{n} \big( Y_{ij}(k) - E[Y_{ij}(k)] \big)^2 \qquad (26)$$


To simplify computation, Eqns. (25) and (26) can be written in the recursive form

$$b_{ij}(k) = \frac{Y_{ij}(k) + (k-1)\, b_{ij}(k-1)}{k} \qquad (27)$$


$$a_{ij}^2(k) = \frac{1}{k} \big[ Y_{ij}(k) - b_{ij}(k) \big]^2 + \frac{k-1}{k}\, \sigma^2_{Y_{ij}}(k-1) \qquad (28)$$
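A minimal software sketch of the recursive estimates in Eqns. (27) and (28), with frames processed one at a time, is given below; the class and variable names are illustrative rather than those of the actual implementation.

```python
import numpy as np

class ConstantStatisticsNUC:
    """Running per-pixel offset/gain estimates, Eqns. (27) and (28)."""

    def __init__(self, shape):
        self.k = 0
        self.b = np.zeros(shape)     # offset estimate, Eqn. (27)
        self.var = np.zeros(shape)   # variance estimate (squared gain), Eqn. (28)

    def update(self, frame):
        self.k += 1
        k = self.k
        # Recursive mean (offset estimate), Eqn. (27)
        self.b = (frame + (k - 1) * self.b) / k
        # Recursive variance (squared gain estimate), Eqn. (28)
        self.var = ((frame - self.b) ** 2 + (k - 1) * self.var) / k
        gain = np.sqrt(np.maximum(self.var, 1e-12))   # guard against divide-by-zero
        return (frame - self.b) / gain                # corrected output
```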


The above algorithm can be further modified by applying an exponential filter of window length $M_l$. The exponential filter is a simple linear recursive filter, and such filters are widely used, especially for time-series analysis9,15. The general form of the exponential filter is given by the following expression

$$y(t) = (1-\gamma)\, y(t-1) + \gamma\, x(t) \qquad (29)$$

where
$y(t)$: output of the filter at time t
$y(t-1)$: output of the filter at the previous time instant t-1
$x(t)$: input of the filter
$\gamma$: parameter of the filter in the range $0 \le \gamma \le 1$
From Eqn. (29) it can be seen that the output of the exponential filter is the weighted sum of the previous output $y(t-1)$, taken with weight $(1-\gamma)$, and the current input value $x(t)$, taken with weight $\gamma$. The smaller the parameter $\gamma$, the longer the memory of the exponential filter and the greater the degree of smoothing. The term exponential here means that each previous input $x(t-\tau-1)$ contributes $(1-\gamma)$ times as much to the output $y(t)$ as the input $x(t-\tau)$. With this modification, the estimates of the offset and gain coefficients are computed recursively as follows.

$$b_{ij}(n) = \frac{M_l - 1}{M_l}\, b_{ij}(n-1) + \frac{1}{M_l}\, Y_{ij}(n) \qquad (30)$$



$$\sigma^2_{Y_{ij}}(n) = \frac{M_l - 1}{M_l}\, \sigma^2_{Y_{ij}}(n-1) + \frac{1}{M_l} \big( Y_{ij}(n) - b_{ij}(n) \big)^2 \qquad (31)$$



$$a_{ij}(n) = \sqrt{\sigma^2_{Y_{ij}}(n)} \qquad (32)$$



Using the above algorithm, the non-uniformity corrected output $X_{ij}(n)$, mapped to the full dynamic range of the scene, is given by

$$X_{ij}(n) = \frac{Y_{ij}(n) - b_{ij}(n)}{a_{ij}(n)} \qquad (33)$$
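The exponentially windowed update of Eqns. (30)-(33) can be sketched as follows, assuming a fixed window length M_l = 16 (the value later used in the hardware implementation); the function is an illustrative outline, not the production code.

```python
import numpy as np

def adaptive_nuc(frames, window=16, eps=1e-12):
    """Scene-statistics NUC with exponential window, Eqns. (30)-(33).

    frames : iterable of 2-D arrays (the raw video sequence Y(n))
    window : exponential filter length M_l (gamma = 1/M_l)
    Yields the corrected frame X(n) for every input frame.
    """
    b = None      # offset estimate, Eqn. (30)
    var = None    # variance estimate, Eqn. (31)
    for y in frames:
        if b is None:                      # initialise from the first frame
            b = y.astype(float)
            var = np.ones_like(b)
        # Eqn. (30): recursive offset estimate
        b = (window - 1) / window * b + y / window
        # Eqn. (31): recursive variance estimate
        var = (window - 1) / window * var + (y - b) ** 2 / window
        # Eqn. (32): gain estimate; Eqn. (33): corrected output
        a = np.sqrt(var + eps)
        yield (y - b) / a
```

With window = 16, the update multiplications reduce to a multiply by 15 and a right shift by 4 bits in fixed-point arithmetic, which is the form exploited in the FPGA data path described in Section 5.2.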

4.1   Calibration Based Non-Uniformity Corrections


The capability of the non-uniformity correction algorithm is evaluated by applying the algorithm to real infrared data and studying its performance parameters. A performance parameter called residual non-uniformity (RNU) is defined and used to evaluate the performance of the algorithm. The infrared data are collected using a 320 x 256 element InSb staring focal plane array based cooled thermal imaging system operating in the 3-5 µm wavelength region16,17. The infrared imaging system is designed with a 50 mm optical aperture and an F-number (F/#) of F/3. The video processing electronics of the thermal imaging system has been designed to perform non-uniformity correction, bad pixel replacement, digital scan conversion, automatic gain control and several image enhancement functions such as contrast enhancement and histogram equalization. A 14-bit analog-to-digital converter is used to digitize the raw video, containing non-uniformities, from the IRFPA. Finally, it generates consultative committee for international radio (CCIR) standard video output at 50 Hz, which can be displayed on any monitor. Different sets of image data were captured in the morning, afternoon, evening and at night. Data were also recorded during different seasons.


The integration time of the IRFPA is controlled through the video processing board. Two sets of image data are acquired, at a lower integration time of 1.8 milliseconds (ms) and a higher integration time of 2.2 ms respectively. Sixty-four frames of image data are collected at each integration time and averaged to reduce the temporal noise. These data are then used to compute the gain and offset coefficients, which are stored separately in the onboard flash memory of the video processing board. Figure 4 shows some sample image frames with sensor non-uniformities and the corrected images after performing non-uniformity correction.


4.1.1      Performance Evaluation


Figure 4. Image frames (a), (c), and (e) before NUC; (b), (d), and (f) after NUC.

The performance of the integration-time based non-uniformity correction algorithm is estimated by a correction parameter called residual non-uniformity (RNU)18,19. This is defined as the standard deviation (SD) of the corrected FPA signal divided by the mean signal. Mathematically, it can be expressed as



$$RNU = \frac{SD}{Mean} = \frac{1}{\bar{X}} \sqrt{ \frac{1}{L \cdot M} \sum_{i=1}^{L} \sum_{j=1}^{M} \big( X_{i,j} - \bar{X} \big)^2 } \qquad (34)$$

where L and M are the numbers of rows and columns of the IRFPA, $X_{i,j}$ is the output of pixel (i,j) and $\bar{X}$ is the spatial mean over all pixels of the IRFPA.
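The RNU of Eqn. (34) reduces to a one-line computation on the corrected frame, as sketched below with an illustrative array name.

```python
import numpy as np

def residual_non_uniformity(frame):
    """Residual non-uniformity, Eqn. (34): spatial SD divided by spatial mean."""
    return frame.std() / frame.mean()

# Example: RNU in percent for a corrected frame 'corrected'
# print(f"RNU = {100.0 * residual_non_uniformity(corrected):.2f} %")
```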


A PC-based application was developed to control the various functions of the video processing board through a serial link and to capture the digital data. The infrared data with non-uniformities were captured by varying the integration time of the FPA. One set of data was captured by exposing the infrared system to a uniform source at a temperature of 30 °C. Another set of data was captured at 17 °C. Furthermore, over 100 different sets of image data were captured under different conditions by varying the integration time at different temperatures. The non-uniformities were computed for the uncorrected and corrected data, and the plots of RNU (%) against integration time at 30 °C and 17 °C are given in Fig. 5.

Figure 5. Variation of RNU with integration time at different temperatures.


It can be seen from these graphs that the non-uniformities are reduced from approximately 6 % to approximately 0.6 %. The spatial non-uniformity is compared with the temporal noise of the system, which is given by the standard deviation over time. Figure 6 gives the comparison of the spatial and temporal noise after non-uniformity correction.

4.2   Adaptive Non-Uniformity Correction Using Scene Statistics


An infrared image sequence of 2000 frames, which had been pre-calibrated with the calibration-based NUC, was taken and corrupted with gain and offset non-uniformities. Different levels of offset and gain non-uniformities were added.

Figure 6. Post-NUC temporal and spatial noise variation with integration time.


Offset non-uniformities with mean 0 and standard deviation from 1 to 15 were added. Gain non-uniformities with mean 1 and standard deviation from 0.01 to 0.5 were added. The proposed algorithm was applied to this corrupted data and the results are given in Fig. 7.

4.2.1      Performance Evaluation


The performance of the algorithms is quantified by the root mean square error (RMSE) averaged over all detectors20-25, which is defined as



$$RMSE = \sqrt{ \frac{1}{l \cdot m} \sum_{i=1}^{l} \sum_{j=1}^{m} \big( X^c_{ij}(n) - X_{ij}(n) \big)^2 } \qquad (35)$$



where l and m are the numbers of rows and columns of the FPA respectively, $X^c_{ij}(n)$ is the corrected output and $X_{ij}(n)$ is the actual output. The RMSE is calculated for the image sequence and its variation with the number of frames is shown in Fig. 8. In addition, the root mean square errors for three algorithms, Scribner26, Harris27 and the proposed algorithm, are calculated and the results are shown in Fig. 8. It can be seen that the RMSE is lowest for the proposed algorithm. There is fairly close agreement between the proposed algorithm and Scribner's algorithm; however, the two deviate as the number of frames increases.
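The RMSE of Eqn. (35) can likewise be computed per frame over the corrected sequence; the sketch below assumes the reference (true) frames are available, with illustrative array names.

```python
import numpy as np

def rmse(corrected, reference):
    """Root mean square error, Eqn. (35), averaged over all detectors."""
    return np.sqrt(np.mean((corrected - reference) ** 2))

# Example: per-frame RMSE curve for a corrected sequence versus ground truth
# rmse_curve = [rmse(xc, x) for xc, x in zip(corrected_frames, true_frames)]
```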

5.1   Calibration Based Non-Uniformity Corrections


The calibration-based non-uniformity correction algorithm can be readily implemented in FPGA-based hardware. The FPGA28-31 based hardware implementation and a photograph of the FPGA-based hardware are given in Fig. 9.


The analog video signal from the IRFPA is pre-processed and converted into digital data using a 14-bit ADC. The 14-bit ADC is used to ensure that the quantisation error is less than the signal corresponding to the NETD of the sensor array. In case a digital IRFPA is used, the ADC module can be bypassed and the FPGA receives the digital data directly from the IRFPA. This raw digital video data, at different integration times, is stored in the frame memory. A serial link is provided to capture the data through the PC-based application. Multiple frames of raw video data are captured and averaged by successively operating the frame memories in read and write modes; this is done to reduce the temporal noise. The integration time of the IRFPA can be varied through the PC-based application. The hardware is implemented in very high-speed integrated circuit hardware description language (VHDL)30 using the Xilinx ISE tool31. The gain and offset coefficients, thus calculated, are stored in the offset and gain flash memories and are applied to the incoming video data in real time.

Figure 7. (a), (b), (c), (d) Input image frames with simulated non-uniformities; (e), (f), (g), (h) image frames after correction using the proposed algorithm.

Figure 8. RMSE value of the proposed algorithm.

Figure 9. Hardware implementation of the algorithm.


5.2 Adaptive Non-Uniformity Correction Using Scene Statistics


The adaptive NUC algorithm using scene statistics uses two SRAMs in ping-pong mode to calculate and store the offset coefficients successively on a pixel-by-pixel basis. Four SRAMs are required for the computation of the gain coefficients: two are used for storing the intermediate mean value of each pixel in successive frames, and the remaining two are used to calculate and store the variance of each pixel in successive frames. Figure 10 illustrates the data path of the scene-statistics algorithm implemented in the FPGA.


The infrared data corrupted with non-uniformities, Y(n), is fed to an adder. Simultaneously, the previous value of the offset coefficient from the SRAM is multiplied by a constant and also given to the adder, and these two values are added. The length of the exponential filter presently implemented is 16, which results in a constant multiplication by 15 and a division by 16. The output of the adder is shifted right by 4 bits to implement the division by 16. After formatting the data, the offset coefficients are stored in the SRAM for the next computation. A temporal average of the incoming data at every pixel is computed by the mean value computation block. This average value is written into the SRAM and is also used for the computation of the variance. The average value is subtracted from the incoming data and the result is multiplied by itself to obtain the variance. Data formatting is performed to guard against the expansion of bits as a result of multiplication, and the result is written into an SRAM. The variance is fed to an adder and, simultaneously, the previous value of the variance from the SRAM, multiplied by 15, is given to the adder. These two values are added and then shifted right by 4 bits to implement the division by 16. After formatting the data, a square root function is applied to compute the gain coefficients. The gain coefficient is fed to a delay compensator and then to the divider block as the divisor, where the offset-corrected data is input as the dividend. The output of the divider is the non-uniformity corrected data.
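The shift-and-add arithmetic described above can be modeled bit-accurately in software before being coded in VHDL. The following sketch models only the offset-update branch of the data path (exponential window of 16, multiply by 15, right shift by 4) for 14-bit input data; it is an illustration of the arithmetic, not the actual RTL.

```python
def offset_update_fixed_point(y, b_prev):
    """One offset-coefficient update of the data path, Eqn. (30) with M_l = 16.

    y      : incoming 14-bit pixel value Y(n)
    b_prev : previous offset coefficient b(n-1) (integer)
    Returns the new offset coefficient b(n) = (15*b(n-1) + Y(n)) >> 4.
    """
    acc = 15 * b_prev + y      # multiply by (M_l - 1) and add the new sample
    return acc >> 4            # divide by M_l = 16 via a 4-bit right shift

# Example: the estimate converges towards a constant input value
b = 0
for _ in range(64):
    b = offset_update_fixed_point(8000, b)
print(b)   # approaches 8000, within the truncation error of the right shift
```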

In this paper, infrared processing algorithms for correcting sensor non-uniformities based upon calibration-based methods and scene-based methods are presented. The proposed algorithms are tested with simulated and actual data, and the results are compared with standard methods. The results indicate that calibration-based NUC using variation of the integration time and scene-based NUC give radiometrically accurate output and remove both additive and multiplicative sensor non-uniformities. The results of the calibration-based NUC algorithm show that the non-uniformities are reduced from 6 % to less than 0.6 % after performing the correction. The spatial noise after non-uniformity correction is compared with the temporal noise of the system; the results show that the spatial noise is reduced significantly below the temporal noise. Adaptive NUC using scene statistics uses statistical parameters of the scene for non-uniformity correction: offset and gain non-uniformities are represented by the global mean and global variance of the scene, and an exponential filter is used for faster convergence of the method. The performance of the algorithm is evaluated using the root mean square error (RMSE) as the performance parameter, and the method is compared with published methods. The results indicate that these algorithms perform equally well for 3-5 µm and 8-12 µm infrared sensors and can be easily implemented in real time.

Figure 10. Data path implementation of the scene-statistics algorithm.

1. Aviram, G. & Rotman, S.R. Evaluation of human detection performance of targets embedded in natural and enhanced infrared image by using image matrices.  Opt. Eng., 2004, 4(4), 885-896.

2. Crawford, F.I. Electro optics sensor overview. IEEE Aerospace and Electronic Systems Conference. 1997. pp. 7-14,

3. Der, S.; Chan, A.; Nasrabadi, N. & Kwon, H. Automated vehicle detection in forward-looking infrared imagery. Appl. Opt., 2004, 43(2), 333-348.

4. Norton, P.; Campbell, James & Horn, Stuart. Third generation infrared imagers. In Proceedings of SPIE, 2000, 4130, 226-235.

5. Nelson, Mark D. J.; Johnson, F.  & Lomheim, T.S. General noise process in hybrid infrared focal plane arrays. Opt. Eng., 1991, 30(11), pp. 1682-1693.

6. Mooney, J.M.; Shepherd, F.; Ewing, W.; Murguia J. & Silverman, J. Responsivity non-uniformity limited performance of infrared imaging cameras. Opt. Eng., 1989, 28(11), 1151-1161.

7. Milton, A.F.; Barone F.R. & Kruer, M.R.  Influence of non-uniformity on infrared focal plane array performance. Opt. Eng., 1985, 24(5), 855-862.

8. Eismann, M.T. & Schwartz, C.R. Focal plane array non-linearity and non-uniformity impacts to target detection with thermal infrared imaging spectrometers. In Proceedings of SPIE, 1997, 3063, 164-173.

9. Harris, John G. & Chang, Yu-Ming. Non-uniformity correction of infrared image sequences using the constant statistics constraint. IEEE Trans. Image Processing, 1999, 8(8), 1148-1151.

10. Torres, Sergio N. & Hayat, Majeed M. Kalman filtering for adaptive non-uniformity correction for infrared focal plane arrays. J. Opt. Soc. Am. A, 2003, 20(3), 470-480.

11. Ratliff, Bradley M.; Hayat, Majeed M. & Hardie, Russell C. An algebraic algorithms for non-uniformity correction in focal plane arrays. J. Opt. Soc. Am. A, 2002, 19(9), 1737-1745.

12. Schulz, M. & Caldwell, L. Non-uniformity correction and correctibility of focal plane arrays. Infrared Phys.  Technol., 1995, 36(4), 763-737.

13. Shi, Yan; Zhang, Tianxu; Cao, Zhiguo & Hui, Li. A feasible approach for non uniformity correction in IRFPA with nonlinear response. Infrared Phys.  Technol., Article in Press.

14. Perry, D.L. & Dereniak, Eustace L. Linear theory of non uniformity correction in infrared sensors. Opt. Eng., 1993, 32(8), 1854-1859.

15. Papoulis, A. Probability random variables and stochastic process. McGraw-Hill Inc, USA, 1991.

16. 320 X  256 InSb Focal Plane Array Detector user guide, M/S SCD Israel.

17. Kumar, Ajay & Negi, S.S. Design and development of a high performance 3rd generation hand held thermal camera. In  Proceeding of SPIE, 2004, 5563.

18. Friedenberg,  A. & Goldblatt, I.  Non uniformity two point linear correction errors in infrared focal plane arrays.  Opt. Eng., 1998, 3(4), 1251-1253.

19. Huitong, L.; Qi, W.; Sihai, C. & Xinjian, Y.  Analysis of the residual error after non uniformity correction for infrared focal plane array. Infrared and Millimeter Waves Conference of IEEE, 2000, pp. 213-214.

20. Torres, S.N. & Hayat, M.M.  Kalman filtering for adaptive non-uniformity correction for infrared focal plane arrays.  J. Opt. Soc. Am. A, 2003, 20(3),  470-480.

21. Ratliff, B.M., Hayat,  M.M. & Hardie, R.C,  An algebraic algorithms for non-uniformity correction in focal plane arrays. J. Opt. Soc. Am. A, 2002, 19(9), 1737-1745.

22. Schulz, M. & Caldwell, L. Non-uniformity correction and correctibility of focal plane arrays. Infrared Phys.  Technol., 1995, 36(4), 763-737.

23. Shi, Y.; Zhang, T.; Cao, Z. & Hui, L.  A feasible approach for non uniformity correction in IRFPA with nonlinear response. Infrared Phys. Technol., 2005, 46(4), 329-337.

24. Kalman, R.E. A new approach to linear filtering and prediction problems. Trans. ASME, Series D, J. Basic Eng., 1960, 82, 35-45.

25. Kalman, R.E. & Bucy, R.S. New results in linear filtering and prediction theory.  ASME, Series D, Basic Eng., 1961, 83, 95-107.

26. Scribner, D.A.; Sarkady, K.A.; Caulfield, J.T.; Kruer, M.R.; Katz, G. & Gridley, C.J. Non-uniformity correction for staring focal plane arrays using scene-based techniques. In Proceedings of SPIE, 1990, 1308, 24-33.

27. Harris, J.G. & Chang, Y.M. Non-uniformity correction of infrared image sequence using constant statistics constraint.  IEEE trans. on Image Processing, 1999, 8(8), 1148-1151.

28. Rose, J.; Gamal, A.E. & Vincentelli, A.S. Architecture of field programmable gate array. Proceeding of IEEE, 1993, 81(7), 1013-1029.

29. Trimberger, S. A Reprogrammable gate array and applications. Proceeding of IEEE, 1993, 81(3), 1030-1041.

30. Skahill, K. VHDL for programmable logic (Book), Addison Wesley, CA, USA, 1996.

31. Xilinx. The programmable logic data book. Xilinx Inc., San Jose, CA. http://www.xilinx.com (Accessed on 10 January 2013).

 


Dr Ajay Kumar obtained his MTech from IIT Kanpur and PhD from IIT Roorkee. He works at the Instruments Research and Development Establishment, Dehradun, India. He has been working in the areas of infrared system design, modeling and development; design and development of sensor and image signal processing algorithms and architectures; digital and analog circuit design; FPGA-based design and implementation; and embedded system design. He is a member of IEEE and IETE.