Set Down Study of Projectile in Flight Through Imaging

Study of projectile deformation immediately after firing is essential for a successful impact. A projectile that undergoes more than the tolerated amount of deformation in the barrel may not produce the requisite results. Studying projectile deformation before impact requires imaging the projectile in flight and performing computations on the acquired image. The deformation tolerance is often of the order of tens of micrometres, an accuracy the acquired image cannot deliver directly because of photographic limitations; sub-pixel manipulation of the captured projectile image is therefore required. In this work, the diameter of a projectile is estimated from an image blurred by slow shutter speed. The blurred image is first restored, and then various interpolation methods are used for sub-pixel measurement. Two adaptive, geometrical-texture-based interpolation schemes are also proposed in this research. The proposed methods produce very good results compared with the existing methods.

Study of projectile behaviour close to the launcher is often desirable. An accurate measurement of projectile deformation immediately after firing is necessary for a successful impact, since deformation above the tolerance may fail to produce the intended result. Traditionally, diameters at different points of the cylindrical portion of the projectile are measured precisely before firing. After firing, the projectiles are collected and measurements are taken again at the same points; the differences give the deformation.

In this paper, the authors investigate the above behaviour through in-flight image processing [1]. The deformation to be measured is of the order of micrometres. For such an investigation, the projectile image must be captured soon after it leaves the muzzle, and the image obtained is very likely to be blurred because of the projectile's speed. A very high-speed camera can freeze the motion and thereby acquire a blur-free image; however, state-of-the-art cameras available nowadays cannot resolve micrometre deformation because of sensor limitations. Micrometre deformation therefore demands sub-pixel manipulation of the projectile image. Existing interpolation methods [2] can be employed to compute missing pixel values. In addition, the authors propose two adaptive, edge-oriented interpolation schemes.

The first proposed method selects the 4×4 neighbourhood of the missing pixel and constructs 19 adaptive windows based on regular geometrical texture shapes. The next task is to find the smoothest window, determined from the variance of intensity values within each window. The smoothest block represents the edge direction, which is used for interpolating the missing pixel. An exponential weight measure is chosen so that pixels closer to the re-sampling pixel are given more weight, and pixels farther away less.

In the second method, the authors use Newton polynomials to interpolate missing pixel values. The second- and fourth-order differences of contiguous pixel gray values are computed to determine the edge orientation. An adaptive function is then inferred from the texture alignment and the polynomial.

The goal is to estimate the deformation of the projectile diameter, of the order of micrometres, through in-flight imaging. Taking the exposure time and projectile velocity into account, it can be deduced that the acquired images will be motion-blurred. Further, the projectile image demands sub-pixel manipulation. The mathematical formulation of the problem is given below:

• Shell diameter = 40 mm
• Camera resolution = 320 × 736
• Focal length = 300 mm
• Single-pixel resolution in object plane = 0.3123 mm
• Required measurable set down = 0.01 mm
• Sub-pixel accuracy factor = 0.3123/0.01 = 31.23 ≈ 32
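The accuracy-factor arithmetic above can be checked in a few lines of Python (a worked restatement of the listed values, not part of the authors' pipeline):

```python
import math

# Values from the problem formulation above (millimetres).
pixel_resolution_mm = 0.3123   # single-pixel resolution in the object plane
required_set_down_mm = 0.01    # required measurable set down

# Sub-pixel accuracy factor: how finely one pixel must be subdivided.
factor = pixel_resolution_mm / required_set_down_mm      # 31.23

# Rounded up to the next power of two for the interpolation grid.
upscale = 2 ** math.ceil(math.log2(factor))              # 32
```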

The following objectives are identified to address the above problems in estimating the deformation.

• Identification of the notch-free part of the projectile in the input image.
• Computation of blur length and point spread function (PSF) estimation.
• Deblurring of the input image with the estimated PSF.
• Sub-pixel interpolation of the deblurred image.
• Detection of the projectile contour (edge).
• Computation of the diameter.

Pixel-level edge detection limits the knowledge that can be obtained about a real-world object, since each pixel is linked to one sensor element of the camera. A high-resolution camera can access more detailed information, at the expense of higher cost. Various image processing mechanisms and statistical techniques are therefore combined to access sub-pixel information; usually moment-based methods [3] and interpolation mechanisms are employed for this purpose.

Non-adaptive interpolation methods include traditional algorithms such as nearest-neighbour, bilinear, bicubic, and sinc interpolation. Unlike traditional methods, adaptive algorithms [4-5] include gradient-direction and edge-orientation [6-7] information. The efficacy of any interpolation scheme is judged by two elements [8]: the visual quality of the scaled image and its time complexity. Most of the reported non-adaptive schemes suffer from undesirable artifacts such as blurring, aliasing, and edge halos, especially in texture regions of the image [9]. Adaptive interpolation mechanisms produce superior results; however, these advanced algorithms have high computational overhead [10].

In this section, we detail all the steps necessary for the computation of the projectile diameter. The block diagram of the overall procedure is shown in Fig. 1.

4.1 Input Image Normalisation

The ROI here refers to the cylindrical part of the projectile, and the image is cropped accordingly. The input image is normalised to the range 0 to 1 by dividing each pixel value by the maximum intensity value of the image. Let O be the original image. The normalised image g is given by:

$g = \frac{O}{\max(O)}$ (1)
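Eqn (1) in a minimal numpy sketch; the function name is illustrative:

```python
import numpy as np

def normalise(image):
    """Eqn (1): scale pixel values to [0, 1] by the global maximum."""
    image = np.asarray(image, dtype=float)
    return image / image.max()
```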

4.2 Blur Length Computation, PSF Estimation, and Motion Deblurring

The in-flight image is blurred by the motion of the projectile. The motion length ML (in pixels) is computed as,

$ML = \frac{v \times t}{l}$ (2)

Here v, t, and l denote the velocity of the projectile (m/s), the exposure time of the camera (s), and the single-pixel length in the object plane, respectively. The two-dimensional point spread function (PSF) is estimated using the procedure in [1].
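Eqn (2) and a simple horizontal motion PSF can be sketched as follows. The uniform line kernel is a common approximation assumed here for illustration; the paper estimates the 2-D PSF per the procedure in [1]:

```python
import numpy as np

def motion_length(v, t, l):
    """Eqn (2): blur length in pixels from projectile velocity v (m/s),
    exposure time t (s), and single-pixel length l in the object plane (m)."""
    return v * t / l

def linear_motion_psf(length):
    """Uniform horizontal line kernel of the given blur length,
    normalised so its taps sum to one (an assumed simplification)."""
    n = max(int(round(length)), 1)
    return np.ones((1, n)) / n
```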

One can use either the well-known iterative Lucy-Richardson algorithm [11] or the Improved Iterative Blind Image Deconvolution (IIBID) algorithm [12] for deblurring the cropped, normalised image.
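A bare-bones Richardson-Lucy iteration [11], for illustration only; tuned library implementations (e.g. scikit-image's) should be preferred in practice, and IIBID [12] is not sketched here:

```python
import numpy as np
from scipy.ndimage import convolve

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    """Richardson-Lucy deconvolution: multiplicative updates that keep
    the estimate non-negative; `eps` guards against division by zero."""
    estimate = np.full_like(observed, 0.5, dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = convolve(estimate, psf, mode="reflect")
        ratio = observed / (blurred + eps)
        estimate *= convolve(ratio, psf_mirror, mode="reflect")
    return estimate
```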

4.3 Sub-Pixel Manipulation

We have employed interpolation to compute sub-pixel measurements. Existing methods like nearest-neighbour, bilinear, bicubic, and Lanczos re-sampling are used for this purpose. In addition, we have proposed two adaptive, edge-oriented interpolation schemes, as given below.

4.3.1 Method 1

The re-sampling pixel value is influenced by the local smoothness of neighbouring pixels, especially in high-frequency regions of the image. The proposed scheme divides the neighbourhood kernel into different regions based on the shift of high frequencies in different directions. For each missing pixel location, its 4×4 neighbourhood window is considered first. This window is then divided into 19 adaptive regions, as shown in Fig. 2, where each region indicates a geometrical regularity (a possible edge orientation). Next, the variance of intensity values is computed for all 19 blocks. The block having minimum variance is chosen for re-sampling, because the smoothest block represents the along-edge direction. To weight a sampling point (xp, yp) inside the smoothest window, an exponential distance measure is chosen relative to the re-sampling point (x, y) as:

$W(p) = e^{-\frac{1}{2}\left(dx^{2}+dy^{2}\right)}$ (3)

where dx² and dy² are the squared Euclidean distances between the sampling location (xp, yp) and the re-sampling location (x, y) along the horizontal and vertical axes. The re-sampled value f(x, y) is computed as the weighted mean over the interpolation kernel:

$f(x,y) = \frac{\sum_{i=1}^{k} W_{i} f_{i}}{\sum_{i=1}^{k} W_{i}}$ (4)

Here k is the size of the smoothest window (k ≤ 16, depending on the local degree of smoothness), and fi and Wi represent the pixel value and the weight at the ith location in the smoothest window, respectively.
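The weighting and averaging of Eqns (3)-(4) can be sketched as below. Selection of the smoothest of the 19 windows (Fig. 2) is omitted, so the sample list is assumed to come from that step:

```python
import numpy as np

def exponential_weight(dx, dy):
    """Eqn (3): weight decays with squared distance from the re-sampling point."""
    return np.exp(-0.5 * (dx ** 2 + dy ** 2))

def resample(samples):
    """Eqn (4): weighted mean over the k pixels of the smoothest window.
    `samples` holds (dx, dy, value) triples for that window."""
    weights = np.array([exponential_weight(dx, dy) for dx, dy, _ in samples])
    values = np.array([v for _, _, v in samples])
    return float(np.sum(weights * values) / np.sum(weights))
```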

4.3.2 Method 2

In this approach, second- and fourth-order Newton polynomials are used to assign missing pixel values. The third-order polynomial is asymmetric and consequently is not used for interpolation. Two-dimensional image interpolation operates in two perpendicular directions: horizontal re-sampling followed by vertical re-sampling, or vice versa. The second- and fourth-order Newton polynomials select 3 and 5 adjacent pixels in one direction, respectively.

Given an equally spaced sampled function f(x) = fi, i = 0, 1, 2, ..., the second- and fourth-order polynomials can be represented by

$P_{2}(x) = f_{0} + \Delta f_{0}t + \Delta^{2}f_{0}\frac{t(t-1)}{2}$ (5)

$P_{4}(x) = f_{0} + \Delta f_{0}t + \Delta^{2}f_{0}\frac{t(t-1)}{2} + \Delta^{3}f_{0}\frac{t(t-1)(t-2)}{6} + \Delta^{4}f_{0}\frac{t(t-1)(t-2)(t-3)}{24}$ (6)

Our proposed scheme first decides the texture alignment and accordingly selects neighbourhood pixels for interpolation. Each re-sampling pixel selects its closest 6×6 neighbourhood of known pixels. The relationship between the unknown pixel and the adjacent known pixels in one direction is shown in Fig. 3.

The absolute values of the second- and fourth-order differences among adjacent pixels are computed to determine the edge orientation. In Fig. 3, the edge direction is decided by the smallest of the four measures,

$\begin{array}{l}|\Delta^{2}f_{0}| = |f_{2}-2f_{1}+f_{0}|\\ |\Delta^{2}f_{1}| = |f_{3}-2f_{2}+f_{1}|\\ |\Delta^{4}f_{0}| = |f_{4}-4f_{3}+6f_{2}-4f_{1}+f_{0}|\\ |\Delta^{4}f_{1}| = |f_{5}-4f_{4}+6f_{3}-4f_{2}+f_{1}|\\ \Delta f_{\min} = \min\left\{|\Delta^{2}f_{0}|,\; |\Delta^{2}f_{1}|,\; |\Delta^{4}f_{0}|,\; |\Delta^{4}f_{1}|\right\}\end{array}$
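A one-direction sketch of this selection rule combined with the Newton polynomials of Eqns (5) and (6); `t` is assumed to be the fractional offset of the re-sampling point from f0 (Fig. 3), and the full method applies this along rows and then columns:

```python
import numpy as np

def newton_resample(f, t=0.5):
    """Pick the edge-aligned Newton polynomial for one direction.
    `f` holds six adjacent known samples f0..f5 (minimal sketch)."""
    f = np.asarray(f, dtype=float)
    d2_0 = abs(f[2] - 2*f[1] + f[0])
    d2_1 = abs(f[3] - 2*f[2] + f[1])
    d4_0 = abs(f[4] - 4*f[3] + 6*f[2] - 4*f[1] + f[0])
    d4_1 = abs(f[5] - 4*f[4] + 6*f[3] - 4*f[2] + f[1])

    def p2(g):   # second-order Newton polynomial, Eqn (5)
        return g[0] + (g[1]-g[0])*t + (g[2]-2*g[1]+g[0])*t*(t-1)/2

    def p4(g):   # fourth-order Newton polynomial, Eqn (6)
        return (p2(g)
                + (g[3]-3*g[2]+3*g[1]-g[0])*t*(t-1)*(t-2)/6
                + (g[4]-4*g[3]+6*g[2]-4*g[1]+g[0])*t*(t-1)*(t-2)*(t-3)/24)

    # Smallest difference measure decides which polynomial and which
    # starting sample to use.
    choice = min((d2_0, lambda: p2(f[0:3])), (d2_1, lambda: p2(f[1:4])),
                 (d4_0, lambda: p4(f[0:5])), (d4_1, lambda: p4(f[1:6])),
                 key=lambda c: c[0])
    return choice[1]()
```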

Thus, the adaptive function used to calculate the intensity of the re-sampling pixel (*) is given by,

$f(*) = \begin{cases} f_{0}+\Delta f_{0}t+\Delta^{2}f_{0}\frac{t(t-1)}{2}, & \text{if } \Delta f_{\min} = |\Delta^{2}f_{0}| \\ f_{1}+\Delta f_{1}t+\Delta^{2}f_{1}\frac{t(t-1)}{2}, & \text{if } \Delta f_{\min} = |\Delta^{2}f_{1}| \\ f_{0}+\Delta f_{0}t+\Delta^{2}f_{0}\frac{t(t-1)}{2}+\Delta^{3}f_{0}\frac{t(t-1)(t-2)}{6}+\Delta^{4}f_{0}\frac{t(t-1)(t-2)(t-3)}{24}, & \text{if } \Delta f_{\min} = |\Delta^{4}f_{0}| \\ f_{1}+\Delta f_{1}t+\Delta^{2}f_{1}\frac{t(t-1)}{2}+\Delta^{3}f_{1}\frac{t(t-1)(t-2)}{6}+\Delta^{4}f_{1}\frac{t(t-1)(t-2)(t-3)}{24}, & \text{if } \Delta f_{\min} = |\Delta^{4}f_{1}| \end{cases}$ (7)

4.4 Foreground Segmentation

This process converts the cropped, interpolated gray-scale image into a binary image. The authors have used Otsu's optimal global thresholding method [1] for binarisation. Pixels having value 0 and 1 represent the background and foreground regions, respectively.
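The Otsu binarisation of Section 4.4 can be written in plain numpy for illustration; library routines such as scikit-image's `threshold_otsu` do the same job. Pixel intensities are assumed to lie in [0, 1]:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Otsu's method [1]: pick the histogram cut that maximises the
    between-class variance of background and foreground."""
    hist, edges = np.histogram(image, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)               # background probability up to each bin
    mu = np.cumsum(p * centers)     # cumulative mean
    mu_t = mu[-1]                   # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    between[~np.isfinite(between)] = 0.0
    # Upper edge of the best bin: pixels above it are foreground.
    return edges[np.argmax(between) + 1]

def binarise(image):
    """Foreground pixels become 1, background pixels 0."""
    return (image > otsu_threshold(image)).astype(np.uint8)
```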

4.5 Vertical Scanning

Here, each vertical line of the binary image is scanned twice: from the mid-point to the bottom and from the mid-point to the top (the mid-point being the median point of each vertical line), until the first foreground edge point is found in each case. The difference between the two edge points is stored in an array named boundary.
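A minimal sketch of the scan, assuming the projectile silhouette covers the mid-row of every column of the cropped binary image:

```python
import numpy as np

def vertical_scan(binary):
    """From the mid-point of each column, walk towards the top and the
    bottom until the foreground run ends; the run length is the
    per-column diameter in pixels (stored in `boundary`)."""
    rows, cols = binary.shape
    mid = rows // 2
    boundary = []
    for c in range(cols):
        col = binary[:, c]
        top = mid
        while top > 0 and col[top - 1]:          # walk to the top edge
            top -= 1
        bottom = mid
        while bottom < rows - 1 and col[bottom + 1]:  # to the bottom edge
            bottom += 1
        boundary.append(bottom - top + 1)
    return np.array(boundary)
```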

4.6 Diameter Estimation

The boundary array contains the per-column diameter values. The arithmetic mean of these values is taken as the diameter of the corresponding part of the projectile. However, this estimate is the diameter in the image plane, dimage. The required diameter in the object plane, dobject, is given by:

$d_{object} = d_{image} \times \frac{l}{resize\; factor}$ (8)
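Eqn (8) as a one-line helper; the default values restate the problem formulation (l = 0.3123 mm per original pixel, a 32× sub-pixel grid), so a 40 mm shell spanning (40/0.3123) × 32 interpolated pixels maps back to 40 mm:

```python
def object_diameter(d_image_px, l_mm=0.3123, resize_factor=32):
    """Eqn (8): convert the mean image-plane diameter, measured in
    interpolated (sub-)pixels, to millimetres in the object plane."""
    return d_image_px * l_mm / resize_factor
```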

The silhouette of the projectile supplied by PXE, along with its different cylindrical parts, is shown in Fig. 4. The diameter range at different parts of the projectile, as given by PXE, is listed in Table 1. For sub-pixel manipulation, existing interpolation algorithms (nearest-neighbour, bilinear, bicubic, Lanczos2, and Lanczos3) and the authors' two methods have been employed. The interpolation results for different portions of the projectile image are shown in Tables 2 to 6.

In this work, the deformation of a projectile in flight is studied through imaging. The projectile image is captured by a high-speed acquisition device near the muzzle, immediately after firing. The image thus acquired suffers from motion blur. Considering the muzzle velocity and the shutter speed, the blur length is computed and the point spread function is determined to approximate a deblurred image. Various segments of the deblurred projectile image are extracted to estimate the diameter. The proposed interpolation methods, along with some existing standard interpolation methods, are applied to the extracted segments and the diameter is estimated. The proposed schemes are found to have better accuracy than their counterparts.

1. Gonzalez, Rafael C.; Woods, Richard E. & Eddins, Steven L. Digital image processing using MATLAB. Pearson Education India, 2004.

2. Thévenaz, Philippe; Blu, Thierry & Unser, Michael. Image interpolation and resampling. 2000, 393-420.

3. Shan, Y. & Goh, Wooi Boon. Sub-pixel location of edges with non-uniform blurring: a finite closed-form approach. Image Vision Comput., 2000, 18(13), 1015-1023.

4. Arandiga, F.; Donat, R. & Mulet, P. Adaptive interpolation of images. Signal Process., 2003, 83(2), 459-464.

5. Wong, Chi-Shing & Siu, Wan-Chi. Adaptive directional window selection for edge-directed interpolation. In 19th International Conference on Computer Communications and Networks (ICCCN), IEEE, 2010.

6. Shezaf, Nira; Abramov-Segal, H.; Sutskover, I. & Bar-Sella, R. Adaptive low complexity algorithm for image zooming at fractional scaling ratio. In 21st IEEE Convention of the Electrical and Electronic Engineers in Israel, IEEE, 2000.

7. Xiao, Jianping; Zou, Xuecheng; Liu, Zhenglin & Guo, Xu. Adaptive interpolation algorithm for real-time image resizing. In First International Conference on Innovative Computing, Information and Control (ICICIC'06), IEEE, 2, 2006.

8. Zhang, Xiangjun & Wu, Xiaolin. Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation. IEEE Trans. Image Process., 2008, 17(6), 887-896.

9. Luong, Hiêp; Ledda, Alessandro & Philips, Wilfried. An image interpolation scheme for repetitive structures. In Image Analysis and Recognition. Springer Berlin Heidelberg, 2006, 104-115.

10. Sajjad, Muhammad; Ejaz, Naveed & Baik, Sung Wook. Multi-kernel based adaptive interpolation for image super-resolution. Multimedia Tools Appl., 2012, 72(3), 1-23.

11. Richardson, William Hadley. Bayesian-based iterative method of image restoration. JOSA, 1972, 62(1), 55-59.

12. Sa, Pankaj Kumar; Dash, Ratnakar; Majhi, Banshidhar & Panda, Ganapati. Improved iterative blind image deconvolution. In Advances in Numerical Methods. Springer US, 2009, 271-278.

Mr Suman Kumar Choudhury obtained his BTech (CSE) from BPUT, Rourkela, in 2010 and MTech (CSE) from NIT, Rourkela, in 2013. He is currently pursuing his doctoral research at NIT, Rourkela, in the area of video surveillance. His areas of interest include image processing, computer vision, and pattern recognition.

Dr Pankaj Kumar Sa received his MTech (CSE) and PhD (Image Processing) from NIT, Rourkela. He is working as an Assistant Professor in the Department of Computer Science and Engineering, NIT Rourkela. He has 8 years of teaching and research experience. His research interests also include computer vision and computer graphics.

Mr Tapan Kumar Biswal is currently working as a scientist at the Proof & Experimental Establishment, Chandipur. He has 30 years of experience in the domain of high-speed imaging and photonics. His research interests include high-speed imaging and sensors.

Mr Banshidhar Majhi is working as a Professor in the Department of Computer Science and Engineering, NIT, Rourkela. He has 24 years of teaching and research experience. His research interests include image processing, computer vision, and iris biometrics.