An Aircraft Ranging Algorithm Based on Two Frames of Image in Monocular Image Sequence

We propose a novel rotation-invariant-feature-based passive ranging algorithm to estimate the distance from an imaged non-cooperative target to the camera. The improved algorithm avoids the physically unreasonable results, such as complex or negative values, that occasionally arise when solving the existing quartic equation. The method uses three matched points in two adjacent frames of an image sequence to extract a depth-dependent line feature of the target. Combining this line feature with the observer's displacement and imaging directions, a quadratic equation is built to estimate the distance. Analysis shows that the proposed passive ranging equation is solvable whenever the observer has non-zero displacement between adjacent sampling instants. Our reduced-model experiment also demonstrates that the proposed algorithm is not only simple and feasible but also achieves a relative ranging error of no more than 4 per cent in most cases.

Keywords:    Invariants, depth cues, feature representation, passive ranging, imaging geometry

Although stereo vision can recover the depth information of an imaged object[1], the motion-stereo depth measurement[2] scheme is more attractive for its simplicity of construction. From industrial robots[3] to air-launched weapons[4], motion stereo has been widely used. It is characterized by a monocular imaging system mounted on a maneuvering platform.

In motion stereo, feature extraction and feature matching are problems that must be solved[5]. The features include points[3,6,7,8,9], lines or edges[5,10,11], and regions[12] or colours[13]. Among these three kinds of features, regions and colours are applied less often than the former two, while lines and edges have found the most applications. This is because lines are easier to extract from contour images, and their characterization by means of polygonal approximation is more reliable than point features in the presence of noise[11]. In general, edges come from edge-based methods[5], while straight lines may come from either edge-based methods or two unique feature points.

In the past few decades, linear features, which include straight lines and edges, have been used in image distortion correction[5], 3-D image matching[14], target tracking[7,11], range finding[8,15], vision-based navigation guidance[16,17], robot simultaneous localization and mapping (SLAM)[9,18], pose estimation[19,20], and so on. Among these applications, passive ranging based on motion stereo is particularly important[5,8,21]. The feature lines are sometimes wire poles or street trees in the background, but in most cases they lie on the target itself[5,8,21]. In this study, a new method is proposed for non-cooperative aircraft ranging.

2.1  Existing Ranging Model

In this study, the authors take the geographic coordinate system o-xyz as the host coordinates, in which north, west, and up are assigned as the positive directions of the x, y, and z axes, respectively. This is a reasonable choice, since the state of the measurement platform on which the camera is fixed can be obtained from other detection systems, such as GPS and other onboard sensors; this information includes the azimuth, pitch, radial distance to point o, velocity, acceleration, and attitude.

The platform itself also defines a coordinate system O-XYZ, i.e. the platform coordinates. If we take an aircraft as the platform, then the nose, the right wing, and the top of the engine room are the positive directions of the Y, X, and Z axes, respectively. The airborne measurement platform in the geographic coordinate system is shown in Fig. 1.

Figure 1. The airborne measurement-platform in geography coordinate.

At the n-th sampling time, the point O in the o-xyz coordinates is O(xn, yn, zn). Suppose the moving target is expressed in the measurement-platform coordinates as (rn, αn, βn) in spherical form. Here, αn and βn are the azimuth and pitch from the camera to the target, and the sightline from the camera to the target in the geographic coordinates can be expressed as the direction vector (ln, mn, nn) as below:

$\left(\begin{array}{l}{l}_{n}\\ {m}_{n}\\ {n}_{n}\end{array}\right)=\left(\begin{array}{ccc}{t}_{11}^{n}& {t}_{12}^{n}& {t}_{13}^{n}\\ {t}_{21}^{n}& {t}_{22}^{n}& {t}_{23}^{n}\\ {t}_{31}^{n}& {t}_{32}^{n}& {t}_{33}^{n}\end{array}\right)\left(\begin{array}{l}\mathrm{cos}{\alpha }_{n}\mathrm{cos}{\beta }_{n}\\ \mathrm{sin}{\alpha }_{n}\mathrm{cos}{\beta }_{n}\\ \mathrm{sin}{\beta }_{n}\end{array}\right)$            (1)

Here, $\left(\begin{array}{ccc}{t}_{11}^{n}& {t}_{12}^{n}& {t}_{13}^{n}\\ {t}_{21}^{n}& {t}_{22}^{n}& {t}_{23}^{n}\\ {t}_{31}^{n}& {t}_{32}^{n}& {t}_{33}^{n}\end{array}\right)$ is the transpose of the matrix of direction vectors of the X, Y, and Z axes in the o-xyz coordinates.
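As an illustration, the transform of Eqn. (1) can be sketched in code (a minimal sketch, not the authors' implementation; the attitude matrix `T` is assumed to be supplied by the platform's navigation sensors):

```python
import numpy as np

def sightline_direction(alpha, beta, T):
    """Eqn. (1): map the camera azimuth alpha and pitch beta (radians)
    to the sightline direction (l, m, n) in geography coordinates.
    T is the 3x3 attitude matrix of Eqn. (1)."""
    v = np.array([np.cos(alpha) * np.cos(beta),
                  np.sin(alpha) * np.cos(beta),
                  np.sin(beta)])
    return T @ v

# With the identity attitude the platform axes coincide with o-xyz:
l, m, n = sightline_direction(0.0, 0.0, np.eye(3))   # -> (1, 0, 0)
```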

Suppose there exists a one-dimensional scale x0 on the target that is invariant to the rotation of the camera over two adjacent sampling times. We call the scale's projection onto the camera focal plane the target's characteristic linearity. Under normal conditions both the target and the measurement platform are moving. Figure 2 illustrates the recursive ranging model based on the characteristic linearity.

In Fig. 2, T and S are the target and the surveyor (camera), respectively. The subscript n or (n+1) represents the sampling time. Therefore, TnTn+1 and SnSn+1 are the moving traces of the target and the platform between the n-th and (n+1)-th sampling times, while φn and φn+1 are the angles between the target's trace and the camera's sightline at each sampling time.

Figure 2. The ranging model based on characteristic linearity.

Assume the focal lengths of the camera's optical system are fn and fn+1 at the n-th and (n+1)-th sampling times, and the lengths of the characteristic linearity in the camera focal plane are Ln and Ln+1. Obviously, Ln and Ln+1 are a kind of depth-dependent line feature. According to the imaging geometry, the equation below can be obtained:

$\frac{{r}_{n+1}}{{r}_{n}}=\frac{{f}_{n+1}}{{f}_{n}}\frac{{L}_{n}}{{L}_{n+1}}\frac{\mathrm{sin}{\phi }_{n+1}}{\mathrm{sin}{\phi }_{n}}$             (2)

2.2   Existing Algorithm

Based on Eqn. (2), the following recursive passive ranging equation is derived[15]:
${C}_{4}{r}_{n+1}^{4}+{C}_{3}{r}_{n+1}^{3}+{C}_{2}{r}_{n+1}^{2}+{C}_{1}{r}_{n+1}+{C}_{0}=0$            (3)

where
${C}_{4}=H\left[1-{\left({l}_{n+1}{l}_{n}+{m}_{n+1}{m}_{n}+{n}_{n+1}{n}_{n}\right)}^{2}\right]$            (4)

$\begin{array}{c}{C}_{3}=2H\left\{{l}_{n+1}\left({x}_{n+1}-{x}_{n}\right)+{m}_{n+1}\left({y}_{n+1}-{y}_{n}\right)+\\ {n}_{n+1}\left({z}_{n+1}-{z}_{n}\right)-\left({l}_{n+1}{l}_{n}+{m}_{n+1}{m}_{n}+{n}_{n+1}{n}_{n}\right)\\ \left[{l}_{n}\left({x}_{n+1}-{x}_{n}\right)+{m}_{n}\left({y}_{n+1}-{y}_{n}\right)+{n}_{n}\left({z}_{n+1}-{z}_{n}\right)\right]\right\}\end{array}$            (5)

${C}_{2}=H\left\{{\left[{l}_{n}\left({x}_{n+1}-{x}_{n}\right)+{m}_{n}\left({y}_{n+1}-{y}_{n}\right)+{n}_{n}\left({z}_{n+1}-{z}_{n}\right)\right]}^{2}+{\left({x}_{n+1}-{x}_{n}\right)}^{2}+{\left({y}_{n+1}-{y}_{n}\right)}^{2}+{\left({z}_{n+1}-{z}_{n}\right)}^{2}\right\}$            (6)

${C}_{1}=0$            (7)

${C}_{0}={k}_{2}{r}_{n}^{2}+{k}_{1}{r}_{n}+{k}_{0}$            (8)

$H={\left(\frac{{f}_{n}}{{f}_{n+1}}\right)}^{2}{\left(\frac{{L}_{n+1}}{{L}_{n}}\right)}^{2}\frac{1}{{r}_{n}^{2}}$            (9)

and,
${k}_{2}={\left({l}_{n+1}{l}_{n}+{m}_{n+1}{m}_{n}+{n}_{n+1}{n}_{n}\right)}^{2}-1$            (10)

$\begin{array}{c}{k}_{1}=2\left\{{l}_{n}\left({x}_{n+1}-{x}_{n}\right)+{m}_{n}\left({y}_{n+1}-{y}_{n}\right)+{n}_{n}\left({z}_{n+1}-{z}_{n}\right)-\\ \left({l}_{n+1}{l}_{n}+{m}_{n+1}{m}_{n}+{n}_{n+1}{n}_{n}\right)\left[{l}_{n+1}\left({x}_{n+1}-{x}_{n}\right)+\\ {m}_{n+1}\left({y}_{n+1}-{y}_{n}\right)+{n}_{n+1}\left({z}_{n+1}-{z}_{n}\right)\right]\right\}\end{array}$            (11)

${k}_{0}={\left[{l}_{n+1}\left({x}_{n+1}-{x}_{n}\right)+{m}_{n+1}\left({y}_{n+1}-{y}_{n}\right)+{n}_{n+1}\left({z}_{n+1}-{z}_{n}\right)\right]}^{2}-{\left({x}_{n+1}-{x}_{n}\right)}^{2}-{\left({y}_{n+1}-{y}_{n}\right)}^{2}-{\left({z}_{n+1}-{z}_{n}\right)}^{2}$             (12)

Using the distance-estimating Eqn. (3), each n-th distance from the target to the camera, rn, can be obtained recursively if the initial distance r0 is known, e.g. from radar or lidar.
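For illustration, the coefficients of Eqns. (4)-(12) and the selection of a physically meaningful root of Eqn. (3) can be sketched as below (a hedged sketch, not the authors' code; `quartic_coeffs` and `solve_range` are hypothetical names, and C2 is written in the form consistent with Eqn. (28)):

```python
import numpy as np

def quartic_coeffs(r_n, u_n, u_n1, s_n, s_n1, f_n, f_n1, L_n, L_n1):
    """Coefficients C4..C0 of ranging Eqn. (3); u_* are unit sightline
    vectors (l, m, n), s_* are camera positions O(x, y, z)."""
    d = s_n1 - s_n                      # observer displacement
    a = u_n1 @ u_n                      # l_{n+1}l_n + m_{n+1}m_n + n_{n+1}n_n
    H = (f_n / f_n1) ** 2 * (L_n1 / L_n) ** 2 / r_n ** 2   # Eqn. (9)
    C4 = H * (1 - a ** 2)                                   # Eqn. (4)
    C3 = 2 * H * (u_n1 @ d - a * (u_n @ d))                 # Eqn. (5)
    C2 = H * ((u_n @ d) ** 2 + d @ d)                       # Eqn. (6)
    C1 = 0.0                                                # Eqn. (7)
    k2 = a ** 2 - 1                                         # Eqn. (10)
    k1 = 2 * ((u_n @ d) - a * (u_n1 @ d))                   # Eqn. (11)
    k0 = (u_n1 @ d) ** 2 - d @ d                            # Eqn. (12)
    C0 = k2 * r_n ** 2 + k1 * r_n + k0                      # Eqn. (8)
    return [C4, C3, C2, C1, C0]

def solve_range(coeffs):
    """Positive real roots of the quartic: candidate values of r_{n+1}."""
    roots = np.roots(coeffs)
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
```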

Nevertheless, passive ranging without an initial distance is more attractive. If a certain length x0 on the target is known, the distance difference between two sampling times can be obtained as below:

$\Delta ={r}_{n+\text{1}}-{r}_{n}=f{x}_{\text{0}}\frac{{L}_{n}-{L}_{n+\text{1}}}{{L}_{n+\text{1}}\cdot {L}_{n}}$            (13)
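Eqn. (13) is a one-line computation; a direct transcription (the function name is illustrative only):

```python
def distance_difference(f, x0, L_n, L_n1):
    """Eqn. (13): range change between samplings from a known target
    length x0 and its focal-plane projections L_n, L_{n+1}."""
    return f * x0 * (L_n - L_n1) / (L_n1 * L_n)
```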

Substituting Eqn. (13) into Eqn. (3), we obtain the nonlinear Eqn. (14) below[15]:

${D}_{4}{r}_{n}^{4}+{D}_{3}{r}_{n}^{3}+{D}_{2}{r}_{n}^{2}+{D}_{1}{r}_{n}+{D}_{0}=0$            (14)

where

${D}_{4}={C}_{4}$            (15)

${D}_{3}=4{C}_{4}\Delta +{C}_{3}$            (16)

${D}_{2}=6{C}_{4}{\Delta }^{2}+3{C}_{3}\Delta +{C}_{2}+{k}_{2}$           (17)

${D}_{1}=4{C}_{4}{\Delta }^{3}+3{C}_{3}{\Delta }^{2}+2{C}_{2}\Delta +{C}_{1}+{k}_{1}$           (18)

${D}_{0}={C}_{4}{\Delta }^{4}+{C}_{3}{\Delta }^{3}+{C}_{2}{\Delta }^{2}+{C}_{1}\Delta +{k}_{0}$            (19)

Equation (14) is essentially a fourth-order (quartic) nonlinear equation in rn. By solving Eqn. (14), rn can be obtained, and then rn+1 = rn + Δ. Thus a passive range-finding scheme without an initial distance has been achieved.
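The substitution of Eqns. (15)-(19) amounts to expanding Eqn. (3) with r_{n+1} = r_n + Δ; a sketch (`shifted_coeffs` is a hypothetical name), which can be checked by evaluating both polynomials at the same r_n:

```python
def shifted_coeffs(C4, C3, C2, C1, k2, k1, k0, delta):
    """D4..D0 of Eqn. (14): substitute r_{n+1} = r_n + delta into Eqn. (3),
    whose constant term is C0 = k2*r_n**2 + k1*r_n + k0."""
    D4 = C4                                                       # Eqn. (15)
    D3 = 4 * C4 * delta + C3                                      # Eqn. (16)
    D2 = 6 * C4 * delta ** 2 + 3 * C3 * delta + C2 + k2           # Eqn. (17)
    D1 = (4 * C4 * delta ** 3 + 3 * C3 * delta ** 2
          + 2 * C2 * delta + C1 + k1)                             # Eqn. (18)
    D0 = (C4 * delta ** 4 + C3 * delta ** 3 + C2 * delta ** 2
          + C1 * delta + k0)                                      # Eqn. (19)
    return [D4, D3, D2, D1, D0]
```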

Compared with Eqn. (3), the initial distance r0 is no longer needed in Eqn. (14), which is particularly convenient in practical applications. According to Fu[15], et al., the error of the reduced-model experiment using Eqn. (14) is close to 4 %. However, because the variable Δ requires a known length x0 on the target, Eqn. (14) is not valid for passive ranging of a non-cooperative target. In addition, pathological solutions to the quartic Eqn. (14) sometimes occur. These problems need to be overcome.

A point worth noting is that the ranging algorithm needs only two frames of image in Eqn. (3) or Eqn. (14); this merit helps prevent ranging-error diffusion. As noted, Eqn. (3) and Eqn. (14) use not only the image information but also the imaging directions, so there is some redundant information. This redundancy allows the associated algorithm to be improved.

Multiplying both sides of Eqn. (3) by rn2, we get Eqn. (20) below:

${C}_{4}{r}_{n}^{2}{r}_{n+1}^{4}+{C}_{3}{r}_{n}^{2}{r}_{n+1}^{3}+{C}_{2}{r}_{n}^{2}{r}_{n+1}^{2}+{C}_{1}{r}_{n}^{2}{r}_{n+1}+{C}_{0}{r}_{n}^{2}=0$            (20)

Let
$\left\{\begin{array}{l}{C}_{40}={C}_{4}{r}_{n}^{2}\\ {C}_{30}={C}_{3}{r}_{n}^{2}\\ {C}_{20}={C}_{2}{r}_{n}^{2}\\ {C}_{10}={C}_{1}{r}_{n}^{2}=0\text{ }\left(\because {C}_{1}=0\right)\\ {C}_{00}={C}_{0}{r}_{n}^{2}={k}_{2}{r}_{n}^{4}+{k}_{1}{r}_{n}^{3}+{k}_{0}{r}_{n}^{2}\end{array}$            (21)

Then Eqn. (20) becomes the equation below:
${C}_{40}{r}_{n+1}^{4}+{C}_{30}{r}_{n+1}^{3}+{C}_{20}{r}_{n+1}^{2}+{k}_{2}{r}_{n}^{4}+{k}_{1}{r}_{n}^{3}+{k}_{0}{r}_{n}^{2}=0$            (22)

Substituting the distance ratio between adjacent sampling times, ρ=rn/rn+1, into Eqn. (22) gives Eqn. (23) below:
$\left({C}_{40}+{k}_{2}{\rho }^{4}\right){r}_{n+1}^{4}+\left({C}_{30}+{k}_{1}{\rho }^{3}\right){r}_{n+1}^{3}+\left({C}_{20}+{k}_{0}{\rho }^{2}\right){r}_{n+1}^{2}=0$            (23)

After reduction of Eqn. (23), we get Eqn. (24):
${A}_{2}{r}_{n+1}^{4}+{A}_{1}{r}_{n+1}^{3}+{A}_{0}{r}_{n+1}^{2}=0$            (24)

Since the target’s distance rn+1≠0, Eqn. (24) is re-written as Eqn. (25):
${A}_{2}{r}_{n+1}^{2}+{A}_{1}{r}_{n+1}^{}+{A}_{0}=0$           (25)

From Eqn. (3) through Eqn. (12) and Eqn. (21), the coefficients of ranging Eqn. are determined as below
${A}_{2}=\left({\rho }^{4}-{H}^{\prime }\right)\left[{\left({l}_{n+1}{l}_{n}+{m}_{n+1}{m}_{n}+{n}_{n+1}{n}_{n}\right)}^{2}-1\right]$           (26)

$\begin{array}{c}{A}_{1}=2{H}^{\prime }\left\{{l}_{n+1}\left({x}_{n+1}-{x}_{n}\right)+{m}_{n+1}\left({y}_{n+1}-{y}_{n}\right)+\\ {n}_{n+1}\left({z}_{n+1}-{z}_{n}\right)-\left({l}_{n+1}{l}_{n}+{m}_{n+1}{m}_{n}+{n}_{n+1}{n}_{n}\right)\\ \left[{l}_{n}\left({x}_{n+1}-{x}_{n}\right)+{m}_{n}\left({y}_{n+1}-{y}_{n}\right)+{n}_{n}\left({z}_{n+1}-{z}_{n}\right)\right]\right\}+\\ 2{\rho }^{3}\left\{{l}_{n}\left({x}_{n+1}-{x}_{n}\right)+{m}_{n}\left({y}_{n+1}-{y}_{n}\right)+{n}_{n}\left({z}_{n+1}-{z}_{n}\right)-\\ \left({l}_{n+1}{l}_{n}+{m}_{n+1}{m}_{n}+{n}_{n+1}{n}_{n}\right)\cdot \left[{l}_{n+1}\left({x}_{n+1}-{x}_{n}\right)+\\ {m}_{n+1}\left({y}_{n+1}-{y}_{n}\right)+{n}_{n+1}\cdot \left({z}_{n+1}-{z}_{n}\right)\right]\right\}\end{array}$           (27)

${A}_{0}={H}^{\prime }\left\{{\left[{l}_{n}\left({x}_{n+1}-{x}_{n}\right)+{m}_{n}\left({y}_{n+1}-{y}_{n}\right)+{n}_{n}\left({z}_{n+1}-{z}_{n}\right)\right]}^{2}+{\left({x}_{n+1}-{x}_{n}\right)}^{2}+{\left({y}_{n+1}-{y}_{n}\right)}^{2}+{\left({z}_{n+1}-{z}_{n}\right)}^{2}\right\}+{\rho }^{2}\left\{{\left[{l}_{n+1}\left({x}_{n+1}-{x}_{n}\right)+{m}_{n+1}\left({y}_{n+1}-{y}_{n}\right)+{n}_{n+1}\left({z}_{n+1}-{z}_{n}\right)\right]}^{2}-{\left({x}_{n+1}-{x}_{n}\right)}^{2}-{\left({y}_{n+1}-{y}_{n}\right)}^{2}-{\left({z}_{n+1}-{z}_{n}\right)}^{2}\right\}$            (28)

${H}^{\prime }=H{r}_{n}^{2}={\left(\frac{{f}_{n}}{{f}_{n+1}}\right)}^{2}{\left(\frac{{L}_{n+1}}{{L}_{n}}\right)}^{2}$           (29)

In Eqn. (26) through Eqn. (28), ρ=rn/rn+1=Ln+1/Ln, which is an approximation of Eqn. (2) for a small sampling interval. The detailed process of obtaining Ln and Ln+1 is demonstrated below.

Let points A, B, C and A', B', C' be the matched points in two adjacent frames, respectively. We determine the line-segment feature from these points, taking A, B, and C as an example, as shown in Fig. 3. A circumcircle of points A, B, and C, with centre O', can be determined; we take MN, the diameter of the circumcircle of triangle ABC, as our characteristic linearity. With this improvement, the characteristic linearity can be obtained more easily than in Fu[15], et al.

Figure 3. Feature points A, B, C and the selecting line segment features.
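Assuming the three matched points are given as 2-D image coordinates, the diameter MN of Fig. 3 follows from the classical circumradius relation R = abc/4K (a sketch; `circumdiameter` is an illustrative name):

```python
import math

def circumdiameter(A, B, C):
    """Diameter MN of the circumcircle of triangle ABC (2-D image points),
    used as the characteristic linearity L."""
    a = math.dist(B, C)
    b = math.dist(A, C)
    c = math.dist(A, B)
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    return a * b * c / (2 * area)                        # 2R = abc / (2K)

# For a right triangle the hypotenuse is the diameter:
circumdiameter((0, 0), (3, 0), (0, 4))   # -> 5.0
```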

This method needs only three matched points for extracting the line feature, and three points is the minimum number for image matching. To obtain at least three matched points, an effective approach is to adjust the image contrast within a certain range before matching. Equations (25), (14), and (3) are homologous equations, but the first has the lowest order, so its ranging error should be no greater than the latter's.

As for the distance-estimation Eqn. (25), it always has a solution provided the observer has non-zero displacement between adjacent sampling times; i.e. the target's distance can always be estimated. This can be ensured by continuous observer maneuver. Even if the distance difference δ is zero, the discriminant of the quadratic equation, A12 − 4A2A0, is still greater than zero, so Eqn. (25) can be solved.
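Putting Eqns. (26)-(29) together, the proposed ranging step can be sketched as below (a minimal illustration under the stated assumptions; function and variable names are ours, and the non-negative-discriminant guard mirrors the solvability argument above):

```python
import numpy as np

def quadratic_range(u_n, u_n1, s_n, s_n1, f_n, f_n1, L_n, L_n1):
    """Solve the proposed quadratic ranging Eqn. (25) for r_{n+1}.
    u_* are unit sightline vectors, s_* are camera positions; the ratio
    rho = L_{n+1}/L_n approximates r_n/r_{n+1} (Eqn. (2), small interval)."""
    d = s_n1 - s_n                                  # observer displacement
    a = u_n1 @ u_n
    Hp = (f_n / f_n1) ** 2 * (L_n1 / L_n) ** 2      # H' of Eqn. (29)
    rho = L_n1 / L_n
    p, q = u_n @ d, u_n1 @ d
    A2 = (rho ** 4 - Hp) * (a ** 2 - 1)                        # Eqn. (26)
    A1 = 2 * Hp * (q - a * p) + 2 * rho ** 3 * (p - a * q)     # Eqn. (27)
    A0 = Hp * (p ** 2 + d @ d) + rho ** 2 * (q ** 2 - d @ d)   # Eqn. (28)
    disc = A1 ** 2 - 4 * A2 * A0
    if disc < 0 or A2 == 0:             # no admissible quadratic root
        return []
    roots = [(-A1 + s * np.sqrt(disc)) / (2 * A2) for s in (1.0, -1.0)]
    return [r for r in roots if r > 0]  # keep only physical distances
```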

So far, our method needs no prior knowledge about the target, so it is well suited to non-cooperative target ranging. Figure 4 is the flow chart of our method in application. Our experiments found that the step of 'contrast adjustment in the target and its adjacent region' is indeed necessary for image matching and characteristic-linearity extraction. Building a quadratic equation is our main improvement.

Figure 4. Flow chart of our method for application.

To verify the passive ranging algorithm based on Eqn. (25), we conducted reduced-model experiments with a scale ratio of 1:2300. As space is limited, Fig. 5 presents only the 16 even-numbered frames of the sequence. The photographing conditions and measurement results are shown in Table 1. As can be seen in Fig. 5, the target's appearance changes greatly; even so, the ranging error is acceptable.

Figure 5. Experiment image sequence A. (2), (4), ..., (32) are frame numbers.

All the pictures in this paper were taken by a Sony ExwavePRO CCD with 768 × 576 pixels; the aircraft in the pictures is an F16 model 24 cm long, and background removal was performed before this experiment. For natural scenes we use a moving-target tracking technology from VisionLab[23], which makes contour detection and background removal easy. Moreover, our method requires only 3 matched points, the minimum requirement in image tracking.

In this experiment, the aircraft moves along an arc-shaped path and the surveyor moves in a straight line; the image sequence shows a significant change in the target's attitude. In Table 1, the relative error of the target's distance estimation is less than ±4 % in most cases; the biggest relative error is 7.81 %. Such errors can meet the demands of practical application.

For a static observer, ranging Eqn. (14) can still be used to estimate the distance. The results fully show that the improved algorithm adapts to changes in the target's attitude. As a control, another group of pictures, sequence B, is given in Fig. 6, and the ranging errors are shown in Table 2. In Fig. 6, only the odd-numbered frames are presented. The experiments show that Table 2 has smaller errors than Table 1, owing to the smaller change of the target's attitude in Fig. 6. In most cases, experimental errors lie between those of Table 1 and Table 2.

Figure 6. Experiment image sequence B. (1), (3), ..., (41) are frame numbers.

Table 1. Data and ranging result of experiment A

Table 2. Data and ranging result of experiment B*

We proposed an algorithm for non-cooperative target passive ranging, in which distance estimation is reduced to solving a quadratic equation utilizing the target's imaging features and the camera positions. In contrast with the former algorithm of solving a quartic equation[15], the new algorithm avoids pathological solutions such as complex and negative roots. Hence the new algorithm is of much greater practical worth than the former ones. Theoretical analysis indicates that the new algorithm always has a solution provided the observer platform has non-zero displacement between adjacent sampling times. The proposed algorithm was also examined in indoor reduced-model experiments, which show that it can be implemented in practical passive ranging with a relative ranging error of less than ±4% in most cases. We also demonstrated that the distance estimation takes a much simpler mathematical form for a moving observer.

This work was supported by both the National Natural Science Foundation of China under Grant No. 60872136 and by Natural Science Basic Research Plan in Shaanxi Province of China (Program No. 2011JM8002). The authors would like to thank the anonymous reviewers for their valuable advice toward the improvement of this article.

1. Reilly, J.P.; Klein, T. & Ilver, H. Design and demonstration of an infrared passive ranging. Johns Hopkins APL Technical Digest, 1999, 20(2), 1854-1859.

2. Suhr, J.K.; Jung, H.G.; Bae, K.H. & Kim, J.H. Monocular motion stereo-based free parking space detection apparatus and method. US Patent 8134479, 13 March 2012.

3. Olson, C.F. & Abi-Rached, H. Wide-baseline stereo vision for terrain mapping. Machine Vision Appl, 2010, 21(5), 713-725.[Full text via CrossRef]

4. Hewson, R. Taurus KEPD 350 (KEPD 150). Jane’s Air-Launched Weapons (Air-to-surface Missiles-Stand-off and Cruise). 24 April, 2012. (Accessed on 12 September, 2012).

5. Lepetit, V. & Fua, P. Monocular Model-Based 3D Tracking of Rigid Objects: A Survey. Foundations Trends Comput. Graphics Vision, 2005, 1(1), 1-89.[Full text via CrossRef]

6. Tuytelaars, T. & Mikolajczyk, K. Local invariant feature detectors: a survey. Foundations Trends Comp. Graphics Vision, 2008, 3(3), 177-280.[Full text via CrossRef]

7. Yilmaz, A.; Javed, O. & Shah, M. Object Tracking: A survey. ACM Computing Surveys, 2006, 38(4), pp. 1-45. [Full text via CrossRef]

8. Newcombe, R.A.; Davison, A.J. & Izadi, S. KinectFusion: Real-time dense surface mapping and tracking. In the IEEE International Symposium on Mixed and Augmented Reality (ISMAR'11), Basel, Switzerland, October 2011, pp. 127-136.[Full text via CrossRef]

9. Zhang, Z.; Huang, Y.; Li, C. & Kang, Y. Monocular Vision Simultaneous Localization and Mapping using SURF. In 7th World Congress on Intelligent Control and Automation (WCICA’ 2008), Chongqing, China, June 2008, pp. 1651-1656. [Full text via CrossRef]

10. Baker, P. & Kamgar-Parsi, B. Using shorelines for autonomous air vehicle guidance. Comp. Vision Image Understanding, 2010, 114(6), 723-729.[Full text via CrossRef]

11. Le, M.H. & Jo, K.H. Building detection and 3D reconstruction from two-view of monocular camera. In Computational Collective Intelligence: Technologies and Applications, Berlin Heidelberg, September 2011, pp. 428-437.[Full text via CrossRef]

12. Cannons, K. & Wildes, R. P. A Unifying Theoretical Framework for Region Tracking, York University Technical Report, CSE-2013-04, February 8, 2013.

13. Xiong, T. & Debrunner, C. Stochastic car tracking with line- and colour-based features. IEEE Trans. Intell. Transp. Syst., 2004, 5(4), 324-328.[Full text via CrossRef]

14. Zhang, Y.; Wang, Y. & Qu, H. Rotation and Scaling Invariant Feature Lines for Image Matching. In the 2011 International Conference on Mechatronic Science, Electric Engineering and Computer, Jilin, China, August 2011, pp.1135-1138.[Full text via CrossRef]

15. Fu, X.; Liu, S. & Li, E. A real-time image sequence processing algorithm for target ranging. In Proceedings SPIE 6279: High-Speed Photography and Photonics, Xi'an, China, 2006, Part 2, p. 62793A.

16. Troiani, C. & Martinelli, A. Vision-aided inertial navigation using virtual features. In the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2012, pp. 4828-4834.[Full text via CrossRef]

17. Priyanka, D.D. & Dhok, G.P. Analysis of distance measurement system of leading vehicle. Inter. J. Instrumentation Control Sys., 2012, 2(1), 11-23.[Full text via CrossRef]

18. Smith, P.; Reid, I. & Davison, A. Real-Time Monocular SLAM with Straight Lines. In the Proceeding of 2nd International Symposium on Visual Computing, Edinburgh, GB, November 2006. pp. 1-10.[Full text via CrossRef]

19. Rosten, E. & Drummond, T. Fusing points and lines for high performance tracking. In Proceedings of the 10th IEEE International Conference on Computer Vision, Beijing, China, October 2005, 2, pp. 1508-1515.[Full text via CrossRef]

20. Ansar, A. & Daniilidis, K. Linear pose estimation from points or lines. IEEE Trans. Pattern Anal. Mach. Intell., 2003, 25(4), 1-12.

21. Von Gioi, R. G.; Jakubowicz, J.; Morel, J. M. & Randall, G. LSD: A fast line segment detector with a false detection control. IEEE Trans. Pattern Anal. Mach. Intell., 2010, 32(4), 722-732.[Full text via CrossRef]

22. Gong, J.; Fan, G.; Yu, L.; Havlicek, J. P. & Chen, D. Joint view-identity manifold for target tracking and recognition. In the 19th IEEE International Conference on Image Processing (ICIP), Orlando FL, USA, September 2012, pp.1357-1360.[Full text via CrossRef]

23. VisionLab VCL + Source code 4.5, URL:http://visionlab-vcl-source-code.en.softonic.com/.

Dr Xiaoning Fu received his PhD in Technology Physics from Xidian University in 2005. He currently works at the School of Electromechanical Engineering, Xidian University, where he is in charge of photoelectric detection technology and systems, video signal processing, and electronic countermeasures. His research interests include imaging detection, signal processing, electro-optic ranging, and countermeasures. Lixia Wang received her bachelor's degree in Automation from Anhui Polytechnic University in 2011. She is now pursuing her master's degree in the College of Electromechanical Engineering, Xidian University. Her research interests include photoelectric guidance systems and electro-optic countermeasures.