An Algorithm to Estimate Scale Weights of Complex Wavelets for Effective Feature Extraction in Aerial Images
Research on vision-based navigation of unmanned air vehicles (UAVs) has been in focus in recent years, in both military and civilian application domains. Vision-based navigation uses features as cues to match successive images and compute the location of the UAV. The method proposed in this paper extends existing work by automatically computing scale weights for dual-tree complex wavelet transform (DTCWT) coefficients, improving feature detection relative to discrete wavelets and their Fourier counterparts. DTCWT coefficients offer reduced shift sensitivity, better directional selectivity with local phase information, and limited redundancy. However, the results depend on appropriate selection of the scale weights α and β, which differ from scene to scene. The existing technique uses rule-of-thumb recommended values of α and β irrespective of the scene, which generates too many keypoints and makes the registration process computationally intensive. For a real-time application, the requirement is to extract a small number of strong features. An algorithmic approach is therefore proposed to compute optimal values of the scale weights, considering both precision and accuracy. The method is tested on synthetic, simulated, and aerial images with different transformations and with noise between successive images. The DTCWT descriptor is observed to perform best, giving better results than the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF).
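The abstract describes combining wavelet subband magnitudes across scales, weighted by α and β, into an energy map whose strong local maxima become keypoints. A minimal, hypothetical sketch of that idea follows; it substitutes a simple Gaussian-pyramid band-pass for the actual DTCWT subbands used in the paper, and the function names, weight assignment (α for fine scales, β for coarse), and local-maximum detector are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def multiscale_energy_map(image, alpha=1.0, beta=1.0, nlevels=3):
    """Simplified stand-in for a scale-weighted keypoint energy map:
    band-pass magnitudes from repeated blurring, combined across scales
    with weight alpha for fine levels and beta for coarse levels.
    (Illustrative only; the paper uses DTCWT subband magnitudes.)"""
    def blur(img):
        # separable 1-2-1 binomial blur
        k = np.array([0.25, 0.5, 0.25])
        img = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 0, img)
        img = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
        return img

    energy = np.zeros(image.shape, dtype=float)
    current = image.astype(float)
    for level in range(nlevels):
        blurred = blur(current)
        band = np.abs(current - blurred)        # band-pass magnitude at this scale
        w = alpha if level < nlevels // 2 else beta
        energy += w * band
        current = blurred
    return energy

def detect_keypoints(energy, threshold):
    """Keypoints = pixels above threshold that are maxima of their 3x3 neighbourhood."""
    pts = []
    h, w = energy.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = energy[y, x]
            if v > threshold and v >= energy[y - 1:y + 2, x - 1:x + 2].max():
                pts.append((y, x))
    return pts
```

In this toy setting, lowering β de-emphasizes coarse-scale responses and raising the detection threshold retains fewer, stronger features — which is the trade-off the abstract says scene-appropriate α and β should control, instead of fixed rule-of-thumb values.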
Defence Science Journal, Vol. 64, No. 6, November 2014, pp. 549-556. DOI: http://dx.doi.org/10.14429/dsj.64.7785
Except where otherwise noted, the articles on this site are licensed under the Creative Commons License: CC Attribution-Noncommercial-No Derivative Works 2.5 India