Correction of Inertial Navigation System’s Errors by the Help of Video-Based Navigator Based on Digital Terrain Map

Abstract

This paper presents an error analysis of a novel navigation algorithm that uses as input the sequence of images acquired by a moving camera and a Digital Terrain (or Elevation) Map (DTM/DEM). More specifically, it has been shown that the optical flow derived from two consecutive camera frames can be combined with a DTM to estimate the position, orientation and ego-motion parameters of the moving camera. As opposed to previous works, the proposed approach does not require an intermediate explicit reconstruction of the 3D world. In the present work the sensitivity of this algorithm is studied. The main sources of error are identified as the evaluation of the optical flow, the quality of the information about the terrain, the structure of the observed terrain and the trajectory of the camera. By assuming an appropriate characterization of these error sources, a closed-form expression for the uncertainty of the pose and motion of the camera is first developed, and the influence of these factors is then confirmed through extensive numerical simulations. The main conclusion is that the proposed navigation algorithm produces accurate estimates for reasonable scenarios and error sources, and can therefore be used effectively as part of the navigation system of an autonomous vehicle.
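The closed-form uncertainty expression described above follows the standard first-order (Haralick-style) covariance propagation pattern: if an estimate y = f(x) depends on noisy inputs x with covariance Σx, then Σy ≈ J Σx Jᵀ, where J is the Jacobian of f. The following sketch is purely illustrative and is not the authors' implementation; the toy projection model and all names in it are hypothetical stand-ins for the actual optical-flow/DTM constraints.

```python
# Illustrative first-order covariance propagation: Sigma_y = J Sigma_x J^T.
# This is a generic sketch of the technique, not the paper's algorithm.
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f evaluated at x."""
    x = np.asarray(x, dtype=float)
    y0 = np.asarray(f(x), dtype=float)
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2 * eps)
    return J

def propagate_covariance(f, x, Sigma_x):
    """First-order propagation of the input covariance through f."""
    J = numerical_jacobian(f, x)
    return J @ Sigma_x @ J.T

# Toy measurement model: pinhole-style projection of a terrain point,
# standing in for the optical-flow/DTM measurement equation (hypothetical).
def project(p):
    X, Y, Z = p
    return np.array([X / Z, Y / Z])

# Example input covariance: DTM height (Z) noisier than planimetric position.
Sigma_p = np.diag([0.5**2, 0.5**2, 2.0**2])
Sigma_uv = propagate_covariance(project, np.array([10.0, 5.0, 100.0]), Sigma_p)
print(Sigma_uv)
```

In the paper's setting, the role of `project` is played by the full pose-and-motion measurement model, and Σx aggregates the optical-flow, DTM and trajectory error sources characterized in the analysis.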

Share and Cite:

Kupervasser, O. and Rubinstein, A. (2013) Correction of Inertial Navigation System’s Errors by the Help of Video-Based Navigator Based on Digital Terrain Map. Positioning, 4, 89-108. doi: 10.4236/pos.2013.41010.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Y. Liu and M. A. Rodrigues, “Statistical Image Analysis for Pose Estimation without Point Correspondences,” Pattern Recognition Letters, Vol. 22, No. 11, 2001, pp. 1191-1206. doi:10.1016/S0167-8655(01)00052-6
[2] P. David, D. DeMenthon, R. Duraiswami and H. Samet, “SoftPOSIT: Simultaneous Pose and Correspondence Determination,” In: A. Heyden, et al., Eds., Computer Vision - ECCV 2002, Lecture Notes in Computer Science, Vol. 2352, Springer-Verlag, Berlin, 2002, pp. 698-714.
[3] J. L. Barron and R. Eagleson, “Recursive Estimation of Time-Varying Motion and Structure Parameters,” Pattern Recognition, Vol. 29, No. 5, 1996, pp. 797-818. doi:10.1016/0031-3203(95)00114-X
[4] T. Y. Tian, C. Tomasi and D. J. Heeger, “Comparison of Approaches to Egomotion Computation,” Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, 18-20 June 1996, pp. 315-320.
[5] A. Chiuso and S. Soatto, “MFm: 3-D Motion from 2-D Motion, Causally Integrated over Time,” Washington University Technical Report, Washington University, St. Louis, 1999.
[6] M. Irani, B. Rousso and S. Peleg, “Robust Recovery of Ego-Motion,” Proceedings of Computer Analysis of Images and Patterns (CAIP), Budapest, 13-15 September 1993, pp. 371-378.
[7] D. G. Sim, R. H. Park, R. C. Kim, S. U. Lee and I. C. Kim, “Integrated Position Estimation Using Aerial Image Sequences,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 1, 2002, pp. 1-18. doi:10.1109/34.982881
[8] J. Oliensis, “A Critique of Structure-from-Motion Algorithms,” Computer Vision and Image Understanding, Vol. 80, No. 2, 2000, pp. 172-214. doi:10.1006/cviu.2000.0869
[9] R. Lerner and E. Rivlin, “Direct Method for Video-Based Navigation Using a Digital Terrain Map,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 33, No. 2, 2011, pp. 406-411. doi:10.1109/TPAMI.2010.171
[10] R. Lerner, E. Rivlin and H. P. Rotstein, “Pose and Motion Recovery from Feature Correspondences and a Digital Terrain Map,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 28, No. 9, 2006, pp. 1404-1417. doi:10.1109/TPAMI.2006.192
[11] C.-P. Lu, G. D. Hager and E. Mjolsness, “Fast and Globally Convergent Pose Estimation from Video Images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 6, 2000, pp. 610-622. doi:10.1109/34.862199
[12] G. Welch and G. Bishop, “An Introduction to the Kalman Filter,” 2004. http://www.menem.com/ilya/digital_library/control/welch-bishop-01.pdf
[13] Inertial Navigation System (INS) Toolbox for MATLAB. http://www.gpsoftnav.com/
[14] O. Kupervasser, R. Lerner, E. Rivlin and H. P. Rotstein, “Error Analysis for a Navigation Algorithm Based on Optical-Flow and a Digital Terrain Map,” Proceedings of the 2008 IEEE/ION Position, Location and Navigation Symposium, Monterey, 5-8 May 2008, pp. 1203-1212.
[15] R. M. Haralick, “Propagating Covariance in Computer Vision,” In: K. Bowyer and N. Ahuja, Eds., Advances in Image Understanding: A Festschrift for Azriel Rosenfeld, IEEE Computer Society Press, Washington DC, 1996, pp. 142-157.
[16] W. G. Rees, “The Accuracy of Digital Elevation Models Interpolated to Higher Resolutions,” International Journal of Remote Sensing, Vol. 21, No. 1, 2000, pp. 7-20. doi:10.1080/014311600210957
[17] D. Hoaglin, F. Mosteller and J. Tukey, “Understanding Robust and Exploratory Data Analysis,” John Wiley & Sons, New York, 1983.
[18] R. Lerner, E. Rivlin and H. P. Rotstein, “Pose and Motion Recovery from Feature Correspondences and a Digital Terrain Map,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 28, No. 9, 2006, pp. 1404-1417. doi:10.1109/TPAMI.2006.192
[19] P. Gurfil and H. Rotstein, “Partial Aircraft State Estimation from Visual Motion Using the Subspace Constraints Approach,” Journal of Guidance, Control and Dynamics, Vol. 24, No. 5, 2001, pp. 1016-1028.

Copyright © 2024 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.