SUN Zuolei + HUANG Jiaming + ZHANG Bo
DOI:10.13340/j.jsmu.2016.04.016
Article ID: 1672-9498(2016)04-0087-05
Abstract: To improve the performance of the monocular visual odometry algorithm, two aspects are studied: visual feature selection and rejection of mismatched features. The SURF descriptor is used to extract feature points from monocular images and to match features across adjacent frames of the image sequence; the fundamental matrix and then the essential matrix are obtained with the normalized linear eight-point method. The 3D coordinates of the matched points are solved by triangulation, and the rotation and translation of the camera motion between two frames are then computed from the 2D-2D model, so that a monocular visual odometry system is constructed. To improve the algorithm performance, the RANSAC algorithm is used to remove the feature mismatches from the initial computation, and ground data are used to recover the translation scale of the camera motion. Experimental results verify that the RANSAC algorithm can effectively reject feature mismatches and reduce the cumulative error of the monocular visual odometry.
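The triangulation step described in the abstract (solving the 3D coordinates of a matched point from its two projections) can be sketched with a linear DLT solve. This is a minimal NumPy illustration under assumed camera matrices, not the authors' implementation; the function name `triangulate` is ours:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point X from its
    projections x1, x2 under the 3x4 camera matrices P1, P2.
    Each image point contributes two rows of a homogeneous system A X = 0,
    obtained from the cross product of x and P X."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the smallest
    # singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

With noise-free projections, the DLT solution is exact; with noisy matches it minimizes an algebraic (not geometric) error, which is why the paper's pipeline benefits from outlier rejection beforehand.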
Key words:
robot localization; visual odometry; feature refining; machine vision; SURF; RANSAC
CLC number: TP242
Document code: A
Monocular visual odometry with RANSAC-based outlier rejection
SUN Zuolei1, HUANG Jiaming1, ZHANG Bo2
(1. Information Engineering College, Shanghai Maritime University, Shanghai 201306, China;
2. Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210, China)
Abstract:
In order to enhance the performance of the monocular visual odometry algorithm, visual feature extraction and mismatched feature rejection are studied. The SURF descriptor is employed to extract features from monocular images and to match features across adjacent images of the sequence. The fundamental matrix and the essential matrix are derived using the normalized eight-point method. The 3D coordinates of the matched points are calculated by triangulation, and the camera rotation and translation between two frames are then estimated based on the 2D-2D model. As a result, a monocular visual odometry system is constructed. To improve the algorithm performance, the RANSAC algorithm is adopted to reject the feature mismatches in the first calculation, and the translation scale of the camera is recovered from ground data. The experimental results demonstrate that the RANSAC algorithm can effectively eliminate feature mismatches and reduce the cumulative error of the monocular visual odometry.
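As a rough illustration of the normalized eight-point method named in the abstract, the NumPy sketch below estimates the fundamental matrix from eight or more correspondences. It is a simplified stand-in for the paper's pipeline, not the authors' code; the function names are ours:

```python
import numpy as np

def normalize(pts):
    """Translate points to zero mean and scale them so the mean distance
    from the origin is sqrt(2) (Hartley normalization)."""
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * centroid[0]],
                  [0, s, -s * centroid[1]],
                  [0, 0, 1.0]])
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    return (T @ pts_h.T).T, T

def eight_point(p1, p2):
    """Estimate the fundamental matrix F (x2^T F x1 = 0) from >= 8
    point correspondences p1 <-> p2 (Nx2 arrays)."""
    x1, T1 = normalize(p1)
    x2, T2 = normalize(p2)
    # Each correspondence contributes one row of the system A f = 0.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint by zeroing the smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    F = U @ np.diag(S) @ Vt
    # Undo the normalization.
    return T2.T @ F @ T1
```

The essential matrix then follows as E = K^T F K for a known intrinsic matrix K, from which rotation and translation can be decomposed.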
Key words:
robot localization; visual odometry; feature refining; computer vision; SURF; RANSAC
Received date: 2015-12-08; Revised date: 2016-03-24
Foundation item: National Natural Science Foundation of China (61105097, 51279098, 61401270); Scientific Research and Innovation Project of Shanghai Municipal Education Commission (13YZ081)
Biography:
SUN Zuolei (b. 1982), male, from Zaozhuang, Shandong; associate professor, PhD; research interests: mobile robot navigation and machine learning, (E-mail) szl@mpig.com.cn
4 Conclusion
In this paper, the RANSAC algorithm is used to refine SURF feature point matching so as to improve the performance of monocular visual odometry (VO). The experimental results show that using the ratio test alone to remove mismatches leaves a large error, while rejecting mismatches with the RANSAC algorithm, combined with the normalized linear eight-point method, can effectively reduce the cumulative error of the VO.
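The RANSAC mismatch rejection follows a generic hypothesize-and-verify loop: fit a model to a minimal random sample, count the correspondences it explains, and keep the best hypothesis. The sketch below shows that structure in NumPy with a line model standing in for the fundamental-matrix model of the paper; all names are illustrative:

```python
import numpy as np

def ransac(data, fit, residual, sample_size, threshold,
           iterations=200, rng=None):
    """Generic RANSAC: repeatedly fit a model to a minimal random sample,
    keep the hypothesis with the most inliers, then refit on its inliers.
    Returns (refit_model, inlier_mask_of_best_hypothesis)."""
    rng = rng or np.random.default_rng(0)
    best_model, best_inliers = None, np.zeros(len(data), dtype=bool)
    for _ in range(iterations):
        idx = rng.choice(len(data), sample_size, replace=False)
        model = fit(data[idx])
        inliers = residual(model, data) < threshold
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    # Final refit over all inliers of the best hypothesis.
    return fit(data[best_inliers]), best_inliers
```

For the paper's setting, `fit` would be the normalized eight-point estimator over eight sampled matches and `residual` an epipolar distance; the loop itself is unchanged.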
References:
[1] SCARAMUZZA D, FRAUNDORFER F. Visual odometry part I: the first 30 years and fundamentals[J]. IEEE Robotics & Automation Magazine, 2011, 18(4): 80-92. DOI: 10.1109/MRA.2011.943233.
[2] FORSTER C, PIZZOLI M, SCARAMUZZA D. SVO: fast semi-direct monocular visual odometry[C]// Robotics and Automation (ICRA), 2014 IEEE International Conference on. Hong Kong: IEEE, 2014: 15-22. DOI: 10.1109/ICRA.2014.6906584.
[3] HANSEN P, ALISMAIL H, RANDER P, et al. Monocular visual odometry for robot localization in LNG pipes[C]// Robotics and Automation (ICRA), 2011 IEEE International Conference on. Shanghai: IEEE, 2011: 3111-3116. DOI: 10.1109/ICRA.2011.5979681.
[4] ZHENG Chi, XIANG Zhiyu, LIU Jilin. Monocular visual odometry fusing optical flow and feature point matching[J]. Journal of Zhejiang University (Engineering Science), 2014, 48(2): 279-284. DOI: 10.3785/j.issn.1008-973X.2014.02.014.
[5] FRAUNDORFER F, SCARAMUZZA D. Visual odometry part II: matching, robustness, optimization, and applications[J]. IEEE Robotics & Automation Magazine, 2012, 19(2): 78-90. DOI: 10.1109/MRA.2012.2182810.
[6] SANGINETO E. Pose and expression independent facial landmark localization using dense-SURF and the Hausdorff distance[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(3): 624-638. DOI: 10.1109/TPAMI.2012.87.
[7] WU Fuchao. Mathematical methods in computer vision[M]. Beijing: Science Press, 2008: 63-77.
[8] CHOI S, PARK J, YU W. Resolving scale ambiguity for monocular visual odometry[C]// Ubiquitous Robots and Ambient Intelligence (URAI), 2013 10th International Conference on. Jeju: IEEE, 2013: 604-608. DOI: 10.1109/URAI.2013.6677403.
[9] GEIGER A, LENZ P, STILLER C, et al. Vision meets robotics: the KITTI dataset[J]. The International Journal of Robotics Research, 2013, 32(11): 1231-1237.
[10] GEIGER A, ZIEGLER J, STILLER C. StereoScan: dense 3D reconstruction in real-time[C]// Intelligent Vehicles Symposium (IV), 2011 IEEE. Baden-Baden: IEEE, 2011: 963-968. DOI: 10.1109/IVS.2011.5940405.
(Editor: JIA Qunping)