A Data Association Method Based on Descriptor-Assisted Optical Flow Tracking and Matching
Abstract: The positioning accuracy of visual-inertial odometry based on the multi-state constraint Kalman filter (MSCKF) is easily degraded by mismatched feature points. This study proposes a data association method, based on descriptor-assisted optical flow tracking and matching, to mitigate such outliers. First, pyramid Lucas-Kanade (LK) optical flow is used to track and match feature points across sequential images. Second, the rBRIEF descriptor of each point in a matched pair is computed. Third, the Hamming distance between the two descriptors of each pair is calculated and used to evaluate their similarity. Finally, matched pairs of low similarity are eliminated as outliers. The proposed method is evaluated in terms of both the subjective quality of feature matching and the positioning accuracy. The results indicate that the method effectively removes feature-matching outliers in dynamic scenes. When the outlier-filtered images are used for MSCKF motion estimation, the drift rate of the position result is less than 0.38%, an improvement of 54.7% over the MSCKF algorithm without outlier rejection, and the single-frame image processing time is about 39 ms.
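To make the three steps concrete, the sketch below shows one way to realize them in Python with OpenCV. It is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the 21×21 LK window, the 3-level pyramid, and the `HAMMING_THRESH` value are all illustrative choices. OpenCV's ORB descriptor is rBRIEF; for simplicity the keypoint orientation is left unset here, whereas rBRIEF proper steers its sampling pattern by the computed orientation.

```python
import cv2

# Assumed threshold: maximum number of differing descriptor bits for a
# matched pair to be kept. The paper's actual value is not given here.
HAMMING_THRESH = 40

def rbrief_at(img, pts, orb):
    """Compute ORB (rBRIEF) descriptors at fixed pixel locations.

    orb.compute() silently drops points near the image border, so each
    KeyPoint carries its original index in class_id and the result is a
    {original index: descriptor} map that survives that filtering.
    """
    kps = [cv2.KeyPoint(float(x), float(y), 31, -1, 0, 0, i)
           for i, (x, y) in enumerate(pts.reshape(-1, 2))]
    kps, desc = orb.compute(img, kps)
    if desc is None:  # no point survived (e.g., all near the border)
        return {}
    return {kp.class_id: d for kp, d in zip(kps, desc)}

def track_and_filter(prev_img, curr_img, prev_pts):
    """prev_pts: float32 array of shape (N, 1, 2), e.g. from
    cv2.goodFeaturesToTrack(). Returns the surviving point pairs."""
    # Step 1: pyramid LK optical flow tracks prev_pts into curr_img.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_img, curr_img, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    p0, p1 = prev_pts[ok], curr_pts[ok]

    # Step 2: descriptor of each endpoint of every tracked pair.
    orb = cv2.ORB_create()
    d0 = rbrief_at(prev_img, p0, orb)
    d1 = rbrief_at(curr_img, p1, orb)

    # Step 3: Hamming distance = number of differing bits between the
    # two descriptors of a pair; low similarity (a large distance)
    # marks the pair as an outlier, and the pair is dropped.
    keep = sorted(i for i in d0.keys() & d1.keys()
                  if cv2.norm(d0[i], d1[i], cv2.NORM_HAMMING)
                  <= HAMMING_THRESH)
    return p0[keep], p1[keep]
```

Only the returned inlier pairs would then be fed to the MSCKF update, which is the data-association role the method plays in the pipeline.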
Key words: visual-inertial / data association method / feature matching / rBRIEF descriptor / optical flow tracking / MSCKF
Table 1. Equipment parameters of inertial navigation

| Indicator | M40 | POS320 |
| --- | --- | --- |
| Gyro bias stability/((°)/h) | 50 | 0.3 |
| Angular random walk/((°)/$ \sqrt h $) | 0.4 | 0.03 |
| Accelerometer bias stability/mGal | 200 | 200 |
| Velocity random walk/(m/s/$ \sqrt h $) | 0.1 | 0.05 |

Table 2. Parameters of the Basler acA1600-20gm camera

| Parameter | Value |
| --- | --- |
| Sensor type | CCD |
| Sensor size/mm | 7.16×5.44 |
| Resolution/pixel | 1628×1236 |
| Pixel size/μm | 4.4×4.4 |
| Interface | Gigabit Ethernet |
| Exposure control | Programmed via the camera API |
| Frame rate under external pulse triggering/(frame/s) | 20 |
| Channels (color) | Single channel (monochrome) |
| Pixel bit depth/bit | 12 |

Table 3. Statistics of VIO1.0 solution errors

| | N/m | E/m | D/m | ROLL/(°) | PITCH/(°) | YAW/(°) |
| --- | --- | --- | --- | --- | --- | --- |
| MAX | 7.74 | 16.80 | 2.63 | 0.18 | 0.17 | 1.46 |
| RMS | 3.13 | 7.48 | 1.02 | 0.05 | 0.08 | 0.82 |

Table 4. Statistics of VIO2.0 solution errors

| | N/m | E/m | D/m | ROLL/(°) | PITCH/(°) | YAW/(°) |
| --- | --- | --- | --- | --- | --- | --- |
| MAX | 5.51 | 7.17 | 2.43 | 0.14 | 0.13 | 1.05 |
| RMS | 2.07 | 3.26 | 1.03 | 0.03 | 0.05 | 0.45 |
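For reference, the abstract's headline numbers can be related through the usual definitions of drift rate and relative improvement; these definitions are an assumption here, since the excerpt does not state them:

$$ r = \frac{\lVert \hat{\boldsymbol{p}}_{\text{end}} - \boldsymbol{p}_{\text{end}} \rVert}{L} \times 100\%, \qquad \eta = \frac{r_{\text{base}} - r_{\text{prop}}}{r_{\text{base}}} \times 100\% $$

where $L$ is the distance travelled, $r_{\text{prop}}$ is the drift rate of the proposed method, and $r_{\text{base}}$ that of the MSCKF without outlier rejection. Under this reading, $r_{\text{prop}} < 0.38\%$ together with $\eta = 54.7\%$ implies $r_{\text{base}} \approx 0.38/(1-0.547) \approx 0.84\%$.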