
Acta Armamentarii ›› 2024, Vol. 45 ›› Issue (11): 3926-3937. doi: 10.12382/bgxb.2023.1221


Event-combined Visual-inertial Odometry Using Point and Line Features

LIU Yumin1, CAI Zhihao1,2,*, SUN Jialing1, ZHAO Jiang1, WANG Yingxun1,2

  1. School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
  2. Institute of Unmanned System, Beihang University, Beijing 100191, China
  • Received: 2023-12-29  Online: 2024-11-26
  • Corresponding author: CAI Zhihao
  • Supported by: National Key Laboratory of Aircraft Control Integration Foundation (11300LB2022103007)

Abstract:

Visual-inertial odometry is a key technology that enables robots to localize themselves autonomously. As an asynchronous vision sensor, the event camera is complementary to the traditional frame-based camera. For scenes with low light, drastic illumination changes, and high-speed motion, the output of the event camera is fused with traditional images, and a real-time visual-inertial odometry using point and line features is developed in combination with an inertial measurement unit (IMU). An algorithm for generating event images from the event stream is proposed, a point-line feature detection method that fuses events is designed, and a back-end sliding-window optimization algorithm is designed based on the idea of visual-inertial tight coupling. Dataset experiments and UAV flight experiments are conducted for validation. The dataset results show that, compared with a visual-inertial odometry using point and line features from traditional images only, the proposed odometry reduces the positioning error by more than 22% on average in high-speed motion scenes and by more than 59% on average in scenes with low light and drastic illumination changes.

Key words: event camera, point and line features, visual-inertial odometry, visual simultaneous localization and mapping, pose estimation
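
The abstract's first algorithmic step, accumulating the asynchronous event stream into an event image on which conventional point and line detectors can run, can be illustrated with a simple polarity-accumulation scheme. The Python sketch below is a generic illustration, not the paper's event-image algorithm: the array names (xs, ys, ts, ps), the time-slice selection, and the signed-count normalization are all assumptions made for the example.

# Minimal sketch (illustrative only): accumulate an asynchronous event
# stream into a fixed-size "event image" usable by standard feature detectors.
import numpy as np

def events_to_image(xs, ys, ts, ps, height, width, t_start, t_end):
    """Accumulate events with timestamps in [t_start, t_end) into an image.

    xs, ys : pixel coordinates of each event (int arrays)
    ts     : event timestamps in seconds (float array)
    ps     : event polarities, +1 for brightness increase, -1 for decrease
    """
    img = np.zeros((height, width), dtype=np.float32)

    # Keep only the events that fall inside the requested time slice.
    mask = (ts >= t_start) & (ts < t_end)

    # Sum signed polarities per pixel; pixels hit by many events of the
    # same polarity (typically strong edges) get large magnitudes.
    np.add.at(img, (ys[mask], xs[mask]), ps[mask].astype(np.float32))

    # Normalize to an 8-bit grayscale image so that conventional point and
    # line feature detectors can run on it.
    max_abs = np.abs(img).max()
    if max_abs > 0:
        img = 127.5 + 127.5 * img / max_abs
    else:
        img.fill(127.5)
    return img.astype(np.uint8)

# Toy usage with synthetic events on a 180x240 sensor.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 10000
    xs = rng.integers(0, 240, n)
    ys = rng.integers(0, 180, n)
    ts = np.sort(rng.uniform(0.0, 0.05, n))
    ps = rng.choice([-1, 1], n)
    frame = events_to_image(xs, ys, ts, ps, 180, 240, 0.0, 0.05)
    print(frame.shape, frame.dtype, frame.min(), frame.max())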

CLC number: