
A Method for Indoor Passenger Identity Recognition on Large Cruise Ships Based on Vision and Inertial Sensors

FENG Xiaoyi, MA Yuting, CHEN Cong, WANG Yifei, LIU Kezhong, CHEN Mozi

Citation: FENG Xiaoyi, MA Yuting, CHEN Cong, WANG Yifei, LIU Kezhong, CHEN Mozi. A Method for Indoor Passenger Identity Recognition on Large Cruise Ships Based on Vision and Inertial Sensors[J]. Journal of Transport Information and Safety, 2024, 42(1): 67-75. doi: 10.3963/j.jssn.1674-4861.2024.01.008


doi: 10.3963/j.jssn.1674-4861.2024.01.008
Funding: 

National Natural Science Foundation of China (General Program) 51979216

Hubei Provincial Natural Science Foundation (Innovation Group Project) 2021CFA001

Hubei Provincial Natural Science Foundation (Youth Project) 20221J0059

Details
    About the authors:

    FENG Xiaoyi (2000—), master's student. Research interest: indoor positioning. E-mail: 318403@whut.edu.cn

    Corresponding author:

    LIU Kezhong (1976—), Ph.D., professor. Research interest: maritime traffic safety. E-mail: kzliu@whut.edu.cn

  • CLC number: U675.79; TP212.9

A Method for Indoor Passenger Identity Recognition on Large Cruise Ships Based on Vision and Inertial Sensors

  • Abstract: The interior structure and scenes of a cruise ship are complex, and passenger-identification methods based on monocular surveillance cameras lack depth information: they cannot accurately recover a passenger's position, heading, or heading changes, and therefore struggle to identify passengers in complex scenes. To address this problem, a method for indoor passenger identity recognition on large cruise ships is proposed that combines monocular surveillance cameras with the inertial sensors in passengers' handheld devices. The YOLOv5 object-detection algorithm extracts each passenger's pixel coordinates and bounding box from surveillance video frames; a 2D-3D coordinate transformation converts these pixel coordinates into the passenger's physical coordinates relative to the camera; and an improved neural-network model then estimates the passenger's heading angle in the camera's coordinate frame. In parallel, the inertial sensors in passengers' smartphones collect motion data: changes in acceleration are detected to determine each passenger's walking state, and magnetic-field measurements are fused in to estimate the passenger's true heading angle in the geodetic frame. The extracted vision and inertial features and their relations, including instantaneous walking state, step length, relative heading angle, and relative distance, are encoded to counter the accumulation of sensor error. From the two resulting multi-relational graphs, a similarity measure between features is defined, and the vision and inertial sensors graph matching (VIGM) algorithm solves for the maximum-similarity matrix, thereby associating the same passenger across the two graphs. Experiments in four scenes aboard the Yangtze River cruise ship "Golden No. 3" (the lobby, chess room, multi-function hall, and a corridor) show that VIGM achieves an average matching accuracy of 83.9% within a 1-3 s window, only 4.5% below that of the ViTag identity-matching algorithm, which relies on a high-cost depth camera. The results indicate that the proposed camera-plus-inertial-sensor identification method is cheap to implement yet performs comparably to methods that use expensive depth cameras.
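The 2D-3D conversion step can be illustrated with a standard pinhole-camera back-projection onto the floor plane. The sketch below is not the paper's exact formula (which is not reproduced here): it assumes a calibrated camera with intrinsic matrix K mounted at a known height above a flat deck with its optical axis horizontal, and takes the bottom-center pixel of a detection bounding box as the passenger's foot point. The function name and parameters are illustrative.

```python
import numpy as np

def pixel_to_ground(u, v, K, cam_height):
    """Back-project a bounding box's bottom-center pixel (u, v) onto the
    floor plane, giving the passenger's position relative to the camera.

    Assumes a calibrated pinhole camera with intrinsic matrix K, mounted
    cam_height metres above a flat floor with its optical axis horizontal
    (camera axes: x right, y down, z forward).
    """
    # Ray through the pixel in camera coordinates
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    if d[1] <= 0:
        raise ValueError("pixel lies above the horizon; no floor intersection")
    # Scale the ray until it meets the floor plane, cam_height below the camera
    s = cam_height / d[1]
    return s * d[0], s * d[2]  # (lateral offset, forward distance) in metres
```

For example, with focal length 800 px and a camera 2.5 m up, a foot point 400 px below the principal point back-projects to a passenger 5 m ahead of the camera.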
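Walking-state detection from smartphone accelerations is, in essence, peak detection on the gravity-removed acceleration magnitude. A minimal sketch, with illustrative threshold and interval values (the paper's actual detector and parameters are not reproduced here):

```python
import numpy as np

def detect_steps(acc, fs=50.0, threshold=1.5, min_interval=0.3):
    """Return sample indices of detected steps from 3-axis accelerometer
    data (rows of m/s^2 samples at sampling rate fs).

    A step is counted at each local peak of the mean-removed acceleration
    magnitude that exceeds `threshold` m/s^2 and occurs at least
    `min_interval` seconds after the previously detected step.
    """
    mag = np.linalg.norm(acc, axis=1)
    mag = mag - mag.mean()                # crude removal of the gravity offset
    min_gap = int(min_interval * fs)
    steps, last = [], -min_gap
    for i in range(1, len(mag) - 1):
        is_peak = mag[i] >= mag[i - 1] and mag[i] > mag[i + 1]
        if is_peak and mag[i] > threshold and i - last >= min_gap:
            steps.append(i)
            last = i
    return steps
```

The minimum-interval guard suppresses the double peaks that heel strike and toe-off produce within a single stride.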
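The final association step pairs vision tracks with inertial tracks by maximizing total feature similarity. The paper's VIGM algorithm operates on two multi-relational graphs; the sketch below is a much-simplified stand-in that scores candidate pairs by cosine similarity of per-window feature vectors (step length, heading change, and the like) and brute-forces the best one-to-one assignment, which is tractable for the handful of passengers in a camera's view:

```python
import itertools
import numpy as np

def match_identities(vision_feats, imu_feats):
    """Pair each vision track with the IMU track whose motion features it
    best matches, maximizing total cosine similarity over all one-to-one
    assignments (brute force; fine for a handful of passengers)."""
    V = np.asarray(vision_feats, dtype=float)
    P = np.asarray(imu_feats, dtype=float)
    # Cosine-similarity matrix S[i, j] between vision track i and phone j
    Vn = V / np.linalg.norm(V, axis=1, keepdims=True)
    Pn = P / np.linalg.norm(P, axis=1, keepdims=True)
    S = Vn @ Pn.T
    best, best_score = None, -np.inf
    for perm in itertools.permutations(range(S.shape[1]), S.shape[0]):
        score = sum(S[i, j] for i, j in enumerate(perm))
        if score > best_score:
            best, best_score = perm, score
    return [(i, j) for i, j in enumerate(best)]
```

For larger crowds, the brute-force loop would be replaced by an optimal-assignment solver (e.g. the Hungarian algorithm), which maximizes the same objective in polynomial time.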

     

  • Figure 1. Schematic diagram of passenger identity matching

    Figure 2. Smartphone local coordinate system

    Figure 3. System framework diagram

    Figure 4. Global heading angle: 3D plot (left) and top view (right)

    Figure 5. Neural network model

    Figure 6. Feature graph matching method

    Figure 7. Typical cruise ship interior scenes

    Figure 8. Device deployment schematic

    Figure 9. Pedestrian detection results based on YOLOv5

    Figure 10. Visual heading angle error analysis

    Figure 11. Inertial sensor data discrepancies

    Figure 12. Step detection error analysis

    Figure 13. Sensor heading angle error analysis

    Figure 14. Pedestrian identity matching accuracy

  • [1] CHEN L W, CHENG J H, TSENG Y C. Optimal path planning with spatial-temporal mobility modeling for individual-based emergency guiding[J]. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2015, 45(12): 1491-1501. doi: 10.1109/TSMC.2015.2445875
    [2] WANG C. Path independent gait identification method based on Wi-Fi[D]. Tianjin: Tianjin University, 2019. (in Chinese)
    [3] CHEN T S. Research on Wi-Fi radio frequency fingerprint extraction and recognition technology[D]. Nanjing: Southeast University, 2021. (in Chinese)
    [4] ZHENG L Y. Research and application of video character recognition based on graph neural network[D]. Hangzhou: Zhejiang University, 2022. (in Chinese)
    [5] CHEN X Q, ZHENG J B, LING J, et al. Detecting abnormal behaviors of workers at ship working fields via asynchronous interaction aggregation network[J]. Journal of Transport Information and Safety, 2022, 40(2): 22-29. (in Chinese) doi: 10.3963/j.jssn.1674-4861.2022.02.003
    [6] XU J, CHEN H, QIAN K, et al. IVR: integrated vision and radio localization with zero human effort[J]. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2019, 3(3): 1-22.
    [7] LI D, LU Y, XU J, et al. iPAC: integrate pedestrian dead reckoning and computer vision for indoor localization and tracking[J]. IEEE Access, 2019, 7: 183514-183523.
    [8] FANG S, ISLAM T, MUNIR S, et al. EyeFi: fast human identification through vision and WiFi-based trajectory matching[C]. 16th International Conference on Distributed Computing in Sensor Systems (DCOSS), Los Angeles, USA: IEEE, 2020.
    [9] LIU H, ALALI A, IBRAHIM M, et al. Vi-Fi: associating moving subjects across vision and wireless sensors[C]. 21st ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN), Milan, Italy: IEEE, 2022.
    [10] CHEN H, MUNIR S, LIN S. RFCam: uncertainty-aware fusion of camera and Wi-Fi for real-time human identification with mobile devices[J]. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2022, 6(2): 1-29.
    [11] CHEN M Z. Research on indoor positioning method of ship dynamic environment based on channel state information[D]. Wuhan: Wuhan University of Technology, 2020. (in Chinese)
    [12] ZHONG M, YOU Y, ZHOU S, et al. A robust visual-inertial SLAM in complex indoor environments[J]. IEEE Sensors Journal, 2023, 23(17): 19986-19994. doi: 10.1109/JSEN.2023.3274702
    [13] YANG C, CHENG Z, JIA X, et al. A novel deep learning approach to 5G CSI/Geomagnetism/VIO fused indoor localization[J]. Sensors, 2023, 23(3): 1311. doi: 10.3390/s23031311
    [14] BENJUMEA A, TEETI I, CUZZOLIN F, et al. YOLO-Z: improving small object detection in YOLOv5 for autonomous vehicles[R/OL]. (2021-12)[2023-08-31]. https://arxiv.org/abs/2112.11798.
    [15] ZHANG Z. A flexible new technique for camera calibration[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334.
    [16] JIANG W, YIN Z. Combining passive visual cameras and active IMU sensors for persistent pedestrian tracking[J]. Journal of Visual Communication and Image Representation, 2017, 48: 419-431.
    [17] LIU X. Research on attitude solving algorithm for multidimensional MEMS inertial sensor[D]. Harbin: Harbin Engineering University, 2013. (in Chinese)
    [18] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[R/OL]. (2014-09)[2023-08-31]. https://arxiv.org/abs/1409.1556.
    [19] RAJESWARI M, JAIGANESH S, SUJATHA P, et al. A study and scrutiny of diverse optimization algorithms to solve multi-objective quadratic assignment problem[C]. International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India: IEEE, 2016.
    [20] SHI S, CUI J, JIANG Z, et al. VIPS: real-time perception fusion for infrastructure-assisted autonomous driving[C]. The 28th Annual International Conference on Mobile Computing and Networking, Sydney: ACM, 2022.
    [21] CAO B B, ALALI A, LIU H, et al. ViTag: online WiFi fine time measurements aided vision-motion identity association in multi-person environments[C]. 19th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON), Stockholm, Sweden: IEEE, 2022.
Publication history
  • Received: 2023-08-31
  • Published online: 2024-05-31
