-
Title
Research on the targetless automatic calibration method for mining LiDAR and camera
-
Author
YANG Jiajia;ZHANG Chuanwei;ZHOU Libing;QIN Peilin;ZHAO Ruiqi
-
Organization
College of Mechanical Engineering, Xi'an University of Science and Technology
Shaanxi College of Communications Technology
Tiandi (Changzhou) Automation Co., Ltd.
CCTEG Changzhou Research Institute
College of Mechanical & Electrical Engineering, Nanjing University of Aeronautics and Astronautics
-
Abstract
Autonomous driving of mining vehicles depends on accurate environmental perception, and combining LiDAR with a camera provides richer and more accurate environmental information. Effective fusion of the two sensors requires extrinsic calibration. At present, most intrinsically safe onboard LiDARs used in mines are 16-beam devices, which produce relatively sparse point clouds. To address this problem, a targetless automatic calibration method for mining LiDAR and camera was proposed. Multi-frame point cloud fusion was used to obtain fused-frame point clouds, increasing point cloud density and enriching point cloud information. Vehicles and traffic signs in the scene were then extracted as effective targets by panoptic segmentation, and coarse calibration was completed by constructing 2D-3D correspondences between the centroids of the effective targets. In the fine calibration stage, the effective-target point clouds were projected with the coarse extrinsic parameters onto the segmentation masks after an inverse distance transform, an objective function measuring the matching degree of the effective targets' panoptic information was constructed, and the optimal extrinsic parameters were obtained by maximizing this objective function with a particle swarm optimization algorithm. The effectiveness of the method was verified by quantitative, qualitative, and ablation experiments. ① In the quantitative experiments, the translation error was 0.055 m and the rotation error was 0.394°; compared with a method based on semantic segmentation, the translation error was reduced by 43.88% and the rotation error by 48.63%. ② The qualitative results showed that the projections in the garage and mining-area scenes agreed closely with those under the ground-truth extrinsic parameters, demonstrating the stability of the method. ③ The ablation experiments showed that multi-frame point cloud fusion and the weight coefficients of the objective function significantly improved calibration accuracy: compared with single-frame point clouds, using fused-frame point clouds as input reduced the translation error by 50.89% and the rotation error by 53.76%, and introducing the weight coefficients reduced the translation error by 36.05% and the rotation error by 37.87%.
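
As a rough illustration of the coarse-calibration step described above, the sketch below recovers an initial LiDAR-to-camera pose from matched target centroids with OpenCV's EPnP solver. The centroid coordinates and the intrinsic matrix K are hypothetical placeholders; the paper's own centroid-matching procedure is not reproduced here.

```python
# Coarse-calibration sketch: matched target centroids (3D from the fused
# LiDAR frames, 2D from the panoptic masks) feed an EPnP solve for an
# initial LiDAR-to-camera pose. All values below are illustrative.
import cv2
import numpy as np

K = np.array([[900.0,   0.0, 640.0],     # camera intrinsics (assumed)
              [  0.0, 900.0, 360.0],
              [  0.0,   0.0,   1.0]])

pts_3d = np.array([[ 5.2,  1.1, 0.8],    # target centroids in LiDAR frame (m)
                   [ 8.7, -2.3, 1.5],
                   [12.4,  0.6, 2.1],
                   [ 6.9,  3.4, 0.9],
                   [10.3, -1.0, 1.2]])
pts_2d = np.array([[512.0, 300.0],       # matching mask centroids (px)
                   [710.0, 260.0],
                   [655.0, 330.0],
                   [420.0, 310.0],
                   [690.0, 305.0]])

# EPnP needs at least four well-spread correspondences for a stable pose.
ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)               # rotation vector -> 3x3 matrix
print("coarse extrinsics:\nR =\n", R, "\nt =", tvec.ravel())
```

In the pipeline described by the abstract, these correspondences come from pairing the centroid of the same segmented vehicle or traffic sign in the image mask and in the fused point cloud; the resulting pose only needs to be close enough to seed the fine stage.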
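
The fine stage can be sketched the same way: each candidate extrinsic projects the effective-target points onto that class's inverse-distance-transformed mask, and a particle swarm maximizes the weighted sum of mask responses. The class weights, swarm hyperparameters, and search bounds below are illustrative assumptions, not the paper's reported values.

```python
# Fine-calibration sketch: score 6-DoF candidates against the inverse-
# distance-transformed panoptic masks and refine with a plain particle
# swarm seeded at the coarse pose. Settings here are assumptions.
import cv2
import numpy as np

def inverse_distance_map(mask):
    # 1.0 on the target mask, decaying with pixel distance outside, so
    # near-miss projections still contribute a useful, smooth score.
    dist = cv2.distanceTransform((mask == 0).astype(np.uint8), cv2.DIST_L2, 3)
    return 1.0 / (1.0 + dist)

def objective(extr6, pts_by_class, idt_by_class, weights, K):
    # Score one candidate extrinsic (rx, ry, rz, tx, ty, tz).
    rvec, tvec = extr6[:3].copy(), extr6[3:].copy()
    score = 0.0
    for cls, pts in pts_by_class.items():
        proj, _ = cv2.projectPoints(pts, rvec, tvec, K, None)
        uv = np.round(proj.reshape(-1, 2)).astype(int)
        h, w = idt_by_class[cls].shape
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        score += weights[cls] * idt_by_class[cls][uv[ok, 1], uv[ok, 0]].sum()
    return score

def pso_maximize(f, center, span, n=40, iters=60, w=0.7, c1=1.5, c2=1.5):
    # Plain particle swarm search in the box [center - span, center + span].
    rng = np.random.default_rng(0)
    x = center + rng.uniform(-span, span, (n, 6))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    gbest = pbest[pval.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, 6))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.array([f(p) for p in x])
        better = val > pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmax()].copy()
    return gbest

# Example: refine around the coarse pose within small rotation/translation bounds.
# coarse6 = np.hstack([rvec.ravel(), tvec.ravel()])   # from the coarse step
# span = np.array([0.05, 0.05, 0.05, 0.2, 0.2, 0.2])  # rad / m search box
# best = pso_maximize(lambda e: objective(e, pts_by_class, idt_by_class,
#                                         weights, K), coarse6, span)
```

Seeding the swarm around the coarse pose keeps the search box small, which is what makes a derivative-free optimizer such as PSO practical for this six-parameter, non-differentiable matching objective.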
-
KeyWords
mining vehicles;autonomous vehicles;LiDAR;camera;multi-frame point cloud fusion;panoptic segmentation;extrinsic calibration;targetless calibration
-
Foundation
Innovative Talents Promotion Plan of Shaanxi Province: Science and Technology Innovation Team (2021TD-27).
-
DOI
-
Citation
YANG Jiajia, ZHANG Chuanwei, ZHOU Libing, et al. Research on the targetless automatic calibration method for mining LiDAR and camera[J]. Journal of Mine Automation, 2024, 50(10): 53-61, 89.