Technical Note

Real-Time 3D Mapping in Complex Environments Using a Spinning Actuated LiDAR System

Li Yan, Jicheng Dai, Yinghao Zhao and Changjun Chen
1 School of Geodesy and Geomatics, Wuhan University, Wuhan 430079, China
2 Hubei Luojia Laboratory, Wuhan University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(4), 963; https://doi.org/10.3390/rs15040963
Submission received: 28 December 2022 / Revised: 3 February 2023 / Accepted: 6 February 2023 / Published: 9 February 2023

Abstract

LiDAR is a crucial sensor for 3D environment perception. However, limited by the field of view of the LiDAR, it is sometimes difficult to achieve complete coverage of the environment with a single LiDAR.

In this paper, we design a spinning actuated LiDAR mapping system that is compatible with both UAV and backpack platforms and propose a tightly coupled laser–inertial SLAM algorithm for it. In our algorithm, edge and plane features in the point cloud are first extracted.

Then, to handle the significant changes in the distribution of point cloud features between two adjacent scans caused by the continuous rotation of the LiDAR, we employ an adaptive scan accumulation method to improve the stability and accuracy of point cloud registration.

After feature matching, the LiDAR feature factors and IMU pre-integration factor are added to the factor graph and jointly optimized to output the trajectory. In addition, an improved loop closure detection algorithm based on the Cartographer algorithm is used to reduce the drift.

We conducted exhaustive experiments to evaluate the performance of the proposed algorithm in complex indoor and outdoor scenarios.

The results showed that our algorithm is more accurate than the state-of-the-art algorithms LIO-SAM and FAST-LIO2 for the spinning actuated LiDAR system, and it can achieve real-time performance.

Graphical Abstract

1. Introduction

Light Detection and Ranging (LiDAR) can achieve highly accurate and efficient distance measurement. It is insensitive to illumination changes because it measures by actively emitting a pulsed laser.
By combining LiDAR with Simultaneous Localization and Mapping (SLAM) technology, the laser SLAM system can obtain a three-dimensional map of the surrounding environment both indoors and outdoors.
These advantages allow laser SLAM systems to play an important role in many fields, such as autonomous driving [1], building inspection [2], and forestry investigation [3]. Currently, the LiDAR used for laser SLAM can be broadly divided into two categories. Mechanical LiDAR is the most commonly used type. Its horizontal Field Of View (FOV) can reach nearly 360°, but its vertical FOV is very limited.
In addition, with the development of microelectronics technology, solid-state LiDAR, e.g., the DJI Livox series, has become increasingly common [4,5]. Generally, solid-state LiDAR can provide a larger vertical FOV than mechanical LiDAR, but its horizontal FOV is much smaller. Therefore, in many scenarios, neither type of LiDAR can completely cover the whole environment.
This shortcoming greatly limits the mapping efficiency of laser SLAM systems, especially when using a platform with limited endurance such as an Unmanned Aerial Vehicle (UAV) to carry the mapping system [6]. In narrow environments, this limited FOV will degrade the accuracy and stability of localization as only a small number of objects can be scanned.
Much research has been conducted to expand the FOV of LiDAR, and the solutions can be roughly divided into two categories. The first category is to combine multiple LiDARs [7,8,9,10,11]. Such systems typically use a horizontally mounted LiDAR to provide primary positioning information and a vertically mounted LiDAR to improve the scanning coverage. Its advantage is that the mechanical structure is simple.
However, due to the high price of LiDARs, these solutions greatly increase the cost. Another category of solutions is to actuate the LiDAR dynamically, and LOAM [12] is one of the best-known methods in this category. LOAM first accumulates the point cloud with the assistance of the IMU and then matches the accumulated point cloud with the global map to correct the accumulated errors.
Considering that the prediction error of the IMU grows rapidly over time and that the feature extraction and matching algorithms need to wait for sufficient data to be accumulated, this accumulation interval becomes a bottleneck limiting the accuracy of the SLAM system.
To address this problem, we designed a mapping system based on a spinning actuated multi-beam LiDAR. By using a multi-beam LiDAR, more information can be obtained in each scan, which helps to increase the frequency of point cloud matching and generate a denser point cloud map.
We first extract feature points from each point cloud frame using an improved LOAM-based feature extraction method.
Then, we judge whether the current point cloud contains enough information by analyzing the spatial distribution of the feature points and perform scan-to-map matching once the requirement is met.
Compared with accumulating point clouds with a fixed number of scans, this method allows a better balance between the matching frequency of point clouds and the accuracy and reliability of the matching results.
Finally, in order to eliminate the accumulated error, we added a loop closure detection module to the algorithm. The main contributions of this paper can be summarized as follows:
  • We propose a tightly coupled laser–inertial SLAM algorithm named Spin-LOAM for a spinning actuated LiDAR system.
  • We propose an adaptive scan accumulation method that improves the accuracy and reliability of matching by analyzing the spatial distribution of feature points.
  • Extensive experiments were conducted in indoor and outdoor environments to verify the effectiveness of our algorithm.

2. Related Work

SLAM is a fundamental problem in robotics, and numerous LiDAR-based SLAM algorithms have been proposed. LeGO-LOAM [13] extracts the ground point cloud from the real-time scan results to improve the accuracy in the elevation direction. T-LOAM [14] simultaneously extracts edge features, sphere features, planar features, and ground features to improve the matching accuracy. The works [15,16,17] refined the localization result by introducing plane constraints. Suma++ [18] not only uses geometric features in point clouds, but also introduces semantic information to assist point cloud matching. Besides these LiDAR-only odometry algorithms, many multi-sensor-fusion-based algorithms have been proposed to improve the accuracy and robustness.
LIO-SAM [19] combines the IMU pre-integration factor with the LiDAR odometry factor through a factor graph. LIO-Mapping [20] integrates the LiDAR and IMU in a tightly coupled fashion. FAST-LIO [21] adopts a tightly coupled iterated extended Kalman filter on a manifold to fuse the data and is accelerated by introducing an incremental KD-Tree in FAST-LIO2 [22]. CLINS [23] fuses LiDAR and IMU data by representing trajectories as a continuous-time function, and this framework is well compatible with arbitrary-frequency data from other asynchronous sensors.
For the purposes of acquiring complete 3D information of the environment, researchers have proposed many actuated LiDAR systems. Bosse et al. [24] designed a mapping system named Zebedee, which connects the sensors to the platform via springs. By treating the trajectory as a function of time, a surfel-based matching algorithm was adopted to estimate the 6-DOF pose. Kaul et al. [25] proposed a passively actuated rotating LiDAR system for UAV mapping, and they used a continuous-time SLAM algorithm to produce the trajectory. However, it cannot process the data in real-time. Park et al. [26] addressed this issue by introducing map deformation to replace the original global trajectory optimization in continuous-time SLAM. Fang et al. [27] proposed a two-stage matching algorithm to estimate the trajectory of a rotating LiDAR.

In the algorithm, the distortion of the current point cloud was first removed using the estimated motion generated by matching it with the local map, and then, the undistorted point cloud was matched with the global map. Unlike the aforementioned works, Zhan et al. [28] used a rotating multi-beam LiDAR for 3D mapping and combined it with stereo cameras for dense 3D reconstruction. The precision of this system is very high, but it needs to remain stationary while collecting data. R-LOAM [29] improves the localization accuracy of rotating LiDAR by leveraging prior knowledge about a reference object. Karimi et al. [30] proposed an actuated LiDAR system using the Lissajous pattern [31]. By using the scan slice instead of the full-sweep point cloud to match with the global map, they achieved low-latency localization for a UAV without an IMU in an indoor environment.

3. System Overview

We first introduce the hardware system used in the study. As shown in Figure 1, our device mainly consists of a laser scanner, a step motor, and an IMU, in which the scanner is driven to rotate by the motor. The rotation angle is recorded by an encoder. The IMU is rigidly attached to the platform, so we regard the IMU frame {I} as the body frame {B} for simplicity. The LiDAR frame is denoted as {L}, and the fixed LiDAR frame {FL} coincides with the initial LiDAR frame when the motor is reset. The Y-axis of the motor frame {M} is aligned with the spin axis. The two extrinsic parameters $T_{IFL} \in SE(3)$ and $T_{MFL} \in SE(3)$ are both calibrated manually, where $T_{IFL}$ represents the transformation from the frame {FL} to the frame {I} and $T_{MFL}$ represents the transformation from the frame {FL} to the frame {M}. The timestamps of the sensors are synchronized at the hardware level to ensure accuracy.
Figure 1. The mechanical structure of the hardware device.
Figure 2 provides an overview of our SLAM algorithm. In the front-end, first, the IMU measurements are used to construct the pre-integration factor and produce pose predictions.

Next, the raw LiDAR point cloud is transformed to the fixed LiDAR frame and de-skewed using the pose predictions and motor encoder data. Then, edge and plane feature points are extracted from the de-skewed point cloud.

In the scan-to-map registration module, the correspondences of these feature points and the global map are established. The spatial distribution of these matching point pairs is examined to decide whether they are to be added to the factor graph or accumulated to the next scan.

In the back-end, the IMU pre-integration factor and LiDAR factors are jointly optimized to estimate the system states, as shown in Figure 3. In order to bound the amount of computation, only the latest state is optimized when no loop closure constraints are added. After optimization, the feature points are added to the global map using the optimized state.

Loop closure detection is performed periodically in the background to reduce drift.
Figure 2. System overview of the proposed laser–inertial SLAM algorithm for the spinning actuated LiDAR system.
Figure 3. Structure of the factor graph. Only the LiDAR feature factor of the latest scan is directly used in the factor graph; the others are replaced by odometry factors, which are the relative transformations between two adjacent scans.

4. Methodology

The system state $x$ to be estimated at time $t$ is defined as
$$x_t = \left[ R_t, \; p_t, \; v_t, \; b_t^{\omega}, \; b_t^{a} \right] \quad (1)$$
where $b_t^{\omega} \in \mathbb{R}^3$ and $b_t^{a} \in \mathbb{R}^3$ are the gyroscope and accelerometer biases, respectively. $R_t \in SO(3)$, $p_t \in \mathbb{R}^3$, and $v_t \in \mathbb{R}^3$ are the orientation, position, and velocity of the sensor platform in the global coordinate frame {G}, respectively.

4.1. IMU Processing

The platform’s angular velocity $\omega_t$ and acceleration $a_t$ in the IMU frame can be obtained from the IMU, but the raw measurements of the IMU are corrupted by noise and bias. Commonly, the slow variations in the bias are modeled as Brownian motion; hence, the IMU measurement model is given by
$$\tilde{\omega}_t = \omega_t + b_t^{\omega} + \eta_t^{\omega}, \qquad \tilde{a}_t = a_t - R_t^T g + b_t^{a} + \eta_t^{a}, \qquad \dot{b}^{\omega} = \eta^{b\omega}, \qquad \dot{b}^{a} = \eta^{ba} \quad (2)$$
where $g$ is the gravity vector and $\eta^{\omega}$, $\eta^{a}$, $\eta^{b\omega}$, and $\eta^{ba}$ are white Gaussian noise terms with standard deviations $\sigma_{\eta^{\omega}}$, $\sigma_{\eta^{a}}$, $\sigma_{\eta^{b\omega}}$, and $\sigma_{\eta^{ba}}$, respectively.

4.1.1. Pose Prediction

Based on the posterior state $\hat{x}_{k-1}$ at the previous moment and the IMU measurements within $(t_{k-1}, t_k)$, we can use Euler integration to calculate the predicted poses. The recursive formula is
$$\begin{aligned} R_{i+1} &= R_i \, \mathrm{Exp}\left( (\tilde{\omega}_i - b_i^{\omega} - \eta_i^{\omega}) \Delta t \right) \\ v_{i+1} &= v_i + g \Delta t + R_i \left( \tilde{a}_i - b_i^{a} - \eta_i^{a} \right) \Delta t \\ p_{i+1} &= p_i + v_i \Delta t + \tfrac{1}{2} g \Delta t^2 + \tfrac{1}{2} R_i \left( \tilde{a}_i - b_i^{a} - \eta_i^{a} \right) \Delta t^2 \end{aligned} \quad (3)$$
where $\mathrm{Exp}(\cdot)$ is the exponential map of the manifold $SO(3)$.
These poses are sufficient to de-skew the point cloud and provide an initial value for registration. However, to help reject the outliers in Section 4.4, the covariance of these predicted poses also needs to be estimated. The recursive formula of the error state covariance propagation is
$$P_{i+1} = F_i P_i F_i^T + G_i Q G_i^T \quad (4)$$
With the error state ordered as $[\delta\theta, \delta p, \delta v, \delta b^{\omega}, \delta b^{a}]$,
$$F_i = \begin{bmatrix} \mathrm{Exp}(-\omega \Delta t) & 0 & 0 & -J_r(\omega \Delta t)\Delta t & 0 \\ 0 & I & I\Delta t & 0 & 0 \\ -R_i [a]_{\times} \Delta t & 0 & I & 0 & -R_i \Delta t \\ 0 & 0 & 0 & I & 0 \\ 0 & 0 & 0 & 0 & I \end{bmatrix}$$
$$G_i = \begin{bmatrix} -J_r(\omega \Delta t)\Delta t & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & -R_i \Delta t & 0 & 0 \\ 0 & 0 & I\Delta t & 0 \\ 0 & 0 & 0 & I\Delta t \end{bmatrix}$$
where $\omega = \tilde{\omega}_i - b_i^{\omega}$, $a = \tilde{a}_i - b_i^{a}$, and $P_i$ is the covariance matrix of the system state at time $t_i$. The diagonal elements of the noise covariance matrix $Q$ are $\sigma_{\eta^{\omega}}^2$, $\sigma_{\eta^{a}}^2$, $\sigma_{\eta^{b\omega}}^2$, and $\sigma_{\eta^{ba}}^2$. $J_r$ is the right Jacobian of $SO(3)$. Note that the covariance of $R$ is defined in the tangent space.
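The prediction and propagation steps of Equations (3) and (4) can be summarized in a short numerical sketch. The following Python snippet is illustrative only: it assumes a 15-dimensional error state ordered as $[\delta\theta, \delta p, \delta v, \delta b^{\omega}, \delta b^{a}]$ and noise ordered as $[\eta^{\omega}, \eta^{a}, \eta^{b\omega}, \eta^{ba}]$; the function and variable names are ours, not the authors'.

```python
import numpy as np

def skew(v):
    x, y, z = v
    return np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])

def so3_exp(phi):
    """Rodrigues formula for Exp(phi) on SO(3)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3) + skew(phi)
    K = skew(phi / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def right_jacobian(phi):
    """Right Jacobian J_r of SO(3)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-6:
        return np.eye(3) - 0.5 * skew(phi)
    K = skew(phi / theta)
    return (np.eye(3) - (1.0 - np.cos(theta)) / theta * K
            + (theta - np.sin(theta)) / theta * (K @ K))

def predict(R, p, v, bw, ba, P, w_m, a_m, dt, Q,
            g=np.array([0.0, 0.0, -9.81])):
    """One Euler step of Equation (3) plus the propagation of Equation (4).

    P is the 15x15 error-state covariance; Q is the 12x12 noise covariance
    with diagonal [sigma_w^2, sigma_a^2, sigma_bw^2, sigma_ba^2] blocks.
    """
    w = w_m - bw                                   # bias-corrected gyro
    a = a_m - ba                                   # bias-corrected accel
    R_new = R @ so3_exp(w * dt)
    v_new = v + g * dt + R @ a * dt
    p_new = p + v * dt + 0.5 * g * dt**2 + 0.5 * R @ a * dt**2
    # Error-state transition F_i (blocks: dtheta, dp, dv, dbw, dba)
    F = np.eye(15)
    F[0:3, 0:3] = so3_exp(-w * dt)
    F[0:3, 9:12] = -right_jacobian(w * dt) * dt
    F[3:6, 6:9] = np.eye(3) * dt
    F[6:9, 0:3] = -R @ skew(a) * dt
    F[6:9, 12:15] = -R * dt
    # Noise input matrix G_i (noise blocks: eta_w, eta_a, eta_bw, eta_ba)
    G = np.zeros((15, 12))
    G[0:3, 0:3] = -right_jacobian(w * dt) * dt
    G[6:9, 3:6] = -R * dt
    G[9:12, 6:9] = np.eye(3) * dt
    G[12:15, 9:12] = np.eye(3) * dt
    P_new = F @ P @ F.T + G @ Q @ G.T
    return R_new, p_new, v_new, P_new
```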

4.1.2. IMU Pre-Integration

The IMU pre-integration technique was first proposed in [32], and it has been widely applied in SLAM research. It uses the IMU measurements within $(t_{k-1}, t_k)$ to establish the constraint between two states $x_{k-1}$ and $x_k$. The IMU pre-integration factor is calculated as follows:
$$\begin{aligned} \Delta R &= R_{k-1}^T R_k = \prod_{t_i \in (t_{k-1}, t_k)} \mathrm{Exp}\left( (\tilde{\omega}_i - b_i^{\omega} - \eta_i^{\omega}) \Delta t \right) \\ \Delta v &= R_{k-1}^T \left( v_k - v_{k-1} - g \Delta t \right) = \sum_{t_i \in (t_{k-1}, t_k)} \Delta R_{k-1,i} \left( \tilde{a}_i - b_i^{a} - \eta_i^{a} \right) \Delta t \\ \Delta p &= R_{k-1}^T \left( p_k - p_{k-1} - v_{k-1} \Delta t - \tfrac{1}{2} g \Delta t^2 \right) = \sum_{t_i \in (t_{k-1}, t_k)} \left[ \Delta v_{k-1,i} \Delta t + \tfrac{1}{2} \Delta R_{k-1,i} \left( \tilde{a}_i - b_i^{a} - \eta_i^{a} \right) \Delta t^2 \right] \end{aligned} \quad (5)$$
where $\Delta R$, $\Delta v$, and $\Delta p$ are the relative motion between the two timestamps $t_{k-1}$ and $t_k$. More details about the on-manifold IMU pre-integration can be found in [33].
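For illustration, the accumulation of Equation (5) can be sketched as below (noise terms dropped, as in the mean computation). The measurement container and sampling convention are our assumptions; the rotation update uses SciPy's rotation-vector exponential.

```python
import numpy as np
from scipy.spatial.transform import Rotation as Rot

def preintegrate(imu_samples, bw, ba, dt):
    """Accumulate the pre-integrated motion terms of Equation (5).

    imu_samples: iterable of (gyro, accel) measurements in (t_{k-1}, t_k);
    bw, ba: current bias estimates; dt: IMU sample period.
    """
    dR = np.eye(3)      # Delta R
    dv = np.zeros(3)    # Delta v
    dp = np.zeros(3)    # Delta p
    for w_m, a_m in imu_samples:
        a = a_m - ba
        # Order matters: dp uses the dv and dR valid at the start of the step.
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt**2
        dv = dv + (dR @ a) * dt
        dR = dR @ Rot.from_rotvec((w_m - bw) * dt).as_matrix()
    return dR, dv, dp   # relative motion, independent of the global pose
```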

4.2. Feature Extraction

The raw point cloud is measured in the LiDAR frame and distorted by the sensor’s motion. Therefore, before feature extraction, it needs to be transformed to the frame {FL} and de-skewed. Suppose $c_i^L$ is a point in the raw point cloud scan $C = \{c_0, c_1, \ldots, c_n\}$:
$$c_i^{FL} = T_{IFL}^{-1} \, T_{B_k B_i} \, T_{IFL} \, T_{MFL}^{-1} \, R_i^M \, T_{MFL} \, c_i^L \quad (6)$$
where $R_i^M$ is the rotation matrix generated by the encoder data at time $t_i$, which represents the rotation of the LiDAR frame relative to the fixed LiDAR frame in the motor frame. $T_{B_k B_i} = T_{B_k}^{-1} T_{B_i}$, where $T_{B_i}$ and $T_{B_k}$ are the poses of the body frame obtained by linear interpolation of the predicted poses, and $t_k$ is the timestamp of the latest point in the point cloud scan $C$.
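A hedged sketch of the per-point transformation of Equation (6), assuming all extrinsics are 4x4 homogeneous matrices and that body_pose(t) performs the linear pose interpolation described above (both the function names and the interfaces are illustrative):

```python
import numpy as np
from scipy.spatial.transform import Rotation as Rot

def deskew_point(c_L, t_i, t_k, T_IFL, T_MFL, encoder_angle, body_pose):
    """Map a raw LiDAR point c_L measured at time t_i into the frame {FL}.

    body_pose(t): interpolated 4x4 body pose at time t;
    encoder_angle: motor angle (rad) at t_i about the Y-axis of {M}.
    """
    R_M = np.eye(4)
    R_M[:3, :3] = Rot.from_euler('y', encoder_angle).as_matrix()
    # Relative body motion between the point time and the scan-end time t_k
    T_Bk_Bi = np.linalg.inv(body_pose(t_k)) @ body_pose(t_i)
    T = (np.linalg.inv(T_IFL) @ T_Bk_Bi @ T_IFL
         @ np.linalg.inv(T_MFL) @ R_M @ T_MFL)
    return (T @ np.append(c_L, 1.0))[:3]
```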
Our feature extraction method extracts planar features $F_p$ and edge features $F_e$ from the input point cloud, as shown in Figure 4. The workflow of the method is as follows:
Figure 4. The illustration of the edge point extraction. As described in Steps 3 and 4, there are two types of edge points. The first type of edge point is at the end of the wall, and the second type of edge point is at the corners.

These edge points can be identified from unstructured points by analyzing the properties of the points in their neighborhoods.
(1) For a point $c_i \in C$, find its previous neighbors $N_i^{pre} = \{c_{i-k}, \ldots, c_{i-1}\}$ and its succeeding neighbors $N_i^{succ} = \{c_{i+1}, \ldots, c_{i+k}\}$ in the same scan line.
(2) Calculate the features $\alpha$ and $\beta$ of the point using
$$\alpha = \arccos\left( \frac{(c_i - c_{i-1}) \cdot (c_{i+1} - c_i)}{\left\| c_i - c_{i-1} \right\| \left\| c_{i+1} - c_i \right\|} \right) \quad (7)$$
$$\beta = \frac{\max\left( \left\| c_{i+1} - c_i \right\|, \left\| c_i - c_{i-1} \right\| \right)}{\min\left( \left\| c_{i+1} - c_i \right\|, \left\| c_i - c_{i-1} \right\| \right)} \quad (8)$$
where $\alpha$ indicates the changing angle of the scan line at the point $c_i$, which is used to characterize the smoothness. Points with $\alpha < \alpha_{thr}$ are labeled as smooth points. $\beta$ is the ratio of the distances from point $c_i$ to $c_{i-1}$ and $c_{i+1}$, which is used to determine whether the point is an edge point.
(3) For a point $c_i$ with $\beta > \beta_{thr}$, if all points in its closer neighborhood (either $N_i^{pre}$ or $N_i^{succ}$, depending on which side contains the closer neighbor) are smooth points, then add $c_i$ to $F_e$.
(4) For a point $c_i$ with $\beta \leq \beta_{thr}$, if all points in both its previous and succeeding neighborhoods are smooth points, then add $c_i$ to the candidate set of edge points.
(5) Use the standard LOAM-based method to extract planar features $F_p$ and edge points $F_e$, except that the edge points must belong to the candidate set.
This modification is based on the consideration that there are many unstructured objects (e.g., vegetation) in outdoor environments, which degrade the performance of edge extraction. A comparison is shown in Figure 5. In our experiments, $\alpha_{thr}$ was set to 15°, and $k$ and $\beta_{thr}$ were set to 2 and 4, respectively, according to [34].
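As a minimal illustration of Equations (7) and (8), the following sketch computes $\alpha$ and $\beta$ for one point of a scan line stored as an $(n, 3)$ numpy array; the function name and data layout are our assumptions.

```python
import numpy as np

def point_features(scan_line, i):
    """Return (alpha in degrees, beta) of Equations (7)-(8) for point i."""
    d_prev = scan_line[i] - scan_line[i - 1]
    d_next = scan_line[i + 1] - scan_line[i]
    cos_a = d_prev @ d_next / (np.linalg.norm(d_prev) * np.linalg.norm(d_next))
    alpha = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    r1, r2 = np.linalg.norm(d_next), np.linalg.norm(d_prev)
    beta = max(r1, r2) / min(r1, r2)
    return alpha, beta

# Per the thresholds in the text: alpha < 15 marks a smooth point, and
# beta > 4 marks a first-type edge candidate subject to the Step (3) check.
```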
Figure 5. Feature point extraction result. The green points are the raw point cloud. The edge points extracted by the standard LOAM-based extraction method are shown in blue, and the edge points extracted by our method are shown in red.
It can be seen that many blue points are located in the vegetation, and these noises will affect the accuracy of the alignment.

4.3. Scan-to-Map Registration

Similar to [19], for an input point cloud scan $C$, we use the pose predicted by the IMU to find the adjacent point cloud scans in the global map and merge them into the feature maps $M_p$ and $M_e$ for data association. The map is downsampled by a voxel filter to accelerate the computation. Then, we find the $k$ nearest neighboring points $N_f$ of each feature point $f_i \in F_p$ or $F_e$ in the corresponding feature map and denote its nearest point as $m_i$. For $f_i \in F_p$, the plane normal vector $n_p$ of its $N_f$ is computed, and for $f_i \in F_e$, the line direction $n_e$ of its $N_f$ is computed. In addition, as the LiDAR is continuously rotating, to ensure the reliability of the matching in the initial stage, the initial global map is constructed by keeping the sensor platform stationary for about 3 s.
According to the results of feature matching, the LiDAR residual $r_{LiDAR}$ can be computed using the point-to-plane distance $d_p$ and the point-to-line distance $d_e$:
$$d_p = \left| n_p^T \left( R f_i + p - m_i \right) \right| \quad (9)$$
$$d_e = \left\| n_e \times \left( R f_i + p - m_i \right) \right\| \quad (10)$$
where $R$ and $p$ are the predicted orientation and position, $n_p$ and $n_e$ are normalized, and $m_i$ is the corresponding point of the feature point $f_i$ in the global map.
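The two residuals of Equations (9) and (10) reduce to a few lines. In this illustrative sketch, $n_p$ and $n_e$ are assumed to be unit vectors already fitted to the $k$ map neighbors:

```python
import numpy as np

def point_to_plane(f_i, m_i, n_p, R, p):
    """Point-to-plane distance d_p of Equation (9); n_p is a unit normal."""
    return abs(n_p @ (R @ f_i + p - m_i))

def point_to_line(f_i, m_i, n_e, R, p):
    """Point-to-line distance d_e of Equation (10); n_e is a unit direction."""
    return np.linalg.norm(np.cross(n_e, R @ f_i + p - m_i))
```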

4.4. Adaptive Scan Accumulation

The spatial distribution of the point cloud has an important influence on the accuracy of the registration result. Taking point-to-plane registration as an example, the matched points should involve at least three non-parallel planes.
Unlike the point cloud obtained by the fixed mounted LiDAR, the LiDAR in our device is continuously rotating, resulting in the continuous change of the spatial distribution of the obtained point cloud. As shown in Figure 6, using only a single frame of the point cloud sometimes fails to provide reliable registration results.
A simple solution is to accumulate multi-scan point clouds before each registration, but this reduces the computational efficiency. Therefore, we propose a method that adaptively decides whether to perform point cloud accumulation, and how many scans need to be accumulated, according to the spatial distribution of the point cloud.
Figure 6. Illustration of multi-scan point clouds’ accumulation. The first scan point cloud is rendered in green and only contains a few points in the horizontal plane, which will lead to an inaccurate registration result, especially in the Z direction.
This problem can be solved by merging it with the next scan point cloud.

4.4.1. Features’ Distribution Inspection

To ensure that the registration results are reliable, we use the covariance of the registration result as the indicator. In the least-squares registration problem, the normal equation formed by the matched feature points is
$$J^T J x = J^T b \quad (11)$$
where $J \in \mathbb{R}^{n \times 6}$ is constructed by stacking the Jacobian matrices $J_p^{d_p}$ and $J_p^{d_e}$, $n$ is the number of matched points, and $b \in \mathbb{R}^{n \times 1}$ is constructed by stacking the distance residuals $d_p$ and $d_e$. The solution to Equation (11) is given by
$$\hat{x} = \left( J^T J \right)^{-1} J^T b \quad (12)$$
The covariance of the estimated pose is
$$E\left[ \hat{x} \hat{x}^T \right] = \left( J^T J \right)^{-1} J^T E\left[ b b^T \right] J \left( J^T J \right)^{-1} \quad (13)$$
where $E[\cdot]$ is the expected value operator. The covariance of $b$ is determined by the laser measurements, and each laser observation can be treated as an independent observation. Therefore, its covariance is
$$E\left[ b b^T \right] = \sigma_l^2 I \quad (14)$$
where $\sigma_l$ is the accuracy of the laser measurement, which can be set depending on the model of the LiDAR. Substituting Equation (14) into Equation (13), then
$$E\left[ \hat{x} \hat{x}^T \right] = \left( J^T J \right)^{-1} \sigma_l^2 = H \sigma_l^2 \quad (15)$$
Equation (15) reveals how the spatial distribution of the matched points in the matrix $J$ affects the covariance of the registration result. If the spatial distribution is suitable for registration, the covariance of the estimated pose should not vary greatly in all directions. Suppose $H_{tran} \in \mathbb{R}^{3 \times 3}$ is the submatrix of $H$ corresponding to the translation part of the pose; the ratio $\gamma$ can be computed by
$$\gamma = \frac{\max\left( \mathrm{diag}(H_{tran}) \right)}{\min\left( \mathrm{diag}(H_{tran}) \right)} \quad (16)$$
where the operator $\mathrm{diag}(\cdot)$ extracts the diagonal elements of the matrix as a vector. The smaller the value of $\gamma$, the more uniform the distribution of the point cloud, and vice versa. Here, we set the threshold $\gamma_{th} = 3$. If $\gamma < \gamma_{th}$, the matched pairs are accepted, and the plane and edge features are added to the factor graph as constraints; otherwise, the extracted features are accumulated to the next scan.
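The inspection itself is inexpensive. A minimal sketch, assuming the translation parameters occupy the last three columns of $J$ (the column ordering is our assumption):

```python
import numpy as np

def accept_scan(J, gamma_thr=3.0):
    """Distribution check of Equations (15)-(16) on the stacked Jacobian J (n x 6)."""
    H = np.linalg.inv(J.T @ J)           # Equation (15), up to the sigma_l^2 factor
    H_tran = np.diag(H)[3:6]             # translation block (assumed last 3 columns)
    gamma = H_tran.max() / H_tran.min()  # Equation (16)
    return gamma < gamma_thr             # False -> accumulate to the next scan
```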

4.4.2. Outlier Removal in Matched Features

Since the point cloud in a single scan is sparse, there will inevitably be errors in the feature extraction results. These incorrectly matched point pairs will interfere with the features’ distribution inspection and reduce the accuracy of pose estimation.
To remove them while performing distribution inspection, an outlier removal algorithm based on the propagation of covariance is applied.
We assume that the error in the global map can be ignored, which means that the distance residual is mainly caused by the error of the predicted pose and the error of the laser measurement. Then, the standard deviations of the distance residuals can be computed by
$$\sigma_{d_p} = \sqrt{ J_p^{d_p} D_p \left( J_p^{d_p} \right)^T + J_l^{d_p} D_l \left( J_l^{d_p} \right)^T } \quad (17)$$
$$\sigma_{d_e} = \sqrt{ J_p^{d_e} D_p \left( J_p^{d_e} \right)^T + J_l^{d_e} D_l \left( J_l^{d_e} \right)^T } \quad (18)$$
with
$$J_p^{d_p} = \left[ \frac{\partial d_p}{\partial R}, \frac{\partial d_p}{\partial p} \right] = \left[ -n_p^T R [f_i]_{\times}, \; n_p^T \right], \qquad J_l^{d_p} = n_p^T R$$
$$J_p^{d_e} = \left[ \frac{\partial d_e}{\partial R}, \frac{\partial d_e}{\partial p} \right] = \left[ -\frac{v_{d_e}^T}{\left\| v_{d_e} \right\|} [n_e]_{\times} R [f_i]_{\times}, \; \frac{v_{d_e}^T}{\left\| v_{d_e} \right\|} [n_e]_{\times} \right], \qquad J_l^{d_e} = \frac{v_{d_e}^T}{\left\| v_{d_e} \right\|} [n_e]_{\times} R, \qquad v_{d_e} = n_e \times \left( R f_i + p - m_i \right)$$
where $D_p$ is the covariance of the predicted pose, which can be obtained from Equation (4), and $D_l$ is the covariance of the LiDAR point.
A point pair with $d_p \geq 3\sigma_{d_p}$ or $d_e \geq 3\sigma_{d_e}$ is considered an outlier and removed. In theory, if the distance residuals follow a Gaussian distribution, we can remove most of the mismatched point pairs while retaining 99.5% of the correct matching results.
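For the point-to-plane case, the 3-sigma test can be sketched as follows; the 6x6 pose covariance block and the Jacobian signs follow the standard right-perturbation convention, which is our assumption rather than a detail stated in the text.

```python
import numpy as np

def skew(v):
    x, y, z = v
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def is_plane_outlier(f_i, m_i, n_p, R, p, D_pose, D_l):
    """3-sigma test on the point-to-plane residual, Equation (17).

    D_pose: 6x6 covariance of the predicted pose (rotation block first),
    extracted from the propagated covariance of Equation (4);
    D_l: 3x3 covariance of the LiDAR point.
    """
    d_p = abs(n_p @ (R @ f_i + p - m_i))
    J_pose = np.hstack([-n_p @ R @ skew(f_i), n_p])  # [d/dR, d/dp], 1x6
    J_l = n_p @ R                                    # d/df_i, 1x3
    var = J_pose @ D_pose @ J_pose + J_l @ D_l @ J_l
    return d_p > 3.0 * np.sqrt(var)
```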

4.5. Loop Closure Detection

Our loop closure detection method was developed based on Cartographer [35].
As shown in Algorithm 1, there are two main improvements compared to the original algorithm: (1) point-to-plane ICP is used to replace the original probability occupancy-grid-based registration method in the fine registration stage; (2) the loop closure detection result is checked using the previous scan.
Due to the rotation of the LiDAR, there is a noticeable difference even between two adjacent scans. This can provide auxiliary information to help reject false detection results. If the loop closure detection result of the current scan is correct, then the registration results $T_{S_l C_k}$ and $T_{S_l C_{k-1}}$ should be consistent with the odometry transformation $T_{C_k C_{k-1}}$. The threshold $d_{th}$ was set to 5 cm in the experiments.
Algorithm 1: Loop closure detection.
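As a minimal illustration of the consistency check in Algorithm 1, the following sketch assumes 4x4 homogeneous pose matrices, with $T_{S_l C_k}$ denoting the registration of scan $C_k$ against the loop submap $S_l$; the function and variable names are illustrative.

```python
import numpy as np

def loop_is_consistent(T_Sl_Ck, T_Sl_Ck1, T_Ck_Ck1, d_th=0.05):
    """Accept a loop closure only if the two submap registrations agree
    with the odometry between scans k-1 and k (threshold d_th = 5 cm)."""
    # Relative transform implied by the two loop registrations
    T_rel = np.linalg.inv(T_Sl_Ck) @ T_Sl_Ck1
    # It should match the odometry transform between the adjacent scans
    err = np.linalg.inv(T_Ck_Ck1) @ T_rel
    return np.linalg.norm(err[:3, 3]) < d_th
```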

5. Experiments

Since there is no publicly available dataset containing spinning actuated LiDAR and IMU data, we evaluated the performance of the Spin-LOAM algorithm using data collected in indoor and outdoor environments with the device shown in Figure 7. In the experiments, the sampling frequencies of the Velodyne VLP-16 LiDAR and the Sensonor STIM300 IMU were 10 Hz and 200 Hz, respectively. The angular velocity of the step motor was set to 4.5 rad/s, and the angular resolution of the encoder was 10 bits.
All experiments were conducted on an Intel NUC computer with an Intel Core i5-1135G7 CPU and 16 GB memory.
Figure 7. Overview of the experimental sensor platform. (a) The sensor platform is carried by a backpack. (b) The sensor platform is carried by a UAV.

5.1. Evaluation in Indoor Environments

To verify the accuracy and effectiveness of Spin-LOAM, we first conducted tests in indoor environments. We compared our algorithm with two state-of-the-art LIO algorithms, FAST-LIO2 and LIO-SAM (modified for compatibility with a 6-axis IMU).
In the indoor tests, the map voxel size in all algorithms was set to 0.2 m. The standard deviations of the noise related to the IMU and LiDAR were uniformly set according to the device model.
As shown in Figure 8, we collected three datasets in different environments using a backpack to evaluate the performance of our algorithm. Data 1 is two narrow corridors on different floors connected by stairs; Data 2 is an underground garage; Data 3 is a badminton hall.
Since it was difficult to obtain the ground truth trajectory in the indoor environments, we returned to the starting position at the end of each trajectory and used the end-to-end translation error as the metric. The quantitative results are given in Table 1, where “Spin-LOAM (odom)” represents our algorithm without loop closure and “Spin-LOAM (full)” represents the full SLAM algorithm.
Figure 8. Mapping results of our algorithm in indoor environments. The trajectories are represented by the red lines in the figure.
Table 1. End-to-end translation error (m). “/” denotes that the algorithm failed to finish the localization.

         Length (m)   FAST-LIO2   LIO-SAM   Spin-LOAM (Odom)   Spin-LOAM (Full)
Data 1   189.0993     2.2631      /         0.6976             0.0090
Data 2   382.3672     0.0210      0.0128    0.0126             0.0114
Data 3   315.1776     0.0166      0.0145    0.0175             0.0164
Data 1 is the most-challenging scene among the three indoor datasets. In this scene, LIO-SAM failed to finish the localization, and FAST-LIO2 also suffered from severe drift.
Our algorithm achieved the lowest odometry drift and successfully corrected the drift through loop closure detection, as shown in Figure 9. In Data 2, our algorithm and LIO-SAM achieved similar localization accuracy, while FAST-LIO2 was slightly worse because it has no loop closure detection.
Notably, the accuracy of our pure odometry was already comparable to LIO-SAM, which validates the effectiveness of our algorithm. In Data 3, the accuracy of all algorithms was at the same level, because the scene was free of occlusions and full of plane features.
It is worth noting that the height of the roof in this scene was about 15 m, but our device completely acquired the point cloud of the entire scene with only one LiDAR. This reveals the advantage of the spinning actuated LiDAR system.
Figure 9. Comparison of the mapping results of the stairs in Data 1.

5.2. Evaluation in Outdoor Environments

In the outdoor tests, the map voxel size in Spin-LOAM was changed to 0.4 m, and the voxel sizes in FAST-LIO2 and LIO-SAM were set to their default values. Figure 10 gives an overview of the datasets collected in the outdoor environments. Data 5 was collected around a building; Data 6 was collected on a large ring road; Data 7 was collected in a residential area; Data 8 was collected using a drone on a construction site.
To quantitatively compare the performance of the algorithms, we used the GNSS RTK trajectories as the ground truth and computed the absolute trajectory error (ATE) of the trajectories.
Figure 10. Mapping results of our algorithm in outdoor environments. The ground truth trajectories generated by GNSS are represented by the black lines.
Table 2 shows that our algorithm achieved the best accuracy in the outdoor environments. FAST-LIO2 also performed well, showing significant drift only in Data 6, and, like our algorithm, completed all the tests.
However, LIO-SAM failed to finish the localization in Data 6 and Data 8. The reason for its failure in Data 6 is that it directly uses the ICP algorithm for loop closure.
This strategy is not suitable when large drift occurs, and the accumulated errors in the large ring road make the LIO-SAM algorithm fail. After turning off the loop closure, the accuracy of LIO-SAM in Data 6 was 0.4908 m. Figure 11 gives a detailed comparison of the point cloud at the road junction. It can be seen that, even without the loop closure, our algorithm still maintained high accuracy after walking through a long loop.
In Data 8, the drone performed an aggressive motion to test the robustness of the algorithm, in which the maximum angular velocity was over 780°/s and the maximum linear velocity was over 6.5 m/s.
This test proved that our tightly coupled algorithm can work when aggressive motion occurs.
Figure 11. Point cloud maps generated by different algorithms in Data 6.
Table 2. Absolute trajectory error (ATE) of trajectories (m). “/” denotes that the algorithm failed to finish the localization; the value in parentheses is the LIO-SAM result with loop closure disabled.

         Length (m)   FAST-LIO2   LIO-SAM    Spin-LOAM (wo-asa)   Spin-LOAM (Odom)   Spin-LOAM (Full)
Data 5   415.3925     0.0799      0.0854     0.0760               0.0753             0.0725
Data 6   782.0214     0.8844      (0.4908)   0.2623               0.2323             0.2175
Data 7   1335.2931    0.2271      0.2771     0.2064               0.1871             0.1786
Data 8   623.8667     0.1670      /          0.1431               0.1353             0.1308
An ablation study was conducted to further analyze the contribution of our proposed adaptive scan accumulation method. “Spin-LOAM (wo-asa)” in Table 2 represents our algorithm without adaptive scan accumulation and loop closure. Comparing it with “Spin-LOAM (odom)”, the results showed that our method can improve the accuracy, especially in complex environments such as Data 6 and 7. This is because the FOV of the LiDAR is limited; it is often occluded by trees or can only scan the ground in these environments, which will lead to an uneven distribution of features in a single scan.
Our method can alleviate this problem during scan-to-map registration, which can help to improve the accuracy and robustness of registration.

5.3. Runtime Analysis

We selected one indoor dataset and one outdoor dataset for the runtime evaluation. The processing time per scan of each algorithm is shown in Figure 12. FAST-LIO2 was much faster than LIO-SAM and our algorithm because it is a filter-based algorithm and does not require feature extraction. The average time cost per scan of our algorithm was 63.2 ms indoors and 43.8 ms outdoors.
The computation time was higher for the indoor data because the smaller map voxel size used indoors has a significant impact on the performance. The average time cost for the loop closure was 808.4 ms indoors and 328.7 ms outdoors.
Since the loop closure was performed by a separate thread in the background, it did not affect the real-time performance of our algorithm.
Figure 12. The evaluation results of the time cost per scan for each algorithm. The selected indoor data and outdoor data are Data 2 and Data 7, respectively, as they have the longest trajectories.

6. Conclusions

In this paper, we presented a tightly coupled laser–inertial SLAM algorithm specifically designed for a spinning actuated LiDAR system.
In the front-end, to mitigate the influence of the unstable spatial distribution of the point cloud caused by the continuously rotating LiDAR, an adaptive scan accumulation method based on point cloud distribution inspection was adopted.
In the back-end, a voxel-grid-based loop closure detection method was used to reduce the drift, and the point cloud of the previous scan was used to help reject false loop closure detection results.
The experimental results demonstrated that our algorithm achieves high-precision localization results in various complex indoor and outdoor environments.
We are committed to further refining and improving our algorithm, with a focus on improving its robustness in more extreme environmental conditions.
One potential avenue for improvement is the integration of semantic information from the point cloud, which will aid in loop closure detection and removal of dynamic objects.

Author Contributions

Methodology, J.D.; Investigation, Y.Z.; Resources, C.C.; Writing—original draft, J.D.; Supervision, L.Y.; Funding acquisition, L.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science and Technology Major Project of Hubei Province under Grant 2021AAA010.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, Y.; Ibanez-Guzman, J. LiDAR for Autonomous Driving: The Principles, Challenges, and Trends for Automotive LiDAR and Perception Systems. IEEE Signal Process. Mag. 2020, 37, 50–61. [Google Scholar] [CrossRef]
  2. Bolourian, N.; Hammad, A. LiDAR-equipped UAV path planning considering potential locations of defects for bridge inspection. Autom. Constr. 2020, 117, 103250. [Google Scholar] [CrossRef]
  3. Beland, M.; Parker, G.; Sparrow, B.; Harding, D.; Chasmer, L.; Phinn, S.; Antonarakis, A.; Strahler, A. On promoting the use of LiDAR systems in forest ecosystem research. For. Ecol. Manag. 2019, 450, 117484. [Google Scholar] [CrossRef]
  4. Raj, T.; Hanim Hashim, F.; Baseri Huddin, A.; Ibrahim, M.F.; Hussain, A. A survey on LiDAR scanning mechanisms. Electronics 2020, 9, 741. [Google Scholar] [CrossRef]
  5. Li, K.; Li, M.; Hanebeck, U.D. Towards high-performance solid-state-LiDAR-inertial odometry and mapping. IEEE Robot. Autom. Lett. 2021, 6, 5167–5174. [Google Scholar] [CrossRef]
  6. Alsadik, B.; Remondino, F. Flight planning for LiDAR-based UAS mapping applications. ISPRS Int. J. Geo-Inf. 2020, 9, 378. [Google Scholar] [CrossRef]
  7. Liu, X.; Zhang, F.Z. Extrinsic Calibration of Multiple LiDARs of Small FoV in Targetless Environments. IEEE Robot. Autom. Lett. 2021, 6, 2036–2043. [Google Scholar] [CrossRef]
  8. Nguyen, T.M.; Yuan, S.; Cao, M.; Lyu, Y.; Nguyen, T.H.; Xie, L. MILIOM: Tightly Coupled Multi-Input LiDAR-Inertia Odometry and Mapping. IEEE Robot. Autom. Lett. 2021, 6, 5573–5580. [Google Scholar] [CrossRef]
  9. Jiao, J.; Ye, H.; Zhu, Y.; Liu, M. Robust Odometry and Mapping for Multi-LiDAR Systems With Online Extrinsic Calibration. IEEE Trans. Robot. 2022, 38, 351–371. [Google Scholar] [CrossRef]
  10. Zhang, D.; Gong, Z.; Chen, Y.; Zelek, J.; Li, J. SLAM-based multi-sensor backpack LiDAR systems in gnss-denied environments. In Proceedings of the IGARSS 2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 8984–8987. [Google Scholar]
  11. Velas, M.; Spanel, M.; Sleziak, T.; Habrovec, J.; Herout, A. Indoor and outdoor backpack mapping with calibrated pair of velodyne LiDARs. Sensors 2019, 19, 3944. [Google Scholar] [CrossRef]
  12. Zhang, J.; Singh, S. LOAM: LiDAR Odometry and Mapping in Real-time. Robot. Sci. Syst. 2014, 2, 9. [Google Scholar]
  13. Shan, T.; Englot, B. Lego-loam: Lightweight and ground-optimized LiDAR odometry and mapping on variable terrain. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4758–4765. [Google Scholar]
  14. Zhou, P.; Guo, X.; Pei, X.; Chen, C. T-LOAM: Truncated Least Squares LiDAR-Only Odometry and Mapping in Real Time. IEEE Trans. Geosci. Remote. Sens. 2021, 60, 5701013. [Google Scholar] [CrossRef]
  15. Liu, Z.; Zhang, F. BALM: Bundle Adjustment for LiDAR Mapping. IEEE Robot. Autom. Lett. 2021, 6, 3184–3191. [Google Scholar] [CrossRef]
  16. Zhou, L.; Koppel, D.; Kaess, M. LiDAR SLAM With Plane Adjustment for Indoor Environment. IEEE Robot. Autom. Lett. 2021, 6, 7073–7080. [Google Scholar] [CrossRef]
  17. Zhou, L.; Wang, S.; Kaess, M. π-LSAM: LiDAR smoothing and mapping with planes. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 5751–5757. [Google Scholar]
  18. Chen, X.; Milioto, A.; Palazzolo, E.; Giguere, P.; Behley, J.; Stachniss, C. Suma++: Efficient LiDAR-based semantic slam. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 4530–4537. [Google Scholar]
  19. Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. Lio-sam: Tightly-coupled LiDAR inertial odometry via smoothing and mapping. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 5135–5142. [Google Scholar]
  20. Ye, H.; Chen, Y.; Liu, M. Tightly coupled 3D LiDAR inertial odometry and mapping. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 3144–3150. [Google Scholar]
  21. Xu, W.; Zhang, F. Fast-lio: A fast, robust LiDAR-inertial odometry package by tightly-coupled iterated kalman filter. IEEE Robot. Autom. Lett. 2021, 6, 3317–3324. [Google Scholar] [CrossRef]
  22. Xu, W.; Cai, Y.; He, D.; Lin, J.; Zhang, F. FAST-LIO2: Fast Direct LiDAR-Inertial Odometry. IEEE Trans. Robot. 2022, 38, 2053–2073. [Google Scholar] [CrossRef]
  23. Lv, J.; Hu, K.; Xu, J.; Liu, Y.; Ma, X.; Zuo, X. Clins: Continuous-time trajectory estimation for LiDAR-inertial system. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 6657–6663. [Google Scholar]
  24. Bosse, M.; Zlot, R.; Flick, P. Zebedee: Design of a Spring-Mounted 3-D Range Sensor with Application to Mobile Mapping. IEEE Trans. Robot. 2012, 28, 1104–1119. [Google Scholar] [CrossRef]
  25. Kaul, L.; Zlot, R.; Bosse, M. Continuous-Time Three-Dimensional Mapping for Micro Aerial Vehicles with a Passively Actuated Rotating Laser Scanner. J. Field Robot. 2016, 33, 103–132. [Google Scholar] [CrossRef]
  26. Park, C.; Moghadam, P.; Kim, S.; Elfes, A.; Fookes, C.; Sridharan, S. Elastic LiDAR Fusion: Dense Map-Centric Continuous-Time SLAM. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 1206–1213. [Google Scholar] [CrossRef]
  27. Fang, Z.; Zhao, S.; Wen, S. A Real-time and Low-cost 3D SLAM System Based on a Continuously Rotating 2D Laser Scanner. In Proceedings of the 2017 IEEE 7th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), Honolulu, HI, USA, 31 July–4 August 2017; pp. 454–459. [Google Scholar] [CrossRef]
  28. Zhen, W.; Hu, Y.; Liu, J.; Scherer, S. A Joint Optimization Approach of LiDAR-Camera Fusion for Accurate Dense 3-D Reconstructions. IEEE Robot. Autom. Lett. 2019, 4, 3585–3592. [Google Scholar] [CrossRef]
  29. Oelsch, M.; Karimi, M.; Steinbach, E. R-LOAM: Improving LiDAR Odometry and Mapping With Point-to-Mesh Features of a Known 3D Reference Object. IEEE Robot. Autom. Lett. 2021, 6, 2068–2075. [Google Scholar] [CrossRef]
  30. Karimi, M.; Oelsch, M.; Stengel, O.; Babaians, E.; Steinbach, E. LoLa-SLAM: Low-latency LiDAR SLAM using Continuous Scan Slicing. IEEE Robot. Autom. Lett. 2021, 2248–2255. [Google Scholar] [CrossRef]
  31. Benson, M.; Nikolaidis, J.; Clayton, G.M. Lissajous-like scan pattern for a nodding multi-beam LiDAR. In Proceedings of the Dynamic Systems and Control Conference, Atlanta, GA, USA, 30 September–3 October 2018; Volume 51906, p. V002T24A007. [Google Scholar]
  32. Lupton, T.; Sukkarieh, S. Efficient integration of inertial observations into visual SLAM without initialization. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 1547–1552. [Google Scholar]
  33. Forster, C.; Carlone, L.; Dellaert, F.; Scaramuzza, D. On-Manifold Preintegration for Real-Time Visual–Inertial Odometry. IEEE Trans. Robot. 2017, 33, 1–21. [Google Scholar] [CrossRef]
  34. Chen, P.; Shi, W.; Bao, S.; Wang, M.; Fan, W.; Xiang, H. Low-Drift Odometry, Mapping and Ground Segmentation Using a Backpack LiDAR System. IEEE Robot. Autom. Lett. 2021, 6, 7285–7292. [Google Scholar] [CrossRef]
  35. Hess, W.; Kohler, D.; Rapp, H.; Andor, D. Real-time loop closure in 2D LiDAR SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 16–21 May 2016. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s).
MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
