Real-Time 3D Mapping in Complex Environments Using a Spinning Actuated LiDAR System
School of Geodesy and Geomatics, Wuhan University, Wuhan 430079, China
Hubei Luojia Laboratory, Wuhan University, Hubei 430079, China
Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(4), 963; https://doi.org/10.3390/rs15040963
Submission received: 28 December 2022 / Revised: 3 February 2023 / Accepted: 6 February 2023 / Published: 9 February 2023
Abstract
LiDAR is a key sensor for 3D environment perception. However, limited by the field of view of the LiDAR, it is sometimes difficult to achieve complete coverage of the environment with a single LiDAR.
In this paper, we designed a spinning actuated LiDAR mapping system that is compatible with both UAV and backpack platforms and propose a tightly coupled laser–inertial SLAM algorithm for it. In our algorithm, edge and plane features in the point cloud are first extracted.
Then, for the significant changes in the distribution of point cloud features between two adjacent scans caused by the continuous rotation of the LiDAR, we employed an adaptive scan accumulation method to improve the stability and accuracy of point cloud registration.
After feature matching, the LiDAR feature factors and IMU pre-integration factor are added to the factor graph and jointly optimized to output the trajectory. In addition, an improved loop closure detection algorithm based on the Cartographer algorithm is used to reduce the drift.
We conducted exhaustive experiments to evaluate the performance of the proposed algorithm in complex indoor and outdoor scenarios.
The results showed that our algorithm is more accurate than the state-of-the-art algorithms LIO-SAM and FAST-LIO2 for the spinning actuated LiDAR system, and it can achieve real-time performance.
1. Introduction
By combining LiDAR with Simultaneous Localization and Mapping (SLAM) technology, the laser SLAM system can obtain a three-dimensional map of the surrounding environment both indoors and outdoors.
These advantages allow the laser SLAM system to play an important role in many fields, such as autonomous driving [1], building inspection [2], and forestry investigation [3]. Currently, the LiDARs used for laser SLAM fall broadly into two categories. Mechanical LiDAR is the most commonly used type. Its horizontal Field Of View (FOV) can reach close to 360°, but its vertical FOV is very limited.
In addition, with the development of microelectronics technology, solid-state LiDAR, e.g., the DJI Livox series, has become more and more commonly used [4,5]. Generally, solid-state LiDAR can provide a larger vertical FOV than mechanical LiDAR, but its horizontal FOV is much smaller. Therefore, in many scenarios, neither type of LiDAR can completely cover the whole environment.
This shortcoming greatly limits the mapping efficiency of laser SLAM systems, especially when using a platform with limited endurance such as an Unmanned Aerial Vehicle (UAV) to carry the mapping system [6]. In narrow environments, this limited FOV will degrade the accuracy and stability of localization as only a small number of objects can be scanned.
However, due to the high price of LiDARs, the cost of these solutions is greatly increased. Another category of solutions is to actuate the LiDAR dynamically, and LOAM [12] is one of the most-famous methods among them. LOAM first accumulates the point cloud with the assistance of the IMU and then matches the accumulated point cloud with the global map to correct the accumulated errors.
Considering that the prediction error of the IMU grows exponentially with time and the feature extraction and matching algorithms need to wait for sufficient data to be accumulated, this time interval will become a bottleneck limiting the accuracy of the SLAM system.
We first extracted feature points from each frame point cloud through an improved feature extraction method based on LOAM.
Then, we judged whether the current point cloud contains enough information by analyzing the spatial distribution of feature points and performed scan-to-map matching once the requirement is met.
Compared with accumulating point clouds with a fixed number of scans, this method allows a better balance between the matching frequency of point clouds and the accuracy and reliability of the matching results.
Finally, in order to eliminate the accumulated error, we added a loop closure detection module to the algorithm. The main contributions of this paper can be summarized as follows:
- We propose a tightly coupled laser–inertial SLAM algorithm named Spin-LOAM for a spinning actuated LiDAR system.
- We propose an adaptive scan accumulation method that improves the accuracy and reliability of matching by analyzing the spatial distribution of feature points.
- Extensive experiments were conducted in indoor and outdoor environments to verify the effectiveness of our algorithm.
2. Related Work
LIO-SAM [19] combines the IMU pre-integration factor with the LiDAR odometry factor through a factor graph. LIO-Mapping [20] integrates the LiDAR and IMU in a tightly coupled fashion. FAST-LIO [21] adopts a tightly coupled iterated extended Kalman filter on a manifold to fuse the data and is accelerated by introducing an incremental KD-Tree in FAST-LIO2 [22]. CLINS [23] fuses LiDAR and IMU data by representing trajectories as a continuous-time function, and this framework is well compatible with arbitrary-frequency data from other asynchronous sensors.
To obtain complete 3D information of the environment, researchers have proposed many actuated LiDAR systems. Bosse et al. [24] designed a mapping system named Zebedee, which connects the sensor to the platform through a spring. By treating the trajectory as a function of time, a surface-based matching algorithm was employed to estimate the 6-DOF pose. Kaul et al. [25] proposed a passively actuated rotating LiDAR system for UAV mapping and used a continuous-time SLAM algorithm to generate the trajectory. However, it cannot process data in real time. Park et al. [26] solved this problem by introducing map deformation to replace the original global trajectory optimization in continuous-time SLAM. Fang et al. [27] proposed a two-stage matching algorithm to estimate the trajectory of a rotating LiDAR.
In the algorithm, the distortion of the current point cloud was first removed using the estimated motion generated by matching it with the local map, and then, the undistorted point cloud was matched with the global map. Unlike the aforementioned works, Zhan et al. [28] used a rotating multi-beam LiDAR for 3D mapping and combined it with stereo cameras for dense 3D reconstruction. The precision of this system is very high, but it needs to remain stationary while collecting data. R-LOAM [29] improves the localization accuracy of rotating LiDAR by leveraging prior knowledge about a reference object. Karimi et al. [30] proposed an actuated LiDAR system using the Lissajous pattern [31]. By using the scan slice instead of the full-sweep point cloud to match with the global map, they achieved low-latency localization for a UAV without an IMU in an indoor environment.
3. System Overview
Figure 2 gives an overview of our SLAM algorithm. In the front-end, the IMU measurements are first used to construct the pre-integration factor and generate pose predictions.
Next, the raw LiDAR point cloud is transformed to the fixed LiDAR frame and de-skewed using the pose predictions and motor encoder data. Then, edge and plane feature points are extracted from the de-skewed point cloud.
In the scan-to-map registration module, the correspondences of these feature points and the global map are established. The spatial distribution of these matching point pairs is examined to decide whether they are to be added to the factor graph or accumulated to the next scan.
In the back-end, the IMU pre-integration factor and LiDAR factors are jointly optimized to estimate the system states, as shown in Figure 3. In order to bound the amount of computation, only the latest state is optimized when no loop closure constraints are added. After optimization, the feature points are added to the global map using the optimized state.
Loop closure detection is performed periodically in the background to reduce drift.
4. Methodology
4.1. IMU Processing
4.1.1. Pose Prediction
4.1.2. IMU Pre-Integration
The IMU pre-integration technique was first proposed in [32] and has been widely used in SLAM research. It uses the IMU measurements between two timestamps to establish a constraint between the two corresponding states. The IMU pre-integration factor is computed as follows:
where the pre-integrated terms represent the relative motion between the two timestamps. More details about IMU pre-integration on the manifold can be found in [33].
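As a concrete illustration of the pre-integration computation (a sketch of the standard on-manifold formulation of [33], not the paper's code; all function names are ours), the rotation, velocity, and position increments are accumulated from bias-corrected IMU samples:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Rodrigues' formula: rotation matrix from a rotation vector."""
    theta = np.linalg.norm(w)
    if theta < 1e-8:
        return np.eye(3) + skew(w)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

def preintegrate(gyro, accel, dt):
    """Accumulate the pre-integrated rotation dR, velocity dv, and
    position dp from bias-corrected IMU samples (one sample per entry).
    The increments are expressed in the frame of the first timestamp."""
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, accel):
        dp = dp + dv * dt + 0.5 * dR @ a * dt**2
        dv = dv + dR @ a * dt
        dR = dR @ so3_exp(w * dt)
    return dR, dv, dp
```

Because these increments do not depend on the absolute states, they can be reused unchanged when the factor graph relinearizes, which is the key property exploited by the pre-integration factor.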
4.2. Feature Extraction
The raw point cloud is measured in the LiDAR frame and is distorted by the motion of the sensor. Therefore, before feature extraction, it needs to be transformed to the fixed frame and de-skewed. Suppose a point in the raw point cloud scan is given; then:
where the rotation matrix generated from the encoder data at the point's timestamp represents the rotation of the LiDAR frame relative to the fixed LiDAR frame in the motor frame, the body-frame poses are obtained by linear interpolation of the predicted poses, and the reference time is the timestamp of the latest point in the point cloud scan.
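A minimal sketch of this de-skew step under simplifying assumptions (translation-only motion model and a z-axis motor; the helper names are ours, not the paper's): each raw point is first rotated into the fixed LiDAR frame using the encoder angle at its timestamp and then motion-compensated to the scan-end pose using the linearly interpolated prediction.

```python
import numpy as np

def rot_z(angle):
    """Rotation about the motor (z) axis by the encoder angle."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def deskew_point(p, t, enc_angle, t_start, t_end, trans_start, trans_end):
    """Map a raw point p measured at time t into the fixed LiDAR frame
    at the scan-end time t_end. For brevity only the predicted
    translation is interpolated; the full method also interpolates
    the attitude from the pose predictions."""
    # encoder rotation: spinning LiDAR frame -> fixed LiDAR frame
    p_fixed = rot_z(enc_angle) @ p
    # linear interpolation of the predicted translation at time t
    alpha = (t - t_start) / (t_end - t_start)
    trans_t = (1.0 - alpha) * trans_start + alpha * trans_end
    # motion compensation: shift the point to the scan-end pose
    return p_fixed + (trans_t - trans_end)
```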
Our feature extraction method extracts planar features and edge features from the input point cloud, as shown in Figure 4. The workflow of the method is as follows:
Figure 4. Illustration of edge point extraction. As described in Steps 3 and 4, there are two types of edge points. The first type is located at the end of a wall, and the second type is located at a corner.
These edge points can be identified from unstructured points by analyzing the properties of the points in their neighborhoods.
- (1) For a point, find its previous neighbors and succeeding neighbors in the same scan line.
- (2) Calculate the two features of the point: the change angle of the scan line at the point, which characterizes smoothness (points whose angle indicates a smooth surface are marked as smooth points), and the ratio of the distances from the point to its two neighbor sets, which determines whether the point is an edge point.
- (3) For a point whose distance ratio exceeds the threshold, if all the points in its closer neighbors (the previous or the succeeding ones, depending on which side the closest point belongs to) are smooth points, then add it to the edge set.
- (4) For a point whose change angle exceeds the threshold, if all the points in its previous and succeeding neighbors are smooth points, then add it to the candidate set of edge points.
- (5) Use the standard LOAM-based method to extract the planar features and edge points, except that the edge points must belong to the candidate set.
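The per-point quantities used in Step 2 can be sketched as follows (our own reading of the method; the thresholds and exact formulas in the paper may differ). The change angle between the directions to the preceding and succeeding neighbors measures smoothness, and the ratio of the neighbor distances flags potential edge points:

```python
import numpy as np

def point_features(prev_pt, pt, next_pt):
    """Change angle (degrees) and neighbor-distance ratio at pt,
    computed from its predecessor and successor on the same scan line."""
    v1 = prev_pt - pt
    v2 = next_pt - pt
    d1, d2 = np.linalg.norm(v1), np.linalg.norm(v2)
    cos_a = np.dot(v1, v2) / (d1 * d2)
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    ratio = max(d1, d2) / min(d1, d2)
    return angle, ratio

# A collinear neighborhood: change angle ~180 deg -> smooth point.
a_smooth, r_smooth = point_features(np.array([0.0, 0.0, 0.0]),
                                    np.array([1.0, 0.0, 0.0]),
                                    np.array([2.0, 0.0, 0.0]))
# A corner: change angle ~90 deg -> edge candidate (Step 4).
a_corner, _ = point_features(np.array([0.0, 1.0, 0.0]),
                             np.array([0.0, 0.0, 0.0]),
                             np.array([1.0, 0.0, 0.0]))
```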
It can be seen that many blue points are located in the vegetation, and this noise will affect the accuracy of the registration.
4.3. Scan-to-Map Registration
4.4. Adaptive Scan Accumulation
Unlike the point cloud obtained by the fixed mounted LiDAR, the LiDAR in our device is continuously rotating, resulting in the continuous change of the spatial distribution of the obtained point cloud. As shown in Figure 6, using only a single frame of the point cloud sometimes fails to provide reliable registration results.
A simple solution is to accumulate multi-scan point clouds before each registration, but this will reduce the computational efficiency, so we propose a method to adaptively decide whether to perform point cloud accumulation and how many scans need to be accumulated according to the spatial distribution of the point cloud.
This problem can be solved by merging it with the next scan point cloud.
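The decision logic can be sketched with a hypothetical distribution test (the paper's actual criterion is described in Section 4.4.1; the eigen-analysis of plane-normal directions and the thresholds below are illustrative): scans are pooled until the matched features constrain all three translation directions or a cap is reached.

```python
import numpy as np

MAX_ACCUMULATED_SCANS = 5   # illustrative cap on accumulation
MIN_EIGENVALUE = 3.0        # illustrative constraint threshold

def well_constrained(normals, min_eig=MIN_EIGENVALUE):
    """Check whether the plane normals of the matched features constrain
    all three translation directions via eigen-analysis of sum(n n^T):
    a near-zero smallest eigenvalue means a degenerate direction."""
    H = np.zeros((3, 3))
    for n in normals:
        n = n / np.linalg.norm(n)
        H += np.outer(n, n)
    return np.linalg.eigvalsh(H)[0] >= min_eig

def adaptive_accumulate(scans):
    """Pool successive scans' feature normals until the distribution is
    well constrained or the cap is reached; return the pooled normals."""
    pooled = []
    for i, normals in enumerate(scans, start=1):
        pooled.extend(normals)
        if well_constrained(pooled) or i >= MAX_ACCUMULATED_SCANS:
            break
    return pooled
```

With this gate, a scan that only observes parallel walls (all normals along one axis) is held back and merged with later scans instead of being registered on its own.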
4.4.1. Features’ Distribution Inspection
4.4.2. Outlier Removal in Matched Features
To remove them while performing distribution inspection, an outlier removal algorithm based on the propagation of covariance is applied.
We assumed that the error in the global map can be ignored, which means that the distance residual is mainly caused by the error of the predicted pose and the error of the laser measurement. Then, the standard deviation of the distance residuals can be computed by
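A simplified sketch of this gating step (the scalar error budget below is our own approximation, not the paper's full covariance propagation): each point-to-plane residual is compared against a multiple of its propagated standard deviation, which grows with the point's range because the rotational part of the pose error is amplified by distance.

```python
import numpy as np

def residual_sigma(point_range, sigma_range, sigma_pos, sigma_rot):
    """First-order standard deviation of a point-to-plane residual:
    laser range noise, predicted-position noise, and predicted-rotation
    noise (the last scales with the distance to the point)."""
    return np.sqrt(sigma_range**2 + sigma_pos**2
                   + (point_range * sigma_rot)**2)

def remove_outliers(residuals, ranges, sigma_range=0.02,
                    sigma_pos=0.05, sigma_rot=0.01, k=3.0):
    """Keep correspondences whose residual is within k standard
    deviations of its propagated uncertainty (values are illustrative)."""
    keep = []
    for d, r in zip(residuals, ranges):
        if abs(d) <= k * residual_sigma(r, sigma_range, sigma_pos, sigma_rot):
            keep.append((d, r))
    return keep
```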
4.5. Loop Closure Detection
As shown in Algorithm 1, there are two main improvements compared to the original algorithm: (1) point-to-plane ICP is used to replace the original probability occupancy-grid-based registration method in the fine registration stage; (2) the loop closure detection result is checked using the previous scan.
Due to the rotation of the LiDAR, there is a noticeable difference even between two adjacent scans. This provides auxiliary information to help reject false detection results. If the loop closure detection result of the current scan is correct, then the registration results of the current and previous scans should be consistent with the odometry transformation between them. The threshold was set to 5 cm in the experiment.
Algorithm 1: Loop closure detection.
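The consistency check in improvement (2) can be sketched as follows (the SE(3) helper and the composition convention are ours; the 5 cm threshold is from the experiments): the detection is accepted only when the two loop registration results agree with the odometry increment.

```python
import numpy as np

LOOP_CONSISTENCY_THRESH = 0.05  # 5 cm, as in the experiments

def se3(tx, ty, tz):
    """Pure-translation SE(3) matrix (helper for this example)."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

def check_loop_consistency(T_curr, T_prev, T_odom,
                           thresh=LOOP_CONSISTENCY_THRESH):
    """Accept the loop closure only if the registration result of the
    current scan, composed with the odometry increment, matches the
    registration result of the previous scan (translation residual).
    The composition convention here is a hypothetical one."""
    T_pred = T_curr @ T_odom
    err = np.linalg.norm(T_pred[:3, 3] - T_prev[:3, 3])
    return err <= thresh
```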
5. Experiments
All experiments were conducted on an Intel NUC computer with an Intel Core i5-1135G7 CPU and 16 GB memory.
5.1. Evaluation in Indoor Environments
In the indoor tests, the map voxel size in all algorithms was set to 0.2 m. The standard deviations of the noise related to the IMU and LiDAR were uniformly set according to the device model.
Since it was difficult to obtain the ground truth trajectory in the indoor environment, we returned to the starting position at the end of the trajectory and used the end-to-end translation error as the metric. The quantitative results are given in Table 1, where “Spin-LOAM (odom)” represents our algorithm without loop closure and “Spin-LOAM (full)” represents the full SLAM algorithm.
Our algorithm achieved the lowest odometry drift and successfully corrected the drift through loop closure detection, as shown in Figure 9. In Data 2, our algorithm and LIO-SAM achieved similar localization accuracy, and FAST-LIO2 was slightly worse because it has no loop closure detection.
However, the accuracy of our pure odometry method was comparable to LIO-SAM, which validates the effectiveness of our algorithm. In Data 3, the accuracy of all algorithms was at the same level, because the scene was free of occlusions and full of plane features.
It is worth noting that the height of the roof in this scene was about 15 m, but our device completely acquired the point cloud of the entire scene with only one LiDAR. This reveals the advantage of the spinning actuated LiDAR system.
5.2. Evaluation in Outdoor Environments
To quantitatively compare the performance of the algorithms, we used the GNSS RTK trajectories as the ground truth and computed the absolute trajectory error (ATE) of the trajectories.
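For reference, a minimal ATE computation (translation-only alignment for brevity; a full evaluation would use an SE(3)/Umeyama alignment of the time-synchronized trajectories):

```python
import numpy as np

def absolute_trajectory_error(estimated, ground_truth):
    """RMSE of position errors after removing the mean offset between
    the two time-synchronized trajectories (N x 3 position arrays)."""
    est = np.asarray(estimated, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    # translation-only alignment: match the trajectory centroids
    est_aligned = est - est.mean(axis=0) + gt.mean(axis=0)
    errors = np.linalg.norm(est_aligned - gt, axis=1)
    return np.sqrt(np.mean(errors**2))
```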
However, LIO-SAM failed to finish the localization in Data 6 and Data 8. The reason for its failure in Data 6 is that it directly uses the ICP algorithm for loop closure.
This strategy is not suitable when large drift occurs, and the accumulated errors in the large ring road make the LIO-SAM algorithm fail. After turning off the loop closure, the accuracy of LIO-SAM in Data 6 was 0.4908 m. Figure 11 gives a detailed comparison of the point cloud at the road junction. It can be seen that, even without the loop closure, our algorithm still maintained high accuracy after walking through a long loop.
In Data 8, the drone performed an aggressive motion to test the robustness of the algorithm, in which the maximum angular velocity was over 780°/s and the maximum linear velocity was over 6.5 m/s.
This test proved that our tightly coupled algorithm can work when aggressive motion occurs.
Our method can alleviate this problem during scan-to-map registration, which can help to improve the accuracy and robustness of registration.
5.3. Runtime Analysis
The computation time was higher for the indoor data because the map voxel size had a significant impact on the performance. The average time cost for the loop closure was 808.4 ms indoors and 328.7 ms outdoors.
Since the loop closure was performed by a separate thread in the background, it did not affect the real-time performance of our algorithm.
6. Conclusions
In the front-end, to mitigate the influence of the unstable spatial distribution of the point cloud caused by the continuously rotating LiDAR, an adaptive scan accumulation method based on point cloud distribution inspection was adopted.
In the back-end, a voxel-grid-based loop closure detection method was used to reduce the drift. We use the previous scan point cloud to assist in eliminating errors in the loop closure detection results.
The experimental results demonstrated that our algorithm achieves high-precision localization results in various complex indoor and outdoor environments.
We are committed to further refining and improving our algorithm, with a focus on improving its robustness in more extreme environmental conditions.
One potential avenue for improvement is the integration of semantic information from the point cloud, which will aid in loop closure detection and removal of dynamic objects.
Author Contributions
Funding
Conflicts of Interest
References
- Li, Y.; Ibanez-Guzman, J. LiDAR for Autonomous Driving: The Principles, Challenges, and Trends for Automotive LiDAR and Perception Systems. IEEE Signal Process. Mag. 2020, 37, 50–61. [Google Scholar] [CrossRef]
- Bolourian, N.; Hammad, A. LiDAR-equipped UAV path planning considering potential locations of defects for bridge inspection. Autom. Constr. 2020, 117, 103250. [Google Scholar] [CrossRef]
- Beland, M.; Parker, G.; Sparrow, B.; Harding, D.; Chasmer, L.; Phinn, S.; Antonarakis, A.; Strahler, A. On promoting the use of LiDAR systems in forest ecosystem research. For. Ecol. Manag. 2019, 450, 117484. [Google Scholar] [CrossRef]
- Raj, T.; Hanim Hashim, F.; Baseri Huddin, A.; Ibrahim, M.F.; Hussain, A. A survey on LiDAR scanning mechanisms. Electronics 2020, 9, 741. [Google Scholar] [CrossRef]
- Li, K.; Li, M.; Hanebeck, U.D. Towards high-performance solid-state-LiDAR-inertial odometry and mapping. IEEE Robot. Autom. Lett. 2021, 6, 5167–5174. [Google Scholar] [CrossRef]
- Alsadik, B.; Remondino, F. Flight planning for LiDAR-based UAS mapping applications. ISPRS Int. J. Geo-Inf. 2020, 9, 378. [Google Scholar] [CrossRef]
- Liu, X.; Zhang, F.Z. Extrinsic Calibration of Multiple LiDARs of Small FoV in Targetless Environments. IEEE Robot. Autom. Lett. 2021, 6, 2036–2043. [Google Scholar] [CrossRef]
- Nguyen, T.M.; Yuan, S.; Cao, M.; Lyu, Y.; Nguyen, T.H.; Xie, L. MILIOM: Tightly Coupled Multi-Input LiDAR-Inertia Odometry and Mapping. IEEE Robot. Autom. Lett. 2021, 6, 5573–5580. [Google Scholar] [CrossRef]
- Jiao, J.; Ye, H.; Zhu, Y.; Liu, M. Robust Odometry and Mapping for Multi-LiDAR Systems With Online Extrinsic Calibration. IEEE Trans. Robot. 2022, 38, 351–371. [Google Scholar] [CrossRef]
- Zhang, D.; Gong, Z.; Chen, Y.; Zelek, J.; Li, J. SLAM-based multi-sensor backpack LiDAR systems in gnss-denied environments. In Proceedings of the IGARSS 2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 8984–8987. [Google Scholar]
- Velas, M.; Spanel, M.; Sleziak, T.; Habrovec, J.; Herout, A. Indoor and outdoor backpack mapping with calibrated pair of velodyne LiDARs. Sensors 2019, 19, 3944. [Google Scholar] [CrossRef]
- Zhang, J.; Singh, S. LOAM: LiDAR Odometry and Mapping in Real-time. Robot. Sci. Syst. 2014, 2, 9. [Google Scholar]
- Shan, T.; Englot, B. Lego-loam: Lightweight and ground-optimized LiDAR odometry and mapping on variable terrain. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4758–4765. [Google Scholar]
- Zhou, P.; Guo, X.; Pei, X.; Chen, C. T-LOAM: Truncated Least Squares LiDAR-Only Odometry and Mapping in Real Time. IEEE Trans. Geosci. Remote. Sens. 2021, 60, 5701013. [Google Scholar] [CrossRef]
- Liu, Z.; Zhang, F. BALM: Bundle Adjustment for LiDAR Mapping. IEEE Robot. Autom. Lett. 2021, 6, 3184–3191. [Google Scholar] [CrossRef]
- Zhou, L.; Koppel, D.; Kaess, M. LiDAR SLAM With Plane Adjustment for Indoor Environment. IEEE Robot. Autom. Lett. 2021, 6, 7073–7080. [Google Scholar] [CrossRef]
- Zhou, L.; Wang, S.; Kaess, M. π-LSAM: LiDAR smoothing and mapping with planes. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 5751–5757. [Google Scholar]
- Chen, X.; Milioto, A.; Palazzolo, E.; Giguere, P.; Behley, J.; Stachniss, C. Suma++: Efficient LiDAR-based semantic slam. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 4530–4537. [Google Scholar]
- Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. LIO-SAM: Tightly-coupled LiDAR inertial odometry via smoothing and mapping. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 5135–5142. [Google Scholar]
- Ye, H.; Chen, Y.; Liu, M. Tightly coupled 3D LiDAR inertial odometry and mapping. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 3144–3150. [Google Scholar]
- Xu, W.; Zhang, F. Fast-lio: A fast, robust LiDAR-inertial odometry package by tightly-coupled iterated kalman filter. IEEE Robot. Autom. Lett. 2021, 6, 3317–3324. [Google Scholar] [CrossRef]
- Xu, W.; Cai, Y.; He, D.; Lin, J.; Zhang, F. FAST-LIO2: Fast Direct LiDAR-Inertial Odometry. IEEE Trans. Robot. 2022, 38, 2053–2073. [Google Scholar] [CrossRef]
- Lv, J.; Hu, K.; Xu, J.; Liu, Y.; Ma, X.; Zuo, X. CLINS: Continuous-time trajectory estimation for LiDAR-inertial system. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 6657–6663. [Google Scholar]
- Bosse, M.; Zlot, R.; Flick, P. Zebedee: Design of a Spring-Mounted 3-D Range Sensor with Application to Mobile Mapping. IEEE Trans. Robot. 2012, 28, 1104–1119. [Google Scholar] [CrossRef]
- Kaul, L.; Zlot, R.; Bosse, M. Continuous-Time Three-Dimensional Mapping for Micro Aerial Vehicles with a Passively Actuated Rotating Laser Scanner. J. Field Robot. 2016, 33, 103–132. [Google Scholar] [CrossRef]
- Park, C.; Moghadam, P.; Kim, S.; Elfes, A.; Fookes, C.; Sridharan, S. Elastic LiDAR Fusion: Dense Map-Centric Continuous-Time SLAM. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 1206–1213. [Google Scholar] [CrossRef]
- Fang, Z.; Zhao, S.; Wen, S. A Real-time and Low-cost 3D SLAM System Based on a Continuously Rotating 2D Laser Scanner. In Proceedings of the 2017 IEEE 7th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), Honolulu, HI, USA, 31 July–4 August 2017; pp. 454–459. [Google Scholar] [CrossRef]
- Zhen, W.; Hu, Y.; Liu, J.; Scherer, S. A Joint Optimization Approach of LiDAR-Camera Fusion for Accurate Dense 3-D Reconstructions. IEEE Robot. Autom. Lett. 2019, 4, 3585–3592. [Google Scholar] [CrossRef]
- Oelsch, M.; Karimi, M.; Steinbach, E. R-LOAM: Improving LiDAR Odometry and Mapping With Point-to-Mesh Features of a Known 3D Reference Object. IEEE Robot. Autom. Lett. 2021, 6, 2068–2075. [Google Scholar] [CrossRef]
- Karimi, M.; Oelsch, M.; Stengel, O.; Babaians, E.; Steinbach, E. LoLa-SLAM: Low-latency LiDAR SLAM using Continuous Scan Slicing. IEEE Robot. Autom. Lett. 2021, 2248–2255. [Google Scholar] [CrossRef]
- Benson, M.; Nikolaidis, J.; Clayton, G.M. Lissajous-like scan pattern for a nodding multi-beam LiDAR. In Proceedings of the Dynamic Systems and Control Conference, Atlanta, GA, USA, 30 September–3 October 2018; Volume 51906, p. V002T24A007. [Google Scholar]
- Lupton, T.; Sukkarieh, S. Efficient integration of inertial observations into visual SLAM without initialization. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 1547–1552. [Google Scholar]
- Forster, C.; Carlone, L.; Dellaert, F.; Scaramuzza, D. On-Manifold Preintegration for Real-Time Visual–Inertial Odometry. IEEE Trans. Robot. 2017, 33, 1–21. [Google Scholar] [CrossRef]
- Chen, P.; Shi, W.; Bao, S.; Wang, M.; Fan, W.; Xiang, H. Low-Drift Odometry, Mapping and Ground Segmentation Using a Backpack LiDAR System. IEEE Robot. Autom. Lett. 2021, 6, 7285–7292. [Google Scholar] [CrossRef]
- Hess, W.; Kohler, D.; Rapp, H.; Andor, D. Real-time loop closure in 2D LiDAR SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 16–21 May 2016. [Google Scholar]
Table 1. End-to-end translation errors (m) of the indoor tests.

| | Length (m) | FAST-LIO2 | LIO-SAM | Spin-LOAM (Odom) | Spin-LOAM (Full) |
|---|---|---|---|---|---|
| Data 1 | 189.0993 | 2.2631 | / | 0.6976 | 0.0090 |
| Data 2 | 382.3672 | 0.0210 | 0.0128 | 0.0126 | 0.0114 |
| Data 3 | 315.1776 | 0.0166 | 0.0145 | 0.0175 | 0.0164 |
Table 2. Absolute trajectory errors (m) of the outdoor tests.

| | Length (m) | FAST-LIO2 | LIO-SAM | Spin-LOAM (wo-asa) | Spin-LOAM (Odom) | Spin-LOAM (Full) |
|---|---|---|---|---|---|---|
| Data 5 | 415.3925 | 0.0799 | 0.0854 | 0.0760 | 0.0753 | 0.0725 |
| Data 6 | 782.0214 | 0.8844 | (0.4908) | 0.2623 | 0.2323 | 0.2175 |
| Data 7 | 1335.2931 | 0.2271 | 0.2771 | 0.2064 | 0.1871 | 0.1786 |
| Data 8 | 623.8667 | 0.1670 | / | 0.1431 | 0.1353 | 0.1308 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yan, L.; Dai, J.; Zhao, Y.; Chen, C. Real-Time 3D Mapping in Complex Environments Using a Spinning Actuated LiDAR System. Remote Sens. 2023, 15, 963. https://doi.org/10.3390/rs15040963