VelObPoints: a Neural Network for Vehicle Object Detection and Velocity Estimation for Scanning LiDAR Sensors

Lukas Haas, Nico Leuze, Arsalan Haider, Matthias Kuba, Thomas Zeh, Alfred Schöttl, Martin Jakobi, Alexander W. Koch

Published: 2024, Last Modified: 03 Mar 2026, IEEE SENSORS 2024, CC BY-SA 4.0
Abstract: LiDAR sensors are crucial for highly automated vehicles. Relevant information about the vehicle's surroundings can be obtained from 3D point clouds, which serves as a basis for decision-making in the further control of the vehicle. For this purpose, information about the type and position of objects in the vehicle's environment, as well as their velocity and direction of movement, is essential. In this paper, we present VelObPoints, a neural network architecture that estimates longitudinal and lateral velocity and performs object detection on a single LiDAR point cloud. In contrast to existing tracking methods, the neural network extracts this information from a single LiDAR frame. We propose a simulated dataset for training and testing that contains motion distortion effects. The neural network achieves a mean Intersection over Union of 0.863 and a mean average velocity error of 0.332 m s−1. Based on a single point cloud, this information, consisting of the object's scale, position, and rotation, as well as its longitudinal and lateral velocities, is immediately available to the driving function and to subsequent motion prediction and object tracking, leading to more quickly available velocity and motion-direction information and higher redundancy in sensor data fusion.
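The abstract does not spell out how the two reported metrics are computed, so the following is only an illustrative sketch of the general idea: an axis-aligned 3D Intersection over Union between a predicted and a ground-truth box, and a mean velocity error over the longitudinal/lateral components. The function names, the box encoding (center x, y, z plus length, width, height), and the use of axis-aligned rather than rotated boxes are assumptions for illustration, not the paper's actual evaluation protocol.

```python
import math

def iou_3d_axis_aligned(box_a, box_b):
    """IoU of two axis-aligned 3D boxes encoded as (cx, cy, cz, l, w, h).

    NOTE: illustrative simplification; detection benchmarks typically use
    rotated (yaw-aligned) boxes for the overlap computation.
    """
    inter = 1.0
    for i in range(3):
        # Overlap extent along each axis, clamped at zero when disjoint.
        lo = max(box_a[i] - box_a[i + 3] / 2, box_b[i] - box_b[i + 3] / 2)
        hi = min(box_a[i] + box_a[i + 3] / 2, box_b[i] + box_b[i + 3] / 2)
        inter *= max(hi - lo, 0.0)
    vol_a = box_a[3] * box_a[4] * box_a[5]
    vol_b = box_b[3] * box_b[4] * box_b[5]
    return inter / (vol_a + vol_b - inter)

def mean_velocity_error(v_pred, v_true):
    """Mean Euclidean error over per-object (longitudinal, lateral)
    velocity pairs, in m/s."""
    errors = [math.hypot(p[0] - t[0], p[1] - t[1])
              for p, t in zip(v_pred, v_true)]
    return sum(errors) / len(errors)

# Hypothetical example: a prediction offset by 1 m along x against a
# 2 m x 2 m x 2 m ground-truth box, and two objects with 1 m/s velocity error.
print(iou_3d_axis_aligned((0, 0, 0, 2, 2, 2), (1, 0, 0, 2, 2, 2)))  # 1/3
print(mean_velocity_error([[1, 0], [0, 1]], [[0, 0], [0, 0]]))       # 1.0
```

Under this sketch, the paper's mean IoU of 0.863 and mean average velocity error of 0.332 m s−1 would correspond to averaging these two quantities over all matched objects in the test set.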