Abstract: With improvements in sensor accuracy and optimization algorithms, Advanced Driver Assistance Systems (ADAS) have been widely deployed on vehicles to achieve Level 3 automated driving. Vision-based dense mapping is one of the most essential capabilities in ADAS, helping vehicles sense their surroundings. However, traditional methods can only reconstruct a 3D map of a static environment, because dynamic objects occlude the environment map. Moreover, dynamic environments are hard to model in 3D in real time owing to limited computing power and high algorithm complexity. To make dynamic scenes visible in a 3D point cloud, this paper proposes a system that superimposes dynamic objects onto a static environment in real time. The static environment is reconstructed in advance from aerial multi-view images captured by an unmanned aerial vehicle (UAV), reducing the required onboard computing power. The dynamic-object point cloud is generated frame by frame from disparity maps produced by a consumer stereo camera. To superimpose the dynamic objects onto the global 3D static map, a plane segmentation method is applied to reduce mismatches in global point-cloud registration. The results show that the registration fitness reaches 79.8%, which supports the fusion of multiple measurement methods.
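The plane segmentation step mentioned above is commonly implemented with RANSAC plane fitting on the point cloud. The sketch below is a minimal, illustrative NumPy implementation under assumed thresholds and synthetic data; it is not the paper's actual pipeline, and the function name and parameters are hypothetical:

```python
import numpy as np

def segment_plane(points, dist_thresh=0.02, iters=200, seed=0):
    """Illustrative RANSAC plane segmentation (assumed parameters).

    Repeatedly fits a plane through 3 random points and keeps the
    hypothesis with the most inliers within dist_thresh of the plane.
    Returns a boolean inlier mask over `points` (N x 3 array).
    """
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-8:  # degenerate (collinear) sample, skip
            continue
        normal /= norm
        # Point-to-plane distances for all points at once
        dists = np.abs((points - p0) @ normal)
        mask = dists < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# Synthetic demo: 500 near-planar ground points plus 100 scattered outliers
rng = np.random.default_rng(42)
ground = np.column_stack([rng.uniform(-1, 1, (500, 2)),
                          rng.normal(0.0, 0.005, 500)])   # roughly z = 0
clutter = rng.uniform(-1, 1, (100, 3))
cloud = np.vstack([ground, clutter])

mask = segment_plane(cloud)
```

Removing the dominant plane (e.g. the ground) before registration leaves only distinctive structure, which is one plausible way such a step reduces mismatches in global point-cloud alignment.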