SensorClouds - A framework for real-time processing of multi-modal sensor data for human-robot-collaboration

Published: 2024, Last Modified: 13 Nov 2024. License: CC BY-SA 4.0
Abstract: With the advent of Industry 4.0 and its goals of making production more flexible and products more individual, the need for robots that can collaborate with humans in manufacturing is growing. In healthcare, the aging populations of Western countries and the growing labor shortage increase the need for robotic assistants capable of relieving workers of menial and strenuous tasks. Both fields of application require robots to perceive their environment in order to interact safely with humans and perform their tasks correctly. This work presents SensorClouds, a modular framework for the real-time processing of multi-modal sensor data in applications involving human-robot collaboration.
The framework is competitive in performance with similar approaches, yet far more flexible: it is not limited to binary occupancy in its environment model but instead allows the dynamic specification of arbitrary modalities, enabling more complex sensor data processing and a more informed representation of the robot's surroundings. The architecture aids module developers in creating massively parallel algorithms by taking over the parallelization and requiring only the implementation of processing kernels for single data points. Application developers can use these modules to quickly solve complex sensor fusion tasks, and module interoperability is guaranteed through the enforcement of data access contracts. This work also includes methods for reconstructing three-dimensional data from sensors that do not inherently provide it, so that this data can be included in the environment model alongside natively three-dimensional data.
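The per-point kernel pattern and data access contracts described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the actual SensorClouds API: all class and attribute names (`Point`, `Module`, `reads`, `writes`, `kernel`, `Framework.run`) are hypothetical, and a plain loop stands in for the framework's parallelization.

```python
from dataclasses import dataclass


@dataclass
class Point:
    # Arbitrary named modalities per data point, not just binary occupancy.
    modalities: dict


class Module:
    # Data access contract: the modalities this module reads and writes.
    reads: set = set()
    writes: set = set()

    def kernel(self, point: Point) -> None:
        """Process a single data point; the framework applies this to all points."""
        raise NotImplementedError


class Framework:
    def run(self, module: Module, cloud: list) -> None:
        # Enforce the access contract before processing any data.
        for p in cloud:
            missing = module.reads - p.modalities.keys()
            if missing:
                raise KeyError(f"module reads undeclared modalities: {missing}")
        # A real implementation would parallelize this map (e.g. on a GPU);
        # here a sequential loop stands in for the parallel application.
        for p in cloud:
            module.kernel(p)


class TemperatureToOccupancy(Module):
    """Hypothetical example module: mark a point occupied if warmer than 30 °C."""
    reads = {"temperature"}
    writes = {"occupied"}

    def kernel(self, point: Point) -> None:
        point.modalities["occupied"] = point.modalities["temperature"] > 30.0


cloud = [Point({"temperature": t}) for t in (25.0, 36.5)]
Framework().run(TemperatureToOccupancy(), cloud)
```

Because each kernel touches only one point and declares its modalities up front, the framework can check module compatibility statically and distribute the per-point work freely.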