DRL-Based Low-Latency Control for Endoscopic Operations

Published: 25 Sept 2024, Last Modified: 05 Nov 2024 · IEEE BHI'24 · CC BY 4.0
Keywords: Deep reinforcement learning, assistive control, augmented reality, minimally invasive surgery, endoscopy, simulation
TL;DR: Head-motion control of the endoscope camera combined with a DRL-based adaptive low-latency streaming approach
Abstract: Minimally invasive surgery has advanced significantly with the introduction of robotic systems, which are highly desirable for their ability to improve treatment scalability and precision. This study aims to develop an effective and intelligent system for controlling and streaming the endoscope camera during endoscopic operations. The proposed system leverages head-motion data from the HoloLens inertial measurement unit (IMU) to control the endoscope camera's robotic arm. In addition, a Deep Reinforcement Learning (DRL) technique is employed to adaptively manage the region of interest (ROI), thereby mitigating wireless channel impairments in the operating room and improving the surgeon's interaction and quality of experience (QoE). We developed a proof of concept by interfacing the HoloLens with the Gazebo simulation environment for robotic arm control. The DRL model demonstrated its efficacy by reducing communication delay and enhancing image quality: specifically, it reduced delay by 12.56% and increased image quality by 26.6%. Furthermore, a fixed-frame technique yielded an additional 18% delay reduction. The successful proof of concept and the comprehensive analysis of the findings underscore the potential impact of our contribution toward intelligent, efficient, and easily controllable endoscopic surgical procedures.
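To picture the head-motion control path summarized in the abstract (HoloLens IMU orientation driving the endoscope camera's robotic arm), the sketch below maps a head-orientation quaternion to clamped pan/tilt rate commands with a small dead zone. It is a minimal illustration under assumed conventions: the function names, gains, and thresholds are hypothetical rather than the authors' implementation, and publishing the resulting commands to the Gazebo/ROS arm controller is omitted.

```python
# Illustrative sketch only: converts a head-orientation quaternion (as an IMU
# might report) into pan/tilt rate commands for an endoscope camera arm.
# All names and parameter values are hypothetical, not taken from the paper.
import math

def quaternion_to_yaw_pitch(qw, qx, qy, qz):
    """Extract yaw (about z) and pitch (about y) in radians from a unit quaternion."""
    yaw = math.atan2(2.0 * (qw * qz + qx * qy), 1.0 - 2.0 * (qy * qy + qz * qz))
    pitch = math.asin(max(-1.0, min(1.0, 2.0 * (qw * qy - qz * qx))))
    return yaw, pitch

def head_motion_to_command(yaw, pitch, dead_zone=0.05, gain=0.5, max_rate=0.3):
    """Map head angles to clamped pan/tilt rates, ignoring small involuntary motion."""
    def shape(angle):
        if abs(angle) < dead_zone:  # suppress micro-motion inside the dead zone
            return 0.0
        rate = gain * (angle - math.copysign(dead_zone, angle))
        return max(-max_rate, min(max_rate, rate))  # clamp to a safe slew rate
    return shape(yaw), shape(pitch)

if __name__ == "__main__":
    # Example: the surgeon turns the head slightly right and tilts it down.
    yaw, pitch = quaternion_to_yaw_pitch(0.98, 0.0, 0.10, 0.17)
    pan_rate, tilt_rate = head_motion_to_command(yaw, pitch)
    print(f"pan rate: {pan_rate:.3f} rad/s, tilt rate: {tilt_rate:.3f} rad/s")
```

In a full pipeline these rate commands would be smoothed and sent to the simulated arm's joint controllers; the dead zone and rate clamp stand in for the kind of filtering needed to keep involuntary head tremor from moving the camera.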
Track: 4. AI-based clinical decision support systems
Supplementary Material: zip
Registration Id: 5PN4LXLJYCD
Submission Number: 172