Abstract: A major challenge in image-guided laparoscopic surgery is that structures of interest often deform
and, even if only momentarily, pass out of view. Methods
which rely on having an up-to-date impression of those structures, such as registration or localisation, are undermined in these circumstances. This is particularly true for
soft-tissue structures that continually change shape: in
registration, they must often be re-mapped. Furthermore,
methods which require ‘revisiting’ of previously seen areas cannot in principle function reliably in dynamic contexts, drastically weakening their uptake in the operating
room. We present a novel approach for learning to estimate the deformed states of previously seen soft tissue surfaces from currently observable regions, using a
combined approach that includes a Graph Neural Network
(GNN). The training data is based on stereo laparoscopic
surgery videos, generated semi-automatically with minimal labelling effort. Trackable segments are first identified
using a feature detection algorithm, from which surface
meshes are produced using depth estimation and Delaunay triangulation. We show the method can predict the
displacements of previously visible soft tissue structures
connected to currently visible regions with observed displacements, both on patient data and porcine data. Our
innovative approach learns to compensate for non-rigidity in
abdominal endoscopic scenes directly from stereo laparoscopic videos by targeting a new problem formulation,
and stands to benefit a variety of target applications in
dynamic environments.
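The mesh-generation step described above (2D feature points lifted to 3D via depth estimation, then connected by Delaunay triangulation) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, input shapes, and the use of `scipy.spatial.Delaunay` are assumptions for demonstration purposes.

```python
import numpy as np
from scipy.spatial import Delaunay

def build_surface_mesh(keypoints_2d, depth_map):
    """Lift tracked 2D keypoints to a 3D surface mesh.

    keypoints_2d: (N, 2) array of (x, y) pixel coordinates, e.g. from a
    feature detector; depth_map: (H, W) array of per-pixel depth estimates,
    e.g. from stereo matching. Both are hypothetical inputs for this sketch.
    """
    # Triangulate in the image plane so mesh connectivity follows the
    # visible surface rather than arbitrary 3D proximity.
    tri = Delaunay(keypoints_2d)
    # Sample the depth map at each keypoint to obtain a z coordinate.
    xs = keypoints_2d[:, 0].astype(int)
    ys = keypoints_2d[:, 1].astype(int)
    z = depth_map[ys, xs]
    # Stack into (N, 3) vertices; faces index into this vertex array.
    vertices = np.column_stack([keypoints_2d, z])
    return vertices, tri.simplices
```

The resulting vertices and triangular faces form the graph over which a GNN can propagate observed displacements from visible regions to connected, currently occluded ones.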