Augmented Dynamics Visual Servoing: Mapping Image Variations to Multirotor's Input Commands

Published: 2025, Last Modified: 06 Nov 2025 · IEEE Trans. Aerosp. Electron. Syst. 2025 · CC BY-SA 4.0
Abstract: Conventional visual servoing techniques, such as position-based visual servoing (PBVS) and image-based visual servoing (IBVS), rely on inverse Jacobian computations to estimate the desired states of a multirotor, including position and velocity profiles. This reliance not only increases computational complexity but also heightens sensitivity to image noise. Furthermore, these methods typically inject reference trajectories into the outer position control loop, which exacerbates error accumulation as these references propagate to the inner attitude loop. To overcome these limitations, this article proposes the augmented dynamics visual servoing (ADVS) framework, which establishes a direct mapping between image pixel variations and the multirotor's torque and thrust inputs. By bypassing inverse Jacobian computations, this approach treats image noise as system noise, enabling the application of robust control strategies to mitigate its effects. The proposed framework leverages a time-varying finite-time sliding mode control strategy, where control gains dynamically adapt based on the desired error convergence time. Simulation and experimental results demonstrate the superiority of ADVS over the existing PBVS, IBVS, and dynamics-based visual servoing approaches.
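To make the idea of a time-varying finite-time sliding mode gain concrete, the sketch below simulates a 1-D error channel under a bounded matched disturbance. The plant, the gain law k(t) = |e|/(T - t) + d_max, and all parameter values are illustrative assumptions, not the paper's actual ADVS control law; the sketch only shows the generic mechanism of a gain that adapts to the remaining time before a desired convergence deadline T.

```python
import numpy as np

def simulate(e0=1.0, T=2.0, d_max=0.2, dt=1e-3):
    """Drive a scalar error e toward 0 before a desired convergence time T.

    Assumed error dynamics: e_dot = u + d, with |d| <= d_max.
    Assumed control law:    u = -k(t) * sign(e), where the time-varying gain
                            k(t) = |e| / (T - t) + d_max
    enforces d|e|/dt <= -|e| / (T - t), hence |e(t)| <= |e0| * (T - t) / T.
    """
    rng = np.random.default_rng(0)
    e, t = e0, 0.0
    while t < 0.95 * T:                      # stop before the gain blows up at t = T
        d = d_max * rng.uniform(-1.0, 1.0)   # bounded disturbance sample
        k = abs(e) / (T - t) + d_max         # gain adapts to the remaining time
        u = -k * np.sign(e)
        step = (u + d) * dt
        # clamp at zero to avoid discrete-time chattering across the origin
        if abs(step) > abs(e) and np.sign(step) != np.sign(e):
            e = 0.0
        else:
            e += step
        t += dt
    return e

final_error = simulate()
print(abs(final_error))  # bounded by ~|e0| * 0.05 per the gain law above
```

The key design choice mirrored here is that the gain grows as the deadline approaches rather than being tuned to a fixed worst case, so convergence time is set directly as a parameter instead of emerging from trial-and-error gain tuning.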