Visually guided grasping in unstructured environments

Published: 01 Jan 1997 · Last Modified: 14 Nov 2024 · Robotics and Autonomous Systems, 1997 · CC BY-SA 4.0
Abstract: We present simple and robust algorithms which combine uncalibrated stereo vision with a robot manipulator, enabling it to locate, reach and grasp unmodelled objects in unstructured environments. In the first stage, an operator indicates the object to be grasped by simply pointing at it. Next, the vision system segments the indicated object from the background and plans a suitable grasp strategy. Finally, the robotic arm reaches out towards the object and executes the grasp. Uncalibrated stereo vision allows the system to continue to operate in the presence of errors in the kinematics of the robot manipulator and despite unknown changes in the position, orientation and intrinsic parameters of the stereo cameras during operation.
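The final stage described above is, in essence, an image-based servoing loop: the arm is driven by the observed image-space error between gripper and object, so no camera calibration or exact kinematic model is needed. The following is a minimal sketch of that idea, not the paper's implementation; the `observe` and `servo_to_target` functions and all numeric values are hypothetical placeholders.

```python
# Hedged sketch of uncalibrated visual servoing: the controller only ever
# sees image-space error, so camera parameters and robot kinematics may
# drift without breaking convergence. All names here are illustrative,
# not taken from the paper.

def observe(gripper, target):
    """Image-space error between gripper and target features
    (stands in for the stereo vision measurement)."""
    return [t - g for g, t in zip(gripper, target)]

def servo_to_target(gripper, target, gain=0.5, tol=1e-3, max_steps=100):
    """Drive the gripper toward the target using only observed image
    feedback; a simple proportional law suffices for this sketch."""
    for _ in range(max_steps):
        err = observe(gripper, target)
        if max(abs(e) for e in err) < tol:
            break  # close enough to attempt the grasp
        # Step proportionally along the observed error direction.
        gripper = [g + gain * e for g, e in zip(gripper, err)]
    return gripper

# Example: starting at the image origin, converge onto a target feature.
final = servo_to_target([0.0, 0.0], [1.0, 2.0])
```

Because the loop closes on measured image error rather than a calibrated model, a miscalibrated camera or inaccurate kinematics only changes the effective gain, not the fixed point of the iteration.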
