Keywords: Reinforcement Learning, Robotic Control, Deep Learning
TL;DR: A reinforcement learning project on a robotic control task with a multimodal data representation
Abstract: Vision and touch are especially important for contact-rich manipulation tasks in unstructured environments. It is non-trivial to manually design a robot controller that combines these modalities, which have very different characteristics. In this project, to connect vision and touch, we first equip robots with both visual and tactile sensors and collect a large-scale dataset of corresponding vision and tactile sequences. We use self-supervision to learn a compact multimodal representation of our sensory inputs, which can then be used to improve the sample efficiency of our policy learning. We train a policy in a simulation environment using deep reinforcement learning algorithms. The learned policy is also transferable to real-world tasks. Peg insertion is chosen as the demonstration task for this project. A preliminary version of our Python implementation is available at: https://github.com/Henry1iu/ierg5350_rl_course_project. A video introducing our project is available at: https://mycuhk-my.sharepoint.com/:v:/g/personal/1155071948_link_cuhk_edu_hk/EaKiGmkUvjJOoSqdWxrqjXYBpz3dCSAfOD9Co8krttyqUQ?e=RXsHD2
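
The central technical idea above is fusing visual and tactile inputs into one compact representation that feeds the RL policy. Below is a minimal sketch of such a fusion encoder, assuming PyTorch and illustrative input sizes (a 64x64 RGB frame and a 6-D force/torque reading); the module names and dimensions are hypothetical, and the actual architecture and self-supervised objective are in the linked repository.

import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    """Fuse a camera frame and a tactile reading into one compact vector."""
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        # Small CNN for a 3x64x64 RGB frame (illustrative resolution).
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 14 * 14, latent_dim),
        )
        # Small MLP for a 6-D force/torque reading (illustrative sensor).
        self.tactile = nn.Sequential(
            nn.Linear(6, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        # Late fusion: concatenate per-modality codes, project to one vector.
        self.fuse = nn.Linear(2 * latent_dim, latent_dim)

    def forward(self, image: torch.Tensor, force: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.vision(image), self.tactile(force)], dim=-1)
        return self.fuse(z)

encoder = MultimodalEncoder()
frames = torch.randn(8, 3, 64, 64)   # batch of camera frames
wrench = torch.randn(8, 6)           # batch of force/torque readings
state = encoder(frames, wrench)      # compact policy input, shape (8, 64)

In the self-supervised stage, a representation like this would be trained on the collected paired sequences (for example, by predicting whether a frame and a tactile reading co-occur); the encoder's output then serves as the observation for the deep RL algorithm, which is what improves sample efficiency relative to learning directly from raw pixels and force signals.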