Abstract: This article addresses legged locomotion in unstructured environments by proposing a novel hierarchical multicontact motion planning method for hexapod robots that combines free gait motion planning with deep reinforcement learning. The complex free gait multicontact motion planning task is structurally decomposed into path planning in a discrete state space and gait planning in a continuous state space. First, a soft deep Q-network is used in the path planner (PP) to obtain global prior path information. Second, a free gait planner (FGP) is proposed to generate the gait sequence. Finally, building on the PP and the FGP, the center-of-mass sequence is generated by the optimal policy trained with the designed deep reinforcement learning algorithm. Experimental results in different environments demonstrate the feasibility, effectiveness, and advantages of the proposed method.
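To make the role of the soft Q-network in the path planner concrete, the following is a minimal tabular sketch of soft (entropy-regularized) Q-learning for grid path planning, i.e., a log-sum-exp value backup with a Boltzmann policy in place of the hard max of standard Q-learning. This is not the paper's implementation: the grid layout, reward shaping, and temperature `ALPHA_TEMP` are assumptions, and the actual method uses a deep network rather than a lookup table.

```python
# Sketch of soft Q-learning for a grid path planner (assumed toy setup).
import numpy as np

rng = np.random.default_rng(0)

ROWS, COLS = 6, 6
GOAL = (5, 5)
OBSTACLES = {(2, 2), (2, 3), (3, 3), (4, 1)}          # assumed untraversable cells
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # up, down, left, right

ALPHA_TEMP = 0.5     # entropy temperature of the soft backup (assumed)
GAMMA = 0.95         # discount factor
LR = 0.1             # learning rate
EPISODES = 3000

Q = np.zeros((ROWS, COLS, len(ACTIONS)))

def step(state, a_idx):
    """Apply one move; bumping an obstacle or the boundary keeps the state."""
    dr, dc = ACTIONS[a_idx]
    r, c = state[0] + dr, state[1] + dc
    if not (0 <= r < ROWS and 0 <= c < COLS) or (r, c) in OBSTACLES:
        return state, -1.0, False                      # penalize invalid moves
    if (r, c) == GOAL:
        return (r, c), 10.0, True
    return (r, c), -0.1, False                         # small per-step cost

def soft_value(q_row):
    """Soft (log-sum-exp) state value used instead of the hard max."""
    return ALPHA_TEMP * np.log(np.sum(np.exp(q_row / ALPHA_TEMP)))

def softmax_policy(q_row):
    """Boltzmann action distribution induced by the soft Q-values."""
    z = np.exp((q_row - q_row.max()) / ALPHA_TEMP)
    return z / z.sum()

for _ in range(EPISODES):
    state, done = (0, 0), False
    while not done:
        a = rng.choice(len(ACTIONS), p=softmax_policy(Q[state]))
        nxt, reward, done = step(state, a)
        target = reward + (0.0 if done else GAMMA * soft_value(Q[nxt]))
        Q[state][a] += LR * (target - Q[state][a])
        state = nxt

# A greedy rollout of the learned table yields the global prior path that the
# downstream gait and center-of-mass planners would condition on.
state, path = (0, 0), [(0, 0)]
while state != GOAL and len(path) < ROWS * COLS:
    state, _, _ = step(state, int(np.argmax(Q[state])))
    path.append(state)
print("prior path:", path)
```

The soft backup keeps the learned values consistent with a stochastic (maximum-entropy) policy, which encourages exploration of alternative routes before the prior path is handed to the lower planning layers.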