Learning Heterogeneous Relation Graph and Value Regularization Policy for Visual Navigation

Published: 01 Jan 2024, Last Modified: 13 Nov 2024. IEEE Trans. Neural Networks Learn. Syst. 2024. License: CC BY-SA 4.0
Abstract: The goal of visual navigation is to steer an agent toward a given target object using its current observation. Learning an informative visual representation and a robust navigation policy is crucial for this task. To improve both components, we propose three complementary techniques: a heterogeneous relation graph (HRG), a value-regularized navigation policy (VRP), and gradient-based meta-learning (ML). HRG integrates object relationships, including semantic closeness and spatial directions, e.g., a knife usually co-occurs with a bowl semantically or lies to the left of a fork spatially; it improves visual representation learning. Both VRP and gradient-based ML make the navigation policy more robust, helping the agent escape deadlock states such as being stuck or looping. Specifically, gradient-based ML supervises policy-network training and closes the gap between the seen and unseen environment distributions. Within this process, VRP maximizes the mutual information between the visual observation and the navigation policy, enabling more informed navigation decisions. Our framework shows superior performance over the current state of the art (SOTA) in terms of success rate and success weighted by path length (SPL). Our HRG outperforms the Visual Genome knowledge graph on cross-scene generalization, with $\approx 56\%$ and $\approx 39\%$ improvements on Hits@$5^{*}$ (proportion of correct entities ranked in the top 5) and MRR$^{*}$ (mean reciprocal rank), respectively. Our code and HRG datasets will be made publicly available to the scientific community.
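For reference, Hits@K and MRR are standard relation-graph (link-prediction) metrics: given the rank assigned to the correct entity for each query, Hits@K is the fraction of queries ranked in the top K, and MRR is the mean of the reciprocal ranks. A minimal sketch of their computation follows; the function names and example ranks are illustrative, not taken from the paper's code.

```python
import numpy as np

def hits_at_k(ranks, k=5):
    """Proportion of queries whose correct entity is ranked in the top k."""
    ranks = np.asarray(ranks)
    return float(np.mean(ranks <= k))

def mean_reciprocal_rank(ranks):
    """Average of 1/rank over all queries (ranks are 1-indexed)."""
    ranks = np.asarray(ranks, dtype=float)
    return float(np.mean(1.0 / ranks))

# Hypothetical ranks of the correct entity for five queries.
ranks = [1, 3, 7, 2, 12]
print(hits_at_k(ranks, k=5))        # 0.6 (three of five ranked in top 5)
print(mean_reciprocal_rank(ranks))  # ~0.412
```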