Drop-Bottleneck: Learning Discrete Compressed Representation for Noise-Robust Exploration

Published: 12 Jan 2021 · Last Modified: 05 May 2023 · ICLR 2021 Poster · Readers: Everyone
Keywords: Reinforcement learning, Information bottleneck
Abstract: We propose a novel information bottleneck (IB) method named Drop-Bottleneck, which discretely drops input features that are irrelevant to the target variable. Drop-Bottleneck not only enjoys a simple and tractable compression objective but also provides a deterministic compressed representation of the input variable, which is useful for inference tasks that require consistent representations. Moreover, it can jointly learn a feature extractor and select features according to each feature dimension's relevance to the target task, which is unattainable by most neural network-based IB methods. Building on Drop-Bottleneck, we propose an exploration method for reinforcement learning. On a range of noisy, sparse-reward maze navigation tasks in VizDoom (Kempka et al., 2016) and DMLab (Beattie et al., 2016), our exploration method achieves state-of-the-art performance. As a new IB framework, we demonstrate that Drop-Bottleneck outperforms the Variational Information Bottleneck (VIB) (Alemi et al., 2017) in multiple aspects, including adversarial robustness and dimensionality reduction.
One-sentence Summary: Our novel IB method, Drop-Bottleneck, discretely drops task-irrelevant input features to build the compressed representation and shows state-of-the-art performance on noisy, sparse-reward navigation tasks in reinforcement learning.
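To make the core mechanism concrete, the sketch below illustrates per-dimension feature dropping with learnable drop probabilities in PyTorch. It is a minimal, hypothetical illustration only: the names (`DropBottleneckSketch`, `keep_logits`) are invented here, the straight-through Bernoulli relaxation and the thresholded deterministic mask are assumptions rather than the paper's exact estimator, and the compression penalty shown is a crude surrogate for the paper's tractable bound on the mutual information between the original and compressed features. See the linked repository for the authors' implementation.

```python
# Minimal sketch of per-dimension feature dropping in the spirit of
# Drop-Bottleneck. All names and the straight-through relaxation used here are
# assumptions for illustration, not the paper's exact method.
import torch
import torch.nn as nn


class DropBottleneckSketch(nn.Module):
    def __init__(self, feature_dim: int):
        super().__init__()
        # One learnable keep-probability logit per feature dimension.
        self.keep_logits = nn.Parameter(torch.zeros(feature_dim))

    def keep_probs(self) -> torch.Tensor:
        return torch.sigmoid(self.keep_logits)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        p_keep = self.keep_probs()
        if self.training:
            # Sample a discrete keep/drop mask per dimension; the
            # straight-through trick routes gradients to the keep logits.
            mask_hard = torch.bernoulli(p_keep.expand_as(z))
            mask = mask_hard + p_keep - p_keep.detach()
        else:
            # Deterministic compressed representation at inference:
            # keep a dimension iff its keep probability exceeds 0.5.
            mask = (p_keep > 0.5).float().expand_as(z)
        return z * mask

    def compression_penalty(self) -> torch.Tensor:
        # Crude surrogate that encourages dropping dimensions; the paper's
        # actual compression term bounds the mutual information between the
        # original and dropped features, which this sketch does not estimate.
        return self.keep_probs().sum()
```

A training objective would then combine the task loss with `beta * layer.compression_penalty()`, mirroring the usual IB trade-off between prediction and compression.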
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Code: [jaekyeom/drop-bottleneck](https://github.com/jaekyeom/drop-bottleneck)
Data: [VizDoom](https://paperswithcode.com/dataset/vizdoom)