Simplicial Embeddings Improve Sample Efficiency in Actor–Critic Agents

ICLR 2026 Conference Submission 18504 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: reinforcement learning, deep reinforcement learning, actor critic, representation learning, state embeddings
TL;DR: We propose the use of simplicial embeddings in actor-critic methods to improve sample efficiency and final performance, without sacrificing runtime.
Abstract: Recent works have proposed accelerating the wall-clock training time of actor-critic methods via large-scale environment parallelization; unfortunately, these methods can still require a large number of environment interactions to reach a desired level of performance. Noting that well-structured representations can improve the generalization and sample efficiency of deep reinforcement learning (RL) agents, we propose the use of simplicial embeddings: lightweight representation layers that constrain embeddings to simplicial structures. This geometric inductive bias results in sparse and discrete features that stabilize critic bootstrapping and strengthen policy gradients. When applied to FastTD3, FastSAC, and PPO, simplicial embeddings consistently improve sample efficiency and final performance across a variety of continuous- and discrete-control environments, without any loss in runtime speed.
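To make the idea concrete, below is a minimal sketch of a simplicial embedding layer in PyTorch, in the spirit of the abstract's description: features are projected into several groups and a per-group softmax constrains each group to a probability simplex, yielding sparse, bounded representations. The class name, hyperparameters, and placement between an encoder and the actor/critic heads are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimplicialEmbedding(nn.Module):
    """Illustrative simplicial embedding layer (assumed design, not the paper's code).

    Projects an input feature vector into `num_groups` groups of `group_dim`
    logits and applies a softmax within each group, so every group lies on a
    (group_dim - 1)-simplex. The concatenated output is the embedding fed to
    the actor and critic heads.
    """

    def __init__(self, in_dim: int, num_groups: int, group_dim: int,
                 temperature: float = 1.0):
        super().__init__()
        self.num_groups = num_groups
        self.group_dim = group_dim
        self.temperature = temperature
        self.proj = nn.Linear(in_dim, num_groups * group_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (..., num_groups, group_dim) logits, one softmax per group.
        logits = self.proj(x).view(*x.shape[:-1], self.num_groups, self.group_dim)
        simplex = F.softmax(logits / self.temperature, dim=-1)
        # Flatten back to a single feature vector of size num_groups * group_dim.
        return simplex.flatten(start_dim=-2)

# Example usage: insert between a state encoder and the policy/value heads.
layer = SimplicialEmbedding(in_dim=256, num_groups=32, group_dim=8)
features = torch.randn(4, 256)          # batch of encoder outputs
embedding = layer(features)             # shape: (4, 256), each group sums to 1
```

A lower softmax temperature pushes each group toward a near one-hot (more discrete) code; the specific group sizes and temperature used in the paper are not stated in the abstract.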
Primary Area: reinforcement learning
Submission Number: 18504