Evolving RL: Discovering New Activation Functions using LLMs

Published: 05 Mar 2025, Last Modified: 28 Mar 2025
ICLR 2025 Workshop AgenticAI Poster
License: CC BY 4.0
Keywords: Reinforcement Learning, Evolutionary Search, Large Language Models, LLM Hypothesis Generation
TL;DR: Discovering novel activation functions for RL using LLMs and Evolutionary Search
Abstract:

Deep Reinforcement Learning (DRL) has traditionally inherited activation functions from supervised learning, despite fundamental differences in learning dynamics and objectives. We present EvolveAct, a novel framework that leverages large language models (LLMs) and evolutionary search to automatically discover activation functions tailored to specific RL tasks. Our method combines genetic programming with code LLMs to explore a rich space of mathematical functions, optimizing for stability and performance in DRL training. Experimental results across multiple environments show that the discovered activation functions consistently outperform standard choices such as ReLU and Tanh, improving average final performance by 37.25% on the MinAtar suite and 28.3% on the Brax suite. By jointly optimizing over multiple diverse environments, we discover activation functions that generalize well across different RL domains. This research provides a foundation for automating fundamental architectural choices in deep reinforcement learning systems.
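To make the search loop described in the abstract concrete, the sketch below shows one plausible shape of an LLM-guided evolutionary search over activation functions. It is a toy illustration, not the authors' implementation: the LLM proposal step is stubbed out with simple string mutations (`propose_variants`), and the fitness function uses a cheap stability heuristic in place of full DRL training. All function names here are hypothetical.

```python
# Toy sketch of an EvolveAct-style search loop (hypothetical names; the real
# framework uses a code LLM for proposals and full RL training for evaluation).
import math
import random

# Candidate activations expressed as single-expression strings over `x`.
SEED_CANDIDATES = [
    "max(0.0, x)",                               # ReLU
    "math.tanh(x)",                              # Tanh
    "x * (1.0 / (1.0 + math.exp(-x)))",          # SiLU / Swish
    "x * math.tanh(math.log1p(math.exp(x)))",    # Mish
]

def compile_activation(expr):
    """Turn an expression string into a callable f(x)."""
    return lambda x: eval(expr, {"math": math, "max": max, "min": min}, {"x": x})

def propose_variants(expr, n=2):
    """Stand-in for the LLM proposal step: crude rescaling mutations only."""
    scales = [0.5, 1.5, 2.0]
    return [f"({expr}) * {random.choice(scales)}" for _ in range(n)]

def fitness(expr):
    """Cheap surrogate for RL evaluation: prefer bounded, non-degenerate outputs.
    The real framework would instead train a DRL agent and score return/stability."""
    f = compile_activation(expr)
    xs = [i / 10.0 for i in range(-50, 51)]
    try:
        ys = [f(x) for x in xs]
    except (OverflowError, ValueError, ZeroDivisionError):
        return float("-inf")
    spread = max(ys) - min(ys)          # reward non-constant responses
    blowup = max(abs(y) for y in ys)    # penalize exploding outputs
    return spread - 0.1 * blowup

def evolve(generations=5, population_size=8):
    population = list(SEED_CANDIDATES)
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: population_size // 2]
        children = [v for p in parents for v in propose_variants(p)]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    print("Best candidate activation:", evolve())
```

In the paper's setting, the proposal step would query a code LLM for new candidate expressions conditioned on the best performers so far, and the fitness of each candidate would come from training agents across multiple environments (e.g., the MinAtar and Brax suites mentioned above) to encourage functions that generalize.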

Submission Number: 29