Shaping Laser Pulses with Reinforcement Learning

Published: 09 May 2025 · Last Modified: 28 May 2025 · RLC 2025 · CC BY 4.0
Keywords: Applied RL, DRL for Science, Sim-to-real, Domain Randomization, Ultra-short pulses
TL;DR: Learning to control laser pulses in simulation from non-destructive imaging that captures temporal pulse duration.
Abstract: High Power Laser (HPL) systems operate in the femtosecond regime, the shortest timescale achievable in experimental physics. HPL systems are instrumental in high-energy physics, leveraging ultra-short pulse durations to yield extremely high intensities, which are essential for both practical applications and theoretical advancements in light-matter interactions. Traditionally, the parameters regulating HPL optical performance are tuned manually by human experts or optimized using black-box methods that can be computationally demanding. Critically, black-box methods rely on stationarity assumptions that overlook the complex dynamics of high-energy physics and the day-to-day changes of real-world experimental settings, and therefore often need to be restarted. Deep Reinforcement Learning (DRL) offers a promising alternative by enabling sequential decision making in non-stationary settings. This work investigates the safe application of DRL to HPL systems and extends current research by (1) learning a control policy directly from images and (2) addressing the need for generalization across diverse dynamics. We evaluate our method across various configurations and observe that DRL effectively enables cross-domain adaptability, coping with fluctuations in the dynamics while achieving 90\% of the target intensity in test environments.
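To make the domain-randomization idea in the abstract concrete, below is a minimal, illustrative NumPy sketch (not the authors' implementation): a toy pulse-shaping environment in which an unknown residual spectral phase (GDD/TOD) is re-sampled at every reset, the agent applies corrective dispersion, observes an image-like temporal intensity trace, and is rewarded by the peak intensity relative to the transform-limited pulse. The class name, parameter ranges, and reward definition are assumptions for illustration only.

```python
import numpy as np

class RandomizedPulseEnv:
    """Illustrative toy environment for pulse shaping with domain randomization.

    Assumptions (not from the paper): Gaussian spectrum, residual GDD/TOD
    sampled per episode, reward = peak temporal intensity / transform limit.
    """

    def __init__(self, n_freq=256, horizon=20, seed=0):
        self.rng = np.random.default_rng(seed)
        self.omega = np.linspace(-5.0, 5.0, n_freq)      # normalized angular frequency
        self.spectrum = np.exp(-self.omega ** 2)         # Gaussian spectral amplitude
        tl_field = np.fft.fftshift(np.fft.ifft(self.spectrum))
        self.tl_peak = np.max(np.abs(tl_field) ** 2)     # transform-limited peak intensity
        self.horizon = horizon

    def _intensity(self):
        # Total spectral phase = unknown residual dispersion + agent's correction.
        phase = ((self.residual_gdd + self.control_gdd) * self.omega ** 2 / 2
                 + (self.residual_tod + self.control_tod) * self.omega ** 3 / 6)
        field = np.fft.fftshift(np.fft.ifft(self.spectrum * np.exp(1j * phase)))
        return np.abs(field) ** 2                        # image-like temporal trace

    def reset(self):
        # Domain randomization: sample the system's hidden dispersion each episode.
        self.residual_gdd = self.rng.uniform(-2.0, 2.0)
        self.residual_tod = self.rng.uniform(-1.0, 1.0)
        self.control_gdd = 0.0
        self.control_tod = 0.0
        self.t = 0
        return self._intensity()

    def step(self, action):
        # action: small increments applied to the corrective (GDD, TOD) terms.
        d_gdd, d_tod = np.clip(action, -0.5, 0.5)
        self.control_gdd += d_gdd
        self.control_tod += d_tod
        obs = self._intensity()
        reward = float(np.max(obs) / self.tl_peak)       # 1.0 = transform-limited pulse
        self.t += 1
        done = self.t >= self.horizon
        return obs, reward, done, {}


if __name__ == "__main__":
    env = RandomizedPulseEnv()
    obs = env.reset()
    obs, reward, done, _ = env.step(np.array([0.1, 0.0]))
    print(f"peak intensity fraction: {reward:.3f}")
```

Training a policy on many such randomized episodes is what lets it cope with day-to-day fluctuations in the real system's dynamics rather than overfitting to a single fixed configuration.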
Submission Number: 318