Abstract: Explainable Artificial Intelligence (XAI) aims to enhance transparency and trust in AI systems by providing insights into their decision-making processes. While there has been significant progress in developing explainability methods for AI, these advances have largely not addressed black-box optimization algorithms. In this paper, we present RX-BO (Rule-based Explanations for Bayesian Optimization), a novel framework that brings explainability to black-box Bayesian optimization with a Gaussian process (GP) backbone. Leveraging the GP model's approximation and uncertainty estimation capabilities, RX-BO extracts distribution-aware rules through a post-hoc rule-based explainability method. These rules shed light on different regions of the posterior distribution, enabling transparent and trustworthy decision-making. The framework incorporates a pairwise Mahalanobis distance-based hierarchical agglomerative clustering algorithm with the Ward criterion for generating rule proposals, and it employs traditional metrics such as support, coverage, and confidence for selecting high-quality explanations. We evaluate RX-BO on an example optimization problem and six hyperparameter optimization tasks involving three machine learning models (classification and regression) across two datasets. The results demonstrate that RX-BO improves rule confidence and rule granularity control compared to decision trees and rule-based XAI frameworks. Furthermore, RX-BO introduces a novel approach that identifies interesting areas of the search space based on their likelihood; this measure allows explanations to be ranked by how interesting they would be to an end user. Overall, RX-BO enhances the understanding and interpretability of black-box Bayesian optimization results, contributing to the broader field of XAI.
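As a minimal sketch of the rule-proposal step named above (not the authors' implementation), the snippet below clusters sampled search-space points with hierarchical agglomerative clustering under the Ward criterion in a Mahalanobis geometry, then turns each cluster into an axis-aligned box as a candidate rule region. The sample data, the number of clusters, and the box-style rule extraction are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical sample of search-space points (e.g. candidates scored by a GP
# posterior); the data and dimensionality here are purely illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # 200 points, 3 search-space dimensions

# Mahalanobis distance equals Euclidean distance after whitening by the
# inverse covariance, so we whiten first and then apply standard Ward linkage.
cov = np.cov(X, rowvar=False)
L = np.linalg.cholesky(np.linalg.inv(cov))
X_whitened = X @ L  # rows become L^T x, i.e. the whitened points

# Hierarchical agglomerative clustering with the Ward criterion on the
# whitened (Mahalanobis) geometry; each resulting cluster seeds a rule proposal.
Z = linkage(X_whitened, method="ward")
labels = fcluster(Z, t=4, criterion="maxclust")  # e.g. four rule proposals

# One candidate rule per cluster: an axis-aligned bounding box over its members.
for c in np.unique(labels):
    members = X[labels == c]
    lo, hi = members.min(axis=0), members.max(axis=0)
    print(f"cluster {c}: {len(members)} points, box {lo.round(2)} .. {hi.round(2)}")
```

In a full pipeline, such candidate regions would then be scored with the support, coverage, and confidence metrics mentioned in the abstract before the best explanations are kept.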