Beyond Euler: An Explainable Machine Learning Framework for Predicting and Interpreting Buckling Instabilities in Non-Ideal Materials
Keywords: explainable AI (XAI), machine learning, SHAP, structural mechanics, Extreme Gradient Boosting (XGBoost), buckling, materials science, physics-informed ML
TL;DR: Physics-informed, interpretable boosting learns residual corrections to Euler buckling; SHAP reveals a boundary-condition effect.
Abstract: Predicting structural failure is a fundamental objective in materials science and mechanical engineering. Euler’s classical formula, the standard for predicting the buckling instability of slender columns for over 250 years, assumes idealized material properties, an assumption that can lead to unreliable predictions and potentially catastrophic failures in critical infrastructure. This study addresses that gap by introducing a framework that combines machine learning with modern explainability techniques to model complex physical systems. Using pasta as a model non-ideal material, we conducted a comprehensive experimental analysis and assembled a dataset of 147 controlled buckling experiments spanning four distinct pasta gauges. We then developed a physics-informed XGBoost model, incorporating both raw geometric measurements and a composite feature derived from Euler’s formula ($G = d^4/L^2$, to which the critical load of a circular column is proportional), and evaluated the model’s performance under a 5-fold cross-validation scheme. The model demonstrated strong predictive power, achieving an average coefficient of determination (R²) of 0.97 and a root mean squared error (RMSE) of 0.14 N. We also examined the model’s internal decision-making process using SHAP (SHapley Additive exPlanations). The analysis confirmed the primary importance of the theoretically derived feature but also revealed that the model learned to use the raw geometric measurements as crucial correction factors. This study presents a powerful proof of concept for using interpretable machine learning to achieve not only predictive accuracy but also deeper physical insight into complex, non-ideal systems. The framework presented here has broad implications for advancing understanding and design capabilities in materials science, engineering, and advanced manufacturing.
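As a rough illustration of the pipeline described in the abstract, the Python sketch below constructs the composite Euler feature $G = d^4/L^2$, fits an XGBoost regressor under 5-fold cross-validation, and attributes its predictions with SHAP. The synthetic data, column names, and hyperparameters are illustrative assumptions, not the authors' actual experimental measurements or settings.

```python
# Minimal sketch of a physics-informed XGBoost + SHAP pipeline.
# All data values, column names, and hyperparameters are placeholders.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score, mean_squared_error

# Toy stand-in for a 147-experiment dataset: diameter d (mm), length L (mm),
# and measured critical buckling load P_cr (N).
rng = np.random.default_rng(0)
d = rng.uniform(1.0, 2.5, 147)
L = rng.uniform(100.0, 250.0, 147)
P_cr = 0.05 * d**4 / (L / 100.0) ** 2 + rng.normal(0, 0.05, 147)

df = pd.DataFrame({"d": d, "L": L})
df["G"] = df["d"] ** 4 / df["L"] ** 2   # composite Euler feature G = d^4 / L^2
X, y = df[["d", "L", "G"]], P_cr

# 5-fold cross-validation of an XGBoost regressor on raw + composite features.
r2s, rmses = [], []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
    model.fit(X.iloc[train_idx], y[train_idx])
    pred = model.predict(X.iloc[test_idx])
    r2s.append(r2_score(y[test_idx], pred))
    rmses.append(mean_squared_error(y[test_idx], pred) ** 0.5)
print(f"mean R2 = {np.mean(r2s):.2f}, mean RMSE = {np.mean(rmses):.2f} N")

# SHAP attribution for the last fold's model: per-feature contributions to each
# prediction, including the physics-derived feature G.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[test_idx])
shap.summary_plot(shap_values, X.iloc[test_idx], show=False)
```

The composite feature follows from Euler's critical load $P_{cr} = \pi^2 E I / (KL)^2$ with $I = \pi d^4/64$ for a circular cross-section, so $P_{cr} \propto d^4/L^2$; feeding $G$ alongside the raw $d$ and $L$ lets the tree ensemble learn deviations from the idealized scaling.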
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 21738