Keywords: floating point representation, numerical precision, materials science, scientific application, machine learning potential, physics-informed neural network, density functional theory, finite-difference time-domain
TL;DR: This position paper highlights precision-related issues in machine learning for science and recommends reporting comparisons between FP32 and FP64 results, as well as releasing FP64 models.
Abstract: The machine learning community has focused on computational efficiency, often leveraging lower-precision formats such as FP16 in place of the standard FP32.
In contrast, little attention has been paid to higher-precision formats, such as FP64, despite their critical role in scientific domains like materials science, where even small numerical differences can lead to significant inaccuracies in physicochemical properties.
This need for high precision extends to the emerging field of \textit{machine learning for scientific tasks}, yet it has not been thoroughly investigated.
According to several studies and our toy experiment, models trained with FP32 show insufficient accuracy compared to those trained with FP64, indicating that higher precision is as crucial in scientific machine learning as it is in traditional scientific computing.
This precision issue limits the potential of scientific machine learning to replace traditional scientific computing in practical research.
Our position paper not only highlights these precision-related issues but also recommends reporting comparisons between FP32 and FP64 results and encourages the release of FP64 models.
We believe that these efforts can enable machine learning to contribute meaningfully to the natural sciences, ensuring both scientific reliability and practical applicability.
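As a minimal illustration of why FP64 can matter here (a sketch with hypothetical energy values, not the paper's actual experiment), consider a formation-energy-like quantity obtained as the small difference between two large total energies, a common situation in materials science:

```python
import numpy as np

# Illustrative sketch only: hypothetical total energies (eV) whose true
# difference (~1.1e-6 eV) lies below the FP32 spacing at this magnitude
# (~1.2e-4 eV), so FP32 cannot resolve it.
E_bulk = 1234.5678901234
E_defect = 1234.5678912345

diff_fp64 = np.float64(E_defect) - np.float64(E_bulk)
diff_fp32 = np.float32(E_defect) - np.float32(E_bulk)

print(f"FP64 difference: {diff_fp64:.6e} eV")  # ~1.1e-06 eV, close to the true value
print(f"FP32 difference: {diff_fp32:.6e} eV")  # 0.0 here: both energies round to the same FP32 value
```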
Submission Number: 107