Constructing artificial life and materials scientists with accelerated AI using Deep AndersoNN

Published: 17 Jun 2024, Last Modified: 16 Jul 2024, ML4LMS Poster, CC BY 4.0
Keywords: Artificial life scientist, artificial material scientist, artificial intelligence, Anderson extrapolation, deep equilibrium, high performance computing, density functional theory, drug discovery
TL;DR: We present a novel method for constructing artificial life and materials scientists that perform high-throughput density functional theory classification, based on Anderson-accelerated training and inference with deep equilibrium networks.
Abstract: Deep AndersoNN is a framework for accelerating AI by taking the continuum limit as the number of explicit layers in a neural network approaches infinity, so that the network can be treated as a single implicit layer, known as a deep equilibrium model. Solving for the parameters of a deep equilibrium model reduces to a nonlinear fixed-point iteration problem, enabling the use of vector-to-vector iterative solvers and windowing techniques, such as Anderson extrapolation, to accelerate convergence to the deep equilibrium fixed point. Here we show that Deep AndersoNN achieves up to an order of magnitude of speed-up in training and inference. The method is demonstrated on density functional theory results for industrial applications by constructing artificial life and materials 'scientists' capable of classifying biomolecules, drugs, and compounds as strongly or weakly polar, sorting metal-organic frameworks by pore size, and classifying crystalline materials as metals, semiconductors, or insulators, using graph images of node-neighbor representations transformed from atom-bond networks. Results exhibit accuracy of up to 98% and showcase the synergy between Deep AndersoNN and the machine learning capabilities of modern computing architectures, e.g. GPUs, for accelerated computational life and materials science through rapid identification of structure-property relationships. This paves the way for saving up to 90% of the compute required for AI, reducing its carbon footprint by up to 60 gigatons per year by 2030, and scaling beyond the memory limitations of explicit neural networks in life and materials science and other domains.
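To make the fixed-point view concrete, the sketch below shows a generic Anderson-accelerated solver in PyTorch for the equilibrium z* = f(z*) of a deep equilibrium layer. It is not taken from the paper's implementation; the function name, window size, damping, and regularization defaults are illustrative assumptions, but the windowed-extrapolation structure is the standard Anderson scheme the abstract refers to.

```python
import torch

def anderson_fixed_point(f, z0, m=5, max_iter=50, tol=1e-4, lam=1e-4, beta=1.0):
    """Anderson-accelerated solve of z = f(z), batched over the first dimension.

    f    : maps a tensor shaped like z0 to a tensor of the same shape (the DEQ cell)
    z0   : initial iterate, shape (batch, ...)
    m    : window size (number of past iterates used in the extrapolation)
    lam  : Tikhonov regularization for the least-squares mixing weights
    beta : damping/mixing parameter
    """
    bsz = z0.shape[0]
    d = z0.reshape(bsz, -1).shape[1]
    X = torch.zeros(bsz, m, d, dtype=z0.dtype, device=z0.device)  # past iterates
    F = torch.zeros(bsz, m, d, dtype=z0.dtype, device=z0.device)  # f of past iterates
    X[:, 0] = z0.reshape(bsz, -1)
    F[:, 0] = f(z0).reshape(bsz, -1)
    X[:, 1] = F[:, 0]
    F[:, 1] = f(F[:, 0].reshape_as(z0)).reshape(bsz, -1)

    # Bordered linear system enforcing that the mixing weights alpha sum to one.
    H = torch.zeros(bsz, m + 1, m + 1, dtype=z0.dtype, device=z0.device)
    H[:, 0, 1:] = H[:, 1:, 0] = 1
    y = torch.zeros(bsz, m + 1, 1, dtype=z0.dtype, device=z0.device)
    y[:, 0] = 1

    res = None
    for k in range(2, max_iter):
        n = min(k, m)
        G = F[:, :n] - X[:, :n]  # residuals over the current window
        H[:, 1:n + 1, 1:n + 1] = torch.bmm(G, G.transpose(1, 2)) + \
            lam * torch.eye(n, dtype=z0.dtype, device=z0.device)[None]
        alpha = torch.linalg.solve(H[:, :n + 1, :n + 1], y[:, :n + 1])[:, 1:n + 1, 0]

        # Extrapolated update: mix past f-values and iterates with weights alpha.
        X[:, k % m] = beta * (alpha[:, None] @ F[:, :n])[:, 0] + \
            (1 - beta) * (alpha[:, None] @ X[:, :n])[:, 0]
        F[:, k % m] = f(X[:, k % m].reshape_as(z0)).reshape(bsz, -1)
        res = (F[:, k % m] - X[:, k % m]).norm() / (1e-5 + F[:, k % m].norm())
        if res < tol:
            break
    return X[:, k % m].reshape_as(z0), res
```

In a deep equilibrium classifier, f would be the network cell z ↦ layer(z, x) for a fixed input x; the returned equilibrium is passed to the output head, and gradients can be computed implicitly at the fixed point rather than by backpropagating through every iteration, which is the source of the memory savings mentioned in the abstract.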
Supplementary Material: pdf
Poster: pdf
Submission Number: 15