Despite the remarkable success of neural networks, particularly those built on MLPs and Transformers, we reveal that they exhibit potential flaws in modeling and reasoning about periodicity: they achieve satisfactory performance within the domain of the training period, but struggle to generalize out of domain (OOD). The root cause is that they tend to memorize periodic data rather than genuinely understand the underlying principles of periodicity. Yet periodicity is essential to many forms of reasoning and generalization, underpinning predictability across natural and engineered systems through recurring patterns in observations. In this paper, we propose FAN, a novel network architecture based on Fourier Analysis, which can efficiently model and reason about periodic phenomena while maintaining general-purpose ability. By introducing the Fourier Series, periodicity is naturally integrated into the structure and computation of FAN. On this basis, FAN is designed around two core principles: 1) its periodicity modeling capability scales with network depth, and 2) periodicity modeling is available throughout the network, enabling more effective expression and prediction of periodic patterns. FAN can seamlessly replace MLP in various model architectures with fewer parameters and FLOPs, making it a promising substitute for the traditional MLP. Through extensive experiments, we demonstrate the superiority of FAN in periodicity modeling tasks, as well as its effectiveness and generalizability across a range of real-world tasks, including symbolic formula representation, time series forecasting, language modeling, and image recognition.
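To make the idea concrete, below is a minimal sketch (not the authors' exact formulation) of a FAN-style layer in PyTorch: a Fourier branch that projects the input and applies cosine and sine, concatenated with a standard activated linear branch so the layer retains general-purpose capacity. The module name `FANLayer`, the split ratio `p_ratio`, and the choice of GELU are illustrative assumptions.

```python
import torch
import torch.nn as nn


class FANLayer(nn.Module):
    """Sketch of a FAN-style layer: a periodic (Fourier) branch concatenated
    with a general (MLP-like) branch. Details are assumptions, not the
    paper's exact design."""

    def __init__(self, in_dim: int, out_dim: int, p_ratio: float = 0.25):
        super().__init__()
        # d_p units feed the Fourier branch; the cos/sin pair doubles its
        # width, so the remaining output units go to the general branch.
        d_p = int(out_dim * p_ratio)
        d_g = out_dim - 2 * d_p
        self.w_p = nn.Linear(in_dim, d_p, bias=False)  # periodic projection
        self.w_g = nn.Linear(in_dim, d_g)              # general projection
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p = self.w_p(x)
        # cos/sin features encode periodicity directly in the layer's output,
        # rather than leaving the network to memorize repeating patterns.
        return torch.cat([torch.cos(p), torch.sin(p), self.act(self.w_g(x))], dim=-1)


# Usage: stack FAN-style layers as a drop-in replacement for an MLP block.
model = nn.Sequential(FANLayer(1, 64), FANLayer(64, 64), nn.Linear(64, 1))
y = model(torch.linspace(-10.0, 10.0, 256).unsqueeze(-1))  # shape: (256, 1)
```

Because each stacked layer contributes its own Fourier features, the sketch reflects both stated principles: periodicity modeling is present at every layer, and its capacity grows with depth.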