AFINets: Attentive Feature Integration Networks for Image Classification

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Withdrawn Submission · Readers: Everyone
Keywords: Image Classification, Convolutional Neural Network, Attention Mechanisms, Feature Reuse
Abstract: Convolutional Neural Networks (CNNs) have achieved tremendous success in a number of learning tasks, e.g., image classification. Recent advances in CNNs, such as ResNets and DenseNets, mainly focus on the skip and concatenation operators to avoid gradient vanishing. However, such operators either largely neglect information across layers (as in ResNets) or introduce substantial redundancy by repeatedly copying features from previous layers (as in DenseNets). In this paper, we design Attentive Feature Integration (AFI) modules, which are applicable to most recent network architectures, leading to new architectures named AFINets. AFINets can adaptively integrate distinct information by explicitly modeling the subordinate relationship between different levels of features. Experimental results on benchmark datasets demonstrate the effectiveness of the proposed AFI modules.
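The abstract does not specify the internal design of an AFI module, but the stated idea, adaptively weighting features from different levels rather than skipping or concatenating them, can be illustrated with a minimal sketch. The module below is a hypothetical interpretation: it pools each incoming feature map, computes per-source attention scores with a small gating network, and returns an attention-weighted sum. The class name `AFIModule`, its parameters, and the squeeze-and-excitation-style gate are assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AFIModule(nn.Module):
    """Hypothetical attentive feature integration: re-weights feature maps
    from earlier layers with learned attention before fusing them."""

    def __init__(self, num_sources, channels, reduction=4):
        super().__init__()
        # Gate that maps pooled statistics of all sources to one score per source.
        self.score = nn.Sequential(
            nn.Linear(num_sources * channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, num_sources),
        )

    def forward(self, features):
        # features: list of tensors, each (B, C, H, W), assumed to be already
        # projected/resized to a common shape by the host network.
        pooled = torch.cat(
            [F.adaptive_avg_pool2d(f, 1).flatten(1) for f in features], dim=1
        )                                                   # (B, S*C)
        weights = torch.softmax(self.score(pooled), dim=1)  # (B, S)
        stacked = torch.stack(features, dim=1)              # (B, S, C, H, W)
        return (weights[:, :, None, None, None] * stacked).sum(dim=1)

# Usage: integrate three same-shaped feature maps from earlier stages.
afi = AFIModule(num_sources=3, channels=64)
feats = [torch.randn(2, 64, 32, 32) for _ in range(3)]
out = afi(feats)  # (2, 64, 32, 32)
```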
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: AFI modules applicable to most recent network architectures explicitly model a lightweight and selective feature integration scheme.
Reviewed Version (pdf): https://openreview.net/references/pdf?id=FY_CVxq1o9