Abstract: We study the classification problem for
high-dimensional data with $n$ observations on $p$ features where the
$p \times p$ covariance matrix $\Sigma$ exhibits a spiked eigenvalue structure and the
vector $\zeta$, given by the difference between the {\em whitened} mean
vectors, is sparse. We analyze an adaptive
classifier (adaptive with respect to the sparsity $s$) that first
performs dimension reduction on the feature vectors and then classifies
in the dimensionally reduced space: the classifier whitens
the data, screens the features by keeping only those corresponding
to the $s$ largest coordinates of $\zeta$, and finally applies Fisher's
linear discriminant to the selected features. Leveraging recent
results on entrywise matrix perturbation bounds for covariance
matrices, we show that the resulting classifier is Bayes optimal
whenever $n \rightarrow \infty$ and $s \sqrt{n^{-1} \ln p} \rightarrow
0$. Finally, experimental results on real and synthetic data indicate that
the classifier is competitive with
state-of-the-art methods while also selecting a smaller number of features.
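The whiten-screen-classify pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the plain pooled-covariance estimator with a small ridge, and the decision rule are all assumptions; the paper instead exploits the spiked structure of $\Sigma$ when whitening.

```python
import numpy as np

def sparse_whitened_lda(X0, X1, s):
    """Sketch of the adaptive classifier: whiten, screen the s largest
    coordinates of the whitened mean difference zeta, then apply a
    Fisher-type linear rule on the selected features.
    (Hypothetical implementation; estimator details are assumptions.)"""
    n0, n1 = len(X0), len(X1)
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled sample covariance with a tiny ridge for invertibility
    # (for illustration only; the paper uses the spiked structure).
    Xc = np.vstack([X0 - mu0, X1 - mu1])
    Sigma = Xc.T @ Xc / (n0 + n1 - 2) + 1e-8 * np.eye(X0.shape[1])
    # Whitening transform Sigma^{-1/2} via eigendecomposition.
    w, V = np.linalg.eigh(Sigma)
    W = V @ np.diag(w ** -0.5) @ V.T
    # Whitened mean difference zeta; screen by keeping the s
    # coordinates largest in absolute value.
    zeta = W @ (mu1 - mu0)
    keep = np.argsort(np.abs(zeta))[-s:]
    # After whitening, the within-class covariance is (approximately)
    # the identity, so Fisher's direction on the selected features is
    # just the restricted mean difference.
    def classify(x):
        z = (W @ (x - (mu0 + mu1) / 2))[keep]
        return int(z @ zeta[keep] > 0)  # 1 -> class 1, 0 -> class 0
    return classify
```

On well-separated synthetic Gaussian classes with a sparse mean shift, this sketch recovers the informative coordinates and classifies accurately, mirroring the regime ($s \sqrt{n^{-1} \ln p} \rightarrow 0$) in which the paper proves Bayes optimality.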
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Trevor_Campbell1
Submission Number: 5423