Abstract: Very deep convolutional networks have been central to the
largest advances in image recognition performance in recent years.
One example is the Inception architecture, which has been shown to achieve
good performance at relatively low computational cost.
Recently, the introduction of residual connections in conjunction with a more
traditional architecture has yielded state-of-the-art
performance in the 2015 ILSVRC challenge; its performance was similar
to the latest generation Inception-v3 network. This raises the question of whether
there is any benefit in combining the Inception architecture with residual
connections. Here we give clear empirical evidence that residual connections
accelerate the training of Inception networks significantly;
however, when fully trained, the final quality of the non-residual Inception
variants seems to be close to that of the residual versions.
We present several new streamlined architectures for both residual and
non-residual Inception networks. With an ensemble of three residual models and
one pure Inception-v4 model, we achieve 3.08\% top-5 error on the test set of
the ImageNet classification (CLS) challenge.
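
For intuition, the following is a minimal PyTorch sketch, illustrative only and
not the paper's actual Inception-ResNet design, of the idea the abstract
describes: an Inception-style module with parallel convolutional branches whose
concatenated output is added back to the block input through a residual
(shortcut) connection. The branch widths, filter sizes, and the
`InceptionResidualBlock` name are assumptions made for this example.

```python
import torch
import torch.nn as nn


class InceptionResidualBlock(nn.Module):
    """Toy Inception-style block with a residual connection.

    Hypothetical sketch: the paper's Inception-ResNet blocks use different
    branch structures, filter sizes, and residual scaling.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Two parallel branches, as in Inception-style modules.
        self.branch1 = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1),
            nn.Conv2d(channels // 2, channels // 2, kernel_size=3, padding=1),
        )
        # 1x1 projection so the concatenated branches match the input width.
        self.project = nn.Conv2d(channels, channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate branch outputs along the channel dimension.
        out = torch.cat([self.branch1(x), self.branch3(x)], dim=1)
        # Residual connection: add the block input back to its output.
        return self.relu(x + self.project(out))


block = InceptionResidualBlock(64)
y = block(torch.randn(1, 64, 32, 32))  # shape preserved: (1, 64, 32, 32)
```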