Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks

Published: 01 Jan 2023, Last Modified: 27 Jan 2024. ICML 2023.
Abstract: In deep learning, mixture-of-experts (MoE) activates one or a few experts (sub-networks) on a per-sample or per-token basis, resulting in a significant reduction in computation. The recently proposed patc...
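To illustrate the kind of per-patch routing the abstract describes, below is a minimal, hypothetical sketch (not the paper's implementation) of a mixture-of-experts layer in which each image patch is dispatched to a single expert chosen by a learned gate; all module names and dimensions are illustrative assumptions.

```python
# Minimal sketch of patch-level top-1 MoE routing (illustrative only,
# not the architecture analyzed in the paper).
import torch
import torch.nn as nn

class PatchMoE(nn.Module):
    def __init__(self, patch_dim, hidden_dim, num_experts):
        super().__init__()
        # Gating network: scores each patch against every expert.
        self.gate = nn.Linear(patch_dim, num_experts)
        # Experts: small MLPs applied to individual patches.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(patch_dim, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, patch_dim))
            for _ in range(num_experts)
        ])

    def forward(self, patches):
        # patches: (batch, num_patches, patch_dim)
        scores = self.gate(patches)                       # (B, P, E)
        weights, choice = scores.softmax(-1).max(dim=-1)  # top-1 expert per patch
        out = torch.zeros_like(patches)
        for e, expert in enumerate(self.experts):
            mask = choice == e                            # patches routed to expert e
            if mask.any():
                out[mask] = weights[mask].unsqueeze(-1) * expert(patches[mask])
        return out

# Example: batch of 2 images, each split into 16 patches of dimension 48, 8 experts.
moe = PatchMoE(patch_dim=48, hidden_dim=96, num_experts=8)
x = torch.randn(2, 16, 48)
print(moe(x).shape)  # torch.Size([2, 16, 48])
```

Because only one expert runs per patch, the per-patch compute is roughly that of a single small MLP rather than all experts combined, which is the computation reduction the abstract refers to.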