ToMoE: Converting Dense Large Language Models to Mixture-of-Experts through Dynamic Structural Pruning
External IDs: dblp:journals/tmlr/GaoHSLTLYLZGLXH26