Keywords: Mixture of Experts, Large Language Models, Model Merging, Natural Language Processing, Deep Learning, Artificial Intelligence
Abstract: We introduce MixtureKit, a modular open-source framework for constructing, training, and analyzing Mixture-of-Experts (MoE) models from arbitrary pre-trained or fine-tuned models. MixtureKit currently supports three complementary methods: (i) $\textit{Traditional MoE}$, which uses a single router per transformer block to select experts, (ii) $\textit{BTX}$ (Branch-Train-miX), which introduces separate routers for each specified sub-layer, enabling fine-grained token routing, and (iii) $\textit{BTS}$ (Branch-Train-Stitch), which keeps experts fully intact and introduces trainable stitch layers for controlled information exchange between the hub and the experts. MixtureKit automatically modifies the model configuration, patches the decoder and causal LM classes, and saves a unified checkpoint ready for inference or fine-tuning. We further provide a visualization interface to inspect per-token routing decisions, expert weight distributions, and layer-wise contributions. Experiments with multilingual code-switched data (e.g., Arabic-Latin) show that a BTX-based model trained using MixtureKit can outperform baseline dense models on multiple benchmarks. We release MixtureKit as a practical foundation for research and development of MoE-based systems across diverse domains.
The library is accessible at: $\textit{Link will be provided upon acceptance}$.
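To make the routing idea concrete, the sketch below illustrates the kind of per-block top-k routing described for the $\textit{Traditional MoE}$ variant. It is a minimal, self-contained PyTorch example, not MixtureKit's actual API; the class and parameter names (e.g., `TopKMoEBlock`, `num_experts`, `top_k`) are hypothetical illustrations of the general technique.

```python
# Minimal sketch (not MixtureKit's API): one router per transformer block
# scores each token against every expert and mixes the top-k expert outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoEBlock(nn.Module):
    def __init__(self, hidden_dim: int, num_experts: int, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Single per-block router: produces one logit per expert for each token.
        self.router = nn.Linear(hidden_dim, num_experts, bias=False)
        # Experts stand in for feed-forward sub-layers taken from source models.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_dim, 4 * hidden_dim),
                nn.GELU(),
                nn.Linear(4 * hidden_dim, hidden_dim),
            )
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim)
        logits = self.router(x)                           # (B, T, num_experts)
        weights, indices = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)              # normalize over the selected experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            idx = indices[..., slot]                      # (B, T) expert id per token
            w = weights[..., slot].unsqueeze(-1)          # (B, T, 1) mixing weight
            for e, expert in enumerate(self.experts):
                mask = (idx == e).unsqueeze(-1)           # tokens routed to expert e
                if mask.any():
                    out = out + mask * w * expert(x)
        return out

# Usage: route a dummy batch of token embeddings through the block.
block = TopKMoEBlock(hidden_dim=64, num_experts=4, top_k=2)
y = block(torch.randn(2, 8, 64))
print(y.shape)  # torch.Size([2, 8, 64])
```

The BTX and BTS variants described above differ in where routing happens (per sub-layer rather than per block) and in whether experts stay intact behind trainable stitch layers; this sketch only covers the simplest, per-block case.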
Primary Area: infrastructure, software libraries, hardware, systems, etc.
Submission Number: 21958