Keywords: E-commerce Retrieval, Search Relevance, Large language models
TL;DR: We argue that for multilingual retrieval, concatenating expert embeddings outperforms averaging them because it preserves each expert's semantic manifold structure, avoiding the 'anisotropy collapse' typical of LLM embedding spaces.
Abstract: In cross-border e-commerce, search relevance modeling faces the dual challenge of extreme linguistic diversity and fine-grained semantic nuances. Existing approaches typically rely on scaling up a single monolithic Large Language Model (LLM). However, our empirical analysis reveals that single models suffer from uneven capability distributions across regions—for instance, excelling in English while underperforming in specific Southeast Asian languages. In this work, we shift the paradigm from scaling a single model to orchestrating heterogeneous experts. We propose a scalable Coarse-grained Mixture-of-Experts (MoE) framework that leverages the inherent complementarity of distinct open-source LLMs (e.g., Qwen, Gemma) without expensive pre-training. Unlike standard token-level MoE, our framework dynamically routes entire queries to specialized experts and, crucially, employs an Information-Preserving Concatenation Fusion strategy. We theoretically posit that preserving the distinct embedding manifolds of heterogeneous experts—rather than compressing them via weighted averaging—is essential for capturing complex relevance signals in a multi-model latent space. On datasets spanning six Southeast Asian markets, our MoE improves AUC by 0.72 percentage points over a dense baseline with the same active parameters. Meanwhile, the optimized pipeline achieves 13.72 queries per second (QPS), a 9% throughput improvement.
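To illustrate the fusion contrast described in the abstract, here is a minimal sketch (not the authors' implementation) of concatenation fusion versus weighted averaging over two heterogeneous expert embeddings; the encoder stand-ins, dimensions, and query are hypothetical and only serve to show how concatenation keeps each expert's coordinates intact while averaging forces both into a shared, compressed space.

```python
import numpy as np

# Hypothetical embedding dimensions for two heterogeneous experts
# (e.g., different open-source LLM backbones); names are illustrative only.
D_EXPERT_A, D_EXPERT_B = 1024, 768

def embed_expert_a(query: str) -> np.ndarray:
    """Stand-in for one expert encoder (e.g., a Qwen-based embedder)."""
    rng = np.random.default_rng(abs(hash(("A", query))) % (2**32))
    return rng.standard_normal(D_EXPERT_A)

def embed_expert_b(query: str) -> np.ndarray:
    """Stand-in for a second expert encoder (e.g., a Gemma-based embedder)."""
    rng = np.random.default_rng(abs(hash(("B", query))) % (2**32))
    return rng.standard_normal(D_EXPERT_B)

def fuse_by_averaging(emb_a: np.ndarray, emb_b: np.ndarray) -> np.ndarray:
    """Weighted averaging forces both experts into one shared dimension,
    compressing their distinct embedding manifolds into a single space."""
    d = min(emb_a.shape[0], emb_b.shape[0])
    return 0.5 * emb_a[:d] + 0.5 * emb_b[:d]

def fuse_by_concatenation(emb_a: np.ndarray, emb_b: np.ndarray) -> np.ndarray:
    """Information-preserving fusion: keep each expert's coordinates intact
    so a downstream relevance head can read both manifolds."""
    return np.concatenate([emb_a, emb_b])

query = "example multilingual query"
a, b = embed_expert_a(query), embed_expert_b(query)
print(fuse_by_averaging(a, b).shape)      # (768,)  -> compressed representation
print(fuse_by_concatenation(a, b).shape)  # (1792,) -> both manifolds preserved
```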
Submission Number: 24