Abstract: We present FashionGPT, a series of fine-tuned Large Language Models (LLMs) that demonstrate outstanding performance and have twice ranked first on the HuggingFace Open LLM Leaderboard. In contrast to conventional dataset-fusion fine-tuning, we propose a novel instruction fine-tuning paradigm called multiple LoRA-adapter fusion fine-tuning. This paradigm fine-tunes multiple independent LoRA adapters on distinct datasets and then fuses them with learnable weights to create a versatile large language model. Extensive experiments demonstrate that LLMs fine-tuned with the LoRA-adapter fusion approach outperform those fine-tuned with the dataset-fusion approach. The FashionGPT series is released at https://huggingface.co/ICBU-NPU/ for research use only.
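The core mechanism described above is: fine-tune several independent LoRA adapters on distinct datasets, freeze them, and learn only the per-adapter fusion weights. Below is a minimal PyTorch sketch of that idea; the class name `FusedLoRALinear`, the scalar-per-adapter parameterization, and the toy dimensions are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class FusedLoRALinear(nn.Module):
    """Linear layer with several frozen LoRA adapters fused via learnable weights.

    Hypothetical sketch of multiple LoRA-adapter fusion: each adapter (A_i, B_i)
    is assumed to have been fine-tuned independently on a distinct dataset; only
    the fusion weights alpha_i are trained during the fusion stage.
    """

    def __init__(self, base: nn.Linear, adapters: list[tuple[torch.Tensor, torch.Tensor]]):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the base layer's weights
        # Each adapter contributes a frozen low-rank update B_i @ A_i.
        self.As = nn.ParameterList(
            [nn.Parameter(A, requires_grad=False) for A, _ in adapters]
        )
        self.Bs = nn.ParameterList(
            [nn.Parameter(B, requires_grad=False) for _, B in adapters]
        )
        # Learnable fusion weights: one scalar per adapter, initialized uniformly.
        self.alpha = nn.Parameter(torch.full((len(adapters),), 1.0 / len(adapters)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x)
        for a, A, B in zip(self.alpha, self.As, self.Bs):
            # Add alpha_i * B_i A_i x on top of the frozen base projection.
            out = out + a * (x @ A.T) @ B.T
        return out


# Usage: fuse two rank-8 adapters on a toy 512 -> 512 projection.
base = nn.Linear(512, 512)
adapters = [
    (torch.randn(8, 512) * 0.01, torch.randn(512, 8) * 0.01) for _ in range(2)
]
layer = FusedLoRALinear(base, adapters)
y = layer(torch.randn(4, 512))  # only layer.alpha receives gradients in training
```

In this parameterization the fusion stage trains only `alpha`, so combining adapters is cheap compared with another full fine-tuning pass over the merged data, which is one plausible reason the paradigm is contrasted with dataset-fusion fine-tuning.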