Keywords: Large Language Models, Hyperbolic Space, Low-Rank Adaptation, Embedding Space
TL;DR: A novel method to efficiently fine-tune Large Language Models in hyperbolic space, unlocking their latent tree-like structures and significantly boosting complex reasoning performance by adapting directly on the hyperbolic manifold.
Abstract: Large language models (LLMs) have demonstrated remarkable performance on various tasks. However, it remains an open question whether the default Euclidean space is the most suitable choice for embedding tokens in LLMs.
   In this study, we investigate the non-Euclidean characteristics of LLMs. 
   Our findings reveal that token frequency follows a power-law distribution, with high-frequency tokens clustering near the origin and low-frequency tokens positioned farther away. Additionally, token embeddings exhibit a high degree of hyperbolicity, indicating a latent tree-like structure in the embedding space. 
   Motivated by these observations, we propose to efficiently fine-tune LLMs in hyperbolic space to better exploit the underlying complex structures. 
   However, we find that such hyperbolic fine-tuning cannot be achieved by naively applying exponential and logarithmic maps when both the embedding and weight matrices reside in Euclidean space.
   To address this technical issue, we introduce hyperbolic low-rank efficient fine-tuning, HypLoRA, which performs low-rank adaptation directly on the hyperbolic manifold, preventing the cancellation effect produced by consecutive exponential and logarithmic maps and thereby preserving hyperbolic modeling capabilities.
   Extensive experiments on various base models and two types of reasoning benchmarks, arithmetic and commonsense reasoning, demonstrate that HypLoRA substantially improves LLM performance.
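   For illustration, the sketch below shows one way a HypLoRA-style layer could apply the low-rank update directly on a hyperbolic manifold instead of in Euclidean space, so that the exponential and logarithmic maps no longer cancel. This is a minimal sketch, not the paper's implementation: it assumes the Poincaré ball model with a fixed curvature c and uses a Möbius matrix-vector product for the low-rank map, and all names (HypLoRALinear, A, B, scaling) are hypothetical.

```python
import torch
import torch.nn as nn


class HypLoRALinear(nn.Module):
    """Illustrative HypLoRA-style adapter: the low-rank update B @ A is applied
    on a Poincare ball of curvature c rather than in Euclidean space
    (hypothetical layout; the paper's exact formulation may differ)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0, c: float = 1.0):
        super().__init__()
        self.base = base                                  # frozen pretrained projection W0
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r
        self.c = c                                        # ball curvature (assumed fixed)

    def _expmap0(self, v, eps=1e-6):
        # exponential map at the origin of the Poincare ball
        norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
        sc = self.c ** 0.5
        return torch.tanh(sc * norm) * v / (sc * norm)

    def _logmap0(self, y, eps=1e-6):
        # logarithmic map at the origin (inverse of _expmap0)
        norm = y.norm(dim=-1, keepdim=True).clamp_min(eps)
        sc = self.c ** 0.5
        return torch.atanh((sc * norm).clamp(max=1 - 1e-5)) * y / (sc * norm)

    def _mobius_matvec(self, M, x, eps=1e-6):
        # Mobius matrix-vector product M (x)_c x on the Poincare ball
        x_norm = x.norm(dim=-1, keepdim=True).clamp_min(eps)
        mx = x @ M.t()
        mx_norm = mx.norm(dim=-1, keepdim=True).clamp_min(eps)
        sc = self.c ** 0.5
        ratio = mx_norm / x_norm
        return torch.tanh(ratio * torch.atanh((sc * x_norm).clamp(max=1 - 1e-5))) * mx / (mx_norm * sc)

    def forward(self, x):
        out = self.base(x)                                # frozen Euclidean path
        # lift x onto the manifold, apply BA as a Mobius matrix-vector
        # product, then map the result back to the tangent space at the origin
        h = self._expmap0(x)
        delta = self._mobius_matvec(self.B @ self.A, h)
        return out + self.scaling * self._logmap0(delta)
```

   Because the low-rank map acts between the exponential and logarithmic maps as a Möbius (manifold) operation rather than an ordinary matrix product, the two maps do not reduce to the identity, which is the cancellation issue described above.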
Supplementary Material: zip
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 20160