HyperPALoRA: Parameter-Efficient Pareto Hypernetworks via Preference-Based Diverse Low-Rank Adaptations
Keywords: Pareto front learning, multi-objective optimization, hypernetworks, low-rank adaptations
TL;DR: We propose PHN-LoRA, a parameter-efficient Pareto Front Learning method that uses hypernetworks to generate preference-conditioned low-rank adaptations (LoRA) for multi-task learning.
Abstract: Multi-task learning (MTL) is often cast as multi-objective optimization, where Pareto Front Learning (PFL) seeks a continuum of task trade-offs with a single model. Existing PFL methods, especially Pareto Hypernetworks (PHNs), capture complex relationships between task trade-offs and the solution space but struggle with scalability, memory, and convergence. In this work, we propose a parameter-efficient PFL framework that augments a shared backbone with low-rank adaptations (LoRA) generated by a single preference-conditioned hypernetwork. First, instead of predicting full target-network weights, our hypernetwork outputs a single preference-aligned LoRA, sharply reducing the number of preference-dependent parameters and improving PHN scalability. Second, unlike prior LoRA-based approaches that linearly combine per-task LoRAs and thus realize only convex trade-offs, our PHN-based formulation naturally represents non-convex Pareto fronts, as verified on classical non-convex benchmarks. Third, to combat the limited diversity of prior PFL methods, we introduce a contrastive, preference-aware loss that keeps neighboring preferences similar while separating distant ones, yielding a well-spread set of Pareto-optimal solutions. Experiments on standard MTL benchmarks show that our method matches or outperforms state-of-the-art PFL baselines while offering substantially improved parameter efficiency compared to PHN-based PFL methods.
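The core architectural idea can be illustrated with a minimal PyTorch sketch, assuming a standard LoRA parameterization (y = Wx + BAx) and an MLP hypernetwork; all names (PrefLoRAHypernet, LoRALinear, the rank and hidden sizes) are illustrative and not from the paper.

```python
# Minimal sketch: one hypernetwork maps a preference vector to a single
# LoRA update applied on top of a frozen shared backbone layer.
import torch
import torch.nn as nn


class PrefLoRAHypernet(nn.Module):
    """Maps a task-preference vector to LoRA factors (A, B) for one layer."""

    def __init__(self, num_tasks: int, d_in: int, d_out: int,
                 rank: int = 4, hidden: int = 64):
        super().__init__()
        self.d_in, self.d_out, self.rank = d_in, d_out, rank
        self.mlp = nn.Sequential(
            nn.Linear(num_tasks, hidden),
            nn.ReLU(),
            nn.Linear(hidden, rank * (d_in + d_out)),
        )

    def forward(self, pref: torch.Tensor):
        # pref: (num_tasks,) point on the simplex encoding the trade-off.
        flat = self.mlp(pref)
        A = flat[: self.rank * self.d_in].view(self.rank, self.d_in)
        B = flat[self.rank * self.d_in:].view(self.d_out, self.rank)
        return A, B


class LoRALinear(nn.Module):
    """Frozen base linear layer plus a preference-generated low-rank update."""

    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # only the hypernetwork is trained

    def forward(self, x, A, B, scale: float = 1.0):
        # y = W x + scale * B (A x): a rank-r, preference-conditioned update.
        return self.base(x) + scale * (x @ A.t()) @ B.t()


# Example: sample a trade-off point and run a forward pass.
hyper = PrefLoRAHypernet(num_tasks=2, d_in=128, d_out=128)
layer = LoRALinear(nn.Linear(128, 128))
A, B = hyper(torch.tensor([0.7, 0.3]))  # preference on the 2-simplex
y = layer(torch.randn(8, 128), A, B)
```

Because only the hypernetwork's parameters are trainable and it emits rank-r factors rather than full weight matrices, the preference-dependent parameter count scales with r(d_in + d_out) per layer instead of d_in * d_out, which is the source of the claimed parameter efficiency.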
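The contrastive, preference-aware loss can likewise be sketched, assuming a Hadsell-style attract/repel form gated by preference distance; the paper's exact formulation may differ, and the threshold tau, margin, and the choice of embedding z (e.g., flattened LoRA factors) are assumptions for illustration.

```python
# Hedged sketch: pull together solutions generated for nearby preferences,
# push apart those generated for distant preferences.
def preference_contrastive_loss(z: torch.Tensor, prefs: torch.Tensor,
                                tau: float = 0.1, margin: float = 1.0):
    # z:     (B, D) solution embeddings (e.g., flattened LoRA factors)
    # prefs: (B, T) preference vectors sampled from the simplex
    zd = torch.cdist(z, z)            # pairwise embedding distances
    pd = torch.cdist(prefs, prefs)    # pairwise preference distances
    near = (pd < tau).float()         # neighboring preferences
    far = 1.0 - near                  # distant preferences
    attract = near * zd.pow(2)                         # keep neighbors close
    repel = far * (margin - zd).clamp(min=0).pow(2)    # separate distant ones
    off_diag = 1.0 - torch.eye(len(z), device=z.device)
    return ((attract + repel) * off_diag).sum() / off_diag.sum()
```

Adding such a term to the preference-weighted task loss penalizes the collapse mode in which many preferences map to nearly identical solutions, encouraging the well-spread Pareto front the abstract describes.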
Submission Number: 157