Keywords: Pipeline Parallelism, Decentralized Training, Adversarial Machine Learning
TL;DR: SENTINEL provides a worker verification mechanism for pipeline-parallel decentralized training by monitoring the inter-stage activations and activation gradients that workers exchange.
Abstract: Decentralized training introduces critical security risks when executed across untrusted, geographically distributed nodes. While existing Byzantine-tolerant literature addresses data parallel (DP) training through robust aggregation methods, pipeline parallelism (PP) presents fundamentally distinct challenges. In PP, model layers are distributed across workers, and activations and their gradients flow sequentially between stages rather than being aggregated, making traditional DP approaches inapplicable. We propose SENTINEL, a verification mechanism for PP training *without computation duplication*. SENTINEL employs lightweight momentum-based monitoring using exponential moving averages (EMAs) to detect corrupted inter-stage communication. Unlike existing Byzantine-tolerant approaches for DP that aggregate parameter gradients *across replicas*, our approach verifies sequential activation/gradient transmission *between layers*. We provide theoretical convergence guarantees for this new setting that recover classical convergence rates when relaxed to standard training. Experiments demonstrate successful training of billion-parameter LLMs across untrusted distributed environments with hundreds of workers while maintaining model convergence and performance.
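The abstract's EMA-based monitoring of inter-stage traffic can be illustrated with a minimal sketch. This is not the authors' exact method: it assumes a per-tensor norm statistic, a fixed relative-deviation threshold, and illustrative names (`EMAMonitor`, `decay`, `tolerance`).

```python
# Hypothetical sketch of EMA-based verification of inter-stage activations.
# Assumption: a scalar norm statistic and a fixed tolerance suffice to flag
# corrupted tensors; the paper's actual detector may differ.
import torch


class EMAMonitor:
    """Tracks an exponential moving average of activation norms and flags
    incoming tensors whose norm deviates too far from the running estimate."""

    def __init__(self, decay: float = 0.99, tolerance: float = 3.0):
        self.decay = decay          # EMA smoothing factor
        self.tolerance = tolerance  # allowed relative deviation from the EMA
        self.ema = None             # running estimate of the activation norm

    def check(self, activation: torch.Tensor) -> bool:
        """Return True if the activation looks consistent with past traffic."""
        stat = activation.float().norm().item()
        if self.ema is None:        # first observation initializes the EMA
            self.ema = stat
            return True
        ok = abs(stat - self.ema) <= self.tolerance * self.ema
        if ok:                      # only fold trusted observations into the EMA
            self.ema = self.decay * self.ema + (1.0 - self.decay) * stat
        return ok


# Usage: a downstream stage would verify each received activation before its
# forward pass, rejecting or re-requesting suspect tensors.
monitor = EMAMonitor()
received = torch.randn(4, 1024)     # stand-in for an inter-stage activation
if not monitor.check(received):
    print("activation flagged as potentially corrupted")
```

The same pattern would apply symmetrically to activation gradients flowing backward between stages.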
Primary Area: foundation or frontier models, including LLMs
Submission Number: 16044