Abstract: Distributed Multi-exit Neural Networks (MeNNs) use partitioning
and early exits to reduce the cost of neural network inference
on low-power sensing systems. Existing MeNNs exhibit high
inference accuracy using policies that select when to exit based
on data-dependent prediction confidence. This paper presents a
side-channel attack against distributed MeNNs employing data-dependent
early exit policies. We find that an adversary can observe
when a distributed MeNN exits early by monitoring its encrypted
communication patterns. An adversary can then use these observations to
discover the MeNN’s predictions with over $1.85\times$ the accuracy of
random guessing. In some cases, the side-channel leaks over 80\% of
the model’s predictions. This leakage occurs because prior policies
make decisions using a single threshold on varying prediction confidence
distributions. We address this problem through two new exit
policies. The first method, Per-Class Exiting (PCE), uses multiple
thresholds to balance exit rates across predicted classes. This policy
retains high accuracy and lowers prediction leakage, but we prove
it has no privacy guarantees. We obtain these guarantees with a
second policy, Confidence-Guided Randomness (CGR), which randomly
selects when to exit using probabilities biased toward PCE’s
decisions. CGR provides statistically equivalent privacy with consistently
higher inference accuracy than exiting early uniformly
at random. Both PCE and CGR have low overhead, making them
viable security solutions in resource-constrained settings.
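To make the two policies concrete, the sketch below illustrates how their exit decisions could be made for a single sample. It is a minimal sketch, not the paper's implementation: the per-class thresholds, the `bias` parameter, and the function names `pce_exit` and `cgr_exit` are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: thresholds, bias, and function names are
# assumptions, not the implementation described in the paper.

def pce_exit(confidence, predicted_class, thresholds):
    """Per-Class Exiting (PCE): exit early only when the prediction confidence
    clears the threshold calibrated for the predicted class."""
    return confidence >= thresholds[predicted_class]

def cgr_exit(confidence, predicted_class, thresholds, bias=0.9, rng=None):
    """Confidence-Guided Randomness (CGR): decide to exit at random, with the
    exit probability biased toward the deterministic PCE decision."""
    if rng is None:
        rng = np.random.default_rng()
    p_exit = bias if pce_exit(confidence, predicted_class, thresholds) else 1.0 - bias
    return rng.random() < p_exit

# Example: hypothetical class-specific thresholds that balance exit rates.
thresholds = np.array([0.90, 0.75, 0.85])  # one threshold per predicted class
print(cgr_exit(confidence=0.92, predicted_class=0, thresholds=thresholds))
```

Because CGR's decision is randomized, an observed early exit no longer deterministically reveals which class cleared its threshold, while the bias toward PCE's decisions preserves most of the accuracy benefit of exiting on confident predictions.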