With the rise of large foundation models, split inference (SI) has emerged as a popular computational paradigm for deploying models across lightweight edge devices and cloud servers, addressing both data-privacy and computational-cost concerns. However, most existing data reconstruction attacks target smaller classification models such as ResNet, leaving the privacy risks of foundation models in SI settings largely unexplored. To address this gap, we propose a novel data reconstruction attack based on guided diffusion, which leverages the rich prior knowledge embedded in a latent diffusion model (LDM) pretrained on a large-scale dataset. Our method performs iterative reconstruction on the LDM's learned image manifold, generating high-fidelity images that closely resemble the original data from their intermediate representations (IRs). Extensive experiments demonstrate that our approach significantly outperforms state-of-the-art methods, both qualitatively and quantitatively, in reconstructing data from deep-layer IRs of a vision foundation model. These results highlight the urgent need for more robust privacy-protection mechanisms for large models in SI scenarios.
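The core threat model above can be illustrated with a minimal sketch. This is not the paper's actual method: the LDM, the guided-diffusion sampler, and the vision foundation model are replaced here by toy linear stand-ins (`W_client`, `W_dec` are hypothetical names), but the loop captures the same idea — the attacker observes only the intermediate representation sent from edge to cloud and iteratively optimizes a latent so that the decoded candidate reproduces that IR.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's models):
# W_client: the edge-side half of the split model, mapping an input to its IR.
# W_dec:    a generator mapping a low-dimensional latent to "image" space,
#           standing in for the pretrained LDM decoder.
W_client = rng.normal(size=(16, 64)) / 8.0
W_dec = rng.normal(size=(64, 8)) / 4.0

# The victim's private input is never seen by the attacker; only the IR
# transmitted to the cloud server is observed.
x_private = rng.normal(size=64)
ir_target = W_client @ x_private

# Attack: gradient descent on the latent z so that the decoded candidate's
# IR matches the observed IR (here in closed form, since both maps are linear).
A = W_client @ W_dec            # composed latent -> IR map
z = np.zeros(8)
lr = 0.05
init_loss = np.mean((A @ z - ir_target) ** 2)
for _ in range(2000):
    residual = A @ z - ir_target
    grad = 2.0 * A.T @ residual / residual.size
    z -= lr * grad

final_loss = np.mean((A @ z - ir_target) ** 2)
x_rec = W_dec @ z               # attacker's reconstruction of the input
```

In the actual attack the inner update would be replaced by a guidance step inside the LDM's denoising loop, so every iterate stays on the learned image manifold rather than drifting into unnatural solutions, which is what makes reconstruction from deep-layer IRs feasible.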