Hopscotch: A Hardware-Software Co-Design for Efficient Cache Resizing on Multi-Core SoCs

Published: 01 Jan 2024 · Last Modified: 26 Jul 2025 · IEEE Trans. Parallel Distributed Syst. 2024 · CC BY-SA 4.0
Abstract: Following the trend of increasing autonomy in real-time systems, multi-core System-on-Chips (SoCs) enable devices to handle the large data streams and intensive computation that such autonomous systems require. In modern multi-core SoCs, each L1 cache is tied to an individual processor, and a processor can access only its own L1 cache. This design preserves the system's average throughput but limits the available parallelism, significantly reducing the system's real-time schedulability. To overcome this problem, we present Hopscotch, a new system framework for highly parallel multi-core systems. Hopscotch introduces a resizable L1 cache that is shared among the processors in the same computing cluster. At run time, Hopscotch dynamically allocates L1 cache capacity to the tasks executing on the processors, unlocking the parallelism available in the system. Building on the new hardware architecture, we also present a theoretical model and schedulability analysis that provide cache size selection methods and corresponding timing guarantees for the system. As the evaluations demonstrate, Hopscotch effectively improves system-level schedulability with negligible extra overhead.
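To make the idea of run-time cache capacity allocation concrete, the following is a minimal, hypothetical sketch (not the paper's actual method): a greedy allocator that splits a cluster's L1 cache ways among tasks, given per-task WCET estimates as a function of allotted cache size. The function name, the way-granularity, and the WCET tables are all illustrative assumptions.

```python
# Hypothetical sketch: greedily assign L1 cache ways to tasks in a cluster.
# Assumption (not from the paper): each task provides a table mapping a
# candidate number of cache ways to its estimated WCET at that allocation.

def select_cache_sizes(wcet_tables, total_ways):
    """wcet_tables: list of dicts {ways: wcet}. Returns chosen ways per task."""
    # Start each task at the smallest allocation its table supports.
    alloc = [min(t) for t in wcet_tables]
    remaining = total_ways - sum(alloc)
    assert remaining >= 0, "cluster cache too small for minimum allocations"
    # Repeatedly grant one extra way to the task whose WCET drops the most.
    while remaining > 0:
        best, gain = None, 0
        for i, t in enumerate(wcet_tables):
            nxt = alloc[i] + 1
            if nxt in t:
                g = t[alloc[i]] - t[nxt]
                if g > gain:
                    best, gain = i, g
        if best is None:
            break  # no task benefits from additional cache
        alloc[best] += 1
        remaining -= 1
    return alloc
```

For example, with two tasks whose WCETs shrink at different rates as cache grows, the allocator hands extra ways to whichever task benefits most at each step; the paper's schedulability analysis would instead drive the selection with system-level timing guarantees.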