Innovative Techniques for Efficient Hyperdimensional Computing on Hardware: Enhanced Accuracy and On-the-Fly Hypervector Generation

Published: 01 Jan 2026, Last Modified: 04 May 2026 · Crossref · CC BY-SA 4.0
Abstract: Hyperdimensional Computing (HDC) encodes data and learning operations into high-dimensional vectors (hypervectors), enabling robust and rapidly adaptable machine learning on resource-limited platforms such as Field-Programmable Gate Arrays (FPGAs) and edge devices. Despite this potential, conventional HDC systems often demand extensive memory resources to store base, level, and class hypervectors, which can limit scalability and performance in hardware implementations for Artificial Intelligence (AI) and Internet of Things (IoT) applications. This paper addresses these issues through two main innovations. First, it introduces a combinational-logic approach that generates hypervectors on the fly, eliminating the need for large lookup tables and thereby substantially reducing memory overhead. Second, it presents an orthogonal hypervector generation scheme based on sequences such as Hadamard, Walsh, and Gold, ensuring highly uncorrelated representations that enhance classification accuracy (particularly in single-shot learning) while remaining effective over multiple training epochs. Experimental evaluations on standard benchmarks, including ISOLET and UCI-HAR, show notable gains in classification performance as well as significant reductions in memory consumption and lookup table usage. These results highlight the viability of integrating logic-based hypervector synthesis with orthogonal vector design to create an efficient, power-conscious, and high-throughput HDC framework suitable for real-time edge AI and IoT scenarios. By uniting these techniques, the proposed approach advances the practical deployment of hyperdimensional computing in embedded and resource-constrained environments.
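The orthogonal hypervector generation scheme mentioned above relies on the fact that the rows of a Hadamard matrix are mutually orthogonal, so using them as bipolar (+1/−1) hypervectors yields zero pairwise correlation. A minimal sketch of this idea, assuming a Sylvester-type Hadamard construction (the function name, the dimensionality `D = 1024`, and the choice of 26 base hypervectors are illustrative and not taken from the paper):

```python
import numpy as np

def hadamard(order: int) -> np.ndarray:
    """Sylvester construction: valid only when order is a power of two."""
    assert order > 0 and (order & (order - 1)) == 0
    H = np.array([[1]])
    while H.shape[0] < order:
        # Recursive doubling: H_{2n} = [[H_n, H_n], [H_n, -H_n]]
        H = np.block([[H, H], [H, -H]])
    return H

# Hypothetical configuration: hypervector dimensionality and number of
# base hypervectors (e.g. one per letter feature, as in ISOLET-style tasks).
D = 1024
H = hadamard(D)
base_hvs = H[1:27]  # 26 mutually orthogonal bipolar hypervectors

# Distinct rows of a Hadamard matrix have zero dot product,
# i.e. the base hypervectors are perfectly uncorrelated.
assert int(base_hvs[0] @ base_hvs[1]) == 0
```

Because each row is defined by a closed-form recursion, such hypervectors can also be produced element-by-element in combinational logic rather than stored in lookup tables, which is consistent with the memory-reduction strategy the abstract describes.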