From Pixels to Reasoning: A Cross-Layer Photonic Design for Edge Visual Intelligence

Published: 2025 · Last Modified: 12 Nov 2025 · ISVLSI 2025 · CC BY-SA 4.0
Abstract: Advancing energy-efficient, real-time visual intelligence at the sensor edge is crucial for applications such as autonomous systems, surveillance, drones, and augmented reality. In this paper, we present a unified summary of our recent photonic accelerator designs (OISA, Lightator, and Neuro-Photonix), which collectively demonstrate a new paradigm for near-sensor vision processing through silicon photonic neural networks and neuro-symbolic computing. These works adopt a cross-layer hardware-software co-design methodology to process visual data directly at the sensor, dramatically reducing the need for data transmission to the cloud. OISA and Lightator achieve power reductions of $24 \times$ and $73 \times$, respectively, over conventional photonic and GPU-based baselines. Neuro-Photonix extends this foundation by enabling neuro-symbolic AI at the edge, combining analog photonic computation with efficient, granularity-controllable convolutions, a low-cost ADC, and native generation of HyperDimensional (HD) vectors for symbolic reasoning. Together, these designs pave the way for explainable, ultra-low-power, and high-performance edge AI, with Neuro-Photonix reaching up to $30 \mathrm{GOPS}/\mathrm{W}$ and achieving $20.8 \times$ and $4.1 \times$ power savings over ASIC and photonic baselines, respectively.
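To make the HD-vector idea concrete for readers unfamiliar with hyperdimensional computing, the sketch below shows the standard bind/bundle operations on random bipolar hypervectors. This is a generic, minimal illustration of HD symbolic encoding, not the Neuro-Photonix hardware pipeline; all names (`bind`, `bundle`, the example attributes) are illustrative assumptions.

```python
import numpy as np

D = 10_000  # hypervector dimensionality (high dimension gives quasi-orthogonality)
rng = np.random.default_rng(0)

def random_hv():
    # Random bipolar hypervector in {-1, +1}^D
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    # Binding (elementwise multiply): associates a role with a filler
    return a * b

def bundle(*hvs):
    # Bundling (sign of elementwise sum): superposes items into one set-like vector
    return np.sign(np.sum(hvs, axis=0)).astype(np.int64)

def similarity(a, b):
    # Normalized dot product in [-1, 1]; near 0 for unrelated hypervectors
    return float(a @ b) / D

# Encode an object "color=red AND shape=circle" as bundled role-filler bindings
color, shape = random_hv(), random_hv()
red, circle = random_hv(), random_hv()
obj = bundle(bind(color, red), bind(shape, circle))

# Unbinding with the `color` role recovers a vector close to `red`
recovered = bind(obj, color)
print(similarity(recovered, red) > similarity(recovered, circle))  # expect True
```

Because binding is its own inverse for bipolar vectors, querying the bundled object with a role vector yields a noisy copy of its filler, which a nearest-neighbor lookup against an item memory can then clean up; this reversibility is what makes HD representations usable for symbolic reasoning.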