Abstract: The rapid uptake of large language models (LLMs) has highlighted the considerable energy use and carbon emissions associated with inference, yet end-users remain largely unaware of these hidden costs. We present LOCOMO AR (LOwer COnsumption, More Optimization), an augmented reality application that integrates real-time computer vision, speech recognition, and LLM reasoning with on-device visualization of the environmental footprint generated by each query. We demonstrate the system across assistive, cultural heritage, and healthcare scenarios. By surfacing real-time estimates of carbon emissions, we offer a lightweight approach to promoting environmentally responsible AI use on consumer devices.
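The abstract does not describe how the per-query footprint is computed. A minimal sketch of a token-count-based estimator, assuming illustrative energy-per-token and grid carbon-intensity coefficients (all names, weights, and values below are hypothetical, not the paper's actual method), might look like:

```python
# Hypothetical per-query carbon estimate in the spirit of the abstract's
# "real-time estimates of carbon emissions". Coefficients and names are
# illustrative assumptions only.

def estimate_query_footprint(
    prompt_tokens: int,
    completion_tokens: int,
    energy_per_token_wh: float = 0.003,        # assumed Wh per generated token
    grid_intensity_g_per_kwh: float = 400.0,   # assumed grid carbon intensity (gCO2e/kWh)
) -> dict:
    """Return a rough energy (Wh) and CO2e (g) estimate for one LLM query."""
    # Generated tokens dominate inference cost; prompt tokens are weighted
    # lower (0.25 is an arbitrary illustrative factor).
    effective_tokens = 0.25 * prompt_tokens + completion_tokens
    energy_wh = effective_tokens * energy_per_token_wh
    co2e_g = (energy_wh / 1000.0) * grid_intensity_g_per_kwh
    return {"energy_wh": energy_wh, "co2e_g": co2e_g}


if __name__ == "__main__":
    # Example: a query with 120 prompt tokens and 350 generated tokens.
    print(estimate_query_footprint(120, 350))
```

Such an estimate could be rendered as an on-device overlay next to each response, in line with the visualization the abstract describes.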