Distributed Multi-Agent Preference Learning for An IoT-enriched Smart Space

Published: 01 Jan 2019 · Last Modified: 27 Jul 2024 · ICDCS 2019 · License: CC BY-SA 4.0
Abstract: There have been several efforts on preference learning in a smart space by means of multi-agent collaboration. Each agent captures a user action or handles part of the learning, but decision making is done in a centralized manner. This makes it difficult for a smart space to cope with the learning complexity that grows as smart devices are added or reconfigured. While this complexity can be relieved by articulating the learning space, that approach is inflexible because the articulation procedure must be repeated whenever the smart space is reconfigured. In this paper, we propose a distributed multi-agent preference learning architecture that allows a group of physically separate agents to collaborate with each other to learn a user's task preferences efficiently in an IoT-enriched smart space. To this end, the proposed scheme provides four key features: an ontology-based knowledge structure for task-driven agent collaboration, a knowledge exchange protocol for task-aware causality among agents, Q-learners for observing and learning from user behaviors, and a negotiation and acknowledgement protocol that prevents agents from performing disorganized actions. Evaluation results show that the proposed scheme allows smart device agents to learn user preferences in a fully distributed way and outperforms existing approaches in terms of learning speed and system overhead.
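The abstract's third feature, per-device Q-learners, follows the standard tabular Q-learning update. The paper's actual state/action encoding and reward signal are not given here, so the sketch below is a minimal illustration under assumed names: a single device agent learns which action a user prefers in a given context, with a simulated user providing the reward.

```python
import random

class PreferenceQLearner:
    """Minimal tabular Q-learner for one smart-device agent.
    State, action, and reward names are illustrative assumptions,
    not the paper's actual formulation."""

    def __init__(self, actions, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = {}                      # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # epsilon-greedy: mostly exploit the learned preference
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # standard Q-learning backup: Q += alpha * (r + gamma * max_a' Q' - Q)
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)

# Toy usage: a simulated user rewards "dim" lighting in the "evening" context.
random.seed(0)
learner = PreferenceQLearner(actions=["dim", "bright"])
for _ in range(50):
    a = learner.choose("evening")
    reward = 1.0 if a == "dim" else -1.0   # simulated user feedback
    learner.update("evening", a, reward, "evening")
```

After these interactions the Q-value for ("evening", "dim") dominates, so the agent's greedy choice matches the simulated user's preference; in the distributed setting described above, each device agent would run such a learner locally and coordinate via the knowledge exchange and negotiation protocols.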