Abstract: We introduce HoME: a Household Multimodal Environment for artificial agents to learn from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context. HoME integrates over 45,000 diverse 3D house layouts based on the SUNCG dataset, a scale which may facilitate learning, generalization, and transfer. HoME is an open-source, OpenAI Gym-compatible platform extensible to tasks in reinforcement learning, language grounding, sound-based navigation, robotics, multi-agent learning, and more. We hope HoME better enables artificial agents to learn as humans do: in an interactive, multimodal, and richly contextualized setting.
Keywords: simulated environment, virtual embodiment, multimodality, reinforcement learning, gym, language grounding
TL;DR: HoME is an open-source and extensible platform for artificial agents to learn at large scale from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context of thousands of simulated 3D household environments.
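Since the platform is described as OpenAI Gym-compatible, interaction presumably follows the standard Gym loop of resetting an environment and stepping it with actions. The sketch below illustrates that loop only; the environment id "Home-Navigation-v0" is a hypothetical placeholder and is not taken from the HoME codebase.

```python
# Minimal sketch of the standard OpenAI Gym interaction loop that a
# Gym-compatible platform such as HoME would support.
# NOTE: the environment id below is hypothetical, for illustration only.
import gym

env = gym.make("Home-Navigation-v0")  # hypothetical environment id

observation = env.reset()
total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # random policy as a stand-in for a learned agent
    observation, reward, done, info = env.step(action)
    total_reward += reward

env.close()
print("Episode return:", total_reward)
```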