VHAKG: Multi-modal Knowledge Graphs with Multi-view Videos of Daily Activities

Shusaku Egami, Takanori Ugai, Swe Nwe Nwe Htun, Ken Fukuda

Published: 03 Jun 2024, Last Modified: 08 Jan 2026. Venue: Zenodo. License: CC BY-SA 4.0
Abstract:

Outline
This dataset is a multimodal knowledge graph (MMKG) of daily activity videos. It combines a KG with embedded multi-view videos created by VirtualHome-AIST, an extended version of the VirtualHome simulator, and an event-centric KG generated by VirtualHome2KG. We named this dataset VHAKG (VirtualHome-AIST-KG).

Details
VHAKG describes 2D bounding boxes of objects every five frames, compositional activities, primitive actions, target objects, object states, 3D bounding boxes, and their time-series changes. The videos are encoded in base64 and embedded as literal values. VHAKG consists of 706 daily activity scenarios (e.g., clean desk, cook fried bread, and relax on sofa) and 3,530 videos captured by five synchronized cameras per scenario. The file format is RDF (Turtle), which can be loaded into various triplestores. VHAKG's vocabulary is defined as an ontology in vh2kg_schema_v2.0.0.ttl.

Contents
vh2kg_video_base64.tar.gz
  {activity name}{scene}_{camera}_2dbbox.ttl: KGs with videos embedded in base64 format, including 2D bounding box data every 5 frames. To learn more about {scene}, check here. To learn more about {camera}, check here.
vh2kg_event.tar.gz
  {activity name}_{scene}.ttl: Event-centric KGs representing video content as sequences of events.
vh2kg_schema_v2.0.0.ttl: The ontology file of this dataset.
affordance.ttl: Object affordance data created by crowdsourcing. Please see Section III.B of this paper for more information.
add_places.ttl: Events in which agents moved from one room to another.

Tools
A set of tools for searching and extracting videos from VHAKG is available.
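Because each Turtle file can be loaded with standard RDF libraries, the embedded videos can also be recovered without the official tools. The following is a minimal Python sketch using rdflib; the file name, the namespace IRI, and the predicate vh2kg:video holding the base64 literal are assumptions for illustration, and the actual terms are defined in vh2kg_schema_v2.0.0.ttl.

import base64
from rdflib import Graph

g = Graph()
# Hypothetical file name following the {activity name}{scene}_{camera}_2dbbox.ttl pattern.
g.parse("clean_desk_scene1_camera1_2dbbox.ttl", format="turtle")

# The namespace IRI and the predicate vh2kg:video are assumptions; check the
# ontology (vh2kg_schema_v2.0.0.ttl) for the actual property that holds the
# base64-encoded video literal.
query = """
PREFIX vh2kg: <http://example.org/virtualhome2kg/ontology/>
SELECT ?s ?data WHERE {
    ?s vh2kg:video ?data .
}
"""

for i, row in enumerate(g.query(query)):
    raw = base64.b64decode(str(row.data))       # decode the base64 literal back to bytes
    with open(f"video_{i}.mp4", "wb") as out:   # .mp4 container is an assumption
        out.write(raw)

The decoded bytes are written with an .mp4 extension here; the actual video container used by the dataset should be checked before relying on that choice.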
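The event-centric KGs can be queried in the same way. The sketch below lists the events and primitive actions of each activity in one file, again with assumed predicate names (vh2kg:hasEvent, vh2kg:action) and an assumed file name; consult the ontology for the actual vocabulary.

from rdflib import Graph

g = Graph()
# Hypothetical file name following the {activity name}_{scene}.ttl pattern.
g.parse("clean_desk_scene1.ttl", format="turtle")

# vh2kg:hasEvent and vh2kg:action are assumed predicate names for the
# activity-to-event and event-to-action links; the real terms are defined
# in vh2kg_schema_v2.0.0.ttl.
query = """
PREFIX vh2kg: <http://example.org/virtualhome2kg/ontology/>
SELECT ?activity ?event ?action WHERE {
    ?activity vh2kg:hasEvent ?event .
    ?event vh2kg:action ?action .
}
"""

for row in g.query(query):
    print(row.activity, row.event, row.action)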