Synthetic Speaking Children - Why We Need Them and How to Make Them

Published: 2023, Last Modified: 15 Dec 2025, SpeD 2023, CC BY-SA 4.0
Abstract: Contemporary Human-Computer Interaction (HCI) research relies primarily on neural network models for machine vision and speech understanding of a system user. Such models require extensively annotated training datasets for optimal performance, and when building interfaces for users from a vulnerable population such as young children, GDPR introduces significant complexities in data collection, management, and processing. Motivated by the training needs of an Edge-AI smart-toy platform, this research explores the latest advances in generative neural technologies and provides a working proof-of-concept of a controllable data-generation pipeline for speech-driven facial training data at scale. In this context, we demonstrate how StyleGAN-2 can be fine-tuned to create a gender-balanced dataset of children's faces. This dataset includes a variety of controllable factors such as facial expressions, age variations, facial poses, and even speech-driven animations with realistic lip synchronization. By combining generative text-to-speech models for child voice synthesis with a 3D landmark-based talking-heads pipeline, we can generate highly realistic, entirely synthetic, talking child video clips. These video clips can provide valuable, and controllable, synthetic training data for neural network models, bridging the gap when real data are scarce or restricted by privacy regulations.
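To make the pipeline structure described above concrete, the following is a minimal, hypothetical sketch of how the three stages (fine-tuned StyleGAN-2 face generation, child-voice text-to-speech, and 3D landmark-based lip-sync animation) might be composed. All function names, signatures, and the `GenerationSpec` parameters are illustrative placeholders, not the paper's actual implementation or any real library API.

```python
# Hypothetical sketch of the synthetic talking-child data pipeline.
# The stage functions are stubs standing in for (1) a fine-tuned
# StyleGAN-2 face generator, (2) a child-voice TTS model, and
# (3) a 3D landmark-based talking-head animator.
from dataclasses import dataclass
import numpy as np


@dataclass
class GenerationSpec:
    gender: str          # e.g. "female" / "male", for gender balancing
    age_years: int       # controllable apparent age
    expression: str      # e.g. "neutral", "smiling"
    pose_yaw_deg: float  # controllable head pose
    transcript: str      # text the synthetic child should speak


def generate_child_face(spec: GenerationSpec) -> np.ndarray:
    """Placeholder: sample a face image from a fine-tuned StyleGAN-2."""
    return np.zeros((1024, 1024, 3), dtype=np.uint8)


def synthesize_child_voice(text: str) -> np.ndarray:
    """Placeholder: generate child-voice speech audio with a TTS model."""
    return np.zeros(16000, dtype=np.float32)  # 1 s of 16 kHz audio


def animate_talking_head(face: np.ndarray, audio: np.ndarray) -> list[np.ndarray]:
    """Placeholder: drive 3D facial landmarks from audio to lip-sync the face."""
    n_frames = max(int(len(audio) / 16000 * 25), 1)  # assume 25 fps output
    return [face.copy() for _ in range(n_frames)]


def generate_clip(spec: GenerationSpec) -> tuple[list[np.ndarray], np.ndarray]:
    """Compose the three stages into one synthetic talking-child video clip."""
    face = generate_child_face(spec)
    audio = synthesize_child_voice(spec.transcript)
    frames = animate_talking_head(face, audio)
    return frames, audio


if __name__ == "__main__":
    spec = GenerationSpec("female", 6, "smiling", 10.0, "Hello, toy!")
    frames, audio = generate_clip(spec)
    print(f"{len(frames)} video frames, {len(audio)} audio samples")
```

Because each stage is parameterised by the specification object, such a design would allow the attributes the abstract lists (expression, age, pose, transcript) to be varied systematically when generating training data at scale.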