Artificial Pain Representation with Tactile and Vision Blending

Francisco Ribeiro, Alexandre Bernardino, José Santos-Victor, Minoru Asada, Erhan Oztop

Published: 2025 · Last Modified: 04 Apr 2026 · Humanoids 2025 · CC BY-SA 4.0
Abstract: As robots become increasingly embedded in human environments, the ability to anticipate the outcomes of physical contact is crucial for enabling safe, adaptive, and socially intelligent behavior. Learning to discriminate harmful sensory patterns from benign ones not only supports physical safety during robot interaction, but may also lay the foundation for artificial empathy by mirroring the pain of others. To this end, this work develops a framework for tactile prediction through multimodal learning, emphasizing the integration of visual and tactile information in a common latent space. The ability to predict tactile sensations prior to contact allows a robot both to avoid harmful outcomes and to internalize the tactile experience of others. We adapt the Deep Modality Blending Network (DMBN) as a foundational model for this task. Synchronized visual and tactile data are collected from demonstrations involving both gentle and noxious human touch and used to train the model. After learning, the robot can generate temporal tactile activations from visual observations alone, anticipating sensory outcomes before physical contact occurs. Experiments on an upper-body humanoid robot show that it can predict painful stimuli and mirror tactile experiences observed in others. The key contributions of this study include: (1) the development of a predictive tactile perception framework using DMBNs, (2) the adaptation of this framework for modeling artificial pain that may serve as a basis for artificial empathy, and (3) empirical validation in real-world human-robot interaction scenarios.
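The core idea sketched in the abstract, encoding vision and touch into a shared latent space and decoding tactile predictions from vision alone, can be illustrated with a minimal example. The sketch below is not the authors' DMBN implementation (the actual architecture, layer sizes, and blending mechanism are not specified here); all module names, dimensions, and the convex blending coefficient `alpha` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ModalityBlendingSketch(nn.Module):
    """Minimal sketch of vision-touch blending in a shared latent space.

    Two modality encoders map visual and tactile observations into a common
    latent space; a convex blend of the two latents is decoded back into a
    tactile prediction. Dimensions are placeholder assumptions.
    """

    def __init__(self, vision_dim=64, touch_dim=16, latent_dim=32):
        super().__init__()
        self.vision_enc = nn.Sequential(
            nn.Linear(vision_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.touch_enc = nn.Sequential(
            nn.Linear(touch_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.touch_dec = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, touch_dim))

    def forward(self, vision, touch, alpha):
        # alpha in [0, 1]: 1.0 uses only the visual latent (prediction before
        # contact), 0.0 uses only the tactile latent; intermediate values blend.
        z_vision = self.vision_enc(vision)
        z_touch = self.touch_enc(touch)
        z = alpha * z_vision + (1.0 - alpha) * z_touch
        return self.touch_dec(z)

# Usage: with alpha=1.0 the tactile signal is reconstructed from vision alone,
# i.e. the robot anticipates the touch outcome before contact occurs.
model = ModalityBlendingSketch()
vision_feat = torch.randn(1, 64)   # placeholder visual features
touch_feat = torch.zeros(1, 16)    # no tactile input available yet
predicted_touch = model(vision_feat, touch_feat, alpha=1.0)
```

In training, one would feed both modalities (e.g. random or annealed `alpha`) so that either modality can reconstruct the tactile trajectory; at inference, setting `alpha=1.0` yields the vision-only tactile prediction the abstract describes. How the actual DMBN schedules blending and handles temporal trajectories is not detailed here.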