ControlTac: Force- and Position-Controlled Tactile Data Augmentation with a Single Reference Image

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Tactile Sensing, Tactile Data Augmentation, Tactile Generation, Robot Learning
TL;DR: ControlTac is a two-stage, controllable tactile generative model that synthesizes realistic tactile images from a single reference image plus force and position priors, boosting performance across downstream tasks.
Abstract: Vision-based tactile sensing is widely used in perception, reconstruction, and robotic manipulation, yet collecting large-scale tactile data remains costly due to diverse sensor-object interactions and inconsistencies across sensor instances. Existing approaches to scaling tactile data, namely simulation and free-form tactile generation, often yield unrealistic signals that transfer poorly to highly dynamic real-world tasks. We propose **ControlTac**, a two-stage controllable framework that generates realistic tactile images conditioned on a single reference tactile image, contact force, and contact position. By grounding generation in these physical priors, **ControlTac** produces realistic samples that capture task-relevant variations. Across three downstream tasks and three real-world experiments, datasets augmented with our approach consistently improve performance, demonstrating practical utility in dynamic real-world settings. Project page: https://controltac.github.io/
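To make the conditioning interface concrete, below is a minimal, hypothetical sketch of how contact force and contact position could be embedded and fed to a conditional image generator alongside a reference tactile image. This is an illustration only, not the authors' implementation: the module name `ForcePositionConditioner`, the MLP encoders, the embedding size, and the 3-D force / 2-D position input shapes are all assumptions.

```python
import torch
import torch.nn as nn


class ForcePositionConditioner(nn.Module):
    """Hypothetical conditioning module (not the paper's code).

    Encodes a 3-D contact force and a 2-D contact position into a single
    embedding that a conditional generator (e.g., a diffusion U-Net) could
    consume together with features of the reference tactile image.
    """

    def __init__(self, embed_dim: int = 256):
        super().__init__()
        # Separate small MLPs for each physical prior.
        self.force_mlp = nn.Sequential(
            nn.Linear(3, embed_dim), nn.SiLU(), nn.Linear(embed_dim, embed_dim)
        )
        self.pos_mlp = nn.Sequential(
            nn.Linear(2, embed_dim), nn.SiLU(), nn.Linear(embed_dim, embed_dim)
        )

    def forward(self, force: torch.Tensor, position: torch.Tensor) -> torch.Tensor:
        # force: (B, 3) contact force; position: (B, 2) coords normalized to [0, 1].
        # Summing the two embeddings is one simple fusion choice among many.
        return self.force_mlp(force) + self.pos_mlp(position)


# Usage sketch: the embedding would condition the generator, e.g., via
# cross-attention or feature modulation, together with reference-image features.
cond = ForcePositionConditioner()
emb = cond(torch.randn(4, 3), torch.rand(4, 2))  # -> shape (4, 256)
```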
Supplementary Material: zip
Primary Area: applications to robotics, autonomy, planning
Submission Number: 13062