Integrating Thermal Imaging and Deep Learning for Real-Time Strength Estimation in 3D Concrete Printing

Published: 22 Sept 2025 · Last Modified: 22 Sept 2025 · WiML @ NeurIPS 2025 · CC BY 4.0
Keywords: 3D concrete printing, Semantic image segmentation, Thermal imaging
Abstract: 3D concrete printing (3DCP) is transforming the construction industry by enhancing sustainability and efficiency [1]. However, accurately estimating the strength of fresh concrete is challenging because traditional methods are time-consuming and destructive [2]. This research aims to adapt the well-established maturity method, traditionally used to estimate in-situ concrete strength from time-temperature data, by integrating semantic image segmentation and thermal imaging tailored specifically to 3DCP applications. This vision–thermal pipeline enables continuous, contactless strength estimates during printing, reducing reliance on destructive tests. We evaluated three semantic segmentation models for segmenting freshly deposited concrete in RGB images: DeepLabv3+ (Xception), U-Net (EfficientNet), and PSPNet (ResNet-50), chosen for their strong segmentation performance and ease of deployment, which make them well suited to layer-wise 3DCP monitoring. We curated ~1,050 paired RGB–thermal sequences (~11,000 labeled frames) from 33 early-age prints across two mixes and varied lighting conditions. The data were split 70/15/15 at the print level to prevent cross-print leakage, and temperatures were logged over the first 2–6 hours. Training used stochastic gradient descent with momentum (SGDM) and L2 weight decay, an initial learning rate of 0.003 with a piecewise schedule, per-epoch shuffling, and geometric and photometric augmentations to improve robustness. We ablated the mini-batch size (10, 30, 60) and trained for up to 100 epochs with early stopping on a held-out validation set (patience = 10) to mitigate overfitting. The models were evaluated using four metrics: mean accuracy, global accuracy, intersection over union (IoU), and boundary F1 (BF) score, as well as a qualitative visual assessment. As shown in Table 1, DeepLabv3+ (Xception) outperformed the others across all metrics. U-Net (EfficientNet) and PSPNet (ResNet-50) were less accurate but still segmented concrete reliably under varied conditions, although they struggled to distinguish old from fresh concrete and to detect edges in cluttered environments. The second part of this study used an infrared (IR) thermal camera (FLIR T540) to capture the real-time temperature evolution of 3D-printed concrete layers. We first assessed reliability through sensitivity tests and then conducted controlled trials to capture thermal images at key stages of concrete setting. Surface temperatures were cross-validated against contact resistance temperature detectors (RTDs). RGB and thermal frames were processed in FLIR Tools+ to extract sectional temperatures for the printed layers and cube specimens. Building on these initial findings, the next phase will develop a learnable regression head that fuses segmentation and thermal features to predict strength end-to-end. Predicted strengths will be benchmarked against standard uniaxial compressive tests and the conventional maturity method to quantify accuracy, bias, and reliability, and the system will be further validated in controlled and on-site 3DCP trials. The proposed methodology is expected to advance strength estimation through the novel use of thermal data and to demonstrate practical applications of machine learning for improving the reliability and safety of 3D-printed concrete structures.
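For reference, the maturity method cited above is commonly implemented with the Nurse-Saul maturity index; the sketch below shows that standard formulation. The datum temperature and any strength-maturity calibration constants are mix-specific assumptions, not values from this work.

```latex
% Nurse-Saul maturity index: accumulated time-temperature product up to age t
%   T_a : average concrete temperature over the interval \Delta t
%   T_0 : datum temperature below which no strength gain is assumed (mix-specific)
M(t) = \sum_{0}^{t} (T_a - T_0)\,\Delta t

% A common (assumed) strength-maturity calibration, with a, b fitted per mix:
S(M) = a + b \ln M
```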
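A minimal sketch of the print-level 70/15/15 split described in the abstract, assuming a simple tabular index of labeled frames with a hypothetical print_id column; splitting by print keeps all frames from one print inside a single partition, which is what prevents cross-print leakage.

```python
# Hypothetical print-level 70/15/15 split: partition by print_id, not by frame.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# One row per labeled frame; "print_id" identifies the source print (toy schema).
frames = pd.DataFrame({
    "frame_path": [f"frame_{i}.png" for i in range(330)],
    "print_id": np.repeat(np.arange(33), 10),
})

print_ids = rng.permutation(frames["print_id"].unique())
n = len(print_ids)
train_ids = set(print_ids[: int(0.70 * n)])
val_ids = set(print_ids[int(0.70 * n): int(0.85 * n)])
test_ids = set(print_ids[int(0.85 * n):])

train = frames[frames["print_id"].isin(train_ids)]
val = frames[frames["print_id"].isin(val_ids)]
test = frames[frames["print_id"].isin(test_ids)]
```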
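A minimal PyTorch sketch of the stated training recipe: SGDM with L2 weight decay, an initial learning rate of 0.003 on a piecewise schedule, up to 100 epochs, and early stopping with patience 10. The toy model, dummy data, momentum, decay value, and step interval are assumptions, not the authors' exact configuration.

```python
# Sketch of SGDM + L2 decay + piecewise LR schedule + early stopping (patience = 10).
import torch
from torch import nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 2, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.003,
                            momentum=0.9, weight_decay=1e-4)   # momentum/decay assumed
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)  # piecewise drop (interval assumed)
criterion = nn.CrossEntropyLoss()

# Dummy RGB frames and per-pixel labels standing in for the real (augmented, shuffled) loaders.
x_train, y_train = torch.randn(8, 3, 64, 64), torch.randint(0, 2, (8, 64, 64))
x_val, y_val = torch.randn(4, 3, 64, 64), torch.randint(0, 2, (4, 64, 64))

best_val, patience, wait = float("inf"), 10, 0
for epoch in range(100):                          # up to 100 epochs
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(x_train), y_train)
    loss.backward()
    optimizer.step()
    scheduler.step()

    model.eval()
    with torch.no_grad():
        val_loss = criterion(model(x_val), y_val).item()
    if val_loss < best_val - 1e-4:
        best_val, wait = val_loss, 0
    else:
        wait += 1
        if wait >= patience:                      # early stopping on the held-out validation set
            break
```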
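The reported metrics follow their standard definitions for a C-class segmentation problem; the BF score is the boundary F1 measure, in which predicted and ground-truth contour pixels match if they fall within a pixel-distance tolerance (the tolerance used here is not specified in the abstract).

```latex
% Per-class intersection over union, mean accuracy, and global accuracy (N = total pixels)
\mathrm{IoU}_c = \frac{TP_c}{TP_c + FP_c + FN_c}, \qquad
\text{mean acc.} = \frac{1}{C}\sum_{c=1}^{C}\frac{TP_c}{TP_c + FN_c}, \qquad
\text{global acc.} = \frac{\sum_{c} TP_c}{N}

% Boundary F1 (BF) score: harmonic mean of boundary precision P_b and recall R_b,
% computed with a distance tolerance \theta on predicted vs. ground-truth contours
\mathrm{BF} = \frac{2\,P_b R_b}{P_b + R_b}
```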
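Because the fusion head is still planned work, the following is only a hypothetical PyTorch sketch of one way such a head could combine pooled segmentation features with aggregated thermal statistics to regress compressive strength; all names, dimensions, and feature choices are assumptions.

```python
# Hypothetical regression head fusing segmentation features with thermal statistics.
import torch
from torch import nn

class StrengthHead(nn.Module):
    def __init__(self, seg_feat_dim: int = 256, thermal_feat_dim: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(seg_feat_dim + thermal_feat_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),          # predicted compressive strength (e.g. MPa)
        )

    def forward(self, seg_feats: torch.Tensor, thermal_feats: torch.Tensor) -> torch.Tensor:
        # seg_feats: (B, seg_feat_dim) encoder features pooled over the fresh-concrete mask
        # thermal_feats: (B, thermal_feat_dim) e.g. mean/max layer temperature, age, maturity index
        return self.mlp(torch.cat([seg_feats, thermal_feats], dim=1))

head = StrengthHead()
pred = head(torch.randn(4, 256), torch.randn(4, 8))   # toy batch of 4 printed layers
```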
Submission Number: 389