Abstract: Mainstream radar-based person identification typically relies on the Doppler effect or the intensity of radar reflection signals. These approaches break down, however, when multiple people occupy the radar's detection area, because they cannot precisely locate each person in real space. Moreover, when such signals are fed directly into neural networks, the networks tend to capture macroscopic information, such as body reflections and velocity, while overlooking the fine-grained cues crucial for identification, such as body posture and gait. To overcome this challenge, this study proposes RTMP-ID, a real-time through-wall multiperson identification system based on MIMO radar. The system employs a global-local dual-branch structure to learn fine-grained identity information. The local branch introduces a radar-based human pose estimation network to accurately extract human posture sequences; from these sequences, an identity feature extraction model derives individual identity information. Meanwhile, the global branch integrates the posture information from the local branch in a feedback manner to more accurately extract the target reflection regions for feature extraction. By fusing the features from both branches, the system identifies individuals accurately. Our results highlight the importance of 3-D pose sequences for improving the robustness and accuracy of multitarget person identification. Across experiments in three scenarios, the system achieved a recognition rate of up to 97.4%, even in the presence of stationary individuals.
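The dual-branch fusion idea described above can be sketched in miniature. The following is not the authors' code: the feature summaries (per-axis joint displacement for the pose branch, mean/peak intensity for the reflection branch) and the nearest-centroid matcher are illustrative assumptions standing in for the paper's learned networks; only the structure (local pose features + global reflection features, fused, then matched to an identity) reflects the abstract.

```python
# Hypothetical sketch of RTMP-ID's global-local fusion. All shapes and
# feature choices are assumptions, not the published architecture.

def local_branch(pose_seq):
    """Summarize a 3-D pose sequence (frames x joints x [x, y, z]) as a
    gait-like feature: mean per-axis joint displacement between frames."""
    feats = [0.0, 0.0, 0.0]
    for prev, curr in zip(pose_seq, pose_seq[1:]):
        for joint_prev, joint_curr in zip(prev, curr):
            for axis in range(3):
                feats[axis] += abs(joint_curr[axis] - joint_prev[axis])
    steps = max(len(pose_seq) - 1, 1)
    return [f / steps for f in feats]

def global_branch(reflection_region):
    """Summarize a cropped reflection-intensity map (2-D list) by its
    mean and peak intensity."""
    flat = [v for row in reflection_region for v in row]
    return [sum(flat) / len(flat), max(flat)]

def identify(pose_seq, reflection_region, gallery):
    """Fuse both branches and return the enrolled identity whose stored
    feature vector is nearest (Euclidean) to the fused feature."""
    fused = local_branch(pose_seq) + global_branch(reflection_region)

    def dist(name):
        return sum((a - b) ** 2 for a, b in zip(fused, gallery[name])) ** 0.5

    return min(gallery, key=dist)
```

In the paper both branches are learned networks and the global branch's target-region cropping is refined by pose feedback from the local branch; this sketch only fixes the data flow, not those components.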
External IDs: dblp:journals/iotj/WangHGRSG25