Abstract: Animated humans (AHs) have gained popularity due to their vivid appearance and smooth, natural movements. Various animation methods based on artificial intelligence (AI) have been introduced; viewed as "Imitators," they offer new solutions for designing AHs. However, the quality of these AI-generated AHs varies significantly both across and within categories, producing visual distortions that degrade the viewer's experience. It is therefore essential to evaluate AH quality to provide reliable, objective indicators for further development and to ensure that higher-quality AH videos reach users. In this paper, the first Animated Human Quality Assessment (AHQA) dataset is constructed by selecting 6 advanced and popular imitators and 10 common actions to animate 20 AI-generated characters. The dataset covers character images of different genders and age groups as well as two poses, standing and sitting, underscoring its comprehensiveness and diversity. Subjective experiments reveal significant quality differences among AHs produced by different imitators. Finally, we propose VIP-QA, a quality assessment method for the AHQA dataset that incorporates Video quality, Identity consistency, and Posture similarity. Experimental results show that VIP-QA outperforms existing assessment methods on multiple datasets by about 5%, aligns more closely with human visual perception, and provides a valid objective metric for assessing imitators. All the work in this paper has been released at https://github.com/zyj-2000/Imitator.
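The abstract describes VIP-QA as combining three components: video quality, identity consistency, and posture similarity. The abstract does not specify how the components are fused, so the weighted-sum combination below is purely an illustrative assumption; the function name and weights are hypothetical.

```python
def vip_qa_score(video_quality: float,
                 identity_consistency: float,
                 posture_similarity: float,
                 weights: tuple = (1/3, 1/3, 1/3)) -> float:
    """Hypothetical fusion of the three VIP-QA components into one score.

    Each component is assumed to be a normalized score in [0, 1].
    Equal weighting is an illustrative assumption, not the paper's method.
    """
    components = (video_quality, identity_consistency, posture_similarity)
    return sum(w * s for w, s in zip(weights, components))


# Example: an AH with strong identity preservation but weaker posture tracking.
overall = vip_qa_score(video_quality=0.8,
                       identity_consistency=0.9,
                       posture_similarity=0.7)
```

In practice, such weights would be learned or tuned against the subjective scores collected for the AHQA dataset rather than fixed by hand.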
External IDs: dblp:journals/tcsv/ZhouZJJLMZ25