Abstract: In the fields of art history and visual semiotics, analysing gazes in paintings is important for understanding an artwork and for finding semantic relationships between paintings. Thanks to digitization and museum initiatives, the volume of datasets on artworks continues to expand, opening new avenues for exploration and research. Artificial neural networks trained on large datasets are able to extract complex features and compare artworks visually. Such a comparison can focus on the objects present in the paintings and match paintings with high object co-occurrence. Our research takes this further by studying the way objects are viewed by the characters in the scene. This study proposes a new approach that combines gaze-based and visual-based similarity methods to encode and use gaze information for finding similar paintings while maintaining a close visual aspect. Experimental results, which integrate the opinions of domain experts, show that these methods c
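
As a rough illustration of the kind of combination the abstract describes, the sketch below blends a visual-embedding similarity with a gaze-descriptor similarity into a single ranking score. It is a minimal sketch under stated assumptions, not the authors' implementation: the embedding sizes, the weighting factor alpha, and the function names (cosine, combined_similarity) are all illustrative.

    # Hypothetical sketch: rank paintings by a weighted blend of
    # visual-embedding similarity and gaze-descriptor similarity.
    # All names and dimensions here are assumptions for illustration only.
    import numpy as np

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity with a small epsilon to avoid division by zero.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def combined_similarity(visual_a, visual_b, gaze_a, gaze_b, alpha=0.5):
        # alpha weights the visual term against the gaze term.
        return alpha * cosine(visual_a, visual_b) + (1 - alpha) * cosine(gaze_a, gaze_b)

    # Toy usage: rank a small random collection against a query painting.
    rng = np.random.default_rng(0)
    query = {"visual": rng.normal(size=512), "gaze": rng.normal(size=64)}
    collection = {f"painting_{i}": {"visual": rng.normal(size=512),
                                    "gaze": rng.normal(size=64)} for i in range(5)}
    ranked = sorted(collection,
                    key=lambda k: combined_similarity(query["visual"], collection[k]["visual"],
                                                      query["gaze"], collection[k]["gaze"]),
                    reverse=True)
    print(ranked)

In practice the visual embedding could come from any pretrained image encoder and the gaze descriptor from an encoding of who looks at what in the scene; the weighting choice controls how much gaze agreement is allowed to override pure visual resemblance.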