Assist Users' Interactions in Font Search with Unexpected but Useful Concepts Generated by Multimodal Learning
Abstract: When searching for suitable fonts for a digital graphic, users usually start with an ambiguous idea. For example, they may look for fonts suitable for a personal web page or for a children's party invitation. Their design concept becomes clearer as they interact with external stimuli, such as exposure to images they might use on their web page or to the children's preferences regarding the party. Hence, it is important to support users' interactions with unexpected but useful concepts during their search. In this paper, we present a novel framework that helps users explore a font dataset through a multimodal method that returns unexpected but useful font images or concept words in response to the user's input. We collect a large font dataset with associated tags and propose an unsupervised generative model that jointly learns the correlation between a font's visual features and its associated tags to support the creative process. By examining how the model's outputs change with various inputs, we observed that it produces highly promising results. In our experiments, we verified that the concepts generated by the model are not only novel but also relevant to the user's input, and thus appear useful for inspiring users.
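The retrieval step the abstract describes, mapping a user's tag query into a space shared with font visual features and returning the nearest fonts, can be sketched generically. The sketch below is not the paper's model: the font names, tags, feature dimensions, and random "embeddings" are all illustrative placeholders standing in for the learned representations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical placeholder data: 5 fonts and 4 tags embedded in a
# shared 16-dimensional space (names and values are illustrative only;
# in the paper these would come from the learned multimodal model).
font_names = ["Serif-A", "Script-B", "Display-C", "Mono-D", "Round-E"]
tag_names = ["elegant", "playful", "bold", "technical"]
font_feats = rng.normal(size=(5, 16))  # stand-in for visual features
tag_feats = rng.normal(size=(4, 16))   # stand-in for tag embeddings

def normalize(x):
    """L2-normalize rows so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def retrieve_fonts(tag, k=3):
    """Return the k fonts whose visual embedding is closest (by cosine
    similarity) to the given tag's embedding in the shared space."""
    q = normalize(tag_feats)[tag_names.index(tag)]
    sims = normalize(font_feats) @ q
    top = np.argsort(-sims)[:k]
    return [font_names[i] for i in top]

print(retrieve_fonts("playful"))
```

In an interactive search session, the returned fonts (or, symmetrically, concept words retrieved for a font image) would serve as the "unexpected but useful" suggestions that refine the user's initially ambiguous design concept.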