Learning via social awareness: improving sketch representations with facial feedback

12 Feb 2018 (modified: 05 May 2023), ICLR 2018 Workshop Submission
Abstract: In the quest towards general artificial intelligence (AI), researchers have explored developing loss functions that act as intrinsic motivators in the absence of external rewards. This paper argues that such research has overlooked an important and useful intrinsic motivator: social interaction. We posit that making an AI agent aware of implicit social feedback from humans can allow for faster learning of more generalizable and useful representations, and could have implications for AI safety. We collect social feedback in the form of facial expression reactions to samples from Sketch RNN, an LSTM-based variational autoencoder (VAE) designed to produce sketch drawings. We use a Latent Constraints GAN (LC-GAN) to learn from the facial feedback of a small group of viewers, and then show in an independent evaluation with 76 users that this model produced sketches that led to significantly more positive facial expressions. Thus, we establish that implicit social feedback can improve the output of a deep learning model.
Keywords: social feedback, LSTM, VAE, latent constraints, GAN, MDN, facial expressions, intrinsic motivation
TL;DR: We argue that awareness of implicit social feedback is an important intrinsic motivator for both humans and deep learning models, and provide evidence that awareness of social cues like facial expressions can improve the output of a VAE sequence model.
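The abstract describes the latent-constraints idea at a high level: a small GAN is trained in the latent space of the pre-trained Sketch RNN VAE so that generated latent codes both resemble samples from the VAE prior and score highly under a model of viewers' facial feedback. Below is a minimal, hypothetical PyTorch sketch of that idea; the network sizes, the facial-reward model R, and all hyperparameters are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

Z_DIM, NOISE_DIM, BATCH = 128, 64, 64

# Generator G: maps noise to latent codes for a frozen Sketch RNN decoder.
G = nn.Sequential(nn.Linear(NOISE_DIM, 256), nn.ReLU(), nn.Linear(256, Z_DIM))
# Critic D: scores whether a latent code looks like a sample from the VAE prior.
D = nn.Sequential(nn.Linear(Z_DIM, 256), nn.ReLU(), nn.Linear(256, 1))
# Reward model R: predicts a positive-facial-expression score for a latent code.
# Assumed (hypothetically) to have been fit beforehand on pairs of latent codes
# and smile scores collected while viewers watched the decoded sketches.
R = nn.Sequential(nn.Linear(Z_DIM, 256), nn.ReLU(), nn.Linear(256, 1))
for p in R.parameters():
    p.requires_grad_(False)  # reward model is fixed during GAN training

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    # "Real" latent codes are draws from the VAE prior N(0, I).
    z_real = torch.randn(BATCH, Z_DIM)
    z_fake = G(torch.randn(BATCH, NOISE_DIM))

    # Critic update: distinguish prior samples from generated codes.
    d_loss = (bce(D(z_real), torch.ones(BATCH, 1))
              + bce(D(z_fake.detach()), torch.zeros(BATCH, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: fool the critic while maximizing predicted facial reward.
    z_fake = G(torch.randn(BATCH, NOISE_DIM))
    g_loss = bce(D(z_fake), torch.ones(BATCH, 1)) - R(z_fake).mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

Decoding the resulting latent codes with the frozen Sketch RNN decoder would then yield sketches biased toward those predicted to elicit positive facial expressions.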