Keywords: Imagery, Creativity, Computational Modeling
TL;DR: The paper explores visual imagery and creativity using a generative computational model that is biologically and cognitively plausible.
Abstract: How do we imagine visual objects, and how do we combine them to create new forms? Answering this question requires exploring the cognitive, computational, and neural mechanisms underlying imagery and creativity. The body of research on deep learning models with creative behaviors is growing; however, we suggest that the complexity of such models and of their training sets impedes their use as tools for understanding the human aspects of creativity. We propose instead using simpler models, inspired by neural and cognitive mechanisms, that are trained on smaller data sets. We show that a standard deep learning architecture can demonstrate imagery by generating shape/color combinations from symbolic codes alone. However, the model could not generate a new combination that it had not experienced during training. We discuss the limitations of such models and explain how creativity could be embedded by incorporating mechanisms that transform the network’s output into new combinations and use those combinations as new training data.