Predicting Artist Drawing Activity via Multi-camera Inputs for Co-creative Drawing

Published: 01 Jan 2021, Last Modified: 17 Feb 2024 · TAROS 2021
Abstract: This paper presents the results of computer vision experiments in the perception of an artist drawing with analog media (pen and paper), with the aim of contributing towards a human-robot co-creative drawing system. Using data gathered from user studies with artists and illustrators, two types of CNN models were designed and evaluated. Both models use multi-camera images of the drawing surface as input. One model predicts an artist's activity (e.g. are they drawing or not?). The other model predicts the position of the pen on the canvas. Results for different combinations of input sources are presented. The overall mean accuracy is 95% (std: 7%) for predicting when the artist is present and 68% (std: 15%) for predicting when the artist is drawing. The model predicts the pen's position on the drawing canvas with a mean squared error (in normalised units) of 0.0034 (std: 0.0099). These results contribute towards the development of an autonomous robotic system that is aware of an artist at work via camera-based input. In addition, this benefits the artist with a more fluid physical-to-digital workflow for creative content creation.
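The abstract does not specify the network architectures, camera count, or framework used. The sketch below is only an illustrative assumption of how the described setup could be structured: multi-camera RGB frames stacked along the channel dimension, fed to a shared convolutional backbone with two task heads, one classifying artist activity (present / drawing) and one regressing the normalised pen position. Layer sizes, image resolution, and the use of PyTorch are all hypothetical.

```python
# Minimal sketch (not the authors' implementation): multi-camera CNN inputs with
# an activity-classification head and a pen-position regression head.
# Camera count, resolution, and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn


class MultiCamBackbone(nn.Module):
    """Shared convolutional feature extractor over channel-stacked camera views."""

    def __init__(self, num_cameras: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3 * num_cameras, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),
        )

    def forward(self, x):
        # x: (batch, 3 * num_cameras, H, W) -- RGB frames stacked per camera
        return self.features(x)


class ActivityClassifier(nn.Module):
    """Predicts artist activity (e.g. present, drawing) as independent logits."""

    def __init__(self, num_cameras: int = 2, num_labels: int = 2):
        super().__init__()
        self.backbone = MultiCamBackbone(num_cameras)
        self.head = nn.Linear(64 * 4 * 4, num_labels)

    def forward(self, x):
        return self.head(self.backbone(x))


class PenPositionRegressor(nn.Module):
    """Predicts the normalised (x, y) pen position on the canvas."""

    def __init__(self, num_cameras: int = 2):
        super().__init__()
        self.backbone = MultiCamBackbone(num_cameras)
        self.head = nn.Linear(64 * 4 * 4, 2)

    def forward(self, x):
        # Sigmoid keeps predictions in [0, 1], matching normalised canvas units.
        return torch.sigmoid(self.head(self.backbone(x)))


if __name__ == "__main__":
    frames = torch.rand(1, 3 * 2, 128, 128)  # one sample, two RGB cameras stacked
    print(ActivityClassifier()(frames).shape)    # -> torch.Size([1, 2])
    print(PenPositionRegressor()(frames).shape)  # -> torch.Size([1, 2])
    # Training could use BCEWithLogitsLoss for activity labels and MSELoss for
    # pen position, consistent with the normalised MSE reported in the abstract.
```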