Keywords: Virtual/Augmented Reality, Interaction Design, Creativity Support
Abstract: Recent progress in head-worn 3D displays has made mixed-reality storytelling, which allows digital art to interact with the physical surroundings, a new and promising medium to visualize ideas and bring sketches to life. While previous works have introduced dynamic sketching and animation in 3D spaces, visual and audio effects typically need to be manually specified. We present EnchantedBrush, a novel mixed-reality approach for creating animated storyboards with automatic motion and sound effects in real-world environments. People can create animations that interact with their physical surroundings using a set of interactive motion and sound brushes. We evaluated our approach with 12 participants, including professional artists. The results suggest that EnchantedBrush facilitates storytelling and communication, and utilizing the physical environment eases animation authoring and simplifies story creation.
Track: HCI/visualization
Accompanying Video: zip
Revision: No
Summary Of Changes: We thank the reviewers for their detailed feedback and suggestions. We take the comments seriously and have addressed them in the revised version. The feedback has helped us improve the quality of our paper, and we greatly appreciate all reviews.
Below is a summary of the changes that we made in the paper in response to each point raised by the meta review and the individual reviews. We refer to Area Chair wxbc as AC, and to Reviewers ET9a, szzR, and 24c1 as R1, R2, and R3, respectively.
AC, R1, R2: The authors need to be more precise about what is novel compared to previous works, and what the limits of previous systems are
Changes: We stressed the novelty of our work in the contribution statement at the end of the Introduction section. In addition, we clarified the limitations of previous works in Section 2.1.
AC, R1, R3: Clarify the range of objects that the system can handle
Changes: We clarified the range of objects that the system can handle in Section 4.3 Sketch Recognition. (In detail, the system can handle eight categories of objects, each of which is specified in that section. This range of objects was designed based on the stories supported by EnchantedBrush, and it is sufficient for validating the interaction concept of EnchantedBrush.)
AC, R1, R3: Add details about the sketch recognition system
Changes: More details about the sketch recognition system have been added in Section 4.3 Sketch Recognition and Section 6.4 Sketch Recognition Performance. We described the basic structure of the network, the number of images used for training/testing, and the number of object categories the recognizer can classify. In addition, we discussed the accuracy of both 2D and 3D sketch recognition in Section 6.4.
AC, R3: Discuss the menu that appears in the video
Changes: We added Figure 6 to illustrate the menu that appears in the video and described it in Section 4.1 System Overview and Setup. The menu consists of four brushes: Sketch Brush, Motion Brush, Path Brush, and Sound Brush. The reviewer is right that this menu is designed to break the sound and animation operations into chunks. Users select a brush from the menu: Sketch Brush is for sketching elements, Motion Brush is for adding motion lines and animation, Path Brush is for adding customized paths, and Sound Brush is for adding sound components.
AC: Describe and discuss the current performance of the system
Changes: We added details of the sketch recognition system in Sections 4.3 and 6.4. In addition, we expanded the performance discussion in Section 7 Limitations and Future Work. Since we focus on demonstrating the interaction concepts, users are required to perform a set of fixed tasks, so the automated sound/path mechanism works for the supported range of objects. Nevertheless, the scale of EnchantedBrush could be expanded in future work so that users can draw arbitrary or anthropomorphized objects using a broader set of visual languages and a larger dataset.
AC, R2: Add motivation for using a homemade questionnaire, and tone down related claims
Changes: We discussed our motivation for the tailored questionnaire in Section 6.1. In detail, we took the System Usability Scale as a reference and tailor-made our questionnaire for our focus of attention, i.e., the user experience with the proposed features and storytelling. Also, we toned down the "usability" claim to "our interface is easy to use for storytelling" at the end of Section 6.1 Quantitative Metrics.
R1: The authors state that real-world mapping is done by an off-the-shelf piece of technology, but do not indicate how well this works and whether problems arise for the user's experience.
Changes: Explanations were added at the end of Section 6.2 Qualitative Feedback. In the post-study discussions, none of the participants reported encountering mapping issues that affected their user experience.
R2: It is not said how the authors analyzed the qualitative feedback.
Changes: We added more discussion in Section 6.2 Qualitative Feedback to explain how we analyzed the qualitative feedback. In detail, we audio-recorded the interviews and discussions with the participants, with their consent. The interviews were conducted to gather the participants' feelings and experiences during the user study. We then summarized the common and important comments in their responses and presented highlighted quotes from their feedback.
R3: It would be nice if the authors put an image or two of one of the study participants' T4 drawings.
Changes: We agree that Task 4 is the most interesting task, so we added two more images of T4 in addition to the abandoned cow included in our previous submission. Figure 10 now shows three drawings that participants made in T4: a UFO, a cow, and an ambulance.