Co-Here: an expressive videoconferencing module for implicit affective interaction

Published: 23 Jan 2024 · Last Modified: 30 May 2024 · GI 2024 · CC BY 4.0
Letter Of Changes: We thank the reviewers for their insightful comments. We have made the following changes:

- **RQ1 & RQ1.1**: There was some confusion around the relationship between RQ1.1 and RQ1. We agree that the two research questions are quite similar, and that "understanding emotion" can be treated as a special case of "understanding meaning". We have therefore merged RQ1.1 into RQ1 by deleting RQ1.1 and relating its findings to RQ1.
- **Table 2**: This table was difficult to understand (R1, R2, AC). We have replaced it with a new graphic (Figure 6) and a description that more clearly shows the relationship between themes and categories.
- **More information about participants, codes, and categories**: Reviewers wished to see more information about the relationships between codes, categories, and participants, including their frequency breakdown. As we had over 200 codes, we opted to focus on the relationship between categories and participants by creating a new figure (Fig. 7): a chord diagram showing how each participant's quotes were categorized. It also shows the proportion of each category in our corpus and the relative contribution of each participant.
- **Additional demographic information**: We now report past experience with videoconferencing, along with frequency summaries, at the bottom of Figure 5. We believe further demographic detail would not provide a meaningful epistemic contribution to the paper, as per https://interactions.acm.org/archive/view/january-february-2024/evaluating-interpretive-research-in-hci
- **Writing requests: synthesis, discussion, and conciseness**:
  - R2 wished for more synthesis of results: we have added sentences to the start of Theme 1 and Theme 2 that further summarize the themes and categories.
  - R1 noted opportunities for more conciseness: we removed sentences in the introduction that did not add to the theoretical background of our contribution, and made minor tweaks for conciseness throughout the paper (e.g. "situational contexts" -> "context").
  - R2 wanted to see more discussion: we have expanded the discussion with an additional section (6.3) that unifies several categories (promotion of emotional expression, ethics, identity).
- **Open access to data**:
  - R2 requested open access to codes: we have included a link to our Airtable with anonymized qualitative codes, categories, and themes on page 5.
  - R1 requested open access to the GitHub code base: we have included the GitHub URL on page 3.
Keywords: telepresence, affective computing, affective interaction, videoconferencing
TL;DR: A qualitative evaluation of a new animation system that enhances affective interaction while giving 1:N presentations remotely
Abstract: Participants of one-to-many videoconferencing calls often experience a significant loss of affective feedback due to the limitations of the medium. To address this problem, we present Co-Here: a videoconferencing module that renders participant-driven animations conveying group affect without relying on categorical emotion detection or signalling. Co-Here consolidates and visualizes the facial expressions and head movements of fellow videoconferencers, providing a medium through which others can meaningfully construct emotional impressions. Our qualitative user study showed that the system helped users feel aligned with the emotions of other participants. Co-Here was most useful to presentation viewers, who reported that it offered a low-attention way to gauge the sentiment of the crowd.
Supplementary Material: zip
Video: zip
Submission Number: 21