Towards Situation-Adaptive In-Vehicle Voice Output

CIU 2020 (modified: 14 Dec 2022)
Abstract: Human-machine interaction is increasingly speech-based, with a trend away from the earlier command-based style towards natural, intuitive dialogues modeled on human conversation. A prerequisite is the ability of a Spoken Dialogue System to react flexibly to individual requirements, e.g., by means of adaptive voice output. The need to maximize the efficiency of spoken interaction through alignment at all linguistic levels becomes particularly relevant in dual-task situations, where speech is a secondary task performed in parallel to a prioritized primary task, such as driving a car. In addition to the individual requirements of a user, the demands of the interaction context need to be considered. For this purpose, it is beneficial to examine the particular characteristics of user language during the performance of a primary task. To this end, we conducted data collection in a driving simulator and investigated user language while driving, with a focus on the syntactic level. Our results show significant differences in language use between two driving contexts of different complexity, which should be taken into account in the generation of voice output. Our analyses serve as a basis for future work towards user- and situation-adaptive voice output in dual-task environments.