Abstract: Text-to-image (T2I) models are emerging as a powerful tool that lets designers create user interface (UI) prototypes from natural language inputs (i.e., prompts). However, the gap between what designers naturally write and the prompts these models respond to best makes it difficult for designers to consistently deliver effective results to end users. To bridge this gap, we introduce a novel hybrid method that helps designers craft user-centric prompts for T2I models, ensuring that the generated UIs align with end-user expectations. First, the method combines text mining with Kansei Engineering to analyze online user reviews and construct a knowledge graph that maps the relationships among users' affective requirements, design features, and the corresponding text prompts for UI generation. Then, during the human–AI collaborative design process, our approach automatically transforms designer inputs into model-preferred prompts through entity mention recognition and entity linking. Finally, we validate the proposed approach with a case study on automotive human–machine interface design. Experimental results show that our approach scores highly on perceived efficiency, satisfaction, and expectation disconfirmation. Overall, this study is a step toward integrating human and AI contributions in engineering design and innovation, enabling AI to inspire, develop, and reinforce human creativity from a human factors perspective.