Abstract

Highlights
• PUGCN outperforms baselines with a 2.08% average improvement in EFC.
• Pseudo-utterance generation significantly enhances model performance and robustness.
• Multi-party dialogues benefit most from interlocutor-aware encoding in PUGCN.
• Emotion-level contrastive learning proves vital in short or casual dialogues.