AE-AMT: Attribute-Enhanced Affective Music Generation With Compound Word Representation

Published: 01 Jan 2025, Last Modified: 13 May 2025 · IEEE Trans. Comput. Soc. Syst. 2025 · CC BY-SA 4.0
Abstract: Affective music generation is a challenging task in symbolic music generation. Existing methods struggle to make the perceived emotion of the generated music evident because music datasets with emotion labels are few and small in scale. To address this issue, an attribute-enhanced affective music transformer (AE-AMT) model is proposed to generate music whose perceived emotion is strengthened through attribute enhancement. In addition, a multiquantile-based attribute discretization (MQAD) strategy is designed, enabling the model to generate intensity-controllable affective music pieces. Furthermore, a replication-expanded compound representation of the control signals (RECR) method is designed to improve the controllability of the model. In objective experiments, the AE-AMT model achieved improvements of 29.25% and 19.5% in overall emotion accuracy, and of 30% and 32% in arousal accuracy, on the EMOPIA and VGMIDI datasets, respectively. These gains come without a significant difference in objective music quality, while also providing ample novelty and diversity compared with the current state-of-the-art approach. Moreover, subjective experiments based on the Wilcoxon signed-rank test revealed that the AE-AMT model outperformed the comparison models, especially for low valence and low arousal. Additionally, the soft variant of AE-AMT exhibited a significant advantage in valence, low arousal, and overall music quality. These experiments demonstrate the AE-AMT model's ability to significantly enhance arousal performance and to strike a balance between emotional intensity and musical quality through adaptable strategies.
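The abstract does not detail the MQAD strategy, but the general idea of quantile-based attribute discretization can be illustrated with a minimal sketch: a continuous musical attribute (note density is used here as a hypothetical example) is binned at evenly spaced quantiles of the training corpus, and the resulting bin index serves as a discrete intensity-control signal. This is an assumption-laden illustration of the generic technique, not the authors' implementation.

```python
# Illustrative sketch of quantile-based attribute discretization (hypothetical;
# not the AE-AMT paper's code). A continuous attribute is split into bins at
# evenly spaced quantiles so that a bin index can act as an intensity control.
import numpy as np

def quantile_bins(values: np.ndarray, n_bins: int = 4) -> np.ndarray:
    """Return the interior bin edges located at evenly spaced quantiles."""
    quantiles = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]
    return np.quantile(values, quantiles)

def discretize(value: float, edges: np.ndarray) -> int:
    """Map a continuous attribute value to its quantile bin index (0..n_bins-1)."""
    return int(np.searchsorted(edges, value, side="right"))

# Hypothetical note-density values measured on a training corpus.
densities = np.array([1.2, 2.5, 3.1, 4.8, 5.0, 6.7, 7.3, 8.9])
edges = quantile_bins(densities, n_bins=4)
print(edges)                   # three interior quantile edges
print(discretize(5.5, edges))  # bin index usable as a discrete control token
```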