Amuse

Human-AI Collaborative Songwriting with Multimodal Inspirations



Abstract


Songwriting is often driven by multimodal inspirations, such as imagery, narratives, or existing music, yet current music AI systems offer songwriters little support for incorporating these multimodal inputs into their creative process. We introduce Amuse, a songwriting assistant that transforms multimodal (image, text, or audio) inputs into chord progressions that songwriters can seamlessly incorporate into their work. A key feature of Amuse is a novel method for generating chords that are coherent and relevant to music keywords, despite the absence of datasets pairing multimodal inputs with chords. Specifically, our method leverages multimodal LLMs to convert multimodal inputs into noisy chord suggestions, then uses a unimodal chord model to filter those suggestions. A user study with songwriters shows that Amuse effectively supports transforming multimodal ideas into coherent musical suggestions, enhancing users' agency and creativity throughout the songwriting process.
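
The generate-then-filter recipe can be sketched in a few lines. The Python below is a minimal illustration under assumed interfaces, not Amuse's implementation: the toy BigramChordModel stands in for the unimodal chord model, and the hard-coded candidate list stands in for the noisy progressions a multimodal LLM would propose from extracted music keywords.

```python
# Illustrative sketch of the generate-then-filter idea (not the authors' code).
# A multimodal LLM proposes candidate chord progressions; a unimodal chord
# model then keeps only the most musically coherent ones.

import math
from collections import Counter
from typing import List

class BigramChordModel:
    """Toy unimodal chord model: scores a progression by the average
    smoothed log-probability of its chord bigrams under a small corpus."""

    def __init__(self, corpus: List[List[str]], smoothing: float = 1e-3):
        self.bigrams = Counter()
        self.unigrams = Counter()
        self.smoothing = smoothing
        for prog in corpus:
            for a, b in zip(prog, prog[1:]):
                self.bigrams[(a, b)] += 1
                self.unigrams[a] += 1

    def score(self, progression: List[str]) -> float:
        logp = 0.0
        for a, b in zip(progression, progression[1:]):
            p = (self.bigrams[(a, b)] + self.smoothing) / (
                self.unigrams[a] + self.smoothing * len(self.unigrams))
            logp += math.log(p)
        return logp / max(len(progression) - 1, 1)

def filter_suggestions(candidates, chord_model, top_k=3):
    """Keep the top_k candidates most coherent under the chord model,
    discarding noisy LLM suggestions."""
    return sorted(candidates, key=chord_model.score, reverse=True)[:top_k]

# Usage: in practice, `noisy` would come from prompting a multimodal LLM
# with keywords extracted from an image, text, or audio clip.
corpus = [["C", "G", "Am", "F"], ["Am", "F", "C", "G"], ["C", "F", "G", "C"]]
model = BigramChordModel(corpus)
noisy = [["C", "G", "Am", "F"], ["C", "B", "F#m", "C"], ["Am", "F", "C", "G"]]
print(filter_suggestions(noisy, model, top_k=2))
```

Running the sketch keeps the two familiar progressions and drops the incoherent one, mirroring how a unimodal model can clean up noisy multimodal suggestions.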


Publications


Amuse: Human-AI Collaborative Songwriting with Multimodal Inspirations
Yewon Kim, Sung-Ju Lee, Chris Donahue
ACM CHI Conference on Human Factors in Computing Systems (CHI), 2025.
Best Paper Award
PDF · Code



Teaser




Presentation Video




People


Yewon Kim (KAIST)

Sung-Ju Lee (KAIST)

Chris Donahue (CMU)