SoundCollage: Automated Discovery of New Classes in Audio Datasets



Abstract


Developing new machine learning applications often requires the collection of new datasets. However, existing datasets may already contain relevant information for training models for new purposes. We propose SoundCollage: a framework to discover new classes within audio datasets by incorporating (1) an audio pre-processing pipeline to decompose different sounds in audio samples, and (2) an automated model-based annotation mechanism to identify the discovered classes. Furthermore, we introduce the clarity measure to assess the coherence of the discovered classes for better training of new downstream applications. Our evaluations show that the accuracy of downstream audio classifiers on discovered-class samples and on a held-out dataset improves over the baseline by up to 34.7% and 4.5%, respectively. These results highlight the potential of SoundCollage for making datasets reusable by labeling them with newly discovered classes. To encourage further research in this area, we open-source our code at https://github.com/nokia-bell-labs/audio-class-discovery.
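To give a rough intuition for the three stages the abstract names (decomposing sounds, grouping them into candidate classes, and scoring class coherence), here is a deliberately simplified toy sketch. Everything in it is hypothetical: `segment_by_energy`, `cluster_by_feature`, and `clarity` are stand-ins for the paper's actual pre-processing, annotation, and clarity measure, which are defined in the paper and the linked repository, not here.

```python
# Toy illustration of the three SoundCollage stages; NOT the paper's
# actual algorithms. All functions below are hypothetical simplifications.

def segment_by_energy(signal, threshold=0.1, min_len=2):
    """Stage 1 stand-in: split a 1-D signal into high-energy segments."""
    segments, current = [], []
    for x in signal:
        if abs(x) >= threshold:
            current.append(x)
        else:
            if len(current) >= min_len:
                segments.append(current)
            current = []
    if len(current) >= min_len:
        segments.append(current)
    return segments

def cluster_by_feature(segments, n_bins=2):
    """Stage 2 stand-in: group segments into coarse candidate classes
    by a single feature (mean absolute amplitude)."""
    feats = [sum(abs(x) for x in s) / len(s) for s in segments]
    lo, hi = min(feats), max(feats)
    width = (hi - lo) / n_bins or 1.0
    clusters = {}
    for seg, f in zip(segments, feats):
        label = min(int((f - lo) / width), n_bins - 1)
        clusters.setdefault(label, []).append(seg)
    return clusters

def clarity(cluster):
    """Stage 3 stand-in: a toy coherence score, 1 minus the mean
    feature spread within the cluster (higher = more coherent)."""
    feats = [sum(abs(x) for x in s) / len(s) for s in cluster]
    mean = sum(feats) / len(feats)
    spread = sum(abs(f - mean) for f in feats) / len(feats)
    return max(0.0, 1.0 - spread)

# Example: a synthetic "recording" with three bursts of activity.
signal = [0, 0, 0.5, 0.6, 0, 0, 0.9, 0.95, 0, 0.4, 0.45, 0]
segments = segment_by_energy(signal)          # 3 segments
clusters = cluster_by_feature(segments)       # 2 candidate classes
scores = {k: clarity(v) for k, v in clusters.items()}
```

The real system operates on spectrogram-level decompositions and uses pretrained audio models for annotation; this sketch only mirrors the control flow of decompose, group, then score.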


Publications


SoundCollage: Automated Discovery of New Classes in Audio Datasets
Ryuhaerang Choi, Soumyajit Chatterjee, Dimitris Spathis, Sung-Ju Lee, Fahim Kawsar, and Mohammad Malekzadeh
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2025.
PDF



People


Ryuhaerang Choi (KAIST)
Soumyajit Chatterjee (Nokia Bell Labs)
Dimitris Spathis (Nokia Bell Labs)
Sung-Ju Lee (KAIST)
Fahim Kawsar (Nokia Bell Labs)
Mohammad Malekzadeh (Nokia Bell Labs)