Training chord recognition models on artificially generated audio
By: Martyna Majchrzak, Jacek Mańdziuk
Potential Business Impact:
Makes music AI learn chords from fake songs.
One of the challenging problems in Music Information Retrieval is acquiring enough non-copyrighted audio recordings for model training and evaluation. This study compares two Transformer-based neural network models for chord sequence recognition in audio recordings and examines the effectiveness of using an artificially generated dataset for this purpose. The models are trained on various combinations of the Artificial Audio Multitracks (AAM), Schubert's Winterreise, and McGill Billboard datasets and evaluated with three metrics: Root, MajMin, and the Chord Content Metric (CCM). The experiments show that, despite clear differences in complexity and structure between artificially generated and human-composed music, the former can be useful in certain scenarios. Specifically, AAM can enrich a smaller training dataset of human-composed music, and can even serve as a standalone training set for a model that predicts chord sequences in pop music when no other data is available.
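The Root metric mentioned above scores a prediction only on whether the chord root matches the reference, ignoring chord quality. A minimal sketch of that idea, assuming chord labels are strings in the common `"Root:quality"` form (e.g. `"C:maj"`, `"F#:min7"`, `"N"` for no chord) and that both sequences are already time-aligned frame by frame (the paper's exact metric definitions may differ):

```python
# Hypothetical illustration of a frame-level root-match score; not the
# authors' implementation.

def chord_root(label: str) -> str:
    """Return the root of a chord label such as 'C:maj' or 'F#:min7'."""
    return label.split(":")[0]

def root_accuracy(reference: list[str], predicted: list[str]) -> float:
    """Fraction of time-aligned frames whose chord roots agree."""
    if len(reference) != len(predicted):
        raise ValueError("sequences must be time-aligned to equal length")
    matches = sum(chord_root(r) == chord_root(p)
                  for r, p in zip(reference, predicted))
    return matches / len(reference)

ref = ["C:maj", "C:maj", "G:maj", "A:min"]
pred = ["C:maj", "D:maj", "G:maj", "A:maj"]
print(root_accuracy(ref, pred))  # 0.75 — only the second frame's root differs
```

Note that `"A:min"` vs `"A:maj"` still counts as a match here; distinguishing major from minor quality is what the stricter MajMin metric adds on top of root matching.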
Similar Papers
Segment Transformer: AI-Generated Music Detection via Music Structural Analysis
Sound
Tells if music was made by AI or people.
Incorporating Structure and Chord Constraints in Symbolic Transformer-based Melodic Harmonization
Sound
Makes music generators follow your chord ideas.
Chord-conditioned Melody and Bass Generation
Sound
Makes music sound better by following chords.