Can Large Language Models Predict Audio Effects Parameters from Natural Language?
By: Seungheon Doh, Junghyun Koo, Marco A. Martínez-Ramírez, and more
Potential Business Impact:
Lets you control music effects with words.
In music production, manipulating audio effects (Fx) parameters through natural language has the potential to reduce technical barriers for non-experts. We present LLM2Fx, a framework leveraging Large Language Models (LLMs) to predict Fx parameters directly from textual descriptions without requiring task-specific training or fine-tuning. Our approach addresses the text-to-effect parameter prediction (Text2Fx) task by mapping natural language descriptions to the corresponding Fx parameters for equalization and reverberation. We demonstrate that LLMs can generate Fx parameters in a zero-shot manner, elucidating the relationship between timbre semantics and audio effects in music production. To enhance performance, we introduce three types of in-context examples: audio Digital Signal Processing (DSP) features, DSP function code, and few-shot examples. Our results show that LLM-based Fx parameter generation outperforms previous optimization approaches, offering competitive performance in translating natural language descriptions into appropriate Fx settings. Furthermore, LLMs can serve as text-driven interfaces for audio production, paving the way for more intuitive and accessible music production tools.
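The abstract does not include the authors' prompts, but the zero-shot Text2Fx idea can be sketched as a prompt that constrains the LLM to emit a JSON dictionary of effect parameters, which is then parsed and applied to the signal chain. Everything below is a minimal illustrative sketch: the five-band EQ schema, the parameter ranges, and the `call_llm` placeholder are assumptions, not the paper's actual implementation (which also supports richer in-context examples such as DSP features and function code).

```python
import json

# Hypothetical five-band parametric EQ schema.
# Keys and gain ranges are illustrative, not taken from the paper.
EQ_SCHEMA = {
    "low_shelf_gain_db": "[-12, 12]",
    "low_mid_gain_db": "[-12, 12]",
    "mid_gain_db": "[-12, 12]",
    "high_mid_gain_db": "[-12, 12]",
    "high_shelf_gain_db": "[-12, 12]",
}

def build_text2fx_prompt(description: str) -> str:
    """Zero-shot prompt asking the LLM for EQ parameters as strict JSON."""
    schema = json.dumps(EQ_SCHEMA, indent=2)
    return (
        "You are an audio engineer. Given a timbre description, "
        "return equalizer parameters as a JSON object with these keys "
        f"and allowed ranges:\n{schema}\n"
        f'Description: "{description}"\n'
        "Respond with JSON only."
    )

def parse_fx_params(llm_output: str) -> dict:
    """Extract the first JSON object from the model's reply."""
    start, end = llm_output.find("{"), llm_output.rfind("}") + 1
    return json.loads(llm_output[start:end])

# Usage sketch (call_llm is a placeholder for any chat-completion client):
# reply = call_llm(build_text2fx_prompt("make the vocal warmer and less harsh"))
# params = parse_fx_params(reply)
```

The strict-JSON instruction plus a tolerant parser is one common way to get machine-readable parameters out of a general-purpose LLM; the paper's in-context variants would extend the same prompt with DSP features, DSP function code, or few-shot description-to-parameter pairs.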
Similar Papers
LLM2Fx-Tools: Tool Calling For Music Post-Production
Sound
Makes music sound better by automatically adding effects.
Probing Audio-Generation Capabilities of Text-Based Language Models
Sound
Computers learn to make sounds from words.
EmoSLLM: Parameter-Efficient Adaptation of LLMs for Speech Emotion Recognition
Audio and Speech Processing
Helps computers understand your feelings from your voice.