Score: 2

Can Large Language Models Predict Audio Effects Parameters from Natural Language?

Published: May 27, 2025 | arXiv ID: 2505.20770v2

By: Seungheon Doh, Junghyun Koo, Marco A. Martínez-Ramírez, and more

Potential Business Impact:

Lets you control music effects with words.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In music production, manipulating audio effects (Fx) parameters through natural language has the potential to reduce technical barriers for non-experts. We present LLM2Fx, a framework leveraging Large Language Models (LLMs) to predict Fx parameters directly from textual descriptions without requiring task-specific training or fine-tuning. Our approach addresses the text-to-effect parameter prediction (Text2Fx) task by mapping natural language descriptions to the corresponding Fx parameters for equalization and reverberation. We demonstrate that LLMs can generate Fx parameters in a zero-shot manner that elucidates the relationship between timbre semantics and audio effects in music production. To enhance performance, we introduce three types of in-context examples: audio Digital Signal Processing (DSP) features, DSP function code, and few-shot examples. Our results demonstrate that LLM-based Fx parameter generation outperforms previous optimization approaches, offering competitive performance in translating natural language descriptions to appropriate Fx settings. Furthermore, LLMs can serve as text-driven interfaces for audio production, paving the way for more intuitive and accessible music production tools.
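The abstract's core idea, prompting an LLM to emit Fx parameters for a timbre description, optionally with few-shot examples, can be sketched as below. This is a hypothetical illustration, not the paper's actual prompt or parameter schema: the equalizer parameter names, ranges, and prompt wording are assumptions.

```python
import json

# Illustrative EQ parameter schema (names and dB ranges are assumptions,
# not taken from the LLM2Fx paper).
EQ_PARAMS = {
    "low_shelf_gain_db": (-12.0, 12.0),
    "mid_gain_db": (-12.0, 12.0),
    "high_shelf_gain_db": (-12.0, 12.0),
}

def build_prompt(description, few_shot_examples=()):
    """Assemble a zero-/few-shot prompt asking the LLM for JSON EQ parameters."""
    lines = [
        "You are an audio engineer. Return JSON with keys: "
        + ", ".join(EQ_PARAMS) + "."
    ]
    # Optional in-context (few-shot) examples, one of the three example
    # types the paper describes.
    for desc, params in few_shot_examples:
        lines.append(f"Description: {desc}\nParameters: {json.dumps(params)}")
    lines.append(f"Description: {description}\nParameters:")
    return "\n\n".join(lines)

def parse_response(text):
    """Parse the LLM's JSON reply and clamp values to the valid ranges."""
    params = json.loads(text)
    for name, (lo, hi) in EQ_PARAMS.items():
        params[name] = min(max(float(params[name]), lo), hi)
    return params
```

In practice the prompt string would be sent to an LLM API and the raw text reply passed to `parse_response`; clamping guards against out-of-range predictions before the values are applied to a DSP chain.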

Repos / Data Links

Page Count
8 pages

Category
Computer Science:
Sound