Fine-Tuning Large Language Models Using EEG Microstate Features for Mental Workload Assessment
By: Bujar Raufi
Potential Business Impact:
Helps computers understand how hard you're thinking.
This study explores the intersection of electroencephalography (EEG) microstates and Large Language Models (LLMs) to enhance the assessment of cognitive load. Using EEG microstate features, the research fine-tunes LLMs to predict two distinct cognitive states, 'Rest' and 'Load'. The experimental design comprises four stages: dataset collection and preprocessing, microstate segmentation and EEG backfitting, feature extraction paired with prompt engineering, and LLM selection and fine-tuning. Under a supervised learning paradigm, the LLM is trained to identify cognitive load states from EEG microstate features embedded in prompts, enabling accurate discrimination between the two conditions. A curated dataset linking EEG features to specified cognitive load conditions underpins the experimental framework. The results show a significant improvement in model performance after the proposed fine-tuning, demonstrating the potential of EEG-informed LLMs in cognitive neuroscience and cognitive AI applications. This approach not only contributes to the understanding of brain dynamics but also paves the way for advances in machine learning techniques for cognitive load research.
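The abstract outlines but does not detail the third stage, feature extraction paired with prompt engineering. Below is a minimal sketch of how standard microstate parameters (mean duration, coverage, and occurrence for the canonical classes A-D) could be serialized into a labeled prompt for supervised fine-tuning. All feature names, values, and the chat-style JSONL record format are illustrative assumptions, not the paper's actual pipeline.

```python
import json

# Hypothetical microstate features for one EEG epoch; the paper's exact
# feature set and value ranges are not specified in the abstract.
example_epoch = {
    "label": "Load",
    "features": {
        # Mean duration (ms), time coverage (%), and occurrence (per second)
        # for the four canonical microstate classes A-D.
        "A": {"duration_ms": 62.1, "coverage_pct": 21.4, "occurrence_hz": 3.4},
        "B": {"duration_ms": 58.7, "coverage_pct": 18.9, "occurrence_hz": 3.2},
        "C": {"duration_ms": 74.3, "coverage_pct": 33.5, "occurrence_hz": 4.5},
        "D": {"duration_ms": 66.0, "coverage_pct": 26.2, "occurrence_hz": 4.0},
    },
}

def features_to_prompt(features: dict) -> str:
    """Serialize microstate parameters into a natural-language prompt."""
    lines = ["EEG microstate features for one epoch:"]
    for state, params in features.items():
        lines.append(
            f"Microstate {state}: mean duration {params['duration_ms']:.1f} ms, "
            f"coverage {params['coverage_pct']:.1f}%, "
            f"occurrence {params['occurrence_hz']:.1f}/s."
        )
    lines.append("Classify the cognitive state as 'Rest' or 'Load'.")
    return "\n".join(lines)

def to_finetune_record(epoch: dict) -> dict:
    """Build one supervised fine-tuning example as a chat-style record."""
    return {
        "messages": [
            {"role": "user", "content": features_to_prompt(epoch["features"])},
            {"role": "assistant", "content": epoch["label"]},
        ]
    }

if __name__ == "__main__":
    # One such record per epoch, written as JSONL, would form the curated
    # dataset the abstract describes.
    print(json.dumps(to_finetune_record(example_epoch), indent=2))
```

Framing each epoch as a prompt-completion pair like this keeps the task in the LLM's native text interface, so any standard supervised fine-tuning setup can consume the dataset without architectural changes.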
Similar Papers
Spiking Neural Networks for Mental Workload Classification with a Multimodal Approach
Neural and Evolutionary Computing
Lets computers measure brain effort quickly.
Large Language Models for EEG: A Comprehensive Survey and Taxonomy
Signal Processing
Lets computers understand brain signals like words.
Towards Attention-Aware Large Language Models: Integrating Real-Time Eye-Tracking and EEG for Adaptive AI Responses
Human-Computer Interaction
Helps computers know when you're not paying attention.