Masked Autoencoders for Ultrasound Signals: Robust Representation Learning for Downstream Applications
By: Immanuel Roßteutscher, Klaus S. Drese, Thorsten Uphues
Potential Business Impact:
Teaches computers to learn from unlabeled ultrasound signals, so flaws in materials and structures can be detected with far less hand-labeled data.
We investigated the adaptation and performance of Masked Autoencoders (MAEs) with Vision Transformer (ViT) architectures for self-supervised representation learning on one-dimensional (1D) ultrasound signals. Although MAEs have demonstrated significant success in computer vision and other domains, their use for 1D signal analysis, especially for raw ultrasound data, remains largely unexplored. Ultrasound signals are vital in industrial applications such as non-destructive testing (NDT) and structural health monitoring (SHM), where labeled data are often scarce and signal processing is highly task-specific. We propose an approach that leverages MAE pre-training on unlabeled synthetic ultrasound signals, enabling the model to learn robust representations that enhance performance in downstream tasks such as time-of-flight (ToF) classification. We systematically investigated the impact of model size, patch size, and masking ratio on pre-training efficiency and downstream accuracy. Our results show that pre-trained models significantly outperform both models trained from scratch and strong convolutional neural network (CNN) baselines optimized for the downstream task. Additionally, pre-training on synthetic data demonstrates superior transferability to real-world measured signals compared with training solely on limited real datasets. This study underscores the potential of MAEs for advancing ultrasound signal analysis through scalable, self-supervised learning.
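To make the pre-training recipe concrete, below is a minimal sketch of a 1D MAE with a ViT-style encoder, written in PyTorch. It is an illustration under our own assumptions: the class name MAE1D, the signal length, patch size, embedding width, encoder depth, and the 0.75 masking ratio are placeholders rather than the configuration reported in the paper. The sketch only shows the core mechanism the abstract describes: a raw waveform is cut into non-overlapping patches, a large fraction of patches is masked out, the encoder processes only the visible patches, and a lightweight decoder reconstructs the masked ones, with the loss computed on masked positions only.

```python
# Minimal sketch of a Masked Autoencoder for 1D ultrasound signals, assuming PyTorch.
# Signal length, patch size, embedding width, depth, and the 0.75 masking ratio are
# illustrative placeholders, not the configuration reported in the paper.
import torch
import torch.nn as nn


class MAE1D(nn.Module):
    def __init__(self, signal_len=2048, patch_size=16, embed_dim=192,
                 depth=4, num_heads=4, mask_ratio=0.75):
        super().__init__()
        assert signal_len % patch_size == 0
        self.num_patches = signal_len // patch_size
        self.mask_ratio = mask_ratio

        # Split the raw waveform into non-overlapping patches and embed them.
        self.patch_embed = nn.Linear(patch_size, embed_dim)
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches, embed_dim))

        # ViT-style encoder that sees only the visible (unmasked) patches.
        enc_layer = nn.TransformerEncoderLayer(embed_dim, num_heads,
                                               dim_feedforward=4 * embed_dim,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, depth)

        # Lightweight decoder that reconstructs all patches from latents + mask tokens.
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.decoder_pos_embed = nn.Parameter(torch.zeros(1, self.num_patches, embed_dim))
        dec_layer = nn.TransformerEncoderLayer(embed_dim, num_heads,
                                               dim_feedforward=4 * embed_dim,
                                               batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, 2)
        self.head = nn.Linear(embed_dim, patch_size)

    def random_masking(self, x):
        # Keep a random subset of patches per sample; return the inverse permutation
        # so masked positions can be restored before decoding.
        B, N, D = x.shape
        num_keep = int(N * (1 - self.mask_ratio))
        noise = torch.rand(B, N, device=x.device)
        ids_shuffle = noise.argsort(dim=1)
        ids_restore = ids_shuffle.argsort(dim=1)
        ids_keep = ids_shuffle[:, :num_keep]
        x_visible = torch.gather(x, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
        mask = torch.ones(B, N, device=x.device)
        mask[:, :num_keep] = 0
        mask = torch.gather(mask, 1, ids_restore)  # 1 = masked, 0 = visible
        return x_visible, mask, ids_restore

    def forward(self, signal):
        # signal: (B, signal_len) raw 1D waveform
        B = signal.shape[0]
        patches = signal.view(B, self.num_patches, -1)
        x = self.patch_embed(patches) + self.pos_embed
        x_visible, mask, ids_restore = self.random_masking(x)
        latent = self.encoder(x_visible)

        # Re-insert mask tokens at the masked positions and decode.
        mask_tokens = self.mask_token.expand(B, self.num_patches - latent.shape[1], -1)
        x_full = torch.cat([latent, mask_tokens], dim=1)
        x_full = torch.gather(
            x_full, 1, ids_restore.unsqueeze(-1).expand(-1, -1, x_full.shape[-1]))
        pred = self.head(self.decoder(x_full + self.decoder_pos_embed))

        # Reconstruction loss on masked patches only, as in the original MAE recipe.
        loss = ((pred - patches) ** 2).mean(dim=-1)
        loss = (loss * mask).sum() / mask.sum()
        return loss


# Example: one pre-training step on a batch of (synthetic) waveforms.
model = MAE1D()
loss = model(torch.randn(8, 2048))
loss.backward()
```

After pre-training, the decoder is discarded and the encoder is fine-tuned for downstream tasks such as ToF classification; the patch size and masking ratio shown here as defaults are among the hyperparameters whose effect on pre-training efficiency and downstream accuracy the study examines.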
Similar Papers
USF-MAE: Ultrasound Self-Supervised Foundation Model with Masked Autoencoding
Image and Video Processing
Helps doctors see inside bodies better with sound.
Structure is Supervision: Multiview Masked Autoencoders for Radiology
Computer Vision and Pattern Recognition
Helps doctors find diseases in X-rays better.
Masked Autoencoder Self Pre-Training for Defect Detection in Microelectronics
Computer Vision and Pattern Recognition
Finds tiny flaws in computer chips.