DiffStyleTS: Diffusion Model for Style Transfer in Time Series
By: Mayank Nagda, Phil Ostheimer, Justus Arweiler, and others
Potential Business Impact:
Creates realistic synthetic time-series data so AI models can learn from less real data.
Style transfer combines the content of one signal with the style of another. It supports applications such as data augmentation and scenario simulation, helping machine learning models generalize in data-scarce domains. While style transfer is well developed for vision and language, methods for time series data remain limited. We introduce DiffStyleTS, a diffusion-based framework that disentangles a time series into content and style representations via convolutional encoders and recombines them through a self-supervised, attention-based diffusion process. At inference, the encoders extract content and style from two distinct series, enabling conditional generation of novel samples that achieve style transfer. We demonstrate both qualitatively and quantitatively that DiffStyleTS achieves effective style transfer. We further validate its real-world utility by showing that data augmentation with DiffStyleTS improves anomaly detection in data-scarce regimes.
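To make the described pipeline concrete, below is a minimal PyTorch sketch of the architecture the abstract outlines: two convolutional encoders that separate a series into content and style embeddings, and an attention-based denoiser trained with a standard diffusion noise-prediction objective. All class names, dimensions, the cosine noise schedule, and the DDIM-style sampler are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ConvEncoder(nn.Module):
    """1-D convolutional encoder mapping a series to a fixed-size embedding."""

    def __init__(self, in_channels: int = 1, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.GELU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.GELU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.proj = nn.Linear(64, dim)

    def forward(self, x):  # x: (batch, channels, length)
        return self.proj(self.net(x).squeeze(-1))  # (batch, dim)


class ConditionalDenoiser(nn.Module):
    """Self-attention denoiser conditioned on content and style embeddings."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.inp = nn.Linear(1, dim)
        self.cond = nn.Linear(2 * dim + 1, dim)  # content + style + timestep
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(dim, 1)

    def forward(self, x_t, t, c, s):
        # x_t: (batch, length, 1); t: (batch,); c, s: (batch, dim)
        h = self.inp(x_t)
        t_feat = t.float()[:, None] / 1000.0  # crude timestep feature
        h = h + self.cond(torch.cat([c, s, t_feat], dim=-1))[:, None, :]
        h, _ = self.attn(h, h, h)
        return self.out(h)  # predicted noise, same shape as x_t


def alpha_bar(t, T):
    """Cosine noise schedule (an illustrative choice, not from the paper)."""
    return torch.cos(t / T * torch.pi / 2) ** 2


def train_step(x, content_enc, style_enc, denoiser, T=1000):
    """Self-supervised step: encode a clean series, noise it, predict the noise."""
    t = torch.randint(0, T, (x.shape[0],))
    ab = alpha_bar(t, T)[:, None, None]
    noise = torch.randn_like(x)
    x_t = ab.sqrt() * x + (1 - ab).sqrt() * noise
    c = content_enc(x.transpose(1, 2))  # content embedding of the clean series
    s = style_enc(x.transpose(1, 2))    # style embedding of the same series
    return ((denoiser(x_t, t, c, s) - noise) ** 2).mean()


@torch.no_grad()
def style_transfer(x_content, x_style, content_enc, style_enc, denoiser,
                   T=1000, steps=50):
    """Deterministic DDIM-style sampling: content from one series, style from another."""
    c = content_enc(x_content.transpose(1, 2))
    s = style_enc(x_style.transpose(1, 2))
    x = torch.randn_like(x_content)
    ts = torch.linspace(T - 1, 0, steps).long()
    for i in range(steps):
        t = ts[i].expand(x.shape[0])
        ab_t = alpha_bar(t, T)[:, None, None]
        eps = denoiser(x, t, c, s)
        x0 = (x - (1 - ab_t).sqrt() * eps) / ab_t.sqrt().clamp(min=1e-3)
        if i + 1 < steps:
            ab_prev = alpha_bar(ts[i + 1].expand(x.shape[0]), T)[:, None, None]
            x = ab_prev.sqrt() * x0 + (1 - ab_prev).sqrt() * eps
        else:
            x = x0
    return x


if __name__ == "__main__":
    enc_c, enc_s, den = ConvEncoder(), ConvEncoder(), ConditionalDenoiser()
    batch = torch.randn(8, 128, 1)  # dummy batch: 8 series of length 128
    print("loss:", train_step(batch, enc_c, enc_s, den).item())
    out = style_transfer(batch[:4], batch[4:], enc_c, enc_s, den)
    print("generated:", tuple(out.shape))  # (4, 128, 1)
```

In this sketch, swapping which series feeds each encoder at sampling time is what realizes the style transfer; for the data-augmentation use case, generated series would simply be added to the anomaly detector's training set.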
Similar Papers
DS-Diffusion: Data Style-Guided Diffusion Model for Time-Series Generation
Machine Learning (CS)
Makes computer-made data match real-world styles.
Leveraging Diffusion Models for Stylization using Multiple Style Images
CV and Pattern Recognition
Changes pictures to look like any art style.
StyleClone: Face Stylization with Diffusion-Based Data Augmentation
CV and Pattern Recognition
Changes photos to look like a chosen style.