BOOD: Boundary-based Out-Of-Distribution Data Generation
By: Qilin Liao, Shuo Yang, Bo Zhao, and more
Potential Business Impact:
Helps computers recognize images unlike anything they were trained on.
Harnessing the power of diffusion models to synthesize auxiliary training data based on latent space features has proven effective in enhancing out-of-distribution (OOD) detection performance. However, extracting effective features outside the in-distribution (ID) boundary in latent space remains challenging due to the difficulty of identifying decision boundaries between classes. This paper proposes a novel framework called Boundary-based Out-Of-Distribution data generation (BOOD), which synthesizes high-quality OOD features and generates human-compatible outlier images using diffusion models. BOOD first learns a text-conditioned latent feature space from the ID dataset, selects the ID features closest to the decision boundary, and perturbs them across the decision boundary to form OOD features. These synthetic OOD features are then decoded into images in pixel space by a diffusion model. Compared to previous works, BOOD provides a more training-efficient strategy for synthesizing informative OOD features, facilitating clearer distinctions between ID and OOD data. Extensive experimental results on common benchmarks demonstrate that BOOD surpasses the state-of-the-art method significantly, achieving a 29.64% decrease in average FPR95 (40.31% vs. 10.67%) and a 7.27% improvement in average AUROC (90.15% vs. 97.42%) on the CIFAR-100 dataset.
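The core idea — take an ID feature near the decision boundary and perturb it until it crosses into OOD territory — can be sketched in a few lines. The sketch below is a hypothetical simplification, not the paper's implementation: it assumes a linear classifier `W` over latent features and uses normalized gradient ascent on the cross-entropy loss until the predicted class flips, at which point the feature is treated as synthetic OOD. (In BOOD, the resulting features would then be decoded into images by a diffusion model, which is omitted here.)

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def perturb_across_boundary(feature, W, true_label, step_size=0.1, max_steps=50):
    """Push an ID feature across the decision boundary of a linear classifier W.

    Performs normalized gradient ascent on the cross-entropy loss with respect
    to the feature itself; once the predicted class no longer matches the true
    label, the feature has crossed the boundary and is returned as a synthetic
    OOD feature. This is a hypothetical sketch of BOOD's perturbation step.
    """
    z = feature.astype(float).copy()
    for _ in range(max_steps):
        logits = W @ z
        if logits.argmax() != true_label:
            return z  # crossed the boundary: treat as OOD
        p = softmax(logits)
        onehot = np.zeros_like(p)
        onehot[true_label] = 1.0
        grad = W.T @ (p - onehot)  # d(cross-entropy)/dz for a linear classifier
        z = z + step_size * grad / np.linalg.norm(grad)
    return z  # did not flip within max_steps; still near-boundary
```

For example, with `W = [[-1, 0], [1, 0]]` the boundary is the line x = 0, and a class-0 feature at `(-0.1, 0)` is pushed rightward until the classifier predicts class 1. BOOD's "closest to the boundary" selection step would correspond to picking the ID features with the smallest logit margin before applying this perturbation.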
Similar Papers
BootOOD: Self-Supervised Out-of-Distribution Detection via Synthetic Sample Exposure under Neural Collapse
CV and Pattern Recognition
Helps computers flag unfamiliar images, even tricky ones.
Local Background Features Matter in Out-of-Distribution Detection
CV and Pattern Recognition
Helps computers know when they see something new.
GOOD: Training-Free Guided Diffusion Sampling for Out-of-Distribution Detection
CV and Pattern Recognition
Helps computers notice images outside what they were trained on.