TAP-CT: 3D Task-Agnostic Pretraining of Computed Tomography Foundation Models
By: Tim Veenboer, George Yiasemis, Eric Marcus and more
Potential Business Impact:
Helps doctors see hidden problems in CT scans.
Existing foundation models (FMs) in the medical domain often require extensive fine-tuning or rely on training resource-intensive decoders, while many available encoders are pretrained with objectives biased toward specific tasks. This highlights the need for a strong, task-agnostic foundation model that requires minimal fine-tuning beyond feature extraction. In this work, we introduce TAP-CT, a suite of task-agnostic pretrained CT foundation models: a simple yet effective adaptation of Vision Transformers (ViTs) and DINOv2 to volumetric data, enabling scalable self-supervised pretraining directly on 3D CT volumes. Our approach incorporates targeted modifications to patch embeddings, positional encodings, and volumetric augmentations, making the architecture depth-aware while preserving the simplicity of the underlying models. We show that large-scale 3D pretraining on an extensive in-house CT dataset (105K volumes) yields stable, robust frozen representations that generalize strongly across downstream tasks. To promote transparency and reproducibility, and to establish a powerful, low-resource baseline for future research in medical imaging, we will release all pretrained models, experimental configurations, and downstream benchmark code at https://huggingface.co/fomofo/tap-ct-b-3d.
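To make the "depth-aware ViT" idea concrete, below is a minimal PyTorch sketch of a volumetric patch embedding with positional encodings over the full 3D patch grid. The class name `PatchEmbed3D`, the patch size, the embedding dimension, and the use of learnable positional embeddings are illustrative assumptions; the abstract does not specify TAP-CT's actual configuration, so this is not the released implementation.

```python
# Minimal sketch of a depth-aware patch embedding for a 3D ViT.
# Volume size, patch size, and embedding dim are illustrative assumptions,
# not the TAP-CT configuration.
import torch
import torch.nn as nn


class PatchEmbed3D(nn.Module):
    """Split a CT volume into non-overlapping 3D patches and project them to tokens."""

    def __init__(self, volume_size=(128, 224, 224), patch_size=(16, 16, 16),
                 in_channels=1, embed_dim=768):
        super().__init__()
        self.grid_size = tuple(v // p for v, p in zip(volume_size, patch_size))
        self.num_patches = self.grid_size[0] * self.grid_size[1] * self.grid_size[2]
        # A 3D convolution with stride == kernel size acts as a volumetric patchifier.
        self.proj = nn.Conv3d(in_channels, embed_dim,
                              kernel_size=patch_size, stride=patch_size)
        # Learnable positional embeddings covering the whole (D, H, W) patch grid,
        # so the encoder sees depth position as well as in-plane position.
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches, embed_dim))
        nn.init.trunc_normal_(self.pos_embed, std=0.02)

    def forward(self, x):
        # x: (B, 1, D, H, W) CT volume
        x = self.proj(x)                  # (B, embed_dim, D', H', W')
        x = x.flatten(2).transpose(1, 2)  # (B, num_patches, embed_dim)
        return x + self.pos_embed


tokens = PatchEmbed3D()(torch.randn(1, 1, 128, 224, 224))
print(tokens.shape)  # torch.Size([1, 1568, 768]) -> 8 x 14 x 14 patches
```

The resulting token sequence can feed a standard ViT/DINOv2-style encoder unchanged, which is consistent with the paper's stated goal of keeping the underlying architecture simple while adding depth awareness.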
Similar Papers
Scaling Self-Supervised and Cross-Modal Pretraining for Volumetric CT Transformers
CV and Pattern Recognition
Makes CT scans show more detail for doctors.
MedDINOv3: How to adapt vision foundation models for medical image segmentation?
CV and Pattern Recognition
Helps doctors see organs and sickness in scans.
Feature Quality and Adaptability of Medical Foundation Models: A Comparative Evaluation for Radiographic Classification and Segmentation
CV and Pattern Recognition
Helps X-rays find sickness better, but not always.