Score: 2

Scaling Self-Supervised and Cross-Modal Pretraining for Volumetric CT Transformers

Published: November 21, 2025 | arXiv ID: 2511.17209v1

By: Cris Claessens, Christiaan Viviers, Giacomo D'Amicantonio, and more

Potential Business Impact:

Helps software read CT scans more accurately, supporting doctors' diagnoses.

Business Areas:
Image Recognition, Data and Analytics, Software

We introduce SPECTRE, a fully transformer-based foundation model for volumetric computed tomography (CT). Our Self-Supervised & Cross-Modal Pretraining for CT Representation Extraction (SPECTRE) approach utilizes scalable 3D Vision Transformer architectures and modern self-supervised and vision-language pretraining strategies to learn general-purpose CT representations. Volumetric CT poses unique challenges, such as extreme token scaling, geometric anisotropy, and weak or noisy clinical supervision, that make standard transformer and contrastive learning recipes ineffective out of the box. The framework jointly optimizes a local transformer for high-resolution volumetric feature extraction and a global transformer for whole-scan context modeling, making large-scale 3D attention computationally tractable. Notably, SPECTRE is trained exclusively on openly available CT datasets, demonstrating that high-performing, generalizable representations can be achieved without relying on private data. Pretraining combines DINO-style self-distillation with SigLIP-based vision-language alignment using paired radiology reports, yielding features that are both geometrically consistent and clinically meaningful. Across multiple CT benchmarks, SPECTRE consistently outperforms prior CT foundation models in both zero-shot and fine-tuned settings, establishing it as a scalable, open, and fully transformer-based foundation model for 3D medical imaging.

Country of Origin
🇳🇱 Netherlands

Repos / Data Links

Page Count
21 pages

Category
Computer Science:
CV and Pattern Recognition