Score: 2

Whole-Body Image-to-Image Translation for a Virtual Scanner in a Healthcare Digital Twin

Published: March 18, 2025 | arXiv ID: 2503.15555v1

By: Valerio Guarrasi, Francesco Di Feola, Rebecca Restivo, and more

Potential Business Impact:

Generates synthetic PET scans from CT scans, potentially reducing the radiation exposure and cost of PET imaging.

Business Areas:
Image Recognition, Data and Analytics, Software

Generating positron emission tomography (PET) images from computed tomography (CT) scans via deep learning offers a promising pathway to reduce the radiation exposure and costs associated with PET imaging, improving patient care and access to functional imaging. Whole-body image translation is challenging because of anatomical heterogeneity, which often limits generalized models. We propose a framework that segments whole-body CT images into four regions (head, trunk, arms, and legs) and uses district-specific Generative Adversarial Networks (GANs) for tailored CT-to-PET translation. Synthetic PET images from each region are then stitched together to reconstruct the whole-body scan. We compared our approach against a baseline non-segmented GAN and evaluated Pix2Pix and CycleGAN architectures in both paired and unpaired scenarios. Quantitative evaluations at the district, whole-body, and lesion levels demonstrated significant improvements with district-specific GANs, with Pix2Pix yielding the best metrics and ensuring precise, high-quality image synthesis. By addressing anatomical heterogeneity, this approach achieves state-of-the-art results in whole-body CT-to-PET translation. The methodology supports healthcare Digital Twins by enabling accurate virtual PET scans from CT data, creating virtual imaging representations to monitor, predict, and optimize health outcomes.
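The pipeline described in the abstract decomposes naturally into three steps: split the CT volume into anatomical districts, translate each district with its own generator, and stitch the synthetic PET parts back into a whole-body volume. The PyTorch sketch below illustrates that flow only; it is not the authors' code. The `TinyGenerator` stand-in, the axial slice boundaries used in place of a real segmentation step, and all helper names are hypothetical assumptions for illustration.

```python
# Minimal sketch (not the authors' implementation) of district-wise
# CT-to-PET translation: split -> per-district generator -> stitch.
import numpy as np
import torch
import torch.nn as nn

DISTRICTS = ("head", "trunk", "arms", "legs")

class TinyGenerator(nn.Module):
    """Stand-in for a Pix2Pix/CycleGAN generator (hypothetical architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def split_into_districts(ct_volume, boundaries):
    """Partition a (slices, H, W) CT volume by axial slice ranges.
    The ranges are an assumption standing in for the paper's segmentation
    step; real arms, for example, overlap the trunk axially."""
    return {name: ct_volume[s:e] for name, (s, e) in boundaries.items()}

def translate_district(generator, ct_slices):
    """Run one district's CT slices through its dedicated generator."""
    with torch.no_grad():
        x = torch.from_numpy(ct_slices).float().unsqueeze(1)  # (N, 1, H, W)
        return generator(x).squeeze(1).numpy()                # (N, H, W)

def stitch(pet_parts, boundaries, shape):
    """Reassemble per-district synthetic PET slices into a whole-body volume."""
    out = np.zeros(shape, dtype=np.float32)
    for name, (s, e) in boundaries.items():
        out[s:e] = pet_parts[name]
    return out

if __name__ == "__main__":
    ct = np.random.rand(200, 64, 64).astype(np.float32)  # toy whole-body CT
    # Hypothetical slice boundaries in place of a segmentation model.
    bounds = {"head": (0, 30), "trunk": (30, 110),
              "arms": (110, 150), "legs": (150, 200)}
    generators = {name: TinyGenerator().eval() for name in DISTRICTS}
    parts = split_into_districts(ct, bounds)
    pet_parts = {n: translate_district(generators[n], parts[n]) for n in DISTRICTS}
    synthetic_pet = stitch(pet_parts, bounds, ct.shape)
    print(synthetic_pet.shape)  # (200, 64, 64) whole-body synthetic PET
```

Keeping one generator per district is the core design choice the paper argues for: each model specializes on a narrower anatomical distribution, and the stitching step only has to respect the same boundaries used for the split.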

Country of Origin
🇮🇹 🇸🇪 Italy, Sweden

Page Count
7 pages

Category
Electrical Engineering and Systems Science: Image and Video Processing