Score: 2

DiffUS: Differentiable Ultrasound Rendering from Volumetric Imaging

Published: August 9, 2025 | arXiv ID: 2508.06768v1

By: Noe Bertramo, Gabriel Duguey, Vivek Gopalakrishnan

BigTech Affiliations: Massachusetts Institute of Technology

Potential Business Impact:

Makes image-guided surgery safer by helping surgeons align real-time ultrasound views with preoperative MRI/CT scans.

Intraoperative ultrasound imaging provides real-time guidance during numerous surgical procedures, but its interpretation is complicated by noise, artifacts, and poor alignment with high-resolution preoperative MRI/CT scans. To bridge the gap between preoperative planning and intraoperative guidance, we present DiffUS, a physics-based, differentiable ultrasound renderer that synthesizes realistic B-mode images from volumetric imaging. DiffUS first converts 3D MRI scans into acoustic impedance volumes using a machine learning approach. Next, we simulate ultrasound beam propagation using ray tracing with coupled reflection-transmission equations. DiffUS formulates wave propagation as a sparse linear system that captures multiple internal reflections. Finally, we reconstruct B-mode images via depth-resolved echo extraction across a fan-shaped acquisition geometry, incorporating realistic artifacts including speckle noise and depth-dependent degradation. DiffUS is entirely implemented as differentiable tensor operations in PyTorch, enabling gradient-based optimization for downstream applications such as slice-to-volume registration and volumetric reconstruction. Evaluation on the ReMIND dataset demonstrates DiffUS's ability to generate anatomically accurate ultrasound images from brain MRI data.
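To make the rendering idea concrete, below is a minimal PyTorch sketch of a differentiable, reflection-based A-line (single-ray) renderer over an acoustic impedance volume. It is not the paper's implementation: the function and parameter names (`render_a_line`, `origin`, `direction`, `step`) are illustrative, and it uses a single-bounce approximation rather than the paper's sparse linear system with multiple internal reflections.

```python
# Hedged sketch: differentiable single-ray echo rendering from an impedance volume.
# Assumes impedance has already been estimated from MRI (shape 1 x 1 x D x H x W).
import torch
import torch.nn.functional as F

def render_a_line(impedance, origin, direction, n_samples=256, step=0.5):
    """Trace one ray through the impedance volume and return per-depth echoes.

    impedance: (1, 1, D, H, W) tensor of acoustic impedance values.
    origin, direction: (3,) tensors in normalized [-1, 1] volume coordinates.
    Returns an (n_samples - 1,) tensor of echo intensities that is
    differentiable w.r.t. the impedance volume and the ray parameters.
    """
    depths = torch.arange(n_samples, dtype=impedance.dtype) * step
    points = origin + depths[:, None] * direction            # (n_samples, 3)

    # Sample impedance along the ray (trilinear interpolation, differentiable).
    grid = points.view(1, 1, 1, -1, 3)                       # xyz order for grid_sample
    z = F.grid_sample(impedance, grid, align_corners=True).view(-1)

    # Intensity reflection coefficient at each interface between samples:
    # R = ((Z2 - Z1) / (Z2 + Z1))^2, with transmission T = 1 - R.
    z1, z2 = z[:-1], z[1:]
    refl = ((z2 - z1) / (z2 + z1 + 1e-8)) ** 2

    # Echo from depth i is the reflection there, attenuated by transmission
    # through all shallower interfaces (single-bounce approximation; the paper's
    # sparse linear system additionally captures multiple internal reflections).
    trans = torch.cumprod(1.0 - refl, dim=0)
    incident = torch.cat([trans.new_ones(1), trans[:-1]])
    return incident * refl


# Usage: a random impedance volume and one vertical ray; gradients flow back
# to the volume, which is what enables slice-to-volume registration by descent.
vol = torch.rand(1, 1, 64, 64, 64) + 1.0
vol.requires_grad_(True)
origin = torch.tensor([0.0, 0.0, -1.0])
direction = torch.tensor([0.0, 0.0, 2.0 / 255])
echoes = render_a_line(vol, origin, direction)
echoes.sum().backward()
```

A full B-mode image would repeat this over a fan of ray directions and add speckle and depth-dependent degradation, per the abstract.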

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
10 pages

Category
Computer Science:
Computer Vision and Pattern Recognition