Spatiotemporal Learning with Context-aware Video Tubelets for Ultrasound Video Analysis
By: Gary Y. Li, Li Chen, Bryson Hicks, and more
Potential Business Impact:
Helps doctors spot lung problems in ultrasound videos.
Computer-aided pathology detection algorithms for video-based imaging modalities must accurately interpret complex spatiotemporal information by integrating findings across multiple frames. Current state-of-the-art methods operate by classifying video sub-volumes (tubelets), but they often lose global spatial context by focusing only on local regions within detection ROIs. Here we propose a lightweight framework for tubelet-based object detection and video classification that preserves both global spatial context and fine spatiotemporal features. To address the loss of global context, we embed tubelet location, size, and confidence as inputs to the classifier. Additionally, we use ROI-aligned feature maps from a pre-trained detection model, leveraging learned feature representations to increase the receptive field and reduce computational complexity. Our method is efficient, with the spatiotemporal tubelet classifier comprising only 0.4M parameters. We apply our approach to detect and classify lung consolidation and pleural effusion in ultrasound videos. Five-fold cross-validation on 14,804 videos from 828 patients shows that our method outperforms previous tubelet-based approaches and is suited to real-time workflows.
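To make the context-embedding idea concrete, here is a minimal sketch in PyTorch. It is not the authors' released architecture: the class name ContextAwareTubeletClassifier, the layer sizes, and the GRU temporal aggregator are illustrative assumptions. It only shows how per-frame ROI-aligned feature maps from a frozen detector can be fused with tubelet location, size, and detection confidence before lightweight temporal aggregation.

import torch
import torch.nn as nn


class ContextAwareTubeletClassifier(nn.Module):
    """Hypothetical sketch: classify a tubelet of T frames from ROI-aligned
    detector features plus per-frame box geometry and confidence."""

    def __init__(self, feat_channels=64, embed_dim=32, num_classes=2):
        super().__init__()
        # Compress each frame's ROI-aligned feature map to a compact vector.
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(feat_channels, embed_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Embed normalized box centre, size, and detector confidence (5 values),
        # re-injecting the global spatial context lost when cropping to the ROI.
        self.context_embed = nn.Linear(5, embed_dim)
        # Lightweight temporal aggregation over the tubelet.
        self.temporal = nn.GRU(2 * embed_dim, embed_dim, batch_first=True)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, roi_feats, box_context):
        # roi_feats:   (B, T, C, H, W) ROI-aligned maps from the detector backbone
        # box_context: (B, T, 5) = [cx, cy, w, h, confidence], normalized to [0, 1]
        B, T = roi_feats.shape[:2]
        f = self.frame_encoder(roi_feats.flatten(0, 1)).view(B, T, -1)
        c = self.context_embed(box_context)
        _, h_n = self.temporal(torch.cat([f, c], dim=-1))
        return self.head(h_n[-1])  # (B, num_classes) logits


if __name__ == "__main__":
    model = ContextAwareTubeletClassifier()
    logits = model(torch.randn(2, 16, 64, 7, 7), torch.rand(2, 16, 5))
    print(logits.shape)  # torch.Size([2, 2])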
Similar Papers
Temporal Representation Learning for Real-Time Ultrasound Analysis
Image and Video Processing
Improves heart imaging by understanding movement over time.
Tracking spatial temporal details in ultrasound long video via wavelet analysis and memory bank
CV and Pattern Recognition
Helps doctors see tiny parts in ultrasound videos.
DualTrack: Sensorless 3D Ultrasound needs Local and Global Context
CV and Pattern Recognition
Makes 3D ultrasound pictures without special equipment.