Bright 4B: Scaling Hyperspherical Learning for Segmentation in 3D Brightfield Microscopy
By: Amil Khan, Matheus Palhares Viana, Suraj Mishra, and more
Potential Business Impact:
Lets microscopes see inside cells without dyes.
Label-free 3D brightfield microscopy offers a fast and noninvasive way to visualize cellular morphology, yet robust volumetric segmentation still typically depends on fluorescence or heavy post-processing. We address this gap by introducing Bright-4B, a 4-billion-parameter foundation model that learns on the unit hypersphere to segment subcellular structures directly from 3D brightfield volumes. Bright-4B combines a hardware-aligned Native Sparse Attention mechanism (capturing local, coarse, and selected global context), depth-width residual HyperConnections that stabilize representation flow, and a soft Mixture-of-Experts for adaptive capacity. A plug-and-play anisotropic patch embed further respects the confocal point-spread function and axial thinning, enabling geometry-faithful 3D tokenization. The resulting model produces morphology-accurate segmentations of nuclei, mitochondria, and other organelles from brightfield stacks alone, without fluorescence, auxiliary channels, or handcrafted post-processing. Across multiple confocal datasets, Bright-4B preserves fine structural detail across depth and cell types, outperforming contemporary CNN and Transformer baselines. All code, pretrained weights, and models for downstream finetuning will be released to advance large-scale, label-free 3D cell mapping.
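To make the anisotropic tokenization idea concrete, here is a minimal NumPy sketch of splitting a 3D stack into patches that are thinner along the axial (Z) direction than in-plane, reflecting the coarser axial sampling of confocal stacks. The function name, patch sizes, and shapes are illustrative assumptions, not Bright-4B's actual implementation (which uses a learned patch-embedding layer rather than plain reshaping).

```python
import numpy as np

def anisotropic_patchify(volume, patch=(2, 8, 8)):
    """Split a 3D volume (Z, Y, X) into flattened anisotropic patches.

    A smaller axial (Z) patch extent than lateral (Y, X) keeps each
    token roughly isotropic in physical units when Z is sampled more
    coarsely. Hypothetical helper for illustration only.
    """
    pz, py, px = patch
    Z, Y, X = volume.shape
    assert Z % pz == 0 and Y % py == 0 and X % px == 0, "volume must tile evenly"
    # Carve the volume into a (Z/pz, Y/py, X/px) grid of patches,
    # then flatten each patch into one token vector.
    tokens = (volume
              .reshape(Z // pz, pz, Y // py, py, X // px, px)
              .transpose(0, 2, 4, 1, 3, 5)
              .reshape(-1, pz * py * px))
    return tokens  # shape: (num_patches, voxels_per_patch)

vol = np.arange(16 * 32 * 32, dtype=np.float32).reshape(16, 32, 32)
tok = anisotropic_patchify(vol)  # (8 * 4 * 4, 2 * 8 * 8) = (128, 128)
```

In a real model each flattened patch would then pass through a linear projection (or, equivalently, a strided 3D convolution with anisotropic kernel and stride) to produce the transformer's input tokens.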
Similar Papers
High-Throughput Low-Cost Segmentation of Brightfield Microscopy Live Cell Images
Quantitative Methods
Finds cells in blurry microscope pictures.
Foreground-aware Virtual Staining for Accurate 3D Cell Morphological Profiling
CV and Pattern Recognition
Makes cell pictures clearer without dyes.
GUI Based Fuzzy Logic and Spatial Statistics for Unsupervised Microscopy Segmentation
Image and Video Processing
Helps scientists see tiny cells without special dyes.