Bright-4B: Scaling Hyperspherical Learning for Segmentation in 3D Brightfield Microscopy

Published: December 27, 2025 | arXiv ID: 2512.22423v1

By: Amil Khan, Matheus Palhares Viana, Suraj Mishra, and more

Potential Business Impact:

Lets microscopes see inside cells without dyes.

Business Areas:
Image Recognition, Data and Analytics, Software

Label-free 3D brightfield microscopy offers a fast, noninvasive way to visualize cellular morphology, yet robust volumetric segmentation still typically depends on fluorescence or heavy post-processing. We address this gap by introducing Bright-4B, a 4-billion-parameter foundation model that learns on the unit hypersphere to segment subcellular structures directly from 3D brightfield volumes. Bright-4B combines a hardware-aligned Native Sparse Attention mechanism (capturing local, coarse, and selected global context), depth-width residual HyperConnections that stabilize representation flow, and a soft Mixture-of-Experts for adaptive capacity. A plug-and-play anisotropic patch embedding further respects the confocal point-spread function and axial thinning, enabling geometry-faithful 3D tokenization. The resulting model produces morphology-accurate segmentations of nuclei, mitochondria, and other organelles from brightfield stacks alone, without fluorescence, auxiliary channels, or handcrafted post-processing. Across multiple confocal datasets, Bright-4B preserves fine structural detail across depth and cell types, outperforming contemporary CNN and Transformer baselines. All code, pretrained weights, and models for downstream finetuning will be released to advance large-scale, label-free 3D cell mapping.
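Two of the abstract's ideas are easy to illustrate in isolation: anisotropic patch tokenization (patches that are thinner along z than in-plane, matching the coarser axial sampling of confocal stacks) and learning on the unit hypersphere (L2-normalizing token embeddings). The following is a minimal NumPy sketch of these two steps under assumed shapes and patch sizes; it is not the paper's implementation, and the function names are illustrative only.

```python
import numpy as np

def anisotropic_patchify(volume, patch=(2, 8, 8)):
    """Split a (Z, Y, X) volume into non-overlapping patches whose axial
    extent (2) is smaller than the lateral extent (8x8), reflecting the
    axial thinning typical of confocal acquisition."""
    pz, py, px = patch
    z, y, x = volume.shape
    assert z % pz == 0 and y % py == 0 and x % px == 0
    v = volume.reshape(z // pz, pz, y // py, py, x // px, px)
    v = v.transpose(0, 2, 4, 1, 3, 5)      # (nz, ny, nx, pz, py, px)
    return v.reshape(-1, pz * py * px)     # one flat token per patch

def to_hypersphere(tokens, eps=1e-8):
    """Project token vectors onto the unit hypersphere (L2 normalization),
    so downstream similarity is purely directional."""
    norms = np.linalg.norm(tokens, axis=-1, keepdims=True)
    return tokens / (norms + eps)

vol = np.random.rand(16, 64, 64).astype(np.float32)   # toy brightfield stack
tokens = to_hypersphere(anisotropic_patchify(vol))
print(tokens.shape)                                   # (512, 128)
```

A real model would replace the flattening with a learned (e.g. strided 3D convolution) projection, but the anisotropic patch geometry and the unit-norm constraint are the parts the abstract highlights.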

Country of Origin
🇺🇸 United States

Page Count
20 pages

Category
Computer Science:
Computer Vision and Pattern Recognition