BioBench: A Blueprint to Move Beyond ImageNet for Scientific ML Benchmarks
By: Samuel Stevens
Potential Business Impact:
Helps computers understand nature pictures better.
ImageNet-1K linear-probe transfer accuracy remains the default proxy for visual representation quality, yet it no longer predicts performance on scientific imagery. Across 46 modern vision model checkpoints, ImageNet top-1 accuracy explains only 34% of variance on ecology tasks and mis-ranks 30% of models above 75% accuracy. We present BioBench, an open ecology vision benchmark that captures what ImageNet misses. BioBench unifies 9 publicly released, application-driven tasks, 4 taxonomic kingdoms, and 6 acquisition modalities (drone RGB, web video, micrographs, in-situ and specimen photos, camera-trap frames), totaling 3.1M images. A single Python API downloads data, fits lightweight classifiers to frozen backbones, and reports class-balanced macro-F1 (plus domain metrics for FishNet and FungiCLEF); ViT-L models evaluate in 6 hours on an A6000 GPU. BioBench provides new signal for computer vision in ecology and a template recipe for building reliable AI-for-science benchmarks in any domain. Code and predictions are available at https://github.com/samuelstevens/biobench and results at https://samuelstevens.me/biobench.
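The abstract describes the benchmark's core recipe: fit a lightweight classifier on features from a frozen backbone, then report class-balanced macro-F1. A minimal sketch of that protocol is below, using synthetic vectors in place of real backbone embeddings and a nearest-centroid probe as the lightweight classifier; all names here are illustrative assumptions, not the actual BioBench API.

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes):
    """Class-balanced macro-F1: unweighted mean of per-class F1 scores."""
    scores = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return float(np.mean(scores))

rng = np.random.default_rng(0)
n_classes, dim = 4, 16
# Stand-ins for frozen-backbone embeddings; in the real benchmark these
# would come from a pretrained vision model applied to task images.
class_means = rng.normal(size=(n_classes, dim)) * 3.0
y_train = rng.integers(0, n_classes, size=400)
X_train = class_means[y_train] + rng.normal(size=(400, dim))
y_test = rng.integers(0, n_classes, size=200)
X_test = class_means[y_test] + rng.normal(size=(200, dim))

# Lightweight probe: assign each test point to the nearest class centroid
# computed in the frozen feature space (no backbone fine-tuning).
centroids = np.stack(
    [X_train[y_train == c].mean(axis=0) for c in range(n_classes)]
)
dists = ((X_test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
y_pred = np.argmin(dists, axis=1)
score = macro_f1(y_test, y_pred, n_classes)
print(f"macro-F1: {score:.3f}")
```

Macro-F1 averages per-class F1 with equal weight, so rare classes count as much as common ones; this is why it is preferred over plain accuracy on the long-tailed class distributions typical of ecology data.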
Similar Papers
I2I-Bench: A Comprehensive Benchmark Suite for Image-to-Image Editing Models
CV and Pattern Recognition
Tests AI image editing better, faster, and more fairly.
UWBench: A Comprehensive Vision-Language Benchmark for Underwater Understanding
CV and Pattern Recognition
Helps computers understand what's underwater.
VisChainBench: A Benchmark for Multi-Turn, Multi-Image Visual Reasoning Beyond Language Priors
CV and Pattern Recognition
Teaches computers to solve problems using many pictures.