Accelerated Learning on Large Scale Screens using Generative Library Models
By: Eli N. Weinstein, Andrei Slabodkin, Mattia G. Gollub, and more
Potential Business Impact:
Finds useful proteins faster by smart testing.
Biological machine learning is often bottlenecked by a lack of data at scale. One promising route to relieving data bottlenecks is through high-throughput screens, which can experimentally test the activity of $10^6-10^{12}$ protein sequences in parallel. In this article, we introduce algorithms to optimize high-throughput screens for data creation and model training. We focus on the large-scale regime, where dataset sizes are limited by the cost of measurement and sequencing. We show that when active sequences are rare, we maximize information gain by collecting only positive examples of active sequences, i.e. $x$ with $y>0$. We can correct for the missing negative examples using a generative model of the library, producing a consistent and efficient estimate of the true $p(y \mid x)$. We demonstrate this approach in simulation and on a large-scale screen of antibodies. Overall, co-design of experiments and inference lets us accelerate learning dramatically.
Similar Papers
Swarms of Large Language Model Agents for Protein Sequence Design with Experimental Validation
Artificial Intelligence
Creates new proteins for medicine and materials.
DeepSeq: High-Throughput Single-Cell RNA Sequencing Data Labeling via Web Search-Augmented Agentic Generative AI Foundation Models
Genomics
Helps computers understand cell data automatically.
Scaling Up Active Testing to Large Language Models
Machine Learning (CS)
Tests big computer brains better with less work.