Generalization Beyond Benchmarks: Evaluating Learnable Protein-Ligand Scoring Functions on Unseen Targets
By: Jakub Kopko, David Graber, Saltuk Mustafa Eyrilmez, and more
As machine learning becomes increasingly central to molecular design, it is vital to ensure the reliability of learnable protein-ligand scoring functions on novel protein targets. While many scoring functions perform well on standard benchmarks, their ability to generalize beyond training data remains a significant challenge. In this work, we evaluate the generalization capability of state-of-the-art scoring functions on dataset splits that simulate evaluation on targets with a limited number of known structures and experimental affinity measurements. Our analysis reveals that the commonly used benchmarks do not reflect the true challenge of generalizing to novel targets. We also investigate whether large-scale self-supervised pretraining can bridge this generalization gap and we provide preliminary evidence of its potential. Furthermore, we probe the efficacy of simple methods that leverage limited test-target data to improve scoring function performance. Our findings underscore the need for more rigorous evaluation protocols and offer practical guidance for designing scoring functions with predictive power extending to novel protein targets.
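The target-held-out evaluation the abstract describes can be illustrated with a small sketch. This is a minimal, hypothetical example (the tuple layout, function name, and parameters are illustrative, not the paper's actual pipeline): complexes are partitioned by protein target identity, so every test-set target is entirely absent from training, unlike random splits that leak near-identical protein-ligand pairs across the partition.

```python
import random

def split_by_target(complexes, test_fraction=0.2, seed=0):
    """Split (target_id, ligand_id, affinity) tuples so that no target
    appearing in the test set is ever seen during training."""
    targets = sorted({t for t, _, _ in complexes})
    rng = random.Random(seed)
    rng.shuffle(targets)
    n_test = max(1, int(len(targets) * test_fraction))
    test_targets = set(targets[:n_test])
    train = [c for c in complexes if c[0] not in test_targets]
    test = [c for c in complexes if c[0] in test_targets]
    return train, test

# Toy data: four hypothetical targets with two ligands each.
data = [(f"T{i}", f"L{i}{j}", 6.0 + i) for i in range(4) for j in range(2)]
train, test = split_by_target(data, test_fraction=0.25)
# The train and test sets share no targets, by construction.
assert {t for t, _, _ in train}.isdisjoint({t for t, _, _ in test})
```

Standard random splits instead shuffle at the complex level, which places other ligands of the same target in both partitions; the gap between the two protocols is one way to quantify how benchmark performance overstates generalization to novel targets.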