CrystalFormer-RL: Reinforcement Fine-Tuning for Materials Design
By: Zhendong Cao, Lei Wang
Potential Business Impact:
Finds new materials that combine properties that normally conflict.
Reinforcement fine-tuning has been instrumental in enhancing the instruction-following and reasoning abilities of large language models. In this work, we explore the application of reinforcement fine-tuning to the autoregressive transformer-based materials generative model CrystalFormer (arXiv:2403.15734), using discriminative machine learning models such as interatomic potentials and property prediction models. By optimizing reward signals, such as the energy above the convex hull and material property figures of merit, reinforcement fine-tuning infuses knowledge from discriminative models into the generative model. The resulting model, CrystalFormer-RL, shows enhanced stability in generated crystals and successfully discovers crystals with desirable yet conflicting material properties, such as a substantial dielectric constant and band gap simultaneously. Notably, we observe that reinforcement fine-tuning not only enables property-guided design of novel materials by the generative pre-trained model but also unlocks property-driven material retrieval from the unsupervised pre-training dataset. Leveraging rewards from discriminative models to fine-tune materials generative models opens an exciting gateway to the synergies of the machine learning ecosystem for materials.
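To make the reward-driven fine-tuning idea concrete, here is a minimal, self-contained sketch of a policy-gradient (REINFORCE-style) update on a toy per-position token model, where the reward plays the role of a discriminative signal such as the negative energy above the convex hull. The toy vocabulary, sequence length, placeholder reward, and the specific update rule are illustrative assumptions, not the paper's actual algorithm or code.

```python
# Sketch only: reward-driven fine-tuning of a toy autoregressive token model.
# In CrystalFormer-RL the reward would come from a discriminative model
# (e.g., -E_hull from an interatomic potential, or a property figure of merit).
import numpy as np

rng = np.random.default_rng(0)
VOCAB, SEQ_LEN = 8, 5                 # toy "crystal token" vocabulary and length
logits = np.zeros((SEQ_LEN, VOCAB))   # stand-in for pre-trained generator parameters

def sample_sequence(logits):
    """Sample one token sequence from the per-position categorical policy."""
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    tokens = np.array([rng.choice(VOCAB, p=probs[t]) for t in range(SEQ_LEN)])
    return tokens, probs

def reward(tokens):
    """Placeholder reward; a real run would score the decoded crystal with a
    discriminative model instead of this toy target."""
    return -np.abs(tokens - 3).mean()

LR, BATCH = 0.5, 64
for step in range(200):
    samples = [(*sample_sequence(logits), ) for _ in range(BATCH)]
    samples = [(tok, p, reward(tok)) for tok, p in samples]
    baseline = np.mean([r for _, _, r in samples])     # variance-reduction baseline
    grad = np.zeros_like(logits)
    for tokens, probs, r in samples:
        adv = r - baseline
        for t, a in enumerate(tokens):
            g = -probs[t]                              # grad of log-softmax
            g[a] += 1.0
            grad[t] += adv * g
    logits += LR * grad / BATCH                        # ascend expected reward
```

The design choice illustrated here is simply that the generative model's likelihood is pushed up on samples the discriminative model scores well; the paper's fine-tuning recipe, batch sizes, and reward definitions should be taken from the original work rather than this sketch.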
Similar Papers
Reinforcement Learning Fine-Tuning Enhances Activation Intensity and Diversity in the Internal Circuitry of LLMs
Machine Learning (CS)
Makes AI smarter by changing how it thinks.
Generative models for crystalline materials
Materials Science
Creates new materials with computers for better technology.
A Mathematical Framework for Custom Reward Functions in Job Application Evaluation using Reinforcement Learning
Machine Learning (CS)
Helps hiring software find better job candidates.