Dreaming up scale invariance via inverse renormalization group
By: Adam Rançon, Ulysse Rançon, Tomislav Ivek, and more
Potential Business Impact:
Computers learn to guess hidden details from blurry pictures.
We explore how minimal neural networks can invert the renormalization group (RG) coarse-graining procedure in the two-dimensional Ising model, effectively "dreaming up" microscopic configurations from coarse-grained states. This task, formally impossible at the level of individual configurations, can be approached probabilistically, allowing machine learning models to reconstruct scale-invariant distributions without relying on microscopic input. We demonstrate that even neural networks with as few as three trainable parameters can learn to generate critical configurations, reproducing the scaling behavior of observables such as the magnetic susceptibility, heat capacity, and Binder ratios. A real-space renormalization group analysis of the generated configurations confirms that the models capture not only scale invariance but also the nontrivial eigenvalues of the RG transformation. Surprisingly, we find that increasing network complexity by introducing multiple layers offers no significant benefit. These findings suggest that simple local rules, akin to those generating fractal structures, are sufficient to encode the universality of critical phenomena, opening the door to efficient generative models of statistical ensembles in physics.
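The core idea admits a compact illustration. The sketch below (Python/NumPy) pairs a standard majority-rule block-spin coarse-graining with a probabilistic "inverse" step that upsamples a coarse Ising configuration back to the fine lattice. The three-parameter local rule (weights w[0], w[1] and bias b) and the function names coarse_grain and inverse_step are illustrative assumptions, not the paper's exact architecture:

import numpy as np

rng = np.random.default_rng(0)

def coarse_grain(spins):
    """Majority-rule block-spin RG step: map each 2x2 block of +/-1 spins
    to a single coarse spin, breaking ties at random."""
    L = spins.shape[0]
    blocks = spins.reshape(L // 2, 2, L // 2, 2).sum(axis=(1, 3))
    coarse = np.sign(blocks)
    ties = coarse == 0
    coarse[ties] = rng.choice([-1, 1], size=ties.sum())
    return coarse

def inverse_step(coarse, w, b):
    """Probabilistic inverse-RG ("dreaming up") step: expand each coarse
    spin into a 2x2 fine block. Each fine spin is sampled from a heat-bath
    probability driven by a local linear field built from the parent coarse
    spin and the mean of its four coarse neighbours -- a hypothetical
    three-parameter rule (w[0], w[1], b), not the paper's trained model."""
    Lc = coarse.shape[0]
    # mean of the four nearest coarse neighbours (periodic boundaries)
    nbr = (np.roll(coarse, 1, 0) + np.roll(coarse, -1, 0)
           + np.roll(coarse, 1, 1) + np.roll(coarse, -1, 1)) / 4.0
    field = w[0] * coarse + w[1] * nbr + b
    p_up = 1.0 / (1.0 + np.exp(-2.0 * field))  # P(fine spin = +1)
    # broadcast each coarse site onto its 2x2 block of fine sites
    p_fine = np.repeat(np.repeat(p_up, 2, axis=0), 2, axis=1)
    return np.where(rng.random((2 * Lc, 2 * Lc)) < p_fine, 1, -1)

# Round-trip demo: coarse-grain a random lattice, then "dream" it back up.
spins = rng.choice([-1, 1], size=(32, 32))
coarse = coarse_grain(spins)
dreamed = inverse_step(coarse, w=(1.2, 0.8), b=0.0)
print(dreamed.shape, dreamed.mean())  # (32, 32) and its magnetization

The sketch makes the probabilistic framing concrete: many fine configurations are consistent with a given coarse one, so the inverse step samples from a conditional distribution rather than attempting the impossible deterministic reconstruction, and iterating it generates ever-larger configurations from a small seed.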
Similar Papers
Diffusion-Guided Renormalization of Neural Systems via Tensor Networks
Neurons and Cognition
Helps computers understand brain signals better.
Renormalizable Graph Embeddings For Multi-Scale Network Reconstruction
Physics and Society
Finds hidden connections in secret networks.
Symmetry and Generalisation in Neural Approximations of Renormalisation Transformations
Machine Learning (CS)
Makes computers learn patterns in physics better.