Interpreting CFD Surrogates through Sparse Autoencoders
By: Yeping Hu, Shusen Liu
Potential Business Impact:
Shows how computer models understand air flow.
Learning-based surrogate models have become a practical alternative to high-fidelity CFD solvers, but their latent representations remain opaque, which hinders adoption in safety-critical or regulation-bound settings. This work introduces a post-hoc interpretability framework for graph-based surrogate models used in computational fluid dynamics (CFD) by leveraging sparse autoencoders (SAEs). By learning an overcomplete basis in the node embedding space of a pretrained surrogate, the method extracts a dictionary of interpretable latent features. The approach enables the identification of monosemantic concepts aligned with physical phenomena such as vorticity or flow structures, offering a model-agnostic pathway to enhance explainability and trustworthiness in CFD applications.
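The core mechanism described above, training a sparse autoencoder on frozen node embeddings to learn an overcomplete feature dictionary, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embeddings are synthetic stand-ins for activations extracted from a pretrained graph surrogate, and all names, dimensions, and the L1 sparsity penalty are illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-in for node embeddings extracted from a pretrained
# graph-based CFD surrogate (N nodes, D-dimensional embeddings).
rng = np.random.default_rng(0)
N, D, K = 256, 16, 32          # K > D: overcomplete dictionary
X = rng.normal(size=(N, D))

# SAE parameters: encoder (D -> K), decoder (K -> D).
W_e = 0.1 * rng.normal(size=(D, K)); b_e = np.zeros(K)
W_d = 0.1 * rng.normal(size=(K, D)); b_d = np.zeros(D)

lam, lr, steps = 0.01, 0.05, 300   # sparsity weight, learning rate
losses = []
for _ in range(steps):
    # Forward pass: sparse (ReLU) codes, then linear reconstruction.
    Z = X @ W_e + b_e
    H = np.maximum(Z, 0.0)          # sparse feature activations
    X_hat = H @ W_d + b_d
    R = X_hat - X                   # reconstruction residual
    loss = (R ** 2).mean() + lam * np.abs(H).mean()
    losses.append(loss)

    # Manual gradients of (mean squared error + lam * mean L1 penalty).
    dR = 2.0 * R / (N * D)
    gW_d = H.T @ dR
    gb_d = dR.sum(axis=0)
    # Backprop through decoder and ReLU; L1 gradient is lam/(N*K) on
    # active units (H > 0), zero elsewhere.
    dH = dR @ W_d.T + lam * (H > 0) / (N * K)
    dZ = dH * (Z > 0)
    gW_e = X.T @ dZ
    gb_e = dZ.sum(axis=0)

    # Plain gradient descent update.
    W_e -= lr * gW_e; b_e -= lr * gb_e
    W_d -= lr * gW_d; b_d -= lr * gb_d

# Rows of W_d are dictionary atoms; in the paper's setting one would
# inspect which atoms activate on nodes near, e.g., vortical regions.
```

In practice the surrogate stays frozen, the SAE is trained on its activations, and each learned dictionary atom is then probed against known flow quantities (such as vorticity) to test whether it is monosemantic.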
Similar Papers
Interpretable and Steerable Concept Bottleneck Sparse Autoencoders
Machine Learning (CS)
Makes AI understand and control ideas better.
Interpretable Embeddings with Sparse Autoencoders: A Data Analysis Toolkit
Artificial Intelligence
Finds hidden ideas in text data.
Probing the Representational Power of Sparse Autoencoders in Vision Models
CV and Pattern Recognition
Makes AI understand pictures better and create new ones.