Surrogate Interpretable Graph for Random Decision Forests
By: Akshat Dubey, Aleksandar Anžel, Georges Hattab
Potential Business Impact:
Makes the reasoning behind computer-generated health predictions easier to understand.
The field of health informatics has been profoundly influenced by the development of random forest models, which have driven significant advances in modeling feature interactions. These models are characterized by their robustness to overfitting and their support for parallelization, making them particularly useful in this domain. However, the growing number of features and estimators in random forests can prevent domain experts from accurately interpreting global feature interactions, thereby compromising trust and regulatory compliance. A method called the surrogate interpretable graph has been developed to address this issue. It uses graphs and mixed-integer linear programming to analyze and visualize feature interactions. This improves interpretability by visualizing feature usage per decision-feature interaction, tabulated in a decision-feature-interaction table, and the most dominant hierarchical decision-feature interactions for predictions. The implementation of a surrogate interpretable graph enhances global interpretability, which is critical for such a high-stakes domain.
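The abstract's core idea of mining decision-feature interactions from a forest and representing them as a graph can be illustrated with a minimal sketch. This is not the paper's method (which additionally uses mixed-integer linear programming to select a surrogate graph); it only shows, under assumed use of scikit-learn and networkx, how pairwise feature co-occurrences along decision paths can be extracted and turned into a weighted interaction graph:

```python
# Hedged sketch: count how often pairs of features are tested together on a
# root-to-leaf decision path across a random forest, then build a weighted
# interaction graph. Illustrative only; the paper's surrogate interpretable
# graph involves an additional MILP-based selection step not shown here.
from collections import Counter
from itertools import combinations

import networkx as nx
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
forest = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)

# Count pairs of features appearing on the same root-to-leaf path.
pair_counts = Counter()
for estimator in forest.estimators_:
    t = estimator.tree_
    stack = [(0, [])]  # (node id, features tested so far on this path)
    while stack:
        node, feats = stack.pop()
        if t.children_left[node] == -1:  # leaf node in sklearn's tree arrays
            for pair in combinations(sorted(set(feats)), 2):
                pair_counts[pair] += 1
        else:
            feats = feats + [int(t.feature[node])]
            stack.append((t.children_left[node], feats))
            stack.append((t.children_right[node], feats))

# Edge weight = how often the two features interact along decision paths.
G = nx.Graph()
for (f1, f2), w in pair_counts.items():
    G.add_edge(f1, f2, weight=w)

f1, f2, data = max(G.edges(data=True), key=lambda e: e[2]["weight"])
print(f"most frequent interaction: features {f1} and {f2} "
      f"({data['weight']} shared paths)")
```

A visualization step (e.g. sizing nodes by feature usage and edges by interaction weight) would then give domain experts a global view of which feature interactions dominate the forest's predictions, in the spirit of the decision-feature-interaction table described above.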
Similar Papers
Interpretable Network-assisted Random Forest+
Machine Learning (Stat)
Shows how computers learn from connected data.
Enhancing interpretability of rule-based classifiers through feature graphs
Machine Learning (CS)
Helps doctors understand patient data for better diagnoses.
Interpretable graph-based models on multimodal biomedical data integration: A technical review and benchmarking
Genomics
Helps doctors understand diseases using patient data.