A dimension reduction for extreme types of directed dependence
By: Sebastian Fuchs, Carsten Limbach
Potential Business Impact:
Finds how one thing affects another, even in complex ways.
In recent years, a variety of novel measures of dependence have been introduced that are capable of characterizing diverse types of directed dependence, i.e., diverse ways in which a number of predictor variables $\mathbf{X} = (X_1, \dots, X_p)$, $p \in \mathbb{N}$, may affect a response variable $Y$. This includes perfect dependence of $Y$ on $\mathbf{X}$ and independence between $\mathbf{X}$ and $Y$, but also less well-known concepts such as zero-explainability, stochastic comparability, and complete separation. Certain such measures admit a representation in terms of the Markov product $(Y,Y')$, where $Y'$ is a conditionally independent copy of $Y$ given $\mathbf{X}$. This dimension reduction principle allows these measures to be estimated via the powerful nearest-neighbor-based estimation principle introduced in [4]. To gain deeper insight into the dimension reduction principle, this paper aims at translating the extreme variants of directed dependence, typically formulated in terms of the random vector $(\mathbf{X},Y)$, into properties of the Markov product $(Y,Y')$.
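The dimension reduction idea can be illustrated empirically: pairing each $Y_i$ with $Y_{N(i)}$, where $N(i)$ indexes the nearest neighbor of $\mathbf{X}_i$, yields approximate draws from the Markov product $(Y, Y')$, since nearby predictor values make $Y_{N(i)}$ behave like a conditionally independent copy of $Y_i$ given $\mathbf{X}$. The following is a minimal sketch of this construction (not the paper's or [4]'s exact estimator; the function name and simulated data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def markov_product_pairs(X, Y):
    """Pair each Y_i with Y_{N(i)}, where N(i) is the nearest neighbor
    of X_i.  The pairs (Y_i, Y_{N(i)}) approximate draws from the
    Markov product (Y, Y'), with Y' a conditionally independent copy
    of Y given X."""
    # brute-force pairwise distances in predictor space
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude each point as its own neighbor
    nn = d.argmin(axis=1)
    return Y, Y[nn]

# Perfect dependence: Y is a function of X, so Y' ~ Y and corr(Y, Y') is near 1.
X = rng.uniform(size=(1000, 2))
Y_dep = np.sin(4 * X[:, 0]) + X[:, 1] ** 2
y, y_nn = markov_product_pairs(X, Y_dep)
print(np.corrcoef(y, y_nn)[0, 1])  # close to 1

# Independence: Y is unrelated to X, so Y' is independent of Y and corr is near 0.
Y_ind = rng.normal(size=1000)
y, y_nn = markov_product_pairs(X, Y_ind)
print(np.corrcoef(y, y_nn)[0, 1])  # close to 0
```

The two extreme cases from the abstract thus become statements about the bivariate distribution of $(Y, Y')$ alone, regardless of the dimension $p$ of the predictors.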
Similar Papers
On dimension reduction in conditional dependence models
Methodology
Finds important patterns in messy data.
A new coefficient of separation
Methodology
Measures how much one thing depends on others.
Learning Causal Response Representations through Direct Effect Analysis
Machine Learning (Stat)
Finds what truly causes changes in data.