Entropy-Guided Multiplicative Updates: KL Projections for Multi-Factor Target Exposures
By: Yimeng Qiu
Potential Business Impact:
Helps portfolio managers tilt a benchmark portfolio to hit target factor exposures while staying as close as possible to the benchmark.
We develop \emph{Entropy-Guided Multiplicative Updates} (EGMU), a convex optimization framework for constructing multi-factor target-exposure portfolios by minimizing Kullback--Leibler (KL) divergence from a benchmark subject to linear factor constraints. Our contributions are theoretical and algorithmic. (\emph{i}) We formalize feasibility and uniqueness: with a strictly positive benchmark and feasible targets in the convex hull of the exposures, the solution is unique and strictly positive. (\emph{ii}) We derive the concave dual program with gradient $t-\mathbb{E}_{w(\theta)}[x]$ and Hessian $-\mathrm{Cov}_{w(\theta)}(x)$, and give precise sensitivity formulas $\partial\theta^*/\partial t=\mathrm{Cov}_{w^*}(x)^{-1}$ and $\partial w^*/\partial t=\mathrm{diag}(w^*)\,(X-\mathbf{1}\mu^\top)\,\mathrm{Cov}_{w^*}(x)^{-1}$. (\emph{iii}) We present two provably convergent solvers: a damped \emph{dual Newton} method with global convergence and local quadratic rate, and a \emph{KL-projection} scheme based on IPF/Bregman--Dykstra for equalities and inequalities. (\emph{iv}) We further \textbf{generalize EGMU} with \emph{elastic targets} (strongly concave dual) and \emph{robust target sets} (support-function dual), and introduce a \emph{path-following ODE} for solution trajectories, all reusing the same dual-moment structure and solved via Newton or proximal-gradient schemes. (\emph{v}) We detail numerically stable and scalable implementations (LogSumExp, covariance regularization, half-space KL-projections). We emphasize theory and reproducible algorithms; empirical benchmarking is left to future work.
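To make the dual-moment structure concrete, here is a minimal Python sketch of the damped dual Newton solver of contribution (\emph{iii}), assuming the standard EGMU primal $\min_w \mathrm{KL}(w\,\|\,b)$ s.t. $X^\top w = t$, $\mathbf{1}^\top w = 1$. The function name egmu_dual_newton, the tolerances, and the backtracking schedule are illustrative choices, not the paper's reference implementation.

import numpy as np

def egmu_dual_newton(b, X, t, tol=1e-10, max_iter=50):
    """Solve min_w KL(w || b) s.t. X^T w = t, 1^T w = 1 via the concave dual.

    b : (n,) strictly positive benchmark weights summing to 1
    X : (n, k) factor-exposure matrix
    t : (k,) target exposures (assumed to lie in the convex hull of the rows of X)
    """
    _, k = X.shape
    theta = np.zeros(k)
    logb = np.log(b)

    def weights(th):
        # w(theta) is proportional to b * exp(X theta); shift by the max for stability
        z = logb + X @ th
        z = z - z.max()
        w = np.exp(z)
        return w / w.sum()

    def dual(th):
        # g(theta) = theta^T t - log sum_i b_i exp(x_i^T theta), via LogSumExp
        z = logb + X @ th
        m = z.max()
        return th @ t - (m + np.log(np.exp(z - m).sum()))

    for _ in range(max_iter):
        w = weights(theta)
        mu = X.T @ w                        # E_w[x]
        grad = t - mu                       # dual gradient
        if np.linalg.norm(grad) < tol:
            break
        Xc = X - mu                         # exposures centered at mu
        H = Xc.T @ (w[:, None] * Xc)        # Cov_w(x) = negated dual Hessian
        H += 1e-12 * np.eye(k)              # mild regularization (illustrative)
        step = np.linalg.solve(H, grad)     # Newton ascent direction

        # Armijo backtracking (damping) for global convergence
        alpha, g0 = 1.0, dual(theta)
        while dual(theta + alpha * step) < g0 + 1e-4 * alpha * (grad @ step) and alpha > 1e-12:
            alpha *= 0.5
        theta = theta + alpha * step
    return weights(theta), theta

For example, with a uniform benchmark over four assets and a single factor, egmu_dual_newton(np.full(4, 0.25), np.array([[0.], [1.], [2.], [3.]]), np.array([2.0])) multiplicatively tilts weight toward the higher-exposure assets until the portfolio exposure equals 2.0. Both the weights and the dual objective are evaluated through a LogSumExp shift, matching the stabilization noted in contribution (\emph{v}).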
Similar Papers
Mirror Descent and Novel Exponentiated Gradient Algorithms Using Trace-Form Entropies and Deformed Logarithms
Machine Learning (CS)
Teaches computers to learn faster and better.
Annealed Ensemble Kalman Inversion for Constrained Nonlinear Model Predictive Control: An ADMM Approach
Optimization and Control
Helps robots learn to move safely and efficiently.