In-context Inverse Optimality for Fair Digital Twins: A Preference-Based Approach
By: Daniele Masti, Francesco Basciani, Arianna Fedeli, and more
Potential Business Impact:
Teaches computers to make fair choices like people.
Digital Twins (DTs) are increasingly used as autonomous decision-makers in complex socio-technical systems. Their mathematically optimal decisions often diverge from human expectations, exposing a persistent gap between algorithmic rationality and the bounded rationality of human stakeholders. This work addresses this gap by proposing a framework that operationalizes fairness as a learnable objective within optimization-based Digital Twins. We introduce a preference-driven learning pipeline that infers latent fairness objectives directly from human pairwise preferences over feasible decisions. A novel Siamese neural network is developed to generate convex quadratic cost functions conditioned on contextual information. The resulting surrogate objectives align optimization outcomes with human-perceived fairness while maintaining computational efficiency. The approach is demonstrated on a COVID-19 hospital resource allocation scenario. This study provides an actionable path toward embedding human-centered fairness in the design of autonomous decision-making systems.
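To illustrate how such a preference-driven pipeline could be realized, the sketch below (not the authors' implementation) shows one plausible setup: a context-conditioned network emits a lower-triangular Cholesky factor and a linear term defining a convex quadratic cost J(x; c) = x^T L(c) L(c)^T x + q(c)^T x, and is trained in Siamese fashion with a Bradley-Terry style loss on pairwise preferences. The class and function names, dimensions, and the specific loss are assumptions made for illustration.

```python
# Hedged sketch (not the paper's code): a Siamese preference model that outputs
# a convex quadratic cost conditioned on context. Convexity holds by construction
# because the quadratic term is P(c) = L(c) L(c)^T, which is positive semidefinite.
import torch
import torch.nn as nn

class ContextToQuadratic(nn.Module):
    """Maps a context vector c to the parameters (L, q) of a convex quadratic cost."""
    def __init__(self, ctx_dim: int, x_dim: int, hidden: int = 64):
        super().__init__()
        self.x_dim = x_dim
        self.net = nn.Sequential(
            nn.Linear(ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, x_dim * x_dim + x_dim),  # Cholesky factor + linear term
        )

    def cost(self, ctx: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        """Evaluate J(x; c) for a batch of contexts and candidate decisions."""
        params = self.net(ctx)
        L = torch.tril(params[:, : self.x_dim ** 2].view(-1, self.x_dim, self.x_dim))
        q = params[:, self.x_dim ** 2 :]
        Ltx = torch.bmm(L.transpose(1, 2), x.unsqueeze(-1)).squeeze(-1)
        quad = (Ltx ** 2).sum(dim=1)          # x^T L L^T x  (>= 0)
        lin = (q * x).sum(dim=1)              # q^T x
        return quad + lin

def preference_loss(model, ctx, x_pref, x_other):
    """Siamese pass: the human-preferred decision should receive the lower cost
    (Bradley-Terry style negative log-likelihood)."""
    margin = model.cost(ctx, x_other) - model.cost(ctx, x_pref)
    return nn.functional.softplus(-margin).mean()   # = -log sigmoid(margin)

# Toy usage with synthetic pairwise preferences (all sizes are placeholders).
model = ContextToQuadratic(ctx_dim=8, x_dim=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ctx = torch.randn(32, 8)                     # contextual information
x_pref, x_other = torch.rand(32, 4), torch.rand(32, 4)  # feasible decision pairs
loss = preference_loss(model, ctx, x_pref, x_other)
loss.backward()
opt.step()
```

Once trained, a cost of this form could be handed to an off-the-shelf convex (QP) solver inside the Digital Twin, which is one way the surrogate objective can keep decision-making computationally efficient.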
Similar Papers
Decoupling Bias, Aligning Distributions: Synergistic Fairness Optimization for Deepfake Detection
CV and Pattern Recognition
Makes fake video checkers fair for everyone.
Controllable Pareto Trade-off between Fairness and Accuracy
Machine Learning (CS)
Lets AI be fair and accurate, your way.
AI-based CSI Feedback with Digital Twins: Real-World Validation and Insights
Information Theory
Makes wireless signals better using virtual copies.