Temporal Preferences in Language Models for Long-Horizon Assistance
By: Ali Mazyaki, Mohammad Naghizadeh, Samaneh Ranjkhah Zonouzaghi, and more
Potential Business Impact:
Prompting can steer AI assistants toward future rewards over immediate ones.
We study whether language models (LMs) exhibit future- versus present-oriented preferences in intertemporal choice and whether those preferences can be systematically manipulated. Using adapted human experimental protocols, we evaluate multiple LMs on time-tradeoff tasks and benchmark them against a sample of human decision makers. We introduce an operational metric, the Manipulability of Time Orientation (MTO), defined as the change in an LM's revealed time preference between future- and present-oriented prompts. In our tests, reasoning-focused models (e.g., DeepSeek-Reasoner and grok-3-mini) choose later options under future-oriented prompts but only partially personalize decisions across identities or geographies. Moreover, models that correctly reason about time orientation internalize a future orientation for themselves as AI decision makers. We discuss design implications for AI assistants that should align with heterogeneous, long-horizon goals and outline a research agenda on personalized contextual calibration and socially aware deployment.
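The abstract defines MTO only as the change in an LM's revealed time preference between future- and present-oriented prompts. The sketch below is a minimal illustration of that idea under one plausible operationalization, where revealed preference is the fraction of trials in which the model picks the later (delayed) option on a time-tradeoff task; the function names, the "later"/"sooner" labels, and this particular choice of preference measure are assumptions, not details taken from the paper.

```python
# Minimal sketch of the MTO idea (assumed operationalization, not the authors' exact one):
# revealed time preference is approximated as the fraction of trials in which
# the model chooses the later (delayed) option on a time-tradeoff task.

def later_choice_rate(choices: list[str]) -> float:
    """Fraction of trials where the model picked the later option."""
    return sum(c == "later" for c in choices) / len(choices)

def mto(choices_future_prompt: list[str], choices_present_prompt: list[str]) -> float:
    """Manipulability of Time Orientation: shift in revealed time preference
    between a future-oriented and a present-oriented prompt framing."""
    return later_choice_rate(choices_future_prompt) - later_choice_rate(choices_present_prompt)

# Example: the same model defers more often under the future-oriented framing.
future_framing = ["later", "later", "later", "sooner"]
present_framing = ["later", "sooner", "sooner", "sooner"]
print(mto(future_framing, present_framing))  # 0.5
```

A larger positive value indicates that prompt framing shifts the model toward later options more strongly, i.e., its time orientation is more manipulable.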
Similar Papers
Temporal Blindness in Multi-Turn LLM Agents: Misaligned Tool Use vs. Human Time Perception
Computation and Language
Helps AI know when to act based on time.
Beyond Mimicry: Preference Coherence in LLMs
Artificial Intelligence
AI doesn't always make smart choices when faced with tough decisions.
Which Way Does Time Flow? A Psychophysics-Grounded Evaluation for Vision-Language Models
Computer Vision and Pattern Recognition
Helps computers understand if videos play forward or backward.