CORE: A Conceptual Reasoning Layer for Large Language Models
By: Vishwas Hegde, Vindhya Shigehalli
Large language models handle single-turn generation well, but multi-turn interactions still require the model to reconstruct user intent and task state from an expanding token history, because internal representations do not persist across turns. This token-first paradigm leads to drift, inconsistent reasoning modes, and growing prompts as conversations deepen. We propose CORE, a concept-first interaction layer that improves multi-turn stability without modifying model weights. CORE combines a small library of universal cognitive operators with a persistent Local Concept: a compact semantic state capturing the task, constraints, preferences, and intermediate results. Each model call receives only this concept state, the user's latest instruction, and the selected operator, eliminating the need to replay the full history. A preliminary prototype simulating CORE's behavior shows an approximately 42% reduction in cumulative prompt tokens, though this number reflects prototype conditions and should not be interpreted as a real-world performance estimate. CORE offers a model-agnostic mechanism that separates conceptual reasoning from language generation, suggesting a scalable direction for more stable multi-turn systems.
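To make the interaction pattern concrete, the sketch below illustrates one possible shape of a CORE turn as described in the abstract: a persistent concept state, a small operator library, and a per-turn prompt built only from the concept state, the latest instruction, and the selected operator. All names here (LocalConcept, OPERATORS, core_turn, call_model) are illustrative assumptions for exposition, not the authors' actual implementation or API.

```python
# Minimal sketch of a CORE-style turn (assumed structure, not the paper's code).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class LocalConcept:
    """Compact semantic state persisted across turns instead of raw history."""
    task: str = ""
    constraints: list[str] = field(default_factory=list)
    preferences: list[str] = field(default_factory=list)
    intermediate_results: list[str] = field(default_factory=list)

    def render(self) -> str:
        # Serialize the concept state into a short prompt fragment.
        return (
            f"Task: {self.task}\n"
            f"Constraints: {'; '.join(self.constraints) or 'none'}\n"
            f"Preferences: {'; '.join(self.preferences) or 'none'}\n"
            f"Intermediate results: {'; '.join(self.intermediate_results) or 'none'}"
        )

# A small library of universal cognitive operators, here just prompt templates.
OPERATORS: dict[str, str] = {
    "refine": "Refine the current result to satisfy the new instruction.",
    "summarize": "Summarize the current state of the task.",
    "plan": "Propose the next steps toward completing the task.",
}

def core_turn(concept: LocalConcept, instruction: str, operator: str,
              call_model: Callable[[str], str]) -> str:
    """One turn: the model sees only the concept state, the latest
    instruction, and the selected operator -- never the full history."""
    prompt = (
        f"{OPERATORS[operator]}\n\n"
        f"{concept.render()}\n\n"
        f"Latest instruction: {instruction}"
    )
    output = call_model(prompt)
    # Fold the result back into the persistent concept state.
    concept.intermediate_results.append(output)
    return output

if __name__ == "__main__":
    concept = LocalConcept(task="Draft a project summary",
                           constraints=["under 200 words"])
    fake_model = lambda prompt: f"[model output for a {len(prompt)}-char prompt]"
    print(core_turn(concept, "Make the tone more formal.", "refine", fake_model))
```

Because each turn's prompt is bounded by the size of the rendered concept state rather than the full conversation, cumulative prompt tokens grow much more slowly than with history replay, which is the effect the prototype measurement above is meant to capture.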