Idea-Gated Transformers: Enforcing Semantic Coherence via Differentiable Vocabulary Pruning
By: Darshan Fofadiya
Potential Business Impact:
Keeps AI writing focused on the main topic.
Autoregressive large language models (LLMs) trained with Next-Token Prediction (NTP) often suffer from ``Topic Drift,'' in which generation wanders away from the initial prompt because the model relies on local associations rather than global planning \citep{holtzman2019curious}. While scaling model size mitigates this \citep{brown2020language}, the fundamental myopia of the NTP objective remains. In this work, we introduce the Idea-Gated Transformer, a novel architecture that separates semantic planning from syntactic generation. An auxiliary ``Idea Head'' is trained to predict the bag-of-words distribution over a future context window, producing a latent ``Concept Vector'' that actively gates the main vocabulary during generation. We propose a differentiable gating mechanism that suppresses semantically irrelevant tokens, effectively pruning the search space in real time. Experiments on WikiText-103 demonstrate that while the Idea-Gated model achieves validation perplexity comparable to a standard GPT-2 baseline, it exhibits significantly superior Domain Retention. Qualitative and quantitative analysis reveals that the gating mechanism successfully locks generation into specific semantic clusters (e.g., Finance, Science) and resists associative drift, offering a parameter-efficient path toward more controllable language modeling.
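To make the abstract's mechanism concrete, the sketch below shows one plausible PyTorch reading of it: an auxiliary Idea Head predicts a soft bag-of-words "Concept Vector" over the vocabulary, and that vector is folded into the next-token logits as a differentiable gate. The module names, tensor shapes, log-sigmoid gating formula, and the multi-label BCE auxiliary loss are illustrative assumptions, not the paper's exact implementation.

# Minimal sketch of an Idea-Gated decoding head, assuming a standard
# transformer backbone produces the hidden states. Names and formulas are
# hypothetical; they illustrate the gating idea, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IdeaGatedHead(nn.Module):
    """Wraps a standard LM head with an auxiliary 'Idea Head' whose predicted
    bag-of-words distribution gates the vocabulary logits."""

    def __init__(self, d_model: int, vocab_size: int, gate_strength: float = 5.0):
        super().__init__()
        self.lm_head = nn.Linear(d_model, vocab_size)    # standard next-token logits
        self.idea_head = nn.Linear(d_model, vocab_size)  # predicts future bag-of-words
        self.gate_strength = gate_strength               # how strongly to suppress off-topic tokens

    def forward(self, hidden: torch.Tensor):
        # hidden: (batch, seq_len, d_model) final transformer states
        token_logits = self.lm_head(hidden)                  # (B, T, V)
        concept = torch.sigmoid(self.idea_head(hidden))      # soft "Concept Vector" in [0, 1]^V
        # Differentiable gating: down-weight logits of tokens the Idea Head
        # considers irrelevant to the upcoming context window.
        gated_logits = token_logits + self.gate_strength * torch.log(concept + 1e-8)
        return gated_logits, concept

def idea_loss(concept: torch.Tensor, future_ids: torch.Tensor, vocab_size: int):
    """Auxiliary objective (one plausible choice): match the Concept Vector to
    the multi-hot bag-of-words of the next W positions via binary cross-entropy."""
    # future_ids: (B, T, W) token ids appearing in the future window
    target = F.one_hot(future_ids, vocab_size).amax(dim=-2).float()  # (B, T, V) multi-hot
    return F.binary_cross_entropy(concept, target)

Gating through the log of a sigmoid keeps the whole step differentiable, so the Idea Head can be trained jointly with the language-modeling loss, in contrast to a hard vocabulary mask that would zero out tokens and block gradients.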
Similar Papers
On the Geometry of Semantics in Next-token Prediction
Computation and Language
Teaches computers to understand words like humans.
Looking to Learn: Token-wise Dynamic Gating for Low-Resource Vision-Language Modelling
Artificial Intelligence
Helps computers learn to see and understand words.
NNGPT: Rethinking AutoML with Large Language Models
Artificial Intelligence
AI builds better AI, learning from its own mistakes.