Contextually Guided Transformers via Low-Rank Adaptation
By: Andrey Zhmoginov, Jihwan Lee, Max Vladymyrov, and more
Potential Business Impact:
Computers learn to adapt themselves to context without needing explicit prompts.
Large Language Models (LLMs) based on Transformers excel at text processing, but their reliance on prompts for specialized behavior introduces computational overhead. We propose a modification to the Transformer architecture that eliminates the need for explicit prompts by learning to encode context into the model's weights. Our Contextually Guided Transformer (CGT) model maintains a contextual summary at each sequence position, allowing it to update the weights on the fly based on the preceding context. This approach enables the model to self-specialize, effectively creating a tailored model for processing information following a given prefix. We demonstrate the effectiveness of our method on synthetic in-context learning tasks and language modeling benchmarks. Furthermore, we introduce techniques for enhancing the interpretability of the learned contextual representations, drawing connections to Variational Autoencoders and promoting smoother, more consistent context encoding. This work offers a novel direction for efficient and adaptable language modeling by integrating context directly into the model's architecture.
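To make the mechanism concrete, the sketch below (in PyTorch) shows one way a per-position contextual summary could drive a low-rank weight update of a linear layer, in the spirit of the abstract. It is not the paper's implementation: the module names, the hypernetwork mapping from context to low-rank factors, and the causal-mean context summary are all illustrative assumptions.

```python
# Minimal sketch (assumed design, not the paper's code): a linear layer whose
# weights receive a context-dependent low-rank update, so each position is
# processed by a model "specialized" to its prefix.
import torch
import torch.nn as nn


class ContextLowRankLinear(nn.Module):
    """Linear layer adapted on the fly by a per-position context vector.

    Effective weight at position t: W + B(ctx_t) @ A(ctx_t), where A and B are
    low-rank factors produced from a summary of the preceding context.
    """

    def __init__(self, d_in: int, d_out: int, ctx_dim: int, rank: int = 4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)            # shared, context-independent weights
        self.rank = rank
        # Hypernetworks mapping the context vector to the low-rank factors.
        self.to_A = nn.Linear(ctx_dim, rank * d_in)   # produces A: (rank, d_in)
        self.to_B = nn.Linear(ctx_dim, d_out * rank)  # produces B: (d_out, rank)

    def forward(self, x: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
        # x:   (batch, seq, d_in)    token representations
        # ctx: (batch, seq, ctx_dim) contextual summary of the preceding tokens
        b, s, _ = x.shape
        A = self.to_A(ctx).view(b, s, self.rank, -1)   # (b, s, r, d_in)
        B = self.to_B(ctx).view(b, s, -1, self.rank)   # (b, s, d_out, r)
        # Low-rank update applied per position: base(x) + B @ (A @ x)
        low_rank = torch.einsum("bsor,bsri,bsi->bso", B, A, x)
        return self.base(x) + low_rank


def causal_context(h: torch.Tensor) -> torch.Tensor:
    """Toy contextual summary: causal (prefix) mean of hidden states, so the
    weight update at position t depends only on tokens up to position t."""
    counts = torch.arange(1, h.size(1) + 1, device=h.device).view(1, -1, 1)
    return h.cumsum(dim=1) / counts


# Usage example with random data.
if __name__ == "__main__":
    layer = ContextLowRankLinear(d_in=16, d_out=16, ctx_dim=16, rank=2)
    h = torch.randn(2, 10, 16)           # hidden states for 2 sequences of length 10
    out = layer(h, causal_context(h))    # context-specialized output, shape (2, 10, 16)
    print(out.shape)
```

In this toy setup the specialization is recomputed at every position from the prefix summary; a prompt no longer needs to be re-read, since its effect is folded into the generated low-rank factors.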
Similar Papers
Contextual Graph Transformer: A Small Language Model for Enhanced Engineering Document Information Extraction
Computation and Language
Helps computers understand hard technical writing.
Moving Beyond Next-Token Prediction: Transformers are Context-Sensitive Language Generators
Computation and Language
Explains how smart computer programs think.
Context Guided Transformer Entropy Modeling for Video Compression
Computer Vision and Pattern Recognition
Makes videos smaller and faster to watch.