In-Context Learning Enhanced Credibility Transformer
By: Kishan Padayachy, Ronald Richman, Salvatore Scognamiglio, and more
Potential Business Impact:
Helps computers learn better from new examples.
The starting point of our network architecture is the Credibility Transformer, which extends the classical Transformer architecture with a credibility mechanism to improve model learning and predictive performance. This Credibility Transformer learns credibilitized CLS tokens that serve as learned representations of the original input features. In this paper we present a new paradigm that augments this architecture with an in-context learning mechanism, i.e., we enlarge the information set with a context batch consisting of similar instances. This allows the model to enhance the CLS token representations of the instances with additional in-context information and fine-tuning. We empirically verify that this in-context learning improves predictive accuracy by adapting to similar risk patterns. Moreover, this in-context learning also allows the model to generalize to new instances which, e.g., have feature levels in the categorical covariates that were not present when the model was trained; for a relevant example, think of a new vehicle model that a car manufacturer has just developed.
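To make the mechanism concrete, here is a minimal PyTorch sketch of one plausible way to enhance a target instance's CLS token with information from a context batch of similar instances. This is an illustration under stated assumptions, not the authors' implementation: the class name InContextCLSEnhancer, the single global credibility weight, and the use of cross-attention over the context instances' CLS tokens are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class InContextCLSEnhancer(nn.Module):
    """Illustrative sketch (not the paper's code): enhance a credibilitized
    CLS token via cross-attention over a context batch of similar instances,
    then blend the two signals with a learned credibility-style weight."""

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        # Cross-attention: the target's CLS token attends to context CLS tokens.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Learned logit of a credibility factor alpha in (0, 1); a single global
        # parameter here purely for simplicity (an assumption of this sketch).
        self.credibility_logit = nn.Parameter(torch.zeros(1))
        self.norm = nn.LayerNorm(d_model)

    def forward(self, cls_token: torch.Tensor, context_cls: torch.Tensor) -> torch.Tensor:
        # cls_token:   (batch, 1, d_model)      CLS token of the target instance
        # context_cls: (batch, n_ctx, d_model)  CLS tokens of similar instances
        in_context, _ = self.cross_attn(cls_token, context_cls, context_cls)
        alpha = torch.sigmoid(self.credibility_logit)
        # Credibility-style convex combination of own and in-context signal.
        return self.norm(alpha * cls_token + (1.0 - alpha) * in_context)

# Usage: enhance 8 target instances' CLS tokens, each with 32 similar neighbours.
enhancer = InContextCLSEnhancer(d_model=64)
cls = torch.randn(8, 1, 64)
context = torch.randn(8, 32, 64)
enhanced = enhancer(cls, context)  # shape (8, 1, 64)
```

The convex combination alpha * own + (1 - alpha) * in-context mirrors the credibility-weighted blending idea behind the Credibility Transformer; in a full model the credibility factor would plausibly depend on the instance rather than being a single global parameter.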
Similar Papers
The Credibility Transformer
Machine Learning (CS)
Makes computer predictions more accurate and stable.
Heuristic Transformer: Belief Augmented In-Context Reinforcement Learning
Machine Learning (CS)
Teaches robots to learn new tasks faster.
A Framework for Quantifying How Pre-Training and Context Benefit In-Context Learning
Artificial Intelligence
Teaches computers to learn new things from examples.