Understanding In-Context Learning Beyond Transformers: An Investigation of State Space and Hybrid Architectures
By: Shenran Wang, Timothy Tin-Long Tse, Jian Zhu
Potential Business Impact:
Shows how AI models built in different ways learn from examples differently.
We perform in-depth evaluations of in-context learning (ICL) on state-of-the-art transformer, state-space, and hybrid large language models over two categories of knowledge-based ICL tasks. Using a combination of behavioral probing and intervention-based methods, we find that, while LLMs of different architectures can perform similarly on these tasks, their internal mechanisms can differ. We show that function vectors (FVs) responsible for ICL are primarily located in the self-attention and Mamba layers, and speculate that Mamba2 uses a mechanism other than FVs to perform ICL. FVs matter more for ICL involving parametric knowledge retrieval than for contextual knowledge understanding. Our work contributes to a more nuanced understanding of ICL across architectures and task types. Methodologically, our approach also highlights the importance of combining behavioral and mechanistic analyses to investigate LLM capabilities.
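To make the function-vector idea referenced in the abstract concrete, here is a minimal sketch of an FV-style intervention in the general spirit of prior function-vector work: extract an activation direction from few-shot prompts and add it back into the residual stream during a zero-shot pass. The model name, layer index, toy antonym prompts, and module paths are illustrative assumptions, not the paper's actual probing or intervention protocol.

```python
# Illustrative sketch only: a crude "function vector" (FV) style intervention.
# Model name, layer index, prompts, and module paths are assumptions for the demo,
# not the paper's actual probing/intervention setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # assumption: any causal LM whose decoder blocks are reachable
LAYER_IDX = 6         # assumption: a mid-depth block, where FVs are often reported

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

def last_token_state(prompt: str, block: int) -> torch.Tensor:
    """Hidden state of the final token after a given decoder block."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so block i's output sits at i + 1.
    return out.hidden_states[block + 1][0, -1]

# Toy antonym task: few-shot (ICL) prompts vs. their zero-shot counterparts.
icl_prompts  = ["hot -> cold\nbig -> small\nfast ->", "up -> down\nwet -> dry\nold ->"]
zero_prompts = ["fast ->", "old ->"]

# The "function vector": mean activation shift induced by the in-context examples.
fv = (torch.stack([last_token_state(p, LAYER_IDX) for p in icl_prompts]).mean(0)
      - torch.stack([last_token_state(p, LAYER_IDX) for p in zero_prompts]).mean(0))

def add_fv(module, inputs, output):
    """Forward hook: add the FV to the last token's residual stream at this block."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden[:, -1, :] += fv
    return output

# GPT-2 exposes its decoder blocks at transformer.h; other architectures differ.
handle = model.transformer.h[LAYER_IDX].register_forward_hook(add_fv)
try:
    ids = tok("slow ->", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    print(tok.decode(logits[0, -1].argmax().item()))  # ideally nudged toward " fast"
finally:
    handle.remove()
```

Applying the same add-then-measure pattern to the state-space and hybrid models discussed in the paper would require hooking the corresponding Mamba or Mamba2 mixer modules rather than transformer blocks; the sketch only conveys the general shape of an FV intervention.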
Similar Papers
Probing In-Context Learning: Impact of Task Complexity and Model Architecture on Generalization and Efficiency
Machine Learning (CS)
Helps AI learn faster by changing how it thinks.
Illusion or Algorithm? Investigating Memorization, Emergence, and Symbolic Processing in In-Context Learning
Computation and Language
AI learns new things from just a few examples.
What do vision-language models see in the context? Investigating multimodal in-context learning
Machine Learning (CS)
Helps computers understand pictures and words together better.