Concept Generalization in Humans and Large Language Models: Insights from the Number Game
By: Arghavan Bazigaran, Hansem Sohn
We compare human and large language model (LLM) generalization in the number game, a concept inference task. Using a Bayesian model as an analytical framework, we examine the inductive biases and inference strategies of humans and LLMs. The Bayesian model captured human behavior better than it captured LLM behavior: humans flexibly infer both rule-based and similarity-based concepts, whereas LLMs rely more heavily on mathematical rules. Humans also demonstrated few-shot generalization, even from a single example, while LLMs required more examples to generalize. These contrasts highlight fundamental differences in how humans and LLMs infer and generalize mathematical concepts.
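In the number game, a learner sees a few positive examples (e.g., 16, 8, 2, 64) and judges which other numbers belong to the hidden concept. Below is a minimal Python sketch of the kind of Bayesian model typically used for this task, in the spirit of Tenenbaum's original analysis: the hypothesis space, the uniform prior, and the size-principle likelihood are illustrative assumptions, not the authors' exact specification.

```python
# Minimal Bayesian number-game sketch. The hypothesis space and prior
# below are illustrative assumptions, not the paper's exact model.
import numpy as np

N = 100  # domain is the integers 1..100, as in the classic number game

def make_hypotheses():
    """Mix of rule-based and similarity-based (interval) hypotheses."""
    hyps = {}
    # Rule-based: multiples of k
    for k in range(2, 11):
        hyps[f"multiples_of_{k}"] = {n for n in range(1, N + 1) if n % k == 0}
    # Rule-based: powers of k
    for k in (2, 3, 4, 5):
        s, v = set(), k
        while v <= N:
            s.add(v)
            v *= k
        hyps[f"powers_of_{k}"] = s
    # Similarity-based: contiguous intervals [a, b]
    for a in range(1, N + 1, 5):
        for width in (5, 10, 20):
            hyps[f"interval_{a}_{min(a + width - 1, N)}"] = set(
                range(a, min(a + width - 1, N) + 1))
    return hyps

def posterior(data, hyps):
    """p(h | data) with a uniform prior and size-principle
    likelihood p(data | h) = (1 / |h|)^n for consistent h."""
    names = list(hyps)
    post = np.zeros(len(names))
    for i, h in enumerate(names):
        ext = hyps[h]
        if all(x in ext for x in data):  # h must contain every example
            post[i] = (1.0 / len(ext)) ** len(data)
    post /= post.sum()
    return dict(zip(names, post))

def p_in_concept(y, data, hyps):
    """Generalization: p(y in concept | data) = sum over h of
    p(h | data) for every hypothesis h whose extension contains y."""
    return sum(p for h, p in posterior(data, hyps).items() if y in hyps[h])

hyps = make_hypotheses()
# One example (16): many hypotheses survive, so generalization is graded.
print(round(p_in_concept(32, [16], hyps), 3))
# Four examples (16, 8, 2, 64): the size principle concentrates the
# posterior on "powers of 2", so p(32 in concept) approaches 1.
print(round(p_in_concept(32, [16, 8, 2, 64], hyps), 3))
```

The size-principle likelihood is what produces the paper's central contrast: with one example the posterior stays spread over rules and intervals alike, while a few more examples rapidly favor the smallest consistent rule, mirroring the graded-then-sharp generalization pattern reported for humans.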
Similar Papers
Human-like conceptual representations emerge from language prediction
Computation and Language
Shows that human-like conceptual representations can emerge from language prediction alone.
Evidence of conceptual mastery in the application of rules by Large Language Models
Artificial Intelligence
Presents evidence that LLMs apply rules with genuine conceptual mastery.
Do You Get the Hint? Benchmarking LLMs on the Board Game Concept
Computation and Language
Benchmarks LLMs on Concept, a board game of guessing ideas from hints.