Bears, all bears, and some bears: Language Constraints on Language Models' Inductive Inferences
By: Sriram Padmanabhan, Siyuan Song, Kanishka Misra
Language places subtle constraints on how we make inductive inferences. Developmental evidence from Gelman et al. (2002) shows that children (4 years and older) differentiate among generic statements ("Bears are daxable"), universally quantified NPs ("all bears are daxable"), and indefinite plural NPs ("some bears are daxable") when extending novel properties to a specific category member (all > generics > some), suggesting that they represent these types of propositions differently. We test whether these subtle differences arise in general-purpose statistical learners such as Vision Language Models by replicating the original experiment. After tasking the models with a series of precondition tests (robust identification of categories in images and sensitivity to all and some), followed by the original experiment, we find behavioral alignment between models and humans. Post-hoc analyses of their representations reveal that these differences are organized according to inductive constraints rather than surface-form differences.
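As a rough illustration of the paradigm described in the abstract, the sketch below frames the three statement types as prompts to a vision-language model and compares its willingness to extend a novel property to a single pictured category member. The wrapper `query_vlm` and the stimulus path are hypothetical placeholders for whatever model and materials one uses; this is a minimal sketch of the general setup, not the authors' actual implementation.

```python
# Sketch of the generic / "all" / "some" induction paradigm for a VLM.
# `query_vlm(image_path, prompt) -> float` is a hypothetical wrapper that
# returns the model's probability of answering "yes"; swap in your own model.

CONDITIONS = {
    "generic": "Bears are daxable.",
    "all":     "All bears are daxable.",
    "some":    "Some bears are daxable.",
}

QUESTION = "Is this bear daxable? Answer yes or no."

def run_trial(query_vlm, image_path: str) -> dict[str, float]:
    """Return P(yes) for one pictured category member under each framing."""
    results = {}
    for condition, statement in CONDITIONS.items():
        prompt = f"{statement} {QUESTION}"
        results[condition] = query_vlm(image_path, prompt)
    return results

# Human-like ordering reported by Gelman et al. (2002): all > generics > some.
# scores = run_trial(my_vlm, "stimuli/bear_01.png")   # hypothetical usage
# e.g., check scores["all"] >= scores["generic"] >= scores["some"]
```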