Bootstrapping compositional generalization with cache-and-reuse

Abstract

People effectively reuse previously learned concepts to construct more complex ones, and this reuse can lead to systematically different beliefs when the same evidence is processed in different orders. We model these phenomena with a novel Bayesian concept learning framework that incorporates adaptor grammars to enable a dynamic concept library that is enriched over time, so that elements of earlier insights can be cached and later reused in a principled way. Our model accounts for unique curriculum-order and conceptual garden-pathing effects in compositional causal generalization that alternative models fail to capture: while people can successfully acquire a complex causal concept when they have an opportunity to cache a key sub-concept, simply reversing the presentation order of the same learning examples induces dramatic failure and leads people to construct complex, ad hoc concepts. This work explains why information selection alone is not enough to teach complex concepts, and offers a computational account of how past experiences shape future conceptual discoveries.
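
For intuition, below is a minimal, illustrative Python sketch of the cache-and-reuse idea behind adaptor grammars: concepts are expressions sampled from a simple grammar, whole sub-expressions produced earlier can be reused wholesale, and frequently used pieces become progressively more likely to be recycled. The grammar, names, and probabilities here are hypothetical placeholders, not the paper's actual model.

import random

random.seed(1)

PRIMITIVES = ["red", "blue", "small", "large"]


def generate(cache, reuse_weight=1.0, base_weight=1.0, depth=0):
    """Sample a concept expression, preferring previously cached sub-expressions."""
    total_uses = sum(cache.values())
    # Reuse a whole cached sub-expression with probability that grows with
    # how often cached expressions have been used before (rich-get-richer).
    if cache and random.random() < reuse_weight * total_uses / (total_uses + base_weight):
        expr = random.choices(list(cache), weights=list(cache.values()))[0]
    else:
        # Otherwise compose a fresh expression from the base grammar.
        if depth >= 2 or random.random() < 0.5:
            expr = random.choice(PRIMITIVES)
        else:
            op = random.choice(["and", "or"])
            left = generate(cache, reuse_weight, base_weight, depth + 1)
            right = generate(cache, reuse_weight, base_weight, depth + 1)
            expr = f"({left} {op} {right})"
    # Cache whatever was produced so it is available as a building block later.
    cache[expr] = cache.get(expr, 0) + 1
    return expr


concept_cache = {}
for trial in range(10):
    print(trial, generate(concept_cache))

Running the loop shows the caching dynamic: once a compound expression such as "(red and small)" has been generated, it can be drawn again as a single unit, which is the sense in which a key sub-concept acquired early makes a later, more complex concept easy to reach.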

Publication
Proceedings of the Computational Cognitive Neuroscience Society Meeting 2023