To tackle a hard problem, it is often wise to reuse, recombine, or repurpose existing knowledge. Such an ability to bootstrap enables us to grow rich mental concepts despite our limited cognitive resources. However, the mechanisms underpinning this ability, along with its strengths and limitations, have yet to be fully explicated. Here we provide a formal and computational characterization of bootstrap learning of complex concepts, using a dynamic conceptual repertoire that is enriched over time, allowing the model to cache and later reuse elements of earlier insights. This model predicts systematically different learned concepts when the same evidence is processed in different orders, without any additional assumptions about prior beliefs or background knowledge. Across four behavioral experiments, we found strong curriculum-order and conceptual garden-pathing effects, demonstrating that people's inductive concept inferences closely resemble our model's and differ from those of alternative accounts. Our model explains why information selection alone is not enough to teach complex concepts, and offers a computational account of how past experiences shape future conceptual discoveries.