The process by which people adjust, enrich, and revise their understanding of word meanings over time -- so-called `Slow Mapping' -- has often been overlooked. While many analyses have focused on in-principle, ideal-learner accounts of word learning in natural language, there is far less work linking such accounts to plausible cognitive mechanisms by which computationally bounded learners might approach these ideals. To address this gap, we propose a process model of online, incremental word-meaning induction. The proposal is inspired by recent work on concept and theory change grounded in a probabilistic language of thought (pLOT). We focus on the problem of fixing the meanings of words from examples of their usage, taking kinship terms as our test domain. We frame word-meaning induction at the computational level as a program-induction problem, and hypothesize that individual learners search the space of possible meanings as evidence arrives via a mutation-based Markov chain Monte Carlo (MCMC) search scheme. We show that this model better describes how participants' generalizations and tentative definitions of alien kinship words shift as evidence arrives, outperforming normative accounts and other baselines.
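At a high level, the kind of search described above can be sketched as a Metropolis-Hastings loop over candidate word meanings scored against observed usage examples. The sketch below is illustrative only: the tiny family tree, the three candidate meanings, and the noise-based likelihood are hypothetical stand-ins, not the paper's actual pLOT grammar, data, or model.

```python
import math
import random

# Hypothetical toy data: child -> parent relations in a small family tree.
PARENT = {"b": "a", "c": "a", "d": "b"}

def siblings(x):
    return {y for y, p in PARENT.items() if y != x and p == PARENT.get(x)}

def children(x):
    return {y for y, p in PARENT.items() if p == x}

# Illustrative hypothesis space: each candidate meaning maps a referent
# to the set of individuals the (alien kinship) word would pick out.
HYPOTHESES = {
    "sibling": siblings,
    "child": children,
    "parent": lambda x: {PARENT[x]} if x in PARENT else set(),
}

def likelihood(h, data, noise=0.05):
    """Log P(data | h): each observed (referent, target) usage example is
    drawn uniformly from h's extension, with a small noise probability."""
    ll = 0.0
    for ref, target in data:
        ext = HYPOTHESES[h](ref)
        p = (1 - noise) / len(ext) if ext and target in ext else noise
        ll += math.log(p)
    return ll

def mh_search(data, steps=2000, seed=0):
    """Mutation-based MCMC: propose an alternative meaning, accept with the
    Metropolis ratio (uniform prior, symmetric proposal); track the best."""
    rng = random.Random(seed)
    names = list(HYPOTHESES)
    cur = rng.choice(names)
    cur_ll = likelihood(cur, data)
    best, best_ll = cur, cur_ll
    for _ in range(steps):
        prop = rng.choice(names)
        prop_ll = likelihood(prop, data)
        if math.log(rng.random()) < prop_ll - cur_ll:
            cur, cur_ll = prop, prop_ll
        if cur_ll > best_ll:
            best, best_ll = cur, cur_ll
    return best

# Usage: evidence consistent with a "child"-like meaning is recovered.
data = [("a", "b"), ("a", "c"), ("b", "d")]
guess = mh_search(data)
```

In a fuller model the proposal step would mutate subtrees of a program drawn from the pLOT grammar rather than pick from a fixed list, but the accept/reject logic is the same.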