To glean accurate information from social networks, people should distinguish evidence from hearsay. For example, when testimony depends on others' beliefs as much as on first-hand information, there is a danger that such evidence will be inflated or ignored. We compare human inferences with an idealised rational account that corrects for such dependencies by evaluating peers' communications with respect to the underlying communication pathway. We report three multi-player experiments examining the dynamics of both mixed human--artificial and all-human social networks. Our analyses suggest that most human inferences are best described by a naïve learning account that is insensitive to dependencies between network peers. Moreover, we find that simulated learners who assume their peers behave rationally make systematic judgement errors when reasoning about the sources of noisy human communications. In contrast, we propose that human learners succeed collectively through naïve signalling and aggregation that, while less sophisticated, is computationally efficient and surprisingly robust. Overall, our results challenge the idea that everyday social inference is well captured by idealised rational accounts and shed light on the conditions under which collective wisdom can emerge from social interactions.
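The contrast between a naïve learner and one that corrects for dependencies can be illustrated with a toy Bayesian calculation (a minimal sketch: the chain length, the 0.7 signal accuracy, and the relaying scheme are illustrative assumptions, not parameters from the experiments reported here). A single private observation is relayed verbatim along a chain of peers; a naïve learner counts every relayed report as fresh evidence, while a dependency-aware learner traces the reports back to one underlying observation:

```python
import math

def log_odds(signal: int, accuracy: float = 0.7) -> float:
    """Log likelihood ratio contributed by ONE private observation.

    `accuracy` is an assumed probability that a signal matches the
    hidden binary state (illustrative value, not from the paper).
    """
    lr = accuracy / (1 - accuracy)
    return math.log(lr) if signal == 1 else -math.log(lr)

def sigmoid(x: float) -> float:
    return 1 / (1 + math.exp(-x))

# One private observation relayed verbatim by five peers in a chain.
reports = [1, 1, 1, 1, 1]

# Naïve learner: treats each relayed report as independent evidence,
# so the same observation is counted five times.
naive_p = sigmoid(sum(log_odds(r) for r in reports))

# Dependency-aware learner: recognises the communication pathway and
# counts the single underlying observation once.
aware_p = sigmoid(log_odds(reports[0]))

print(f"naïve posterior: {naive_p:.3f}")   # inflated well past 0.7
print(f"aware posterior: {aware_p:.3f}")   # stays at the signal accuracy
```

In this sketch the naïve posterior overshoots the dependency-aware one, which is exactly the evidence-inflation risk the abstract describes when testimony depends on others' beliefs rather than first-hand information.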