Abstract: Author summary Learning and predicting in everyday life is made difficult by the fact that our world is both uncertain (e.g. will it rain tonight?) and changing (e.g. climate change shakes up weather). When a change occurs, what has been learned must be revised: learning should therefore be flexible. One way to ensure flexibility is to constantly forget the remote past and rely on recent observations. This solution is computationally cheap but effective, and is at the core of many popular learning algorithms. Another possibility is to monitor the occurrence of changes themselves and revise what has been learned accordingly. This solution requires a hierarchical representation, in which some factors, such as changes, modify other aspects of learning. It is computationally more complicated, but it allows more sophisticated inferences. Here, we provide a direct way to test experimentally whether learners use a hierarchical learning strategy. Our results show that humans revise their beliefs, and the confidence they hold in those beliefs, in a way that is only compatible with hierarchical inference. Our results contribute to the characterization of the putative algorithms our brain may use to learn, and of the neural network models that may implement these algorithms.
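The "forgetting" strategy mentioned above can be illustrated with a minimal sketch (not the authors' actual model): a delta rule, i.e. a leaky integrator whose learning rate controls how quickly the remote past is discounted. The function name `delta_rule` and the parameter `alpha` are illustrative choices, not from the source.

```python
def delta_rule(observations, alpha=0.1):
    """Track a running estimate that exponentially discounts old observations.

    A larger `alpha` forgets the past faster, so the estimate adapts more
    quickly after a change, at the cost of being noisier when nothing changes.
    """
    estimate = 0.0
    estimates = []
    for x in observations:
        # Move the estimate a fraction `alpha` of the way toward the new datum.
        estimate += alpha * (x - estimate)
        estimates.append(estimate)
    return estimates

# After an abrupt change in the data, the estimate re-adapts within
# roughly 1/alpha observations, without explicitly detecting the change.
history = delta_rule([0.0] * 50 + [1.0] * 50, alpha=0.2)
```

A hierarchical learner would instead infer when a change occurred and reset or re-weight its beliefs accordingly, which is more demanding but supports the richer confidence judgments discussed in the summary.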