Abstract: In an \textit{ideal} setting for Bayesian agents, a perfect description of the rules of the environment (i.e., the objective observation model) is available, allowing them to update their beliefs optimally through the Bayesian posterior. But such an \textit{ideal} setting hardly ever exists in the natural world, so agents must simultaneously reason about \textit{how they should update their beliefs}. This introduces related challenges for several research areas: (1) For Bayesian statistics, this deviation of the subjective model from the true data-generating mechanism is termed \textit{model misspecification} in the literature (for a review, see \cite{nott2023}). (2) For neuroscience, it creates the need to model both agents' belief updates (how they use evidence to revise their beliefs) \textit{and} how their beliefs change over time. The current paper addresses these two challenges by (a) providing a general class of posteriors/belief updates, called \textit{cut-posteriors} of Bayesian networks, that has much greater expressivity, and (b) parameterizing the space of possible posteriors so that meta-learning (i.e., choosing the belief update from this space in a principled manner) becomes possible. For (a), it is noteworthy that any cut-posterior involves only \textit{local}\footnote{local in the sense that all computations involved in evaluating the posterior remain \textit{within a single module}} computation, making it tractable for human or artificial agents. For (b), a Markov chain Monte Carlo algorithm for such meta-learning is sketched; it serves only as an illustration and is by no means the only possible meta-learning procedure over the space of cut-posteriors. Operationally, this work gives a general algorithm that takes in \textit{an arbitrary Bayesian network} and outputs \textit{all possible cut-posteriors in the space}.
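To make the notion concrete, the two-module setting standard in the modular-inference literature (covered in the review cited above) illustrates what a cut-posterior looks like. With parameters $\theta_1$ and $\theta_2$ in two modules, data $Y$ depending only on $\theta_1$, and data $Z$ depending on both, the cut-posterior is
\[
p_{\mathrm{cut}}(\theta_1, \theta_2 \mid Y, Z) \;=\; p(\theta_1 \mid Y)\, p(\theta_2 \mid \theta_1, Z),
\qquad
p(\theta_1 \mid Y) \;\propto\; p(\theta_1)\, p(Y \mid \theta_1),
\]
so feedback from $Z$ into $\theta_1$ is cut and each factor is computable within its own module, which is the locality property noted above. As for the operational algorithm, the abstract does not specify how the space of cut-posteriors is indexed; the following is a minimal sketch under the \textit{assumption} that each cut-posterior corresponds to a subset of network edges across which feedback is severed (all names here are hypothetical, not taken from the paper):
\begin{verbatim}
from itertools import combinations

def enumerate_cut_configurations(parents):
    """Yield every subset of edges of a Bayesian network.

    `parents` maps each node to the list of its parents. Under the
    assumption stated above, each edge subset is one candidate cut
    configuration, i.e., one member of the space of cut-posteriors.
    """
    edges = [(p, child) for child, ps in parents.items() for p in ps]
    for r in range(len(edges) + 1):
        for cut in combinations(edges, r):
            yield frozenset(cut)

# Hypothetical two-module network matching the equation above:
# theta1 -> theta2, theta1 -> Y, (theta1, theta2) -> Z
network = {
    "theta1": [],
    "theta2": ["theta1"],
    "Y": ["theta1"],
    "Z": ["theta1", "theta2"],
}

for cut in enumerate_cut_configurations(network):
    # The empty cut recovers the exact Bayesian posterior;
    # every other subset severs feedback across the cut edges.
    print(sorted(cut))
\end{verbatim}
Note that this enumeration is exponential in the number of edges, which is consistent with the abstract's framing of meta-learning (e.g., via MCMC) as a principled way to \textit{search} this space rather than exhaust it.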