Collaborative Prediction: Tractable Information Aggregation via Agreement

Published: 23 Sept 2025, Last Modified: 18 Nov 2025 · ACA-NeurIPS2025 Poster · CC BY 4.0
Keywords: Human-AI Collaboration, Information Aggregation, Agreement Protocols, Online Learning
Abstract: We give efficient ``collaboration protocols'' through which two parties, who observe different features about the same instances, can interact to arrive at predictions that are more accurate than either could have obtained on their own. The parties only need to iteratively share and update their own label predictions---without either party ever having to share the actual features that they observe. Our protocols are efficient reductions to the problem of learning on each party's feature space alone, and so can be used even in settings in which each party's feature space is illegible to the other---which arises in models of human/AI interaction and in multi-modal learning. The communication requirements of our protocols are independent of the dimensionality of the data.

In an online adversarial setting we show how to give regret bounds on the predictions that the parties arrive at with respect to a class of benchmark policies defined on the joint feature space of the two parties, despite the fact that neither party has access to this joint feature space. We also give simpler algorithms for the same task in the ``batch'' setting in which we assume that there is a fixed but unknown data distribution. We generalize our protocols to a decision theoretic setting with high dimensional outcome spaces---the parties in this setting do not need to communicate their (high dimensional) predictions about the outcome, but can instead communicate only ``best response actions'' with respect to a known utility function and their predicted outcome distribution.

Our theorems give a computationally and statistically tractable generalization of past work on information aggregation amongst Bayesians who share a common and correct prior, as part of a literature studying ``agreement'' in the style of Aumann's agreement theorem. Our results require no knowledge of (or even the existence of) a prior distribution and are computationally efficient.
Nevertheless, we show how to lift our theorems back to this classical Bayesian setting, and in doing so give new information aggregation theorems for Bayesian agreement. In particular, we give the first distribution-agnostic information aggregation theorems: rather than making assumptions on the prior distribution, they give worst-case accuracy guarantees with respect to restricted classes of functions on the parties' joint feature spaces.
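To make the interaction pattern concrete, here is a minimal toy sketch (not the paper's actual protocol or guarantees) of two parties exchanging only label predictions, never features. The toy model, the variable names, and the linear outcome `y = x_a + x_b` with features uniform on [0, 1] are all illustrative assumptions chosen so that each party's conditional-expectation update has a closed form.

```python
def run_protocol(x_a, x_b, tol=1e-9):
    """Toy agreement-style exchange: party A observes only x_a, party B
    only x_b, and the outcome is y = x_a + x_b (assumed model). Each
    round, a party announces its current prediction of y; the other
    updates on that announcement alone."""
    # Round 1: A's best guess given only x_a; it treats the unseen
    # feature x_b as uniform on [0, 1], so E[x_b] = 0.5.
    p_a = x_a + 0.5
    # Round 2: B inverts A's announcement (in this toy model the
    # announcement pins down x_a exactly) and announces the fully
    # aggregated prediction -- without ever seeing x_a directly.
    inferred_x_a = p_a - 0.5
    p_b = inferred_x_a + x_b
    # Round 3: A likewise recovers x_b from B's announcement and
    # announces the same value: the parties have reached agreement.
    inferred_x_b = p_b - x_a
    p_a = x_a + inferred_x_b
    assert abs(p_a - p_b) < tol  # agreement reached
    return p_b

print(run_protocol(0.3, 0.4))  # -> 0.7 (the true y = x_a + x_b)
```

In this toy case two rounds suffice and agreement coincides with full aggregation; the paper's contribution is that suitably designed exchanges of predictions (or best-response actions) yield accuracy guarantees without a shared prior and without any party seeing the joint feature space.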
Submission Number: 10