Scalable Bayesian Preference Learning for Crowds

Edwin Simpson, Iryna Gurevych

Research output: Contribution to journal › Article (Academic Journal) › peer-review


Abstract

We propose a scalable Bayesian preference learning method for jointly predicting the preferences of individuals as well as the consensus of a crowd from pairwise labels. People's opinions often differ greatly, making it difficult to predict their preferences from small amounts of personal data. Individual biases also make it harder to infer the consensus of a crowd when there are few labels per item. We address these challenges by combining matrix factorisation with Gaussian processes, using a Bayesian approach to account for uncertainty arising from noisy and sparse data. Our method exploits input features, such as text embeddings and user metadata, to predict preferences for new items and users that are not in the training set. As previous solutions based on Gaussian processes do not scale to large numbers of users, items or pairwise labels, we propose a stochastic variational inference approach that limits computational and memory costs. Our experiments on a recommendation task show that our method is competitive with previous approaches despite our scalable inference approximation. We demonstrate the method's scalability on a natural language processing task with thousands of users and items, and show improvements over the state of the art on this task. We make our software publicly available for future work (https://github.com/UKPLab/tacl2018-preference-convincing/tree/crowdGPPL).
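To make the core idea concrete, the following is a deliberately simplified sketch, not the paper's crowdGPPL method: it fits per-user utilities as a low-rank matrix factorisation under a pairwise (Bradley-Terry-style) likelihood by plain maximum-likelihood gradient ascent, omitting the Gaussian process priors over input features, the Bayesian treatment of uncertainty, and the stochastic variational inference that the abstract describes. All sizes and learning-rate values are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_factors = 20, 30, 3

# Ground-truth low-rank utility matrix: U[u, i] = w_u . v_i
W_true = rng.normal(size=(n_users, n_factors))
V_true = rng.normal(size=(n_items, n_factors))
U_true = W_true @ V_true.T

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Simulate noisy pairwise labels: user u prefers item i over item j
# with probability sigmoid(U[u, i] - U[u, j]).
n_pairs = 4000
users = rng.integers(0, n_users, n_pairs)
items_i = rng.integers(0, n_items, n_pairs)
items_j = rng.integers(0, n_items, n_pairs)
y = rng.random(n_pairs) < sigmoid(U_true[users, items_i] - U_true[users, items_j])

# Recover low-rank utilities by maximum likelihood with gradient ascent.
W = rng.normal(scale=0.1, size=(n_users, n_factors))
V = rng.normal(scale=0.1, size=(n_items, n_factors))
lr = 0.003
for _ in range(800):
    diff = np.einsum('nk,nk->n', W[users], V[items_i] - V[items_j])
    err = y - sigmoid(diff)  # d(log-likelihood)/d(diff) for each pair
    gW = np.zeros_like(W)
    gV = np.zeros_like(V)
    np.add.at(gW, users, err[:, None] * (V[items_i] - V[items_j]))
    np.add.at(gV, items_i, err[:, None] * W[users])
    np.add.at(gV, items_j, -err[:, None] * W[users])
    W += lr * gW
    V += lr * gV

# Accuracy of the fitted model on the observed pairs (chance level = 0.5).
diff = np.einsum('nk,nk->n', W[users], V[items_i] - V[items_j])
acc = np.mean((diff > 0) == y)
```

The factorisation shares statistical strength across users, which is what lets sparse personal data still yield individual predictions; the paper replaces the point estimates above with GP-based Bayesian inference so that uncertainty from noisy, sparse labels is modelled and new users/items can be handled via input features.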
Original language: English
Pages (from-to): 689-718
Number of pages: 30
Journal: Machine Learning
Volume: 109
Issue number: 4
DOIs
Publication status: Published - 6 Feb 2020

