Scalable and efficient learning from crowds with Gaussian processes

Research output: Contribution to journal › Article

Standard

Scalable and efficient learning from crowds with Gaussian processes. / Morales-Álvarez, Pablo; Ruiz, Pablo; Santos-Rodríguez, Raúl; Molina, Rafael; Katsaggelos, Aggelos K.

In: Information Fusion, Vol. 52, 01.12.2019, p. 110-127.

Research output: Contribution to journal › Article

Harvard

Morales-Álvarez, P, Ruiz, P, Santos-Rodríguez, R, Molina, R & Katsaggelos, AK 2019, 'Scalable and efficient learning from crowds with Gaussian processes', Information Fusion, vol. 52, pp. 110-127. https://doi.org/10.1016/j.inffus.2018.12.008

APA

Morales-Álvarez, P., Ruiz, P., Santos-Rodríguez, R., Molina, R., & Katsaggelos, A. K. (2019). Scalable and efficient learning from crowds with Gaussian processes. Information Fusion, 52, 110-127. https://doi.org/10.1016/j.inffus.2018.12.008

Vancouver

Morales-Álvarez P, Ruiz P, Santos-Rodríguez R, Molina R, Katsaggelos AK. Scalable and efficient learning from crowds with Gaussian processes. Information Fusion. 2019 Dec 1;52:110-127. https://doi.org/10.1016/j.inffus.2018.12.008

Author

Morales-Álvarez, Pablo ; Ruiz, Pablo ; Santos-Rodríguez, Raúl ; Molina, Rafael ; Katsaggelos, Aggelos K. / Scalable and efficient learning from crowds with Gaussian processes. In: Information Fusion. 2019 ; Vol. 52. pp. 110-127.

Bibtex

@article{83e1ba96241c480a94fea8519b850574,
title = "Scalable and efficient learning from crowds with Gaussian processes",
abstract = "Over the last few years, multiply-annotated data has become a very popular source of information. Online platforms such as Amazon Mechanical Turk have revolutionized the labelling process needed for any classification task, sharing the effort between a number of annotators (instead of the classical single expert). This crowdsourcing approach has introduced new challenging problems, such as handling disagreements on the annotated samples or combining the unknown expertise of the annotators. Probabilistic methods, such as Gaussian Processes (GP), have proven successful to model this new crowdsourcing scenario. However, GPs do not scale up well with the training set size, which makes them prohibitive for medium-to-large datasets (beyond 10K training instances). This constitutes a serious limitation for current real-world applications. In this work, we introduce two scalable and efficient GP-based crowdsourcing methods that allow for processing previously-prohibitive datasets. The first one is an efficient and fast approximation to GP with squared exponential (SE) kernel. The second allows for learning a more flexible kernel at the expense of a heavier training (but still scalable to large datasets). Since the latter is not a GP-SE approximation, it can be also considered as a whole new scalable and efficient crowdsourcing method, useful for any dataset size. Both methods use Fourier features and variational inference, can predict the class of new samples, and estimate the expertise of the involved annotators. A complete experimentation compares them with state-of-the-art probabilistic approaches in synthetic and real crowdsourcing datasets of different sizes. They stand out as the best performing approach for large scale problems. Moreover, the second method is competitive with the current state-of-the-art for small datasets.",
keywords = "Bayesian modelling, Classification, Fourier features, Gaussian processes, Scalable crowdsourcing, Variational inference",
author = "Pablo Morales-{\'A}lvarez and Pablo Ruiz and Ra{\'u}l Santos-Rodr{\'i}guez and Rafael Molina and Katsaggelos, {Aggelos K.}",
year = "2019",
month = "12",
day = "1",
doi = "10.1016/j.inffus.2018.12.008",
language = "English",
volume = "52",
pages = "110--127",
journal = "Information Fusion",
issn = "1566-2535",
publisher = "Amsterdam:Elsevier",

}

RIS - suitable for import to EndNote

TY - JOUR

T1 - Scalable and efficient learning from crowds with Gaussian processes

AU - Morales-Álvarez, Pablo

AU - Ruiz, Pablo

AU - Santos-Rodríguez, Raúl

AU - Molina, Rafael

AU - Katsaggelos, Aggelos K.

PY - 2019/12/1

Y1 - 2019/12/1

N2 - Over the last few years, multiply-annotated data has become a very popular source of information. Online platforms such as Amazon Mechanical Turk have revolutionized the labelling process needed for any classification task, sharing the effort among a number of annotators (instead of the classical single expert). This crowdsourcing approach has introduced new challenging problems, such as handling disagreements on the annotated samples or combining the unknown expertise of the annotators. Probabilistic methods, such as Gaussian Processes (GP), have proven successful in modelling this new crowdsourcing scenario. However, GPs do not scale well with the training set size, which makes them prohibitive for medium-to-large datasets (beyond 10K training instances). This constitutes a serious limitation for current real-world applications. In this work, we introduce two scalable and efficient GP-based crowdsourcing methods that allow for processing previously prohibitive datasets. The first is an efficient and fast approximation to a GP with a squared exponential (SE) kernel. The second allows a more flexible kernel to be learned at the expense of heavier training, while remaining scalable to large datasets. Since the latter is not a GP-SE approximation, it can also be considered an entirely new scalable and efficient crowdsourcing method, useful for any dataset size. Both methods use Fourier features and variational inference, can predict the class of new samples, and estimate the expertise of the involved annotators. A comprehensive set of experiments compares them with state-of-the-art probabilistic approaches on synthetic and real crowdsourcing datasets of different sizes. They stand out as the best-performing approach for large-scale problems. Moreover, the second method is competitive with the current state of the art on small datasets.

AB - Over the last few years, multiply-annotated data has become a very popular source of information. Online platforms such as Amazon Mechanical Turk have revolutionized the labelling process needed for any classification task, sharing the effort among a number of annotators (instead of the classical single expert). This crowdsourcing approach has introduced new challenging problems, such as handling disagreements on the annotated samples or combining the unknown expertise of the annotators. Probabilistic methods, such as Gaussian Processes (GP), have proven successful in modelling this new crowdsourcing scenario. However, GPs do not scale well with the training set size, which makes them prohibitive for medium-to-large datasets (beyond 10K training instances). This constitutes a serious limitation for current real-world applications. In this work, we introduce two scalable and efficient GP-based crowdsourcing methods that allow for processing previously prohibitive datasets. The first is an efficient and fast approximation to a GP with a squared exponential (SE) kernel. The second allows a more flexible kernel to be learned at the expense of heavier training, while remaining scalable to large datasets. Since the latter is not a GP-SE approximation, it can also be considered an entirely new scalable and efficient crowdsourcing method, useful for any dataset size. Both methods use Fourier features and variational inference, can predict the class of new samples, and estimate the expertise of the involved annotators. A comprehensive set of experiments compares them with state-of-the-art probabilistic approaches on synthetic and real crowdsourcing datasets of different sizes. They stand out as the best-performing approach for large-scale problems. Moreover, the second method is competitive with the current state of the art on small datasets.

KW - Bayesian modelling

KW - Classification

KW - Fourier features

KW - Gaussian processes

KW - Scalable crowdsourcing

KW - Variational inference

UR - http://www.scopus.com/inward/record.url?scp=85061003837&partnerID=8YFLogxK

U2 - 10.1016/j.inffus.2018.12.008

DO - 10.1016/j.inffus.2018.12.008

M3 - Article

VL - 52

SP - 110

EP - 127

JO - Information Fusion

JF - Information Fusion

SN - 1566-2535

ER -
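
Note: the abstract above mentions approximating the squared exponential (SE) kernel with Fourier features. As a rough illustration of that general idea only (random Fourier features in the style of Rahimi and Recht, not the authors' implementation; the function name rff_features and the n_features and lengthscale values are arbitrary choices for this example), a minimal Python sketch is given below.

# Minimal sketch of random Fourier features for the squared exponential (SE/RBF)
# kernel -- an illustration of the general technique named in the abstract,
# NOT the method proposed in the paper.
import numpy as np

def rff_features(X, n_features=100, lengthscale=1.0, rng=None):
    """Map X of shape (n, d) to random Fourier features whose inner products
    approximate k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2))."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # The spectral density of the SE kernel is Gaussian with scale 1/lengthscale.
    W = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Usage: the Gram matrix of the features approximates the exact SE kernel.
X = np.random.default_rng(0).normal(size=(5, 3))
Z = rff_features(X, n_features=2000, lengthscale=1.0, rng=1)
approx_K = Z @ Z.T
exact_K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.abs(approx_K - exact_K).max())  # small approximation error

Working in this explicit feature space is what lets kernel methods such as GPs scale, since the cost grows with the number of features rather than with the squared or cubed number of training instances.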