Online, GA based Mixture of Experts: a Probabilistic Model of UCS

Narayanan Edakunni, Gavin Brown, Tim Kovacs

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

3 Citations (Scopus)

Abstract

In recent years there have been efforts to develop a probabilistic framework to explain the workings of a Learning Classifier System. This direction of research has met with limited success due to the intractability of the complicated heuristic training rules used by learning classifier systems. In this paper, we derive a learning classifier system from a mixture of experts that is similar to the sUpervised Classifier System (UCS) in its training and prediction routines. We start by framing the learning model as a mixture of experts that uses an Expectation Maximisation (EM) procedure to learn its parameters. The batch updates of the EM are then converted into online updates and finally into a GA based sampled online update, yielding a classifier system similar to UCS. We then show the effectiveness of such a system compared with UCS through a series of comparative studies on test datasets.
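The sketch below is a minimal, generic illustration of the batch-EM to online-EM conversion mentioned in the abstract, using a simple 1-D Gaussian mixture rather than the paper's UCS-style classifier-structured experts, and without the GA based sampling step. All function names, the step-size schedule, and the toy data are illustrative assumptions, not the authors' implementation.

```python
# Generic sketch: batch EM for a mixture model vs. a per-sample (online) update.
# This is NOT the paper's algorithm; it only illustrates the batch -> online idea.
import numpy as np

def e_step(x, pi, mu, var):
    """Responsibilities r[n, k] = p(component k | x[n]) for a 1-D Gaussian mixture."""
    densities = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    weighted = pi * densities
    return weighted / weighted.sum(axis=1, keepdims=True)

def batch_m_step(x, r):
    """Closed-form batch M-step computed from the full responsibility matrix."""
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

def online_update(xn, pi, mu, var, lr):
    """Online counterpart: nudge the parameters toward the responsibility-weighted
    statistics of a single observation, with step size lr."""
    r = e_step(np.array([xn]), pi, mu, var)[0]
    pi = (1 - lr) * pi + lr * r
    mu = mu + lr * r * (xn - mu)
    var = var + lr * r * ((xn - mu) ** 2 - var)
    return pi / pi.sum(), mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])

pi, mu, var = np.ones(2) / 2, np.array([-1.0, 1.0]), np.ones(2)
for _ in range(20):                                   # batch EM
    pi, mu, var = batch_m_step(x, e_step(x, pi, mu, var))

pi_o, mu_o, var_o = np.ones(2) / 2, np.array([-1.0, 1.0]), np.ones(2)
for t, xn in enumerate(rng.permutation(x)):           # online EM on the same data
    pi_o, mu_o, var_o = online_update(xn, pi_o, mu_o, var_o, lr=1.0 / (t + 10))

print("batch means :", mu.round(2), " online means:", mu_o.round(2))
```

Both passes should recover component means near -2 and 3; the online version replaces the full-data M-step with a decaying-step stochastic update, which is the kind of conversion the abstract describes before the further step to GA based sampling.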
Translated title of the contribution: Online, GA based Mixture of Experts: a Probabilistic Model of UCS
Original language: English
Title of host publication: The 2011 Conference on Genetic and Evolutionary Computation (GECCO)
Editors: Natalio Krasnogor, Pier Luca Lanzi
Publisher: Association for Computing Machinery (ACM)
Pages: 1267-1274
ISBN (Print): 9781450305570
Publication status: Published - 2011

Bibliographical note

Conference Proceedings/Title of Journal: Proceedings of the 2011 Conference on Genetic and Evolutionary Computation
Conference Organiser: Natalio Krasnogor and Pier Luca Lanzi
Other identifier: 2001385
