Abstract
A sensible use of classifiers must be based on the
estimated reliability of their predictions. A cautious
classifier would delegate the difficult or uncertain
predictions to other, possibly more specialised,
classifiers. In this paper we analyse and
develop this idea of delegating classifiers in a
systematic way. First, we design a two-step scenario
where a first classifier chooses which examples
to classify and delegates the difficult examples
to train a second classifier. Second, we
present an iterated scenario involving an arbitrary
number of chained classifiers. We compare these
scenarios to classical ensemble methods, such as
bagging and boosting. We show experimentally
that our approach is not far behind these methods
in terms of accuracy, but with several advantages:
(i) improved efficiency, since each classifier
learns from fewer examples than the previous
one; (ii) improved comprehensibility, since each
classification derives from a single classifier; and
(iii) the possibility to simplify the overall multiclassifier
by removing the parts that lead to delegation.
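The two-step scenario can be sketched as follows. This is a minimal illustration, not the paper's implementation: the nearest-centroid base learner, the margin-based confidence score, and the threshold `tau` are all hypothetical choices standing in for whatever base classifier and reliability estimate one prefers.

```python
import numpy as np

class CentroidClassifier:
    """Toy nearest-centroid base learner (a hypothetical stand-in)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict_with_confidence(self, X):
        # With a single class there is no margin; treat every prediction as confident.
        if len(self.classes_) == 1:
            return np.repeat(self.classes_[0], len(X)), np.ones(len(X))
        # Distance from each example to each class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        idx = d.argmin(axis=1)
        # Confidence: relative gap between the two nearest centroids.
        sorted_d = np.sort(d, axis=1)
        conf = (sorted_d[:, 1] - sorted_d[:, 0]) / (sorted_d[:, 1] + 1e-12)
        return self.classes_[idx], conf

def train_delegating_pair(X, y, tau=0.2):
    """Step 1: train clf1 on everything; delegate its low-confidence
    training examples to train clf2 (the second classifier sees fewer examples)."""
    clf1 = CentroidClassifier().fit(X, y)
    _, conf = clf1.predict_with_confidence(X)
    delegated = conf < tau                      # the "difficult" examples
    clf2 = CentroidClassifier().fit(X[delegated], y[delegated]) if delegated.any() else None
    return clf1, clf2

def predict_delegating(clf1, clf2, X, tau=0.2):
    """Each prediction comes from exactly one classifier: clf1 if it is
    confident enough, otherwise the delegated clf2."""
    pred, conf = clf1.predict_with_confidence(X)
    if clf2 is not None:
        mask = conf < tau
        if mask.any():
            pred2, _ = clf2.predict_with_confidence(X[mask])
            pred[mask] = pred2
    return pred
```

Chaining this construction, with each classifier delegating its uncertain examples to the next, gives the iterated scenario with an arbitrary number of classifiers.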
| Translated title of the contribution | Delegating Classifiers |
|---|---|
| Original language | English |
| Title of host publication | Unknown |
| Editors | Russ Greiner, Dale Schuurmans |
| Publisher | Association for Computing Machinery (ACM) |
| ISBN (Print) | 1581138385 |
| Publication status | Published - Jul 2004 |