Scaling Boosting by Margin-based Inclusion of Features and Relations

S Hoche, S Wrobel

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

3 Citations (Scopus)

Abstract

Boosting is well known to increase the accuracy of propositional and multi-relational classification learners. However, the base learner's efficiency largely determines boosting's efficiency, since the complexity of the underlying learner is amplified by the iterated calls to the learner in the boosting framework. The idea of restricting the learner to smaller feature subsets in order to increase efficiency is widely used. Surprisingly, little attention has been paid so far to exploiting characteristics of boosting itself to include features based on the current learning progress. In this paper, we show that the dynamics inherent to boosting offer ideal means to maximize the efficiency of the learning process. We describe how to utilize the training examples' margins, which are known to be maximized by boosting, to reduce learning times without deteriorating learning quality. We suggest stepwise including features in the learning process in response to a slowdown in the improvement of the margins. Experimental results show that this approach significantly reduces the learning time while maintaining or even improving the predictive accuracy of the underlying fully equipped learner.
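The idea in the abstract can be illustrated with a minimal sketch: an AdaBoost-style loop over weighted decision stumps that starts with a restricted feature subset and adds a held-back feature whenever the improvement of the average normalized margin stalls. All names, thresholds, and the synthetic data below are hypothetical illustrations, not the paper's actual algorithm or experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: labels in {-1, +1}; only features 0 and 3 are informative.
n, d = 200, 6
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.normal(size=n))
y[y == 0] = 1

def best_stump(X, y, w, active):
    """Weighted decision stump restricted to the active feature subset."""
    best = None
    for j in active:
        for thr in np.unique(X[:, j]):
            for sgn in (1, -1):
                pred = sgn * np.where(X[:, j] > thr, 1, -1)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, thr, sgn)
    return best

def predict(stump, X):
    _, j, thr, sgn = stump
    return sgn * np.where(X[:, j] > thr, 1, -1)

# AdaBoost with margin-triggered feature inclusion (hypothetical parameters).
active = [0, 1]                 # start with a small feature subset
pool = list(range(2, d))        # features held back for later inclusion
w = np.full(n, 1.0 / n)         # example weights
F = np.zeros(n)                 # running ensemble score
alpha_sum = 0.0
prev_margin = -np.inf

for t in range(30):
    stump = best_stump(X, y, w, active)
    err = max(stump[0], 1e-12)
    if err >= 0.5:
        break
    alpha = 0.5 * np.log((1 - err) / err)
    pred = predict(stump, X)
    F += alpha * pred
    alpha_sum += alpha
    # Standard AdaBoost reweighting.
    w *= np.exp(-alpha * y * pred)
    w /= w.sum()

    # Normalized margins lie in [-1, 1]; boosting tends to increase them.
    margin = np.mean(y * F / alpha_sum)
    # If margin improvement slows down, grow the active feature set.
    if margin - prev_margin < 0.01 and pool:
        active.append(pool.pop(0))
    prev_margin = margin

accuracy = np.mean(np.sign(F) == y)
```

The learner thus only ever searches the currently active features, which is where the efficiency gain comes from; the margin-slowdown test decides when the restricted feature set has been exhausted and more features are worth paying for.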
Translated title of the contribution: Scaling Boosting by Margin-based Inclusion of Features and Relations
Original language: English
Title of host publication: Unknown
Publisher: Springer
Pages: 148 - 160
Number of pages: 12
ISBN (Print): 3540440364
Publication status: Published - Aug 2002

Bibliographical note

Conference Proceedings/Title of Journal: Proceedings of the 13th European Conference on Machine Learning, LNCS 2430

