Abstract
Coordinate gradient learning is motivated by the problems of variable selection and of determining variable covariation. In this paper we propose a novel unifying framework for coordinate gradient learning (MGL) from the perspective of multi-task learning. Our approach relies on multi-task kernels to simulate the structure of gradient learning, which has several appealing properties. First, it allows us to introduce a novel algorithm that appropriately captures the inherent structure of coordinate gradient learning. Second, it gives rise to a clear algorithmic process: a computational optimization algorithm that is memory- and time-efficient. Finally, a statistical error analysis ensures convergence of the estimated function and its gradient to the true function and true gradient. We report preliminary experiments validating MGL for variable selection as well as for determining variable covariation.
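To make the variable-selection use case concrete, here is a minimal sketch of the general idea of gradient-based variable selection: fit a smooth estimator, estimate each coordinate's partial derivative, and rank variables by the empirical norm of that derivative. This is not the paper's MGL algorithm (which learns the gradient directly via multi-task kernels); it substitutes an ordinary RBF kernel ridge regressor and finite differences, and all names below (`f_hat`, `grad_norms`, the toy data) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends only on coordinates 0 and 1; 2 and 3 are irrelevant.
n, d = 200, 4
X = rng.uniform(-1, 1, size=(n, d))
y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(n)

# A stand-in estimator: kernel ridge regression with a scalar RBF kernel.
sigma, lam = 0.5, 1e-3

def rbf(A, B):
    """Gaussian kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

K = rbf(X, X)
alpha = np.linalg.solve(K + lam * n * np.eye(n), y)

def f_hat(Z):
    """Evaluate the fitted function at rows of Z."""
    return rbf(Z, X) @ alpha

# Estimate coordinate gradients by central finite differences and rank
# variables by the empirical L2 norm of each estimated partial derivative.
eps = 1e-4
grad_norms = np.zeros(d)
for j in range(d):
    E = np.zeros(d)
    E[j] = eps
    gj = (f_hat(X + E) - f_hat(X - E)) / (2 * eps)
    grad_norms[j] = np.sqrt(np.mean(gj ** 2))

ranking = np.argsort(-grad_norms)
print("gradient norms:", np.round(grad_norms, 3))
print("variables ranked by relevance:", ranking)
```

On this toy problem the relevant coordinates 0 and 1 receive much larger gradient norms than the noise coordinates, which is the signal MGL exploits; learning the gradient jointly with a multi-task kernel, as the paper proposes, avoids the finite-difference step and additionally exposes variable covariation through the gradient's cross-terms.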
Translated title of the contribution | Learning coordinate gradients with multi-task kernels
---|---
Original language | English
Title of host publication | COLT 2008
Pages | 217 - 228
Number of pages | 11
Publication status | Published - 2008
Bibliographical note
Name and Venue of Event: COLT
Conference Proceedings/Title of Journal: Proceedings: Computational Learning Theory (COLT)