ICML 2011 Workshop on Unsupervised and Transfer Learning
Transfer Learning by Kernel Meta-Learning
Fabio Aiolli
University of Padova, Italy
A crucial issue
in machine learning is how to find good representations for data.
Recently, much work has been devoted to kernel learning, that is, the
problem of finding a good kernel matrix for a task. This can be done in a
semi-supervised learning setting by using a large set of unlabeled data
and a (typically small) set of i.i.d. labeled data. Another, even more
challenging, problem is how one could exploit partially labeled data
from a source task to learn good representations for another related but
different target task. This is the main subject of transfer learning. In
this paper, we present a novel approach to transfer learning based on
kernel learning. Specifically, we propose a kernel meta-learning
algorithm which, starting from a basic kernel, tries to learn chains of
kernel transforms that are able to produce good kernel matrices for the
source tasks. The same sequence of transformations can then be applied
to learn the kernel matrix for any target task. We report on the
application of this method to the five datasets of the Unsupervised and
Transfer Learning (UTL) challenge benchmark. Notably, this technique
allowed us to win the first phase of the competition.
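The core idea of the abstract can be sketched in code: greedily build a chain of kernel transforms that improves a quality measure of the kernel on a labeled source task, then replay the exact same chain on the target task's base kernel. The particular transforms (centering, normalization, polynomial) and the use of kernel-target alignment as the quality criterion are illustrative assumptions for this sketch, not necessarily the paper's exact choices.

```python
import numpy as np

def center(K):
    # Center the kernel matrix in feature space.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def normalize(K):
    # Rescale so every point has unit norm in feature space.
    d = np.sqrt(np.clip(np.diag(K), 1e-12, None))
    return K / np.outer(d, d)

def poly2(K):
    # Entrywise degree-2 polynomial transform.
    return (K + 1.0) ** 2

# Illustrative pool of candidate transforms (an assumption of this sketch).
TRANSFORMS = {"center": center, "normalize": normalize, "poly2": poly2}

def alignment(K, y):
    # Kernel-target alignment between K and the ideal kernel y y^T.
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

def learn_chain(K, y, max_len=3):
    # Greedily append the transform that most improves alignment;
    # stop when no candidate improves the current kernel.
    chain, best = [], alignment(K, y)
    for _ in range(max_len):
        cands = {name: f(K) for name, f in TRANSFORMS.items()}
        name, Kc = max(cands.items(), key=lambda kv: alignment(kv[1], y))
        score = alignment(Kc, y)
        if score <= best:
            break
        chain.append(name)
        K, best = Kc, score
    return chain

def apply_chain(K, chain):
    # Replay a learned chain of transforms on a (target-task) kernel.
    for name in chain:
        K = TRANSFORMS[name](K)
    return K
```

In use, one would compute a base kernel (e.g. linear, `K = X @ X.T`) on the labeled source data, call `learn_chain(K, y)`, and then apply the resulting chain to the base kernel of the unlabeled target task before training.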