ICML 2011 workshop on unsupervised
and transfer learning
Towards Heterogeneous Transfer Learning
Qiang Yang
Hong Kong University of Science and Technology, Clearwater Bay,
Kowloon, Hong Kong
and
Guirong Xue and Yong Yu
Shanghai Jiao Tong University, Shanghai, China
Transfer
learning aims to learn new concepts for a learning task by reusing
knowledge from related but different domains. Most existing transfer
learning methods have focused on knowledge transfer between domains with
the same or similar feature representation spaces. However, the
potential of transfer learning should stem from its ability to acquire
knowledge from very different feature spaces. We call these transfer
learning tasks heterogeneous transfer learning. In this article, we
highlight some examples of heterogeneous transfer learning via
knowledge transfer between text and images, and between domains without
any explicit feature mappings. In image learning problems, such as
image classification and clustering, relatively few labeled data are
available to train a high-quality model. Our idea is to exploit the
similarity of the domains at a deeper level to link them together, in
order to transfer knowledge from textual domains. For example, we ask:
is it possible to supplement an image classifier with a large quantity
of unlabeled text to improve its performance? When two domains do not
have any explicit feature mappings, is it still possible to find a
mapping between them? We review three of our recent works that answer
these questions, and show that by carefully crafting a translator from
resources such as Flickr, it is possible to transfer knowledge from
text domains to image problems. We also illustrate how transfer
learning can be done between two very different domains even when no
translator can be found between them, by identifying and maximizing the
commonalities among the structures of the different domains.
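The translator idea can be illustrated with a minimal co-occurrence sketch: tagged images (as found on Flickr) link text words to image features, so a concept expressed over text can be mapped into image-feature space. The data, feature names, and the `translate` function below are illustrative assumptions, not the authors' implementation.

```python
# Toy sketch of a text-to-image "translator" built from annotated images.
# Assumption: each annotated image is a bag of visual-word counts plus tags.
from collections import defaultdict

# Hypothetical annotated data (illustrative only).
annotated = [
    ({"vw1": 3, "vw2": 1}, ["dog", "grass"]),
    ({"vw2": 2, "vw3": 4}, ["cat", "sofa"]),
    ({"vw1": 1, "vw3": 1}, ["dog", "sofa"]),
]

# Build a tag -> visual-word co-occurrence table: the translator.
cooc = defaultdict(lambda: defaultdict(float))
for visual_words, tags in annotated:
    for tag in tags:
        for vw, count in visual_words.items():
            cooc[tag][vw] += count

def translate(text_weights):
    """Map a text representation (tag -> weight) into image-feature space."""
    image_vec = defaultdict(float)
    for tag, w in text_weights.items():
        for vw, c in cooc.get(tag, {}).items():
            image_vec[vw] += w * c
    return dict(image_vec)

# Knowledge about "dog" expressed over text becomes a pseudo
# image-feature vector usable by an image learner.
print(translate({"dog": 1.0}))
```

In practice the co-occurrence counts would be normalized and smoothed, and the translated vectors would supplement, not replace, the few labeled images.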