Pierre Baldi, UC Irvine, California, USA. Autoencoders, Unsupervised Learning, and Deep Architectures. Autoencoders play a fundamental role in unsupervised learning and in deep architectures for transfer learning and other tasks. In spite of their fundamental role, only linear autoencoders over the real numbers have been solved analytically. Here we present a general mathematical framework for the study of both linear and non-linear autoencoders. The framework allows one to derive an analytical treatment for the most non-linear autoencoder, the Boolean autoencoder, and to consider other classes of linear autoencoders over different fields. [More...]
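The closed-form result mentioned in the abstract can be illustrated with a small sketch (not code from the talk): for a linear autoencoder trained on squared reconstruction error, the optimum is known to span the top principal subspace of the data, so the best rank-k reconstruction and its error can be computed directly from the SVD.

```python
import numpy as np

# Illustrative sketch: a linear autoencoder x -> W2 @ (W1 @ x) trained to
# minimize squared reconstruction error has its optimum in the top
# principal subspace, recovered here in closed form via the SVD.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # 200 samples, 10 features
X -= X.mean(axis=0)                     # center the data

k = 3                                   # hidden (bottleneck) size
U, S, Vt = np.linalg.svd(X, full_matrices=False)
W1 = Vt[:k]                             # encoder: project onto top-k directions
W2 = Vt[:k].T                           # decoder: map back to input space

X_hat = X @ W1.T @ W2.T                 # reconstruction through the bottleneck
err = np.mean((X - X_hat) ** 2)

# The reconstruction error is exactly the energy in the discarded
# singular values, averaged over all entries of X.
expected = np.sum(S[k:] ** 2) / X.size
assert np.isclose(err, expected)
```

The sketch covers only the real-valued linear case; the Boolean autoencoder treated in the talk replaces this linear algebra with combinatorial analysis.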
Yoshua Bengio, Universite de Montreal, Canada. Deep Learning of Representations for Unsupervised and Transfer Learning. Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher level learned features defined in terms of lower level features. The objective is to make these higher-level representations more abstract, with their individual features more invariant to most of the variations that are typically present in the training distribution, while preserving as much as possible of the information in the input. [More...]
Joachim Buhmann, ETH, Zurich, Switzerland. Information Theoretic Model Validation by Approximate Optimization. Model selection in pattern recognition requires (i) to specify a suitable cost function for the data interpretation and (ii) to control the degrees of freedom depending on the noise level in the data. We advocate an information theoretic perspective where the uncertainty in the measurements quantizes the solution space of the underlying optimization problem, thereby adaptively regularizing the cost function. The optimal tradeoff between "informativeness" and "robustness" is quantified by the approximation capacity of the selected cost function. [More...]
Alexandru Niculescu-Mizil, NEC Labs, New Jersey, USA. Inductive Transfer Learning for Bayesian Network Structure Learning. While receiving significant attention in the machine learning community, Bayesian Network structure learning remains challenging, especially when training data is scarce. In this talk I show how structure learning performance can be significantly improved through inductive transfer when data is available for multiple related problems. I present a score-and-search algorithm for jointly learning multiple related Bayesian Networks that improves the quality of the learned dependency structures by transferring useful information among the different related problems. [More...]
Andrew Ng, Stanford University, California, USA. Self-taught Learning. Andrew Ng is Associate Professor in the Computer Science Department and the Department of Electrical Engineering. He recently developed with his collaborators new optimization methods for sparse coding, an unsupervised learning algorithm for finding concise, higher-level representations of inputs, which has been successfully applied to self-taught learning, where the goal is to use unlabeled data to help on a supervised learning task, even if the unlabeled data cannot be associated with the labels of the supervised task. Applications include text classification and robotic perception tasks.
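The self-taught setup can be sketched roughly as follows; this is an assumption-laden illustration (with PCA standing in for sparse coding to keep it short), not the talk's method: a representation is learned from unlabeled data and then reused as features for a separate labeled task whose labels never touch the unlabeled set.

```python
import numpy as np

# Hypothetical sketch of self-taught learning: learn a basis from
# unlabeled data, then reuse it as a feature map for a different
# labeled task. PCA stands in for sparse coding purely for brevity.

rng = np.random.default_rng(2)

X_unlab = rng.normal(size=(500, 20))      # unlabeled data, no task labels
X_unlab -= X_unlab.mean(axis=0)
_, _, Vt = np.linalg.svd(X_unlab, full_matrices=False)
basis = Vt[:5]                            # learned "codebook" of 5 directions

X_task = rng.normal(size=(50, 20))        # inputs for the supervised task
features = X_task @ basis.T               # higher-level representation
print(features.shape)                     # (50, 5)
```

Any supervised learner can then be trained on `features`; the point of the sketch is only that the representation comes from data unrelated to the task's labels.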
Gunnar Raetsch, MPI, Germany. Transfer Learning in Computational Biology. Biologists are very good at bringing together the knowledge obtained in experiments on various organisms in order to understand the differences and commonalities of molecular processes in these related organisms. Gunnar Raetsch and his collaborators consider a number of recent domain transfer methods from machine learning and evaluate them on genomic sequence data from model organisms of varying evolutionary distance. They find that in cases where the organisms are not closely related, the use of domain adaptation methods can help improve classification performance. [More...]
Dale Schuurmans, University of Alberta, Canada. Data-Dependent Loss Functions for Focused Generalization and Transfer Learning. We investigate a method for using data-dependent loss functions to focus generalization and transfer learning. The idea is to construct loss functions that encourage more accurate predictions in the densest regions of the output space. In particular, we use the inverse cumulative distribution function (CDF, estimated from the data) over targets to define a transfer that maps linear pre-predictions to nonlinear post-predictions. [More...]
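The pre- to post-prediction mapping can be sketched as follows; this is a hypothetical illustration of the inverse-CDF idea, not the talk's implementation: a linear pre-prediction is squashed to a rank in (0, 1) and mapped through the empirical inverse CDF of the observed targets, so post-predictions concentrate where targets are dense.

```python
import numpy as np

# Hypothetical sketch: map linear pre-predictions through the empirical
# inverse CDF of the targets, so post-predictions land preferentially in
# dense regions of the output space.

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 0.2, 900),   # dense mode near 0
                    rng.normal(5.0, 0.2, 100)])  # sparse mode near 5
y_sorted = np.sort(y)                            # empirical CDF support

def post_predict(pre):
    """Squash a pre-prediction to (0, 1), then apply the inverse CDF."""
    rank = 1.0 / (1.0 + np.exp(-pre))            # sigmoid -> quantile rank
    idx = np.clip((rank * len(y_sorted)).astype(int), 0, len(y_sorted) - 1)
    return y_sorted[idx]

pre = np.array([-2.0, 0.0, 2.0])
print(post_predict(pre))   # all three land in the dense mode near 0
```

Because 90% of the targets sit in the dense mode, a wide range of pre-predictions maps into it, which is the "focused generalization" effect the abstract describes.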
Prasad Tadepalli, Oregon State University, Oregon, USA. Transfer Learning in Sequential Decision Problems: A Hierarchical Bayesian Approach. Prasad Tadepalli and his collaborators are exploring transfer of skills learned in playing different versions of real-time strategy games. This includes transferring domain models, task hierarchies, policies, and value functions across different games that share qualitative dynamics, reward functions, and other features. One of his contributions is to view transfer learning as generalization of knowledge in a richer representation language that includes multiple subdomains as parts of the same superdomain. Significant transfer of knowledge can be achieved this way in real-time strategy games. [More...]
Ruslan Salakhutdinov, MIT, Massachusetts, USA. Transfer Learning with Applications to Multiclass Object Detection. We present a hierarchical Bayesian classification model that is able to transfer higher-level structure, abstracted from object categories that have many training examples, to learning novel visual categories with only a few training examples. Unlike many of the existing object detection and recognition systems that treat different classes as unrelated entities, our model learns both a hierarchy for sharing visual appearance across 200 object categories and hierarchical parameters. [More...]
Qiang Yang, Hong Kong University of Science and Technology. Towards Heterogeneous Transfer Learning. Transfer learning aims to learn new concepts for a learning task by reusing knowledge from related but different domains. Most existing transfer learning tasks have focused on knowledge transfer between domains with the same or similar feature representation spaces. However, the potential of transfer learning should stem from its ability to acquire knowledge from very different feature spaces. We call these transfer learning tasks heterogeneous transfer learning. In this article, we highlight some examples of heterogeneous transfer learning via knowledge transfer between text and images and between domains without any explicit feature mappings. [More...]
utl@clopinet.com.
We are grateful to the DARPA Deep Learning program, the Naval Research Laboratory, and the Pascal2 EU network for their support.