What's next? The Gesture Recognition Challenge!
http://gesture.chalearn.org

[Photos: ICML dinner and group picture]

Invited speakers
Pierre Baldi, UC Irvine, California, USA. Autoencoders, Unsupervised Learning, and Deep Architectures. Autoencoders play a fundamental role in unsupervised learning and in deep architectures for transfer learning and other tasks. In spite of their fundamental role, only linear autoencoders over the real numbers have been solved analytically. Here we present a general mathematical framework for the study of both linear and non-linear autoencoders. The framework allows one to derive an analytical treatment for the most non-linear autoencoder, the Boolean autoencoder, and to consider other classes of linear autoencoders over different fields. [More...]
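The linear case mentioned above has a well-known closed form: with squared reconstruction error, the optimal k-dimensional linear autoencoder projects the (centered) data onto the span of its top k principal components. A minimal numpy sketch of this classical result (our illustration, not code from the talk):

```python
# Sketch: the optimal linear autoencoder of rank k reconstructs data by
# projecting onto the top-k principal subspace, so its reconstruction
# error equals the energy in the discarded singular values.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 20))  # toy data
Xc = X - X.mean(axis=0)                                             # center

k = 5
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:k].T @ Vt[:k]   # composition of optimal encoder and decoder
X_hat = Xc @ P          # reconstruction by the optimal linear autoencoder

err = np.sum((Xc - X_hat) ** 2)
print(err, np.sum(S[k:] ** 2))  # the two numbers agree
```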
Yoshua Bengio, Université de Montréal, Canada. Deep Learning of Representations for Unsupervised and Transfer Learning. Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features. The objective is to make these higher-level representations more abstract, with their individual features more invariant to most of the variations typically present in the training distribution, while preserving as much as possible of the information in the input. [More...]
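As a toy illustration of the greedy layer-wise idea (our sketch, not code from the tutorial): each layer is an autoencoder trained on the codes produced by the layer below it, so higher-level features are explicit functions of lower-level ones.

```python
# Greedy layer-wise pretraining sketch: train one tied-weight sigmoid
# autoencoder per layer, each on the previous layer's codes.
import numpy as np

def train_autoencoder(X, n_hidden, lr=0.1, epochs=200, seed=0):
    """Train a tied-weight sigmoid autoencoder on X by full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    n_vis = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_vis, n_hidden))
    b, c = np.zeros(n_hidden), np.zeros(n_vis)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W + b)            # codes
        R = sig(H @ W.T + c)          # reconstruction
        dR = (R - X) * R * (1 - R)    # squared-error gradient through sigmoid
        dH = (dR @ W) * H * (1 - H)
        W -= lr * (X.T @ dH + dR.T @ H) / len(X)   # encoder + decoder paths
        b -= lr * dH.mean(axis=0)
        c -= lr * dR.mean(axis=0)
    return W, b

# Greedy stacking: layer 2 is trained on the codes of layer 1.
X = (np.random.default_rng(1).random((256, 64)) > 0.5).astype(float)
W1, b1 = train_autoencoder(X, n_hidden=32)
H1 = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
W2, b2 = train_autoencoder(H1, n_hidden=16)   # higher-level features
```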
Joachim Buhmann, ETH, Zurich, Switzerland. Information Theoretic Model Validation by Approximate Optimization. Model selection in pattern recognition requires (i) specifying a suitable cost function for the data interpretation and (ii) controlling the degrees of freedom depending on the noise level in the data. We advocate an information theoretic perspective where the uncertainty in the measurements quantizes the solution space of the underlying optimization problem, thereby adaptively regularizing the cost function. The optimal tradeoff between "informativeness" and "robustness" is quantified by the approximation capacity of the selected cost function. [More...]
Alexandru Niculescu-Mizil, NEC Labs, New Jersey, USA. Inductive Transfer Learning for Bayesian Network Structure Learning. While receiving significant attention in the machine learning community, Bayesian Network structure learning remains challenging, especially when training data is scarce. In this talk I show how structure learning performance can be significantly improved through inductive transfer when data is available for multiple related problems. I present a score-and-search algorithm for jointly learning multiple related Bayesian Networks that improves the quality of the learned dependency structures by transferring useful information among the different related problems. [More...]
Andrew Ng, Stanford University, California, USA. Self-taught Learning. Andrew Ng is Associate Professor in the Computer Science Department and the Department of Electrical Engineering. With his collaborators, he recently developed new optimization methods for sparse coding, an unsupervised learning algorithm for finding concise, higher-level representations of inputs. Sparse coding has been successfully applied to self-taught learning, where the goal is to use unlabeled data to help on a supervised learning task, even if the unlabeled data cannot be associated with the labels of the supervised task. Applications include text classification and robotic perception tasks.
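Schematically, self-taught learning is a two-stage pipeline: fit an unsupervised feature learner on plentiful unlabeled data, then reuse it to featurize a small labeled set. The sketch below uses scikit-learn's dictionary learner as a stand-in for the sparse-coding optimizers developed in this line of work (our illustration, not the talk's code):

```python
# Self-taught learning sketch: sparse codes learned on unlabeled data
# become features for a supervised task with few labeled examples.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_unlabeled = rng.standard_normal((2000, 50))   # plentiful unlabeled data
X_labeled = rng.standard_normal((40, 50))       # scarce labeled data
y = rng.integers(0, 2, size=40)

# Stage 1 (unsupervised): fit a sparse dictionary on the unlabeled pool.
coder = MiniBatchDictionaryLearning(n_components=100, alpha=1.0, random_state=0)
coder.fit(X_unlabeled)

# Stage 2 (supervised): encode labeled examples and train a classifier.
Z = coder.transform(X_labeled)                  # sparse codes as features
clf = LogisticRegression(max_iter=1000).fit(Z, y)
```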

Gunnar Raetsch, MPI, Germany. Transfer Learning in Computational Biology. Biologists are very good at bringing together the knowledge obtained from experiments on various organisms in order to understand the differences and commonalities of molecular processes in these related organisms. Gunnar Raetsch and his collaborators consider a number of recent domain transfer methods from machine learning and evaluate them on genomic sequence data from model organisms of varying evolutionary distance. They find that in cases where the organisms are not closely related, the use of domain adaptation methods can help improve classification performance. [More...]
Dale Schuurmans, University of Alberta, Canada. Data Dependent Loss Functions for Focused Generalization and Transfer Learning. We investigate a method for using data dependent loss functions to focus generalization and transfer learning. The idea is to construct loss functions that encourage more accurate predictions in the densest regions of the output space. In particular, we use the inverse cumulative distribution function (CDF, estimated from the data) over targets to define a transfer that maps linear pre-predictions to nonlinear post-predictions. [More...]
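One plausible reading of this construction (our sketch, not the authors' code): rank each linear pre-prediction among its peers, then push that rank through the empirical quantile function (inverse CDF) of the observed targets, so post-predictions concentrate where outputs are densest.

```python
# Inverse-CDF transfer sketch: linear pre-predictions are mapped to
# nonlinear post-predictions via the empirical quantiles of the targets.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = np.exp(X @ rng.standard_normal(5) * 0.5)      # skewed targets

w, *_ = np.linalg.lstsq(X, y, rcond=None)         # linear pre-predictor
pre = X @ w

# Rank each pre-prediction, then map the rank through the empirical
# inverse CDF of the observed targets.
ranks = pre.argsort().argsort() / (len(pre) - 1)  # in [0, 1]
post = np.quantile(y, ranks)                      # nonlinear post-predictions
```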
Prasad Tadepalli, Oregon State University, Oregon, USA. Transfer Learning in Sequential Decision Problems: A Hierarchical Bayesian Approach. Prasad Tadepalli and his collaborators are exploring the transfer of skills learned in playing different versions of real-time strategy games. This includes transferring domain models, task hierarchies, policies, and value functions across different games that share qualitative dynamics, reward functions, and other features. One of his contributions is to view transfer learning as generalization of knowledge in a richer representation language that includes multiple subdomains as parts of the same superdomain. Significant transfer of knowledge can be achieved this way in real-time strategy games. [More...]
Ruslan Salakhutdinov, MIT, Massachusetts, USA. Transfer Learning with Applications to Multiclass Object Detection. We present a hierarchical Bayesian classification model that is able to transfer higher-level structure, abstracted from object categories that have many training examples, to learning novel visual categories with only a few training examples. Unlike many of the existing object detection and recognition systems that treat different classes as unrelated entities, our model learns both a hierarchy for sharing visual appearance across 200 object categories and hierarchical parameters. [More...]
Qiang Yang, Hong Kong University of Science and Technology. Towards Heterogeneous Transfer Learning. Transfer learning aims to learn new concepts for a learning task by reusing knowledge from related but different domains. Most existing transfer learning tasks have focused on knowledge transfer between domains with the same or similar feature representation spaces. However, the potential of transfer learning should stem from its ability to acquire knowledge from very different feature spaces. We call these transfer learning tasks heterogeneous transfer learning. In this article, we highlight some examples of heterogeneous transfer learning via knowledge transfer between text and images and between domains without any explicit feature mappings. [More...]

Motivation
Intelligent beings commonly transfer previously learned "knowledge" to new domains, which makes them capable of learning new tasks from very few examples. In contrast, much recent work in machine learning has focused on "brute force" supervised learning from massive amounts of labeled data. While the latter approach makes practical sense when such data are available, it does not apply when the available training data are mostly unlabeled. Further, even when large amounts of labeled data are available, some categories may have far fewer examples than others; for Internet documents and images, for instance, the number of examples per category typically follows a power law. The question is whether we can exploit similar data (labeled with different types of labels, or completely unlabeled) to improve the performance of a learning machine. This workshop addresses a question of fundamental and practical interest in machine learning: the assessment of methods capable of generating data representations that can be reused from task to task. To pave the ground for the workshop, we organized a challenge on unsupervised and transfer learning.

Competition
The unsupervised and transfer learning challenge has just started and will end on April 15, 2011. The results of the challenge will be discussed at the workshop, and we will invite the best entrants to present their work. Further, we intend to launch a second challenge on supervised transfer learning, whose results will be discussed at NIPS 2011.
The workshop is not limited to the competition program we are running; we also encourage researchers to submit papers on the topics of the workshop.

Participation
We invite contributions relevant to unsupervised learning and transfer learning (UTL), including:
-    Algorithms for UTL, in particular addressing
    o    Learning from unlabeled or partially labeled data.
    o    Learning from few examples per class, and transfer learning.
    o    Semi-supervised learning.
    o    Multi-task learning.
    o    Covariate shift.
    o    Deep learning architectures, including convolutional neural networks.
    o    Integrating information from multiple sources.
    o    Learning data representations.
    o    Kernel or similarity measure learning.
-    Applications pertinent to the workshop topic, including:
    o    Text processing (in particular from multiple languages)
    o    Image or video indexing and retrieval
    o    Bioinformatics
    o    Robotics
-    Datasets and benchmarks

The proceedings will be published by JMLR W&CP (LaTeX template and instructions) and as a book in the CiML series of Microtome. Revised papers are due August 30, 2011.


==> Poster instructions:
Prepare posters with a maximum area of 4x4 feet. If you have two posters, you will be able to show them side by side on one board.



Program committee:
David Aha
Yoshua Bengio

Joachim Buhmann
Gideon Dror
Isabelle Guyon
Quoc Le
Vincent Lemaire
Alexandru Niculescu-Mizil
Gregoire Montavon
Atiqur Rahman Ahad
Gerard Rinkus
Gunnar Raetsch
Graham Taylor
Prasad Tadepalli
Dale Schuurmans
Danny Silver

Tentative schedule


July 1st

7:00 pm. Dinner invitation for the invited speakers and the organizers at I Love Sushi on Lake Bellevue. Other participants may join at their own expense (please contact the organizers).

July 2nd, Room Grand A

******* Access the preprints: Login: ICML, Password: UTL11 *******

Morning: Unsupervised Learning
8:30 am. Welcome and introduction.
8:40 am. Tutorial. Deep Learning of Representations for Unsupervised and Transfer Learning. Yoshua Bengio, Université de Montréal, Canada. [preprint][slides]
9:30 am. Presentation of the results of the Unsupervised and Transfer Learning (UTL) Challenge. Isabelle Guyon, Clopinet, California. [preprint][slides][techreport on challenge datasets]
10:00 am. Presentations of the winners of phase 1 of the UTL challenge (Unsupervised Learning).
10:00 am. Transfer Learning by Kernel Meta-Learning. Fabio Aiolli. (First place in phase 1.) Pascal2 best UTL challenge (phase 1) paper award. [preprint], slides:[pptx][pdf]
10:30 am. Stochastic Unsupervised Learning on Unlabeled Data. Chuanren Liu, Jianjun Xie, Hui Xiong, Yong Ge. (Second place in phase 1 and third place in phase 2). Paper presented by Jianjun Xie. [preprint], slides:[ppt][pdf][pptx]
10:50 am. Break.
11:00 am. Autoencoders, Unsupervised Learning, and Deep Architectures. Pierre Baldi, UC Irvine, California. [preprint][slides]
11:20 am. Information Theoretic Model Selection for Pattern Analysis. Joachim Buhmann, Morteza Haghir Chehreghani, Mario Frank, Andreas P. Streich. ETH, Zurich. The presentation will be given by Cheng Soon Ong. [preprint][slides]
11:40 am. Self-taught Learning. Andrew Ng, Stanford University, California.
12:00 pm. Data Dependent Loss Functions for Focused Generalization and Transfer Learning. Farzaneh Mirzazadeh and Dale Schuurmans, University of Alberta, Canada. [preprint][slides]

12:30-1:30 pm: Poster session and lunch break.

Afternoon: Transfer Learning
1:30 pm. Welcome and introduction.
1:40 pm. Tutorial on Transfer Learning. Towards Heterogeneous Transfer Learning. Qiang Yang, Hong Kong University of S&T and Guirong Xue and Yong Yu, Shanghai Jiao Tong University, Shanghai, China.[preprint][slides]
2:30 pm. Presentations of the winners of phase 2 of the UTL challenge (Transfer Learning).
2:30 pm. Unsupervised and Transfer Learning Challenge: A Deep Learning Approach. LISA team (First place in phase 2), Université de Montréal, Canada. Presented by Yann Dauphin. Pascal2 best UTL challenge (phase 2) paper award. [preprint], slides:[pdf][ps]
3:00 pm. One-Shot Learning with a Hierarchical Nonparametric Bayesian Model. Ruslan Salakhutdinov, Josh Tenenbaum, Antonio Torralba, MIT, Massachusetts. [preprint]
3:20 pm. Transfer Learning in Computational Biology. Christian Widmer and Gunnar Raetsch, MPI, Germany. Pascal2 best paper award. [preprint]
3:40 pm. Break.
3:50 pm. Transfer Learning in Sequential Decision Problems: A Hierarchical Bayesian Approach. Aaron Wilson, Alan Fern, Prasad Tadepalli, Oregon State University. The presentation will be given by Aaron Wilson. [preprint][slides]
4:10 pm. Inductive Transfer for Bayesian Network Structure Learning. Alexandru Niculescu-Mizil, NEC Labs, New Jersey, and Rich Caruana, Microsoft Research, Redmond, WA. [preprint]
4:30 pm. Transfer Learning for Auto-gating of Flow Cytometry Data. Gyemin Lee, Lloyd Stoolman, Clayton Scott, University of Michigan, Ann Arbor. Pascal2 best student paper award.  [preprint][slides]
4:50 pm. Self-reflective Multi-task Gaussian Process. Kohei Hayashi and Takashi Takenouchi, Nara Institute of Science and Technology, Japan; and Ryota Tomioka and Hisashi Kashima, University of Tokyo, Japan. [preprint][slides]

5:10 pm. Discussion.
6:00 pm. Adjourn.


Posters:

Rapid Feature Learning with Stacked Linear Denoisers. Zhixiang Xu (Airbus team; third place in phase 1 of the UTL challenge). [poster]
Use of Representations in High Dimensional Spaces for Unsupervised and Transfer Learning Challenge. Mehreen Saeed (aliphlaila team, fourth place in phase 2 of the UTL challenge), FAST, Pakistan. [technical report]
Supervised Dimensionality Reduction in the Unsupervised and Transfer Learning 2011 Competition. Yann-Aël Le Borgne (Tryan team, fourth place in the second ranking of phase 2 of the UTL challenge), VUB, Belgium. [poster]
Transfer Learning for Document Classification: Sampling Informative Priors, Philemon Brakel  and Benjamin Schrauwen, Ghent University, Belgium. [preprint]
Unsupervised dimensionality reduction via gradient-based matrix factorization with two learning rates and their automatic updates. Vladimir Nikulin and Tian-Hsiang  Huang, University of Queensland, Australia. [preprint]
Transfer Learning with Cluster Ensembles. Ayan Acharya, Eduardo R. Hruschka, Joydeep Ghosh, Sreangsu Acharyya; University of Texas at Austin, USA, and University of Sao Paulo at Sao Carlos, Brazil. [preprint]
Divide and Transfer: an Exploration of Segmented Transfer to Detect Wikipedia Vandalism. Si-Chi Chin and W. Nick Street, University of Iowa, USA. [preprint]
Clustering: Science or Art? Ulrike von Luxburg, Max Planck Institute for Intelligent Systems, Tuebingen, Germany; Robert C. Williamson, Australian National University, Australia; and Isabelle Guyon, ClopiNet, Berkeley, California. [preprint][poster]

Awards:

[Photos: Widmer, Dauphin, Lee, and Le Borgne receiving their awards]

Related links and papers


Daniel L. Silver, Kristin P. Bennett: Guest editor's introduction: special issue on inductive transfer learning. Machine Learning, 73(3): 215-220 (2008)

Yoshua Bengio, Learning Deep Architectures for AI, Foundations and Trends in Machine Learning, 2(1), pp.1-127, 2009.

Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer and Andrew Y. Ng. Self-taught learning: Transfer learning from unlabeled data, In ICML, 2007.

J. Mairal, F. Bach, J. Ponce, G. Sapiro. Online Learning for Matrix Factorization and Sparse Coding. Journal of Machine Learning Research, 11, 10-60, 2010.

Rich Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.

Alexandru Niculescu-Mizil, Rich Caruana. Inductive Transfer for Bayesian Network Structure Learning. JMLR W&CP, Volume 2:339-346, AISTATS 2007.

C. Liu, J. Yuen, A. Torralba, Nonparametric scene parsing: label transfer via dense scene alignment, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.

G. Schweikert, C. Widmer, B. Schölkopf, and G. Rätsch. An empirical analysis of domain adaptation algorithms. In Advances in Neural Information Processing Systems, NIPS, volume 22, Vancouver, B.C., 2008.

C. Widmer, J.M. Leiva, Y. Altun, and G. Rätsch. Leveraging sequence classification by taxonomy-based multitask learning. In Proc. RECOMB’10, 2010.

Mehta, N., Ray, S., Tadepalli, P., Dietterich, T. (2008). Automatic Discovery and Transfer of MAXQ Hierarchies. International Conference on Machine Learning (ICML-2008)

Rosenstein, M. T., Marx, Z., Kaelbling, L. P., Dietterich, T. G. (2005). To transfer or not to transfer. NIPS 2005 Workshop on Transfer Learning, Whistler, BC

Ulrike von Luxburg, Isabelle Guyon, and Robert C. Williamson (2009). Clustering: Science or Art? In NIPS 2009 workshop on clustering.

Contact information

Organizers:
Isabelle Guyon, Clopinet, Berkeley, California
Daniel Silver, Jodrey School of Computer Science at Acadia University, Canada

Contact email:

utl@clopinet.com

Credits

We are grateful to the DARPA Deep Learning program, the Naval Research Laboratory and the Pascal2 EU network for their support.