ICML 2011 Workshop on Unsupervised and Transfer Learning
Unsupervised and Transfer Learning Challenge: a Deep Learning Approach
Grégoire Mesnil (1,2), Yann Dauphin (1), Xavier Glorot (1), Salah Rifai (1), Yoshua Bengio (1), Ian Goodfellow (1), Erick Lavoie (1), Xavier Muller (1), Guillaume Desjardins (1), David Warde-Farley (1), Pascal Vincent (1), Aaron Courville (1), James Bergstra (1)

(1) Dept. IRO, Université de Montréal, Montréal (QC), H2C 3J7, Canada
(2) Dept. LITIS, Université de Rouen, Mont-Saint-Aignan, 76130, France
Learning good representations from a large set of unlabeled data is a particularly challenging task. Recent work (see Bengio (2009) for a review) shows that training deep architectures is a good way to extract such representations, by gradually extracting and disentangling higher-level factors of variation characterizing the input distribution. In this paper, we describe the different kinds of layers we trained to learn representations in the setting of the Unsupervised and Transfer Learning Challenge. Our team's strategy won the final phase of the challenge. It combined different 1-layer unsupervised learning algorithms, adapted to each of the five datasets of the competition. This paper describes that strategy and the particular 1-layer learning algorithms feeding a simple linear classifier with a tiny number of labeled training samples (1 to 64 per class).
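The overall pipeline described above — an unsupervised stage that learns a representation from unlabeled data, followed by a simple linear classifier fit on very few labels — can be sketched as follows. This is a minimal illustration only: PCA stands in for the paper's 1-layer unsupervised learners, a least-squares linear classifier stands in for the competition classifier, and all data here is hypothetical random toy data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data standing in for a challenge dataset:
# many unlabeled points, very few labeled ones (the challenge
# supplied only 1 to 64 labels per class).
X_unlabeled = rng.normal(size=(1000, 50))
X_labeled = rng.normal(size=(8, 50))
y_labeled = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Unsupervised stage: learn a linear representation from the
# unlabeled set. PCA via SVD is used here purely as a stand-in
# for the 1-layer learners the paper describes.
mean = X_unlabeled.mean(axis=0)
_, _, Vt = np.linalg.svd(X_unlabeled - mean, full_matrices=False)
W = Vt[:10].T  # keep the top 10 principal directions

def encode(X):
    """Map raw inputs into the learned representation."""
    return (X - mean) @ W

# Supervised stage: a simple linear classifier (least squares on
# +/-1 targets, with a bias column) trained on the tiny labeled
# set, operating in the learned representation.
Z = encode(X_labeled)
targets = np.where(y_labeled == 1, 1.0, -1.0)
w, _, _, _ = np.linalg.lstsq(
    np.c_[Z, np.ones(len(Z))], targets, rcond=None
)

def predict(X):
    """Return 0/1 class predictions for raw inputs."""
    Zx = encode(X)
    return (np.c_[Zx, np.ones(len(Zx))] @ w > 0).astype(int)
```

The design point the abstract makes is visible in the split: the representation (`mean`, `W`) is learned from unlabeled data alone, so the scarce labels are spent only on the final linear layer.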
Keywords: Deep Learning, unsupervised learning, transfer learning,