ICML 2011 workshop on unsupervised and transfer learning
Rapid Feature Learning with Stacked Linear
Denoisers
Zhixiang Xu (Airbus team)
Washington University in St. Louis
We investigate
unsupervised pre-training of deep architectures as feature generators
for shallow classifiers. Stacked Denoising Autoencoders (SdA),
when used as feature pre-processing tools for SVM classification, can
lead to significant improvements in accuracy, albeit at the price
of a substantial increase in computational cost. In this poster we
present a simple algorithm that mimics the layer-by-layer training of
SdAs. However, in contrast to SdAs, our algorithm requires no training
through gradient descent, as its parameters can be computed in
closed form. It can be implemented in less than 20 lines of MATLAB and
reduces the computation time from several hours to mere seconds. We
show that our feature transformation reliably and significantly
improves SVM classification on all of our data sets, sometimes
outperforming SdAs and even deep neural networks on the deep learning
benchmarks.
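The abstract does not spell out the algorithm, but the closed-form, layer-by-layer idea can be sketched as follows: each layer is a linear denoiser, i.e. a ridge regression from an explicitly corrupted copy of the input back to the clean input, stacked with a squashing nonlinearity between layers. The sketch below is a hypothetical NumPy rendering under those assumptions (the corruption probability `p`, the regularizer `reg`, the `tanh` squashing, and the concatenation of all layer outputs are illustrative choices, not taken from the paper):

```python
import numpy as np

def train_linear_denoiser(X, p=0.5, reg=1e-5, rng=None):
    """Closed-form linear denoiser: ridge regression that maps a
    corrupted copy of X back to X. X has shape (n_samples, n_features)."""
    rng = np.random.default_rng(rng)
    # Corrupt the input by zeroing each entry independently with probability p.
    X_tilde = X * (rng.random(X.shape) > p)
    d = X.shape[1]
    # Solve min_W ||X - X_tilde @ W||^2 + reg * ||W||^2 in closed form:
    # W = (X_tilde' X_tilde + reg I)^{-1} X_tilde' X  -- no gradient descent.
    A = X_tilde.T @ X_tilde + reg * np.eye(d)
    B = X_tilde.T @ X
    return np.linalg.solve(A, B)

def stacked_features(X, n_layers=3, p=0.5, rng=0):
    """Stack denoisers layer by layer; tanh between layers is an
    assumption common in this family of models, not stated in the abstract."""
    feats, H = [X], X
    for layer in range(n_layers):
        W = train_linear_denoiser(H, p=p, rng=rng + layer)
        H = np.tanh(H @ W)
        feats.append(H)
    # Concatenate the raw input with every layer's output; the combined
    # representation would then be fed to a shallow classifier such as an SVM.
    return np.hstack(feats)
```

With no corruption and no regularization, the closed-form solution reduces to the identity map, which is a quick sanity check that the regression is set up correctly.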