
Survey

Deep Learning Survey


I.I. Itauma
Wayne State University Department of Computer Science

February 15, 2013

What is Deep Learning?


Yoshua Bengio (ACL 2012 tutorial) [2]: "Deep Learning algorithms attempt to learn multiple levels of representation of increasing complexity/abstraction." Most current Machine Learning works well because of human-designed representations and input features; ML then reduces to optimizing weights to best make a final prediction. Representation learning attempts to automatically learn good features or representations. Deep Learning is a new area of ML research, introduced with the objective of moving ML closer to one of its original goals: Artificial Intelligence.

Breakthrough in Learning Deep Architectures

Before 2006, training deep architectures was largely unsuccessful. Hinton et al. [1] and Bengio et al. [2] discovered that: Unsupervised learning of representations can be used to (pre-)train each layer. Layers are trained unsupervised one at a time, each on top of the previously trained ones; the representation learned at each level is the input for the next layer. Supervised training then fine-tunes all the layers (in addition to one or more layers dedicated to producing predictions).
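The greedy layer-wise scheme can be sketched with simple tied-weight autoencoders standing in for the RBMs of [1] (a plain NumPy illustration; the layer sizes, learning rate, and linear decoder are assumptions made here, not details from the papers):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.1, epochs=200, seed=0):
    """One tied-weight autoencoder: sigmoid encoder, linear decoder."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W = rng.normal(0.0, 0.1, (n_in, n_hidden))
    b = np.zeros(n_hidden)   # encoder bias
    c = np.zeros(n_in)       # decoder bias
    for _ in range(epochs):
        H = sigmoid(X @ W + b)          # encode: learned representation
        R = H @ W.T + c                 # decode: linear reconstruction
        err = R - X                     # reconstruction error
        dH = (err @ W) * H * (1.0 - H)  # backprop through the encoder
        W -= lr * (X.T @ dH + (H.T @ err).T) / len(X)
        b -= lr * dH.mean(axis=0)
        c -= lr * err.mean(axis=0)
    return W, b

def greedy_pretrain(X, layer_sizes):
    """Train each layer unsupervised on the codes of the previous layer."""
    params, H = [], X
    for n_hidden in layer_sizes:
        W, b = train_autoencoder(H, n_hidden)
        params.append((W, b))
        H = sigmoid(H @ W + b)          # this representation feeds the next layer
    return params, H

X = np.random.default_rng(1).random((64, 20))
params, top = greedy_pretrain(X, [16, 8])
print(top.shape)  # (64, 8): top-level codes, ready for supervised fine-tuning
```

After this unsupervised pass, the stacked weights would initialize a network that supervised training fine-tunes end to end.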

Applications of Deep Learning

DL is about learning representation features. Handcrafting features is time-consuming, and the resulting features are often both over-specified and incomplete. DL provides a way of developing representations for learning and reasoning as humans do. DL has been used successfully in speech recognition, NLP, and visual perception.

Visual perception with Deep Learning


Yann LeCun (Google Tech Talks, 2008) [4] investigated how we learn perception. He defined Deep Learning as learning a hierarchy of internal representations: from low-level features, to mid-level invariant representations, to object identities.
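The low-level end of such a hierarchy can be illustrated with a hand-rolled 2D convolution whose filter responds to vertical edges (the tiny image and the filter here are illustrative assumptions, not material from [4]):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the basic operation behind
    low-level feature detectors in a convolutional hierarchy."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A tiny image: dark left half, bright right half.
image = np.zeros((5, 6))
image[:, 3:] = 1.0

# A vertical-edge filter: fires where brightness jumps left-to-right.
edge_filter = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])

response = conv2d(image, edge_filter)
print(response)  # nonzero only in the column straddling the edge
```

Stacking such filter banks with nonlinearities and pooling is what turns low-level responses into mid-level invariant representations.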


Unsupervised Feature Learning as Density Estimation


We need unsupervised learning methods that can learn invariant feature hierarchies.

Unsupervised Learning

Probabilistic view:
Produces a probability density function that has high value in regions of high sample density and low value everywhere else.

Energy-based view:
Produces an energy function E(Y, W) that has low value in regions of high sample density and high value everywhere else.
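The two views are linked by the Gibbs distribution: exponentiating and normalizing an energy function yields a density (a standard identity, not stated in the slides; beta is an inverse-temperature constant and Z(W) the partition function):

```latex
P(Y \mid W) = \frac{e^{-\beta E(Y, W)}}{Z(W)},
\qquad
Z(W) = \int_{y} e^{-\beta E(y, W)} \, dy
```

Low energy where samples are dense thus corresponds exactly to high probability there, and high energy elsewhere to low probability.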

Deep Learning Tutorial


The tutorials presented in [5] introduce some of the most important deep learning algorithms and show how to run them using Theano, a Python library that makes writing DL models easy and gives the option of training them on a GPU. The following topics are discussed:

Logistic Regression - using Theano for something simple.
Multilayer Perceptron - introduction to layers.
Deep Convolutional Network - a simplified version of LeNet5.
Auto-Encoders, Denoising Auto-Encoders - description of auto-encoders.
Stacked Denoising Auto-Encoders - easy steps into unsupervised pre-training for deep nets.
Restricted Boltzmann Machines - single-layer generative RBM model.
Deep Belief Networks - unsupervised generative pre-training of stacked RBMs followed by supervised fine-tuning.
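As a taste of the first tutorial topic, logistic regression fits in a few lines; this is a plain NumPy sketch rather than the Theano version from [5] (the toy data, learning rate, and epoch count are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=500):
    """Fit weights w and bias b by gradient descent on cross-entropy loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)      # predicted probabilities
        grad = p - y                # dLoss/dlogit for cross-entropy
        w -= lr * X.T @ grad / len(X)
        b -= lr * grad.mean()
    return w, b

# Toy linearly separable data: label is 1 when x0 + x1 > 1.
rng = np.random.default_rng(0)
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w, b = train_logistic(X, y)
accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(accuracy)  # fraction of points correctly classified
```

In the Theano version, the same model is expressed symbolically so the gradient is derived automatically and the updates can run on a GPU.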

Conclusion

We need unsupervised learning methods that can learn invariant feature hierarchies. In Hinton et al.'s paper [1], the DBN uses RBMs for unsupervised learning of the representation at each layer. LeCun [4] presented methods to learn hierarchies of sparse and invariant features. Deep Learning performs better at recognition than most traditional "shallow" architectures, such as SVMs. In traditional architectures, the trainable classifier is often generic (task-independent).


Appendix

References

References I
[1] Hinton, G. E., Osindero, S., and Teh, Y., "A fast learning algorithm for deep belief nets," Neural Computation 18:1527-1554, 2006.
[2] Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle, "Greedy Layer-Wise Training of Deep Networks," in J. Platt et al. (Eds.), Advances in Neural Information Processing Systems 19 (NIPS 2006), pp. 153-160, MIT Press, 2007.
[3] Marc'Aurelio Ranzato, Christopher Poultney, Sumit Chopra, and Yann LeCun, "Efficient Learning of Sparse Representations with an Energy-Based Model," in J. Platt et al. (Eds.), Advances in Neural Information Processing Systems (NIPS 2006), MIT Press, 2007.

References II

[4] Yann LeCun, "Visual Perception with Deep Learning," Google Tech Talks, 2008. http://www.youtube.com/watch?v=3boKlkPBckA
[5] Deep Learning Tutorial. http://deeplearning.net/tutorial/


Thanks!
