3 INTRODUCTION TO PERCEPTRONS
3.1 MLPs AND BACKPROPAGATION
3.2 EXAMPLES
4 CONVOLUTION LAYERS
4.1 FILTERS
4.2 CNN ARCHITECTURES
5 RECURRENT-NNs
5.1 INTRODUCTION
5.2 MEMORY CELLS
5.3 INPUT/OUTPUT SEQUENCES
5.4 EXAMPLE (TIME SERIES PREDICTION)
6 AUTOENCODERS
6.1 DATA REPRESENTATION
6.2 STACKED AUTOENCODERS
6.3 TRAINING ONE ENCODER AT A TIME
1 INTRODUCTION TO ARTIFICIAL INTELLIGENCE
Neurons in the Brain
The architecture of a neural network is linked with the learning algorithm used to train it.
6 AUTOENCODERS
6.1 DATA REPRESENTATION
It is much easier to remember familiar patterns than exact lists; this was first
studied with chess players memorizing game positions (1970s).
An autoencoder converts its inputs into an internal representation (a kind of shorthand),
then outputs its best-guess reconstruction of those inputs.
Two parts: an encoder (recognizer) and a decoder (generator, a.k.a. reconstructor).
Reconstruction loss: penalizes the model when the reconstructions differ from the inputs.
The internal representation has lower dimensionality than the inputs, so the autoencoder
is forced to learn the most important features of the inputs.
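The ideas above can be sketched in a few lines of NumPy. This is a minimal, illustrative toy (not a reference implementation): a single linear encoder and linear decoder with a 2-D bottleneck, trained by plain gradient descent on the mean-squared reconstruction loss. All names (`train_autoencoder`, `W_enc`, `W_dec`) are invented for this sketch.

```python
import numpy as np

def train_autoencoder(X, n_codings=2, lr=0.02, n_steps=3000, seed=0):
    """Toy linear autoencoder: returns (W_enc, W_dec, losses)."""
    rng = np.random.default_rng(seed)
    # Encoder and decoder are each a single linear layer (no biases).
    W_enc = rng.normal(scale=0.5, size=(X.shape[1], n_codings))
    W_dec = rng.normal(scale=0.5, size=(n_codings, X.shape[1]))
    losses = []
    for _ in range(n_steps):
        codings = X @ W_enc              # encoder: internal "shorthand"
        X_hat = codings @ W_dec          # decoder: best-guess reconstruction
        err = X_hat - X
        losses.append(float(np.mean(err ** 2)))   # reconstruction loss (MSE)
        # Gradients of the MSE with respect to both weight matrices.
        grad_dec = codings.T @ err * (2 / err.size)
        grad_enc = X.T @ (err @ W_dec.T) * (2 / err.size)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    return W_enc, W_dec, losses

# Toy data: 3-D inputs that actually lie on a 2-D plane, so a 2-D
# bottleneck is enough to reconstruct them well.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 3))
W_enc, W_dec, losses = train_autoencoder(X)
print(losses[0], losses[-1])  # reconstruction loss falls during training
```

Because the bottleneck (2 units) is narrower than the input (3 dimensions), the network cannot simply copy its input; it must find the directions that matter most, which is exactly why the internal representation captures the most important features.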
6.2 STACKED AUTOENCODERS