
Table of Contents

1 INTRODUCTION TO ARTIFICIAL INTELLIGENCE


1.1 ARTIFICIAL NEURAL NETWORKS
1.2 INTRODUCTION TO NEURON
1.3 SYNAPSE vs. WEIGHT
1.4 ARCHITECTURE OF NEURAL NETWORK
1.5 ELEMENTS OF NEURAL NETWORKS
1.6 NETWORK ARCHITECTURE
1.6.1 SINGLE LAYER FEED-FORWARD
1.6.2 MULTI LAYER FEED-FORWARD
1.6.3 RECURRENT NETWORKS
1.7 NETWORK ARCHITECTURES
1.8 ADVANTAGES OF NEURAL NETWORKS
1.9 APPLICATIONS OF NEURAL NETWORKS
1.10 SINGLE LAYER NEURAL NETWORK EXAMPLE

2 TENSORFLOW INTRODUCTION/INSTALLATION

3 INTRODUCTION TO PERCEPTRONS
3.1 MLPs AND BACKPROPAGATION
3.2 EXAMPLES

4 CONVOLUTION LAYERS
4.1 FILTERS
4.2 CNN ARCHITECTURES

5 RECURRENT-NNs
5.1 INTRODUCTION
5.2 MEMORY CELLS
5.3 INPUT/OUTPUT SEQUENCES
5.4 EXAMPLE (TIME SERIES PREDICTION)

6 AUTOENCODERS
6.1 DATA REPRESENTATIONS
6.2 STACKED AUTOENCODERS
6.3 TRAINING ONE ENCODER AT A TIME
1 INTRODUCTION TO ARTIFICIAL INTELLIGENCE

1.1 ARTIFICIAL NEURAL NETWORKS


An Artificial Neural Network is, in general, a biologically inspired network of artificial
neurons configured to perform specific tasks.

Neural networks learn from examples:

 • No requirement of an explicit description of the problem.
 • No need for a programmer.
 • The neural computer adapts itself during a training period, based on examples of similar
problems, even without a desired solution to each problem. After sufficient training the
neural computer is able to relate the problem data to the solutions, inputs to outputs, and
is then able to offer a viable solution to a brand-new problem.
 • Able to generalize and to handle incomplete data.
1.2 INTRODUCTION TO NEURON

Neuron in Brain

A Neuron/Node in Artificial Neural Network

1.3 SYNAPSE vs. WEIGHT


1.4 ARCHITECTURE OF NEURAL NETWORK
A typical neural network contains a large number of artificial neurons called units
arranged in a series of layers.

1.5 ELEMENTS OF NEURAL NETWORKS

1. No computations on input-layer neurons.

2. Two processes on every computational neuron:

 • Weighted sum
 • Activation function
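The two processes above can be sketched in plain Python. This is only an illustration: the sigmoid is one common choice of activation function, and any other (step, tanh, ReLU) could be substituted.

```python
import math

def neuron(inputs, weights, bias):
    """One computational neuron: a weighted sum followed by an activation."""
    # Process 1: weighted sum of the inputs
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Process 2: activation function (sigmoid, for illustration)
    return 1.0 / (1.0 + math.exp(-z))

print(neuron([1.0, 0.5], [0.4, -0.2], 0.1))  # ≈ 0.5987
```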
1.6 NETWORK ARCHITECTURE
Three basic classes of network architectures:
 • Single-layer feed-forward neural networks
 • Multi-layer feed-forward neural networks
 • Recurrent neural networks

The architecture of a neural network is linked with the learning algorithm used to train it.

1.6.1 SINGLE LAYER FEED-FORWARD


A neural network having two input units and one output unit, with no hidden layers.
These are also known as single-layer perceptrons.

1.6.2 MULTI LAYER FEED-FORWARD


1.6.3 RECURRENT NETWORKS
1.7 NETWORK ARCHITECTURES

1.8 ADVANTAGES OF NEURAL NETWORKS


 • A neural network can perform tasks that a linear program cannot.
 • When an element of a neural network fails, the network can continue because of its
parallel nature.
 • A neural network learns and does not need to be reprogrammed.
 • It can be implemented in many different domains.

1.9 APPLICATIONS OF NEURAL NETWORKS


1.10 SINGLE LAYER NEURAL NETWORK EXAMPLE
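One possible worked example for this section: a single-layer perceptron with two inputs and one output (as described in 1.6.1), trained on the AND truth table using the classic perceptron learning rule. The learning rate and epoch count here are illustrative choices, not prescribed by the notes.

```python
# Single-layer perceptron: two inputs, one output, no hidden layer.

def step(z):
    """Step activation: fires (1) when the weighted sum is non-negative."""
    return 1 if z >= 0 else 0

def train(samples, lr=0.1, epochs=20):
    """Perceptron learning rule: nudge weights by (target - output) * input."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = step(w[0] * x[0] + w[1] * x[1] + b)
            err = target - y
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# AND truth table: linearly separable, so the perceptron converges.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
for x, t in data:
    print(x, step(w[0] * x[0] + w[1] * x[1] + b))
```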

2 TENSORFLOW INTRODUCTION/INSTALLATION

3 INTRODUCTION TO PERCEPTRONS

3.1 MLPs and Backpropagation

3.2 Examples
4 CONVOLUTION LAYERS

4.1 Filters

4.2 CNN Architectures

5 RECURRENT-NNs

5.1 Introduction

5.2 Memory cells

5.3 Input/Output Sequences

5.4 Example (Time series prediction)

6 AUTOENCODERS
6.1 DATA REPRESENTATIONS
 • It is much easier to remember patterns than exact lists; this was first
studied with chess-game positions (1970s).
 • An autoencoder converts inputs to an internal shorthand, then returns its best-guess
reconstruction. Two parts: an encoder (recognizer) and a decoder (generator, a.k.a. reconstructor).
 • Reconstruction loss: penalizes the model when its reconstructions differ from the inputs.
 • The internal representation has lower dimensionality, so the autoencoder is forced to
learn the most important features in the inputs.
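A minimal sketch of these ideas, assuming a tiny linear autoencoder in plain Python: 3-D inputs are squeezed through a 1-D internal code (the lower-dimensional representation), and training by gradient descent on the squared error (the reconstruction loss) forces the code to capture the data's main direction. The data, weights, and learning rate are all illustrative.

```python
import random

random.seed(0)
# Rank-1 toy data: every sample is a multiple of the direction [1, 2, -1],
# so a single internal dimension can represent it almost perfectly.
data = [[t, 2 * t, -t] for t in (0.2, -0.4, 0.6, 0.8, -0.2)]

we = [random.uniform(-0.5, 0.5) for _ in range(3)]  # encoder weights
wd = [random.uniform(-0.5, 0.5) for _ in range(3)]  # decoder weights
lr = 0.01

def recon_loss():
    """Mean squared error between reconstructions and inputs."""
    total = 0.0
    for x in data:
        h = sum(we[i] * x[i] for i in range(3))          # encode: 3-D -> 1-D
        total += sum((wd[i] * h - x[i]) ** 2 for i in range(3))
    return total / len(data)

before = recon_loss()
for _ in range(1000):
    for x in data:
        h = sum(we[i] * x[i] for i in range(3))
        err = [wd[i] * h - x[i] for i in range(3)]       # reconstruction error
        grad_h = sum(err[i] * wd[i] for i in range(3))   # backprop into the code
        for i in range(3):
            wd[i] -= lr * 2 * err[i] * h
            we[i] -= lr * 2 * grad_h * x[i]
after = recon_loss()
print(before, after)  # the loss drops as the 1-D code learns the data's main axis
```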
6.2 STACKED AUTOENCODERS

6.3 TRAINING ONE ENCODER AT A TIME


 • It is often faster to train each shallow autoencoder individually, then stack them.
 • The simplest approach is to use a separate TF graph for each training phase.
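The phase-by-phase idea can be sketched without TensorFlow. In this illustrative plain-Python version, a hypothetical `train_shallow_ae` helper trains one small linear autoencoder per phase: phase 1 on the raw data, phase 2 on phase 1's codes, after which the encoders and decoders are stacked. All dimensions and hyperparameters are assumptions for the sketch.

```python
import random

random.seed(0)

def train_shallow_ae(data, code_dim, lr=0.01, epochs=1000):
    """Train one shallow linear autoencoder; return (encode_fn, decode_fn)."""
    in_dim = len(data[0])
    we = [[random.uniform(-0.5, 0.5) for _ in range(in_dim)] for _ in range(code_dim)]
    wd = [[random.uniform(-0.5, 0.5) for _ in range(code_dim)] for _ in range(in_dim)]

    def enc(x):
        return [sum(we[j][i] * x[i] for i in range(in_dim)) for j in range(code_dim)]

    def dec(h):
        return [sum(wd[i][j] * h[j] for j in range(code_dim)) for i in range(in_dim)]

    for _ in range(epochs):
        for x in data:
            h = enc(x)
            err = [xh - xi for xh, xi in zip(dec(h), x)]  # reconstruction error
            grad_h = [sum(err[i] * wd[i][j] for i in range(in_dim))
                      for j in range(code_dim)]
            for i in range(in_dim):
                for j in range(code_dim):
                    wd[i][j] -= lr * 2 * err[i] * h[j]
                    we[j][i] -= lr * 2 * grad_h[j] * x[i]
    return enc, dec

# Phase 1: train AE1 alone on the raw 4-D data (code: 2-D).
data = [[t, 2 * t, -t, 0.5 * t] for t in (0.2, -0.4, 0.6, 0.8, -0.2)]
enc1, dec1 = train_shallow_ae(data, code_dim=2)

# Phase 2: train AE2 alone on AE1's codes (code: 1-D).
codes = [enc1(x) for x in data]
enc2, dec2 = train_shallow_ae(codes, code_dim=1)

# Stack: data -> enc1 -> enc2 -> dec2 -> dec1 -> reconstruction.
x = data[0]
x_hat = dec1(dec2(enc2(enc1(x))))
print(x, x_hat)
```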
