Software Required:
- Operating System: Windows XP / Vista / 7
- Memory
- Hard Drive
- Video
various points. An electronic circuit would use an instrument, such as an oscilloscope, for this
task.
Networks are constructed on a breadboard by selecting components from the palettes,
stamping them on the breadboard, and then interconnecting them to form a network topology.
Once the topology is established and its components have been configured, a simulation can be
run.
New breadboards are created by selecting New from the File menu. This will create a
blank breadboard titled "Breadboard1.nsb". The new breadboard can later be saved. Saving a
breadboard saves the topology, the configuration of each component and (optionally) their
weights. Therefore, a breadboard may be saved at any point during training and then restored
later. Whether weights are saved is a parameter setting on each component that contains adaptive
weights; it can be set for all components on the breadboard or only for selected ones.
An example of a functional breadboard is illustrated in the figure below.
Fig 1 : Breadboard
What is a neural network?
A neural network is an adaptable system that can learn relationships through repeated
presentation of data, and is capable of generalizing to new, previously unseen data. Some
networks are supervised, in that a human must determine what the network should learn from the
data. Other networks are unsupervised, in that the way they organize information is hard-coded
into their architecture.
What do you use a neural network for?
Neural networks are used for both regression and classification. In regression, the outputs
represent some desired, continuously valued transformation of the input patterns. In
classification, the objective is to assign the input patterns to one of several categories or classes,
usually represented by outputs restricted to lie in the range from 0 to 1, so that they represent the
probability of class membership.
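The classification case can be made concrete with a softmax function, which squashes raw network outputs into the 0-to-1 range so they sum to one and can be read as class-membership probabilities. This is a generic sketch of the idea, not NeuroSolutions code:

```python
import math

def softmax(outputs):
    """Map raw network outputs to values in (0, 1) that sum to 1,
    so each output can be read as a class-membership probability."""
    shifted = [o - max(outputs) for o in outputs]  # shift for numerical stability
    exps = [math.exp(s) for s in shifted]
    total = sum(exps)
    return [e / total for e in exps]

# Raw outputs for three classes; the largest becomes the most probable class.
probs = softmax([2.0, 1.0, 0.1])
```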
Why are neural networks so powerful?
For regression, it can be shown that neural networks can learn any desired input-output
mapping if they have sufficient numbers of processing elements in the hidden layer(s). For
classification, neural networks can learn the Bayesian posterior probability of correct
classification.
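The need for hidden-layer processing elements can be seen in the classic XOR problem, which no network without a hidden layer can learn because it is not linearly separable. The following is an illustrative sketch of a two-layer network trained by backpropagation (a generic implementation, not the NeuroSolutions one):

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: not linearly separable, so hidden-layer PEs are required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b1 = np.zeros(4)
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)          # hidden activations
    out = sigmoid(h @ W2 + b2)        # network output
    err = out - y
    losses.append(float((err ** 2).mean()))
    # Backpropagate the mean-squared error.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
```

After training, the error should be far below its starting value, showing that the hidden layer lets the network represent the mapping.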
How does NeuroSolutions implement neural networks?
NeuroSolutions adheres to the so-called local additive model. Under this model, each
component can activate and learn using only its own weights and activations, and the activations
of its neighbors. This lends itself well to object-oriented modeling, since each component
can be a separate object that sends and receives messages. This in turn allows for a graphical user
interface (GUI) with icon-based construction of networks.
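The local additive model can be caricatured in code: each component is an object that holds only its own weights and computes its activation from the messages (activations) its neighbors send it. This is an illustrative sketch, not the actual NeuroSolutions class hierarchy:

```python
class Component:
    """A processing element that knows only its own weights and the
    activations its neighbors send to it (the local additive model)."""
    def __init__(self, weights):
        self.weights = list(weights)
        self.inbox = []          # activations received from neighbors

    def receive(self, activation):
        self.inbox.append(activation)

    def activate(self):
        # Weighted sum of the neighbors' activations -- purely local.
        out = sum(w * a for w, a in zip(self.weights, self.inbox))
        self.inbox.clear()
        return out

# Two upstream components feed one downstream component via messages.
out_node = Component(weights=[0.5, -1.0])
out_node.receive(2.0)
out_node.receive(1.0)
result = out_node.activate()   # 0.5*2.0 + (-1.0)*1.0 = 0.0
```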
Neural Components
Each neural component encapsulates the functionality of a particular piece of a neural
network. A working neural network simulation requires the interconnection of many different
components.
As mentioned above, the NeuralBuilder utility automates the construction process of
many popular neural networks. There may be times when you will want to create a network from
scratch, or to add components to a network created by the NeuralBuilder. This is done by
selecting components from palettes and stamping them onto the breadboard.
1. Axon component :
This is a layer of PEs (processing elements) with an identity transfer function. Primary
usage: it can act as a placeholder for the File component at the input layer, or as a linear output
layer.
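Because the Axon's transfer function is the identity, its behavior reduces to passing data through unchanged, as in this small sketch (an illustration of the concept, not NeuroSolutions code):

```python
def axon(inputs):
    """An Axon-style layer: processing elements with an identity
    transfer function, so data passes through unchanged."""
    return [float(x) for x in inputs]

layer_out = axon([0.2, -1.5, 3.0])
```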
Fig 5 : Connection
There is also a shortcut for making a connection between two components. First, select
the source component (by single-clicking the left mouse button), then single click the destination
component with the right mouse button to bring up the Component Menu. Select the "Connect
to" menu item.
The connection may be broken by simply dragging the MaleConnector to an empty spot
on the breadboard and performing a Cut operation. If a connection is valid, a set of lines will be
drawn indicating that data will flow between the components. Data flows from the male to the
female. The valid connection of two Axons is shown in the figure below.
dynamics. Stacking allows this form of connection. The figure below illustrates the use of
stacking to probe the activity flowing through an Axon. Notice that stacking does not use the
male and female connectors.
DataStorage component may also be used in conjunction with the
Fig 10 : Probing the output and input of a network using the same scope.
5. Data Input/Output :
Probes attach to access points to examine data within the network. Network data can also
be altered through access points. This provides an interface for using input files, output files and
other I/O sources. Illustrated in the figure below is a FunctionGenerator stacked on top of the left
Axon. This will inject samples of a user-defined function into the network's data flow.
Fig 12 : A File component to read data in and a DataWriter probe to write data out
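The role of a FunctionGenerator can be mimicked with a plain Python generator that injects samples of a user-defined function into a data stream. This is a hedged sketch of the idea, not the NeuroSolutions API:

```python
import math

def function_generator(func, n_samples, step):
    """Yield n_samples of a user-defined function evaluated on a uniform
    grid, mimicking a FunctionGenerator feeding a network's data flow."""
    for i in range(n_samples):
        yield func(i * step)

# Inject four samples of a sine wave, one quarter-period apart.
samples = list(function_generator(math.sin, 4, math.pi / 2))
```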
6. Transmitters and Receivers :
Both connectors and stacking provide local communication between components. There
are situations where a global communication channel is necessary, either to send/receive data or
for control. NeuroSolutions provides a family of components, called Transmitters, to implement
global communications. These components use access points to globally transmit data, or to send
global messages based on local decisions. Several components can receive data or control
messages that alter their normal operation. This allows very sophisticated tasks to be
implemented, such as adaptive learning rates, nonuniform relaxation, and error-based stop
criteria.
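One way to picture such a global channel is the observer pattern: a transmitter broadcasts a message, and every registered receiver may alter its behavior in response, as in an error-based stop criterion. All class and method names below are invented for illustration; this is not the NeuroSolutions component interface:

```python
class Transmitter:
    """Broadcasts messages globally to every registered receiver."""
    def __init__(self):
        self.receivers = []

    def attach(self, receiver):
        self.receivers.append(receiver)

    def broadcast(self, message):
        for r in self.receivers:
            r.on_message(message)

class StopOnLowError:
    """A receiver implementing an error-based stop criterion."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.stopped = False

    def on_message(self, message):
        if message["error"] < self.threshold:
            self.stopped = True

bus = Transmitter()
stopper = StopOnLowError(threshold=0.01)
bus.attach(stopper)
bus.broadcast({"error": 0.5})    # above threshold: keep training
bus.broadcast({"error": 0.001})  # below threshold: stop
```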
Here are some of the most common components:
Kohonen family :
1. DiamondKohonen :
Fig 13 : DiamondKohonen
Neighborhood Figure (Size=2):
Fig 25 : summary
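The diamond neighborhood shown above can be described with Manhattan distance: a unit belongs to the size-2 neighborhood of the winning unit when the sum of its row and column offsets is at most 2. A small illustrative computation (a sketch of the neighborhood rule, not NeuroSolutions code):

```python
def diamond_neighborhood(winner, size, rows, cols):
    """Return the grid coordinates inside a diamond (Manhattan-distance)
    neighborhood of the winning unit, as in a DiamondKohonen layer."""
    wr, wc = winner
    return [(r, c)
            for r in range(rows)
            for c in range(cols)
            if abs(r - wr) + abs(c - wc) <= size]

# Size-2 diamond around the center of a 5x5 map: 1 + 4 + 8 = 13 units.
hood = diamond_neighborhood((2, 2), 2, 5, 5)
```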