NAME:

Umair Nazir

REG.NO:
SP13-BCS-O11

ASSIGNMENT:
A.I#2

DIFFERENCE BETWEEN ARTIFICIAL NEURAL NETWORKS
AND BIOLOGICAL NEURAL NETWORKS:
The brain is a (biological) neural network: it's a network of
neurons. Artificial neural networks, usually just referred to as neural networks, are
computer simulations which process information in a way similar to how we think
the brain does it.
Biological neurons are extremely slow, but artificial neurons are extremely
fast (if properly programmed). For example, in chess, a computer evaluates moves
extremely quickly, whereas a human takes much longer to find the correct move.
On the other hand, biological neurons can perform extremely complex tasks, such
as interpreting a visual scene or understanding language. Building an artificial
network to do the same might require more than a million lines of code, and even
then it may not work as intended.
Human neurons also work in parallel, whereas artificial neurons are typically
simulated sequentially on conventional hardware.
Artificial neural networks (ANNs) are mathematical constructs, originally designed
to approximate biological neurons. Each "neuron" is a relatively simple element:
for example, it sums its inputs and applies a threshold to the result to
determine its output.
Several decades of research went into discovering how to build network
architectures using these mathematical constructs, and how to automatically set the
weighting on each of the connections between the neurons to perform a wide range
of tasks. For example, ANNs can perform tasks such as recognizing handwritten digits.
A "biological neural network" refers to any group of connected biological
nerve cells. Your brain is a biological neural network; so is a group of neurons
grown together in a dish so that they form synaptic connections. The term
"biological neural network" is not very precise; it doesn't define a particular
biological structure.
In the same way, an ANN can mean any of a large number of mathematical
"neuron"-like constructs, connected in any of a large number of ways, to perform
any of a large number of tasks.
An artificial neural network is basically a mathematical function. It is built from
simple functions which have parameters (numbers) which get adjusted.
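A minimal sketch of such a simple function is a single artificial "neuron" that sums its weighted inputs and applies a threshold, as described above. The weights and threshold values below are illustrative, not taken from any trained network.

```python
def neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of the inputs exceeds the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# With equal weights of 0.6 and a threshold of 1.0, this neuron
# behaves like a logical AND of its two inputs.
print(neuron([1, 1], [0.6, 0.6], 1.0))  # 1: weighted sum 1.2 > 1.0
print(neuron([1, 0], [0.6, 0.6], 1.0))  # 0: weighted sum 0.6 <= 1.0
```

Training an ANN amounts to adjusting these weights and thresholds automatically, rather than setting them by hand as done here.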

The characteristics of an ANN are as follows:


Step 1. When the input node is given an image, it activates a unique set of neurons
in the first layer, starting a chain reaction that would pave a unique path to the
output node. In Scenario 1, neurons A, B, and D are activated in layer 1.
Step 2. The activated neurons send signals to every connected neuron in the next
layer. This directly affects which neurons are activated in the next layer. In
Scenario 1, neuron A sends a signal to E and G, neuron B sends a signal to E, and
neuron D sends a signal to F and G.
Step 3. In the next layer, each neuron is governed by a rule on what combinations
of received signals will activate it (rules are trained when we give the
ANN program training data, i.e. images of handwritten digits and the correct
answers). In Scenario 1, neuron E is activated by the signals from A and B.
However, for neurons F and G, their rules tell them that they have not
received the right signals to be activated, and hence they remain grey.

Step 4. Steps 2-3 are repeated for all the remaining layers (it is possible for the
model to have more than 2 layers), until we are left with the output node.
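The steps above can be sketched in code. The connections mirror Scenario 1 (A sends to E and G, B to E, D to F and G), but the activation rules here are made up for illustration: each neuron simply requires a minimum number of incoming signals.

```python
# Layer-1 neurons and the layer-2 neurons they signal (Scenario 1).
connections = {"A": ["E", "G"], "B": ["E"], "D": ["F", "G"]}

# Illustrative rule: a layer-2 neuron activates when it receives at
# least this many signals. (A trained network would learn such rules.)
activation_rule = {"E": 2, "F": 2, "G": 3}

def forward(active_layer1):
    """Propagate signals from the active layer-1 neurons (Steps 2-3)."""
    received = {}
    for source in active_layer1:
        for target in connections.get(source, []):
            received[target] = received.get(target, 0) + 1
    # Only neurons whose rule is satisfied become active.
    return sorted(n for n, count in received.items()
                  if count >= activation_rule[n])

print(forward(["A", "B", "D"]))  # ['E'] -- only E receives enough signals
```

As in Scenario 1, E is activated by the two signals from A and B, while F (one signal) and G (two signals, but a rule requiring three) remain grey. Step 4 would repeat this propagation for each further layer.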

Q NO #2
Data structure and searching algorithm
Computer systems are often used to store large amounts of data from which individual
records must be retrieved according to some search criterion. Thus the efficient
storage of data to facilitate fast searching is an important issue. In this section, we
shall investigate the performance of some searching algorithms and the data structures
which they use.

Sequential Searches
Let's examine how long it will take to find an item matching a key in the collections
we have discussed so far. We're interested in:
a. the average time
b. the worst-case time and
c. the best possible time.
However, we will generally be most concerned with the worst-case time as
calculations based on worst-case times can lead to guaranteed performance
predictions. Conveniently, the worst-case times are generally easier to calculate than
average times.
If there are n items in our collection - whether it is stored as an array or as a linked list
- then it is obvious that in the worst case, when there is no item in the collection with
the desired key, then n comparisons of the key with keys of the items in the collection
will have to be made.
To simplify analysis and comparison of algorithms, we look for a dominant operation
and count the number of times that dominant operation has to be performed. In the
case of searching, the dominant operation is the comparison. Since the search requires
n comparisons in the worst case, we say this is an O(n) (pronounced "big-Oh-n" or
"Oh-n") algorithm. The best case - in which the first comparison returns a match -

requires a single comparison and is O(1). The average time depends on the probability
that the key will be found in the collection - this is something that we would not
expect to know in the majority of cases.
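A sequential search with its comparison count can be sketched as follows; the comparison counter makes the best-case and worst-case behaviour described above visible.

```python
def sequential_search(items, key):
    """Return (index, comparisons) for key in items, or (-1, comparisons)."""
    comparisons = 0
    for i, item in enumerate(items):
        comparisons += 1          # the dominant operation we are counting
        if item == key:
            return i, comparisons
    return -1, comparisons        # key absent: all n comparisons were made

data = [7, 3, 9, 1, 5]
print(sequential_search(data, 7))  # (0, 1)  best case: one comparison, O(1)
print(sequential_search(data, 4))  # (-1, 5) worst case: n comparisons, O(n)
```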

Binary Search
However, if we place our items in an array and sort them in either ascending or
descending order on the key first, then we can obtain much better performance with
an algorithm called binary search.
In binary search, we first compare the key with the item in the middle position of the
array. If there's a match, we can return immediately. If the key is less than the middle
key, then the item sought must lie in the lower half of the array; if it's greater then the
item sought must lie in the upper half of the array. So we repeat the procedure on the
lower (or upper) half of the array.
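The procedure can be sketched as follows, assuming an array sorted in ascending order; each iteration halves the portion of the array still under consideration.

```python
def binary_search(sorted_items, key):
    """Return the index of key in an ascending sorted list, or -1 if absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == key:
            return mid            # match: return immediately
        elif key < sorted_items[mid]:
            high = mid - 1        # key must lie in the lower half
        else:
            low = mid + 1         # key must lie in the upper half
    return -1

data = [1, 3, 5, 7, 9, 11, 13]
print(binary_search(data, 9))   # 4
print(binary_search(data, 4))   # -1 (not present)
```

Because the search interval halves at every step, binary search needs at most O(log n) comparisons, a substantial improvement over the O(n) worst case of sequential search.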
