
Machine Learning - Ishaan Kapoor, Roll No. 1/15/FET/BCS/1/055

Assignment-6

1. What is the difference between linear regression and logistic regression?


- Linear and logistic regression are the most basic and most commonly used forms of regression. Regression is a technique used to predict the value of a response (dependent) variable from one or more predictor (independent) variables, where the variables are numeric; there are various forms of regression such as linear, multiple, logistic, polynomial, non-parametric, etc. The essential difference between the two is that logistic regression is used when the dependent variable is binary in nature, whereas linear regression is used when the dependent variable is continuous and the nature of the regression line is linear.

Linear regression: the data is modelled using a straight line, and the independent variables may be correlated with each other (especially in multiple linear regression).

Logistic regression: the probability of some event is modelled as a logistic (sigmoid) function of a linear combination of the predictor variables, and the independent variables should not be correlated with each other (no multicollinearity should exist).
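
For illustration, here is a minimal scikit-learn sketch contrasting the two (the toy data is invented purely for demonstration):

    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])

    # Linear regression: continuous dependent variable
    y_cont = np.array([1.1, 1.9, 3.2, 3.9, 5.1])
    lin = LinearRegression().fit(X, y_cont)
    print(lin.predict([[6.0]]))          # a continuous prediction

    # Logistic regression: binary dependent variable
    y_bin = np.array([0, 0, 0, 1, 1])
    log = LogisticRegression().fit(X, y_bin)
    print(log.predict_proba([[3.5]]))    # class probabilities for the two outcomes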

2. Write a note on inductive learning.

- This involves the process of learning by example -- where a system tries to induce a general rule from a
set of observed instances.

This involves classification -- assigning, to a particular input, the name of a class to which it belongs.
Classification is important to many problem solving tasks.

A learning system has to be capable of evolving its own class descriptions, because:

- Initial class definitions may not be adequate.
- The world may not be well understood, or it may be rapidly changing.

The task of constructing class definitions is called induction or concept learning.
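
A minimal sketch of this idea, inducing a classification rule from a handful of observed instances with a decision tree (the attributes and labels are invented for illustration):

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Observed instances: [has_fur, lays_eggs] -> 1 = mammal, 0 = not a mammal
    X = [[1, 0], [1, 0], [0, 1], [0, 1], [1, 1]]
    y = [1, 1, 0, 0, 1]

    tree = DecisionTreeClassifier().fit(X, y)   # induce a general rule from the examples
    print(export_text(tree, feature_names=["has_fur", "lays_eggs"]))
    print(tree.predict([[1, 0]]))               # classify a new, unseen instance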

3. What is deep learning?


- Deep learning is a machine learning technique that teaches computers to do what comes naturally to
humans: learn by example. Deep learning is a key technology behind driverless cars, enabling them to
recognize a stop sign, or to distinguish a pedestrian from a lamppost. It is the key to voice control in
consumer devices like phones, tablets, TVs, and hands-free speakers. Deep learning is getting lots of
attention lately and for good reason. It’s achieving results that were not possible before.

4. Compare machine learning and deep learning.


- We use a machine learning algorithm to parse data, learn from that data, and make informed decisions based on what it has learned. Deep Learning, in turn, arranges algorithms in layers to create an artificial "neural network" that can learn and make intelligent decisions on its own. We can say Deep Learning is a sub-field of Machine Learning.
Data Dependencies
The main difference in performance shows up as the amount of data changes. When the data is small, Deep Learning algorithms do not perform well; they need a large amount of data to learn the task properly.

Hardware Dependencies
Generally, Deep Learning depends on high-end machines, while traditional Machine Learning can run on low-end machines. Deep Learning typically requires GPUs, which are an integral part of its working, because it performs a large number of matrix multiplication operations.

Problem Solving Approach
With a traditional algorithm, we generally break a problem into different parts, solve them individually, and then combine the partial results to get the final answer. Deep Learning, in contrast, tends to take the problem and solve it end to end.

5. Explain the backpropagation technique.

- Backpropagation is a supervised learning algorithm for training multi-layer perceptrons (artificial neural networks).

Training a neural network means repeatedly adjusting its weights so that the error between the network's output and the desired output becomes small. The backpropagation algorithm looks for the minimum value of the error function in weight space using a technique called the delta rule or gradient descent. The weights that minimize the error function are then considered to be a solution to the learning problem.
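
A minimal sketch of backpropagation, written from scratch with NumPy for a tiny two-layer network learning XOR (the network size, learning rate, and iteration count are arbitrary choices for illustration):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # XOR training data
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer
    lr = 1.0

    for _ in range(10000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # backward pass: propagate the error back and apply the delta rule
        d_out = (out - y) * out * (1 - out)         # delta at the output layer
        d_h = (d_out @ W2.T) * h * (1 - h)          # delta at the hidden layer

        # gradient descent step on every weight and bias
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(out.round(2))   # typically ends up close to [0, 1, 1, 0]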

6. Explain the neural network technique.


- Artificial neural networks (ANN) or connectionist systems are computing systems vaguely inspired by
the biological neural networks that constitute animal brains. The neural network itself is not an
algorithm, but rather a framework for many different machine learning algorithms to work together and
process complex data inputs.[3] Such systems "learn" to perform tasks by considering examples,
generally without being programmed with any task-specific rules. For example, in image recognition,
they might learn to identify images that contain cats by analyzing example images that have been
manually labeled as "cat" or "no cat" and using the results to identify cats in other images. They do this
without any prior knowledge about cats, for example, that they have fur, tails, whiskers and cat-like
faces. Instead, they automatically generate identifying characteristics from the learning material that
they process.
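
A minimal sketch of this learning-from-labelled-examples idea, using scikit-learn's MLPClassifier on invented toy feature vectors (the labels simply stand in for "cat" / "no cat"):

    from sklearn.neural_network import MLPClassifier

    # Toy labelled examples: each row is a tiny feature vector, label 1 = "cat", 0 = "no cat"
    X = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
    y = [1, 1, 0, 0]

    # The network derives its own identifying characteristics; no hand-written "cat rules"
    clf = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs", random_state=0).fit(X, y)
    print(clf.predict([[0.85, 0.15]]))   # expected to come out as class 1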

7. What is a perceptron?
- A perceptron is a simple model of a biological neuron in an artificial neural network. Perceptron is also
the name of an early algorithm for supervised learning of binary classifiers.

The perceptron algorithm was designed to classify visual inputs, categorizing subjects into one of two
types and separating groups with a line. Classification is an important part of machine learning and
image processing. Machine learning algorithms find and classify patterns by many different means. The
perceptron algorithm classifies patterns and groups by finding the linear separation between different
objects and patterns that are received through numeric or visual input.
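
A minimal sketch of the perceptron learning rule on an invented, linearly separable toy set (the learning rate and number of passes are arbitrary):

    import numpy as np

    # Toy data: label +1 when x0 + x1 is large, -1 otherwise (invented for illustration)
    X = np.array([[0.0, 0.2], [0.3, 0.4], [1.0, 0.9], [0.8, 1.1]])
    y = np.array([-1, -1, 1, 1])

    w, b, lr = np.zeros(2), 0.0, 0.1

    for _ in range(20):                      # a few passes over the data
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:       # misclassified (or on the separating line)
                w += lr * yi * xi            # perceptron update rule
                b += lr * yi

    print(w, b)                              # the learned linear separator
    print(np.sign(X @ w + b))                # reproduces the labels on this toy set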

9. What is PCA?

- The main idea of principal component analysis (PCA) is to reduce the dimensionality of a data set consisting of many variables correlated with each other, either heavily or lightly, while retaining as much as possible of the variation present in the dataset. This is done by transforming the variables into a new set of variables, known as the principal components (or simply, the PCs), which are orthogonal and ordered so that the variation retained from the original variables decreases as we move down the order. In this way, the 1st principal component retains the maximum variation that was present in the original variables. The principal components are the eigenvectors of the covariance matrix, and hence they are orthogonal.

Importantly, the dataset on which the PCA technique is used must be scaled, since the results are sensitive to the relative scaling of the variables. In layman's terms, PCA is a method of summarizing data. Imagine some wine bottles on a dining table, where each wine is described by attributes such as colour, strength, age, etc. Redundancy arises because many of these attributes measure related properties, so what PCA does in this case is summarize each wine in the stock with fewer characteristics.

Intuitively, principal component analysis supplies the user with a lower-dimensional picture, a projection or "shadow" of the object when viewed from its most informative viewpoint.
10. What is dimensionality reduction?
- In machine learning classification problems, there are often too many factors on the basis of which the
final classification is done. These factors are basically variables called features. The higher the number of
features, the harder it gets to visualize the training set and then work on it. Sometimes, most of these
features are correlated, and hence redundant. This is where dimensionality reduction algorithms come
into play. Dimensionality reduction is the process of reducing the number of random variables under
consideration, by obtaining a set of principal variables. It can be divided into feature selection and
feature extraction.
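
A minimal sketch of the two flavours on an invented toy dataset: feature selection keeps a subset of the original features, while feature extraction (here, PCA) builds a new, smaller set of variables:

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 10))              # 100 samples, 10 features (invented data)
    y = (X[:, 0] + X[:, 1] > 0).astype(int)     # only the first two features actually matter

    # Feature selection: keep the k original features most related to the target
    X_sel = SelectKBest(f_classif, k=2).fit_transform(X, y)

    # Feature extraction: derive k new variables (principal components) from all features
    X_ext = PCA(n_components=2).fit_transform(X)

    print(X_sel.shape, X_ext.shape)             # both reduced to (100, 2)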
