
INTRODUCTION

In mathematics, the maximum and minimum of a function, known collectively as
extrema, are the largest and smallest values that the function takes at a point,
either within a given neighbourhood (local or relative extremum) or on the
function's domain in its entirety (global or absolute extremum). Pierre de Fermat
was one of the first mathematicians to propose a general technique (called
adequality) for finding maxima and minima. Locating extreme values is the basic
objective of optimization.
Pierre de Fermat (French: [pjɛʁ də fɛʁma]; 17 August 1601 – 12 January 1665)
was a French lawyer at the Parlement of Toulouse, France, and a mathematician who
is given credit for early developments that led to infinitesimal calculus, including his
technique of adequality. In particular, he is recognized for his discovery of an
original method of finding the greatest and the smallest ordinates of curved lines,
which is analogous to that of the differential calculus, then unknown, and for his
research into number theory. He made notable contributions to analytic
geometry, probability, and optics. He is best known for Fermat's Last Theorem,
which he described in a note in the margin of a copy of Diophantus' Arithmetica.

PART I
a) Describe
i. Mathematical optimization:
In mathematics, computer science, and operations research, mathematical
optimization (alternatively, optimization or mathematical programming) is the
selection of a best element (with regard to some criterion) from some set of
available alternatives. In the simplest case, an optimization problem consists
of maximizing or minimizing a real function by systematically
choosing input values from within an allowed set and computing the value of
the function. The generalization of optimization theory and techniques to
other formulations comprises a large area of applied mathematics. More
generally, optimization includes finding "best available" values of some
objective function given a defined domain (or a set of constraints), including
a variety of different types of objective functions and different types of
domains.
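In its simplest form, then, optimization is just "try allowed input values, keep the one with the best objective value". The Python sketch below illustrates this; the function and the candidate set are made-up examples, not taken from the project:

```python
# Minimal sketch of optimization in the simplest case: systematically choose
# input values from an allowed set, compute the function, and keep the best.

def f(x):
    return (x - 3) ** 2 + 2  # example objective; its minimum is 2, at x = 3

candidates = [x / 10 for x in range(0, 61)]  # allowed set: 0.0, 0.1, ..., 6.0
best_x = min(candidates, key=f)              # minimization; use max() to maximize

print(best_x, f(best_x))  # prints 3.0 2.0
```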

ii. Global maximum:
A global maximum, also known as an absolute maximum, is the largest overall
value of a set, function, etc., over its entire range. It is impossible to construct an
algorithm that will find a global maximum for an arbitrary function.
Global minimum:
A global minimum, also known as an absolute minimum, is the smallest
overall value of a set, function, etc., over its entire range. It is impossible to
construct an algorithm that will find a global minimum for an arbitrary
function.

iii. Local maximum:
A local maximum, also called a relative maximum, is a maximum within
some neighborhood that need not be (but may be) a global maximum.
Local minimum:
A local minimum, also called a relative minimum, is a minimum within
some neighborhood that need not be (but may be) a global minimum.

b) Methods of finding the maximum and minimum value of a quadratic
function

Method 1 of 3: If the quadratic is in the form y = ax^2 + bx + c

Decide whether you're going to find the maximum value or the minimum
value. It's either one or the other; you're not going to find both.

The maximum or minimum value of a quadratic function occurs at its vertex.
For y = ax^2 + bx + c, the value of the function at its vertex is
c - b^2/(4a), attained at x = -b/(2a).

If the value of a is positive, you're going to get the minimum value, because the
parabola opens upwards (the vertex is the lowest point the graph can reach).

If the value of a is negative, you're going to get the maximum value, because the
parabola opens downwards (the vertex is the highest point the graph can reach).

The value of a can't be zero; otherwise we wouldn't be dealing with a quadratic
function, would we?
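Method 1 can be sketched in Python; the quadratic used below is an illustrative example, not one from the project:

```python
# Sketch of Method 1: for y = ax^2 + bx + c the vertex value is c - b^2/(4a),
# a minimum when a > 0 and a maximum when a < 0.

def vertex_value(a, b, c):
    if a == 0:
        raise ValueError("a must be nonzero for a quadratic")
    kind = "minimum" if a > 0 else "maximum"
    return c - b * b / (4 * a), kind

# Example: y = 2x^2 - 8x + 3 has vertex value 3 - 64/8 = -5, a minimum (a > 0)
value, kind = vertex_value(2, -8, 3)
print(value, kind)  # prints -5.0 minimum
```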

Method 2 of 3: If the quadratic is in the form y = a(x - h)^2 + k

For y = a(x - h)^2 + k, k is the value of the function at its vertex.

k is the maximum value of the quadratic when a is negative, and the minimum
value when a is positive.
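Getting from the general form to the vertex form amounts to completing the square. A small Python sketch with example coefficients:

```python
# Sketch of Method 2: rewrite y = ax^2 + bx + c as y = a(x - h)^2 + k by
# completing the square; k is then the maximum/minimum value.

def to_vertex_form(a, b, c):
    h = -b / (2 * a)          # x-coordinate of the vertex
    k = c - b * b / (4 * a)   # value of the function at the vertex
    return h, k

# Example: y = -3x^2 + 6x + 1  ->  y = -3(x - 1)^2 + 4, maximum value 4
h, k = to_vertex_form(-3, 6, 1)
print(h, k)  # prints 1.0 4.0
```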

Method 3 of 3: Using differentiation when the quadratic is in the form
y = ax^2 + bx + c

Differentiate y with respect to x: dy/dx = 2ax + b.

Set the derivative equal to zero. Since dy/dx is the gradient function of the curve,
it gives the gradient at any point, and at a maximum or minimum the gradient is
zero. Setting dy/dx = 0 gives 2ax + b = 0, so x = -b/(2a).

Substitute this value of x into y to get the minimum/maximum value.
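The steps of Method 3 can be sketched in Python (the coefficients below are illustrative):

```python
# Sketch of Method 3: dy/dx = 2ax + b vanishes at x = -b/(2a); substituting
# that x back into y gives the extreme value of the quadratic.

def extremum_by_calculus(a, b, c):
    x = -b / (2 * a)             # solve dy/dx = 2ax + b = 0
    y = a * x * x + b * x + c    # substitute back into y
    return x, y

# Example: y = x^2 - 4x + 7 has its minimum at x = 2, where y = 3
x, y = extremum_by_calculus(1, -4, 7)
print(x, y)  # prints 2.0 3.0
```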

i-Think Map

PART II


PART III


FURTHER EXPLORATION
a) Linear programming (LP; also called linear optimization) is a method to
achieve the best outcome (such as maximum profit or lowest cost) in
a mathematical model whose requirements are represented by linear
relationships. Linear programming is a special case of mathematical
programming (mathematical optimization).
More formally, linear programming is a technique for the optimization of
a linear objective function, subject to linear equality and linear
inequality constraints. Its feasible region is a convex polytope, which is a set
defined as the intersection of finitely many half-spaces, each of which is defined
by a linear inequality. Its objective function is a real-valued affine
function defined on this polyhedron. A linear programming algorithm finds a
point in the polyhedron where this function has the smallest (or largest) value if
such a point exists.
Linear programs are problems that can be expressed in canonical form:

maximize c^T x
subject to Ax ≤ b
and x ≥ 0,

where x represents the vector of variables (to be determined), c and b are
vectors of (known) coefficients, A is a (known) matrix of coefficients, and c^T
is the matrix transpose of c. The expression to be maximized or minimized is
called the objective function (c^T x in this case).


The inequalities Ax ≤ b and x ≥ 0 are the constraints, which specify a convex
polytope over which the objective function is to be optimized. In this context,
two vectors are comparable when they have the same dimensions. If every entry
in the first is less than or equal to the corresponding entry in the second, then we
can say the first vector is less than or equal to the second vector.
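For a problem with only two variables, the canonical form above can be solved by checking the corner points of the feasible polytope, since a linear objective always attains its optimum at a vertex when one exists. The Python sketch below uses brute-force vertex enumeration rather than a production method such as the simplex algorithm, and the constraint data are made up for illustration:

```python
from itertools import combinations

# Solve max c^T x subject to Ax <= b, x >= 0 for two variables by enumerating
# vertices: intersect every pair of constraint boundary lines (including the
# axes), keep the feasible intersections, and take the best objective value.
# This is a teaching sketch, not the simplex method.

def solve_lp_2d(c, A, b):
    # Append x >= 0 and y >= 0 as the rows -x <= 0 and -y <= 0
    rows = [list(r) for r in A] + [[-1.0, 0.0], [0.0, -1.0]]
    rhs = list(b) + [0.0, 0.0]
    best = None
    for i, j in combinations(range(len(rows)), 2):
        (a1, b1), (a2, b2) = rows[i], rows[j]
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel boundaries: no unique intersection point
        x = (rhs[i] * b2 - rhs[j] * b1) / det   # Cramer's rule
        y = (a1 * rhs[j] - a2 * rhs[i]) / det
        if all(r[0] * x + r[1] * y <= v + 1e-9 for r, v in zip(rows, rhs)):
            value = c[0] * x + c[1] * y
            if best is None or value > best[0]:
                best = (value, x, y)
    return best  # (optimal value, x, y), or None if infeasible

# Illustrative data: maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6
best = solve_lp_2d([3, 2], [[1, 1], [1, 3]], [4, 6])
print(best)  # prints (12.0, 4.0, 0.0): optimum at the vertex (4, 0)
```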
Linear programming can be applied to various fields of study. It is used in
business and economics, but can also be utilized for some engineering
problems. Industries that use linear programming models include transportation,
energy, telecommunications, and manufacturing. It has proved useful in
modeling diverse types of problems in planning, routing, scheduling,
assignment, and design.


LINEAR PROGRAMMING HISTORY


The problem of solving a system of linear inequalities dates back at least as far
as Fourier, who in 1827 published a method for solving them, and after whom the
method of Fourier–Motzkin elimination is named.
The first linear programming formulation of a problem that is equivalent to the
general linear programming problem was given by Leonid Kantorovich in 1939, who
also proposed a method for solving it. He developed it during World War II as a way
to plan expenditures and returns so as to reduce costs to the army and increase
losses incurred by the enemy. At about the same time as Kantorovich, the
Dutch-American economist T. C. Koopmans formulated classical economic problems as
linear programs. Kantorovich and Koopmans later shared the 1975 Nobel Prize in
Economics. In 1941, Frank Lauren Hitchcock also formulated transportation
problems as linear programs and gave a solution very similar to the later simplex
method;[2] Hitchcock had died in 1957, and the Nobel Prize is not awarded
posthumously.
During 1946–1947, George B. Dantzig independently developed a general linear
programming formulation to use for planning problems in the US Air Force. In 1947,
Dantzig also invented the simplex method, which for the first time efficiently tackled
the linear programming problem in most cases. When Dantzig arranged a meeting
with John von Neumann to discuss his simplex method, von Neumann immediately
conjectured the theory of duality by realizing that the problem he had been working
on in game theory was equivalent. Dantzig provided a formal proof in an unpublished
report, "A Theorem on Linear Inequalities", on January 5, 1948.[3] Postwar, many
industries found its use in their daily planning.
Dantzig's original example was to find the best assignment of 70 people to 70 jobs.
The computing power required to test all the permutations to select the best
assignment is vast; the number of possible configurations exceeds the number of
particles in the observable universe. However, it takes only a moment to find the
optimum solution by posing the problem as a linear program and applying
the simplex algorithm. The theory behind linear programming drastically reduces
the number of possible solutions that must be checked.
The linear-programming problem was first shown to be solvable in polynomial time
by Leonid Khachiyan in 1979, but a larger theoretical and practical breakthrough in
the field came in 1984, when Narendra Karmarkar introduced a new interior-point
method for solving linear-programming problems.


USES OF LINEAR PROGRAMMING


Linear programming is an important field of optimization for several reasons. Many
practical problems in operations research can be expressed as linear programming
problems. Certain special cases of linear programming, such as network
flow problems and multicommodity flow problems, are considered important enough
to have generated much research on specialized algorithms for their solution. A
number of algorithms for other types of optimization problems work by solving LP
problems as sub-problems. Historically, ideas from linear programming have
inspired many of the central concepts of optimization theory, such
as duality, decomposition, and the importance of convexity and its generalizations.
Likewise, linear programming is heavily used in microeconomics and company
management, such as planning, production, transportation, technology and other
issues. Although the modern management issues are ever-changing, most
companies would like to maximize profits or minimize costs with limited resources.
Therefore, many issues can be characterized as linear programming problems.


EXAMPLE 1
A company manufactures inkjet and laser printers. The company can make a total of
60 printers per day, and it has 120 labour hours per day available. It takes one hour
to manufacture an inkjet printer and three hours to make a laser printer. The profit
is forty-five dollars per inkjet printer and sixty-five dollars per laser printer.

STEP ONE:
Identify the variables
x = number of inkjet printers
y = number of laser printers

STEP TWO:
Set up the inequalities and the objective function
Total printers: x + y ≤ 60
Labour hours: x + 3y ≤ 120
Non-negativity: x ≥ 0, y ≥ 0
Profit: P = 45x + 65y

STEP THREE:
Graph it

GRAPH

STEP FOUR:
Insert the x and y coordinates of each corner point of the feasible region
into your objective function to determine the minimum and maximum values
45(0) + 65(0) = 0
45(60) + 65(0) = 2700
45(0) + 65(40) = 2600
45(30) + 65(30) = 3300
Maximum profit: 3300, at x = 30 inkjet printers and y = 30 laser printers
Minimum profit: 0, at x = 0 and y = 0
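The corner-point evaluation for Example 1 can be verified with a short Python sketch, using the feasible region defined by x + y ≤ 60, x + 3y ≤ 120, x ≥ 0, y ≥ 0:

```python
# Evaluate the profit 45x + 65y at the corner points of the feasible region
# defined by x + y <= 60, x + 3y <= 120, x >= 0, y >= 0.

def profit(x, y):
    return 45 * x + 65 * y

corners = [(0, 0), (60, 0), (0, 40), (30, 30)]  # (30, 30) solves the two
                                                # resource constraints as equalities
values = {pt: profit(*pt) for pt in corners}
print(values)
# Maximum profit is 3300 at (30, 30); minimum is 0 at (0, 0).
```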

EXAMPLE II

s = number of small vans
l = number of large vans

Constraints:
s ≥ 0
l ≥ 0
10000s + 20000l ≤ 100000
100s + 75l ≤ 500

Number of passengers (objective): P = 7s + 15l

GRAPH
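Assuming the two resource constraints are the upper bounds 10000s + 20000l ≤ 100000 and 100s + 75l ≤ 500 with s, l ≥ 0 (an assumption: the inequality directions are not stated explicitly here), the passenger count P = 7s + 15l can be checked at the corner points in Python:

```python
# Check P = 7s + 15l at the corners of the region ASSUMED to be
# 10000s + 20000l <= 100000, 100s + 75l <= 500, s >= 0, l >= 0.
# The inequality directions are an assumption, not given in the example.

def passengers(s, l):
    return 7 * s + 15 * l

corners = [(0, 0), (5, 0), (0, 5), (2, 4)]  # (2, 4) solves both resource
                                            # constraints as equalities
feasible = [pt for pt in corners
            if 10000 * pt[0] + 20000 * pt[1] <= 100000
            and 100 * pt[0] + 75 * pt[1] <= 500]
best = max(feasible, key=lambda pt: passengers(*pt))
print(best, passengers(*best))  # under these assumptions: (0, 5) 75
```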


REFLECTION
While conducting this Additional Mathematics project, I have learnt how to manage
my time. This is because we, the SPM students, have a hectic schedule, with the
Trials and SPM around the corner; on top of that, we have tuition and
extra-curricular activities. So I had to manage my time efficiently to complete the
project.
Furthermore, I learnt how to apply Additional Mathematics, especially Linear
Programming, in our daily lives. For example, in this project paper, I've learnt how
Encik Shah has to use calculations to maximize the total area of his sheep pen.
I also got to build my relationships with my friends. We really helped each other
throughout this project.
A special thanks to Puan Puteh Zainuha for spending time and all her effort in the
process of doing this project. I personally think I couldn't have done this without her
help.
