

If the values of x are more accurate than those of y (i.e., regression of y on x), then we estimate the values of the ordinate y from the observed values of x; that is,

$y_i' = a_0 + a_1 x_i$    (22.12)

where $y_i'$ is the estimated value of the dependent variable $y_i$. Thus, the error $e_{y_i}$ is given by

$e_{y_i} = y_i' - y_i$    (22.13)

The least squares principle states that for the best-fitting straight line, the sum of the squares of the errors should be a minimum. That is,

$S_e = \sum_{i=1}^{n} e_{y_i}^2$    (22.14)

is minimised. Substituting the value of $e_{y_i}$ from Eq. (22.13), we get the sum of squared errors as

$S_e = \sum_{i=1}^{n} (y_i' - y_i)^2$    (22.15)

Substituting the value of $y_i'$ from Eq. (22.12) in Eq. (22.15), we get

$S_e = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)^2$    (22.16)
In order to minimise $S_e$, we determine the partial derivatives of $S_e$ with respect to $a_0$ and $a_1$ in Eq. (22.16). Each derivative must be zero at the minimum. Thus, this process gives two equations in two unknowns. Therefore,

$\dfrac{\partial S_e}{\partial a_0} = -2 \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i) = 0$    (22.17)

and

$\dfrac{\partial S_e}{\partial a_1} = -2 \sum_{i=1}^{n} x_i (y_i - a_0 - a_1 x_i) = 0$    (22.18)
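As a check on the differentiation step, a short SymPy sketch (an added illustration, not part of the text) confirms Eqs. (22.17) and (22.18) symbolically for a small n:

```python
# A sketch verifying Eqs. (22.17)-(22.18) with SymPy for n = 3
# (an added illustration, not part of the text).
import sympy as sp

a0, a1 = sp.symbols('a0 a1')
xs = sp.symbols('x1:4')    # symbolic x1, x2, x3
ys = sp.symbols('y1:4')    # symbolic y1, y2, y3

# S_e from Eq. (22.16) with n = 3
Se = sum((yi - a0 - a1 * xi) ** 2 for xi, yi in zip(xs, ys))

# Right-hand sides claimed by Eqs. (22.17) and (22.18)
claim_a0 = -2 * sum(yi - a0 - a1 * xi for xi, yi in zip(xs, ys))
claim_a1 = -2 * sum(xi * (yi - a0 - a1 * xi) for xi, yi in zip(xs, ys))

print(sp.expand(sp.diff(Se, a0) - claim_a0))   # 0
print(sp.expand(sp.diff(Se, a1) - claim_a1))   # 0
```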

Equations (22.17) and (22.18) can be simplified to

$\sum_{i=1}^{n} y_i - n a_0 - a_1 \sum_{i=1}^{n} x_i = 0$    (22.19)

and

$\sum_{i=1}^{n} x_i y_i - a_0 \sum_{i=1}^{n} x_i - a_1 \sum_{i=1}^{n} x_i^2 = 0$    (22.20)
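Eqs. (22.19) and (22.20) form a 2×2 linear system in $a_0$ and $a_1$. A minimal NumPy sketch (hypothetical data, same values as the earlier sketch) solves it directly:

```python
# A sketch: Eqs. (22.19)-(22.20) as a 2x2 linear system, solved with NumPy
# (hypothetical data, same values as the earlier sketch).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])
n = len(x)

# Normal equations in matrix form:
#   [ n       sum(x)   ] [a0]   [ sum(y)  ]
#   [ sum(x)  sum(x^2) ] [a1] = [ sum(xy) ]
A = np.array([[n, x.sum()],
              [x.sum(), (x ** 2).sum()]])
b = np.array([y.sum(), (x * y).sum()])

a0, a1 = np.linalg.solve(A, b)
print(a0, a1)   # 0.15, 1.94 for these data
```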

Solving Eqs. (22.19) and (22.20) simultaneously, we get

$a_0 = \dfrac{\sum_{i=1}^{n} y_i \sum_{i=1}^{n} x_i^2 - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} x_i y_i}{n \sum_{i=1}^{n} x_i^2 - \left( \sum_{i=1}^{n} x_i \right)^2}$    (22.21)

and

$a_1 = \dfrac{n \sum_{i=1}^{n} x_i y_i - \sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{n \sum_{i=1}^{n} x_i^2 - \left( \sum_{i=1}^{n} x_i \right)^2}$    (22.22)

The denominator in Eqs. (22.21) and (22.22) is the same and could be replaced by a quantity, say $\Delta$. Further, the short notation $\sum x_i$, $\sum x_i y_i$, etc. could be employed in place of $\sum_{i=1}^{n} x_i$, $\sum_{i=1}^{n} x_i y_i$, etc. to present the above equations as

$a_0 = \dfrac{\sum y_i \sum x_i^2 - \sum x_i \sum x_i y_i}{\Delta}$    (22.23)

and

$a_1 = \dfrac{n \sum x_i y_i - \sum x_i \sum y_i}{\Delta}$    (22.24)

where

$\Delta = n \sum x_i^2 - \left( \sum x_i \right)^2$    (22.25)
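A minimal Python sketch (again using the hypothetical data from the earlier sketches) computes $a_0$ and $a_1$ directly from the shorthand sums of Eqs. (22.23)-(22.25):

```python
# A sketch of the closed-form coefficients, Eqs. (22.23)-(22.25),
# written with the shorthand sums (hypothetical data as before).
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]
n = len(x)

Sx  = sum(x)                                  # sum x_i
Sy  = sum(y)                                  # sum y_i
Sxx = sum(xi * xi for xi in x)                # sum x_i^2
Sxy = sum(xi * yi for xi, yi in zip(x, y))    # sum x_i y_i

delta = n * Sxx - Sx ** 2                     # Eq. (22.25)
a0 = (Sy * Sxx - Sx * Sxy) / delta            # Eq. (22.23)
a1 = (n * Sxy - Sx * Sy) / delta              # Eq. (22.24)
print(a0, a1)   # matches the normal-equation solution: 0.15, 1.94
```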

It may be noted that by dividing Eq. (22.19) by n we get

$\dfrac{\sum y_i}{n} = a_0 + a_1 \dfrac{\sum x_i}{n}$    (22.26)

or

$\bar{Y} = a_0 + a_1 \bar{X}$    (22.27)

This shows that the least squares line passes through the centroid $(\bar{X}, \bar{Y})$.
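As a quick numerical check (hypothetical data and the coefficients found above, not from the text), the centroid property of Eq. (22.27) can be verified directly:

```python
# A quick check of the centroid property, Eq. (22.27), using the
# hypothetical data and the coefficients a0 = 0.15, a1 = 1.94 found above.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]

x_bar = sum(x) / len(x)   # X-bar = 2.5
y_bar = sum(y) / len(y)   # Y-bar = 5.0

a0, a1 = 0.15, 1.94
print(abs(y_bar - (a0 + a1 * x_bar)) < 1e-9)   # True: line passes through (X-bar, Y-bar)
```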
