
ECON 939 QUANTITATIVE ECONOMIC ANALYSIS

Covariance, Correlation, Sampling, Estimators, Unbiasedness, Efficiency, Consistency, Normal distribution, Chi-square distribution, F distribution, t distribution, Hypothesis testing

I. Definition of Econometrics
II. Methodology of Econometrics
III. Probability Concepts: random variables; probability distribution; mean of the distribution; variance of a random variable; mutually exclusive and exhaustive events; joint and marginal probability

Econometrics
Quantitative Economic Analysis also goes by the name of Econometrics. What is econometrics? The term literally means "economic measurement". There is, however, more to econometrics than just measurement.

Definition Of Econometrics

Econometrics may be defined as the social science in which the tools of economic theory, mathematics and statistical inference are applied to the analysis of economic phenomena.

Explication Of The Definition


Too often, emphasis is placed on the mathematics and statistics involved in econometrics.

Applications
Numerous other areas apart from economics now use the techniques of econometrics: politics, psychology, history, marketing and advertising, finance, among others. Hence terms such as financial econometrics. Whichever area econometrics is used in, the role of theory is crucial.

Theory: Hypothesis
Without theory there is no way of evaluating results. Theory enables the setting up of the hypothesis which is to be tested with the data. Unless the hypothesis is precisely specified, accepting or rejecting it will be ambiguous.

e.g.
1. Forecasting the correlation between stock indices of two countries
2. Testing the hypothesis that earnings have no effect on stock prices
3. Modeling the long-run relationship between prices and exchange rates

Role of theory
Theory precedes data and not the other way round.
e.g. For most goods, the relationship between consumption and disposable income is positive.

Reversing this order consistently is bad econometrics and goes by the name of data mining.

Methodology Of Econometrics
1. Statement of theory or hypothesis
2. Specification of the mathematical model of the theory
3. Specification of the econometric model of the theory
4. Obtaining the data
5. Estimation of the parameters of the model
6. Testing of hypotheses
7. Validation of the model by forecasting
8. Using the model for control or policy purposes

Before We do Econometrics
Before we actually plunge into doing econometrics, we need to take a tour of a vitally important branch of statistics known as probability theory. Quite often, the emphasis on probability meets with resistance and, possibly, understandably so. Probability theory tends to be quite technical. But we cannot and should not avoid studying it. Doing econometrics without knowing probability is a bit like trying to learn a language while avoiding learning the grammar of that language.

PROBABILITY CONCEPTS
An investigator conducts a variety of experiments, such as tossing a coin, rolling a pair of dice, or examining the impact of advertising on consumer purchase decisions. All such experiments result in outcomes: a coin may show heads or tails; the dice may show a combination of numbers from one to six; advertising may increase consumption; an increase in disposable income may increase consumption.

Random Variables
The outcome of an experiment results in variables which can take on a variety of values. Such variables are known as random variables.

A random variable is a variable whose value is unknown until it is observed; its value is the result of an experiment.

E.g.: temperature at a certain time of the day, income of a family, consumption of bread, etc.

Random Variables (contd.)


A random variable can be discrete or continuous.

Discrete Random Variable: A discrete random variable can take only a finite number of values, which can be counted using the positive integers.

E.g.: number of students in the classrooms of UOWD; number of cars in Dubai.

Random Variables (contd.)
Continuous Random Variable: A continuous random variable can take any real value (not just whole numbers) in at least one interval on the real line.

E.g.: heights or weights of individuals; gross national product (GNP); money supply; interest rates; price of vegetables; household income.

Probability Distribution
Associated with each random variable is a probability distribution, denoted by f(x), that determines the probability that the random variable (r.v.) X will take on a value x. This is written as:

f(x) = P(X = x)

Therefore, 0 ≤ f(x) ≤ 1.

If X takes on the n values x1, x2, . . . , xn, then f(x1) + f(x2) + . . . + f(xn) = 1.
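As an illustration (not from the original slides), a discrete probability function can be coded as a simple lookup table; the distribution below is hypothetical. A minimal Python sketch:

```python
# Hypothetical discrete distribution f(x) = P(X = x) as a lookup table.
f = {0: 0.25, 1: 0.50, 2: 0.25}

# The two properties above: 0 <= f(x) <= 1 and the probabilities sum to 1.
assert all(0 <= p <= 1 for p in f.values())
assert abs(sum(f.values()) - 1.0) < 1e-12

print(f[1])  # P(X = 1) = 0.5
```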

Example: Graphic Representation of Probability Distribution

Range      X (midpoint)   f(X)     No. of Students
0.0-0.5        0.25       0.000          0
0.5-1.0        0.75       0.002          2
1.0-1.5        1.25       0.010         10
1.5-2.0        1.75       0.049         49
2.0-2.5        2.25       0.244        244
2.5-3.0        2.75       0.342        342
3.0-3.5        3.25       0.255        255
3.5-4.0        3.75       0.098         98

[Histograms of the number of students and of f(X) plotted against X]

Mathematical Expectation, Mean and Variance


When working with random variables, it is convenient to summarize their probability characteristics using the concept of mathematical expectation.

Rules of Summation

Rule 1: Σ_{i=1}^{n} xi = x1 + x2 + . . . + xn

Rule 2: Σ_{i=1}^{n} a·xi = a·Σ_{i=1}^{n} xi, where a is a constant

Rule 3: Σ_{i=1}^{n} (xi + yi) = Σ_{i=1}^{n} xi + Σ_{i=1}^{n} yi

Note that summation is a linear operator, which means it operates term by term.

Rules of Summation (continued)

Rule 4: Σ_{i=1}^{n} (a·xi + b·yi) = a·Σ_{i=1}^{n} xi + b·Σ_{i=1}^{n} yi

Rule 5: x̄ = (Σ_{i=1}^{n} xi)/n = (x1 + x2 + . . . + xn)/n

The definition of x̄ as given in Rule 5 implies the following important fact:

Σ_{i=1}^{n} (xi - x̄) = 0

Rules of Summation (continued)

Rule 6: Σ_{i=1}^{n} f(xi) = f(x1) + f(x2) + . . . + f(xn)

Notation: Σ_x f(x) = Σ_i f(xi) = Σ_{i=1}^{n} f(xi)

Rule 7: Σ_{i=1}^{n} Σ_{j=1}^{m} f(xi, yj) = Σ_{i=1}^{n} [f(xi, y1) + f(xi, y2) + . . . + f(xi, ym)]

The order of summation does not matter: Σ_{i=1}^{n} Σ_{j=1}^{m} f(xi, yj) = Σ_{j=1}^{m} Σ_{i=1}^{n} f(xi, yj)
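A quick numerical check of Rules 2, 4 and 5 (illustrative only, with made-up data):

```python
x = [1.0, 3.0, 5.0]
y = [2.0, 4.0, 6.0]
a, b = 2.0, 3.0

# Rule 2: constants factor out of a sum.
assert abs(sum(a * xi for xi in x) - a * sum(x)) < 1e-12
# Rule 4: summation is linear.
lhs = sum(a * xi + b * yi for xi, yi in zip(x, y))
assert abs(lhs - (a * sum(x) + b * sum(y))) < 1e-12
# Rule 5 and its implication: deviations from the mean sum to zero.
xbar = sum(x) / len(x)
assert abs(sum(xi - xbar for xi in x)) < 1e-12
```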

Mean of a Distribution
An important characteristic of a random variable is its mathematical expectation or expected value or its mean. The expected value or mean of a random variable X is the average value of the random variable in an infinite number of repetitions of the experiment (repeated samples); it is denoted by E(X).

Mean of a Distribution (contd.)


The fact that the experiment is repeated an infinite number of times is important. If we denote heads of a coin as 0 and tails as 1, and the coin is tossed a very large number of times, the average value of X (which takes the values 0 or 1) will approach 0.5, the mathematical expectation. Quite obviously, tossing the coin once will never give a value of 0.5.
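A small simulation makes the point; a sketch using Python's random module (hypothetical, not from the slides):

```python
import random

# Heads = 0, tails = 1: the running average approaches E(X) = 0.5
# as the number of tosses grows (it is never 0.5 after one toss).
random.seed(1)
for n in (1, 10, 1_000, 100_000):
    avg = sum(random.randint(0, 1) for _ in range(n)) / n
    print(n, avg)
```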

Mean of a Distribution (contd.)


Definition:
For a discrete random variable X, the mean of the distribution (sometimes denoted by μ) is given by:

E[X] = Σ_{i=1}^{n} xi f(xi)
E[X] = x1 f(x1) + x2 f(x2) + . . . + xn f(xn)

where f(xi) = P(X = xi)

Mean of a Distribution (contd.)


The definition given above can be extended to any function of X, i.e. X² or X³ or, generally, g(X). Thus:

E(X) = Σ_{i=1}^{n} xi f(xi)
E(X²) = Σ_{i=1}^{n} xi² f(xi)
E(X³) = Σ_{i=1}^{n} xi³ f(xi)
E[g(X)] = Σ_{i=1}^{n} g(xi) f(xi)

Properties of Mathematical Expectation

If g(X) = g1(X) + g2(X), then:

E[g(X)] = Σ_{i=1}^{n} [g1(xi) + g2(xi)] f(xi)
E[g(X)] = Σ_{i=1}^{n} g1(xi) f(xi) + Σ_{i=1}^{n} g2(xi) f(xi)
E[g(X)] = E[g1(X)] + E[g2(X)]

Properties of Math. Expectation (contd.)


E(X + Y) = E(X) + E(Y)
E(X - Y) = E(X) - E(Y)
E(X + a) = E(X) + E(a) = E(X) + a
E(bX) = b·E(X)
E(a + bX) = a + b·E(X)

Proving by Excel (Empirical)

Show that E(X + 5X²) = E(X) + 5E(X²).

X    P(X)    X²     X + 5X²
1    0.4      1        6
3    0.5      9       48
5    0.1     25      130
E    2.4     7.4     39.4

E(X) + 5E(X²) = 2.4 + 5(7.4) = 39.4 = E(X + 5X²).
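The same check in Python (a sketch reproducing the spreadsheet above):

```python
xs = [1, 3, 5]
ps = [0.4, 0.5, 0.1]

E_X  = sum(x * p    for x, p in zip(xs, ps))            # 2.4
E_X2 = sum(x**2 * p for x, p in zip(xs, ps))            # 7.4
lhs  = sum((x + 5 * x**2) * p for x, p in zip(xs, ps))  # E(X + 5X^2)
print(lhs, E_X + 5 * E_X2)  # both 39.4 (up to float rounding)
```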

Variance of a Random Variable


Let μ = E(X) and g(X) = (X - μ)².
(X - μ) is a measure of how far X deviates from the mean μ. Squaring (X - μ) magnifies the deviations and treats positive and negative deviations on a par. The probability-weighted average of (X - μ)² is a measure of the dispersion of X around its mean value and is known as the variance of the distribution.

Variance: Var(X) = E(X²) - μ²

Definition: Var(X) = σ² = E[(X - μ)²] = Σ (xi - μ)² f(xi)

This can also be written as:
Var(X) = E[(X - E(X))²]
= E[X² - 2X·E(X) + (E(X))²]
= E(X²) - 2E(X)·E(X) + (E(X))²
= E(X²) - 2(E(X))² + (E(X))²
= E(X²) - (E(X))²
= E(X²) - μ², since E(X) = μ

Empirical proof

X    P(X)    X²     X + 5X²    (X - μ)²
1    0.4      1        6         1.96
3    0.5      9       48         0.36
5    0.1     25      130         6.76
E    2.4     7.4     39.4

μ = E(X) = 2.4
Var(X) = E[(X - μ)²] = 0.4(1.96) + 0.5(0.36) + 0.1(6.76) = 1.64
E(X²) - μ² = 7.4 - (2.4)² = 1.64

Both methods give var(X) = 1.64.
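The same variance computed both ways in Python (sketch):

```python
xs = [1, 3, 5]
ps = [0.4, 0.5, 0.1]

mu   = sum(x * p for x, p in zip(xs, ps))              # 2.4
var1 = sum((x - mu)**2 * p for x, p in zip(xs, ps))    # E[(X - mu)^2]
var2 = sum(x**2 * p for x, p in zip(xs, ps)) - mu**2   # E(X^2) - mu^2
print(var1, var2)  # both 1.64 (up to float rounding)
```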

Variance (contd.)
Some properties of the variance are:
If c is a constant, Var(c) = 0.
If a and b are constants, Var(a + bX) = E[(a + bX) - E(a + bX)]² = b²·Var(X).

Standard deviation
The square root of the variance of a random variable is called the standard deviation and is denoted by σ.

Coefficient of variation is defined as the ratio of the standard deviation to the mean: σ/μ.

Definitions
Two or more events are called mutually exclusive if at most one of them occurs when the experiment is performed, that is, if no two of them have outcomes in common: A ∩ B = ∅ (the empty set).
Two or more events are called collectively exhaustive if at least one of them occurs when the experiment is performed, that is, if their union makes up the sample space: A ∪ B = S. For two mutually exclusive and exhaustive events, P(A) + P(B) = 1.

Joint Probabilities
To answer probability questions involving two or more random variables, we need to know their joint probability distribution.

If X and Y are discrete random variables (r.v.), the probability that X = x and Y = y is given by the joint probability function for X and Y, denoted by fXY(x, y). Thus:

fXY(x, y) = P(X = x, Y = y)

Joint Probability: Example

Counts:
Education (E)    Males (G=0)    Females (G=1)    Row Total
Arts (=0)             200            270             470
Science (=1)          300            100             400
Other (=2)             60             70             130
Column Total          560            440            1000

The joint probability function f(g, e) of G and E is given by:

Education (E)     g = 0    g = 1    h(e)
e = 0              0.20     0.27    0.47
e = 1              0.30     0.10    0.40
e = 2              0.06     0.07    0.13
f(g)               0.56     0.44    1.00

The probability of a male arts student is f(0, 0) = 0.20.
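A sketch of the same computation in Python: build the joint distribution from the counts, then read off joint and marginal probabilities.

```python
# (gender, education) -> count; codes as in the table above.
counts = {(0, 0): 200, (0, 1): 300, (0, 2): 60,
          (1, 0): 270, (1, 1): 100, (1, 2): 70}
n = sum(counts.values())                     # 1000 students

f = {ge: c / n for ge, c in counts.items()}  # joint f(g, e)
f_g = {g: sum(p for (gg, _), p in f.items() if gg == g) for g in (0, 1)}
h_e = {e: sum(p for (_, ee), p in f.items() if ee == e) for e in (0, 1, 2)}

print(f[(0, 0)], f_g[0], h_e[1])   # 0.20, 0.56, 0.40
```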

Marginal Probability
In the above example, the probability of drawing a male student is 0.56; the probability of a female student is 0.44. These values appear at the bottom of the second table and are denoted f(g), the probability function of G (= Gender) alone. Likewise, we can obtain the values h(e), the probability function of E (= Education) alone. f(g) and h(e) are called marginal probabilities or marginal distributions of G and E respectively.

Conditional Probability
We are sometimes interested in knowing the probability of a r.v. (say, Y) taking a value, given that another r.v. (say, X) has already taken a value. Thus, in the above example, we may wish to know the probability that a student is studying science, given that the student is male. Such a probability is known as a conditional probability.

Conditional Probability (contd.)

Definition:

P(A | B) = P(A and B) / P(B)

e.g. In the example above, P(Science | Male) = f(0, 1)/f(0) = 0.30/0.56 ≈ 0.54.

Covariance and Correlation


One is often interested in exploring the relationship between two random variables. Covariance and correlation are two ways to measure the closeness of two r.v.'s.

We write the covariance as:
σXY = cov(X, Y) = E[(X - EX)(Y - EY)] = E[(X - μX)(Y - μY)]

Covariance (contd.)
cov(X, Y) = E[(X - EX)(Y - EY)]
= E[XY - X·EY - Y·EX + EX·EY]
= E(XY) - EX·EY - EY·EX + EX·EY
= E(XY) - 2·EX·EY + EX·EY
= E(XY) - E(X)·E(Y)

Note that variance is a special case of covariance:
cov(X, X) = var(X) = E[(X - EX)²]

Excel: cov(X, Y) = E(XY) - E(X)E(Y)

x     y    xy
1     6     6
2     7    14
3     2     6
4     4    16
5     9    45
8     3    24
12    4    48

E(X) = 5, E(Y) = 5, E(XY) = 22.714286
cov(X, Y) = E(XY) - E(X)E(Y) = 22.714286 - 25 = -2.285714
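The same computation as a Python sketch:

```python
x = [1, 2, 3, 4, 5, 8, 12]
y = [6, 7, 2, 4, 9, 3, 4]
n = len(x)

E_x  = sum(x) / n                              # 5.0
E_y  = sum(y) / n                              # 5.0
E_xy = sum(a * b for a, b in zip(x, y)) / n    # 22.714...
print(E_xy - E_x * E_y)                        # -2.2857..., as above
```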

Excel: cov(X, Y) = E(XY) - E(X)E(Y) with a joint distribution

Joint probabilities P(X = x, Y = y):

         Y = 2    Y = 4    P(X)
X = 1     0.25     0.15    0.4
X = 3     0.30     0.20    0.5
X = 5     0.05     0.05    0.1
P(Y)      0.60     0.40    1.0

E(XY) = ΣΣ x·y·P(x, y) = 6.8
E(X) = 2.4, E(Y) = 2.8
cov(X, Y) = 6.8 - (2.4)(2.8) = 0.08

Independence versus covariance (skip)

         X = 1    X = 2    X = 3    P(Y)
Y = 6     0.2      0        0.2     0.4
Y = 8     0        0.2      0       0.2
Y = 10    0.2      0        0.2     0.4
P(X)      0.4      0.2      0.4

Here E(XY) = 16, E(X) = 2 and E(Y) = 8, so cov(X, Y) = 16 - (2)(8) = 0, even though X and Y are clearly not independent.

Independent Random Variables

If X and Y are independent:
E(XY) = E(X)·E(Y), so cov(X, Y) = 0 (easy to prove).
var(X + Y) = var(X) + var(Y) + 2·cov(X, Y) = var(X) + var(Y).

Covariance = 0 does not, however, mean that the two random variables are independent (see the example above).

Covariance (contd.)
The sign of the covariance between two r.v.'s indicates whether their association is positive or negative.

In quadrant (1), both X and Y are greater than E(X) and E(Y). Hence, the product (X - EX)(Y - EY) will be positive. In quadrant (3), both are less than their means and hence the product is again positive. In quadrants (2) and (4) the product will be negative. Since there are more points in (1) and (3) than in (2) and (4), the covariance will tend to be positive.

[Scatter plot divided into four quadrants by the lines X = E(X) and Y = E(Y)]

Correlation (contd.)
Even though covariance indicates whether the association between two r.v.'s is positive or negative, the value of the covariance does not indicate the strength of their association. This is because the size of the covariance depends on the units in which the variables are measured.

[Two scatter plots with identical patterns drawn on different scales]

The two graphs above are identical and yet their covariances are different. Covariance of the graph on the left = 288.8; covariance of the graph on the right = 72.2. Why is this so? The reason is that the values of X and Y in the left graph are double those in the right graph. Thus, the value of the covariance depends on the units of measurement of X and Y.

Correlation (contd.)
To overcome the problem of units of measurement in the computation of the covariance, a normalised covariance measure is used. This normalised measure does not change with a change in the units of measurement.

Definition:

ρXY = σXY/(σX σY) = cov(X, Y)/[Var(X)Var(Y)]^(1/2)

If X and Y are positively related, the correlation coefficient (ρ) is positive; if they are negatively related, ρ is negative. If ρ = 0, X and Y are uncorrelated. ρ is a number that lies between -1 and +1; ρ = ±1 when X and Y are perfectly linearly correlated. For the two graphs given above, ρ = 0.8854.
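A sketch showing that rescaling the data changes the covariance but not ρ (data as in the earlier covariance example):

```python
def cov(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / n

def corr(x, y):
    return cov(x, y) / (cov(x, x) * cov(y, y)) ** 0.5

x = [1, 2, 3, 4, 5, 8, 12]
y = [6, 7, 2, 4, 9, 3, 4]
x2, y2 = [2 * v for v in x], [2 * v for v in y]

print(cov(x, y), cov(x2, y2))    # covariance quadruples when both double
print(corr(x, y), corr(x2, y2))  # correlation is unchanged
```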

Random Sampling
A statistical investigation arises out of the need to solve a problem, e.g. what is the effect of price reduction on sales of Coke? In formulating the answer to the question, it is important to identify the population, which is the totality of elements about which some information is desired.

Random Sampling (contd.)


In the Coke example, we might wish to know the effect of the price reduction in all of Dubai. An analyst trying to answer a research question clearly cannot afford to study each and every element in the population. The analyst would draw a sample of the elements, make observations on them, and use these observations to draw conclusions about the characteristics of the population that the sample represents.

Random Sampling (contd.)


The process of drawing a sample is called sampling. A variety of sampling techniques are available; we focus on simple random sampling (S.R.S.).

Definition: A simple random sample of n elements is a sample that has the property that every combination of n elements has an equal chance of being the selected sample.

A random sample of observations on a r.v. X is a set of independent, identically distributed (i.i.d.) random variables X1, X2, . . . , Xn, each of which has the same probability distribution as that of X.

Sampling Distribution
A function of the observed values of the r.v.'s that does not contain any unknown parameters is called a sample statistic. The two most frequently used sample statistics are:

Sample Mean: x̄ = (1/n) Σ_{i=1}^{n} xi
Sample Variance: s² = (1/(n - 1)) Σ_{i=1}^{n} (xi - x̄)²

Sampling Distribution (contd.)


The square root of s² is called the sample standard deviation. The distinction between sample statistics and population parameters must be clearly understood. Suppose that a r.v. X has mean μ and variance σ². These are population parameters that are fixed and not random.

Sampling Distribution (contd.)


The sample mean and sample variance are, however, r.v.'s since they will be different for different samples. By repeatedly drawing samples we can obtain a large number of values of the sample mean. We can then compute the fraction of times the sample mean lies in a specified interval. This gives the probability that the sample mean will lie in that interval.

Sampling Distribution (contd.)


By varying the interval, we can obtain a whole range of probabilities, thus generating a probability distribution. This will be the distribution of the sample mean. Similarly, we can get the distribution of the sample variance.
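A simulation sketch of this idea (hypothetical population): draw many samples, compute each sample mean, and estimate the probability that it falls in a given interval.

```python
import random

random.seed(0)
n, reps = 25, 5_000
# Hypothetical population: uniform on [0, 10], so the population mean is 5.
means = [sum(random.uniform(0, 10) for _ in range(n)) / n
         for _ in range(reps)]

# Fraction of sample means falling in a chosen interval:
print(sum(4.5 < m < 5.5 for m in means) / reps)
```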

Estimation of Parameters
In an empirical investigation, the analyst very often knows the general form of the probability distribution of the r.v.'s. However, the specific values of the population parameters, say the mean (μ) and variance (σ²), are not known. Since a complete census of the population is out of the question, the analyst uses a sample to draw inferences about the population parameters, i.e. the underlying probability distribution.

Estimation of Parameters (contd.)


Based on the sample mean and the sample variance the analyst hopes to draw inferences about the population mean and population variance. The term estimator is used to refer to the formula that gives us a numerical value of the parameter of interest. The numerical value itself is called an estimate.

Properties of Estimators
We often need to know which is a good estimator of a population parameter. Is the sample mean a good estimator of the population parameter? Or should we add the largest and smallest values in a sample and divide the sum by 2? Unless we know the properties that good estimators should have, we will not know how to choose.

Unbiasedness
Suppose an unknown parameter is θ and its estimator is θ̂. θ̂ is a function of the observations x1, x2, . . . , xn and does not depend on any unknown parameters. Since the x's are random, θ̂ is also random. Since θ̂ is a r.v., it has a probability distribution with a certain mean, E(θ̂). Definition: An estimator θ̂ is said to be an unbiased estimator of θ if E(θ̂) = θ. If this equality does not hold, the estimator is said to be biased, and the bias is E(θ̂) - θ.

Unbiasedness (contd.)
Obviously, a particular value of θ̂ from a given trial/sample would not equal θ. But if we draw a large number of samples and compute θ̂ for each, the average of these values must equal θ if the estimator is to be unbiased. Unbiasedness is, however, not enough. If we draw only one sample, we cannot be sure whether the value of θ̂ is close to θ or far from it.
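A simulation sketch: across many samples the sample mean averages out to μ (unbiased), while a variance formula that divides by n rather than n - 1 is biased downward. Assumptions: N(0, 1) population, n = 5.

```python
import random

random.seed(0)
n, reps = 5, 20_000
means, vars_n = [], []
for _ in range(reps):
    x = [random.gauss(0, 1) for _ in range(n)]
    xbar = sum(x) / n
    means.append(xbar)
    vars_n.append(sum((v - xbar) ** 2 for v in x) / n)  # divides by n

print(sum(means) / reps)    # close to the true mean 0 (unbiased)
print(sum(vars_n) / reps)   # close to 0.8 = (n-1)/n, not 1 (biased)
```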

Efficiency
Since it is possible to consider an infinite number of unbiased estimators, unbiasedness is not enough. We know that the variance of a r.v. measures its dispersion around the mean. A smaller variance implies that the values are clustered closer to the mean as compared with a distribution which has a larger variance. Thus, given two unbiased estimators, we would choose the one with the lower variance since on average it is closer to the true value θ.

Efficiency (contd.)

Definition:
Let θ̂1 and θ̂2 be two unbiased estimators of the parameter θ.
a) If Var(θ̂1) < Var(θ̂2), then θ̂1 is said to be more efficient than θ̂2.
b) The ratio Var(θ̂1)/Var(θ̂2) is called the relative efficiency.
c) Among all the unbiased estimators, the one with the smallest variance is called the minimum variance unbiased estimator.

Mean Squared Error


If a biased estimator has a lower variance than an unbiased one, we may wish to tolerate some bias. A measure that permits trade-off between bias and variance is mean squared error (MSE).

Mean Squared Error (contd.)


Definition:
a) The MSE of an estimator θ̂ is MSE(θ̂) = E[(θ̂ - θ)²], the expected value of the squared deviation of θ̂ from θ.
b) If θ̂1 and θ̂2 are two alternative estimators of θ and MSE(θ̂1) < MSE(θ̂2), then θ̂1 is said to be mean squared efficient as compared with θ̂2. If both are unbiased, θ̂1 is more efficient.
c) Among all possible estimators of θ, the one with the smallest MSE is called the minimum MSE estimator.

Mean Squared Error (contd.)


It can be shown that the MSE equals the sum of the variance and the square of the bias, b(θ) = E(θ̂) - θ. It may be noted that b(θ) does not depend on the x's and is fixed or nonrandom.

MSE = E[(θ̂ - θ)²]
= E[(θ̂ - E(θ̂)) + (E(θ̂) - θ)]²
= E[(θ̂ - E(θ̂)) + b(θ)]²
= E[(θ̂ - E(θ̂))²] + [b(θ)]² + 2·b(θ)·E[θ̂ - E(θ̂)]

Now, E[(θ̂ - E(θ̂))²] = var(θ̂) and, since E(θ̂) is nonrandom, E[θ̂ - E(θ̂)] = E(θ̂) - E(θ̂) = 0.
Hence, MSE = var(θ̂) + [b(θ)]².
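A numerical check of MSE = var + bias² for the biased (divide-by-n) variance estimator from the earlier sketch:

```python
import random

random.seed(1)
theta, n, reps = 1.0, 5, 50_000   # true sigma^2 = 1 for N(0, 1)
est = []
for _ in range(reps):
    x = [random.gauss(0, 1) for _ in range(n)]
    xbar = sum(x) / n
    est.append(sum((v - xbar) ** 2 for v in x) / n)

mean_est = sum(est) / reps
mse  = sum((e - theta) ** 2 for e in est) / reps
var  = sum((e - mean_est) ** 2 for e in est) / reps
bias = mean_est - theta
print(mse, var + bias ** 2)   # the two numbers agree
```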

Consistency

Sometimes an estimator may not possess the desirable properties in small samples, but when the sample size is large, many of the desirable properties may hold. In such cases we let the sample size n increase indefinitely and denote the associated estimator by θ̂n. The most frequently used large-sample property is consistency.

Consistency (contd.)
Consistency means that as n increases, the estimator approaches the true θ. Definition: An estimator θ̂n is said to be a consistent estimator of θ if lim_{n→∞} P(θ - ε < θ̂n < θ + ε) = 1, for all ε > 0. This property is expressed as plim(θ̂n) = θ. Note that the above definition is valid for any ε, however small.
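A simulation sketch of consistency for the sample mean (N(0, 1) population, ε = 0.1): the probability of being within ε of μ rises toward 1 as n grows.

```python
import random

random.seed(0)
mu, eps, reps = 0.0, 0.1, 2_000
for n in (10, 100, 1_000):
    hits = sum(abs(sum(random.gauss(mu, 1) for _ in range(n)) / n - mu) < eps
               for _ in range(reps))
    print(n, hits / reps)   # approaches 1
```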

Normal Distribution
We have been discussing r.v.'s and their probability distributions. In many contexts we need to deal with specific distributions. The most important is the normal distribution. If X is a normally distributed r.v. with mean μ and variance σ², written as X ~ N(μ, σ²), then its probability density function is given by:

f(x) = (1/√(2πσ²)) exp[-(x - μ)²/(2σ²)],   -∞ < x < ∞

where exp[a] denotes the exponential function e^a.

Normal Distribution (contd.)


[Bell-shaped normal density curve f(x)]

The normal distribution is symmetric around μ and has a bell shape. The area under the normal curve between μ - σ and μ + σ is 68.26%; between μ - 2σ and μ + 2σ, it is 95.44%; between μ - 3σ and μ + 3σ, it is 99.73%.

Normal Distribution (contd.)


If X has a normal distribution with mean μ and standard deviation σ, the standardised normal variable Z = (X - μ)/σ has the standard normal distribution N(0, 1). The probability density function of the standard normal distribution is given by:

f(z) = (1/√(2π)) exp[-z²/2],   -∞ < z < ∞

Property
A normal random variable has two parameters (μ, σ²).
If X1 and X2 are normal random variables, then Y = aX1 + bX2 is also a normal random variable:
Y ~ N(aμ1 + bμ2, a²·var(X1) + b²·var(X2) + 2ab·cov(X1, X2))

Generating normal random variables with Excel:

=NORMINV(RAND(),mean,std)

Excel
Generating a normally distributed vector with =NORMINV(RAND(), 20, 5):

Mean = 20, std = 5
Experimental mean = 19.79, experimental std = 5.039

Bin    Frequency    Cumulative %
0          0            0.00%
5          1            0.14%
10        22            2.50%
15       104           16.41%
20       254           49.51%
25       229           86.23%
30        98           98.47%

[Column of generated values and histogram of the bins, showing the bell shape]
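The Python counterpart of the Excel recipe (a sketch; random.gauss plays the role of NORMINV(RAND(), mean, std)):

```python
import random

random.seed(0)
xs = [random.gauss(20, 5) for _ in range(1_000)]

mean = sum(xs) / len(xs)
std  = (sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
print(mean, std)   # close to 20 and 5, like the experimental values above
```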

Chi-Square Distribution
The distribution of the sum of squares of n independent standard normal r.v.'s is called the chi-square (χ²) distribution with n degrees of freedom and is written as χ²_n.

Consider n r.v.'s Z1, Z2, . . . , Zn, all of which are independent standard normal, N(0, 1). Define a new r.v. U such that:

U = Z1² + Z2² + . . . + Zn² = Σ Zi²

The distribution of U is χ²_n.

Chi-Square Distribution (contd.)


Since U is non-negative, the chi-square distribution is defined only over 0 ≤ U < ∞. The density function of χ² depends on only one parameter, called the degrees of freedom (d.f.). The mean of χ²_n can be shown to be n, which is its d.f. If U ~ χ²_m and V ~ χ²_n, with U and V independent, then U + V ~ χ²_{m+n}.
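A simulation sketch of the definition: sums of n squared independent N(0, 1) draws have mean n.

```python
import random

random.seed(0)
n, reps = 4, 20_000
u = [sum(random.gauss(0, 1) ** 2 for _ in range(n)) for _ in range(reps)]
print(sum(u) / reps)   # close to n = 4, the degrees of freedom
```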

Student's t-Distribution
Suppose Z ~ N(0, 1) and U ~ χ²_n, with Z and U independent. Define the r.v. t = Z/√(U/n) = Z√n/√U. The distribution of t is the t-distribution with n d.f. The t-distribution is symmetric around the origin and has a shape similar to the normal distribution. For large n, the t-distribution is approximately N(0, 1).

F-Distribution
The F-distribution is the ratio of two independent chi-squares, each divided by its d.f. If U ~ χ²_m and V ~ χ²_n are independent of each other, then F = (U/m)/(V/n) has the F-distribution with m and n d.f., written F ~ F_{m,n}. The F-distribution has a shape similar to that of the chi-square. If the r.v. t has the t-distribution with n d.f., then t² has the F-distribution with 1 and n d.f.

Testing of Hypothesis
A null hypothesis (H0) is a statement of the status quo, one of no difference or no effect. If the null hypothesis is not rejected, no changes will be made. An alternative hypothesis (H1) is one in which some difference or effect is expected. Accepting the alternative hypothesis will lead to changes in opinions or actions.

Testing of Hypothesis (contd.)


The decision rule that selects one of the inferences, "reject H0" or "do not reject H0", for every outcome of an experiment is based on a test statistic. The test statistic measures how close the sample has come to the null hypothesis and often follows a well-known distribution, such as the normal, t, F, or χ² distribution. The range of values of the test statistic for which the procedure recommends rejecting H0 is called the critical region; the range of values for which it recommends not rejecting H0 is called the nonrejection region.

Testing of Hypothesis (contd.)


Type I error : reject H0 when it is in fact true.
The probability of type I error (P(I)) is also called the level of significance.

Type II error: do not reject H0 when it is in fact false. The probability of a type II error is denoted by P(II). Unlike P(I), which is specified by the researcher, the magnitude of P(II) depends on the actual value of the population parameter.

Testing of Hypothesis (contd.)


Ideally, we would like to keep the probabilities of both Type I and Type II errors low. However, this is not possible: any attempt to reduce P(I) automatically increases P(II), and vice versa. The largest P(I) is called the level of significance, or the size of the test. The probability of rejecting a null hypothesis when it is false is given by 1 - P(II) and is called the power of the test.
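A minimal sketch tying these ideas together: a two-sided z-test of H0: μ = 20 with known σ = 5 at a 5% level of significance. The sample values are hypothetical.

```python
import math

x = [22.1, 19.4, 25.0, 21.3, 18.8, 23.5, 20.9, 24.2]  # hypothetical sample
mu0, sigma = 20.0, 5.0

z = (sum(x) / len(x) - mu0) / (sigma / math.sqrt(len(x)))
critical = 1.96   # two-sided N(0,1) critical value for P(I) = 0.05
print(z, "reject H0" if abs(z) > critical else "do not reject H0")
```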

Example: Titanic Passengers

           Men     . . .
Survived    332    . . .
Died       1360    . . .
Total      1692    . . .

Find the probability of randomly selecting a man and a boy.
Find the probability of randomly selecting a man and someone who survived.

Joint probability refers to the probability of an occurrence involving 2 or more events.

Example: Titanic Passengers

           Men     . . .
Survived    332    . . .
Died       1360    . . .
Total      1692    . . .

Find the probability of randomly selecting a man.
Find the probability of randomly selecting someone who survived.

Marginal probability: the sum of a set of joint probabilities.
P(A) = P(A and B1) + P(A and B2) + . . . + P(A and Bk)
The Bi must be mutually exclusive and exhaustive.

Example: Titanic Passengers

           Men     . . .
Survived    332    . . .
Died       1360    . . .
Total      1692    . . .

Find the probability of randomly selecting a man or a boy.
Find the probability of randomly selecting a man or someone who survived.
