
1) DEGREES OF FREEDOM

The number of degrees of freedom generally refers to the number of independent observations in a sample minus the number of population parameters that must be estimated from the sample data. To find the correct critical value of the t-statistic from a t distribution at a selected level of significance, you must use the proper degrees of freedom for the estimate. It is also important to consider the number of degrees of freedom on which a standard error estimate is based. The reliability of the estimated standard error, as measured by its relative standard error (i.e., (standard error of the standard error of the estimate / standard error of the estimate) * 100), is inversely proportional to its degrees of freedom: as the number of degrees of freedom increases, the relative standard error decreases and the reliability of the estimate increases. A common benchmark for an acceptably reliable standard error corresponds to at least 12 degrees of freedom. The exact shape of a t distribution is likewise determined by its degrees of freedom. For example, when the t distribution is used to compute a confidence interval for a mean score, one population parameter (the mean) is estimated from the sample data, so the number of degrees of freedom is equal to the sample size minus one.
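As a quick illustration of the confidence-interval case, the sketch below (assuming Python with scipy available; the sample size of 20 is arbitrary) computes the degrees of freedom and looks up the corresponding t critical value:

```python
# Sketch: degrees of freedom for a one-sample confidence interval on a mean.
# Assumes scipy is installed; the sample size is made up for illustration.
from scipy import stats

n = 20                 # number of independent observations in the sample
df = n - 1             # one parameter (the mean) is estimated, so df = n - 1

# Two-sided 95% critical value from the t distribution with df degrees of freedom
alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df)
print(f"df = {df}, t critical value = {t_crit:.3f}")   # about 2.093 for df = 19
```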

2) SAMPLING ERROR OR STANDARD ERROR


Sampling error gives us some idea of the precision of our statistical estimate. A low sampling error means there is relatively little variability, or range, in the sampling distribution. The standard error is a measure of the variability of a statistic: it is an estimate of the standard deviation of a sampling distribution. The standard error depends on three factors:

N: The number of observations in the population.
n: The number of observations in the sample.
The way that the random sample is chosen.

How do we calculate sampling error? We base our calculation on the standard deviation of our sample: the greater the sample standard deviation, the greater the standard error (and the sampling error). The standard error is also related to the sample size: the greater your sample size, the smaller the standard error. Why? Because the greater the sample size, the closer your sample is to the actual population itself. If you take a sample that consists of the entire population, you actually have no sampling error, because you don't have a sample; you have the entire population. In that case, the mean you estimate is the parameter.
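To make the relationship between sample size and standard error concrete, here is a minimal sketch (the sample standard deviation of 0.25 and the sample sizes are assumed purely for illustration):

```python
# Sketch: the standard error falls as the sample size grows (illustrative values).
import math

s = 0.25                          # sample standard deviation (assumed)
for n in (25, 100, 400):          # arbitrary sample sizes for illustration
    se = s / math.sqrt(n)         # standard error of the mean: s / sqrt(n)
    print(f"n = {n:4d}  standard error = {se:.4f}")
# Quadrupling n halves the standard error, since SE is proportional to 1/sqrt(n).
```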

The 68, 95, 99 Percent Rule

For instance, in the figure, the mean of the distribution is 3.75 and the standard unit is 0.25. (If this were a distribution of raw data, we would be talking in standard deviation units; if it's a sampling distribution, we would be talking in standard error units.)

If we go up and down one standard unit from the mean, we would be going up and down .25 from the mean of 3.75. Within this range from 3.5 to 4.0, we would expect to see approximately 68% of the cases. This section is marked in red on the figure.
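As a rough check of the 68, 95, 99 percent rule, the following sketch (assuming Python with scipy installed, and reusing the figure's mean of 3.75 and standard unit of 0.25) computes the probability a normal distribution assigns to each range:

```python
# Sketch: checking the 68-95-99 percent rule against the normal distribution.
# The mean and standard unit are taken from the figure, purely for illustration.
from scipy.stats import norm

mean, unit = 3.75, 0.25
for k in (1, 2, 3):
    lo, hi = mean - k * unit, mean + k * unit
    prob = norm.cdf(hi, loc=mean, scale=unit) - norm.cdf(lo, loc=mean, scale=unit)
    print(f"within {k} unit(s): {lo:.2f} to {hi:.2f}  ~{prob:.1%}")
# Prints roughly 68.3%, 95.4%, and 99.7%.
```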

Example: Let's assume we did a study and drew a single sample from the population. Furthermore, let's assume that the average for the sample was 3.75 and the standard deviation was .25. This is the raw data distribution depicted above. Now, what would the sampling distribution be in this case? Well, we don't actually construct it (because we would need to take an infinite number of samples), but we can estimate it. For starters, we assume that the mean of the sampling distribution is the mean of the sample, which is 3.75. Then, we calculate the standard error. To do this, we use the standard deviation for our sample and the sample size (in this case n = 100), which gives a standard error of .25 / sqrt(100) = .025. Now we have everything we need to estimate a confidence interval for the population parameter. We would estimate that the probability is 68% that the true parameter value falls between 3.725 and 3.775 (i.e., 3.75 plus and minus .025); that the 95% confidence interval is 3.700 to 3.800; and that we can say with 99% confidence that the population value is between 3.675 and 3.825. The real value (in this fictitious example) was 3.72, and so we have correctly estimated that value with our sample.
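The same arithmetic as the worked example above, written out as a short sketch (values taken from the example; nothing is assumed beyond a Python interpreter):

```python
# Sketch reproducing the worked example (sample mean 3.75, s = 0.25, n = 100).
import math

mean, s, n = 3.75, 0.25, 100
se = s / math.sqrt(n)                      # 0.25 / 10 = 0.025
for k, label in ((1, "68%"), (2, "95%"), (3, "99%")):
    lo, hi = mean - k * se, mean + k * se
    print(f"{label} interval: {lo:.3f} to {hi:.3f}")
# 68%: 3.725 to 3.775, 95%: 3.700 to 3.800, 99%: 3.675 to 3.825
```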

standard error: SE = s / sqrt( n )

s is the sample standard deviation (i.e., the sample-based estimate of the standard deviation of the population), and n is the size (number of observations) of the sample.

3) STANDARD DEVIATION
The standard deviation is a numerical value used to indicate how widely individuals in a group vary. If individual observations vary greatly from the group mean, the standard deviation is big, and vice versa.

It is important to distinguish between the standard deviation of a population and the standard deviation of a sample. They have different notation, and they are computed differently. The standard deviation of a population is denoted by σ, and the standard deviation of a sample by s.

The standard deviation of a population is defined by the following formula:

σ = sqrt [ Σ ( Xi - X )² / N ]

where σ is the population standard deviation, X is the population mean, Xi is the ith element from the population, and N is the number of elements in the population.

The standard deviation of a sample is defined by a slightly different formula:

s = sqrt [ Σ ( xi - x )² / ( n - 1 ) ]

where s is the sample standard deviation, x is the sample mean, xi is the ith element from the sample, and n is the number of elements in the sample. With n - 1 in the denominator, the sample variance s² is an unbiased estimate of the population variance (dividing by n instead would tend to understate it).
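The two formulas differ only in the denominator (N versus n - 1), which Python's statistics module exposes directly as pstdev and stdev; the data values below are made up for illustration:

```python
# Sketch: population (N in the denominator) vs. sample (n - 1) standard deviation.
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]   # illustrative values only

pop_sd = statistics.pstdev(data)    # divides the squared deviations by N
samp_sd = statistics.stdev(data)    # divides by n - 1 (the sample formula above)

print(f"population standard deviation: {pop_sd:.3f}")   # 2.000 for this data
print(f"sample standard deviation:     {samp_sd:.3f}")  # about 2.138
```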

4) DIFFERENCES BETWEEN STANDARD DEVIATION AND STANDARD ERROR
Standard Deviation of Sample Estimates

The variability of a statistic is measured by its standard deviation. The table below shows formulas for computing the standard deviation of statistics from simple random samples. These formulas are valid when the population size is much larger (at least 10 times larger) than the sample size.

Statistic               Standard Deviation
Sample mean, x          σx = σ / sqrt( n )
Sample proportion, p    σp = sqrt [ P(1 - P) / n ]

Standard Error of Sample Estimates

The standard error is computed from known sample statistics, and it provides an estimate of the standard deviation of the statistic (that is, of its sampling distribution). The table below shows how to compute the standard error for simple random samples, assuming the population size is at least 10 times larger than the sample size.

Statistic               Standard Error
Sample mean, x          SEx = s / sqrt( n )
Sample proportion, p    SEp = sqrt [ p(1 - p) / n ]

The equations for the standard error are identical to the equations for the standard deviation, except for one thing: the standard error equations use statistics where the standard deviation equations use parameters. Specifically, the standard error equations use p in place of P, and s in place of σ.
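To make the parameter-versus-statistic distinction concrete, here is a small sketch for the sample proportion; the sample size n and the values of P and p are assumed purely for illustration:

```python
# Sketch: the standard-deviation formula uses the parameter P, while the
# standard-error formula substitutes the sample statistic p (values assumed).
import math

n = 500            # sample size (illustrative)
P = 0.40           # true population proportion (known only in this toy example)
p = 0.43           # observed sample proportion

sd = math.sqrt(P * (1 - P) / n)   # standard deviation of the sample proportion
se = math.sqrt(p * (1 - p) / n)   # standard error, computed from the sample

print(f"standard deviation (uses P): {sd:.4f}")
print(f"standard error     (uses p): {se:.4f}")
```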
