
COMPANY PROFILE

Southern Batteries Pvt. Ltd was established in the year 1980 by Late Sri S R Pillai, a visionary who was able to foresee the surging secondary storage power requirement in the coming decades. Southern Batteries is engaged in the manufacture of Lead Acid Tubular Batteries, Valve Regulated Lead Acid (VRLA) Batteries, Flat Plate Batteries, Traction Batteries and Automotive Batteries under the brand name "Hi-Power". The company manufactures a wide range of batteries from 20 - 2000 Ah in 2V cells and 20 - 240 Ah in 12V Monoblock PP/HR, 10V, 8V and 6V ranges. Keeping pace with the market requirement and potential, the company set up another state-of-the-art manufacturing facility in Jigani, Bangalore, and further expansion plans are in place. The Company has established a prominent share in the Solar, Railways, Telecommunication, UPS & Inverters, Power plant and other industrial applications over the last 30 years. The Company has ventured into the retail market segment and has a nationwide presence through a network of branch offices and channel partners. The Company focuses on being a responsible corporate citizen through active participation in solar power projects and by working with electric vehicle manufacturers. Commitment to quality, consistency, innovation and service is taking the Company through steady growth in the competitive market. The Company is ISO certified.

Sampling distribution
From Wikipedia, the free encyclopedia

In statistics, a sampling distribution or finite-sample distribution is the probability distribution of a given statistic based on a random sample. Sampling distributions are important in statistics because they provide a major simplification on the route to statistical inference. More specifically, they allow analytical considerations to be based on the sampling distribution of a statistic, rather than on the joint probability distribution of all the individual sample values.

Contents

1 Introduction
2 Standard error
3 Examples
4 Statistical inference
5 References
6 External links

Introduction
The sampling distribution of a statistic is the distribution of that statistic, considered as a random variable, when derived from a random sample of size n. It may be considered as the distribution of the statistic for all possible samples from the same population of a given size. The sampling distribution depends on the underlying distribution of the population, the statistic being considered, the sampling procedure employed and the sample size used. There is often considerable interest in whether the sampling distribution can be approximated by an asymptotic distribution, which corresponds to the limiting case as n → ∞.

For example, consider a normal population with mean μ and variance σ². Assume we repeatedly take samples of a given size from this population and calculate the arithmetic mean for each sample; this statistic is called the sample mean. Each sample has its own average value, and the distribution of these averages is called the "sampling distribution of the sample mean". This distribution is normal since the underlying population is normal, although sampling distributions may also often be close to normal even when the population distribution is not (see central limit theorem). An alternative to the sample mean is the sample median. When calculated from the same population, it has a different sampling distribution to that of the mean and is generally not normal (but it may be close for large sample sizes).

The mean of a sample from a population having a normal distribution is an example of a simple statistic taken from one of the simplest statistical populations. For other statistics and other populations the formulas are more complicated, and often they don't exist in closed form. In such cases the sampling distributions may be approximated through Monte Carlo simulations, bootstrap methods, or asymptotic distribution theory.
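As a concrete illustration of the Monte Carlo approach just mentioned, the following sketch (in Python; the exponential population and the sizes are arbitrary choices for illustration, not part of the article) approximates the sampling distributions of the sample mean and the sample median by brute-force resampling:

import numpy as np

# Monte Carlo sketch: draw many samples of size n from a non-normal
# (exponential) population and record the sample mean and sample median
# for each draw. The population and the sizes are purely illustrative.
rng = np.random.default_rng(0)
n, reps = 30, 10_000
samples = rng.exponential(scale=1.0, size=(reps, n))

sample_means = samples.mean(axis=1)          # sampling distribution of the mean
sample_medians = np.median(samples, axis=1)  # sampling distribution of the median

print("mean of sample means:   ", sample_means.mean())       # near the population mean, 1.0
print("sd of sample means:     ", sample_means.std(ddof=1))  # near 1.0 / sqrt(30), about 0.18
print("mean of sample medians: ", sample_medians.mean())     # near the population median, about 0.69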

Standard error


The standard deviation of the sampling distribution of the statistic is referred to as the standard error of that quantity. For the case where the statistic is the sample mean, and samples are uncorrelated, the standard error is:

σ_x̄ = σ / sqrt(n)

where σ is the standard deviation of the population distribution of that quantity and n is the size (number of items) in the sample.

An important implication of this formula is that the sample size must be quadrupled (multiplied by 4) to achieve half (1/2) the measurement error. When designing statistical studies where cost is a factor, this may have a role in understanding cost-benefit tradeoffs.
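As a quick numerical check of this quadrupling rule, here is a minimal Python sketch; the population standard deviation of 20 and the sample sizes are arbitrary illustrative values:

import math

sigma = 20.0           # population standard deviation (illustrative value)
for n in (25, 100):    # quadrupling the sample size from 25 to 100
    se = sigma / math.sqrt(n)
    print(f"n = {n:>3}: standard error = {se:.2f}")
# prints 4.00 for n = 25 and 2.00 for n = 100: four times the sample size,
# half the standard error.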

Examples
Population: Normal, N(μ, σ²). Statistic: sample mean X̄ from samples of size n. Sampling distribution: X̄ ~ N(μ, σ²/n).

Population: Bernoulli(p). Statistic: sample proportion p̂ of "successful trials". Sampling distribution: n·p̂ ~ Binomial(n, p).

Population: two independent normal populations, N(μ₁, σ₁²) and N(μ₂, σ₂²). Statistic: difference between sample means, X̄₁ − X̄₂. Sampling distribution: X̄₁ − X̄₂ ~ N(μ₁ − μ₂, σ₁²/n₁ + σ₂²/n₂).

Population: any absolutely continuous distribution F with density f. Statistic: median X(k) from a sample of size n = 2k − 1, where the sample is ordered X(1) to X(n). Sampling distribution: f(X(k) = x) = [(2k − 1)! / ((k − 1)!)²] · f(x) · [F(x)(1 − F(x))]^(k−1).

Population: any distribution with distribution function F. Statistic: maximum M = max X_k from a random sample of size n. Sampling distribution: P(M ≤ x) = [F(x)]^n.

Statistical inference


In the theory of statistical inference, the idea of a sufficient statistic provides the basis of choosing a statistic (as a function of the sample data points) in such a way that no information is lost by replacing the full probabilistic description of the sample with the sampling distribution of the selected statistic. In frequentist inference, for example in the development of a statistical hypothesis test or a confidence interval, the availability of the sampling distribution of a statistic (or an approximation to this in the form of an asymptotic distribution) can allow the ready formulation of such procedures, whereas the development of procedures starting from the joint distribution of the sample would be less straightforward.

In Bayesian inference, when the sampling distribution of a statistic is available, one can consider replacing the final outcome of such procedures, specifically the conditional distributions of any unknown quantities given the sample data, by the conditional distributions of any unknown quantities given selected sample statistics. Such a procedure would involve the sampling distribution of the statistics. The results would be identical provided the statistics chosen are jointly sufficient statistics.

Statistics Tutorial: Sampling Distributions


Suppose that we draw all possible samples of size n from a given population. Suppose further that we compute a statistic (e.g., a mean, proportion, standard deviation) for each sample. The probability distribution of this statistic is called a sampling distribution.

Variability of a Sampling Distribution


The variability of a sampling distribution is measured by its variance or its standard deviation. The variability of a sampling distribution depends on three factors:

N: the number of observations in the population.
n: the number of observations in the sample.
The way that the random sample is chosen.

If the population size is much larger than the sample size, then the sampling distribution has roughly the same sampling error, whether we sample with or without replacement. On the other hand, if the sample represents a significant fraction (say, 1/10) of the population size, the sampling error will be noticeably smaller when we sample without replacement.
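The effect of sampling with versus without replacement can be illustrated with a small simulation; the population below (200 normally distributed values) and the sample size (40, i.e. 1/5 of the population) are made-up values for illustration only:

import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(loc=50, scale=10, size=200)   # small finite population, N = 200
n, reps = 40, 20_000                                  # sample is a large fraction (1/5) of N

means_with = np.array([rng.choice(population, size=n, replace=True).mean() for _ in range(reps)])
means_without = np.array([rng.choice(population, size=n, replace=False).mean() for _ in range(reps)])

print("sampling error with replacement:   ", means_with.std(ddof=1))
print("sampling error without replacement:", means_without.std(ddof=1))  # noticeably smaller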

Central Limit Theorem


The central limit theorem states that the sampling distribution of any statistic will be normal or nearly normal, if the sample size is large enough. How large is "large enough"? As a rough rule of thumb, many statisticians say that a sample size of 30 is large enough. If you know something about the shape of the sample distribution, you can refine that rule. The sample size is large enough if any of the following conditions apply.

The population distribution is normal.
The sampling distribution is symmetric, unimodal, without outliers, and the sample size is 15 or less.
The sampling distribution is moderately skewed, unimodal, without outliers, and the sample size is between 16 and 40.
The sample size is greater than 40, without outliers.

The exact shape of any normal curve is totally determined by its mean and standard deviation. Therefore, if we know the mean and standard deviation of a statistic, we can find the mean and standard deviation of the sampling distribution of the statistic (assuming that the statistic came from a "large" sample).
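A simulation can make the rule of thumb concrete; the sketch below samples a strongly skewed exponential population with n = 40 (the population and the repetition count are illustrative choices, not part of the tutorial):

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, reps = 40, 20_000
# Sample means from a skewed (exponential) population with mean 1 and standard deviation 1.
means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)

print("skewness of the sample means:", stats.skew(means))   # close to 0: nearly normal
print("simulated standard error:    ", means.std(ddof=1))   # close to the theoretical value
print("theoretical standard error:  ", 1.0 / np.sqrt(n))    # sigma / sqrt(n), about 0.158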

Sampling Distribution of the Mean


Suppose we draw all possible samples of size n from a population of size N. Suppose further that we compute a mean score for each sample. In this way, we create a sampling distribution of the mean. We know the following. The mean of the population (μ) is equal to the mean of the sampling distribution (μ_x). And the standard error of the sampling distribution (σ_x) is determined by the standard deviation of the population (σ), the population size, and the sample size. These relationships are shown in the equations below:

μ_x = μ and σ_x = σ * sqrt( 1/n - 1/N )

Therefore, we can specify the sampling distribution of the mean whenever two conditions are met: the population is normally distributed, or the sample size is sufficiently large; and the population standard deviation is known.

Note: When the population size is very large, the factor 1/N is approximately equal to zero, and the standard deviation formula reduces to: σ_x = σ / sqrt(n). You often see this formula in introductory statistics texts.
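The two equations above translate directly into code; the helper below is a small sketch, and the numeric arguments in the example calls are placeholder values chosen only for illustration:

import math

def sampling_dist_of_mean(mu, sigma, n, N=None):
    # Mean and standard error of the sampling distribution of the sample mean,
    # using sigma_x = sigma * sqrt(1/n - 1/N). When N is None the population
    # is treated as very large and the formula reduces to sigma / sqrt(n).
    if N is None:
        se = sigma / math.sqrt(n)
    else:
        se = sigma * math.sqrt(1.0 / n - 1.0 / N)
    return mu, se

print(sampling_dist_of_mean(mu=100, sigma=15, n=36, N=5000))  # finite-population version
print(sampling_dist_of_mean(mu=100, sigma=15, n=36))          # large-N simplification: 15/6 = 2.5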

Sampling Distribution of the Proportion


In a population of size N, suppose that the probability of the occurrence of an event (dubbed a "success") is P, and the probability of the event's non-occurrence (dubbed a "failure") is Q. From this population, suppose that we draw all possible samples of size n. And finally, within each sample, suppose that we determine the proportion of successes p and failures q. In this way, we create a sampling distribution of the proportion.


We find that the mean of the sampling distribution of the proportion (μ_p) is equal to the probability of success in the population (P). And the standard error of the sampling distribution (σ_p) is determined by the standard deviation of the population (σ), the population size, and the sample size. These relationships are shown in the equations below:

μ_p = P and σ_p = σ * sqrt( 1/n - 1/N ) = sqrt[ PQ/n - PQ/N ]

where σ = sqrt[ PQ ].

Note: When the population size is very large, the factor PQ/N is approximately equal to zero, and the standard deviation formula reduces to: σ_p = sqrt( PQ/n ). You often see this formula in intro statistics texts.
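These formulas can be transcribed the same way; the helper below is a sketch, and the argument values in the example calls are illustrative only (the first call mirrors Example 2 further down):

import math

def sampling_dist_of_proportion(P, n, N=None):
    # Mean and standard error of the sampling distribution of a proportion,
    # using sigma_p = sqrt(PQ/n - PQ/N) with Q = 1 - P. When N is None the
    # PQ/N term is dropped, giving the familiar sqrt(PQ/n).
    Q = 1.0 - P
    if N is None:
        se = math.sqrt(P * Q / n)
    else:
        se = math.sqrt(P * Q / n - P * Q / N)
    return P, se

print(sampling_dist_of_proportion(P=0.5, n=120))           # (0.5, 0.0456...) as in Example 2
print(sampling_dist_of_proportion(P=0.3, n=100, N=2000))   # illustrative finite-population case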

Test Your Understanding of This Lesson

In this section, we offer two examples to illustrate how to apply the Central Limit Theorem to solve some common statistical problems. Since the Central Limit Theorem makes use of the normal distribution, use the Normal Distribution Calculator to compute probabilities. The Calculator is free.

Normal Distribution Calculator


The normal calculator solves common statistical problems, based on the normal distribution. The calculator computes cumulative probabilities, based on three simple inputs. Simple instructions guide you to an accurate solution, quickly and easily. If anything is unclear, frequently-asked questions and sample problems provide straightforward explanations. The calculator is free. It can be found under the Stat Tables tab, which appears in the page header.

Example 1

Assume that a school district has 10,000 6th graders. In this district, the average weight of a 6th grader is 80 pounds, with a standard deviation of 20 pounds. Suppose you draw a random sample of 50 students. What is the probability that the average weight of a sampled student will be less than 75 pounds?

Solution: To solve this problem, we need to define the sampling distribution of the mean. Because our sample size is greater than 40, the Central Limit Theorem tells us that the sampling distribution will be normally distributed.

To define our normal distribution, we need to know both the mean of the sampling distribution and the standard deviation. Finding the mean of the sampling distribution is easy, since it is equal to the mean of the population. Thus, the mean of the sampling distribution is equal to 80. The standard deviation of the sampling distribution can be computed using the following formula.

σ_x = σ * sqrt( 1/n - 1/N )
σ_x = 20 * sqrt( 1/50 - 1/10000 ) = 20 * sqrt( 0.0199 ) = 20 * 0.141 = 2.82

Let's review what we know and what we want to know. We know that the sampling distribution of the mean is normally distributed with a mean of 80 and a standard deviation of 2.82. We want to know the probability that a sample mean is less than or equal to 75 pounds. To solve the problem, we plug these inputs into the Normal Probability Calculator: mean = 80, standard deviation = 2.82, and value = 75. The Calculator tells us that the probability that the average weight of a sampled student is less than 75 pounds is equal to 0.038.
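The same answer can be reproduced without the online calculator; below is a short check using scipy (assuming it is available) with the numbers from Example 1:

import math
from scipy.stats import norm

mu, sigma, n, N = 80, 20, 50, 10_000
se = sigma * math.sqrt(1 / n - 1 / N)     # standard error of the mean, about 2.82
prob = norm.cdf(75, loc=mu, scale=se)     # P(sample mean < 75)
print(round(se, 2), round(prob, 3))       # 2.82 0.038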

Example 2

Find the probability that of the next 120 births, no more than 40% will be boys. Assume equal probabilities for the births of boys and girls. Assume also that the number of births in the population (N) is very large, essentially infinite.

Solution: The Central Limit Theorem tells us that the proportion of boys in 120 births will be normally distributed. The mean of the sampling distribution will be equal to the mean of the population distribution. In the population, half of the births result in boys; and half, in girls. Therefore, the probability of boy births in the population is 0.50. Thus, the mean proportion in the sampling distribution should also be 0.50. The standard deviation of the sampling distribution can be computed using the following formula.

σ_p = sqrt[ PQ/n - PQ/N ]
σ_p = sqrt[ (0.5)(0.5)/120 ] = sqrt[ 0.25/120 ] = 0.04564

In the above calculation, the term PQ/N was equal to zero, since the population size (N) was assumed to be infinite.

Let's review what we know and what we want to know. We know that the sampling distribution of the proportion is normally distributed with a mean of 0.50 and a standard deviation of 0.04564. We want to know the probability that no more than 40% of the sampled births are boys. To solve the problem, we plug these inputs into the Normal Probability Calculator: mean = .5, standard deviation = 0.04564, and value = .4. The Calculator tells us that the probability that no more than 40% of the sampled births are boys is equal to 0.014.

Note: This use of the Central Limit Theorem provides a good approximation of the true probabilities. The exact probability, computed using a binomial distribution, is 0.018 - very close to the approximation obtained with the Central Limit Theorem. The accuracy of the approximation increases as sample size increases.
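Example 2 can be checked the same way, including the exact binomial probability quoted in the note (again assuming scipy is available):

import math
from scipy.stats import binom, norm

P, n = 0.5, 120
se = math.sqrt(P * (1 - P) / n)              # PQ/N term dropped: N is effectively infinite
approx = norm.cdf(0.40, loc=P, scale=se)     # CLT approximation, about 0.014
exact = binom.cdf(int(0.40 * n), n, P)       # P(at most 48 boys in 120 births), about 0.018
print(round(se, 5), round(approx, 3), round(exact, 3))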
