
Probability is a way of expressing knowledge or belief that an event will occur or has occurred.

In mathematics the concept has been given an exact meaning in probability theory, which is used extensively in such areas of study as mathematics, statistics, finance, gambling, science, and philosophy to draw conclusions about the likelihood of potential events and the underlying mechanics of complex systems.

History of probability
From Wikipedia, the free encyclopedia


An introduction to probability theory and mathematical statistics that emphasizes the probabilistic foundations required to understand probability models and statistical methods. Topics covered will include the probability axioms, basic combinatorics, discrete and continuous random variables, probability distributions, mathematical expectation, common families of probability distributions, and the central limit theorem.


Probability has a dual aspect: on the one hand the probability or likelihood of hypotheses given the evidence for them, and on the other hand the behavior of stochastic processes such as the throwing of dice or coins. The study of the former is historically older, appearing for example in the law of evidence, while the mathematical treatment of dice began with the work of Pascal and Fermat in the 1650s. Probability is distinguished from statistics. (See History of Statistics.) While statistics deals with data and inferences from it, (stochastic) probability deals with the stochastic (random) processes which lie behind data or outcomes.


Etymology
Probable and likely and their cognates in other modern languages derive from medieval learned Latin probabilis and verisimilis, terms taken from Cicero and generally applied to an opinion to mean plausible or generally approved.[1]

Origins
See also: Timeline of probability and statistics

Ancient and medieval law of evidence developed a grading of degrees of proof, probabilities, presumptions and half-proof to deal with the uncertainties of evidence in court.[2] In Renaissance times, betting was discussed in terms of odds such as "ten to one" and maritime insurance premiums were estimated based on intuitive risks, but there was no theory on how to calculate such odds or premiums.[3] The mathematical methods of probability arose in the correspondence of Pierre de Fermat and Blaise Pascal (1654) on such questions as the fair division of the stake in an interrupted game of chance. Christiaan Huygens (1657) gave a comprehensive treatment of the subject.[4]

18th Century

Jacob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's The Doctrine of Chances (1718) put probability on a sound mathematical footing, showing how to calculate a wide range of complex probabilities. Bernoulli proved a version of the fundamental law of large numbers, which states that in a large number of trials, the average of the outcomes is likely to be very close to the expected value - for example, in 1000 throws of a fair coin, it is likely that there are close to 500 heads (and the larger the number of throws, the closer to half-and-half the proportion is likely to be).
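Bernoulli's law of large numbers is easy to check empirically. The following Python sketch (an illustration added here, not part of the original text) simulates fair-coin throws and prints the proportion of heads, which settles toward one half as the number of throws grows.

```python
import random

rng = random.Random(0)  # fixed seed so the run is reproducible

def proportion_of_heads(n):
    """Throw a fair coin n times and return the fraction of heads."""
    heads = sum(rng.random() < 0.5 for _ in range(n))
    return heads / n

# The proportion tends toward 0.5 as n grows (Bernoulli's law of large numbers).
for n in (10, 100, 1000, 10000, 100000):
    print(f"{n:>6} throws: proportion of heads = {proportion_of_heads(n):.4f}")
```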

19th Century


The power of probabilistic methods in dealing with uncertainty was shown by Gauss's determination of the orbit of Ceres from a few observations. The theory of errors used the method of least squares to correct error-prone observations, especially in astronomy, based on the assumption of a normal distribution of errors to determine the most likely true value. Towards the end of the nineteenth century, a major success of explanation in terms of probability was the statistical mechanics of Ludwig Boltzmann and J. Willard Gibbs, which explained properties of gases such as temperature in terms of the random motions of large numbers of particles. The field of the history of probability itself was established by Isaac Todhunter's monumental History of the Mathematical Theory of Probability from the Time of Pascal to that of Lagrange (1865).

20th Century


Probability and statistics became closely connected through the work on hypothesis testing of R. A. Fisher and Jerzy Neyman, which is now widely applied in biological and psychological experiments and in clinical trials of drugs. A hypothesis, for example that a drug is usually effective, gives rise to a probability distribution that would be observed if the hypothesis is true. If observations approximately agree with the hypothesis, it is confirmed; if not, the hypothesis is rejected.[5] The theory of stochastic processes broadened into such areas as Markov processes and Brownian motion, the random movement of tiny particles suspended in a fluid. That provided a model for the study of random fluctuations in stock markets, leading to the use of sophisticated probability models in mathematical finance, including such successes as the widely used Black-Scholes formula for the valuation of options.[6]

The twentieth century also saw long-running disputes on the interpretations of probability. In mid-century frequentism was dominant, holding that probability means long-run relative frequency in a large number of trials. At the end of the century there was some revival of the Bayesian view, according to which the fundamental notion of probability is how well a proposition is supported by the evidence for it. The mathematical treatment of probabilities, especially when there are infinitely many possible outcomes, was facilitated by Kolmogorov's axioms (1933).

Bibliography

Bernstein, Peter L. (1996). Against the Gods: The Remarkable Story of Risk. New York: Wiley. ISBN 0471121045.
Daston, Lorraine (1988). Classical Probability in the Enlightenment. Princeton: Princeton University Press. ISBN 0691084971.
Franklin, James (2001). The Science of Conjecture: Evidence and Probability Before Pascal. Baltimore, MD: Johns Hopkins University Press. ISBN 0801865697.
Hacking, Ian (2006). The Emergence of Probability (2nd ed). New York: Cambridge University Press. ISBN 9780521866552.
Hald, Anders (2003). A History of Probability and Statistics and Their Applications before 1750. Hoboken, NJ: Wiley. ISBN 0471471291.
Hald, Anders (1998). A History of Mathematical Statistics from 1750 to 1930. New York: Wiley. ISBN 0471179124.
Heyde, C. C.; Seneta, E. (eds) (2001). Statisticians of the Centuries. New York: Springer. ISBN 0387953299.
von Plato, Jan (1994). Creating Modern Probability: Its Mathematics, Physics and Philosophy in Historical Perspective. New York: Cambridge University Press. ISBN 9780521597357.
Salsburg, David (2001). The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century. ISBN 0716741067.
Stigler, Stephen M. (1990). The History of Statistics: The Measurement of Uncertainty before 1900. Belknap Press/Harvard University Press. ISBN 067440341X.

References
1. J. Franklin, The Science of Conjecture: Evidence and Probability Before Pascal, 113, 126.
2. Franklin, The Science of Conjecture, ch. 2.
3. Franklin, Science of Conjecture, ch. 11.
4. Hacking, Emergence of Probability; Franklin, Science of Conjecture, ch. 12.
5. Salsburg, The Lady Tasting Tea.
6. Bernstein, Against the Gods, ch. 18.

The scientific study of probability is a modern development. Gambling shows that there has been an interest in quantifying the ideas of probability for millennia, but exact mathematical descriptions of use in those problems only arose much later. According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' (Latin probabilis) meant approvable, and was applied in that sense, univocally, to opinion and to action. A probable action or opinion was one such as sensible people would undertake or hold, in the circumstances."[4] However, in legal contexts especially, 'probable' could also apply to propositions for which there was good evidence.[5]

Aside from some elementary considerations made by Girolamo Cardano in the 16th century, the doctrine of probabilities dates to the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657) gave the earliest known scientific treatment of the subject. Jakob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's Doctrine of Chances (1718) treated the subject as a branch of mathematics. See Ian Hacking's The Emergence of Probability and James Franklin's The Science of Conjecture for histories of the early development of the very concept of mathematical probability.

The theory of errors may be traced back to Roger Cotes's Opera Miscellanea (posthumous, 1722), but a memoir prepared by Thomas Simpson in 1755 (printed 1756) first applied the theory to the discussion of errors of observation. The reprint (1757) of this memoir lays down the axioms that positive and negative errors are equally probable, and that there are certain assignable limits within which all errors may be supposed to fall; continuous errors are discussed and a probability curve is given.

Pierre-Simon Laplace (1774) made the first attempt to deduce a rule for the combination of observations from the principles of the theory of probabilities. He represented the law of probability of errors by a curve y = φ(x), x being any error and y its probability, and laid down three properties of this curve:

1. it is symmetric as to the y-axis;
2. the x-axis is an asymptote, the probability of the error being 0;
3. the area enclosed is 1, it being certain that an error exists.

He also gave (1781) a formula for the law of facility of error (a term due to Lagrange, 1774), but one which led to unmanageable equations. Daniel Bernoulli (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors. The method of least squares is due to Adrien-Marie Legendre (1805), who introduced it in his Nouvelles méthodes pour la détermination des orbites des comètes (New Methods for Determining the Orbits of Comets). In ignorance of Legendre's contribution, an Irish-American writer, Robert Adrain, editor of "The Analyst" (1808), first deduced the law of facility of error,

φ(x) = c e^(−h²x²),

h being a constant depending on precision of observation, and c a scale factor ensuring that the area under the curve equals 1. He gave two proofs, the second being essentially the same as John Herschel's (1850). Gauss gave the first proof which seems to have been known in Europe (the third after Adrain's) in 1809. Further proofs were given by Laplace (1810, 1812), Gauss (1823), James Ivory (1825, 1826), Hagen (1837), Friedrich Bessel (1838), W. F. Donkin (1844, 1856), and Morgan Crofton (1870). Other contributors were Ellis (1844), De Morgan (1864), Glaisher (1872), and Giovanni Schiaparelli (1875). Peters's (1856) formula for r, the probable error of a single observation, is well known.

In the nineteenth century authors on the general theory included Laplace, Sylvestre Lacroix (1816), Littrow (1833), Adolphe Quetelet (1853), Richard Dedekind (1860), Helmert (1872),

Hermann Laurent (1873), Liagre, Didion, and Karl Pearson. Augustus De Morgan and George Boole improved the exposition of the theory. On the geometric side (see integral geometry) contributors to The Educational Times were influential (Miller, Crofton, McColl, Wolstenholme, Watson, and Artemas Martin).

Mathematical treatment


Further information: Probability theory

In mathematics, the probability of an event A is represented by a real number in the range from 0 to 1 and written as P(A), p(A) or Pr(A).[6] An impossible event has a probability of 0, and a certain event has a probability of 1. However, the converses are not always true: probability 0 events are not always impossible, nor probability 1 events certain. The rather subtle distinction between "certain" and "probability 1" is treated at greater length in the article on "almost surely".

The opposite or complement of an event A is the event [not A] (that is, the event of A not occurring); its probability is given by P(not A) = 1 − P(A).[7] As an example, the chance of not rolling a six on a six-sided die is 1 − (chance of rolling a six) = 1 − 1/6 = 5/6. See Complementary event for a more complete treatment.

If both the events A and B occur on a single performance of an experiment, this is called the intersection or joint probability of A and B, denoted as P(A ∩ B). If two events, A and B, are independent then the joint probability is

P(A and B) = P(A ∩ B) = P(A) P(B);

for example, if two coins are flipped the chance of both being heads is 1/2 × 1/2 = 1/4.[8]

If either event A or event B or both events occur on a single performance of an experiment, this is called the union of the events A and B, denoted as P(A ∪ B). If two events are mutually exclusive then the probability of either occurring is

P(A or B) = P(A ∪ B) = P(A) + P(B).

For example, the chance of rolling a 1 or 2 on a six-sided die is P(1 or 2) = P(1) + P(2) = 1/6 + 1/6 = 1/3.

If the events are not mutually exclusive then

P(A or B) = P(A) + P(B) − P(A and B).

For example, when drawing a single card at random from a regular deck of cards, the chance of getting a heart or a face card (J, Q, K) (or one that is both) is 13/52 + 12/52 − 3/52 = 22/52 = 11/26, because of the 52 cards of a deck 13 are hearts, 12 are face cards, and 3 are both: here the possibilities included in the "3 that are both" are included in each of the "13 hearts" and the "12 face cards" but should only be counted once.

Conditional probability is the probability of some event A, given the occurrence of some other event B. Conditional probability is written P(A|B), and is read "the probability of A, given B". It is defined by

P(A|B) = P(A ∩ B) / P(B).[9]

If P(B) = 0 then P(A|B) is undefined.

Summary of probabilities

Event: Probability
not A: P(not A) = 1 − P(A)
A or B: P(A ∪ B) = P(A) + P(B) − P(A ∩ B) (= P(A) + P(B) if A and B are mutually exclusive)
A and B: P(A ∩ B) = P(A|B) P(B) (= P(A) P(B) if A and B are independent)
A given B: P(A|B) = P(A ∩ B) / P(B)
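These rules can be checked mechanically. The short Python sketch below (hypothetical, added for illustration) encodes the deck example from the text and verifies the complement, addition, and conditional-probability formulas with exact fractions.

```python
from fractions import Fraction

# The deck example from the text: 52 cards, 13 hearts, 12 face cards,
# and 3 cards that are both (J, Q, K of hearts).
deck = [(rank, suit) for rank in range(1, 14)
        for suit in ("hearts", "diamonds", "clubs", "spades")]

def P(event):
    """Probability of an event (a subset of the deck) as an exact fraction."""
    return Fraction(len(event), len(deck))

hearts = {c for c in deck if c[1] == "hearts"}
faces = {c for c in deck if c[0] >= 11}        # ranks 11, 12, 13 = J, Q, K

print(P(hearts))                     # 13/52 = 1/4
print(1 - P(hearts))                 # complement: P(not hearts) = 3/4
print(P(hearts | faces))             # union: 13/52 + 12/52 - 3/52 = 11/26
print(P(hearts & faces) / P(faces))  # conditional: P(hearts | face card) = 1/4
```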

Theory
Main article: Probability theory

Like other theories, the theory of probability is a representation of probabilistic concepts in formal terms, that is, in terms that can be considered separately from their meaning. These formal terms are manipulated by the rules of mathematics and logic, and any results are then interpreted or translated back into the problem domain.

There have been at least two successful attempts to formalize probability, namely the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation (see probability space), sets are interpreted as events and probability itself as a measure on a class of sets. In Cox's theorem, probability is taken as a primitive (that is, not further analyzed) and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the laws of probability are the same, except for technical details. There are other methods for quantifying uncertainty, such as the Dempster-Shafer theory or possibility theory, but those are essentially different and not compatible with the laws of probability as they are usually understood.

Applications
Two major applications of probability theory in everyday life are in risk assessment and in trade on commodity markets. Governments typically apply probabilistic methods in environmental regulation, where it is called "pathway analysis", often measuring well-being using methods that are stochastic in nature, and choosing projects to undertake based on statistical analyses of their probable effect on the population as a whole.

A good example is the effect of the perceived probability of any widespread Middle East conflict on oil prices, which has ripple effects in the economy as a whole. An assessment by a commodity trader that a war is more likely or less likely sends prices up or down, and signals other traders of that opinion. Accordingly, the probabilities are not assessed independently nor necessarily very rationally. The theory of behavioral finance emerged to describe the effect of such groupthink on pricing, on policy, and on peace and conflict.

It can reasonably be said that the discovery of rigorous methods to assess and combine probability assessments has had a profound effect on modern society. Accordingly, it may be of some importance to most citizens to understand how odds and probability assessments are made, and how they contribute to reputations and to decisions, especially in a democracy.

Another significant application of probability theory in everyday life is reliability. Many consumer products, such as automobiles and consumer electronics, use reliability theory in the design of the product in order to reduce the probability of failure. The probability of failure may be closely associated with the product's warranty.

Relation to randomness


Main article: Randomness

In a deterministic universe, based on Newtonian concepts, there is no probability if all conditions are known. In the case of a roulette wheel, if the force of the hand and the period of that force are known, then the number on which the ball will stop would be a certainty. Of course, this also assumes knowledge of inertia and friction of the wheel, weight, smoothness and roundness of the ball, variations in hand speed during the turning, and so forth. A probabilistic description can thus be more useful than Newtonian mechanics for analyzing the pattern of outcomes of repeated rolls of a roulette wheel. Physicists face the same situation in the kinetic theory of gases, where the system, while deterministic in principle, is so complex (with the number of molecules typically of the order of magnitude of the Avogadro constant, 6.02 × 10^23) that only a statistical description of its properties is feasible.

A revolutionary discovery of 20th-century physics was the random character of all physical processes that occur at sub-atomic scales and are governed by the laws of quantum mechanics. The wave function itself evolves deterministically as long as no observation is made, but, according to the prevailing Copenhagen interpretation, the randomness caused by the wave function collapsing when an observation is made is fundamental. This means that probability theory is required to describe nature. Others never came to terms with the loss of determinism. Albert Einstein famously remarked in a letter to Max Born: "Jedenfalls bin ich überzeugt, daß der Alte nicht würfelt" (I am convinced that God does not play dice). Although alternative viewpoints exist, such as that of quantum decoherence being the cause of an apparent random collapse, at present there is a firm consensus among physicists that probability theory is necessary to describe quantum phenomena.

The probability of an event (see sample space) is a number lying in the interval 0 ≤ p ≤ 1, with 0 corresponding to an event that never occurs and 1 to an event that is certain to occur. For an experiment with N equally likely outcomes the probability of an event A is n/N, where n is the number of outcomes in which the event A occurs. For some experiments, such as throwing a drawing pin and seeing whether it lands point up, there is no possible set of equally likely outcomes. In the 'frequentist' view of probability, the probability of getting 'point up' is the limit, in some sense, of the relative frequency as the number of experiments tends to infinity. In the context of Bayesian inference, each observer has his or her own a priori distribution for the probability, which is then modified a posteriori in the light of whatever results have been obtained.

Probabilities can also be used, more generally, to describe degrees of belief in propositions that do not involve random variables; for example, 'the probability that 2050 will be the warmest year on record, assuming people don't change their lifestyle', or 'the probability that the Hubble constant lies between 41 and 43, given measurements of the Sunyaev-Zel'dovich effect'. Degrees of belief can be mapped onto probabilities if they satisfy some simple consistency rules known as the Cox axioms. Thus probabilities can be used to describe assumptions, and to describe inferences given those assumptions. The rules of probability ensure that if two people make the same assumptions and receive the same data then they will draw identical conclusions. This more general use of probability is known as the Bayesian viewpoint. It is also known as the subjective interpretation of probability, since the probabilities depend on assumptions. Advocates of a Bayesian approach to data modelling and pattern recognition do not view this subjectivity as a defect, since in their view you can't do inference without making assumptions. In this book it will be taken for granted that a Bayesian approach makes sense, but the reader is warned that this is not yet a globally held view: the field of statistics has been dominated for most of the 20th century by non-Bayesian methods in which probabilities are only allowed to describe random variables.

INTRODUCTION

The history of probability theory dates back to the 17th century, and at that time it was related to games of chance. In the 18th century probability theory was known to have applications beyond the scope of games of chance. Some of the situations to which it was applied have outcomes such as life or death and boy or girl. Statistics and probability are currently applied to insurance, annuities, biology, and social investigations. The treatment of probability in this chapter is limited to simple applications. These applications will be, to a large extent, based on games of chance, which lend themselves to an understanding of the basic ideas of probability.

BASIC CONCEPTS

If a coin were tossed, the chance it would land heads up is just as likely as the chance it would land tails up; that is, the coin has no more reason to land heads up than it has to land tails up. Every toss of the coin is called a trial. We define probability as the ratio of the number of ways a trial can succeed (or fail) to the total number of ways in which it may result. We will let p represent the probability of success and q represent the probability of failure. One commonly misunderstood concept of probability is the effect prior trials have on a single trial. That is, after a coin has been tossed many times and every trial resulted in the coin falling heads up, will the next toss of the coin result in tails up? The answer is "not necessarily", and will be explained later in this chapter.
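The "not necessarily" can be illustrated by simulation. The Python sketch below (hypothetical, added for illustration) looks at the toss that immediately follows a run of five heads and finds that it is still heads only about half the time, since the trials are independent.

```python
import random

rng = random.Random(1)

runs_seen = 0         # number of times we have just seen five heads in a row
heads_after_run = 0   # how often the next toss was heads anyway
streak = 0            # current run of consecutive heads

for _ in range(1_000_000):
    toss = rng.random() < 0.5          # True means heads
    if streak >= 5:                    # the previous five tosses were all heads
        runs_seen += 1
        heads_after_run += toss
    streak = streak + 1 if toss else 0

# Despite the streak, the next toss is heads about half the time.
print(heads_after_run / runs_seen)
```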

Probability
Many things in everyday life, from stock prices to lotteries, are random phenomena for which the outcome is uncertain. The concept of probability gives us a way to measure the chances of the possible outcomes. Probability enables us to quantify uncertainty and to describe it in mathematical terms. Here we introduce basic notions that help us to find probabilities of interest.

Chapter 2

Probability Space
In this chapter we describe the probability model of "choosing an object at random." Examples will help us come up with a good definition. We explain that the key idea is to associate a likelihood, which we call probability, to sets of outcomes, not to individual outcomes. These sets are events. The description of the events and of their probability constitutes a probability space that completely characterizes a random experiment.

2.1 Choosing At Random


First consider picking a card out of a 52-card deck. We could say that the odds of picking any particular card are the same as that of picking any other card, assuming that the deck has been well shuffled. We then decide to assign a "probability" of 1/52 to each card. That probability represents the odds that a given card is picked. One interpretation is that if we repeat the experiment "choosing a card from the deck" a large number N of times (replacing the card previously picked every time and re-shuffling the deck before the next selection), then a given card, say the ace of diamonds, is selected approximately N/52 times. Note that this is only an interpretation. There is nothing that tells us that this is indeed the case; moreover, if it is the case, then there is certainly nothing yet in our theory that allows us to expect that result. Indeed, so far, we have simply assigned the number 1/52 to each card in the deck. Our interpretation comes from what we expect from the physical experiment. This remarkable "statistical regularity" of the physical experiment is a consequence of some deeper properties of the sequences of successive cards picked from a deck. We will come back to these deeper properties when we study independence. You may object that the definition of probability involves implicitly that of "equally likely events." That is correct as far as the interpretation goes. The mathematical definition does not require such a notion.

Second, consider the experiment of throwing a dart at a dartboard. The likelihood of hitting a specific point on the board, measured with pinpoint accuracy, is essentially zero. Accordingly, in contrast with the previous example, we cannot assign numbers to individual outcomes of the experiment. The way to proceed is to assign numbers to sets of possible outcomes. Thus, one can look at a subset of the dartboard and assign some probability that represents the odds that the dart will land in that set. It is not simple to assign the numbers to all the sets in a way that these numbers really correspond to the odds of a given dart player. Even if we forget about trying to model an actual player, it is not that simple to assign numbers to all the subsets of the dartboard. At the very least, to be meaningful, the numbers assigned to the different subsets must obey some basic consistency rules. For instance, if A and B are two subsets of the dartboard such that A ⊆ B, then the number P(B) assigned to B must be at least as large as the number P(A) assigned to A. Also, if A and B are disjoint, then P(A ∪ B) = P(A) + P(B). Finally, P(Ω) = 1, if Ω designates the set of all possible outcomes (the dartboard, possibly extended to cover all bases). This is the basic story: probability is defined on sets of possible outcomes and it is additive. [However, it turns out that one more property is required: countable additivity (see below).]

Note that we can lump our two examples into one. Indeed, the first case can be viewed as a particular case of the second where we would define P(A) = |A|/52, where A is any subset of the deck of cards and |A| is the number of cards in A. This definition is certainly additive and it assigns the probability 1/52 to any one card. Some care is required when defining what we mean by a random choice. See Bertrand's paradox in Appendix E for an illustration of a possible confusion. Another example of the possible confusion with statistics is Simpson's paradox in Appendix F.

2.2 Events
The sets of outcomes to which one assigns a probability are called events. It is not necessary (and often not possible, as we may explain later) for every set of outcomes to be an event. For instance, assume that we are only interested in whether the card that we pick is black or red. In that case, it suffices to define P(A) = 0.5 = P(Ac), where A is the set of all the black cards and Ac is the complement of that set, i.e., the set of all the red cards. Of course, we know that P(Ω) = 1 where Ω is the set of all the cards and P(∅) = 0, where ∅ is the empty set. In this case, there are four events: ∅, Ω, A, Ac.

More generally, if A and B are events, then we want Ac, A ∩ B, and A ∪ B to be events also. Indeed, if we want to define the probability that the outcome is in A and the probability that it is in B, it is reasonable to ask that we can also define the probability that the outcome is not in A, that it is in A and B, and that it is in A or in B (or in both). By extension, set operations that are performed on a finite collection of events should always produce an event. For instance, if A, B, C, D are events, then [(A \ B) ∩ C] ∪ D should also be an event. We say that the set of events is closed under finite set operations. [We explain below that we need to extend this property to countable operations.] With these properties, it makes sense to write for disjoint events A and B that P(A ∪ B) = P(A) + P(B). Indeed, A ∪ B is an event, so that P(A ∪ B) is defined.

You will notice that if we want A ⊂ Ω (with A ≠ Ω and A ≠ ∅) to be an event, then the smallest collection of events is necessarily {∅, Ω, A, Ac}. If you want to see why, generally for uncountable sample spaces, all sets of outcomes may not be events, check Appendix C.
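The claim about the smallest collection of events can be checked mechanically. The Python sketch below (hypothetical, not from the original notes) starts from a single event A in a toy sample space and closes it under complement, union, and intersection; exactly the four events ∅, Ω, A, Ac survive.

```python
# Toy stand-ins: Omega plays the role of the 52 cards, A the black cards.
Omega = frozenset(range(8))
A = frozenset(range(4))

events = {frozenset(), Omega, A}
changed = True
while changed:                      # keep applying set operations until stable
    changed = False
    for E in list(events):
        for F in list(events):
            for new in (Omega - E, E | F, E & F):
                if new not in events:
                    events.add(new)
                    changed = True

print(len(events))                  # 4: the empty set, Omega, A, and Ac
for E in sorted(events, key=len):
    print(sorted(E))
```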

2.3 Countable Additivity


This topic is the first serious hurdle that you face when studying probability theory. If you understand this section, you increase considerably your appreciation of the theory. Otherwise, many issues will remain obscure and fuzzy.

We want to be able to say that if the events A_n, for n = 1, 2, ..., are such that A_n ⊆ A_{n+1} for all n, and if A := ∪_n A_n, then P(A_n) ↑ P(A) as n → ∞. Why is this useful? This property, called σ-additivity, is the key to being able to approximate events. The property specifies that the probability is continuous: if we approximate the events, then we also approximate their probability. This strategy of "filling the gaps" by taking limits is central in mathematics. You remember that real numbers are defined as limits of rational numbers. Similarly, integrals are defined as limits of sums. The key idea is that different approximations should give the same result. For this to work, we need the continuity property above.

To be able to write the continuity property, we need to assume that A := ∪_n A_n is an event whenever the events A_n, for n = 1, 2, ..., are such that A_n ⊆ A_{n+1}. More generally, we need the set of events to be closed under countable set operations. For instance, if we define P([0, x]) = x for x ∈ [0, 1], then we can define P([0, a)) = a because if ε is small enough, then A_n := [0, a − ε/n] is such that A_n ⊆ A_{n+1} and [0, a) = ∪_n A_n. We will discuss many more interesting examples. You may wish to review the meaning of countability (see the appendix).
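For the interval example, the continuity property can be spelled out as a one-line derivation; here is a sketch in LaTeX, using the ε from the text (any fixed ε with 0 < ε < a):

```latex
A_n := \left[0,\, a - \tfrac{\varepsilon}{n}\right] \subseteq A_{n+1},
\qquad \bigcup_{n} A_n = [0, a),
\qquad\text{so}\quad
P\bigl([0,a)\bigr) = \lim_{n\to\infty} P(A_n)
                   = \lim_{n\to\infty}\left(a - \tfrac{\varepsilon}{n}\right) = a.
```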

2.4 Probability Space


Putting together the observations of the sections above, we have defined a probability space as follows.

Definition 2.4.1. Probability Space
A probability space is a triplet {Ω, F, P} where
- Ω is a nonempty set, called the sample space;
- F is a collection of subsets of Ω closed under countable set operations; such a collection is called a σ-field, and the elements of F are called events;
- P is a countably additive function from F into [0, 1] such that P(Ω) = 1, called a probability measure.

Examples will clarify this definition. The main point is that one defines the probability of sets of outcomes (the events). The probability should be countably additive (to be continuous). Accordingly (to be able to write down this property), and also quite intuitively, the collection of events should be closed under countable set operations.

2.5 Examples
Throughout the course, we will make use of simple examples of probability spaces. We review some of those here.

2.5.1 Choosing uniformly in {1, 2, ..., N}

We say that we pick a value ω uniformly in {1, 2, ..., N} when the N values are equally likely to be selected. In this case, the sample space Ω is Ω = {1, 2, ..., N}. For any subset A ⊆ Ω, one defines P(A) = |A|/N, where |A| is the number of elements in A. For instance, P({2, 5}) = 2/N.

2.5.2 Choosing uniformly in [0, 1]

Here, Ω = [0, 1] and one has, for example, P([0, 0.3]) = 0.3 and P([0.2, 0.7]) = 0.5. That is, P(A) is the "length" of the set A. Thus, if ω is picked uniformly in [0, 1], then one can write P([0.2, 0.7]) = 0.5. It turns out that one cannot define the length of every subset of [0, 1], as we explain in Appendix C. The collection of sets whose length is defined is the smallest σ-field that contains the intervals. This collection is called the Borel σ-field of [0, 1]. More generally, the smallest σ-field of ℝ that contains the intervals is the Borel σ-field of ℝ, usually designated by B.

2.5.3 Choosing uniformly in [0, 1]²

Here, Ω = [0, 1]² and one has, for example, P([0.1, 0.4] × [0.2, 0.8]) = 0.3 × 0.6 = 0.18. That is, P(A) is the "area" of the set A. Similarly, in that case, if

B = {ω = (ω1, ω2) ∈ Ω | ω1 + ω2 ≤ 1} and C = {ω = (ω1, ω2) ∈ Ω | ω1² + ω2² ≤ 1},

then P(B) = 1/2 and P(C) = π/4. As in one dimension, one cannot define the area of every subset of [0, 1]². The proper σ-field is the smallest one that contains the rectangles. It is called the Borel σ-field of [0, 1]². More generally, the smallest σ-field of ℝ² that contains the rectangles is the Borel σ-field of ℝ², designated by B². This idea generalizes to ℝⁿ, with Bⁿ.
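The area interpretation invites a numerical check. The Python sketch below (hypothetical, added for illustration) draws points uniformly in [0, 1]² and estimates P(C) by the fraction of points landing in the quarter disc; the estimate comes out close to π/4 ≈ 0.785.

```python
import random

rng = random.Random(2)
n = 1_000_000

# Draw omega = (w1, w2) uniformly in the unit square and count the points
# falling in C = {(w1, w2) : w1**2 + w2**2 <= 1}, the quarter disc of radius 1.
hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1 for _ in range(n))

print(hits / n)   # close to pi/4 = 0.7853...
```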

2.6 Summary
We have learned that a probability space is {Ω, F, P} where Ω is a nonempty set, F is a σ-field of Ω, i.e., a collection of subsets of Ω that is closed under countable set operations, and P: F → [0, 1] is a σ-additive set function such that P(Ω) = 1. The idea is to specify the likelihood of various outcomes (elements of Ω). If one can specify the probability of individual outcomes (e.g., when Ω is countable), then one can choose F = 2^Ω, so that all sets of outcomes are events. However, this is generally not possible, as the example of the uniform distribution on [0, 1] shows. (See Appendix C.)

2.6.1 Stars and Bars Method


In many problems, we use a method for counting the number of ordered groupings of identical objects. This method is called the stars and bars method. Suppose we are given identical objects we call stars. Any ordered grouping of these stars can be obtained by separating them by bars. For example, ||***|* separates four stars into four groups of sizes 0, 0, 3, and 1. Suppose we wish to separate N stars into M ordered groups. We need M − 1 bars to form M groups. The number of orderings is the number of ways of placing the N identical stars and M − 1 identical bars into N + M − 1 spaces, which is (N + M − 1 choose M − 1). Creating compound objects of stars and bars is useful when there are bounds on the sizes of the groups.
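The count is easy to compute, and for small cases it can be verified by brute force. A minimal Python sketch (added for illustration; the function name is ours):

```python
from itertools import product
from math import comb

def stars_and_bars(N, M):
    """Number of ways to split N identical stars into M ordered groups."""
    return comb(N + M - 1, M - 1)

# Brute-force check for small N, M: count the M-tuples of nonnegative
# integers that sum to N.
N, M = 4, 4
brute = sum(1 for groups in product(range(N + 1), repeat=M) if sum(groups) == N)
print(stars_and_bars(N, M), brute)   # both print 35 = C(7, 3)
```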

2.7 Solved Problems


Example 2.7.1. Describe the probability space {Ω, F, P} that corresponds to the random experiment "picking five cards without replacement from a perfectly shuffled 52-card deck."

1. One can choose Ω to be all the permutations of A := {1, 2, ..., 52}. The interpretation of ω ∈ Ω is then the shuffled deck. Each permutation is equally likely, so that p_ω = 1/52! for ω ∈ Ω. When we pick the five cards, these cards are (ω1, ω2, ..., ω5), the top 5 cards of the deck.
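The numbers in this example are easy to compute with the standard library; the sketch below (an added illustration) prints the size of Ω and, for comparison, the number of unordered five-card hands.

```python
from math import comb, factorial

# |Omega| = 52!: every ordering of the deck, each with probability 1/52!.
print(factorial(52))

# If only the set of five cards matters (order ignored), there are
# C(52, 5) equally likely hands.
print(comb(52, 5))          # 2598960

# Probability of one particular five-card hand:
print(1 / comb(52, 5))
```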

Probability

Probability is a branch of mathematics that deals with calculating the likelihood of a given event's occurrence, which is expressed as a number between 0 and 1. An event with a probability of 1 can be considered a certainty: for example, the probability of a coin toss resulting in either "heads" or "tails" is 1, because there are no other options, assuming the coin lands flat. An event with a probability of .5 can be considered to have equal odds of occurring or not occurring: for example, the probability of a coin toss resulting in "heads" is .5, because the toss is equally likely to result in "tails." An event with a probability of 0 can be considered an impossibility: for example, the probability that the coin will land (flat) without either side facing up is 0, because either "heads" or "tails" must be facing up. A little paradoxically, probability theory applies precise calculations to quantify uncertain measures of random events.

In its simplest form, probability can be expressed mathematically as the number of occurrences of a targeted event divided by the number of occurrences plus the number of failures of occurrences (this adds up to the total of possible outcomes): p(a) = s/(s + f), where s counts the occurrences of the event a and f counts its failures to occur.

Calculating probabilities in a situation like a coin toss is straightforward, because the outcomes are mutually exclusive: either one event or the other must occur. Each coin toss is an independent event; the outcome of one trial has no effect on subsequent ones. No matter how many consecutive times one side lands facing up, the probability that it will do so at the next toss is always .5 (50-50). The mistaken idea that a number of consecutive results (six "heads", for example) makes it more likely that the next toss will result in a "tails" is known as the gambler's fallacy, one that has led to the downfall of many a bettor.

Probability theory had its start in the 17th century, when two French mathematicians, Blaise Pascal and Pierre de Fermat, carried on a correspondence discussing mathematical problems dealing with games of chance. Contemporary applications of probability theory run the gamut of human inquiry, and include aspects of computer programming, astrophysics, music, weather prediction, and medicine.

Fascinating facts about Blaise Pascal, inventor of a mechanical adding machine in 1642.
Inventor: Blaise Pascal Criteria: First to invent. First practical. Birth: June 19, 1623 in Clermont-Ferrand, France Death: August 19, 1662 in Paris, France Nationality: French

Blaise Pascal

Blaise Pascal, French philosopher, mathematician, and physicist, is considered one of the great minds in Western intellectual history and the inventor of the first mechanical adding machine. Blaise Pascal was born in Clermont-Ferrand on June 19, 1623, and his family settled in Paris in 1629. Under the tutelage of his father, Pascal soon proved himself a mathematical prodigy, and at the age of 16 he formulated one of the basic theorems of projective geometry, known as Pascal's theorem and described in his Essai pour les coniques (Essay on Conics, 1639). In 1642 he invented the first mechanical adding machine.

Pascal proved by experimentation in 1648 that the level of the mercury column in a barometer is determined by an increase or decrease in the surrounding atmospheric pressure rather than by a vacuum, as previously believed. This discovery verified the hypothesis of the Italian physicist Evangelista Torricelli concerning the effect of atmospheric pressure on the equilibrium of liquids. Six years later, in conjunction with the French mathematician Pierre de Fermat, Pascal formulated the mathematical theory of probability, which has become important in such fields as actuarial, mathematical, and social statistics and as a fundamental element in the calculations of modern theoretical physics. Pascal's other important scientific contributions include the derivation of Pascal's law or principle, which states that fluids transmit pressures equally in all directions, and his investigations in the geometry of infinitesimals. His methodology reflected his emphasis on empirical experimentation as opposed to analytical, a priori methods, and he believed that human progress is perpetuated by the accumulation of scientific discoveries resulting from such experimentation.

Pascal espoused Jansenism and in 1654 entered the Jansenist community at Port Royal, where he led a rigorously ascetic life until his death eight years later. In 1656 he wrote the famous 18 Lettres provinciales (Provincial Letters), in which he attacked the Jesuits for their attempts to reconcile 16th-century naturalism with orthodox Roman Catholicism. His most positive religious statement appeared posthumously (he died August 19, 1662); it was published in fragmentary form in 1670 as Apologie de la religion Chrétienne (Apology of the Christian Religion). In these fragments, which later were incorporated into his major work, he posed the alternatives of potential salvation and eternal damnation, with the implication that only by conversion to Jansenism could salvation be achieved. Pascal asserted that whether or not salvation was achieved, humanity's ultimate destiny is an afterlife belonging to a supernatural realm that can only be known intuitively. Pascal's final important work was Pensées sur la religion et sur quelques autres sujets (Thoughts on Religion and on Other Subjects), also published in 1670. In the Pensées Pascal attempted to explain and justify the difficulties of human life by the doctrine of original sin, and he contended that revelation can be comprehended only by faith, which in turn is justified by revelation. Pascal's writings urging acceptance of the Christian life contain frequent applications of the calculations of probability; he reasoned that the value of eternal happiness is infinite and that although the probability of gaining such happiness by religion may be small, it is infinitely greater than by any other course of human conduct or belief.

A reclassification of the Pensées, a careful work begun in 1935 and continued by several scholars, does not reconstruct the Apologie, but allows the reader to follow the plan that Pascal himself would have followed. Pascal was one of the most eminent mathematicians and physicists of his day and one of the greatest mystical writers in Christian literature. His religious works are personal in their speculation on matters beyond human understanding. He is generally ranked among the finest French polemicists, especially in the Lettres provinciales, a classic in the literature of irony. Pascal's prose style is noted for its originality and, in particular, for its total lack of artifice. He affects his readers by his use of logic and the passionate force of his dialectic.

Probability and Genetics


Probability theory is the study of the likelihood of an occurrence of random events in order to predict future behaviors of a system (2). The principles of probability are widely used. In genetics, for example, probability is used to estimate the likelihood of gene distribution from one generation to the next. In business, insurance companies use the principles of probability to determine risk groups. Probability is closely related to statistics, since uncertainty always exists when statistical predictions are being made. A number between 0 and 1 represents the probability of an outcome (1). The probability of an impossible event is 0, whereas the probability of something that is certain to occur is 1.

The theory of probability is generally credited to Blaise Pascal, with help from his friend Pierre de Fermat. Blaise Pascal was born at Clermont, France on June 19, 1623. He was the third child of Etienne Pascal, and his only son. Blaise was only 3 when his mother died (3). In 1631, his family moved to Paris to carry on the education of Blaise, who had already displayed exceptional ability. Pascal was taught at home, and to ensure that he was not overworked, his father decided that his studies would involve only the languages, and should not include any mathematics. At the age of twelve, Pascal demonstrated to his tutor an interest in geometry. He was stimulated by the subject, and gave up his playtime to study geometry instead. In a few weeks, he discovered many properties of geometric figures, in particular, that the sum of the angles of a triangle equals 180 degrees. Impressed by Pascal's display, his father gave him a copy of Euclid's Elements, which Pascal read and soon mastered (5).

At the age of fourteen, Pascal was admitted to the weekly meetings of French geometricians, which ultimately became the French Academy. His first work, Essay on Conic Sections, was published in February of 1640. Between 1642 and 1645, he invented the first digital calculator, which helped his father with his work of collecting taxes. However, there were problems with the machine since it did not work well with the design of French currency, so it never became successful (3). Pascal then began to turn his attention toward analytical geometry and physics. He repeated many of the experiments of Torricelli, the inventor of the barometer, and confirmed the theory of barometrical variations. Then in the middle of this research, in 1650, Pascal abandoned his mathematical pursuits to study religion. After his father's death in 1653, Pascal had to administer his father's estate, and this reignited his interest in mathematics. It was at this point that he invented the arithmetical triangle (now known as Pascal's triangle), and together with Fermat created the theory of probability. On November 23, 1654, he had a near-fatal accident with a horse carriage, but was saved when the traces broke. This incident caused him to again turn towards religion and pledge his life to Christianity. He then retired to Port Royal, where he continued to live until his death in 1662. From 1657 to 1658, he wrote his most famous work, Pensées, which is a collection of his personal thoughts on human suffering and faith in God (3). He rationalized believing in God with this argument: "if God does not exist, one will lose nothing by believing in Him, while if He does exist, one will lose everything by not believing." Pascal died on August 19, 1662, at the young age of 39, having injured his health as a teenager by his incessant studying (5).

Pascal developed the theory of probability after his friend Antoine Gombaud, a French nobleman with an interest in gambling, confronted him with a question regarding a popular dice game. The question was this: two players want to leave a game before finishing; given their scores, in what proportion should they divide the winnings? During 1654 Pascal and Fermat wrote several letters back and forth to each other until they agreed upon a general theory of probability (5). After that point, many mathematicians contributed to the theory of probability. There was much appeal for this branch of mathematics because of its ties with games of chance. In 1657, a Dutch scientist, Christiaan Huygens, published the first book on probability, which was filled with gambling-related problems. The subject developed quickly during the eighteenth century with contributions from Jakob Bernoulli and Abraham de Moivre. Then in 1812, Pierre-Simon Laplace introduced many new ideas and mathematical techniques, demonstrating many scientific and practical problems for probability theory. Some of the applications of probability theory developed in the nineteenth century included the theory of errors, actuarial mathematics, and statistical mechanics. In 1933, a Russian mathematician, A. Kolmogorov, developed a comprehensive definition of probability theory, which became the basis for the modern theory. Probability is now widely used in the fields of genetics, psychology, economics, and engineering (4).

The theory of probability can be shown through the example of a simple coin game. Take three quarters; color both sides of one quarter black, and one side of another quarter black. 3 of the 6 sides of the quarters are black, and three are silver. Now, shuffle the quarters and pull one out so that only one side is seen. If the side showing is black, bet the person you are playing that the other side is also black. If the side showing is silver, guess that the other side is silver. There is a two-thirds probability that you are correct. To begin with, two of the coins have the same color on both sides and only one has one side black, one side silver. So there is a two out of three chance of picking a quarter with the same color on both sides, and only a one out of three chance of picking a quarter with two different colored sides. By guessing the same color on the other side of the quarter, one is really betting on the two-thirds probability of picking a quarter with the same color on both sides.

During the late 1850s, Gregor Mendel used the general rules of probability to explain the basic principles of heredity by breeding green peas in planned experiments. A heritable feature is called a character, and variants of a character are called traits. Traits occur because of different versions of genes on the chromosomes of every cell of a living organism. Different versions of genes are called alleles. For most genes, two alleles exist, one dominant over the other. Each organism has two alleles. If an organism has two different alleles for the same gene, the dominant allele will be expressed in the phenotype of the organism, while the recessive allele will not. The phenotype of an organism is its physical appearance; in Mendel's peas, they could either have a round shape (S) or a wrinkled shape (s) (1). Alleles are passed on from parents to offspring after they are bred. Each parent will pass on one of its two alleles. Which allele is passed on depends on probability. For seed shape, a parent could have either two dominant alleles (SS) for round shape or two recessive alleles (ss) for wrinkled shape; this condition is called homozygous. If a pea plant has one of each allele (Ss), in which case the round shape will be expressed, it is called heterozygous. Homozygous pea plants will only pass on their one allele, so the probability of them passing on that allele is 1. Heterozygous pea plants, on the other hand, will have a probability of 1/2 of passing on the dominant allele (S), and a probability of 1/2 of passing on the recessive allele (s).

Mendel was able to show how these alleles are passed on by performing a breeding experiment with true-breeding pea plants. True-breeding means the plants are homozygous for a particular trait, in this case, seed shape. He began by crossing a pea plant that was homozygous for round seeds (SS) with a pea plant that was homozygous for wrinkled seeds (ss). All of the offspring in the first generation (F1) will have round seeds, but be heterozygous (Ss). All of the offspring are heterozygous because each parent passes on one allele, so the offspring gets a dominant allele from one parent and a recessive allele from the other. The pattern of inheritance is then shown when two of the heterozygous plants are crossed (1). Two heterozygous (Ss) plants are crossed and the result is a 3 to 1 ratio of round seeds to wrinkled seeds. This phenomenon can be explained by using the rule of multiplication and the rule of addition to predict which characteristics the second offspring generation (F2) will exhibit. The rule of multiplication involves computing the probability for each independent event, then multiplying these individual probabilities to obtain the overall probability of these events occurring together. The rule of addition is the sum of separate probabilities if an event can occur in more than one way.

The probability that the first pea plant will pass on the dominant allele is 1/2. The probability that the second pea plant passes on the dominant allele is also 1/2. So 1/2 × 1/2 = 1/4: one fourth of the second generation of plants will be homozygous dominant for round seeds (SS). One fourth will also be homozygous recessive (ss), because the probability is the same for the recessive allele, 1/2 × 1/2. As previously stated, the probability of the first plant passing on the dominant allele is 1/2, and the probability of the second plant passing on the recessive allele is 1/2. Conversely, the first plant could give the recessive allele and the second plant could give the dominant allele, which will also produce a heterozygous plant. Therefore, the proportion of heterozygous plants (Ss) for round seed shape will be (1/2 × 1/2) + (1/2 × 1/2) = 1/2. The 1/4 for homozygous dominant (SS) plus the 1/2 for heterozygous (Ss) shows that 3/4 will have the dominant allele (S) and have round seed shape. The remaining 1/4 will have wrinkled seeds because they will be homozygous recessive (ss). This relationship can be seen more easily through the use of a Punnett square (1). Mendel performed this experiment with hundreds of pea plants while tracking the distribution of 6 other traits, such as flower color. With each trait the ratio was 3 to 1, dominant to recessive, as expected by the rules of probability. These same principles of probability are used today in genetics to solve more complicated problems involving many genes with more difficult modes of expression.
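The Ss × Ss cross can be reproduced in a few lines of code. The Python sketch below (hypothetical, added for illustration) applies the rule of multiplication to each pair of transmitted alleles and the rule of addition to pool them into genotypes, recovering the 1:2:1 genotype ratio and the 3:1 phenotype ratio.

```python
from fractions import Fraction
from itertools import product

half = Fraction(1, 2)   # a heterozygous parent passes S or s with probability 1/2

genotypes = {}
for a, b in product("Ss", repeat=2):       # one allele from each parent
    g = "".join(sorted((a, b)))            # sS and Ss are the same genotype
    genotypes[g] = genotypes.get(g, 0) + half * half   # rule of multiplication,
                                                       # summed: rule of addition

print(genotypes)                            # SS: 1/4, Ss: 1/2, ss: 1/4
print(genotypes["SS"] + genotypes["Ss"])    # round-seed phenotype: 3/4
```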

Probability is a very interesting branch of mathematics because it is used to predict the likelihood of random events. It is a deductive science that studies uncertain quantities related to random events (2). Many people seem to enjoy the ideas of probability because it is related to so many games that usually involve money. By playing the odds right, someone could win big, either at a card table or on Wall Street. Probability makes random events look like very predictable ones.

www.ideafinder.com/history/inventors/pascal.htm
www.facstaff.bucknell.edu/udaepp/090/w3/matthewr.htm
www.highbeam.com/doc/1P3-1515425351.html

The development of probability


Posted May 1st, 2008 by hauke

Probability theory was one of the last major areas of mathematics to be developed; its beginnings are usually dated to correspondence between the mathematicians Fermat and Pascal in the 1650s concerning some interesting problems that arose from gambling.

Why Greek, Roman or medieval philosophers and mathematicians hadn't developed probability theory is an area of active debate. Several ideas have been suggested; none of them, however, seems quite satisfactory (Hacking 1975 [1]). One explanation may be that in an age where thunder strikes because a god wills it, and where heroes cannot escape their fate no matter how they try, the ideas of uncertainty or risk have a different meaning than they do today. The sociologist Beck, for example, sees this as a reason why risk, and controlling risk, is such a marked feature of modern society but was almost completely absent before (Beck 2007 [2]). People no longer see the work of fate or the gods in disasters such as war and famine, but instead view them as risks that through the right actions may be averted or at least managed. Although this may deal with the difference between contemporary and ancient society, around the time probability theory started to develop a mostly deterministic world view was beginning to dominate thinking, so on its own this idea is not convincing.

Another reason may have been the fact that the Roman and Greek notation system for numbers was particularly unsuited for dealing with probability, although the activity that gave the original impetus to the development of probability theory (and which continues to shape it today), gambling, was around in those days. One of the fundamental developments early on that led to probability theory was the assumption of equally likely cases (see below on the classical interpretation), and this was possible only once regular dice had replaced the more asymmetrical knuckle-bones used by Roman gamblers. Hacking (1975), however, rejects these convenient explanations and concludes that the scientific study of probability could not have taken place as the very concept itself had not been developed.

So in parallel with the development of the mathematical aspects of probability, mathematicians have had to ask themselves what it actually is that they describe. The question of what precisely probability is, although it is a philosophical question, is not philosophical in the colloquial sense of being of academic interest only. Unlike philosophy in many other areas, this question can have important consequences for the relevant mathematical content, the areas of application, and the very statements about the world that are permissible. In fact, one enduring and still acrimonious split among statisticians, that between the (subjective) Bayesians and the frequentists, has as one of its roots the philosophical interpretation of what we mean by probability.

http://understandinguncertainty.org/node/146


A simple example
If we assemble a deck of 52 playing cards and no jokers, and draw a single card from the deck, then the sample space is a 52-element set, as each individual card is a possible outcome. An event, however, is any subset of the sample space, including any single-element set (an elementary event, of which there are 52, representing the 52 possible cards drawn from the deck), the empty set (an impossible event, defined to have probability zero) and the sample space itself (the entire set of 52 cards), which is defined to have probability one. Other events are proper subsets of the sample space that contain multiple elements. So, for example, potential events include:

[Figure: A Venn diagram of an event. B is the sample space and A is an event; by the ratio of their areas, the probability of A is approximately 0.4.]

"Red and black at the same time without being a joker" (0 elements), "The 5 of Hearts" (1 element), "A King" (4 elements), "A Face card" (12 elements), "A Spade" (13 elements), "A Face card or a red suit" (32 elements), "A card" (52 elements).

Since all events are sets, they are usually written as sets (e.g. {1, 2, 3}) and represented graphically using Venn diagrams. Venn diagrams are particularly useful for representing events, because the probability of an event can be identified with the ratio of the area of the event to the area of the sample space. (Indeed, each of the axioms of probability, and the definition of conditional probability, can be represented in this fashion.)
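To make the card example concrete, here is a minimal Python sketch (the helper name prob and the rank/suit encoding are ours, not part of the article) that builds the 52-card sample space and computes event probabilities as the ratio of the event's size to the sample space's size:

from fractions import Fraction
from itertools import product

# Build the 52-card sample space as (rank, suit) pairs.
ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['Spades', 'Hearts', 'Diamonds', 'Clubs']
sample_space = set(product(ranks, suits))

def prob(event):
    """P(event) = |event| / |sample space| for equally likely outcomes."""
    return Fraction(len(event), len(sample_space))

# Events are simply subsets of the sample space.
kings = {(r, s) for (r, s) in sample_space if r == 'K'}
faces = {(r, s) for (r, s) in sample_space if r in ('J', 'Q', 'K')}
spades = {(r, s) for (r, s) in sample_space if s == 'Spades'}
reds = {(r, s) for (r, s) in sample_space if s in ('Hearts', 'Diamonds')}

print(prob(kings))         # 4/52  -> 1/13
print(prob(faces))         # 12/52 -> 3/13
print(prob(spades))        # 13/52 -> 1/4
print(prob(faces | reds))  # "face card or red suit": 32/52 -> 8/13

Note how the union operator | directly expresses the event "a face card or a red suit" with its 32 elements.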

[edit] Events in probability spaces


Defining all subsets of the sample space as events works well when there are only finitely many outcomes, but gives rise to problems when the sample space is infinite. For many standard probability distributions, such as the normal distribution, the sample space is the set of real numbers or some subset of the real numbers. Attempts to define probabilities for all subsets of the real numbers run into difficulties when one considers 'badly behaved' sets, such as those that are nonmeasurable. Hence, it is necessary to restrict attention to a more limited family of subsets. For the standard tools of probability theory, such as joint and conditional probabilities, to work, it is necessary to use a σ-algebra, that is, a family of subsets that contains the sample space and is closed under complementation and countable unions (and hence under countable intersections). The most natural choice is the Borel σ-algebra, generated from unions and intersections of intervals. However, the larger class of Lebesgue measurable sets proves more useful in practice.

In the general measure-theoretic description of probability spaces, an event may be defined as an element of a selected σ-algebra of subsets of the sample space. Under this definition, any subset of the sample space that is not an element of the σ-algebra is not an event, and does not have a probability. With a reasonable specification of the probability space, however, all events of interest will be elements of the σ-algebra.
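The closure properties are hard to illustrate on the real line, but on a finite sample space they can be checked directly. The sketch below is illustrative only (the function name and the example family are ours); on a finite space, closure under countable unions reduces to closure under finite unions:

from itertools import combinations

# A sample space and a candidate family of subsets (as frozensets).
S = frozenset({1, 2, 3, 4})
family = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), S}

def is_sigma_algebra(S, family):
    """Check that the family contains S and is closed under
    complementation and (finite) union."""
    if S not in family:
        return False
    for A in family:
        if S - A not in family:      # closed under complementation
            return False
    for A, B in combinations(family, 2):
        if A | B not in family:      # closed under union
            return False
    return True

print(is_sigma_algebra(S, family))                         # True
print(is_sigma_algebra(S, {frozenset(), frozenset({1})}))  # False: S missing

Under this definition, only the subsets in the family are events and receive probabilities, mirroring the measure-theoretic restriction described above.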

Probability
Problem: A spinner has 4 equal sectors colored yellow, blue, green and red. What are the chances of landing on blue after spinning the spinner? What are the chances of landing on red?

Solution: The chances of landing on blue are 1 in 4, or one fourth. The chances of landing on red are 1 in 4, or one fourth.

This problem asked us to find some probabilities involving a spinner. Let's look at some definitions and examples from the problem above.


Definition
An experiment is a situation involving chance or probability that leads to results called outcomes. An outcome is the result of a single trial of an experiment. An event is one or more outcomes of an experiment. Probability is the measure of how likely an event is.

Example
In the problem above, the experiment is spinning the spinner. The possible outcomes are landing on yellow, blue, green or red. One event of this experiment is landing on blue. The probability of landing on blue is one fourth.

In order to measure probabilities, mathematicians have devised the following formula for finding the probability of an event.

Probability Of An Event
P(A) = (the number of ways event A can occur) / (the total number of possible outcomes)

The probability of event A is the number of ways event A can occur divided by the total number of possible outcomes. Let's take a look at a slight modification of the problem from the top of the page.

Experiment 1: A spinner has 4 equal sectors colored yellow, blue, green and red. After spinning the spinner, what is the probability of landing on each color?

Outcomes: The possible outcomes of this experiment are yellow, blue, green, and red.

Probabilities:
P(yellow) = (# of ways to land on yellow) / (total # of colors) = 1/4
P(blue) = (# of ways to land on blue) / (total # of colors) = 1/4
P(green) = (# of ways to land on green) / (total # of colors) = 1/4
P(red) = (# of ways to land on red) / (total # of colors) = 1/4
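The counting formula translates directly into code. A minimal sketch (the function name probability is ours, not from the tutorial), using exact fractions so the results match the hand calculation:

from fractions import Fraction

def probability(favorable, total):
    """P(A) = (# of ways A can occur) / (total # of possible outcomes)."""
    return Fraction(favorable, total)

# Experiment 1: four equal sectors, one way to land on each color.
colors = ['yellow', 'blue', 'green', 'red']
for color in colors:
    print(color, probability(1, len(colors)))  # each color: 1/4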

Experiment 2:

A single 6-sided die is rolled. What is the probability of each outcome? What is the probability of rolling an even number? Of rolling an odd number?

Outcomes: The possible outcomes of this experiment are 1, 2, 3, 4, 5 and 6.

Probabilities:
P(1) = (# of ways to roll a 1) / (total # of sides) = 1/6
P(2) = (# of ways to roll a 2) / (total # of sides) = 1/6
P(3) = (# of ways to roll a 3) / (total # of sides) = 1/6
P(4) = (# of ways to roll a 4) / (total # of sides) = 1/6
P(5) = (# of ways to roll a 5) / (total # of sides) = 1/6
P(6) = (# of ways to roll a 6) / (total # of sides) = 1/6
P(even) = (# of ways to roll an even number) / (total # of sides) = 3/6 = 1/2
P(odd) = (# of ways to roll an odd number) / (total # of sides) = 3/6 = 1/2
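A quick enumeration confirms these values. In this hypothetical sketch (not part of the tutorial), an event is represented as the set of outcomes it contains, so P(E) = n(E) / n(S):

from fractions import Fraction

sides = {1, 2, 3, 4, 5, 6}

def p(event, space=sides):
    # Equally likely outcomes: P(E) = n(E) / n(S).
    return Fraction(len(event & space), len(space))

even = {n for n in sides if n % 2 == 0}
odd = sides - even

print(p({3}))   # a single outcome: 1/6
print(p(even))  # the event "even": 3/6 = 1/2
print(p(odd))   # the event "odd":  3/6 = 1/2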

Experiment 2 illustrates the difference between an outcome and an event. A single outcome of this experiment is rolling a 1, or rolling a 2, or rolling a 3, and so on. Rolling an even number (2, 4 or 6) is an event, and rolling an odd number (1, 3 or 5) is also an event.

In Experiment 1 the probability of each outcome is always the same: the probability of landing on each color of the spinner is always one fourth. In Experiment 2, the probability of rolling each number on the die is always one sixth. In both of these experiments, the outcomes are equally likely to occur. Let's look at an experiment in which the outcomes are not equally likely.

Experiment 3: A glass jar contains 6 red, 5 green, 8 blue and 3 yellow marbles. If a single marble is chosen at random from the jar, what is the probability of choosing a red marble? A green marble? A blue marble? A yellow marble?

Outcomes: The possible outcomes of this experiment are red, green, blue and yellow.

Probabilities:
P(red) = (# of ways to choose red) / (total # of marbles) = 6/22 = 3/11
P(green) = (# of ways to choose green) / (total # of marbles) = 5/22
P(blue) = (# of ways to choose blue) / (total # of marbles) = 8/22 = 4/11
P(yellow) = (# of ways to choose yellow) / (total # of marbles) = 3/22

The outcomes in this experiment are not equally likely to occur. You are more likely to choose a blue marble than any other color. You are least likely to choose a yellow marble.
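When the named outcomes are not equally likely, one can still reduce the problem to equally likely cases by treating each individual marble as an outcome. A minimal sketch under that assumption (the variable names are ours):

from fractions import Fraction

# Model each marble as an equally likely outcome by counting per color.
jar = {'red': 6, 'green': 5, 'blue': 8, 'yellow': 3}
total = sum(jar.values())  # 22 marbles in all

for color, count in jar.items():
    print(color, Fraction(count, total))
# red 3/11, green 5/22, blue 4/11, yellow 3/22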

Experiment 4:

Choose a number at random from 1 to 5. What is the probability of each outcome? What is the probability that the number chosen is even? What is the probability that the number chosen is odd?

Outcomes: The possible outcomes of this experiment are 1, 2, 3, 4 and 5.

Probabilities:
P(1) = (# of ways to choose a 1) / (total # of numbers) = 1/5
P(2) = (# of ways to choose a 2) / (total # of numbers) = 1/5
P(3) = (# of ways to choose a 3) / (total # of numbers) = 1/5
P(4) = (# of ways to choose a 4) / (total # of numbers) = 1/5
P(5) = (# of ways to choose a 5) / (total # of numbers) = 1/5
P(even) = (# of ways to choose an even number) / (total # of numbers) = 2/5
P(odd) = (# of ways to choose an odd number) / (total # of numbers) = 3/5

The outcomes 1, 2, 3, 4 and 5 are equally likely to occur as a result of this experiment. However, the events even and odd are not equally likely to occur, since there are 3 odd numbers and only 2 even numbers from 1 to 5.

Summary:

The probability of an event is the measure of the chance that the event will occur as a result of an experiment. The probability of an event A is the number of ways event A can occur divided by the total number of possible outcomes. The probability of an event A, symbolized by P(A), is a number between 0 and 1, inclusive, that measures the likelihood of an event in the following way:

If P(A) > P(B) then event A is more likely to occur than event B. If P(A) = P(B) then events A and B are equally likely to occur.

Probability Tutorial: Basic Concepts


Random Experiment: An experiment is said to be a random experiment if its outcome cannot be predicted with certainty. Example: if a coin is tossed, we cannot say whether a head or a tail will appear, so it is a random experiment.

Sample Space: The set of all possible outcomes of an experiment is called the sample space. It is denoted by S, and the number of its elements is written n(S). Example: in throwing a die, the number that appears on top is one of 1, 2, 3, 4, 5, 6, so here S = {1, 2, 3, 4, 5, 6} and n(S) = 6. Similarly, in the case of a coin, S = {Head, Tail} or {H, T} and n(S) = 2.

The elements of the sample space are called sample points or event points.

Event: Every subset of a sample space is an event. It is denoted by E. Example: in throwing a die, S = {1, 2, 3, 4, 5, 6}, and the appearance of an even number is the event E = {2, 4, 6}. Clearly E is a subset of S. http://www.tutors4you.com/probabilitytutorial.htm
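The S / n(S) notation maps directly onto sets in code; a small illustrative sketch (variable names are ours):

from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}  # sample space for one throw of a die
E = {2, 4, 6}           # event: an even number appears

assert E <= S                    # every event is a subset of the sample space
print(len(S), len(E))            # n(S) = 6, n(E) = 3
print(Fraction(len(E), len(S)))  # P(E) = n(E) / n(S) = 1/2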
