
Sample Space: In probability theory, the sample space of an experiment or random trial is the set of all possible outcomes or results of that experiment.[1] A sample space is usually denoted using set notation, and the possible outcomes are listed as elements in the set. It is common to refer to a sample space by the labels S, Ω, or U (for "universal set"). For example, if the experiment is tossing a coin, the sample space is typically the set {head, tail}. For tossing two coins, the corresponding sample space would be {(head, head), (head, tail), (tail, head), (tail, tail)}. For tossing a single six-sided die, the typical sample space is {1, 2, 3, 4, 5, 6} (in which the result of interest is the number of pips facing up).

Experiment: An experiment is an orderly procedure carried out with the goal of verifying, refuting, or establishing the validity of a hypothesis. In the scientific method, an experiment is an empirical method that arbitrates between competing models or hypotheses. Experimentation is also used to test existing theories or new hypotheses in order to support or disprove them.

Conditional Probability: In probability theory, a conditional probability is the probability that an event will occur given that another event is known to occur or to have occurred. If the events are A and B respectively, this is read as "the probability of A given B". It is commonly denoted by P(A|B), or sometimes PB(A). P(A|B) may or may not be equal to P(A), the probability of A. If they are equal, A and B are said to be independent. For example, if a coin is flipped twice, "the outcome of the second flip" is independent of "the outcome of the first flip". In the Bayesian interpretation of probability, the conditioning event is interpreted as evidence for the conditioned event: P(A) is the probability of A before accounting for evidence E, and P(A|E) is the probability of A after accounting for evidence E. (In fact, this is also the frequentist interpretation.)

Random Experiment: A random experiment is a process whose outcome is uncertain before the experiment is run.
We usually assume that the experiment can be repeated indefinitely under essentially the same conditions. A basic outcome is a possible outcome of a random experiment. The structure of a random experiment is characterized by three objects: the sample space S; the set of events; the probability measure.
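The definitions above can be illustrated with a short Python sketch (not part of the original notes): it enumerates the sample space for tossing two coins and checks that the outcome of the second flip is independent of the first, as claimed in the conditional probability section. The event names A and B are chosen here for illustration.

```python
from itertools import product

# Sample space S for tossing two coins, as in the text above.
S = list(product(["head", "tail"], repeat=2))

# Event A: the second flip is a head; event B: the first flip is a head.
A = {s for s in S if s[1] == "head"}
B = {s for s in S if s[0] == "head"}

def prob(event):
    # Probability of an event, assuming equally likely basic outcomes.
    return len(event) / len(S)

p_a = prob(A)                      # P(A) = 2/4 = 0.5
p_a_given_b = len(A & B) / len(B)  # P(A|B) = 1/2 = 0.5
print(p_a, p_a_given_b)  # equal, so A is independent of B
```

Since P(A|B) = P(A), conditioning on the first flip tells us nothing about the second.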

Some probabilities cannot be calculated by just looking at the situation. For example, you cannot work out the probability of winning a football match by assuming that win, lose and draw are equally likely, but we can look at previous results in similar matches and use these results to estimate the probability of winning.

Example 1 The Bumbleton and Stickton village football teams have played each other 50 times. Bumbleton have won 10 times, Stickton have won 35 times, and the teams have drawn 5 times. We want to estimate the probability that Stickton will win the next match. So far, Stickton have won 35 out of the 50 matches. We can write this as a fraction, which is 35/50. This fraction isn't the probability of Stickton winning, but it is an estimate of that probability. We say that the relative frequency of Stickton winning is 35/50.
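The arithmetic of Example 1 is small enough to do by hand, but as a sketch (not part of the original notes) it looks like this in Python:

```python
# Head-to-head record from Example 1 above.
stickton_wins = 35
total_matches = 50

# Relative frequency of a Stickton win, used as an estimate of the probability.
estimate = stickton_wins / total_matches
print(estimate)  # 0.7
```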

Relative frequency We calculate the relative frequency of an outcome using this formula:

relative frequency of an outcome = (number of trials in which the outcome occurs) / (total number of trials)

We can estimate the probability of a particular outcome by calculating the relative frequency. The estimate of probability becomes more accurate if more trials are carried out.
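The claim that more trials give a more accurate estimate can be seen in a small simulation (a sketch, not part of the original notes; the fixed seed and the choice of a fair coin are assumptions made here for reproducibility):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def relative_frequency(n_trials, p=0.5):
    """Flip a coin with P(heads) = p, n_trials times; return the
    relative frequency of heads."""
    heads = sum(1 for _ in range(n_trials) if random.random() < p)
    return heads / n_trials

# The estimate tends to get closer to the true value 0.5 as trials increase.
for n in (10, 100, 10000):
    print(n, relative_frequency(n))
```

With only 10 flips the estimate can be far from 0.5; with 10,000 it is typically within about a percentage point.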

Statistical Independence If Pr(A|B) = Pr(A), we say that A is statistically independent of B: whether B happens makes no difference to how often A happens. Since Pr(A ∩ B) = Pr(A|B) Pr(B), if A is independent of B, then Pr(A ∩ B) = Pr(A) Pr(B). If this holds, though, then B is also independent of A:

Pr(B|A) = Pr(A ∩ B) / Pr(A) = Pr(A) Pr(B) / Pr(A) = Pr(B)

so we can just say "A and B are independent". Note that events can be logically or physically independent but still statistically dependent. Let A = "scored above 700 on the math SAT" and B = "attends CMU". These are logically independent (neither one implies the other), but statistically quite dependent, because Pr(A|B) > Pr(A). Statistical independence means one event conveys no information about the other; statistical dependence means there is some information. Making this precise is the subject of information theory. Information theory is my area of research, so if I start talking about it I won't shut up; so I won't start. Statistically independent is not the same as mutually exclusive: if A and B are mutually exclusive, then they can't be independent, unless one of them has probability 0 to start with, since mutual exclusivity forces Pr(A ∩ B) = 0, which can equal Pr(A) Pr(B) only if Pr(A) = 0 or Pr(B) = 0. "Mutually exclusive" is definitely informative: if one happens, then the other can't. (They could still both not happen, unless they're jointly exhaustive.)
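The contrast between independence and mutual exclusivity can be checked exactly on the die-roll sample space from earlier. This is a sketch, not part of the original notes; the particular events ("even", "at most 4", "odd") are chosen here because their probabilities work out cleanly, and exact fractions avoid floating-point comparisons:

```python
from fractions import Fraction

# One roll of a fair six-sided die; exact arithmetic via Fraction.
S = {1, 2, 3, 4, 5, 6}

def prob(event):
    return Fraction(len(event & S), len(S))

# Independent events: A = "even", B = "at most 4".
# Pr(A ∩ B) = 2/6 = 1/3 and Pr(A) Pr(B) = (1/2)(2/3) = 1/3.
A = {2, 4, 6}
B = {1, 2, 3, 4}
print(prob(A & B), prob(A) * prob(B))  # 1/3 1/3

# Mutually exclusive events: C = "even", D = "odd".
# Pr(C ∩ D) = 0 but Pr(C) Pr(D) = 1/4, so C and D are dependent.
C = {2, 4, 6}
D = {1, 3, 5}
print(prob(C & D), prob(C) * prob(D))  # 0 1/4
```

Knowing C occurred tells you D did not, which is exactly the "definitely informative" point above.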
