
CHAPTER 3

Probability Theory
Chapter Structure
3.1 Introduction
3.2 Probability Theory
3.2.1 Basic terminology
3.2.2 The concept of probability
3.2.3 Probability laws
3.3 Probability Distributions
3.3.1 Discrete and continuous random variables
3.3.2 Discrete distributions
3.3.2.1 Binomial distribution
3.3.2.2 Poisson distribution
3.3.2.3 Negative binomial or Pascal distribution
3.3.2.4 Geometric distribution
3.3.2.5 Hypergeometric distribution
3.3.3 Continuous distributions
3.3.3.1 Uniform distribution
3.3.3.2 Gamma distribution
3.3.3.3 Exponential distribution
3.3.3.4 Normal distribution
3.4 Hypothesis Testing
3.4.1 Null and alternate hypothesis
3.4.2 Terminology used in hypothesis testing
3.4.2.1 Errors in sampling
3.4.2.2 Critical region and level of significance
3.4.2.3 One-tailed and two-tailed tests
3.4.2.4 Critical values or significant values
3.4.3 Procedure for testing of hypothesis
3.4.4 Z-test
3.4.5 t-test
3.4.6 Chi-square test
3.4.7 F-test
3.5 Analysis of Variance (ANOVA)
3.5.1 One-way analysis of variance
3.5.2 Two-way analysis of variance
3.6 Summary

Learning Objectives
In this chapter, you will learn about:

Basics of probability theory
Probability distributions: discrete and continuous
Hypothesis testing: basic concepts and various tests
Analysis of variance (ANOVA)

3.1 Introduction
Probability theory is applied in the solution of social, economic, political, engineering
and business problems. In fact, probability has become a part of our everyday life: in
personal and management decisions we face uncertainty and use probability theory. Various
quality-related processes are stochastic in nature; they are modeled using probability
theory, and inferences are drawn based on statistical measures. In this chapter we learn about
basic probability terminology, related laws, probability distributions, hypothesis testing and
ANOVA analysis.

3.2 Probability Theory

The word probability, or chance, is very commonly used in day-to-day
conversation, and generally people have a vague idea about its meaning.
However, in mathematics and statistics we try to present conditions under
which we can make sensible numerical statements about uncertainty and
apply definite numerical values of probabilities and expectations.
3.2.1 Basic Terminology
For a better understanding of probability concepts, let us introduce some
basic terminology.
Experiment: The term experiment refers to an act which can be repeated
under some given conditions.
Random Experiment: Random experiments are those experiments whose
results depend on chance, such as the tossing of a coin or the throwing of
dice. The results of a random experiment are called outcomes. If in an
experiment all the possible outcomes are known in advance and none of
the outcomes can be predicted with certainty, then such an experiment is
called a random experiment, and the outcomes are called events or chance
events. Events are generally denoted by capital letters A, B, C, etc.
Sample Space: A set S that consists of all possible outcomes of a random
experiment is called a sample space, and each outcome is called a sample
point.
Events and their types: An event is a subset A of the sample space S,
i.e., it is a set of possible outcomes. If the outcome of an experiment
is an element of A, we say that the event A has occurred. Various
types of events used in probability theory are described here. An event
consisting of a single point of S is often called a simple or elementary
event. An event consisting of the joint occurrence of two or more events
is called a compound event. An event whose occurrence is inevitable
when a certain random experiment is performed is called a certain or
sure event. An event which can never occur when a certain random
experiment is performed is called an impossible event. For example, in a
toss of a balanced die the occurrence of any one of the numbers
1, 2, 3, 4, 5, 6 is a sure event, while the occurrence of 8 is an impossible
event. An event which may or may not occur while performing a certain
random experiment is known as a random event. The occurrence of 2 is a
random event in the above experiment of tossing a die.
Mutually Exclusive Events: Two events are said to be mutually exclusive
when both cannot happen simultaneously in a single trial or, in other
words, the occurrence of any one of them precludes the occurrence of the
other. For example, if a single coin is tossed, either head or tail can be
up; both cannot be up at the same time. It may be pointed out that
mutually exclusive events can always be connected by the words
"either ... or". Events A, B, C are mutually exclusive only if either A or B
or C can occur.

Independent Events: Two or more events are said to be independent when
the outcome of one does not affect, and is not affected by, the others.
For example, if a coin is tossed twice, the result of the second throw is
in no way affected by the result of the first throw. Similarly, the results
obtained by throwing a die are independent of the results obtained by
drawing an ace from a pack of cards.
Dependent Events: Dependent events are those in which the occurrence or
non-occurrence of one event in any one trial affects the probability of
other events in other trials. For example, if a card is drawn from a pack
of playing cards and is not replaced, this alters the probability that the
second card drawn is, say, an ace. Similarly, the probability of drawing
a queen from a pack of 52 cards is 4/52 or 1/13. But if the card drawn
(a queen) is not replaced in the pack, the probability of drawing a queen
again is 3/51 (because the pack now contains only 51 cards, of which 3
are queens).


Equally Likely Events: Events are said to be equally likely when one does
not occur more often than the others. For example, if an unbiased coin
or die is thrown, each face may be expected to be observed approximately
the same number of times in the long run.
Exhaustive Events: Events are said to be exhaustive when their totality
includes all the possible outcomes of a random experiment. For example,
while tossing a die, the possible outcomes are 1, 2, 3, 4, 5 and 6, and
hence the exhaustive number of cases is 6. If two dice are thrown once,
the sample space of the experiment consists of 36 ordered pairs (6²).
Complementary Events: Let there be two events A and B. A is called the
complementary event of B (and vice versa) if A and B are mutually
exclusive and exhaustive. For example, when a die is thrown, the
occurrence of an even number (2, 4, 6) and of an odd number (1, 3, 5) are
complementary events. The simultaneous occurrence of two events A and B
is generally written as AB.
Venn Diagram: Sample spaces and events, particularly relationships among
events, are often depicted by means of Venn diagrams, in which the sample
space is represented by a rectangle, while events are represented by
regions within the rectangle, usually by circles or parts of circles. In
Fig. 3.1 events A and B are mutually exclusive; that is, the two sets have
no elements in common (the two events cannot both occur). When A and B
are mutually exclusive, we write A ∩ B = ∅, where ∅ denotes the empty
set, which has no elements at all.

Fig. 3.1 Venn diagram showing two mutually exclusive events


3.2.2 The Concept of Probability
In any random experiment there is always uncertainty as to whether a
particular event will or will not occur. As a measure of the chance, or
probability, with which we can expect the event to occur, it is convenient
to assign a number between 0 and 1. If an event can occur in h different
ways out of a total number of n possible ways, all of which are equally
likely, then the probability of the event is h/n.

Probability axioms:

0 ≤ P(A) ≤ 1;  P(S) = 1;  P(∅) = 0;  P(Aᶜ) = 1 − P(A)

3.2.3 Probability Laws

Addition law
The addition law states that if two events A and B are mutually exclusive,
the probability of occurrence of either A or B is the sum of the individual
probabilities of A and B. Symbolically,

P(A or B) = P(A ∪ B) = P(A) + P(B)    (3.1)

In case A and B are not mutually exclusive,

P(A or B) = P(A ∪ B) = P(A) + P(B) − P(A ∩ B)    (3.2)

Multiplication law
If two events A and B are independent, the probability that both will
occur is equal to the product of their individual probabilities.
Symbolically,

P(A ∩ B) = P(A) P(B)    (3.3)
Conditional Probability
Let A and B be two events such that P(A) > 0. The conditional probability
of B given that A has occurred is denoted by P(B | A). Since A is known to
have occurred, it becomes the new sample space, replacing the original S.
The conditional probability is defined as

P(B | A) = P(A ∩ B) / P(A),  or  P(A ∩ B) = P(A) P(B | A)    (3.4)

For any three events A₁, A₂, A₃ we have

P(A₁ ∩ A₂ ∩ A₃) = P(A₁) P(A₂ | A₁) P(A₃ | A₁ ∩ A₂)    (3.5)

In words, the probability that A₁ and A₂ and A₃ all occur is equal to the
probability that A₁ occurs, times the probability that A₂ occurs given
that A₁ has occurred, times the probability that A₃ occurs given that both
A₁ and A₂ have occurred. The result can easily be generalized to n events.
Total Probability Law
If an event A must result in one of the mutually exclusive events
A₁, A₂, ..., Aₙ, then

P(A) = P(A₁) P(A | A₁) + P(A₂) P(A | A₂) + ... + P(Aₙ) P(A | Aₙ)    (3.6)

Bayes' Theorem
Suppose that A₁, A₂, ..., Aₖ, ..., Aₙ are mutually exclusive events whose
union is the sample space S, i.e., one of the events must occur. Then if A
is any event which has an intersection with these mutually exclusive
events, according to Bayes' theorem:

P(Aₖ | A) = P(Aₖ) P(A | Aₖ) / Σ_{i=1}^{n} P(Aᵢ) P(A | Aᵢ)    (3.7)

This enables us to find the probabilities of the various events
A₁, A₂, ..., Aₙ that can cause A to occur.
Example 3.1: A manufacturer has three machine operators A, B and C.
The first operator A produces 1% defective items, whereas the other two
operators B and C produce 5% and 7% defective items, respectively. A is
on the job for 50% of the time, B for 30% of the time and C for 20% of the
time. A defective item is produced. What is the probability that it was
produced by A?
Solution: Let E₁, E₂, E₃ and A be the events defined as follows:

E₁: the item is manufactured by operator A,
E₂: the item is manufactured by operator B,
E₃: the item is manufactured by operator C, and
A: the item is defective.

P(E₁) = probability that the item drawn is manufactured by operator A = 50/100
P(E₂) = probability that the item drawn is manufactured by operator B = 30/100
P(E₃) = probability that the item drawn is manufactured by operator C = 20/100

P(A | E₁) = probability that an item manufactured by operator A is defective = 1/100
Similarly, P(A | E₂) = 5/100 and P(A | E₃) = 7/100.

Now, the required probability is the probability that the item is
manufactured by operator A given that the item drawn is defective:

P(E₁ | A) = P(E₁) P(A | E₁) / [P(E₁) P(A | E₁) + P(E₂) P(A | E₂) + P(E₃) P(A | E₃)]
= (50/100 × 1/100) / (50/100 × 1/100 + 30/100 × 5/100 + 20/100 × 7/100)
= 50 / (50 + 150 + 140) = 50/340 = 5/34 ≈ 0.147
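This posterior can be checked numerically. The following is a minimal Python sketch (an illustration, not part of the original text; the variable names are ours):

# Bayes' theorem check for Example 3.1 (shares and defect rates from the example).
priors = [0.50, 0.30, 0.20]    # P(E1), P(E2), P(E3): fraction of time on the job
rates = [0.01, 0.05, 0.07]     # P(A|E1), P(A|E2), P(A|E3): defect rates
joint = [p * r for p, r in zip(priors, rates)]
print(joint[0] / sum(joint))   # P(E1|A) by Eq. (3.7): 0.1470... = 5/34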

3.3 Probability Distributions

In this section, before introducing probability distributions, we define
the random variable and its two types, discrete and continuous.
Subsequently, the probability distributions for discrete and continuous
random variables are explained, and some important probability
distributions of each category are discussed.
When a random experiment is performed, the value of some numerical
quantity determined by the result is often of interest, rather than all of
the details of the experimental result. For instance, in tossing two dice
we are often interested in the sum of the two dice and are not really
concerned about the individual outcomes. That is, we may be interested in
knowing that the sum is 8 and not be concerned over whether the actual
outcome was (2,6) or (3,5) or (4,4) or (5,3) or (6,2). These quantities of
interest that are determined by the result of the experiment are known as
random variables.
3.3.1 Discrete and continuous random variables
A random variable whose set of possible values is a sequence is said to be
discrete. For example, a random variable whose set of possible values is
the set of nonnegative integers is a discrete random variable.
A random variable whose set of possible values is an interval is said to
be continuous. One example is the random variable denoting the lifetime of
a car, when the car's lifetime is assumed to take on any value in some
interval (a, b).
Let P(X) be the probability function of the random variable X. The
probability distribution gives the probabilities taken by the function
P(X) over the values of the random variable, with the condition that the
sum of the probabilities over all possible values of the random variable
in the experiment is one.
Probability distribution for a discrete random variable:
For a discrete random variable X, we define the probability mass function
(pmf) p(x) of X by

p(x) = P{X = x}

If X assumes one of the values x₁, x₂, ..., then p(xᵢ) > 0, i = 1, 2, ...,
and p(x) = 0 for all other values of x, with the condition

Σ_{i} p(xᵢ) = 1

The cumulative distribution function (CDF) F(x) of the discrete random
variable X is defined for any real number x as the probability that the
random variable X takes on a value less than or equal to x. Symbolically,

F(x) = P{X ≤ x}

Probability distribution for a continuous random variable:
For a continuous random variable X, we define the probability density
function (pdf) f(x) of X as the function satisfying

P{a ≤ X ≤ b} = ∫_{a}^{b} f(x) dx

The cumulative distribution function (CDF) F(x) of the continuous random
variable X is defined for any real number x as the probability that the
random variable X takes on a value less than or equal to x. Symbolically,

F(x) = P{X ≤ x} = ∫_{−∞}^{x} f(t) dt,  x ∈ (−∞, ∞)

3.3.2 Discrete Distributions

Some important discrete probability distributions, such as the binomial,
negative binomial (or Pascal), geometric, hypergeometric and Poisson
distributions, which are useful in quality control, are described here.
3.3.2.1 Binomial Distribution
A random variable X has a binomial distribution, and is referred to as a
binomial random variable, if and only if its probability distribution is
given by

P(X = x) = ^{n}C_{x} p^{x} (1 − p)^{n−x},  for x = 0, 1, 2, ..., n    (3.8)

Thus, the number of successes in n trials is a random variable having a
binomial distribution with the parameters n and p.

Fig. 3.2 Probability mass function of the binomial distribution


The mean and variance of the binomial distribution are:

μ = np  and  σ² = np(1 − p)    (3.9)

The binomial distribution holds under the following conditions:
i) Trials are repeated under identical conditions for a fixed number of times.
ii) There are only two mutually exclusive outcomes, viz. success or failure, for each trial.
iii) The probability of success in each trial remains constant and does not change from trial to trial.
iv) The trials are independent, i.e., the probability of an event in any trial is not affected by the results of any other trial.

The binomial distribution is used frequently in quality control. It is the
appropriate probability model for sampling from an infinitely large
population, where p represents the fraction of defective or nonconforming
items in the population. In these applications, x usually represents the
number of nonconforming items found in a random sample of n items.
Example 3.2: The probability that a bulb produced by a factory will fuse
after 150 days of use is 0.05. Find the probability that out of 5 such
bulbs (i) none, (ii) not more than one, (iii) more than one, (iv) at least
one will fuse after 150 days of use.

Solution: Here, p = probability that a bulb will fuse after 150 days of
use = 0.05 = 1/20, and q = 1 − p = 1 − 1/20 = 19/20. Also n = 5.
The number of bulbs that will fuse after 150 days of use is a binomial
random variable X with distribution B(5, 1/20).

i) P(X = 0) = ⁵C₀ (1/20)⁰ (19/20)⁵ = (19/20)⁵

ii) P(0 ≤ X ≤ 1) = P(X = 0) + P(X = 1)
= ⁵C₀ (1/20)⁰ (19/20)⁵ + ⁵C₁ (1/20)¹ (19/20)⁴ = (6/5)(19/20)⁴

iii) P(1 < X ≤ 5) = 1 − P(0 ≤ X ≤ 1) = 1 − (6/5)(19/20)⁴

iv) P(1 ≤ X ≤ 5) = 1 − P(X = 0) = 1 − (19/20)⁵
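The same probabilities can be verified with a short Python sketch; this is an illustration, not part of the original text, and it assumes the scipy library is available.

from scipy.stats import binom

n, p = 5, 0.05
print(binom.pmf(0, n, p))       # (i)   P(X = 0) = (19/20)^5  ≈ 0.7738
print(binom.cdf(1, n, p))       # (ii)  P(X <= 1)             ≈ 0.9774
print(1 - binom.cdf(1, n, p))   # (iii) P(X > 1)              ≈ 0.0226
print(1 - binom.pmf(0, n, p))   # (iv)  P(X >= 1)             ≈ 0.2262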

3.3.2.2 Poisson Distribution

The Poisson distribution is the limiting form of the binomial distribution
under the following conditions:
i) n, the number of trials, becomes indefinitely large, i.e., n → ∞.
ii) p, the probability of success in each trial, is indefinitely small, i.e., p → 0.
iii) np = λ is finite.

A random variable X has a Poisson distribution, and is referred to as a
Poisson random variable, if and only if its probability distribution is
given by

p(x) = λˣ e^{−λ} / x!,  for x = 0, 1, 2, ...    (3.10)

Thus, the number of successes is a random variable having a Poisson
distribution with the parameter λ.

Fig. 3.3 Probability mass function of the Poisson distribution

The number of defective goods manufactured by a factory, or the number of
defects or nonconformities that occur in a unit of product, can be modeled
by the Poisson distribution.
Example 3.3: Suppose 1% of the bulbs produced by a factory are defective.
Find the probability that there are 3 or more defective bulbs in a sample
of 100 items.
Solution: Let p be the probability that a bulb is defective. Then
p = 1/100 = 0.01 and n = 100. Since p is small, we use the Poisson
distribution with λ = np = 100 × 0.01 = 1.
Let X denote the number of defective bulbs. Then

P(X = r) = λʳ e^{−λ} / r!

P(X ≥ 3) = 1 − P(X ≤ 2) = 1 − [P(X = 2) + P(X = 1) + P(X = 0)]

P(X = 2) = (1)² e⁻¹ / 2! ≈ 0.184;  P(X = 1) = (1)¹ e⁻¹ / 1! ≈ 0.368;
P(X = 0) = (1)⁰ e⁻¹ / 0! ≈ 0.368

P(X ≥ 3) = 1 − 0.184 − 0.368 − 0.368 = 1 − 0.92 = 0.08
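As a quick numerical check (an illustrative sketch assuming scipy is available, not part of the original text):

from scipy.stats import poisson

lam = 100 * 0.01                # lambda = np = 1
print(1 - poisson.cdf(2, lam))  # P(X >= 3) ≈ 0.0803, matching the 0.08 above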


3.3.2.3 Negative Binomial or Pascal Distribution
A random variable X has a negative binomial distribution, and is referred
to as a negative binomial random variable, if and only if

p(x) = ^{x−1}C_{k−1} p^{k} (1 − p)^{x−k},  for x = k, k+1, k+2, ...    (3.11)

Thus the number of the trial on which the kth success occurs is a random
variable having a negative binomial distribution with the parameters k
and p.
The mean and variance of the negative binomial distribution are

μ = k/p  and  σ² = (k/p)(1/p − 1)    (3.12)

The negative binomial distribution, like the Poisson distribution, is
sometimes useful as the underlying statistical model for various types of
count data, such as the occurrence of defectives in a unit of product. In
the binomial distribution we fix the sample size and observe the number of
successes, whereas in the negative binomial distribution we fix the number
of successes and observe the sample size required to achieve them.
Since the negative binomial distribution with k = 1 has many important
applications, it is given a special name: the geometric distribution.

Example 3.4: Suppose that 25% of the items taken from the end of a
production line are defective. If the items taken from the line are
checked until 5 defective items are found, what is the probability that
10 items are examined?
Solution: Suppose the occurrence of a defective item is a success. Then we
have to find the probability that there will be (10 − 5) = 5 failures
preceding the 5th success, the probability of success in a trial being
0.25. By the negative binomial probability law, the required probability is

p(10) = ^{10−1}C_{5−1} (0.25)⁵ (1 − 0.25)^{10−5} = ⁹C₄ (0.25)⁵ (0.75)⁵ ≈ 0.0292
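For verification, scipy's nbinom distribution can be used; note that it counts the failures before the k-th success rather than the total number of trials, so the result above corresponds to 5 failures. A minimal sketch (assuming scipy is available):

from scipy.stats import nbinom

# P(10 trials to reach the 5th success) = P(5 failures before the 5th success)
print(nbinom.pmf(5, 5, 0.25))   # ≈ 0.0292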

3.3.2.4 Geometric Distribution

A random variable X has a geometric distribution, and is referred to as a
geometric random variable, if and only if its probability distribution is
given by

p(x) = p(1 − p)^{x−1},  for x = 1, 2, 3, ...    (3.13)

Fig. 3.4 Probability mass function of the geometric distribution

Example 3.5:
A box contains dice numbered from 1 to 38. If Amit always bets that the
outcome will be one of the numbers 1 through 12, what is the probability
that his first win occurs on his fourth bet?
Solution:
Let X be the number of the trial on which the first success (win) occurs.
Using p(x) = p(1 − p)^{x−1} for x = 1, 2, 3, ..., with p = 12/38:

P(X = 4) = (12/38)(26/38)³ ≈ 0.1012
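A one-line numerical check (scipy's geom uses the same support x = 1, 2, 3, ... as Eq. (3.13); the sketch assumes scipy is available):

from scipy.stats import geom

print(geom.pmf(4, 12 / 38))     # P(first win on the 4th bet) ≈ 0.1012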
3.3.2.5 Hypergeometric Distribution

A random variable X has a hypergeometric distribution, and is referred to
as a hypergeometric random variable, if and only if its probability
distribution is given by

p(x) = ^{M}C_{x} · ^{N−M}C_{n−x} / ^{N}C_{n},  for x = 0, 1, 2, ..., n,    (3.14)

with x ≤ M and n − x ≤ N − M.
Thus, there is a finite population consisting of N items, of which some
number M fall into a class of interest. A random sample of n items is
drawn from the population without replacement, and the number of items in
the sample that fall into the class of interest, say x, is a random
variable having a hypergeometric distribution with the parameters n, N
and M.
The mean and variance of the hypergeometric distribution are:

μ = nM/N  and  σ² = (nM/N)(1 − M/N)(N − n)/(N − 1)    (3.15)

The hypergeometric distribution is the appropriate probability model for
selecting a random sample of n items without replacement from a lot of N
items of which M are defective. In these applications, x usually
represents the number of defective items found in the sample.
Example 3.6: Suppose 20 major computer companies operate in India and 8 of
them are located in Bangalore. If three computer companies are selected
randomly from the entire list, what is the probability that one or more of
the selected companies are located in Bangalore?
Solution:
N = 20, n = 3, M = 8, and we require P(X ≥ 1).
In this problem sampling is done without replacement and the sample size
is 15% of the population; therefore, the hypergeometric distribution is
applicable.

P(X ≥ 1) = P(X = 1) + P(X = 2) + P(X = 3)
= ⁸C₁ ¹²C₂ / ²⁰C₃ + ⁸C₂ ¹²C₁ / ²⁰C₃ + ⁸C₃ ¹²C₀ / ²⁰C₃
= (528 + 336 + 56)/1140 = 920/1140 ≈ 0.807
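The computation can be confirmed with scipy's hypergeom distribution; its argument order (population size, class size, sample size) is an assumption worth double-checking against the scipy documentation. A minimal sketch:

from scipy.stats import hypergeom

rv = hypergeom(20, 8, 3)        # N = 20 companies, M = 8 in Bangalore, n = 3 sampled
print(1 - rv.pmf(0))            # P(X >= 1) = 920/1140 ≈ 0.807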

3.3.3 Continuous Distributions

In the category of continuous distributions, the uniform, gamma,
exponential and normal distributions are frequently used in quality and
reliability theories.
3.3.3.1 Uniform Distribution
A random variable has a uniform distribution, and is referred to as a
continuous uniform random variable, if and only if its probability density
is given by

f(x) = 1/(β − α)  for α < x < β,  and 0 elsewhere    (3.16)

The parameters α and β of this probability density are real constants,
with α < β.

Fig. 3.5 Probability density function of the uniform distribution


Example 3.7: The amount of time that a person waits for a local train is
uniformly distributed between 0 and 20 minutes. What is the probability
that a person waits fewer than 8 minutes?
Solution: Let X be the number of minutes a person must wait for a local
train. Here α = 0 and β = 20, so

f(x) = 1/(20 − 0) = 0.05

P(X < 8) = (8 − 0)(0.05) = 0.4

This is the probability that a person will wait for less than 8 minutes.
3.3.3.2 Gamma Distribution
A random variable X has a gamma distribution, and is referred to as a
gamma random variable, if and only if its probability density is given by

f(x) = (λ^{α} / Γ(α)) x^{α−1} e^{−λx}  for x > 0,  and 0 elsewhere    (3.17)

where α > 0 and λ > 0.
The mean and variance of the gamma distribution are:

μ = α/λ  and  σ² = α/λ²    (3.18)

Example 3.8: Consider a standby redundant system of two components:
while component 1 is on, component 2 is off, and when component 1 fails,
a switch automatically turns component 2 on. If each component has a life
described by an exponential distribution with λ = 10⁻⁴ per hour, then the
system life is gamma distributed with parameters α = 2 and λ = 10⁻⁴. Thus
the mean time to failure is α/λ = 2 × 10⁴ hours.
The cumulative gamma distribution is

F(a) = 1 − ∫_{a}^{∞} (λ^{α} / Γ(α)) x^{α−1} e^{−λx} dx
     = 1 − ∫_{a}^{∞} 10⁻⁸ x e^{−x/10000} dx

3.3.3.3 Exponential Distribution

The exponential distribution is a special case of the gamma distribution
with α = 1. Formally, a random variable X has an exponential distribution,
and is referred to as an exponential random variable, if and only if its
probability density is given by

f(x) = λ e^{−λx}  for x > 0,  and 0 elsewhere    (3.19)

Fig. 3.6 Probability density function of the exponential distribution

The mean and variance of the exponential distribution are:

μ = 1/λ  and  σ² = 1/λ²    (3.20)

The exponential distribution is widely used in the field of reliability
engineering as a model of the time to failure of a component. In these
applications, the parameter λ is called the failure rate of the component,
and the mean of the distribution, 1/λ, is called the mean time to failure
(MTTF).

Example 3.9: An electronic component in an airborne radar system has a
useful life described by an exponential distribution with failure rate
10⁻⁴ per hour. Determine the probability that this component will fail
before 800 hours.
Solution:
Let X, the time to failure, be a continuous random variable. Since X
follows the exponential distribution with mean failure rate λ = 10⁻⁴ per
hour, the probability that the component fails before 800 hours is

P(X ≤ 800) = ∫_{0}^{800} λ e^{−λx} dx = 1 − e^{−0.0001 × 800} = 1 − e^{−0.08}
= 0.0769
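Numerically (an illustrative sketch assuming scipy; expon is parameterized by scale = 1/lambda):

from scipy.stats import expon

lam = 1e-4                           # failure rate per hour
print(expon.cdf(800, scale=1/lam))   # P(X <= 800) = 1 - e^(-0.08) ≈ 0.0769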
3.3.3.4 Normal Distribution
The normal distribution is the most important distribution in both the
theory and application of statistics. A random variable X has a normal
distribution, and is referred to as a normal random variable, if and only
if its probability density is given by

f(x) = (1 / (σ√(2π))) e^{−(1/2)((x−μ)/σ)²},  for −∞ < x < ∞    (3.21)

Fig. 3.7 Probability density function of the normal distribution

The cumulative distribution function (CDF) value is given as

F(X ≤ x) = ∫_{−∞}^{x} (1/(σ√(2π))) exp[−(1/2)((x′ − μ)/σ)²] dx′    (3.22)

A closed-form solution of this integral cannot be obtained; it is
calculated numerically. First, the pdf f(x) is transformed into a function
of the standard normal variable z, using the transformation z = (x − μ)/σ.
After the transformation, f(x) reduces to

φ(z) = (1/√(2π)) e^{−z²/2}

The cumulative distribution function F(x) is then derived as

F(x) = Pr{X ≤ x} = Pr{(X − μ)/σ ≤ z} = Φ(z) = ∫_{−∞}^{z} φ(z′) dz′    (3.23)

This can be solved with the help of standard normal distribution tables.

Example 3.10: A grinding machine is set so that its production of shafts
has an average diameter of 10.10 cm and a standard deviation of 0.20 cm.
The production specifications call for shaft diameters between 10.05 cm
and 10.20 cm. What proportion of output meets the specifications,
presuming a normal distribution?

Solution:
Assume that the distribution of shaft diameters is normal, with
μ = 10.10 cm and σ = 0.20 cm.

For X = 10.05,  z = (10.05 − 10.10)/0.20 = −0.25
For X = 10.20,  z = (10.20 − 10.10)/0.20 = 0.5

Therefore, the area of the normal curve between the ordinates −0.25 and
0.5 = area between −0.25 and 0 + area between 0 and 0.5
= 0.0987 + 0.1915 = 0.2902. Thus 29.02% of the output meets the
specification.
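The table lookup can be replaced by a direct CDF evaluation; the sketch below (an illustration assuming scipy is available) reproduces the proportion.

from scipy.stats import norm

mu, sigma = 10.10, 0.20
print(norm.cdf(10.20, mu, sigma) - norm.cdf(10.05, mu, sigma))   # ≈ 0.2902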
Example 3.11: A sample of 100 dry battery cells tested to find the length
of life produced the following results:

μ = 12 hours,  σ = 3 hours

Assuming the data to be normally distributed, what percentage of battery
cells are expected to have a life
(i) more than 15 hours,
(ii) less than 6 hours, and
(iii) between 10 and 14 hours?
(Given: for Z = 2.5, 2, 1 and 0.67 the areas between 0 and Z are 0.4938,
0.4772, 0.3413 and 0.2487, respectively.)

Solution: Let the random variable X denote the length of life of the dry
battery cells.
(i) P(X > 15) = P(Z > 1), where Z = (15 − 12)/3 = 1.
Area under the standard normal curve to the right of Z = +1
= 0.5 − 0.3413 = 0.1587, or 15.87%.
(ii) P(X < 6) = P(Z < −2), where Z = (6 − 12)/3 = −2.
Area under the standard normal curve to the left of Z = −2
= area to the right of Z = +2 = 0.5 − 0.4772 = 0.0228, or 2.28%.
(iii) P(10 < X < 14) = P(−0.67 < Z < 0.67).
Area under the standard normal curve between Z = −0.67 and Z = 0.67
= 2 × (area between Z = 0 and Z = 0.67) = 2 × 0.2487 = 0.4974, or 49.74%.

3.4 Hypothesis Testing

Hypothesis testing is the process of testing some hypothesis about the
parent population from which the sample is drawn. A hypothesis may also be
defined simply as a quantitative statement about a population.
3.4.1 Null and Alternate Hypothesis
The two hypotheses in a statistical test are normally referred to as
i) the null hypothesis, and
ii) the alternate hypothesis.
Null hypothesis: The null hypothesis H₀ asserts that there is no real
difference between the sample and the population in the particular matter
under consideration.
Alternate hypothesis: Any hypothesis which is complementary to the null
hypothesis is called an alternative hypothesis, usually denoted by Hₐ.
For example, if we want to test the null hypothesis that the population
has a specified mean μ₀, i.e. H₀: μ = μ₀, then the alternative hypothesis
could be
i) Hₐ: μ ≠ μ₀ (i.e. μ > μ₀ or μ < μ₀)
ii) Hₐ: μ > μ₀
iii) Hₐ: μ < μ₀
The alternative hypothesis in (i) is known as a two-tailed alternative,
and the alternatives in (ii) and (iii) are known as right-tailed and
left-tailed alternatives, respectively. The setting of the alternative
hypothesis is very important, since it enables us to decide whether we
have to use a single-tailed (right or left) or two-tailed test.
3.4.2 Terminology Used in Hypothesis Testing
3.4.2.1 Errors in Sampling: The main objective of sampling theory is to
draw valid inferences about the population parameters on the basis of the
sample results. In practice we decide to accept or reject a lot after
examining a sample from it. As such, we are liable to commit the following
two types of errors:
Type I error: reject H₀ when it is true.
Type II error: accept H₀ when it is wrong, i.e. accept H₀ when H₁ is true.
If we write

P{reject H₀ when it is true} = P{reject H₀ | H₀} = α
and P{accept H₀ when it is wrong} = P{accept H₀ | H₁} = β,

then α and β are called the sizes of the type I error and type II error,
respectively.
In practice, a type I error amounts to rejecting a lot when it is good,
and a type II error may be regarded as accepting a lot when it is bad.
3.4.2.2 Critical Region and Level of Significance:
A region (corresponding to a statistic t) in the sample space S which
amounts to rejection of H₀ is termed the critical region or region of
rejection. If ω is the critical region and t is the value of the statistic
based on a random sample of size n, then

P(t ∈ ω | H₀) = α,  P(t ∈ ω̄ | Hₐ) = β

where ω̄, the complementary set of ω, is called the acceptance region.
The probability that a random value of the statistic t belongs to the
critical region is known as the level of significance. In other words, the
level of significance is the size of the type I error (or the maximum
producer's risk). The levels of significance usually employed in testing
of hypotheses are 5% and 1%. The level of significance is always fixed in
advance, before collecting the sample information.
3.4.2.3 One-Tailed and Two-Tailed Tests:
A test of any statistical hypothesis where the alternative hypothesis is
one-tailed (right-tailed or left-tailed) is called a one-tailed test. For
example, a test for testing the mean of a population,

H₀: μ = μ₀

against the alternative hypothesis Hₐ: μ > μ₀ (right-tailed) or
Hₐ: μ < μ₀ (left-tailed), is a single-tailed test. In the right-tailed
test the critical region lies entirely in the right tail of the sampling
distribution of x̄, while for the left-tailed test the critical region lies
entirely in the left tail of the distribution.
A test of a statistical hypothesis where the alternative hypothesis is
two-tailed, such as

H₀: μ = μ₀  against the alternative hypothesis  Hₐ: μ ≠ μ₀ (μ > μ₀ and μ < μ₀),

is known as a two-tailed test, and in such a case the critical region is
given by the portion of the area lying in both tails of the probability
curve of the test statistic.
In a particular problem, whether a one-tailed or two-tailed test is to be
applied depends entirely on the nature of the alternative hypothesis. If
the alternative hypothesis is two-tailed we apply the two-tailed test; if
the alternative hypothesis is one-tailed, we apply the one-tailed test.
For example, suppose there are two population brands of bulbs, one
manufactured by a standard process (with mean life μ₁) and the other
manufactured by some new technique (with mean life μ₂). If we want to test
whether the bulbs differ significantly, our null hypothesis is
H₀: μ₁ = μ₂ and the alternative is Hₐ: μ₁ ≠ μ₂, thus giving us a
two-tailed test.
However, if we want to test whether the bulbs produced by the new process
have a higher average life than those produced by the standard process,
then we have

H₀: μ₁ = μ₂  and  H₁: μ₁ < μ₂

thus giving us a left-tailed test. Similarly, for testing whether the
product of the new process is inferior to that of the standard process, we
have

H₀: μ₁ = μ₂  and  H₁: μ₁ > μ₂

thus giving us a right-tailed test. Thus, the decision about applying a
two-tailed or a single-tailed (right or left) test will depend on the
problem under study.
3.4.2.4 Critical Values or Significant Values: The value of the test
statistic which separates the critical (or rejection) region from the
acceptance region is called the critical value or significant value. It
depends upon:
(i) the level of significance used, and
(ii) the alternative hypothesis, whether it is two-tailed or single-tailed.
As has been pointed out earlier, for large samples the standardized
variable corresponding to the statistic x̄ is

Z = (x̄ − μ)/(σ/√n)

The value of Z under the null hypothesis is known as the test statistic.
The critical value z_α of the test statistic at level of significance α
for a two-tailed test is determined by the equation

P(|Z| > z_α) = α

i.e., z_α is the value such that the total area of the critical region on
both tails is α. Since the normal probability curve is a symmetrical
curve, we get

P(Z > z_α) + P(Z < −z_α) = α,  or  P(Z > z_α) = α/2

i.e., the area of each tail is α/2. Thus z_α is the value such that the
area to the right of z_α is α/2 and the area to the left of −z_α is α/2.
In the case of a single-tailed alternative, the critical value z_α is
determined so that the total area to the right of it (for a right-tailed
test) is α, and for a left-tailed test the total area to the left of −z_α
is α, i.e.,

For a right-tailed test:  P(Z > z_α) = α
For a left-tailed test:   P(Z < −z_α) = α

Thus the significant or critical value of Z for a single-tailed test (left
or right) at level of significance α is the same as the critical value of
Z for a two-tailed test at level of significance 2α.

3.4.3 Procedure for Testing of Hypothesis: The various steps in testing a
statistical hypothesis are, in systematic order:
1. Null hypothesis: set up the null hypothesis H₀.
2. Alternative hypothesis: set up the alternative hypothesis Hₐ. This will
enable us to decide whether we have to use a single-tailed (right or left)
test or a two-tailed test.
3. Level of significance: choose the appropriate level of significance α
depending on the reliability of the estimates and the permissible risk.
This is to be decided before the sample is drawn, i.e., α is fixed in
advance.
4. Test statistic (or test criterion): compute the test statistic Z under
the null hypothesis.
5. Conclusion: compare z, the computed value of Z in step 4, with the
significant value z_α at the given level of significance α.
If |Z| < z_α, i.e., if the calculated value of Z (in modulus) is less than
z_α, we say it is not significant. By this we mean that the difference
x̄ − μ is just due to fluctuations of sampling, and the sample data do not
provide us sufficient evidence against the null hypothesis, which may
therefore be accepted.
If |Z| > z_α, i.e., if the computed value of the test statistic is greater
than the critical or significant value, then we say that it is significant
and the null hypothesis is rejected at the level of significance α, i.e.,
with confidence coefficient (1 − α).

3.4.4 Z-Test

The Z-test is used for tests on means with known variance and for large
samples (sample size more than 30). The testing of significance for a
single mean and the significance of a difference of means are done with
the Z-test.
Test of Significance for a Single Mean: According to the Central Limit
Theorem, if xᵢ (i = 1, 2, ..., n) is a random sample of size n from a
normal population with mean μ and variance σ², then the sample mean x̄ is
distributed normally with mean μ and variance σ²/n. This result holds even
in random sampling from a non-normal population provided the sample size n
is large. Thus for large samples, the standard normal variate
corresponding to x̄ is:

Z = (x̄ − μ)/(σ/√n)

Under the null hypothesis H₀, the sample has been drawn from a population
with mean μ₀ and variance σ², i.e., there is no significant difference
between the sample mean x̄ and the population mean μ₀. Symbolically,

H₀: μ = μ₀,  Hₐ: μ ≠ μ₀

We compute the test statistic (for large samples)

Z₀ = (x̄ − μ₀)/(σ/√n)

and reject H₀ if |Z₀| > z_{α/2}. The summary of the test procedure is
given in Table 3.1.

Table 3.1: Tests on a mean with known variance

Hypothesis                                    Test statistic           Criterion for rejecting H₀
H₀: μ = μ₀, Hₐ: μ ≠ μ₀ (two-tailed test)      Z₀ = (x̄ − μ₀)/(σ/√n)     |Z₀| > z_{α/2}
H₀: μ = μ₀, Hₐ: μ < μ₀ (left-tailed test)     (same statistic)         Z₀ < −z_α
H₀: μ = μ₀, Hₐ: μ > μ₀ (right-tailed test)    (same statistic)         Z₀ > z_α

Test of Significance for a Difference of Means: Let x̄₁ be the mean of a
random sample of size n₁ from a population with mean μ₁ and variance σ₁²,
and let x̄₂ be the mean of an independent random sample of size n₂ from
another population with mean μ₂ and variance σ₂². Then, since the sample
sizes are large,

x̄₁ ~ N(μ₁, σ₁²/n₁)  and  x̄₂ ~ N(μ₂, σ₂²/n₂)

Also, x̄₁ − x̄₂, being the difference of two independent normal variates, is
itself a normal variate. The Z corresponding to x̄₁ − x̄₂ is given by

Z = [(x̄₁ − x̄₂) − (μ₁ − μ₂)] / √(σ₁²/n₁ + σ₂²/n₂)

under the null hypothesis H₀: μ₁ = μ₂, i.e., there is no significant
difference between the sample means. The summary of the test procedure is
given in Table 3.2.
Table 3.2: Tests on the difference of two means with known variance

Hypothesis                                      Test statistic                             Criterion for rejecting H₀
H₀: μ₁ = μ₂, Hₐ: μ₁ ≠ μ₂ (two-tailed test)      Z₀ = (x̄₁ − x̄₂)/√(σ₁²/n₁ + σ₂²/n₂)          |Z₀| > z_{α/2}
H₀: μ₁ = μ₂, Hₐ: μ₁ < μ₂ (left-tailed test)     (same statistic)                           Z₀ < −z_α
H₀: μ₁ = μ₂, Hₐ: μ₁ > μ₂ (right-tailed test)    (same statistic)                           Z₀ > z_α

Example 3.12: The internal pressure strength of glass bottles used to pack
a carbonated beverage is an important quality characteristic. The bottler
wants to know whether the mean pressure strength exceeds 175 psi. From
experience, he knows that the standard deviation of pressure strength is
10 psi. The glass manufacturer submits lots of these bottles to the
bottler, who is interested in testing the hypothesis

H₀: μ = 175,  Hₐ: μ > 175

Note that the lot will be accepted if the null hypothesis H₀: μ = 175 is
rejected. A random sample of 25 bottles is selected, and the bottles are
placed on a hydrostatic pressure-testing machine that increases the
pressure inside the bottle until it fails. The sample average bursting
strength is x̄ = 182 psi. The value of the test statistic is

Z₀ = (x̄ − μ₀)/(σ/√n) = (182 − 175)/(10/√25) = 3.50

If we specify a type I error (or producer's risk) of α = 0.05, then from
the normal table we find z_α = z₀.₀₅ = 1.645. Since Z₀ = 3.50 > 1.645, we
reject H₀: μ = 175 and conclude that the lot mean pressure strength
exceeds 175 psi.
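The same test can be scripted; this minimal sketch (an illustration, assuming scipy for the critical value) mirrors the calculation above.

from math import sqrt
from scipy.stats import norm

xbar, mu0, sigma, n, alpha = 182, 175, 10, 25, 0.05
z0 = (xbar - mu0) / (sigma / sqrt(n))   # 3.50
z_crit = norm.ppf(1 - alpha)            # 1.645 for a right-tailed test
print(z0 > z_crit)                      # True: reject H0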

3.4.5 t-Test for a Single Mean:

The t-distribution is used when the sample size is 30 or less and the
population standard deviation is unknown.
Suppose we want to test whether a random sample xᵢ (i = 1, 2, ..., n) of
size n has been drawn from a normal population with a specified mean μ₀,
or whether the sample mean differs significantly from the hypothetical
value μ₀ of the population mean. The hypothesis is:

H₀: μ = μ₀,  Hₐ: μ ≠ μ₀

The statistic

t = (x̄ − μ₀)/(S/√n),

where S is the sample standard deviation, follows the t-distribution with
(n − 1) degrees of freedom.
We now compare the calculated value of t with the tabulated value at a
certain level of significance. If the calculated |t| exceeds the tabulated
t, the null hypothesis is rejected; if the calculated |t| is less than the
tabulated t, H₀ may be accepted at the level of significance adopted.

The summary of the t-test procedure is given in Table 3.3:


Table 3.3: Tests on a mean with unknown variance

Hypothesis                                    Test statistic           Criterion for rejecting H₀
H₀: μ = μ₀, Hₐ: μ ≠ μ₀ (two-tailed test)      t₀ = (x̄ − μ₀)/(S/√n)     |t₀| > t_{α/2, n−1}
H₀: μ = μ₀, Hₐ: μ < μ₀ (left-tailed test)     (same statistic)         t₀ < −t_{α, n−1}
H₀: μ = μ₀, Hₐ: μ > μ₀ (right-tailed test)    (same statistic)         t₀ > t_{α, n−1}

Table 3.4: Tests on the difference of two means with unknown variance

Hypothesis                                    Test statistic                           Criterion for rejecting H₀
H₀: μ₁ = μ₂, Hₐ: μ₁ ≠ μ₂ (two-tailed test)    t₀ = (x̄₁ − x̄₂)/(Sₚ √(1/n₁ + 1/n₂))       |t₀| > t_{α/2, ν}

where

Sₚ² = [Σ(X₁ − x̄₁)² + Σ(X₂ − x̄₂)²] / (n₁ + n₂ − 2)  and  ν = n₁ + n₂ − 2

Example 3.13: Two types of drugs were used on 5 and 7 patients for
reducing their weight. Drug A was imported and drug B indigenous. The
decrease in weight after using the drugs for six months was as follows:

Drug A: 10, 12, 13, 11, 14
Drug B: 8, 9, 12, 14, 15, 10, 9

Is there a significant difference in the efficacy of the two drugs? If
not, which drug should you buy?
Solution:
Hypothesis design:

H₀: μ₁ = μ₂,  Hₐ: μ₁ ≠ μ₂

Table 3.5: Calculation table for sample variances

X₁    (X₁ − x̄₁)    (X₁ − x̄₁)²    X₂    (X₂ − x̄₂)    (X₂ − x̄₂)²
10    −2           4             8     −3           9
12    0            0             9     −2           4
13    +1           1             12    +1           1
11    −1           1             14    +3           9
14    +2           4             15    +4           16
                                 10    −1           1
                                 9     −2           4
ΣX₁ = 60           Σ = 10        ΣX₂ = 77           Σ = 44

x̄₁ = ΣX₁/n₁ = 60/5 = 12;  x̄₂ = ΣX₂/n₂ = 77/7 = 11

Sₚ² = [Σ(X₁ − x̄₁)² + Σ(X₂ − x̄₂)²]/(n₁ + n₂ − 2) = (10 + 44)/(5 + 7 − 2) = 5.4,
so Sₚ = 2.324

t₀ = (x̄₁ − x̄₂)/(Sₚ √(1/n₁ + 1/n₂)) = (12 − 11)/(2.324 √(1/5 + 1/7)) = 0.735

ν = n₁ + n₂ − 2 = 5 + 7 − 2 = 10  and  t₀.₀₅,₁₀ = 2.228

Since the calculated value of t is less than the table value, the
hypothesis is accepted. Hence there is no significant difference in the
efficacy of the two drugs. Since drug B is indigenous and there is no
difference in the efficacy of the imported and indigenous drugs, we should
buy the indigenous drug B.
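scipy provides the pooled two-sample t-test directly; the sketch below (an illustration assuming scipy is available) reproduces t₀ ≈ 0.735, and the p-value it reports leads to the same acceptance of H₀ at the 5% level.

from scipy.stats import ttest_ind

drug_a = [10, 12, 13, 11, 14]
drug_b = [8, 9, 12, 14, 15, 10, 9]
t0, p_value = ttest_ind(drug_a, drug_b, equal_var=True)   # pooled-variance test
print(t0, p_value)                                        # t0 ≈ 0.735, p ≈ 0.48 > 0.05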
3.4.6 Chi-Square Test for Population Variance: Suppose we want to test
whether a random sample xᵢ (i = 1, 2, ..., n) has been drawn from a normal
population with a specified variance σ₀².
Under the null hypothesis that the population variance is σ² = σ₀², the
statistic

χ² = Σ_{i=1}^{n} (xᵢ − x̄)²/σ₀² = nS²/σ₀²

follows the chi-square distribution with (n − 1) d.f.
By comparing the calculated value with the tabulated value of χ² for
(n − 1) d.f. at a certain level of significance (usually 5%), we may
retain or reject the null hypothesis.
Chi-Square Test of Goodness of Fit: A very powerful test for testing the
significance of the discrepancy between theory and experiment is the
chi-square test of goodness of fit. It enables us to find whether the
deviation of the experiment from theory is just by chance or is really due
to the inadequacy of the theory to fit the observed data.
If Oᵢ (i = 1, 2, ..., n) is a set of observed (experimental) frequencies
and Eᵢ (i = 1, 2, ..., n) is the corresponding set of expected
(theoretical or hypothetical) frequencies, then the statistic

χ² = Σ_{i=1}^{n} (Oᵢ − Eᵢ)²/Eᵢ

follows the chi-square distribution with (n − 1) d.f.
Degrees of Freedom (d.f.): The number of independent variates which make
up the statistic (e.g., χ²) is known as the degrees of freedom (d.f.),
usually denoted by ν (the letter nu of the Greek alphabet). The number of
degrees of freedom, in general, is the total number of observations less
the number of independent constraints imposed on the observations. For
example, if k is the number of independent constraints in a set of n
observations, then ν = (n − k).
Example 3.14: A random sample of 25 from a population gives the sample
standard deviation as 8.5. Test the hypothesis that the population
standard deviation is 10.
Solution:
Hypothesis design: the population variance is 100.

H₀: σ² = 100,  Hₐ: σ² ≠ 100

Given n = 25 and S = 8.5,

χ² = nS²/σ₀² = 25(8.5)²/100 = 18.0625

ν = n − 1 = 25 − 1 = 24  and  χ²₀.₀₅,₂₄ = 36.415

Since the calculated value of χ² (18.0625) is less than the table value
(36.415), the hypothesis holds true: the population standard deviation may
be 10.
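A numerical check (an illustrative sketch assuming scipy; chi2.ppf gives the tabulated critical value):

from scipy.stats import chi2

n, S, var0 = 25, 8.5, 100
stat = n * S**2 / var0              # 18.0625
crit = chi2.ppf(0.95, n - 1)        # ≈ 36.415
print(stat < crit)                  # True: retain H0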
3.4.7 F-Test
If X and Y are two independent chi-square variates with ν₁ and ν₂ d.f.
respectively, the F-statistic is defined by

F = (X/ν₁) / (Y/ν₂)

In other words, F is defined as the ratio of two independent chi-square
variates divided by their respective degrees of freedom, and it follows
the F-distribution with (ν₁, ν₂) d.f., with probability function given by

f(F) = [ (ν₁/ν₂)^{ν₁/2} / B(ν₁/2, ν₂/2) ] F^{ν₁/2 − 1} [1 + (ν₁/ν₂)F]^{−(ν₁+ν₂)/2}

The F-Test or the Variance Ratio Test

The F-test is named in honour of the great statistician R. A. Fisher. The
object of the F-test is to find out whether two independent estimates of
the population variance differ significantly, or whether the two samples
may be regarded as drawn from normal populations having the same variance.
For carrying out the test of significance, we calculate the ratio F,
defined as

F = S₁²/S₂²

It should be noted that S₁² is always the larger estimate of variance,
i.e., S₁² > S₂².

ν₁ = n₁ − 1 = degrees of freedom for the sample having the larger variance,
ν₂ = n₂ − 1 = degrees of freedom for the sample having the smaller variance.

The calculated value of F is compared with the table value for ν₁ and ν₂
at the 5% or 1% level of significance. If the calculated value of F is
greater than the table value, the F ratio is considered significant and
the null hypothesis is rejected. On the other hand, if the calculated
value of F is less than the table value, the null hypothesis is accepted
and it is inferred that both samples come from populations having the same
variance.
Since the F-test is based on the ratio of two variances, it is also known
as the variance ratio test.
Assumptions in the F-Test. The F-test is based on the following assumptions:
1. Normality, i.e., the values in each group are normally distributed.
2. Homogeneity, i.e., the variance within each group should be equal for
all groups. This assumption is needed in order to combine or pool the
variances within the groups into a single within-groups source of
variation.
3. Independence of error. The error (variation of each value around its
own group mean) should be independent for each value.

Example 3.15: The sample variances for two gasoline formulations are
S₁² = 1.34 and S₂² = 1.07, and the sample size is 10 for both
formulations. Test the hypothesis that the variances of the road octane
numbers for the two gasoline formulations are the same; that is,

H₀: σ₁² = σ₂²,  Hₐ: σ₁² ≠ σ₂²

Since S₁² = 1.34 and S₂² = 1.07, the test statistic is

F₀ = S₁²/S₂² = 1.34/1.07 = 1.25

From the table we find F₀.₀₂₅,₉,₉ = 4.03 and F₀.₉₇₅,₉,₉ = 0.248. Since
0.248 < 1.25 < 4.03, we conclude that there is no evidence to warrant
rejection of H₀.
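The two critical values come from the F-distribution; a minimal sketch assuming scipy is available:

from scipy.stats import f

F0 = 1.34 / 1.07                    # ≈ 1.25
print(f.ppf(0.025, 9, 9))           # lower critical value ≈ 0.248
print(f.ppf(0.975, 9, 9))           # upper critical value ≈ 4.03
# 0.248 < 1.25 < 4.03, so H0 is not rejected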

3.5 Analysis of Variance (ANOVA)

The analysis of variance (ANOVA) is a statistical technique specially
designed to test whether the means of more than two populations are equal.
It consists of classifying and cross-classifying statistical results and
testing whether the means of a specified classification differ
significantly. In this way it is determined whether the given
classification is important in affecting the results. The analysis of
variance is classified into two categories: one-way ANOVA and two-way
ANOVA.
3.5.1 One-Way ANOVA
In a one-way classification the data are classified according to only one
criterion. The null hypothesis is

H₀: μ₁ = μ₂ = μ₃ = ... = μₖ

i.e., the arithmetic means of the populations from which the k samples
were randomly drawn are equal to one another. Its alternate hypothesis is

Hₐ: not all the means are equal.


The steps in carrying out the analysis are:
1. Calculate the variance between the samples (SSC).
The variance between samples takes into account the random variation from
observation to observation. It also measures the difference from one group
to another. The sum of squares between samples is denoted by SSC. The
steps in calculating the variance between samples are:
(a) Calculate the mean of each sample, i.e., x̄₁, x̄₂, etc.
(b) Calculate the grand average x̿, obtained as

x̿ = (ΣX₁ + ΣX₂ + ...)/(N₁ + N₂ + ...)

(c) Take the difference between the means of the various samples and the
grand average.
(d) Square these deviations, multiply each by the number of items in the
corresponding sample, and obtain the total, which gives the sum of squares
between the samples.
(e) Divide the total obtained in step (d) by the degrees of freedom. The
degrees of freedom will be one less than the number of samples.
2. Calculate the variance within the samples (SSE). The variance within
samples measures those inter-sample differences due to chance only. It is
denoted by SSE. The variance within samples (groups) measures the
variability within a group; it can be considered a measure of the random
variation of values within a group. The steps in calculating the variance
within the samples are:
(a) Calculate the mean value of each sample, i.e., x̄₁, x̄₂, etc.
(b) Take the deviations of the various items in a sample from the mean
values of the respective samples.
(c) Square these deviations and obtain the total, which gives the sum of
squares within the samples.
(d) Divide the total obtained in step (c) by the degrees of freedom. The
degrees of freedom are obtained by deducting from the total number of
items the number of samples, i.e. ν = N − K, where K refers to the number
of samples and N refers to the total number of observations.
3. Calculate the ratio F as follows:

F = (variance between columns)/(variance within columns) = S₁²/S₂²

The F-distribution measures the ratio of the variance between groups to
the variance within groups. The variance between the sample means is the
numerator and the variance within the samples is the denominator. If there
is no real difference from group to group, any sample difference will be
explainable by random variation, and the variance between groups should be
close to the variance within groups. However, if there is a real
difference between the groups, the variance between groups will be
significantly larger than the variance within groups.
4. Compare the calculated value of F with the table value of F for the
degrees of freedom at a certain critical level (generally we take the 5%
level of significance). If the calculated value of F is greater than the
table value, it is concluded that the difference in sample means is
significant, i.e., it could not have arisen due to fluctuations of simple
sampling; in other words, the samples do not come from the same
population. On the other hand, if the calculated value of F is less than
the table value, the difference is not significant and may have arisen due
to fluctuations of simple sampling.
It is customary to summarize the calculations for the sums of squares,
together with their numbers of degrees of freedom and mean squares, in a
table called the analysis of variance table, generally abbreviated as the
ANOVA table.
Table 3.6: One-way ANOVA table

Source of variation    Sum of squares    Degrees of freedom    Mean square           Variance ratio F
Between samples        SSC               ν₁ = (c − 1)          MSC = SSC/(c − 1)     MSC/MSE
Within samples         SSE               ν₂ = (n − c)          MSE = SSE/(n − c)
Total                  SST               n − 1

SST = total sum of squares of variation
SSC = sum of squares between samples (columns)
SSE = sum of squares within samples (rows)
MSC = mean sum of squares between samples
MSE = mean sum of squares within samples

The calculated value of F is compared with the table value. If the
calculated value of F is greater than the table value at the pre-assigned
level of significance, the null hypothesis is rejected; otherwise it is
accepted. If it is rejected, there is a significant difference between the
sample means.
Example 3.16: A company has three manufacturing plants, and company
officials want to determine whether there is a difference in the average
age of workers at the three locations. The following data are the ages of
five randomly selected workers at each plant. Perform a one-way ANOVA to
determine whether there is a significant difference in the mean ages of
the workers at the three plants. Use α = 0.01 and note that the sample
sizes are equal.

Table 3.7: Plant employee ages (years)

Plant 1    Plant 2    Plant 3
29         32         25
27         33         24
30         31         24
27         34         25
28         30         26
Solution:
Hypothesis design:

H₀: μ₁ = μ₂ = μ₃,  Hₐ: not all the means are equal

Sample mean of each sample:

x̄₁ = (29 + 27 + 30 + 27 + 28)/5 = 28.2
x̄₂ = (32 + 33 + 31 + 34 + 30)/5 = 32.0
x̄₃ = (25 + 24 + 24 + 25 + 26)/5 = 24.8

Grand average:

x̿ = (x̄₁ + x̄₂ + x̄₃)/3 = (28.2 + 32.0 + 24.8)/3 = 28.33

Sum of squares between samples:

SSC = n[(x̄₁ − x̿)² + (x̄₂ − x̿)² + (x̄₃ − x̿)²]
= 5[(28.2 − 28.33)² + (32.0 − 28.33)² + (24.8 − 28.33)²] = 129.73

Sum of squares within samples:

SSE = (x₁₁ − x̄₁)² + (x₁₂ − x̄₁)² + ... + (x₃₅ − x̄₃)²
= (29 − 28.2)² + (27 − 28.2)² + (30 − 28.2)² + (27 − 28.2)² + (28 − 28.2)² + ... + (26 − 24.8)²
= 19.60

Total sum of squares:

SST = (x₁₁ − x̿)² + (x₁₂ − x̿)² + ... + (x₃₅ − x̿)²
= (29 − 28.33)² + (27 − 28.33)² + (30 − 28.33)² + (27 − 28.33)² + (28 − 28.33)² + ... + (26 − 28.33)²
= 149.33

ν₁ = (c − 1) = 3 − 1 = 2
ν₂ = (n − c) = 15 − 3 = 12
MSC = SSC/(c − 1) = 129.73/2 = 64.87
MSE = SSE/(n − c) = 19.60/12 = 1.63
F = MSC/MSE = 64.87/1.63 = 39.80
Table 3.8: One-way ANOVA table

Source of variation    Sum of squares    Degrees of freedom    Mean square    Variance ratio F
Between samples        SSC = 129.73      ν₁ = (c − 1) = 2      MSC = 64.87    MSC/MSE = 39.80
Within samples         SSE = 19.60       ν₂ = (n − c) = 12     MSE = 1.63
Total                  SST = 149.33      n − 1 = 14

The decision is to reject the null hypothesis, because the observed F
value of 39.80 is greater than the critical value of 6.93. Therefore,
there is a significant difference between the sample means.
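The whole analysis is one call in scipy; this sketch is an illustration (scipy assumed available). Note that f_oneway computes F from unrounded mean squares, so it prints ≈ 39.71 rather than the 39.80 obtained above with MSE rounded to 1.63.

from scipy.stats import f, f_oneway

plant1 = [29, 27, 30, 27, 28]
plant2 = [32, 33, 31, 34, 30]
plant3 = [25, 24, 24, 25, 26]
F0, p = f_oneway(plant1, plant2, plant3)
print(F0)                      # ≈ 39.71
print(f.ppf(0.99, 2, 12))      # critical value ≈ 6.93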
3.5.2 Two-Way ANOVA
In the one-factor analysis of variance explained above, the treatments
constitute different levels of a single factor which is controlled in the
experiment. There are, however, many situations in which the response
variable of interest may be affected by more than one factor. For example,
petrol mileage may be affected by the type of car driven, the way it is
driven, road conditions and other factors in addition to the brand of
petrol used.
When it is believed that two independent factors might have an effect on
the response variable of interest, it is possible to design the test so
that an analysis of variance can be used to test for the effects of the
two factors simultaneously. Such a test is called a two-factor analysis of
variance. With the two-factor analysis of variance, we can test two sets
of hypotheses with the same data at the same time.
In a two-way classification the data are classified according to two
different criteria or factors. The procedure for the analysis of variance
is somewhat different from the one followed for problems of one-way
classification. In a two-way classification the analysis of variance table
takes the following form.
Table 3.9: Two-way ANOVA table

Source of variation    Sum of squares    Degrees of freedom    Mean sum of squares           Ratio F
Between columns        SSC               c − 1                 MSC = SSC/(c − 1)             MSC/MSE
Between rows           SSR               r − 1                 MSR = SSR/(r − 1)             MSR/MSE
Residual or error      SSE               (c − 1)(r − 1)        MSE = SSE/[(c − 1)(r − 1)]
Total                  SST               n − 1

SSC = sum of squares between columns
SSR = sum of squares between rows
SSE = sum of squares due to error
SST = total sum of squares

The sum of squares for the residual is obtained by subtracting the sums of
squares between columns and between rows from the total sum of squares,
i.e., SSE = SST − [SSC + SSR].
The total number of degrees of freedom = n − 1 = cr − 1,
where c refers to the number of columns and r refers to the number of rows.
Number of degrees of freedom between columns = (c − 1)
Number of degrees of freedom between rows = (r − 1)
Number of degrees of freedom for residual = (c − 1)(r − 1)
The total sum of squares, the sum of squares between columns and the sum
of squares between rows are obtained in the same way as before.
Residual or error sum of squares = total sum of squares − sum of squares
between columns − sum of squares between rows.
The F values are calculated as follows:

F(ν₁, ν₂) = MSC/MSE,  where ν₁ = (c − 1) and ν₂ = (c − 1)(r − 1)
F(ν₁, ν₂) = MSR/MSE,  where ν₁ = (r − 1) and ν₂ = (c − 1)(r − 1)

It should be carefully noted that ν₁ may not be the same in both cases: in
one case ν₁ = (c − 1) and in the other ν₁ = (r − 1).
The calculated values of F are compared with the table values. If the
calculated value of F is greater than the table value at the pre-assigned
level of significance, the null hypothesis is rejected; otherwise it is
accepted.
Example 3.17: The following data represent the number of units of
production per day turned out by five different workers using four
different types of machines:

                Machine type
Workers    A     B     C     D
1          44    38    47    36
2          46    40    52    43
3          34    36    44    32
4          43    38    46    33
5          38    42    49    39

a) Test whether the mean productivity is the same for the different
machine types.
b) Test whether the 5 workers differ with respect to mean productivity.
Solution:
Hypothesis design:
a) The mean productivity is the same for the four different machine types:

H₀: μ_A = μ_B = μ_C = μ_D,  Hₐ: the mean productivity of the machines is not all equal.

b) The 5 workers do not differ with respect to mean productivity:

H₀: μ₁ = μ₂ = μ₃ = μ₄ = μ₅,  Hₐ: the mean productivity of the workers is not all equal.

The data coded by subtracting 40 from each observation are given below:

                Machine type
Workers    A     B     C      D      Total
1          +4    −2    +7     −4     +5
2          +6    0     +12    +3     +21
3          −6    −4    +4     −8     −14
4          +3    −2    +6     −7     0
5          −2    +2    +9     −1     +8
Total      +5    −6    +38    −17    T = 20

Correction factor = T²/N = 400/20 = 20

Sum of squares between machines
= (5)²/5 + (−6)²/5 + (38)²/5 + (−17)²/5 − correction factor
= (5 + 7.2 + 288.8 + 57.8) − 20 = 338.8
ν = (c − 1) = (4 − 1) = 3

Sum of squares between workers
= (5)²/4 + (21)²/4 + (−14)²/4 + (0)²/4 + (8)²/4 − correction factor
= (6.25 + 110.25 + 49 + 0 + 16) − 20 = 161.5
ν = (r − 1) = (5 − 1) = 4

Total sum of squares
= [(4)² + (−2)² + (7)² + ... + (9)² + (−1)²] − correction factor = 574

Residual = total sum of squares − (sum of squares between machines + sum
of squares between workers)
= 574 − 338.8 − 161.5 = 73.7
Degrees of freedom for residual = (c − 1)(r − 1) = 3 × 4 = 12
Table 3.10: Two-way ANOVA table

Source of variation      Sum of squares    Degrees of freedom    Mean square    Variance ratio F
Between machine types    338.8             3                     112.933        112.933/6.142 = 18.387
Between workers          161.5             4                     40.375         40.375/6.142 = 6.574
Remainder or residual    73.7              12                    6.142
Total                    574               19

a) For ν₁ = 3, ν₂ = 12, F₀.₀₅ = 3.49. Since the calculated value (18.387)
is greater than the table value (3.49), we conclude that the mean
productivity is not the same for the four different types of machines.
b) For ν₁ = 4, ν₂ = 12, F₀.₀₅ = 3.26. The calculated value (6.574) is
greater than the table value (3.26); hence the workers differ with respect
to mean productivity.
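The two-way table can be reproduced with a few lines of numpy (an illustrative sketch; numpy assumed available, one observation per cell):

import numpy as np

data = np.array([[44, 38, 47, 36],       # rows: workers 1..5
                 [46, 40, 52, 43],       # columns: machines A..D
                 [34, 36, 44, 32],
                 [43, 38, 46, 33],
                 [38, 42, 49, 39]], dtype=float)
r, c = data.shape
grand = data.mean()
ssc = r * ((data.mean(axis=0) - grand) ** 2).sum()   # between machines = 338.8
ssr = c * ((data.mean(axis=1) - grand) ** 2).sum()   # between workers  = 161.5
sse = ((data - grand) ** 2).sum() - ssc - ssr        # residual         = 73.7
msc, msr, mse = ssc / (c - 1), ssr / (r - 1), sse / ((c - 1) * (r - 1))
print(msc / mse, msr / mse)                          # ≈ 18.39 and ≈ 6.57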

3.6 Summary
This chapter focuses on basic probability concepts, important discrete and
continuous probability distributions, hypothesis testing, and four
important statistical tests: the Z-test, t-test, F-test and χ² test. These
concepts are used in the seven quality control tools, mainly in control
charts. In addition, ANOVA analysis is explained with solved examples.

Review Questions

1. Define the following types of events occurring in the theory of
probability, and state the theorems concerning them: i) mutually
exclusive, ii) exhaustive, iii) dependent, and iv) independent.
2. Differentiate between discrete and continuous variables. How is the
cumulative distribution function (CDF) evaluated in the case of discrete
and continuous variables?
3. Explain the salient features of the binomial and normal distributions.
4. Define the Poisson distribution and state the conditions under which
this distribution is used.
5. What is hypothesis testing? Explain the procedure of hypothesis
testing.
6. Differentiate the following pairs of concepts: i) statistic and
parameter, ii) critical region and acceptance region, iii) null and
alternate hypothesis.
7. How is the Z-test different from the t-test?
8. What is the goodness-of-fit test? How is it carried out?
9. Explain the application of the F-test in the context of quality
control.
10. Explain the one-way ANOVA procedure. How is the two-way ANOVA
different from the one-way ANOVA?

EXERCISES
1. In a factory, machine A produces 30% of the total output, machine B
produces 25% and machine C produces the remaining output. 1% of the output
of machine A is defective, 1.2% of the output of machine B is defective
and 2% of the output of machine C is defective. The three machines working
together produce 10,000 items in a day. An item is drawn at random from a
day's output and found to be defective. Find the probability that it was
produced by machine B.
2. Two balls, each equally likely to be coloured either red or blue, are
put in an urn. At each stage one of the balls is randomly chosen, its
colour is noted, and it is then returned to the urn. If the first two
balls chosen are coloured red, what is the probability that
(a) both balls in the urn are coloured red;
(b) the next ball chosen will be red?
3. A study has been made to compare the nicotine contents of two brands of
cigarettes. Ten cigarettes of brand A had an average nicotine content of
3.1 milligrams with a standard deviation of 0.5 milligram, while eight
cigarettes of brand B had an average nicotine content of 2.7 milligrams
with a standard deviation of 0.7 milligram. Assuming that the two sets of
data are independent random samples from normal populations with equal
variances, test the hypothesis that the mean nicotine contents of brand A
and brand B are equal at the 5% level of significance.
4. Suppose the probability that an item produced by a particular machine
is defective is 0.2. If 10 items produced by this machine are selected at
random, what is the probability that not more than one defective item is
found? Attempt this question by two approaches, i.e. the binomial
distribution approach and the Poisson distribution approach.
5. The diameter of a metal shaft used in a disk drive unit is normally
distributed with mean 0.2508 in. and standard deviation 0.0005 in. The
specifications on the shaft have been established as 0.2500 ± 0.0015 in.
Determine the fraction of shafts produced that conform to specifications.
7. Develop a one-way ANOVA on the following data.

Group 1: 113, 121, 117, 110
Group 2: 120, 127, 125, 129
Group 3: 132, 130, 129, 135

Determine the observed F value. Compare it to the critical F value and
decide whether to reject the null hypothesis. Use a 1% level of
significance.
8. To study the performance of three detergents at three different water
temperatures, the following whiteness readings were obtained with
specially designed equipment:

Water temp.    Detergent A    Detergent B    Detergent C
Cold water     57             55             67
Warm water     49             52             68
Hot water      54             46             58

Perform a two-way analysis of variance, using the 5% level of significance
(given F at 5% = 6.94).
9. The time (in hours) required to repair a machine is an exponentially
distributed random variable with parameter λ = 1.
(a) What is the probability that a repair time exceeds 2 hours?
(b) What is the conditional probability that a repair takes at least 3
hours, given that its duration exceeds 2 hours?
