
Probability and Stochastic Processes
**A Friendly Introduction for Electrical and Computer Engineers**

SECOND EDITION

MATLAB Function Reference

Roy D. Yates and David J. Goodman

May 22, 2004

This document is a supplemental reference for MATLAB functions described in the text Probability and Stochastic Processes: A Friendly Introduction for Electrical and Computer Engineers. This document should be accompanied by matcode.zip, an archive of the corresponding MATLAB .m files. Here are some points to keep in mind in using these functions.

• The actual programs can be found in the archive matcode.zip or in a directory matcode. To use the functions, you will need to use the MATLAB command addpath to add this directory to the path that MATLAB searches for executable .m files (see the example following this list).

• The matcode archive has both general purpose programs for solving probability problems as well as specific .m files associated with examples or quizzes in the text. This manual describes only the general purpose .m files in matcode.zip. Other programs in the archive are described in the main text or in the Quiz Solution Manual.

• The MATLAB functions described here are intended as a supplement to the text. The code is not fully commented. Many comments and explanations relating to the code appear in the text, the Quiz Solution Manual (available on the web) or in the Problem Solution Manual (available on the web for instructors).

• The code is instructional. The focus is on MATLAB programming techniques to solve probability problems and to simulate experiments. The code is definitely not bulletproof; for example, input range checking is generally neglected.

• This is a work in progress. At the moment (May, 2004), the homework solution manual has a number of unsolved homework problems. As these solutions require the development of additional MATLAB functions, these functions will be added to this reference manual.

• There is a nonzero probability (in fact, a probability close to unity) that errors will be found. If you find errors or have suggestions or comments, please send email to ryates@winlab.rutgers.edu. When errors are found, revisions both to this document and the collection of MATLAB functions will be posted.
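For example, to add the matcode directory to the MATLAB search path (a sketch; the path shown is hypothetical, so substitute wherever you unzipped the archive):

addpath('/home/user/matcode');  %make the matcode .m files visible to MATLAB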


Functions for Random Variables

bernoullipmf y=bernoullipmf(p,x)

function pv=bernoullipmf(p,x)
%For Bernoulli (p) rv X
%input = vector x
%output = vector pv
%such that pv(i)=Prob(X=x(i))
pv=(1-p)*(x==0) + p*(x==1);
pv=pv(:);

Input: p is the success probability of a Bernoulli random variable X, x is a vector of possible sample values.
Output: y is a vector with y(i) = P_X(x(i)).
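For instance, this call (a sketch, assuming matcode is on the path) evaluates the Bernoulli (0.3) PMF:

pv=bernoullipmf(0.3,[0 1 2])
%pv = [0.7; 0.3; 0], since P[X=0]=1-p, P[X=1]=p, and 2 is a zero-probability value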

bernoullicdf y=bernoullicdf(p,x)

function cdf=bernoullicdf(p,x)
%Usage: cdf=bernoullicdf(p,x)
% For Bernoulli (p) rv X,
%given input vector x, output is
%vector cdf such that cdf(i)=Prob[X<=x(i)]
x=floor(x(:));
allx=0:1;
allcdf=cumsum(bernoullipmf(p,allx));
okx=(x>=0);   %x(i) < 0 are zero-prob values
x=(okx.*x);   %set zero-prob x(i)=0
x=min(x,1);   %CDF is constant for x(i)>=1
cdf= okx.*allcdf(x+1); %zeroes out zero-prob x(i)

Input: p is the success probability of a Bernoulli random variable X, x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).

bernoullirv x=bernoullirv(p,m)

function x=bernoullirv(p,m)
%return m samples of bernoulli (p) rv
r=rand(m,1);
x=(r>=(1-p));

Input: p is the success probability of a Bernoulli random variable X, m is a positive integer.
Output: x is a vector of m independent sample values of X.
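As a quick sanity check (a sketch, not part of the manual), the relative frequency of ones should approach p:

x=bernoullirv(0.3,10000);
sum(x)/10000   %should be close to p=0.3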


bignomialpmf y=bignomialpmf(n,p,x)

function pmf=bignomialpmf(n,p,x)
%binomial(n,p) rv X,
%input = vector x
%output= vector pmf: pmf(i)=Prob[X=x(i)]
k=(0:n-1)';
a=log((p/(1-p))*((n-k)./(k+1)));
L0=n*log(1-p);
L=[L0; L0+cumsum(a)];
pb=exp(L);
% pb=[P[X=0] ... P[X=n]]^t
x=x(:);
okx =(x>=0).*(x<=n).*(x==floor(x));
x=okx.*x;
pmf=okx.*pb(x+1);

Input: n and p are the parameters of a binomial (n, p) random variable X, x is a vector of possible sample values.
Output: y is a vector with y(i) = P_X(x(i)).
Comment: This function should always produce the same output as binomialpmf(n,p,x); however, the function calculates the logarithm of the probability and this may lead to small numerical inaccuracy.
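To see how closely the two implementations agree (a sketch, assuming matcode is on the path):

x=0:100;
max(abs(bignomialpmf(100,0.5,x)-binomialpmf(100,0.5,x)))
%should be a tiny number, on the order of machine precision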

binomialcdf y=binomialcdf(n,p,x)

function cdf=binomialcdf(n,p,x)
%Usage: cdf=binomialcdf(n,p,x)
%For binomial(n,p) rv X,
%and input vector x, output is
%vector cdf: cdf(i)=P[X<=x(i)]
x=floor(x(:));  %for noninteger x(i)
allx=0:max(x);
%calculate cdf from 0 to max(x)
allcdf=cumsum(binomialpmf(n,p,allx));
okx=(x>=0);   %x(i) < 0 are zero-prob values
x=(okx.*x);   %set zero-prob x(i)=0
cdf= okx.*allcdf(x+1); %zero for zero-prob x(i)

Input: n and p are the parameters of a binomial (n, p) random variable X, x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).


binomialpmf y=binomialpmf(n,p,x)

function pmf=binomialpmf(n,p,x)
%binomial(n,p) rv X,
%input = vector x
%output= vector pmf: pmf(i)=Prob[X=x(i)]
if p<0.5
  pp=p;
else
  pp=1-p;
end
i=0:n-1;
ip= ((n-i)./(i+1))*(pp/(1-pp));
pb=((1-pp)^n)*cumprod([1 ip]);
if pp < p
  pb=fliplr(pb);
end
pb=pb(:); % pb=[P[X=0] ... P[X=n]]^t
x=x(:);
okx =(x>=0).*(x<=n).*(x==floor(x));
x=okx.*x;
pmf=okx.*pb(x+1);

Input: n and p are the parameters of a binomial (n, p) random variable X, x is a vector of possible sample values.
Output: y is a vector with y(i) = P_X(x(i)).

binomialrv x=binomialrv(n,p,m)

function x=binomialrv(n,p,m)
% m binomial(n,p) samples
r=rand(m,1);
cdf=binomialcdf(n,p,0:n);
x=count(cdf,r);

Input: n and p are the parameters of a binomial random variable X, m is a positive integer.
Output: x is a vector of m independent samples of random variable X.

bivariategausspdf f=bivariategausspdf(muX,muY,sigmaX,sigmaY,rho,x,y)

function f=bivariategausspdf(muX,muY,sigmaX,sigmaY,rho,x,y)
%Usage: f=bivariategausspdf(muX,muY,sigmaX,sigmaY,rho,x,y)
%Evaluate the bivariate Gaussian (muX,muY,sigmaX,sigmaY,rho) PDF
nx=(x-muX)/sigmaX;
ny=(y-muY)/sigmaY;
f=exp(-((nx.^2) +(ny.^2) - (2*rho*nx.*ny))/(2*(1-rho^2)));
f=f/(2*pi*sigmaX*sigmaY*sqrt(1-rho^2));

Input: Scalar parameters muX,muY,sigmaX,sigmaY,rho of the bivariate Gaussian PDF, scalars x and y.
Output: f the value of the bivariate Gaussian PDF at x,y.
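At the point (x, y) = (muX, muY) the exponent vanishes, so the PDF equals its peak value 1/(2π sigmaX sigmaY √(1−rho²)); this makes a convenient spot check (a sketch):

f=bivariategausspdf(0,0,1,1,0.5,0,0)
%f = 1/(2*pi*sqrt(0.75)), approximately 0.1838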


duniformcdf y=duniformcdf(k,l,x)

function cdf=duniformcdf(k,l,x)
%Usage: cdf=duniformcdf(k,l,x)
% For discrete uniform (k,l) rv X
% and input vector x, output is
% vector cdf: cdf(i)=Prob[X<=x(i)]
x=floor(x(:)); %for noninteger x_i
allx=k:max(x);
%allcdf = cdf values from k to max(x)
allcdf=cumsum(duniformpmf(k,l,allx));
%x_i < k are zero prob values
okx=(x>=k);
%set zero prob x(i)=k
x=((1-okx)*k)+(okx.*x);
%cdf(i)=0 for zero prob x(i)
cdf= okx.*allcdf(x-k+1);

Input: k and l are the parameters of a discrete uniform (k, l) random variable X, x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).

duniformpmf y=duniformpmf(k,l,x)

function pmf=duniformpmf(k,l,x)
%discrete uniform(k,l) rv X,
%input = vector x
%output= vector pmf: pmf(i)=Prob[X=x(i)]
pmf= (x>=k).*(x<=l).*(x==floor(x));
pmf=pmf(:)/(l-k+1);

Input: k and l are the parameters of a discrete uniform (k, l) random variable X, x is a vector of possible sample values.
Output: y is a vector with y(i) = P_X(x(i)).

duniformrv x=duniformrv(k,l,m)

function x=duniformrv(k,l,m)
%returns m samples of a discrete
%uniform (k,l) random variable
r=rand(m,1);
cdf=duniformcdf(k,l,k:l);
x=k+count(cdf,r);

Input: k and l are the parameters of a discrete uniform (k, l) random variable X, m is a positive integer.
Output: x is a vector of m independent samples of random variable X.
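For example (a sketch), 1000 rolls of a fair six-sided die and their relative frequencies:

x=duniformrv(1,6,1000);
countequal(x,(1:6)')/1000   %each entry should be near 1/6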


erlangb pb=erlangb(rho,c)

function pb=erlangb(rho,c);
%Usage: pb=erlangb(rho,c)
%returns the Erlang-B blocking
%probability for an M/M/c/c
%queue with load rho
pn=exp(-rho)*poissonpmf(rho,0:c);
pb=pn(c+1)/sum(pn);

Input: Offered load rho (ρ = λ/µ), and the number of servers c of an M/M/c/c queue.
Output: pb, the blocking probability of the queue.
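For instance, the blocking probability of an M/M/c/c queue with offered load ρ = 10 and c = 12 servers (a sketch, assuming matcode is on the path):

pb=erlangb(10,12)
%pb is approximately 0.12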

erlangcdf y=erlangcdf(n,lambda,x)

function F=erlangcdf(n,lambda,x)
F=1.0-poissoncdf(lambda*x,n-1);

Input: n and lambda are the parameters of an Erlang random variable X, vector x.
Output: Vector y such that y(i) = F_X(x(i)).

erlangpdf y=erlangpdf(n,lambda,x)

function f=erlangpdf(n,lambda,x)
f=((lambda^n)/factorial(n-1))...
  *(x.^(n-1)).*exp(-lambda*x);

Input: n and lambda are the parameters of an Erlang random variable X, vector x.
Output: Vector y such that y(i) = f_X(x(i)) = λ^n x(i)^(n−1) e^(−λ x(i)) / (n − 1)!.

erlangrv x=erlangrv(n,lambda,m)

function x=erlangrv(n,lambda,m)
y=exponentialrv(lambda,m*n);
x=sum(reshape(y,m,n),2);

Input: n and lambda are the parameters of an Erlang random variable X, integer m.
Output: Length m vector x such that each x(i) is a sample of X.

exponentialcdf y=exponentialcdf(lambda,x)

function F=exponentialcdf(lambda,x)
F=1.0-exp(-lambda*x);

Input: lambda is the parameter of an exponential random variable X, vector x.
Output: Vector y such that y(i) = F_X(x(i)) = 1 − e^(−λ x(i)).


exponentialpdf y=exponentialpdf(lambda,x)

function f=exponentialpdf(lambda,x)
f=lambda*exp(-lambda*x);
f=f.*(x>=0);

Input: lambda is the parameter of an exponential random variable X, vector x.
Output: Vector y such that y(i) = f_X(x(i)) = λ e^(−λ x(i)).

exponentialrv x=exponentialrv(lambda,m)

function x=exponentialrv(lambda,m)
x=-(1/lambda)*log(1-rand(m,1));

Input: lambda is the parameter of an exponential random variable X, integer m.
Output: Length m vector x such that each x(i) is a sample of X.
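The one-liner is an instance of the inverse CDF method: if U is uniform (0, 1), then X = −ln(1 − U)/λ has CDF 1 − e^(−λx). A quick sample-mean check (a sketch):

x=exponentialrv(2,10000);
mean(x)   %should be close to E[X]=1/lambda=0.5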

finitecdf y=finitecdf(sx,p,x)

function cdf=finitecdf(s,p,x)
% finite random variable X:
% vector s of sample space
% elements {s(1),s(2), ...}
% vector p of probabilities
% p(i)=P[X=s(i)]
% Output is the vector
% cdf: cdf(i)=P[X<=x(i)]
cdf=[];
for i=1:length(x)
  pxi= sum(p(find(s<=x(i))));
  cdf=[cdf; pxi];
end

Input: sx is the range of a finite random variable X, p is the corresponding probability assignment, x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).

finitecoeff rho=finitecoeff(SX,SY,PXY)

function rho=finitecoeff(SX,SY,PXY);
%Usage: rho=finitecoeff(SX,SY,PXY)
%Calculate the correlation coefficient rho of
%finite random variables X and Y
ex=finiteexp(SX,PXY); vx=finitevar(SX,PXY);
ey=finiteexp(SY,PXY); vy=finitevar(SY,PXY);
R=finiteexp(SX.*SY,PXY);
rho=(R-ex*ey)/sqrt(vx*vy);

Input: Grids SX, SY and probability grid PXY describing the finite random variables X and Y.
Output: rho, the correlation coefficient of X and Y.


finitecov covxy=finitecov(SX,SY,PXY)

function covxy=finitecov(SX,SY,PXY);
%Usage: cxy=finitecov(SX,SY,PXY)
%returns the covariance of
%finite random variables X and Y
%given by grids SX, SY, and PXY
ex=finiteexp(SX,PXY);
ey=finiteexp(SY,PXY);
R=finiteexp(SX.*SY,PXY);
covxy=R-ex*ey;

Input: Grids SX, SY and probability grid PXY describing the finite random variables X and Y.
Output: covxy, the covariance of X and Y.

finiteexp ex=finiteexp(sx,px)

function ex=finiteexp(sx,px);
%Usage: ex=finiteexp(sx,px)
%returns the expected value E[X]
%of finite random variable X described
%by samples sx and probabilities px
ex=sum((sx(:)).*(px(:)));

Input: Probability vector px, vector of samples sx describing random variable X.
Output: ex, the expected value E[X].

finitepmf y=finitepmf(sx,p,x)

function pmf=finitepmf(sx,px,x)
% finite random variable X:
% vector sx of sample space
% elements {sx(1),sx(2), ...}
% vector px of probabilities
% px(i)=P[X=sx(i)]
% Output is the vector
% pmf: pmf(i)=P[X=x(i)]
pmf=zeros(size(x(:)));
for i=1:length(x)
  pmf(i)= sum(px(find(sx==x(i))));
end

Input: sx is the range of a finite random variable X, px is the corresponding probability assignment, x is a vector of possible sample values.
Output: y is a vector with y(i) = P[X = x(i)].

finiterv x=finiterv(sx,p,m)

function x=finiterv(s,p,m)
% returns m samples
% of finite (s,p) rv
%s=s(:);p=p(:);
r=rand(m,1);
cdf=cumsum(p);
x=s(1+count(cdf,r));

Input: sx is the range of a finite random variable X, p is the corresponding probability assignment, m is a positive integer.
Output: x is a vector of m independent sample values of random variable X.


finitevar v=finitevar(sx,px)

function v=finitevar(sx,px);
%Usage: v=finitevar(sx,px)
% returns the variance Var[X]
% of finite random variable X described by
% samples sx and probabilities px
ex2=finiteexp(sx.^2,px);
ex=finiteexp(sx,px);
v=ex2-(ex^2);

Input: Probability vector px and vector of samples sx describing random variable X.
Output: v, the variance Var[X].

gausscdf y=gausscdf(mu,sigma,x)

function f=gausscdf(mu,sigma,x)
f=phi((x-mu)/sigma);

Input: mu and sigma are the parameters of a Gaussian random variable X, vector x.
Output: Vector y such that y(i) = F_X(x(i)) = Φ((x(i) − µ)/σ).

gausspdf y=gausspdf(mu,sigma,x)

function f=gausspdf(mu,sigma,x)
f=exp(-(x-mu).^2/(2*sigma^2))/...
  sqrt(2*pi*sigma^2);

Input: mu and sigma are the parameters of a Gaussian random variable X, vector x.
Output: Vector y such that y(i) = f_X(x(i)).

gaussrv x=gaussrv(mu,sigma,m)

function x=gaussrv(mu,sigma,m)
x=mu +(sigma*randn(m,1));

Input: mu and sigma are the parameters of a Gaussian random variable X, integer m.
Output: Length m vector x such that each x(i) is a sample of X.


gaussvector x=gaussvector(mu,C,m)

function x=gaussvector(mu,C,m)
%output: m Gaussian vectors,
%each with mean mu
%and covariance matrix C
if (min(size(C))==1)
  C=toeplitz(C);
end
n=size(C,2);
if (length(mu)==1)
  mu=mu*ones(n,1);
end
[U,D,V]=svd(C);
x=V*(D^(0.5))*randn(n,m)...
  +(mu(:)*ones(1,m));

Input: For a Gaussian (µ_X, C_X) random vector X, gaussvector can be called in two ways:
• C is the n × n covariance matrix, mu is either a length n vector, or a length 1 scalar, m is an integer.
• C is the length n vector equal to the first row of a symmetric Toeplitz covariance matrix C_X, mu is either a length n vector, or a length 1 scalar, m is an integer.
If mu is a length n vector, then mu is the expected value vector; otherwise, each element of X is assumed to have mean mu.
Output: n × m matrix x such that each column x(:,i) is a sample vector of X.
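For example (a sketch), three-dimensional Gaussian vectors with a symmetric Toeplitz covariance, checked against the sample covariance:

x=gaussvector(0,[1 0.5 0.25],10000);
cov(x')   %should be close to toeplitz([1 0.5 0.25])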

gaussvectorpdf f=gaussvectorpdf(mu,C,x)

function f=gaussvectorpdf(mu,C,x)
n=length(x);
z=x(:)-mu(:);
f=exp(-z'*inv(C)*z/2)/... %note the factor 1/2 in the exponent
  sqrt((2*pi)^n*det(C));

Input: For a Gaussian (µ_X, C_X) random vector X, mu is a length n vector, C is the n × n covariance matrix, x is a length n vector.
Output: f is the Gaussian vector PDF f_X(x) evaluated at x.

geometriccdf y=geometriccdf(p,x)

function cdf=geometriccdf(p,x)
% for geometric(p) rv X,
%For input vector x, output is vector
%cdf such that cdf_i=Prob(X<=x_i)
x=(x(:)>=1).*floor(x(:));
cdf=1-((1-p).^x);

Input: p is the parameter of a geometric random variable X, x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).


geometricpmf y=geometricpmf(p,x)

function pmf=geometricpmf(p,x)
%geometric(p) rv X
%out: pmf(i)=Prob[X=x(i)]
x=x(:);
pmf= p*((1-p).^(x-1));
pmf= (x>0).*(x==floor(x)).*pmf;

Input: p is the parameter of a geometric random variable X, x is a vector of possible sample values.
Output: y is a vector with y(i) = P_X(x(i)).

geometricrv x=geometricrv(p,m)

function x=geometricrv(p,m)
%Usage: x=geometricrv(p,m)
% returns m samples of a geometric (p) rv
r=rand(m,1);
x=ceil(log(1-r)/log(1-p));

Input: p is the parameter of a geometric random variable X, m is a positive integer.
Output: x is a vector of m independent samples of random variable X.

icdfrv x=icdfrv(@icdf,m)

function x=icdfrv(icdfhandle,m)
%Usage: x=icdfrv(@icdf,m)
%returns m samples of rv X
%with inverse CDF icdf.m
u=rand(m,1);
x=feval(icdfhandle,u);

Input: @icdf is a “handle” (a kind of pointer) to a MATLAB function icdf.m that is MATLAB's representation of an inverse CDF F_X^(−1)(x) of a random variable X, integer m.
Output: Length m vector x such that each x(i) is a sample of X.
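For example (a sketch, not from the manual), the exponential (λ) CDF has inverse F^(−1)(u) = −ln(1 − u)/λ, so exponential (3) samples can be generated with

x=icdfrv(@(u) -log(1-u)/3, 1000);

where, in recent MATLAB versions, the anonymous function plays the role of icdf.m.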


pascalcdf y=pascalcdf(k,p,x)

function cdf=pascalcdf(k,p,x)
%Usage: cdf=pascalcdf(k,p,x)
%For a pascal (k,p) rv X
%and input vector x, the output
%is a vector cdf such that
% cdf(i)=Prob[X<=x(i)]
x=floor(x(:)); % for noninteger x(i)
allx=k:max(x);
%allcdf holds all needed cdf values
allcdf=cumsum(pascalpmf(k,p,allx));
%x_i < k have zero-prob,
% other values are OK
okx=(x>=k);
%set zero-prob x(i)=k,
%just so indexing is not fouled up
x=(okx.*x) +((1-okx)*k);
cdf= okx.*allcdf(x-k+1);

Input: k and p are the parameters of a Pascal (k, p) random variable X, x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).

pascalpmf y=pascalpmf(k,p,x)

function pmf=pascalpmf(k,p,x)
%For Pascal (k,p) rv X, and
%input vector x, output is a
%vector pmf: pmf(i)=Prob[X=x(i)]
x=x(:);
n=max(x);
i=(k:n-1)';
ip= [1 ;(1-p)*(i./(i+1-k))];
%pb=all n-k+1 pascal probs
pb=(p^k)*cumprod(ip);
okx=(x==floor(x)).*(x>=k);
%set bad x(i)=k to stop bad indexing
x=(okx.*x) + k*(1-okx);
% pmf(i)=0 unless x(i) >= k
pmf=okx.*pb(x-k+1);

Input: k and p are the parameters of a Pascal (k, p) random variable X, x is a vector of possible sample values.
Output: y is a vector with y(i) = P_X(x(i)).


pascalrv x=pascalrv(k,p,m)

function x=pascalrv(k,p,m)
% return m samples of pascal(k,p) rv
r=rand(m,1);
rmax=max(r);
xmin=k;
xmax=ceil(2*(k/p)); %set max range
sx=xmin:xmax;
cdf=pascalcdf(k,p,sx);
while cdf(length(cdf)) <=rmax
  xmax=2*xmax;
  sx=xmin:xmax;
  cdf=pascalcdf(k,p,sx);
end
x=xmin+countless(cdf,r);

Input: k and p are the parameters of a Pascal random variable X, m is a positive integer.
Output: x is a vector of m independent samples of random variable X.

phi y=phi(x)

function y=phi(x)
sq2=sqrt(2);
y= 0.5 + 0.5*erf(x/sq2);

Input: Vector x.
Output: Vector y such that y(i) = Φ(x(i)).

poissoncdf y=poissoncdf(alpha,x)

function cdf=poissoncdf(alpha,x)
%output cdf(i)=Prob[X<=x(i)]
x=floor(x(:));
sx=0:max(x);
cdf=cumsum(poissonpmf(alpha,sx));
%cdf from 0 to max(x)
okx=(x>=0); %x(i)<0 -> cdf=0
x=(okx.*x); %set negative x(i)=0
cdf= okx.*cdf(x+1);
%cdf=0 for x(i)<0

Input: alpha is the parameter of a Poisson (α) random variable X, x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).


poissonpmf y=poissonpmf(alpha,x)

function pmf=poissonpmf(alpha,x)
%Poisson (alpha) rv X,
%out=vector pmf: pmf(i)=P[X=x(i)]
x=x(:);
k=(1:max(x))';
logfacts =cumsum(log(k));
pb=exp([-alpha; ...
  -alpha+ (k*log(alpha))-logfacts]);
okx=(x>=0).*(x==floor(x));
x=okx.*x;
pmf=okx.*pb(x+1);
%pmf(i)=0 for zero-prob x(i)

Input: alpha is the parameter of a Poisson (α) random variable X, x is a vector of possible sample values.
Output: y is a vector with y(i) = P_X(x(i)).

poissonrv x=poissonrv(alpha,m)

function x=poissonrv(alpha,m)
%return m samples of poisson(alpha) rv X
r=rand(m,1);
rmax=max(r);
xmin=0;
xmax=ceil(2*alpha); %set max range
sx=xmin:xmax;
cdf=poissoncdf(alpha,sx);
%while ( sum(cdf <=rmax) ==(xmax-xmin+1) )
while cdf(length(cdf)) <=rmax
  xmax=2*xmax;
  sx=xmin:xmax;
  cdf=poissoncdf(alpha,sx);
end
x=xmin+countless(cdf,r);

Input: alpha is the parameter of a Poisson (α) random variable X, m is a positive integer.
Output: x is a vector of m independent samples of random variable X.

uniformcdf y=uniformcdf(a,b,x)

function F=uniformcdf(a,b,x)
%Usage: F=uniformcdf(a,b,x)
%returns the CDF of a continuous
%uniform rv evaluated at x
F=(x-a).*((x>=a) & (x<b))/(b-a);
F=F+1.0*(x>=b);

Input: a and b are parameters for continuous uniform random variable X, vector x.
Output: Vector y such that y(i) = F_X(x(i)).


uniformpdf y=uniformpdf(a,b,x)

function f=uniformpdf(a,b,x)
%Usage: f=uniformpdf(a,b,x)
%returns the PDF of a continuous
%uniform rv evaluated at x
f=((x>=a) & (x<b))/(b-a);

Input: a and b are parameters for continuous uniform random variable X, vector x.
Output: Vector y such that y(i) = f_X(x(i)).

uniformrv x=uniformrv(a,b,m)

function x=uniformrv(a,b,m)
%Usage: x=uniformrv(a,b,m)
%Returns m samples of a
%uniform (a,b) random variable
x=a+(b-a)*rand(m,1);

Input: a and b are parameters for continuous uniform random variable X, positive integer m.
Output: m element vector x such that each x(i) is a sample of X.


Functions for Stochastic Processes

brownian w=brownian(alpha,t)

function w=brownian(alpha,t)
%Brownian motion process
%sampled at t(1)<t(2)< ...
t=t(:);
n=length(t);
delta=t-[0;t(1:n-1)];
x=sqrt(alpha*delta).*gaussrv(0,1,n);
w=cumsum(x);

Input: t is a vector holding an ordered sequence of inspection times, alpha is the scaling constant of a Brownian motion process such that the ith increment has variance α(t(i) − t(i−1)).
Output: w is a vector such that w(i) is the position at time t(i) of the particle in Brownian motion.
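For example (a sketch), one sample path on [0, 10] with α = 1, inspected every 0.01:

t=0.01:0.01:10;
w=brownian(1,t);
plot(t,w);   %one realization of the process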

cmcprob pv=cmcprob(Q,p0,t)

function pv = cmcprob(Q,p0,t)
%Q has zero diagonal rates
%initial state probabilities p0
K=size(Q,1)-1; %max no. state
%check for integer p0
if (length(p0)==1)
  p0=((0:K)==p0);
end
R=Q-diag(sum(Q,2));
pv= (p0(:)'*expm(R*t))';

Input: n × n state transition matrix Q for a continuous-time finite Markov chain, length n vector p0 denoting the initial state probabilities, nonnegative scalar t.
Output: Length n vector pv such that pv is the state probability vector at time t of the Markov chain.
Comment: If p0 is a scalar integer, then the simulation starts in state p0.

cmcstatprob pv=cmcstatprob(Q)

function pv = cmcstatprob(Q)
%Q has zero diagonal rates
R=Q-diag(sum(Q,2));
n=size(Q,1);
R(:,1)=ones(n,1);
pv=([1 zeros(1,n-1)]*R^(-1))';

Input: State transition matrix Q for a continuous-time finite Markov chain.
Output: pv is the stationary probability vector for the continuous-time Markov chain.

dmcstatprob pv=dmcstatprob(P)

function pv = dmcstatprob(P)
n=size(P,1);
A=(eye(n)-P);
A(:,1)=ones(n,1);
pv=([1 zeros(1,n-1)]*A^(-1))';

Input: n × n stochastic matrix P representing a discrete-time aperiodic irreducible finite Markov chain.
Output: pv is the stationary probability vector.


poissonarrivals s=poissonarrivals(lambda,T)

function s=poissonarrivals(lambda,T)
%arrival times s=[s(1) ... s(n)]
% s(n)<= T < s(n+1)
n=ceil(1.1*lambda*T);
s=cumsum(exponentialrv(lambda,n));
while (s(length(s))< T),
  s_new=s(length(s))+ ...
    cumsum(exponentialrv(lambda,n));
  s=[s; s_new];
end
s=s(s<=T);

Input: lambda is the arrival rate of a Poisson process, T marks the end of an observation interval [0, T].
Output: s=[s(1), ..., s(n)]' is a vector such that s(i) is the ith arrival time. Note that the length n is a Poisson random variable with expected value λT.
Comment: This code is pretty stupid. There are decidedly better ways to create a set of arrival times; see Problem 10.13.5.

poissonprocess N=poissonprocess(lambda,t)

function N=poissonprocess(lambda,t)
%input: rate lambda>0, vector t
%For a sample function of a
%Poisson process of rate lambda,
%N(i) = no. of arrivals by t(i)
s=poissonarrivals(lambda,max(t));
N=count(s,t);

Input: lambda is the arrival rate of a Poisson process, t is a vector of “inspection times.”
Output: N is a vector such that N(i) is the number of arrivals by inspection time t(i).

simcmc ST=simcmc(Q,p0,T)

function ST=simcmc(Q,p0,T);
K=size(Q,1)-1; %max no. state
%calc average trans. rate
ps=cmcstatprob(Q);
v=sum(Q,2); R=ps'*v;
n=ceil(0.6*T/R);
ST=simcmcstep(Q,p0,2*n);
while (sum(ST(:,2))<T),
  s=ST(size(ST,1),1);
  p00=Q(1+s,:)/v(1+s);
  S=simcmcstep(Q,p00,n);
  ST=[ST;S];
end
n=1+sum(cumsum(ST(:,2))<T);
ST=ST(1:n,:);
%truncate last holding time
ST(n,2)=T-sum(ST(1:n-1,2));

Input: State transition matrix Q for a continuous-time finite Markov chain, vector p0 denoting the initial state probabilities, time duration T.
Output: A simulation of the Markov chain system over the time interval [0, T]: The output is an n × 2 matrix ST such that the first column ST(:,1) is the sequence of system states and the second column ST(:,2) is the amount of time spent in each state. That is, ST(i,2) is the amount of time the system spends in state ST(i,1).
Comment: If p0 is a scalar integer, then the simulation starts in state p0. Note that n, the number of state occupancy periods, is random.


simcmcstep S=simcmcstep(Q,p0,n)

function S=simcmcstep(Q,p0,n);
%S=simcmcstep(Q,p0,n)
% Simulate n steps of a cts
% Markov Chain, rate matrix Q,
% init. state probabilities p0
K=size(Q,1)-1; %max no. state
S=zeros(n+1,2); %init allocation
%check for integer p0
if (length(p0)==1)
  p0=((0:K)==p0);
end
v=sum(Q,2); %state dep. rates
t=1./v;
P=diag(t)*Q;
S(:,1)=simdmc(P,p0,n);
S(:,2)=t(1+S(:,1)) ...
  .*exponentialrv(1,n+1);

Input: State transition matrix Q for a continuous-time finite Markov chain, vector p0 denoting the initial state probabilities, integer n.
Output: A simulation of n steps of the continuous-time Markov chain system: The output is an n × 2 matrix S such that the first column S(:,1) is the length n sequence of system states and the second column S(:,2) is the amount of time spent in each state. That is, S(i,2) is the amount of time the system spends in state S(i,1).
Comment: If p0 is a scalar integer, then the simulation starts in state p0. This program is the basis for simcmc.

simdmc x=simdmc(P,p0,n)

function x=simdmc(P,p0,n)
K=size(P,1)-1; %highest no. state
sx=0:K; %state space
x=zeros(n+1,1); %initialization
if (length(p0)==1) %convert integer p0 to prob vector
  p0=((0:K)==p0);
end
x(1)=finiterv(sx,p0,1); %x(m)= state at time m-1
for m=1:n,
  x(m+1)=finiterv(sx,P(x(m)+1,:),1);
end

Input: n × n stochastic matrix P which is the state transition matrix of a discrete-time finite Markov chain, length n vector p0 denoting the initial state probabilities, integer n.
Output: A simulation of the Markov chain system such that for the length n vector x, x(m) is the state at time m-1 of the Markov chain.
Comment: If p0 is a scalar integer, then the simulation starts in state p0.


Random Utilities

count n=count(x,y)

function n=count(x,y)
%Usage n=count(x,y)
%n(i)= # elements of x <= y(i)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX<=MY),1))';

Input: Vectors x and y.
Output: Vector n such that n(i) is the number of elements of x less than or equal to y(i).
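For example (a sketch):

n=count([1 2 3 4],[0 2.5 10])
%n = [0; 2; 4]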

countequal n=countequal(x,y)

function n=countequal(x,y)
%Usage: n=countequal(x,y)
%n(j)= # elements of x = y(j)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX==MY),1))';

Input: Vectors x and y.
Output: Vector n such that n(i) is the number of elements of x equal to y(i).

countless n=countless(x,y)

function n=countless(x,y)
%Usage: n=countless(x,y)
%n(i)= # elements of x < y(i)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX<MY),1))';

Input: Vectors x and y.
Output: Vector n such that n(i) is the number of elements of x strictly less than y(i).

dftmat F=dftmat(N)

function F = dftmat(N);
%Usage: F=dftmat(N)
%F is the N by N DFT matrix
n=(0:N-1)';
F=exp((-1.0j)*2*pi*(n*(n'))/N);

Input: Integer N.
Output: F is the N by N discrete Fourier transform matrix.
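As a quick property check (a sketch), the DFT matrix satisfies F'F = N I:

F=dftmat(4);
norm(F'*F/4 - eye(4))   %approximately 0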


freqxy fxy=freqxy(xy,SX,SY)

function fxy = freqxy(xy,SX,SY)
%Usage: fxy = freqxy(xy,SX,SY)
%xy is an m x 2 matrix:
%xy(i,:)= ith sample pair X,Y
%Output fxy is a K x 3 matrix:
% [fxy(k,1) fxy(k,2)]
% = kth unique pair [x y] and
% fxy(k,3)= corresp. rel. freq.
%extend xy to include a sample
%for all possible (X,Y) pairs:
xy=[xy; SX(:) SY(:)];
[U,I,J]=unique(xy,'rows');
N=hist(J,1:max(J))-1;
N=N/sum(N);
fxy=[U N(:)];
%reorder fxy rows to match
%rows of [SX(:) SY(:) PXY(:)]:
fxy=sortrows(fxy,[2 1 3]);

Input: For random variables X and Y, xy is an m × 2 matrix holding a list of sample value pairs; xy(i,:) is the ith sample pair (X, Y). Grids SX and SY represent the sample space.
Output: fxy is a K × 3 matrix. In each row [fxy(k,1) fxy(k,2) fxy(k,3)], [fxy(k,1) fxy(k,2)] is a unique (X, Y) pair with relative frequency fxy(k,3).
Comment: Given the grids SX, SY and the probability grid PXY, a list of random sample value pairs xy can be simulated by the commands

S=[SX(:) SY(:)];
xy=finiterv(S,PXY(:),m);

The output fxy is ordered so that the rows match the ordering of rows in the matrix [SX(:) SY(:) PXY(:)].

fftc S=fftc(r,N); S=fftc(r)

function S=fftc(varargin);
%DFT for a signal r
%centered at the origin
%Usage:
% fftc(r,N): N point DFT of r
% fftc(r): length(r) DFT of r
r=varargin{1};
L=1+floor(length(r)/2);
if (nargin>1)
  N=varargin{2}(1);
else
  N=(2*L)-1;
end
R=fft(r,N);
n=reshape(0:(N-1),size(R));
phase=2*pi*(n/N)*(L-1);
S=R.*exp((1.0j)*phase);

Input: Vector r=[r(1) ... r(2k+1)] holding the time sequence r(−k), . . . , r(0), . . . , r(k) centered around the origin.
Output: S is the DFT of r.
Comment: Supports the same calling conventions as fft.


pmfplot pmfplot(sx,px,'x axis text','y axis text')

function h=pmfplot(sx,px,xls,yls)
%Usage: pmfplot(sx,px,xls,yls)
%sx and px are vectors, px is the PMF
%xls and yls are x and y label strings
nonzero=find(px);
sx=sx(nonzero); px=px(nonzero);
sx=(sx(:))'; px=(px(:))';
XM = [sx; sx];
PM=[zeros(size(px)); px];
h=plot(XM,PM,'-k');
set(h,'LineWidth',3);
if (nargin==4)
  xlabel(xls);
  ylabel(yls,'VerticalAlignment','Bottom');
end
xmin=min(sx); xmax=max(sx);
xborder=0.05*(xmax-xmin);
xmax=xmax+xborder;
xmin=xmin-xborder;
ymax=1.1*max(px);
axis([xmin xmax 0 ymax]);

Input: Sample space vector sx and PMF vector px for a finite random variable X, optional text strings xls and yls.
Output: A plot of the PMF P_X(x) in the bar style used in the text.

rect y=rect(x)

function y=rect(x);
%Usage:y=rect(x);
y=1.0*(abs(x)<0.5);

Input: Vector x.
Output: Vector y such that y(i) = rect(x(i)) = 1 if |x(i)| < 0.5 and 0 otherwise.

sinc y=sinc(x)

function y=sinc(x);
xx=x+(x==0);
y=sin(pi*xx)./(pi*xx);
y=((1.0-(x==0)).*y)+ (1.0*(x==0));

Input: Vector x.
Output: Vector y such that y(i) = sinc(x(i)) = sin(πx(i))/(πx(i)).
Comment: The code is ugly because it makes sure to produce the right limit value at x(i) = 0.


simplot simplot(S,xlabel,ylabel)

function h=simplot(S,xls,yls);
%h=simplot(S,xlabel,ylabel)
% Plots the output of a simulated state sequence
% If S is N by 1, a discrete time chain is assumed
% with visit times of one unit.
% If S is an N by 2 matrix, a cts time Markov chain
% is assumed where
% S(:,1) = state sequence.
% S(:,2) = state visit times.
% The cumulative sum of visit times are transition instances.
% h is a handle to a stairs plot of the state sequence
% vs state transition times
%in case of discrete time simulation
if (size(S,2)==1)
  S=[S ones(size(S))];
end
Y=[S(:,1) ; S(size(S,1),1)];
X=cumsum([0 ; S(:,2)]);
h=stairs(X,Y);
if (nargin==3)
  xlabel(xls);
  ylabel(yls,'VerticalAlignment','Bottom');
end

Input: The simulated state sequence vector S generated by S=simdmc(P,p0,n) or the n × 2 state/time matrix ST generated by either ST=simcmc(Q,p0,T) or ST=simcmcstep(Q,p0,n).
Output: A “stairs” plot showing the sequence of simulation states over time.
Comment: If S is just a state sequence vector, then each stair has equal width. If S is an n × 2 state/time matrix ST, then the width of a stair is proportional to the time spent in that state.


Probability and Stochastic Processes
A Friendly Introduction for Electrical and Computer Engineers

Second Edition

Quiz Solutions

Roy D. Yates and David J. Goodman

May 22, 2004

• The MATLAB section quizzes at the end of each chapter use programs available for download as the archive matcode.zip. This archive has general purpose programs for solving probability problems as well as specific .m files associated with examples or quizzes in the text. Also available is a manual probmatlab.pdf describing the general purpose .m files in matcode.zip.

• We have made a substantial effort to check the solution to every quiz. Nevertheless, there is a nonzero probability (in fact, a probability close to unity) that errors will be found. If you find errors or have suggestions or comments, please send email to ryates@winlab.rutgers.edu. When errors are found, corrected solutions will be posted at the website.


Quiz Solutions – Chapter 1

Quiz 1.1
In the Venn diagrams for parts (1)–(6) below, the shaded area represents the indicated set.
[Venn diagrams over the sets M, O, and T omitted.]
(1) R = T^c   (2) M ∪ O   (3) M ∩ O
(4) R ∪ M   (5) R ∩ M   (6) T^c − M

Quiz 1.2
(1) A_1 = {vvv, vvd, vdv, vdd}
(2) B_1 = {dvv, dvd, ddv, ddd}
(3) A_2 = {vvv, vvd, dvv, dvd}
(4) B_2 = {vdv, vdd, ddv, ddd}
(5) A_3 = {vvv, ddd}
(6) B_3 = {vdv, dvd}
(7) A_4 = {vvv, vvd, vdv, dvv, vdd, dvd, ddv}
(8) B_4 = {ddd, ddv, dvd, vdd}
Recall that A_i and B_i are collectively exhaustive if A_i ∪ B_i = S. Also, A_i and B_i are mutually exclusive if A_i ∩ B_i = φ. Since we have written down each pair A_i and B_i above, we can simply check for these properties.
The pair A_1 and B_1 are mutually exclusive and collectively exhaustive. The pair A_2 and B_2 are mutually exclusive and collectively exhaustive. The pair A_3 and B_3 are mutually exclusive but not collectively exhaustive. The pair A_4 and B_4 are not mutually exclusive since dvd belongs to A_4 and B_4. However, A_4 and B_4 are collectively exhaustive.


Quiz 1.3
There are exactly 50 equally likely outcomes: s_51 through s_100. Each of these outcomes has probability 0.02.
(1) P[{s_79}] = 0.02
(2) P[{s_100}] = 0.02
(3) P[A] = P[{s_90, . . . , s_100}] = 11 × 0.02 = 0.22
(4) P[F] = P[{s_51, . . . , s_59}] = 9 × 0.02 = 0.18
(5) P[T ≥ 80] = P[{s_80, . . . , s_100}] = 21 × 0.02 = 0.42
(6) P[T < 90] = P[{s_51, s_52, . . . , s_89}] = 39 × 0.02 = 0.78
(7) P[a C grade or better] = P[{s_70, . . . , s_100}] = 31 × 0.02 = 0.62
(8) P[student passes] = P[{s_60, . . . , s_100}] = 41 × 0.02 = 0.82

Quiz 1.4
We can describe this experiment by the event space consisting of the four possible events VB, VL, DB, and DL. We represent these events in the table:

        V      D
  L    0.35    ?
  B     ?      ?

In a roundabout way, the problem statement tells us how to fill in the table. In particular,
P[V] = 0.7 = P[VL] + P[VB]   (1)
P[L] = 0.6 = P[VL] + P[DL]   (2)
Since P[VL] = 0.35, we can conclude that P[VB] = 0.35 and that P[DL] = 0.6 − 0.35 = 0.25. This allows us to fill in two more table entries:

        V      D
  L    0.35   0.25
  B    0.35    ?

The remaining table entry is filled in by observing that the probabilities must sum to 1. This implies P[DB] = 0.05 and the complete table is

        V      D
  L    0.35   0.25
  B    0.35   0.05

Finding the various probabilities is now straightforward:


(1) P[DL] = 0.25
(2) P[D ∪ L] = P[VL] + P[DL] + P[DB] = 0.35 + 0.25 + 0.05 = 0.65.
(3) P[VB] = 0.35
(4) P[V ∪ L] = P[V] + P[L] − P[VL] = 0.7 + 0.6 − 0.35 = 0.95
(5) P[V ∪ D] = P[S] = 1
(6) P[LB] = P[LL^c] = 0

Quiz 1.5
(1) The probability of exactly two voice calls is
P[N_V = 2] = P[{vvd, vdv, dvv}] = 0.3   (1)
(2) The probability of at least one voice call is
P[N_V ≥ 1] = P[{vdd, dvd, ddv, vvd, vdv, dvv, vvv}]   (2)
           = 6(0.1) + 0.2 = 0.8   (3)
An easier way to get the same answer is to observe that
P[N_V ≥ 1] = 1 − P[N_V < 1] = 1 − P[N_V = 0] = 1 − P[{ddd}] = 0.8   (4)
(3) The conditional probability of two voice calls followed by a data call given that there were two voice calls is
P[{vvd} | N_V = 2] = P[{vvd}, N_V = 2] / P[N_V = 2] = P[{vvd}] / P[N_V = 2] = 0.1/0.3 = 1/3   (5)
(4) The conditional probability of two data calls followed by a voice call given there were two voice calls is
P[{ddv} | N_V = 2] = P[{ddv}, N_V = 2] / P[N_V = 2] = 0   (6)
The joint event of the outcome ddv and exactly two voice calls has probability zero since there is only one voice call in the outcome ddv.
(5) The conditional probability of exactly two voice calls given at least one voice call is
P[N_V = 2 | N_V ≥ 1] = P[N_V = 2, N_V ≥ 1] / P[N_V ≥ 1] = P[N_V = 2] / P[N_V ≥ 1] = 0.3/0.8 = 3/8   (7)
(6) The conditional probability of at least one voice call given there were exactly two voice calls is
P[N_V ≥ 1 | N_V = 2] = P[N_V ≥ 1, N_V = 2] / P[N_V = 2] = P[N_V = 2] / P[N_V = 2] = 1   (8)
Given that there were two voice calls, there must have been at least one voice call.


Quiz 1.6
In this experiment, there are four outcomes with probabilities
P[{vv}] = (0.8)² = 0.64, P[{vd}] = (0.8)(0.2) = 0.16, P[{dv}] = (0.2)(0.8) = 0.16, P[{dd}] = (0.2)² = 0.04
When checking the independence of any two events A and B, it's wise to avoid intuition and simply check whether P[AB] = P[A]P[B]. Using the probabilities of the outcomes, we now can test for the independence of events.
(1) First, we calculate the probability of the joint event:
P[N_V = 2, N_V ≥ 1] = P[N_V = 2] = P[{vv}] = 0.64   (1)
Next, we observe that
P[N_V ≥ 1] = P[{vd, dv, vv}] = 0.96   (2)
Finally, we make the comparison
P[N_V = 2] P[N_V ≥ 1] = (0.64)(0.96) = 0.6144 ≠ P[N_V = 2, N_V ≥ 1]   (3)
which shows the two events are dependent.
(2) The probability of the joint event is
P[N_V ≥ 1, C_1 = v] = P[{vd, vv}] = 0.80   (4)
From part (a), P[N_V ≥ 1] = 0.96. Further, P[C_1 = v] = 0.8 so that
P[N_V ≥ 1] P[C_1 = v] = (0.96)(0.8) = 0.768 ≠ P[N_V ≥ 1, C_1 = v]   (5)
Hence, the events are dependent.
(3) The problem statement that the calls were independent implies that the events the second call is a voice call, {C_2 = v}, and the first call is a data call, {C_1 = d}, are independent events. Just to be sure, we can do the calculations to check:
P[C_1 = d, C_2 = v] = P[{dv}] = 0.16   (6)
Since P[C_1 = d]P[C_2 = v] = (0.2)(0.8) = 0.16, we confirm that the events are independent. Note that this shouldn't be surprising since we used the information that the calls were independent in the problem statement to determine the probabilities of the outcomes.
(4) The probability of the joint event is
P[C_2 = v, N_V is even] = P[{vv}] = 0.64   (7)
Also, each event has probability
P[C_2 = v] = P[{dv, vv}] = 0.8, P[N_V is even] = P[{dd, vv}] = 0.68   (8)
Thus, P[C_2 = v]P[N_V is even] = (0.8)(0.68) = 0.544. Since P[C_2 = v, N_V is even] = 0.64 ≠ 0.544, the events are dependent.


Quiz 1.7
Let F_i denote the event that the user is found on page i. The tree for the experiment is
[Tree diagram omitted: three stages; at each stage i the user is found (F_i, probability 0.8) or not found (F_i^c, probability 0.2).]
The user is found unless all three paging attempts fail. Thus the probability the user is found is
P[F] = 1 − P[F_1^c F_2^c F_3^c] = 1 − (0.2)³ = 0.992   (1)

Quiz 1.8
(1) We can view choosing each bit in the code word as a subexperiment. Each subexperiment has two possible outcomes: 0 and 1. Thus by the fundamental principle of counting, there are 2 × 2 × 2 × 2 = 2⁴ = 16 possible code words.
(2) An experiment that can yield all possible code words with two zeroes is to choose which 2 bits (out of 4 bits) will be zero. The other two bits then must be ones. There are \binom{4}{2} = 6 ways to do this. Hence, there are six code words with exactly two zeroes. For this problem, it is also possible to simply enumerate the six code words:
1100, 1010, 1001, 0101, 0110, 0011.
(3) When the first bit must be a zero, then the first subexperiment of choosing the first bit has only one outcome. For each of the next three bits, we have two choices. In this case, there are 1 × 2 × 2 × 2 = 8 ways of choosing a code word.
(4) For the constant ratio code, we can specify a code word by choosing M of the bits to be ones. The other N − M bits will be zeroes. The number of ways of choosing such a code word is \binom{N}{M}. For N = 8 and M = 3, there are \binom{8}{3} = 56 code words.

Quiz 1.9
(1) In this problem, k bits received in error is the same as k failures in 100 trials. The failure probability is ε = 1 − p and the success probability is 1 − ε = p. That is, the probability of k bits in error and 100 − k correctly received bits is
P[S_{k,100−k}] = \binom{100}{k} ε^k (1 − ε)^{100−k}   (1)


For ε = 0.01,
P[S_{0,100}] = (1 − ε)^{100} = (0.99)^{100} = 0.3660   (2)
P[S_{1,99}] = 100(0.01)(0.99)^{99} = 0.3700   (3)
P[S_{2,98}] = 4950(0.01)²(0.99)^{98} = 0.1849   (4)
P[S_{3,97}] = 161,700(0.01)³(0.99)^{97} = 0.0610   (5)
(2) The probability a packet is decoded correctly is just
P[C] = P[S_{0,100}] + P[S_{1,99}] + P[S_{2,98}] + P[S_{3,97}] = 0.9819   (6)
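This sum is easy to sanity-check numerically with the matcode functions (a sketch, assuming matcode is on the path):

pc=sum(binomialpmf(100,0.01,0:3))
%pc is approximately 0.982, in agreement with the sum above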

Quiz 1.10
Since the chip works only if all n transistors work, the transistors in the chip are like devices in series. The probability that a chip works is P[C] = p^n.
The module works if either 8 chips work or 9 chips work. Let C_k denote the event that exactly k chips work. Since transistor failures are independent of each other, chip failures are also independent. Thus each P[C_k] has the binomial probability
P[C_8] = \binom{9}{8} (P[C])⁸ (1 − P[C])^{9−8} = 9p^{8n}(1 − p^n),   (1)
P[C_9] = (P[C])⁹ = p^{9n}.   (2)
The probability a memory module works is
P[M] = P[C_8] + P[C_9] = p^{8n}(9 − 8p^n)   (3)
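As a numerical illustration (the values p = 0.999 and n = 100 are hypothetical, not part of the quiz):

p=0.999; n=100;
PM=(p^(8*n))*(9-8*p^n)   %approximately 0.79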

Quiz 1.11

R=rand(1,100);
X=(R<= 0.4) ...
  + (2*(R>0.4).*(R<=0.9)) ...
  + (3*(R>0.9));
Y=hist(X,1:3)

For a MATLAB simulation, we first generate a vector R of 100 random numbers. Second, we generate vector X as a function of R to represent the 3 possible outcomes of a flip. That is, X(i)=1 if flip i was heads, X(i)=2 if flip i was tails, and X(i)=3 if flip i landed on the edge.
To see how this works, we note there are three cases:
• If R(i) <= 0.4, then X(i)=1.
• If 0.4 < R(i) and R(i)<=0.9, then X(i)=2.
• If 0.9 < R(i), then X(i)=3.
These three cases will have probabilities 0.4, 0.5 and 0.1. Lastly, we use the hist function to count how many occurrences of each possible value of X(i) there are.


Quiz Solutions – Chapter 2

Quiz 2.1
The sample space, probabilities and corresponding grades for the experiment are

Outcome   P[·]   G
  BB      0.36   3.0
  BC      0.24   2.5
  CB      0.24   2.5
  CC      0.16   2

Quiz 2.2
(1) To find c, we recall that the PMF must sum to 1. That is,
Σ_{n=1}^{3} P_N(n) = c(1 + 1/2 + 1/3) = 1   (1)
This implies c = 6/11. Now that we have found c, the remaining parts are straightforward.
(2) P[N = 1] = P_N(1) = c = 6/11
(3) P[N ≥ 2] = P_N(2) + P_N(3) = c/2 + c/3 = 5/11
(4) P[N > 3] = Σ_{n=4}^{∞} P_N(n) = 0

Quiz 2.3
Decoding each transmitted bit is an independent trial where we call a bit error a “success.” Each bit is in error, that is, the trial is a success, with probability p. Now we can interpret each experiment in the generic context of independent trials.
(1) The random variable X is the number of trials up to and including the first success. Similar to Example 2.11, X has the geometric PMF
P_X(x) = { p(1 − p)^{x−1} for x = 1, 2, . . . ; 0 otherwise }   (1)
(2) If p = 0.1, then the probability exactly 10 bits are sent is
P[X = 10] = P_X(10) = (0.1)(0.9)⁹ = 0.0387   (2)


The probability that at least 10 bits are sent is P[X ≥ 10] = Σ_{x=10}^{∞} P_X(x). This sum is not too hard to calculate. However, it's even easier to observe that X ≥ 10 if the first 10 bits are transmitted correctly. That is,
P[X ≥ 10] = P[first 10 bits are correct] = (1 − p)^{10}   (3)
For p = 0.1, P[X ≥ 10] = (0.9)^{10} = 0.3487.
(3) The random variable Y is the number of successes in 100 independent trials. Just as in Example 2.13, Y has the binomial PMF
P_Y(y) = \binom{100}{y} p^y (1 − p)^{100−y}   (4)
If p = 0.01, the probability of exactly 2 errors is
P[Y = 2] = P_Y(2) = \binom{100}{2} (0.01)²(0.99)^{98} = 0.1849   (5)
(4) The probability of no more than 2 errors is
P[Y ≤ 2] = P_Y(0) + P_Y(1) + P_Y(2)   (6)
= (0.99)^{100} + 100(0.01)(0.99)^{99} + \binom{100}{2}(0.01)²(0.99)^{98}   (7)
= 0.9207   (8)
(5) Random variable Z is the number of trials up to and including the third success. Thus Z has the Pascal PMF (see Example 2.15)
P_Z(z) = \binom{z − 1}{2} p³ (1 − p)^{z−3}   (9)
Note that P_Z(z) > 0 for z = 3, 4, 5, . . ..
(6) If p = 0.25, the probability that the third error occurs on bit 12 is
P_Z(12) = \binom{11}{2} (0.25)³(0.75)⁹ = 0.0645   (10)

Quiz 2.4
Each of these probabilities can be read off the CDF F_Y(y). However, we must keep in mind that when F_Y(y) has a discontinuity at y_0, F_Y(y) takes the upper value F_Y(y_0^+).
(1) P[Y < 1] = F_Y(1^-) = 0


(2) P[Y ≤ 1] = F_Y(1) = 0.6
(3) P[Y > 2] = 1 − P[Y ≤ 2] = 1 − F_Y(2) = 1 − 0.8 = 0.2
(4) P[Y ≥ 2] = 1 − P[Y < 2] = 1 − F_Y(2^-) = 1 − 0.6 = 0.4
(5) P[Y = 1] = P[Y ≤ 1] − P[Y < 1] = F_Y(1^+) − F_Y(1^-) = 0.6
(6) P[Y = 3] = P[Y ≤ 3] − P[Y < 3] = F_Y(3^+) − F_Y(3^-) = 0.8 − 0.8 = 0

Quiz 2.5
(1) With probability 0.7, a call is a voice call and C = 25. Otherwise, with probability 0.3, we have a data call and C = 40. This corresponds to the PMF
P_C(c) = { 0.7 for c = 25; 0.3 for c = 40; 0 otherwise }   (1)
(2) The expected value of C is
E[C] = 25(0.7) + 40(0.3) = 29.5 cents   (2)

Quiz 2.6
(1) As a function of N, the cost T is
T = 25N + 40(3 − N) = 120 − 15N   (1)
(2) To find the PMF of T, we can draw the following tree:
[Tree diagram omitted: N=0 with probability 0.1 gives T=120; N=1, N=2, and N=3, each with probability 0.3, give T=105, T=90, and T=75.]
From the tree, we can write down the PMF of T:
P_T(t) = { 0.3 for t = 75, 90, 105; 0.1 for t = 120; 0 otherwise }   (2)
From the PMF P_T(t), the expected value of T is
E[T] = 75P_T(75) + 90P_T(90) + 105P_T(105) + 120P_T(120)   (3)
     = (75 + 90 + 105)(0.3) + 120(0.1) = 93   (4)


Quiz 2.7
(1) Using Definition 2.14, the expected number of applications is
E[A] = Σ_{a=1}^{4} a P_A(a) = 1(0.4) + 2(0.3) + 3(0.2) + 4(0.1) = 2   (1)
(2) The number of memory chips is M = g(A) where
g(A) = { 4 for A = 1, 2; 6 for A = 3; 8 for A = 4 }   (2)
(3) By Theorem 2.10, the expected number of memory chips is
E[M] = Σ_{a=1}^{4} g(a) P_A(a) = 4(0.4) + 4(0.3) + 6(0.2) + 8(0.1) = 4.8   (3)
Since E[A] = 2, g(E[A]) = g(2) = 4. However, E[M] = 4.8 ≠ g(E[A]). The two quantities are different because g(A) is not of the form αA + β.

Quiz 2.8
The PMF P_N(n) allows us to calculate each of the desired quantities.
(1) The expected value of N is
E[N] = Σ_{n=0}^{2} n P_N(n) = 0(0.1) + 1(0.4) + 2(0.5) = 1.4   (1)
(2) The second moment of N is
E[N²] = Σ_{n=0}^{2} n² P_N(n) = 0²(0.1) + 1²(0.4) + 2²(0.5) = 2.4   (2)
(3) The variance of N is
Var[N] = E[N²] − (E[N])² = 2.4 − (1.4)² = 0.44   (3)
(4) The standard deviation is σ_N = √Var[N] = √0.44 = 0.663.
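These moments are easy to double-check with the matcode functions (a sketch, assuming matcode is on the path):

sx=[0 1 2]; px=[0.1 0.4 0.5];
ex=finiteexp(sx,px)   %1.4
v=finitevar(sx,px)    %0.44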


Quiz 2.9
(1) From the problem statement, we learn that the conditional PMF of N given the event I is
P_{N|I}(n) = { 0.02 for n = 1, 2, . . . , 50; 0 otherwise }   (1)
(2) Also from the problem statement, the conditional PMF of N given the event T is
P_{N|T}(n) = { 0.2 for n = 1, 2, 3, 4, 5; 0 otherwise }   (2)
(3) The problem statement tells us that P[T] = 1 − P[I] = 3/4. From Theorem 1.10 (the law of total probability), we find the PMF of N is
P_N(n) = P_{N|T}(n) P[T] + P_{N|I}(n) P[I]   (3)
= { 0.2(0.75) + 0.02(0.25) for n = 1, 2, 3, 4, 5; 0(0.75) + 0.02(0.25) for n = 6, 7, . . . , 50; 0 otherwise }   (4)
= { 0.155 for n = 1, 2, 3, 4, 5; 0.005 for n = 6, 7, . . . , 50; 0 otherwise }   (5)
(4) First we find
P[N ≤ 10] = Σ_{n=1}^{10} P_N(n) = (0.155)(5) + (0.005)(5) = 0.80   (6)
By Theorem 2.17, the conditional PMF of N given N ≤ 10 is
P_{N|N≤10}(n) = { P_N(n)/P[N ≤ 10] for n ≤ 10; 0 otherwise }   (7)
= { 0.155/0.8 for n = 1, 2, 3, 4, 5; 0.005/0.8 for n = 6, 7, 8, 9, 10; 0 otherwise }   (8)
= { 0.19375 for n = 1, 2, 3, 4, 5; 0.00625 for n = 6, 7, 8, 9, 10; 0 otherwise }   (9)
(5) Once we have the conditional PMF, calculating conditional expectations is easy.
E[N|N ≤ 10] = Σ_n n P_{N|N≤10}(n)   (10)
= Σ_{n=1}^{5} n(0.19375) + Σ_{n=6}^{10} n(0.00625)   (11)
= 3.15625   (12)


[Figure 1: Two examples of the output of samplemean(k): (a) samplemean(100), (b) samplemean(1000).]

(6) To find the conditional variance, we first find the conditional second moment
E[N²|N ≤ 10] = Σ_n n² P_{N|N≤10}(n)   (13)
= Σ_{n=1}^{5} n²(0.19375) + Σ_{n=6}^{10} n²(0.00625)   (14)
= 55(0.19375) + 330(0.00625) = 12.71875   (15)
The conditional variance is
Var[N|N ≤ 10] = E[N²|N ≤ 10] − (E[N|N ≤ 10])²   (16)
= 12.71875 − (3.15625)² = 2.75684   (17)
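The conditional moments can also be verified numerically (a sketch):

pcond=[0.19375*ones(1,5) 0.00625*ones(1,5)];
en=sum((1:10).*pcond)             %3.15625
vn=sum(((1:10).^2).*pcond)-en^2   %approximately 2.75684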

Quiz 2.10
The function samplemean(k) generates and plots five m_n sequences for n = 1, 2, . . . , k. The ith column M(:,i) of M holds a sequence m_1, m_2, . . . , m_k.

function M=samplemean(k);
K=(1:k)';
M=zeros(k,5);
for i=1:5,
  X=duniformrv(0,10,k);
  M(:,i)=cumsum(X)./K;
end;
plot(K,M);

Examples of the function calls (a) samplemean(100) and (b) samplemean(1000) are shown in Figure 1. Each call to samplemean(k) produces a random output. What is observed in these figures is that for small n, m_n is fairly random but as n gets large,


m_n gets close to E[X] = 5. Although each sequence m_1, m_2, . . . that we generate is random, the sequence always converges to E[X]. This random convergence is analyzed in Chapter 7.


Quiz Solutions – Chapter 3

Quiz 3.1
The CDF of Y is
F_Y(y) = { 0 for y < 0; y/4 for 0 ≤ y ≤ 4; 1 for y > 4 }   (1)
[Plot of F_Y(y) omitted.]
From the CDF F_Y(y), we can calculate the probabilities:
(1) P[Y ≤ −1] = F_Y(−1) = 0
(2) P[Y ≤ 1] = F_Y(1) = 1/4
(3) P[2 < Y ≤ 3] = F_Y(3) − F_Y(2) = 3/4 − 2/4 = 1/4
(4) P[Y > 1.5] = 1 − P[Y ≤ 1.5] = 1 − F_Y(1.5) = 1 − (1.5)/4 = 5/8

Quiz 3.2
(1) First we will find the constant c and then we will sketch the PDF. To find c, we use the fact that ∫_{−∞}^{∞} f_X(x) dx = 1. We will evaluate this integral using integration by parts:
∫_{−∞}^{∞} f_X(x) dx = ∫_{0}^{∞} cx e^{−x/2} dx   (1)
= −2cx e^{−x/2} |_{0}^{∞} (which is 0) + ∫_{0}^{∞} 2c e^{−x/2} dx   (2)
= −4c e^{−x/2} |_{0}^{∞} = 4c   (3)
Thus c = 1/4 and X has the Erlang (n = 2, λ = 1/2) PDF
f_X(x) = { (x/4)e^{−x/2} for x ≥ 0; 0 otherwise }   (4)
[Plot of f_X(x) omitted.]


(2) To find the CDF F_X(x), we first note X is a nonnegative random variable so that F_X(x) = 0 for all x < 0. For x ≥ 0,
F_X(x) = ∫_{0}^{x} f_X(y) dy = ∫_{0}^{x} (y/4) e^{−y/2} dy   (5)
= −(y/2) e^{−y/2} |_{0}^{x} − ∫_{0}^{x} −(1/2) e^{−y/2} dy   (6)
= 1 − (x/2) e^{−x/2} − e^{−x/2}   (7)
The complete expression for the CDF is
F_X(x) = { 1 − (x/2 + 1) e^{−x/2} for x ≥ 0; 0 otherwise }   (8)
[Plot of F_X(x) omitted.]
(3) From the CDF F_X(x),
P[0 ≤ X ≤ 4] = F_X(4) − F_X(0) = 1 − 3e^{−2}.   (9)
(4) Similarly,
P[−2 ≤ X ≤ 2] = F_X(2) − F_X(−2) = 1 − 2e^{−1}.   (10)

Quiz 3.3
The PDF of Y is
f_Y(y) = { 3y²/2 for −1 ≤ y ≤ 1; 0 otherwise }   (1)
[Plot of f_Y(y) omitted.]
(1) The expected value of Y is
E[Y] = ∫_{−∞}^{∞} y f_Y(y) dy = ∫_{−1}^{1} (3/2)y³ dy = (3/8)y⁴ |_{−1}^{1} = 0.   (2)
Note that the above calculation wasn't really necessary because E[Y] = 0 whenever the PDF f_Y(y) is an even function (i.e., f_Y(y) = f_Y(−y)).
(2) The second moment of Y is
E[Y²] = ∫_{−∞}^{∞} y² f_Y(y) dy = ∫_{−1}^{1} (3/2)y⁴ dy = (3/10)y⁵ |_{−1}^{1} = 3/5.   (3)

16

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195

(3) The variance of Y is

Var[Y] = E

_

Y

2

_

−(E [Y])

2

= 3/5. (4)

(4) The standard deviation of Y is σ

Y

=

√

Var[Y] =

√

3/5.

Quiz 3.4

(1) When X is an exponential (λ) random variable, E[X] = 1/λ and Var[X] = 1/λ². Since E[X] = 3 and Var[X] = 9, we must have λ = 1/3. The PDF of X is

f_X(x) = { (1/3)e^{−x/3},   x ≥ 0;
           0,               otherwise }    (1)

(2) We know X is a uniform (a, b) random variable. To find a and b, we apply Theorem 3.6 to write

E[X] = (a + b)/2 = 3,   Var[X] = (b − a)²/12 = 9.   (2)

This implies

a + b = 6,   b − a = ±6√3.   (3)

The only valid solution with a < b is

a = 3 − 3√3,   b = 3 + 3√3.   (4)

The complete expression for the PDF of X is

f_X(x) = { 1/(6√3),   3 − 3√3 ≤ x < 3 + 3√3;
           0,         otherwise }    (5)

Quiz 3.5

Each of the requested probabilities can be calculated using the Φ(z) function and Table 3.1 or Q(z) and Table 3.2. We start with the sketches.

(1) The PDFs of X and Y are shown below. [Figure: f_X(x) and f_Y(y) plotted on the same axes.] The fact that Y has twice the standard deviation of X is reflected in the greater spread of f_Y(y). However, it is important to remember that as the standard deviation increases, the peak value of the Gaussian PDF goes down.

(2) Since X is Gaussian (0, 1),

P[−1 < X ≤ 1] = F_X(1) − F_X(−1)   (1)
              = Φ(1) − Φ(−1) = 2Φ(1) − 1 = 0.6826.   (2)

(3) Since Y is Gaussian (0, 2),

P[−1 < Y ≤ 1] = F_Y(1) − F_Y(−1)   (3)
              = Φ(1/σ_Y) − Φ(−1/σ_Y) = 2Φ(1/2) − 1 = 0.383.   (4)

(4) Again, since X is Gaussian (0, 1), P[X > 3.5] = Q(3.5) = 2.33 × 10^{−4}.

(5) Since Y is Gaussian (0, 2), P[Y > 3.5] = Q(3.5/2) = Q(1.75) = 1 − Φ(1.75) = 0.0401.
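These values are also easy to check numerically. The following is a minimal sketch, assuming the matcode function phi(z), used later in julytemps.m, computes the Gaussian CDF Φ(z):

%numerical check of Quiz 3.5 using phi(z)
p2=2*phi(1)-1      %P[-1 < X <= 1] = 0.6826
p3=2*phi(1/2)-1    %P[-1 < Y <= 1] = 0.383
p4=1-phi(3.5)      %P[X > 3.5] = Q(3.5) = 2.33e-4
p5=1-phi(1.75)     %P[Y > 3.5] = Q(1.75) = 0.0401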

Quiz 3.6

The CDF of X is

F_X(x) = { 0,           x < −1;
           (x + 1)/4,   −1 ≤ x < 1;
           1,           x ≥ 1 }    (1)

The following probabilities can be read directly from the CDF:

(1) P[X ≤ 1] = F_X(1) = 1.

(2) P[X < 1] = F_X(1⁻) = 1/2.

(3) P[X = 1] = F_X(1⁺) − F_X(1⁻) = 1 − 1/2 = 1/2.

(4) We find the PDF f_X(x) by taking the derivative of F_X(x). The resulting PDF is

f_X(x) = { 1/4,             −1 ≤ x < 1;
           (1/2)δ(x − 1),   x = 1;
           0,               otherwise }    (2)

Quiz 3.7

(1) Since X is always nonnegative, F_X(x) = 0 for x < 0. Also, F_X(x) = 1 for x ≥ 2 since it is always true that X ≤ 2. Lastly, for 0 ≤ x ≤ 2,

F_X(x) = ∫_{−∞}^{x} f_X(y) dy = ∫_0^x (1 − y/2) dy = x − x²/4.   (1)

The complete CDF of X is

F_X(x) = { 0,           x < 0;
           x − x²/4,    0 ≤ x ≤ 2;
           1,           x > 2 }    (2)

(2) The probability that Y = 1 is

P[Y = 1] = P[X ≥ 1] = 1 − F_X(1) = 1 − 3/4 = 1/4.   (3)

(3) Since X is nonnegative, Y is also nonnegative. Thus F_Y(y) = 0 for y < 0. Also, because Y ≤ 1, F_Y(y) = 1 for all y ≥ 1. Finally, for 0 < y < 1,

F_Y(y) = P[Y ≤ y] = P[X ≤ y] = F_X(y).   (4)

Using the CDF F_X(x), the complete expression for the CDF of Y is

F_Y(y) = { 0,           y < 0;
           y − y²/4,    0 ≤ y < 1;
           1,           y ≥ 1 }    (5)

As expected, we see that the jump in F_Y(y) at y = 1 is exactly equal to P[Y = 1].

(4) By taking the derivative of F_Y(y), we obtain the PDF f_Y(y). Note that when y < 0 or y > 1, the PDF is zero.

f_Y(y) = { 1 − y/2 + (1/4)δ(y − 1),   0 ≤ y ≤ 1;
           0,                         otherwise }    (6)

Quiz 3.8

(1) P[Y ≤ 6] = ∫_{−∞}^{6} f_Y(y) dy = ∫_0^6 (1/10) dy = 0.6.

(2) From Definition 3.15, the conditional PDF of Y given Y ≤ 6 is

f_{Y|Y≤6}(y) = { f_Y(y)/P[Y ≤ 6],   y ≤ 6;
                 0,                 otherwise }
             = { 1/6,   0 ≤ y ≤ 6;
                 0,     otherwise }    (1)

(3) The probability Y > 8 is

P[Y > 8] = ∫_8^{10} (1/10) dy = 0.2.   (2)

(4) From Definition 3.15, the conditional PDF of Y given Y > 8 is

f_{Y|Y>8}(y) = { f_Y(y)/P[Y > 8],   y > 8;
                 0,                 otherwise }
             = { 1/2,   8 < y ≤ 10;
                 0,     otherwise }    (3)

(5) From the conditional PDF f_{Y|Y≤6}(y), we can calculate the conditional expectation

E[Y|Y ≤ 6] = ∫_{−∞}^{∞} y f_{Y|Y≤6}(y) dy = ∫_0^6 (y/6) dy = 3.   (4)

(6) From the conditional PDF f_{Y|Y>8}(y), we can calculate the conditional expectation

E[Y|Y > 8] = ∫_{−∞}^{∞} y f_{Y|Y>8}(y) dy = ∫_8^{10} (y/2) dy = 9.   (5)

Quiz 3.9

A natural way to produce random variables with PDF f_{T|T>2}(t) is to generate samples of T with PDF f_T(t) and then to discard those samples which fail to satisfy the condition T > 2. Here is a MATLAB function that uses this method:

function t=t2rv(m)
i=0;lambda=1/3;
t=zeros(m,1);
while (i<m),
x=exponentialrv(lambda,1);
if (x>2)
t(i+1)=x;
i=i+1;
end
end

A second method exploits the memoryless property of the exponential random variable: if T is an exponential (λ) random variable, then T' = T + 2 has PDF f_{T'}(t) = f_{T|T>2}(t). In this case the command

t=2.0+exponentialrv(1/3,m)

generates the vector t.
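As a quick consistency check (a sketch, not part of the original code), both methods should produce samples whose average is near E[T|T > 2] = 2 + 1/λ = 5:

%compare the two methods; both sample means should be near 5
m=10000;
t1=t2rv(m);                   %rejection method
t2=2.0+exponentialrv(1/3,m);  %shifted exponential method
[mean(t1) mean(t2)]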

Quiz Solutions – Chapter 4

Quiz 4.1

Each value of the joint CDF can be found by considering the corresponding probability.

(1) F_{X,Y}(−∞, 2) = P[X ≤ −∞, Y ≤ 2] ≤ P[X ≤ −∞] = 0 since X cannot take on the value −∞.

(2) F_{X,Y}(∞, ∞) = P[X ≤ ∞, Y ≤ ∞] = 1. This result is given in Theorem 4.1.

(3) F_{X,Y}(∞, y) = P[X ≤ ∞, Y ≤ y] = P[Y ≤ y] = F_Y(y).

(4) F_{X,Y}(∞, −∞) = P[X ≤ ∞, Y ≤ −∞] = 0 since Y cannot take on the value −∞.

Quiz 4.2

From the joint PMF of Q and G given in the table, we can calculate the requested probabilities by summing the PMF over those values of Q and G that correspond to the event.

(1) The probability that Q = 0 is

P[Q = 0] = P_{Q,G}(0, 0) + P_{Q,G}(0, 1) + P_{Q,G}(0, 2) + P_{Q,G}(0, 3)   (1)
         = 0.06 + 0.18 + 0.24 + 0.12 = 0.6   (2)

(2) The probability that Q = G is

P[Q = G] = P_{Q,G}(0, 0) + P_{Q,G}(1, 1) = 0.18   (3)

(3) The probability that G > 1 is

P[G > 1] = Σ_{g=2}^{3} Σ_{q=0}^{1} P_{Q,G}(q, g)   (4)
         = 0.24 + 0.16 + 0.12 + 0.08 = 0.6   (5)

(4) The probability that G > Q is

P[G > Q] = Σ_{q=0}^{1} Σ_{g=q+1}^{3} P_{Q,G}(q, g)   (6)
         = 0.18 + 0.24 + 0.12 + 0.16 + 0.08 = 0.78   (7)

Quiz 4.3

By Theorem 4.3, the marginal PMF of H is

P_H(h) = Σ_{b=0,2,4} P_{H,B}(h, b)   (1)

For each value of h, this corresponds to calculating the row sum across the table of the joint PMF. Similarly, the marginal PMF of B is

P_B(b) = Σ_{h=−1}^{1} P_{H,B}(h, b)   (2)

For each value of b, this corresponds to the column sum down the table of the joint PMF. The easiest way to calculate these marginal PMFs is to simply sum each row and column:

P_{H,B}(h,b)   b = 0   b = 2   b = 4   P_H(h)
h = −1         0       0.4     0.2     0.6
h = 0          0.1     0       0.1     0.2
h = 1          0.1     0.1     0       0.2
P_B(b)         0.2     0.5     0.3             (3)

Quiz 4.4

To find the constant c, we apply ∫_{−∞}^{∞} ∫_{−∞}^{∞} f_{X,Y}(x, y) dx dy = 1. Specifically,

∫_{−∞}^{∞} ∫_{−∞}^{∞} f_{X,Y}(x, y) dx dy = ∫_0^2 ∫_0^1 cxy dx dy   (1)
    = c ∫_0^2 y [x²/2 |_0^1] dy   (2)
    = (c/2) ∫_0^2 y dy = (c/4)y² |_0^2 = c   (3)

Thus c = 1. To calculate P[A], we write

P[A] = ∫∫_A f_{X,Y}(x, y) dx dy   (4)

To integrate over A, we convert to polar coordinates using the substitutions x = r cos θ, y = r sin θ and dx dy = r dr dθ. [Sketch: A is the quarter disk of radius 1 in the corner of the 1 × 2 rectangle.] This yields

P[A] = ∫_0^{π/2} ∫_0^1 r² sin θ cos θ r dr dθ   (5)
     = (∫_0^1 r³ dr)(∫_0^{π/2} sin θ cos θ dθ)   (6)
     = (r⁴/4 |_0^1)(sin²θ/2 |_0^{π/2}) = 1/8   (7)

Quiz 4.5

By Theorem 4.8, the marginal PDF of X is

f_X(x) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dy   (1)

For x < 0 or x > 1, f_X(x) = 0. For 0 ≤ x ≤ 1,

f_X(x) = (6/5) ∫_0^1 (x + y²) dy = (6/5)[xy + y³/3]_{y=0}^{y=1} = (6/5)(x + 1/3) = (6x + 2)/5   (2)

The complete expression for the PDF of X is

f_X(x) = { (6x + 2)/5,   0 ≤ x ≤ 1;
           0,            otherwise }    (3)

By the same method we obtain the marginal PDF for Y. For 0 ≤ y ≤ 1,

f_Y(y) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dx   (4)
       = (6/5) ∫_0^1 (x + y²) dx = (6/5)[x²/2 + xy²]_{x=0}^{x=1} = (6/5)(1/2 + y²) = (3 + 6y²)/5   (5)

Since f_Y(y) = 0 for y < 0 or y > 1, the complete expression for the PDF of Y is

f_Y(y) = { (3 + 6y²)/5,   0 ≤ y ≤ 1;
           0,             otherwise }    (6)

Quiz 4.6

(A) The time required for the transfer is T = L/B. For each pair of values of L and B, we can calculate the time T needed for the transfer. We can write these down on the table for the joint PMF of L and B as follows:

P_{L,B}(l,b)      b = 14,400     b = 21,600     b = 28,800
l = 518,400       0.20 (T=36)    0.10 (T=24)    0.05 (T=18)
l = 2,592,000     0.05 (T=180)   0.10 (T=120)   0.20 (T=90)
l = 7,776,000     0.00 (T=540)   0.10 (T=360)   0.20 (T=270)

From the table, writing down the PMF of T is straightforward.

P_T(t) = { 0.05,   t = 18;
           0.1,    t = 24;
           0.2,    t = 36, 90;
           0.1,    t = 120;
           0.05,   t = 180;
           0.2,    t = 270;
           0.1,    t = 360;
           0,      otherwise }    (1)
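The same PMF can be extracted in MATLAB in the style of Example 6.19. The following is a minimal sketch, assuming finitepmf.m from matcode (used below in Quiz 6.9) is on the path:

sl=[518400; 2592000; 7776000];   %file lengths l
sb=[14400 21600 28800];          %transfer rates b
PLB=[0.20 0.10 0.05; 0.05 0.10 0.20; 0.00 0.10 0.20];
[SL,SB]=ndgrid(sl,sb);
ST=SL./SB;                       %T=L/B for each (l,b) pair
st=unique(ST);                   %possible transfer times
pt=finitepmf(ST,PLB,st);         %P_T(t) for each t in st
[st pt]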

(B) First, we observe that since 0 ≤ X ≤ 1 and 0 ≤ Y ≤ 1, W = XY satisfies 0 ≤ W ≤ 1. Thus F_W(0) = 0 and F_W(1) = 1. For 0 < w < 1, we calculate the CDF F_W(w) = P[W ≤ w]. Integrating directly over the region W ≤ w is fairly complex; the calculus is simpler if we integrate over the complementary region XY > w. [Sketch: the unit square with the curve XY = w; the region XY > w lies above the curve.] Specifically,

F_W(w) = 1 − P[XY > w]   (2)
       = 1 − ∫_w^1 ∫_{w/x}^1 dy dx   (3)
       = 1 − ∫_w^1 (1 − w/x) dx   (4)
       = 1 − [x − w ln x]_{x=w}^{x=1}   (5)
       = 1 − (1 − w + w ln w) = w − w ln w   (6)

The complete expression for the CDF is

F_W(w) = { 0,            w < 0;
           w − w ln w,   0 ≤ w ≤ 1;
           1,            w > 1 }    (7)

By taking the derivative of the CDF, we find the PDF is

f_W(w) = dF_W(w)/dw = { 0,        w < 0;
                        −ln w,    0 ≤ w ≤ 1;
                        0,        w > 1 }    (8)

Quiz 4.7

(A) It is helpful to first make a table that includes the marginal PMFs.

P_{L,T}(l,t)   t = 40   t = 60   P_L(l)
l = 1          0.15     0.1      0.25
l = 2          0.3      0.2      0.5
l = 3          0.15     0.1      0.25
P_T(t)         0.6      0.4

(1) The expected value of L is

E[L] = 1(0.25) + 2(0.5) + 3(0.25) = 2.   (1)

Since the second moment of L is

E[L²] = 1²(0.25) + 2²(0.5) + 3²(0.25) = 4.5,   (2)

the variance of L is

Var[L] = E[L²] − (E[L])² = 0.5.   (3)

(2) The expected value of T is

E[T] = 40(0.6) + 60(0.4) = 48.   (4)

The second moment of T is

E[T²] = 40²(0.6) + 60²(0.4) = 2400.   (5)

Thus

Var[T] = E[T²] − (E[T])² = 2400 − 48² = 96.   (6)

(3) The correlation is

E[LT] = Σ_{t=40,60} Σ_{l=1}^{3} lt P_{L,T}(l, t)   (7)
      = 1(40)(0.15) + 2(40)(0.3) + 3(40)(0.15)   (8)
      + 1(60)(0.1) + 2(60)(0.2) + 3(60)(0.1)   (9)
      = 96   (10)

(4) From Theorem 4.16(a), the covariance of L and T is

Cov[L, T] = E[LT] − E[L]E[T] = 96 − 2(48) = 0   (11)

(5) Since Cov[L, T] = 0, the correlation coefficient is ρ_{L,T} = 0.

(B) As in the discrete case, the calculations become easier if we first calculate the marginal PDFs f_X(x) and f_Y(y). For 0 ≤ x ≤ 1,

f_X(x) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dy = ∫_0^2 xy dy = (1/2)xy² |_{y=0}^{y=2} = 2x   (12)

Similarly, for 0 ≤ y ≤ 2,

f_Y(y) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dx = ∫_0^1 xy dx = (1/2)x²y |_{x=0}^{x=1} = y/2   (13)

The complete expressions for the marginal PDFs are

f_X(x) = { 2x, 0 ≤ x ≤ 1; 0 otherwise },   f_Y(y) = { y/2, 0 ≤ y ≤ 2; 0 otherwise }   (14)

From the marginal PDFs, it is straightforward to calculate the various expectations.

(1) The first and second moments of X are

E[X] = ∫_{−∞}^{∞} x f_X(x) dx = ∫_0^1 2x² dx = 2/3   (15)
E[X²] = ∫_{−∞}^{∞} x² f_X(x) dx = ∫_0^1 2x³ dx = 1/2   (16)

The variance of X is Var[X] = E[X²] − (E[X])² = 1/18.

(2) The first and second moments of Y are

E[Y] = ∫_{−∞}^{∞} y f_Y(y) dy = ∫_0^2 (1/2)y² dy = 4/3   (18)
E[Y²] = ∫_{−∞}^{∞} y² f_Y(y) dy = ∫_0^2 (1/2)y³ dy = 2   (19)

The variance of Y is Var[Y] = E[Y²] − (E[Y])² = 2 − 16/9 = 2/9.

(3) The correlation of X and Y is

E[XY] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy f_{X,Y}(x, y) dx dy   (20)
      = ∫_0^1 ∫_0^2 x²y² dy dx = (x³/3 |_0^1)(y³/3 |_0^2) = 8/9   (21)

(4) The covariance of X and Y is

Cov[X, Y] = E[XY] − E[X]E[Y] = 8/9 − (2/3)(4/3) = 0.   (22)

(5) Since Cov[X, Y] = 0, the correlation coefficient is ρ_{X,Y} = 0.

Quiz 4.8

(A) Since the event V > 80 occurs only for the pairs (L, T) = (2, 60), (L, T) = (3, 40) and (L, T) = (3, 60),

P[A] = P[V > 80] = P_{L,T}(2, 60) + P_{L,T}(3, 40) + P_{L,T}(3, 60) = 0.45   (1)

By Definition 4.9,

P_{L,T|A}(l, t) = { P_{L,T}(l, t)/P[A],   lt > 80;
                    0,                    otherwise }    (2)

We can represent this conditional PMF in the following table:

P_{L,T|A}(l,t)   t = 40   t = 60
l = 1            0        0
l = 2            0        4/9
l = 3            1/3      2/9

The conditional expectation of V can be found from the conditional PMF.

E[V|A] = Σ_l Σ_t lt P_{L,T|A}(l, t)   (3)
       = (2 · 60)(4/9) + (3 · 40)(1/3) + (3 · 60)(2/9) = 133 1/3   (4)

For the conditional variance Var[V|A], we first find the conditional second moment

E[V²|A] = Σ_l Σ_t (lt)² P_{L,T|A}(l, t)   (5)
        = (2 · 60)²(4/9) + (3 · 40)²(1/3) + (3 · 60)²(2/9) = 18,400   (6)

It follows that

Var[V|A] = E[V²|A] − (E[V|A])² = 622 2/9   (7)

(B) For continuous random variables X and Y, we first calculate the probability of the conditioning event.

P[B] = ∫∫_B f_{X,Y}(x, y) dx dy = ∫_{40}^{60} ∫_{80/y}^{3} (xy/4000) dx dy   (8)
     = ∫_{40}^{60} (y/4000)[x²/2 |_{x=80/y}^{x=3}] dy   (9)
     = ∫_{40}^{60} (y/4000)(9/2 − 3200/y²) dy   (10)
     = 9/8 − (4/5) ln(3/2) ≈ 0.801   (11)

The conditional PDF of X and Y is

f_{X,Y|B}(x, y) = { f_{X,Y}(x, y)/P[B],   (x, y) ∈ B;
                    0,                    otherwise }    (12)
                = { Kxy,   40 ≤ y ≤ 60, 80/y ≤ x ≤ 3;
                    0,     otherwise }    (13)

where K = (4000P[B])^{−1}. The conditional expectation of W given event B is

E[W|B] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy f_{X,Y|B}(x, y) dx dy   (14)
       = ∫_{40}^{60} ∫_{80/y}^{3} Kx²y² dx dy   (15)
       = (K/3) ∫_{40}^{60} y² x³ |_{x=80/y}^{x=3} dy   (16)
       = (K/3) ∫_{40}^{60} (27y² − 80³/y) dy   (17)
       = (K/3)[9y³ − 80³ ln y]_{40}^{60} ≈ 120.78   (18)

The conditional second moment of W given B is

E[W²|B] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (xy)² f_{X,Y|B}(x, y) dx dy   (19)
        = ∫_{40}^{60} ∫_{80/y}^{3} Kx³y³ dx dy   (20)
        = (K/4) ∫_{40}^{60} y³ x⁴ |_{x=80/y}^{x=3} dy   (21)
        = (K/4) ∫_{40}^{60} (81y³ − 80⁴/y) dy   (22)
        = (K/4)[(81/4)y⁴ − 80⁴ ln y]_{40}^{60} ≈ 16,116.10   (23)

It follows that the conditional variance of W given B is

Var[W|B] = E[W²|B] − (E[W|B])² ≈ 1528.30   (24)

Quiz 4.9

(A) (1) The joint PMF of A and B can be found from the marginal and conditional PMFs via P_{A,B}(a, b) = P_{B|A}(b|a)P_A(a). Incorporating the information from the given conditional PMFs can be confusing, however. Consequently, we can note that A has range S_A = {0, 2} and B has range S_B = {0, 1}. A table of the joint PMF will include all four possible combinations of A and B. The general form of the table is

P_{A,B}(a,b)   b = 0                b = 1
a = 0          P_{B|A}(0|0)P_A(0)   P_{B|A}(1|0)P_A(0)
a = 2          P_{B|A}(0|2)P_A(2)   P_{B|A}(1|2)P_A(2)

Substituting values from P_{B|A}(b|a) and P_A(a), we have

P_{A,B}(a,b)   b = 0        b = 1
a = 0          (0.8)(0.4)   (0.2)(0.4)
a = 2          (0.5)(0.6)   (0.5)(0.6)

or

P_{A,B}(a,b)   b = 0   b = 1
a = 0          0.32    0.08
a = 2          0.3     0.3

(2) Given the conditional PMF P_{B|A}(b|2), it is easy to calculate the conditional expectation

E[B|A = 2] = Σ_{b=0}^{1} b P_{B|A}(b|2) = (0)(0.5) + (1)(0.5) = 0.5   (1)

(3) From the joint PMF P_{A,B}(a, b), we can calculate the conditional PMF

P_{A|B}(a|0) = P_{A,B}(a, 0)/P_B(0) = { 0.32/0.62,   a = 0;
                                        0.3/0.62,    a = 2;
                                        0,           otherwise }    (2)
             = { 16/31,   a = 0;
                 15/31,   a = 2;
                 0,       otherwise }    (3)

(4) We can calculate the conditional variance Var[A|B = 0] using the conditional PMF P_{A|B}(a|0). First we calculate the conditional expected value

E[A|B = 0] = Σ_a a P_{A|B}(a|0) = 0(16/31) + 2(15/31) = 30/31   (4)

The conditional second moment is

E[A²|B = 0] = Σ_a a² P_{A|B}(a|0) = 0²(16/31) + 2²(15/31) = 60/31   (5)

The conditional variance is then

Var[A|B = 0] = E[A²|B = 0] − (E[A|B = 0])² = 960/961   (6)

(B) (1) The joint PDF of X and Y is

f_{X,Y}(x, y) = f_{Y|X}(y|x) f_X(x) = { 6y,   0 ≤ y ≤ x, 0 ≤ x ≤ 1;
                                        0,    otherwise }    (7)

(2) From the given conditional PDF f_{Y|X}(y|x),

f_{Y|X}(y|1/2) = { 8y,   0 ≤ y ≤ 1/2;
                   0,    otherwise }    (8)

(3) The conditional PDF of X given Y = 1/2 is f_{X|Y}(x|1/2) = f_{X,Y}(x, 1/2)/f_Y(1/2). To find f_Y(1/2), we integrate the joint PDF.

f_Y(1/2) = ∫_{−∞}^{∞} f_{X,Y}(x, 1/2) dx = ∫_{1/2}^{1} 6(1/2) dx = 3/2   (9)

Thus, for 1/2 ≤ x ≤ 1,

f_{X|Y}(x|1/2) = f_{X,Y}(x, 1/2)/f_Y(1/2) = 6(1/2)/(3/2) = 2   (10)

(4) From the previous part, we see that given Y = 1/2, the conditional PDF of X is uniform (1/2, 1). Thus, by the definition of the uniform (a, b) PDF,

Var[X|Y = 1/2] = (1 − 1/2)²/12 = 1/48   (11)

Quiz 4.10

(A) (1) For random variables X and Y from Example 4.1, we observe that P_Y(1) = 0.09 and P_X(0) = 0.01. However,

P_{X,Y}(0, 1) = 0 ≠ P_X(0) P_Y(1)   (1)

Since we have found a pair x, y such that P_{X,Y}(x, y) ≠ P_X(x)P_Y(y), we can conclude that X and Y are dependent. Note that whenever P_{X,Y}(x, y) = 0, independence requires that either P_X(x) = 0 or P_Y(y) = 0.

(2) For random variables Q and G from Quiz 4.2, it is not obvious whether they are independent. Unlike X and Y in part (a), there are no obvious pairs q, g that fail the independence requirement. In this case, we calculate the marginal PMFs from the table of the joint PMF P_{Q,G}(q, g) in Quiz 4.2.

P_{Q,G}(q,g)   g = 0   g = 1   g = 2   g = 3   P_Q(q)
q = 0          0.06    0.18    0.24    0.12    0.60
q = 1          0.04    0.12    0.16    0.08    0.40
P_G(g)         0.10    0.30    0.40    0.20

Careful study of the table will verify that P_{Q,G}(q, g) = P_Q(q)P_G(g) for every pair q, g. Hence Q and G are independent.

(B) (1) Since X₁ and X₂ are independent,

f_{X₁,X₂}(x₁, x₂) = f_{X₁}(x₁) f_{X₂}(x₂)   (2)
                  = { (1 − x₁/2)(1 − x₂/2),   0 ≤ x₁ ≤ 2, 0 ≤ x₂ ≤ 2;
                      0,                      otherwise }    (3)

(2) Let F_X(x) denote the CDF of both X₁ and X₂. The CDF of Z = max(X₁, X₂) is found by observing that Z ≤ z iff X₁ ≤ z and X₂ ≤ z. That is,

P[Z ≤ z] = P[X₁ ≤ z, X₂ ≤ z]   (4)
         = P[X₁ ≤ z]P[X₂ ≤ z] = [F_X(z)]²   (5)

To complete the problem, we need to find the CDF of each Xᵢ. From the PDF f_X(x), the CDF is

F_X(x) = ∫_{−∞}^{x} f_X(y) dy = { 0,           x < 0;
                                  x − x²/4,    0 ≤ x ≤ 2;
                                  1,           x > 2 }    (6)

Thus for 0 ≤ z ≤ 2,

F_Z(z) = (z − z²/4)²   (7)

The complete expression for the CDF of Z is

F_Z(z) = { 0,              z < 0;
           (z − z²/4)²,    0 ≤ z ≤ 2;
           1,              z > 2 }    (8)

Quiz 4.11

This problem just requires identifying the various terms in Definition 4.17 and Theorem 4.29. Specifically, from the problem statement, we know that ρ = 1/2,

μ₁ = μ_X = 0,   μ₂ = μ_Y = 0,   (1)

and that

σ₁ = σ_X = 1,   σ₂ = σ_Y = 1.   (2)

(1) Applying these facts to Definition 4.17, we have

f_{X,Y}(x, y) = (1/√(3π²)) e^{−2(x² − xy + y²)/3}.   (3)

(2) By Theorem 4.30, the conditional expected value and standard deviation of X given Y = y are

E[X|Y = y] = y/2,   σ̃_X = √(σ₁²(1 − ρ²)) = √(3/4).   (4)

When Y = y = 2, we see that E[X|Y = 2] = 1 and Var[X|Y = 2] = 3/4. The conditional PDF of X given Y = 2 is simply the Gaussian PDF

f_{X|Y}(x|2) = (1/√(3π/2)) e^{−2(x−1)²/3}.   (5)

Quiz 4.12

One straightforward method is to follow the approach of Example 4.28. Instead, we use an alternate approach. First we observe that X has the discrete uniform (1, 4) PMF. Also, given X = x, Y has a discrete uniform (1, x) PMF. That is,

P_X(x) = { 1/4, x = 1, 2, 3, 4; 0 otherwise },   P_{Y|X}(y|x) = { 1/x, y = 1, ..., x; 0 otherwise }   (1)

Given X = x, and an independent uniform (0, 1) random variable U, we can generate a sample value of Y with a discrete uniform (1, x) PMF via Y = ⌈xU⌉. This observation prompts the following program:

function xy=dtrianglerv(m)
sx=[1;2;3;4];
px=0.25*ones(4,1);
x=finiterv(sx,px,m);
y=ceil(x.*rand(m,1));
xy=[x';y'];

Quiz Solutions – Chapter 5

Quiz 5.1

We find P[C] by integrating the joint PDF over the region of interest. Specifically,

P[C] = ∫_0^{1/2} dy₂ ∫_0^{y₂} dy₁ ∫_0^{1/2} dy₄ ∫_0^{y₄} 4 dy₃   (1)
     = 4 (∫_0^{1/2} y₂ dy₂)(∫_0^{1/2} y₄ dy₄) = 1/16.   (2)

Quiz 5.2

By definition of A, Y₁ = X₁, Y₂ = X₂ − X₁ and Y₃ = X₃ − X₂. Since 0 < X₁ < X₂ < X₃, each Yᵢ must be a strictly positive integer. Thus, for y₁, y₂, y₃ ∈ {1, 2, ...},

P_Y(y) = P[Y₁ = y₁, Y₂ = y₂, Y₃ = y₃]   (1)
       = P[X₁ = y₁, X₂ − X₁ = y₂, X₃ − X₂ = y₃]   (2)
       = P[X₁ = y₁, X₂ = y₂ + y₁, X₃ = y₃ + y₂ + y₁]   (3)
       = (1 − p)³ p^{y₁+y₂+y₃}   (4)

By defining the vector a = [1 1 1]′, the complete expression for the joint PMF of Y is

P_Y(y) = { (1 − p)³ p^{a′y},   y₁, y₂, y₃ ∈ {1, 2, ...};
           0,                  otherwise }    (5)

Quiz 5.3

First we note that each marginal PDF is nonzero only if the corresponding subset of the xᵢ obeys the ordering constraints 0 ≤ x₁ ≤ x₂ ≤ x₃ ≤ 1. Within these constraints, we have

f_{X₁,X₂}(x₁, x₂) = ∫_{−∞}^{∞} f_X(x) dx₃ = ∫_{x₂}^{1} 6 dx₃ = 6(1 − x₂),   (1)
f_{X₂,X₃}(x₂, x₃) = ∫_{−∞}^{∞} f_X(x) dx₁ = ∫_{0}^{x₂} 6 dx₁ = 6x₂,   (2)
f_{X₁,X₃}(x₁, x₃) = ∫_{−∞}^{∞} f_X(x) dx₂ = ∫_{x₁}^{x₃} 6 dx₂ = 6(x₃ − x₁).   (3)

In particular, we must keep in mind that f_{X₁,X₂}(x₁, x₂) = 0 unless 0 ≤ x₁ ≤ x₂ ≤ 1, f_{X₂,X₃}(x₂, x₃) = 0 unless 0 ≤ x₂ ≤ x₃ ≤ 1, and that f_{X₁,X₃}(x₁, x₃) = 0 unless 0 ≤ x₁ ≤ x₃ ≤ 1. The complete expressions are

f_{X₁,X₂}(x₁, x₂) = { 6(1 − x₂),   0 ≤ x₁ ≤ x₂ ≤ 1;
                      0,           otherwise }    (4)

f_{X₂,X₃}(x₂, x₃) = { 6x₂,   0 ≤ x₂ ≤ x₃ ≤ 1;
                      0,     otherwise }    (5)

f_{X₁,X₃}(x₁, x₃) = { 6(x₃ − x₁),   0 ≤ x₁ ≤ x₃ ≤ 1;
                      0,            otherwise }    (6)

Now we can find the marginal PDFs. When 0 ≤ xᵢ ≤ 1 for each xᵢ,

f_{X₁}(x₁) = ∫_{−∞}^{∞} f_{X₁,X₂}(x₁, x₂) dx₂ = ∫_{x₁}^{1} 6(1 − x₂) dx₂ = 3(1 − x₁)²   (7)
f_{X₂}(x₂) = ∫_{−∞}^{∞} f_{X₂,X₃}(x₂, x₃) dx₃ = ∫_{x₂}^{1} 6x₂ dx₃ = 6x₂(1 − x₂)   (8)
f_{X₃}(x₃) = ∫_{−∞}^{∞} f_{X₂,X₃}(x₂, x₃) dx₂ = ∫_{0}^{x₃} 6x₂ dx₂ = 3x₃²   (9)

The complete expressions are

f_{X₁}(x₁) = { 3(1 − x₁)²,   0 ≤ x₁ ≤ 1; 0 otherwise }   (10)
f_{X₂}(x₂) = { 6x₂(1 − x₂),  0 ≤ x₂ ≤ 1; 0 otherwise }   (11)
f_{X₃}(x₃) = { 3x₃²,         0 ≤ x₃ ≤ 1; 0 otherwise }   (12)

Quiz 5.4

In the PDF f_Y(y), the components have dependencies as a result of the ordering constraints Y₁ ≤ Y₂ and Y₃ ≤ Y₄. We can separate these constraints by creating the vectors

V = [Y₁; Y₂],   W = [Y₃; Y₄].   (1)

The joint PDF of V and W is

f_{V,W}(v, w) = { 4,   0 ≤ v₁ ≤ v₂ ≤ 1, 0 ≤ w₁ ≤ w₂ ≤ 1;
                  0,   otherwise }    (2)

We must verify that V and W are independent. For 0 ≤ v₁ ≤ v₂ ≤ 1,

f_V(v) = ∫∫ f_{V,W}(v, w) dw₁ dw₂   (3)
       = ∫_0^1 (∫_{w₁}^{1} 4 dw₂) dw₁   (4)
       = ∫_0^1 4(1 − w₁) dw₁ = 2   (5)

Similarly, for 0 ≤ w₁ ≤ w₂ ≤ 1,

f_W(w) = ∫∫ f_{V,W}(v, w) dv₁ dv₂   (6)
       = ∫_0^1 (∫_{v₁}^{1} 4 dv₂) dv₁ = 2   (7)

It follows that V and W have PDFs

f_V(v) = { 2, 0 ≤ v₁ ≤ v₂ ≤ 1; 0 otherwise },   f_W(w) = { 2, 0 ≤ w₁ ≤ w₂ ≤ 1; 0 otherwise }   (8)

It is easy to verify that f_{V,W}(v, w) = f_V(v) f_W(w), confirming that V and W are independent vectors.

Quiz 5.5

(A) Referring to Theorem 1.19, each test is a subexperiment with three possible outcomes: L, A and R. In five trials, the vector X = [X₁ X₂ X₃]′ indicating the number of outcomes of each subexperiment has the multinomial PMF

P_X(x) = { (5!/(x₁!x₂!x₃!))(0.3)^{x₁}(0.6)^{x₂}(0.1)^{x₃},   x₁ + x₂ + x₃ = 5, x₁, x₂, x₃ ∈ {0, 1, ..., 5};
           0,                                                otherwise }    (1)

We can find the marginal PMF for each Xᵢ from the joint PMF P_X(x); however, it is simpler to just start from first principles and observe that X₁ is the number of occurrences of L in five independent tests. If we view each test as a trial with success probability P[L] = 0.3, we see that X₁ is a binomial (n, p) = (5, 0.3) random variable. Similarly, X₂ is a binomial (5, 0.6) random variable and X₃ is a binomial (5, 0.1) random variable. That is, for p₁ = 0.3, p₂ = 0.6 and p₃ = 0.1,

P_{Xᵢ}(x) = { C(5, x) pᵢˣ(1 − pᵢ)^{5−x},   x = 0, 1, ..., 5;
              0,                           otherwise }    (2)

From the marginal PMFs, we see that X₁, X₂ and X₃ are not independent. Hence, we must use Theorem 5.6 to find the PMF of W. In particular, since X₁ + X₂ + X₃ = 5 and since each Xᵢ is non-negative, P_W(0) = P_W(1) = 0. Furthermore,

P_W(2) = P_X(1, 2, 2) + P_X(2, 1, 2) + P_X(2, 2, 1)   (3)
       = (5!/(2!2!1!))[0.3(0.6)²(0.1)² + 0.3²(0.6)(0.1)² + 0.3²(0.6)²(0.1)]   (4)
       = 0.1458   (5)

In addition, for w = 3, w = 4, and w = 5, the event W = w occurs if and only if one of the mutually exclusive events X₁ = w, X₂ = w, or X₃ = w occurs. Thus,

P_W(3) = P_{X₁}(3) + P_{X₂}(3) + P_{X₃}(3) = 0.486   (6)
P_W(4) = P_{X₁}(4) + P_{X₂}(4) + P_{X₃}(4) = 0.288   (7)
P_W(5) = P_{X₁}(5) + P_{X₂}(5) + P_{X₃}(5) = 0.0802   (8)
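A brute-force numerical check of part (A) is straightforward, since the solution above treats W as the maximum of X₁, X₂ and X₃. The following sketch enumerates the multinomial PMF directly:

%enumerate P_X(x) and accumulate the PMF of W=max(X1,X2,X3)
pw=zeros(6,1);              %pw(w+1) will hold P_W(w)
for x1=0:5
for x2=0:(5-x1)
x3=5-x1-x2;
p=factorial(5)/(factorial(x1)*factorial(x2)*factorial(x3)) ...
   *(0.3^x1)*(0.6^x2)*(0.1^x3);
w=max([x1 x2 x3]);
pw(w+1)=pw(w+1)+p;
end
end
pw(3:6)'   %P_W(2)...P_W(5): 0.1458 0.4860 0.2880 0.0802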

(B) Since each Yᵢ = 2Xᵢ + 4, we can apply Theorem 5.10 to write

f_Y(y) = (1/2³) f_X((y₁ − 4)/2, (y₂ − 4)/2, (y₃ − 4)/2)   (9)
       = { (1/8)e^{−(y₃−4)/2},   4 ≤ y₁ ≤ y₂ ≤ y₃;
           0,                    otherwise }    (10)

Note that for other matrices A, the constraints on y resulting from the constraints 0 ≤ X₁ ≤ X₂ ≤ X₃ can be much more complicated.

Quiz 5.6

We start by finding the components E[Xᵢ] = ∫_{−∞}^{∞} x f_{Xᵢ}(x) dx of μ_X. To do so, we use the marginal PDFs f_{Xᵢ}(x) found in Quiz 5.3:

E[X₁] = ∫_0^1 3x(1 − x)² dx = 1/4,   (1)
E[X₂] = ∫_0^1 6x²(1 − x) dx = 1/2,   (2)
E[X₃] = ∫_0^1 3x³ dx = 3/4.   (3)

To find the correlation matrix R_X, we need to find E[XᵢXⱼ] for all i and j. We start with the second moments:

E[X₁²] = ∫_0^1 3x²(1 − x)² dx = 1/10.   (4)
E[X₂²] = ∫_0^1 6x³(1 − x) dx = 3/10.   (5)
E[X₃²] = ∫_0^1 3x⁴ dx = 3/5.   (6)

Using the pairwise marginal PDFs from Quiz 5.3, the cross terms are

E[X₁X₂] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x₁x₂ f_{X₁,X₂}(x₁, x₂) dx₁ dx₂   (7)
        = ∫_0^1 (∫_{x₁}^{1} 6x₁x₂(1 − x₂) dx₂) dx₁   (8)
        = ∫_0^1 [x₁ − 3x₁³ + 2x₁⁴] dx₁ = 3/20.   (9)
E[X₂X₃] = ∫_0^1 ∫_{x₂}^{1} 6x₂²x₃ dx₃ dx₂   (10)
        = ∫_0^1 [3x₂² − 3x₂⁴] dx₂ = 2/5.   (11)
E[X₁X₃] = ∫_0^1 ∫_{x₁}^{1} 6x₁x₃(x₃ − x₁) dx₃ dx₁   (12)
        = ∫_0^1 [(2x₁x₃³ − 3x₁²x₃²) |_{x₃=x₁}^{x₃=1}] dx₁   (13)
        = ∫_0^1 [2x₁ − 3x₁² + x₁⁴] dx₁ = 1/5.   (14)

Summarizing the results, X has correlation matrix

R_X = [1/10  3/20  1/5
       3/20  3/10  2/5
       1/5   2/5   3/5].   (15)

Vector X has covariance matrix

C_X = R_X − E[X]E[X]′   (16)
    = R_X − [1/4; 1/2; 3/4][1/4 1/2 3/4]   (17)
    = R_X − [1/16  1/8   3/16
             1/8   1/4   3/8
             3/16  3/8   9/16] = (1/80)[3 2 1; 2 4 2; 1 2 3].   (18)

This problem shows that even for fairly simple joint PDFs, computing the covariance matrix by calculus can be a time consuming task.
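A Monte Carlo cross-check is much less work. The following sketch uses the fact that the PDF f_X(x) = 6 on 0 ≤ x₁ ≤ x₂ ≤ x₃ ≤ 1 from Quiz 5.3 is the joint PDF of the order statistics of three independent uniform (0, 1) samples, so sorting uniform triples generates X:

%Monte Carlo estimate of the correlation matrix R_X
m=100000;
X=sort(rand(3,m));   %each column is an ordered uniform triple
RX=(X*X')/m          %compare with the exact R_X in (15)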

Quiz 5.7

We observe that X = AZ + b where

A = [2 1; 1 −1],   b = [2; 0].   (1)

It follows from Theorem 5.18 that μ_X = b and that

C_X = AA′ = [2 1; 1 −1][2 1; 1 −1]′ = [5 1; 1 2].   (2)
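This is easy to confirm by simulation. A minimal sketch using only built-in MATLAB functions:

%empirical check of mu_X=b and C_X=A*A'
m=100000;
A=[2 1; 1 -1]; b=[2; 0];
Z=randn(2,m);            %columns are iid Gaussian (0,I) vectors
X=A*Z+b*ones(1,m);
mean(X,2)                %approximately [2; 0]
cov(X')                  %approximately [5 1; 1 2]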

Quiz 5.8

First, we observe that Y = AT where A = [1/31 1/31 ... 1/31]. Since T is a Gaussian random vector, Theorem 5.16 tells us that Y is a 1 dimensional Gaussian vector, i.e., just a Gaussian random variable. The expected value of Y is μ_Y = Aμ_T = 80. The covariance matrix of Y is 1 × 1 and is just equal to Var[Y]. Thus, by Theorem 5.16, Var[Y] = AC_T A′.

function p=julytemps(T);
[D1 D2]=ndgrid((1:31),(1:31));
CT=36./(1+abs(D1-D2));
A=ones(31,1)/31.0;
CY=(A')*CT*A;
p=phi((T-80)/sqrt(CY));

In julytemps.m, the first two lines generate the 31 × 31 covariance matrix CT, or C_T. Next we calculate Var[Y]. The final step is to use the Φ(·) function to calculate P[Y < T].

Here is the output of julytemps.m:

>> julytemps([70 75 80 85 90 95])
ans =
0.0000 0.0221 0.5000 0.9779 1.0000 1.0000

Note that P[Y ≤ 70] is not actually zero and that P[Y ≤ 90] is not actually 1.0000. It is just that MATLAB's short format output, invoked with the command format short, rounds off those probabilities. Here is the long format output:

>> format long
>> julytemps([70 75 80 85 90 95])
ans =
Columns 1 through 4
0.00002844263128 0.02207383067604 0.50000000000000 0.97792616932396
Columns 5 through 6
0.99997155736872 0.99999999922010

The ndgrid function is a useful way to calculate many covariance matrices. However, in this problem, C_T has a special structure; the i, jth element is

C_T(i, j) = c_{|i−j|} = 36/(1 + |i − j|).   (1)

If we write out the elements of the covariance matrix, we see that

C_T = [c₀   c₁   ...  c₃₀
       c₁   c₀   ...  ...
       ...  ...  ...  c₁
       c₃₀  ...  c₁   c₀].   (2)

This covariance matrix is known as a symmetric Toeplitz matrix. We will see in Chapters 9 and 11 that Toeplitz covariance matrices are quite common. In fact, MATLAB has a toeplitz function for generating them. The function julytemps2 uses toeplitz to generate the covariance matrix C_T:

function p=julytemps2(T);
c=36./(1+abs(0:30));
CT=toeplitz(c);
A=ones(31,1)/31.0;
CY=(A')*CT*A;
p=phi((T-80)/sqrt(CY));


Quiz Solutions – Chapter 6

Quiz 6.1

Let K₁, ..., Kₙ denote a sequence of iid random variables each with PMF

P_K(k) = { 1/4,   k = 1, ..., 4;
           0,     otherwise }    (1)

We can write Wₙ in the form Wₙ = K₁ + ... + Kₙ. First, we note that the first two moments of Kᵢ are

E[Kᵢ] = (1 + 2 + 3 + 4)/4 = 2.5   (2)
E[Kᵢ²] = (1² + 2² + 3² + 4²)/4 = 7.5   (3)

Thus the variance of Kᵢ is

Var[Kᵢ] = E[Kᵢ²] − (E[Kᵢ])² = 7.5 − (2.5)² = 1.25   (4)

Since E[Kᵢ] = 2.5, the expected value of Wₙ is

E[Wₙ] = E[K₁] + ... + E[Kₙ] = nE[Kᵢ] = 2.5n   (5)

Since the rolls are independent, the random variables K₁, ..., Kₙ are independent. Hence, by Theorem 6.3, the variance of the sum equals the sum of the variances. That is,

Var[Wₙ] = Var[K₁] + ... + Var[Kₙ] = 1.25n   (6)

Quiz 6.2

Random variables X and Y have PDFs

f_X(x) = { 3e^{−3x}, x ≥ 0; 0 otherwise },   f_Y(y) = { 2e^{−2y}, y ≥ 0; 0 otherwise }   (1)

Since X and Y are nonnegative, W = X + Y is nonnegative. By Theorem 6.5, the PDF of W = X + Y is

f_W(w) = ∫_{−∞}^{∞} f_X(w − y) f_Y(y) dy = 6 ∫_0^w e^{−3(w−y)} e^{−2y} dy   (2)

Fortunately, this integral is easy to evaluate. For w > 0,

f_W(w) = 6e^{−3w} e^y |_0^w = 6(e^{−2w} − e^{−3w})   (3)

Since f_W(w) = 0 for w < 0, a complete expression for the PDF of W is

f_W(w) = { 6e^{−2w}(1 − e^{−w}),   w ≥ 0;
           0,                      otherwise }    (4)


Quiz 6.3

The MGF of K is

φ_K(s) = E[e^{sK}] = Σ_{k=0}^{4} (0.2)e^{sk} = 0.2(1 + e^s + e^{2s} + e^{3s} + e^{4s})   (1)

We find the moments by taking derivatives. The first derivative of φ_K(s) is

dφ_K(s)/ds = 0.2(e^s + 2e^{2s} + 3e^{3s} + 4e^{4s})   (2)

Evaluating the derivative at s = 0 yields

E[K] = dφ_K(s)/ds |_{s=0} = 0.2(1 + 2 + 3 + 4) = 2   (3)

To find higher-order moments, we continue to take derivatives:

E[K²] = d²φ_K(s)/ds² |_{s=0} = 0.2(e^s + 4e^{2s} + 9e^{3s} + 16e^{4s}) |_{s=0} = 6   (4)
E[K³] = d³φ_K(s)/ds³ |_{s=0} = 0.2(e^s + 8e^{2s} + 27e^{3s} + 64e^{4s}) |_{s=0} = 20   (5)
E[K⁴] = d⁴φ_K(s)/ds⁴ |_{s=0} = 0.2(e^s + 16e^{2s} + 81e^{3s} + 256e^{4s}) |_{s=0} = 70.8   (6)
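Since K is a finite random variable, each moment can also be computed directly from the PMF, which gives a quick check of the MGF calculus. A minimal sketch:

%moments of the discrete uniform (0,4) random variable K
k=(0:4)'; pk=0.2*ones(5,1);
EK=k'*pk         %E[K] = 2
EK2=(k.^2)'*pk   %E[K^2] = 6
EK3=(k.^3)'*pk   %E[K^3] = 20
EK4=(k.^4)'*pk   %E[K^4] = 70.8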

Quiz 6.4

(A) Each Kᵢ has MGF

φ_K(s) = E[e^{sKᵢ}] = (e^s + e^{2s} + ... + e^{ns})/n = e^s(1 − e^{ns})/(n(1 − e^s))   (1)

Since the sequence of Kᵢ is independent, Theorem 6.8 says the MGF of J is

φ_J(s) = (φ_K(s))^m = e^{ms}(1 − e^{ns})^m/(n^m(1 − e^s)^m)   (2)

(B) Since the αʲXⱼ are independent Gaussian random variables, Theorem 6.10 says that W is a Gaussian random variable. Thus to find the PDF of W, we need only find the expected value and variance. Since the expectation of the sum equals the sum of the expectations,

E[W] = αE[X₁] + α²E[X₂] + ... + αⁿE[Xₙ] = 0   (3)

Since the αʲXⱼ are independent, the variance of the sum equals the sum of the variances:

Var[W] = α²Var[X₁] + α⁴Var[X₂] + ... + α^{2n}Var[Xₙ]   (4)
       = α² + 2(α²)² + 3(α²)³ + ... + n(α²)ⁿ   (5)

Defining q = α², we can use Math Fact B.6 to write

Var[W] = (α² − α^{2n+2}[1 + n(1 − α²)])/(1 − α²)²   (6)

With E[W] = 0 and σ_W² = Var[W], we can write the PDF of W as

f_W(w) = (1/√(2πσ_W²)) e^{−w²/(2σ_W²)}   (7)

Quiz 6.5

(1) From Table 6.1, each Xᵢ has MGF φ_X(s) and random variable N has MGF φ_N(s) where

φ_X(s) = 1/(1 − s),   φ_N(s) = (1/5)e^s/(1 − (4/5)e^s).   (1)

From Theorem 6.12, R has MGF

φ_R(s) = φ_N(ln φ_X(s)) = (1/5)φ_X(s)/(1 − (4/5)φ_X(s))   (2)

Substituting the expression for φ_X(s) yields

φ_R(s) = (1/5)/((1/5) − s).   (3)

(2) From Table 6.1, we see that R has the MGF of an exponential (1/5) random variable. The corresponding PDF is

f_R(r) = { (1/5)e^{−r/5},   r ≥ 0;
           0,               otherwise }    (4)

This quiz is an example of the general result that a geometric sum of exponential random variables is an exponential random variable.


Quiz 6.6

(1) The expected access time is

E[X] = ∫_{−∞}^{∞} x f_X(x) dx = ∫_0^{12} (x/12) dx = 6 msec   (1)

(2) The second moment of the access time is

E[X²] = ∫_{−∞}^{∞} x² f_X(x) dx = ∫_0^{12} (x²/12) dx = 48   (2)

The variance of the access time is Var[X] = E[X²] − (E[X])² = 48 − 36 = 12.

(3) Using Xᵢ to denote the access time of block i, we can write

A = X₁ + X₂ + ... + X₁₂   (3)

Since the expectation of the sum equals the sum of the expectations,

E[A] = E[X₁] + ... + E[X₁₂] = 12E[X] = 72 msec   (4)

(4) Since the Xᵢ are independent,

Var[A] = Var[X₁] + ... + Var[X₁₂] = 12 Var[X] = 144   (5)

Hence, the standard deviation of A is σ_A = 12.

(5) To use the central limit theorem, we write

P[A > 75] = 1 − P[A ≤ 75]   (6)
          = 1 − P[(A − E[A])/σ_A ≤ (75 − E[A])/σ_A]   (7)
          ≈ 1 − Φ((75 − 72)/12)   (8)
          = 1 − 0.5987 = 0.4013   (9)

Note that we used Table 3.1 to look up Φ(0.25).

(6) Once again, we use the central limit theorem and Table 3.1 to estimate

P[A < 48] = P[(A − E[A])/σ_A < (48 − E[A])/σ_A]   (10)
          ≈ Φ((48 − 72)/12)   (11)
          = 1 − Φ(2) = 1 − 0.9773 = 0.0227   (12)
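Both central limit theorem estimates are easy to compare against a direct simulation of the total access time. A minimal sketch using only built-in functions:

%simulate A = sum of 12 uniform (0,12) access times
m=100000;
X=12*rand(12,m);            %access times, one column per trial
A=sum(X,1);
[sum(A>75)/m sum(A<48)/m]   %compare with 0.4013 and 0.0227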


Quiz 6.7

Random variable Kₙ has a binomial distribution for n trials and success probability P[V] = 3/4.

(1) The expected number of voice calls out of 48 calls is E[K₄₈] = 48P[V] = 36.

(2) The variance of K₄₈ is

Var[K₄₈] = 48P[V](1 − P[V]) = 48(3/4)(1/4) = 9   (1)

Thus K₄₈ has standard deviation σ_{K₄₈} = 3.

(3) Using the ordinary central limit theorem and Table 3.1 yields

P[30 ≤ K₄₈ ≤ 42] ≈ Φ((42 − 36)/3) − Φ((30 − 36)/3) = Φ(2) − Φ(−2)   (2)

Recalling that Φ(−x) = 1 − Φ(x), we have

P[30 ≤ K₄₈ ≤ 42] ≈ 2Φ(2) − 1 = 0.9545   (3)

(4) Since K₄₈ is a discrete random variable, we can use the De Moivre-Laplace approximation to estimate

P[30 ≤ K₄₈ ≤ 42] ≈ Φ((42 + 0.5 − 36)/3) − Φ((30 − 0.5 − 36)/3)   (4)
                 = 2Φ(2.16666) − 1 = 0.9687   (5)
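Because K₄₈ is binomial, the exact probability is also available, assuming binomialpmf.m from matcode (used below in Quiz 6.9) is on the path:

%exact P[30 <= K48 <= 42] from the binomial PMF
pk=binomialpmf(48,0.75,30:42);
sum(pk)   %compare with the two Gaussian approximations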

Quiz 6.8

The train interarrival times X₁, X₂, X₃ are iid exponential (λ) random variables. The arrival time of the third train is

W = X₁ + X₂ + X₃.   (1)

In Theorem 6.11, we found that the sum of three iid exponential (λ) random variables is an Erlang (n = 3, λ) random variable. From Appendix A, we find that W has expected value and variance

E[W] = 3/λ = 6,   Var[W] = 3/λ² = 12   (2)

(1) By the Central Limit Theorem,

P[W > 20] = P[(W − 6)/√12 > (20 − 6)/√12] ≈ Q(7/√3) = 2.66 × 10^{−5}   (3)

(2) To use the Chernoff bound, we note that the MGF of W is

φ_W(s) = (λ/(λ − s))³ = 1/(1 − 2s)³   (4)

The Chernoff bound states that

P[W > 20] ≤ min_{s≥0} e^{−20s} φ_W(s) = min_{s≥0} e^{−20s}/(1 − 2s)³   (5)

To minimize h(s) = e^{−20s}/(1 − 2s)³, we set the derivative of h(s) to zero:

dh(s)/ds = [−20(1 − 2s)³e^{−20s} + 6e^{−20s}(1 − 2s)²]/(1 − 2s)⁶ = 0   (6)

This implies 20(1 − 2s) = 6 or s = 7/20. Applying s = 7/20 to the Chernoff bound yields

P[W > 20] ≤ e^{−20s}/(1 − 2s)³ |_{s=7/20} = (10/3)³ e^{−7} = 0.0338   (7)

(3) Theorem 3.11 says that for any w > 0, the CDF of the Erlang (λ, 3) random variable W satisfies

F_W(w) = 1 − Σ_{k=0}^{2} (λw)^k e^{−λw}/k!   (8)

Equivalently, for λ = 1/2 and w = 20,

P[W > 20] = 1 − F_W(20)   (9)
          = e^{−10}(1 + 10/1! + 10²/2!) = 61e^{−10} = 0.0028   (10)

Although the Chernoff bound is relatively weak in that it overestimates the probability by roughly a factor of 12, it is a valid bound. By contrast, the Central Limit Theorem approximation grossly underestimates the true probability.
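The three answers, plus a simulation estimate, can be compared side by side. A sketch, assuming exponentialrv.m and phi.m from matcode are on the path:

%P[W>20] for W Erlang (3, 1/2): exact, bound, approximation
pexact=61*exp(-10)           %exact: 0.0028
pchernoff=(10/3)^3*exp(-7)   %Chernoff bound: 0.0338
pclt=1-phi(7/sqrt(3))        %CLT approximation: 2.66e-5
m=1000000;                   %simulation estimate
W=sum(reshape(exponentialrv(0.5,3*m),3,m),1);
psim=sum(W>20)/m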

Quiz 6.9

One solution to this problem is to follow the approach of Example 6.19:

%unifbinom100.m
sx=0:100;sy=0:100;
px=binomialpmf(100,0.5,sx); py=duniformpmf(0,100,sy);
[SX,SY]=ndgrid(sx,sy); [PX,PY]=ndgrid(px,py);
SW=SX+SY; PW=PX.*PY;
sw=unique(SW); pw=finitepmf(SW,PW,sw);
pmfplot(sw,pw,'\itw','\itP_W(w)');

A graph of the PMF P_W(w) appears in Figure 2. With some thought, it should be apparent that the finitepmf function is implementing the convolution of the two PMFs.

[Figure 2 (plot omitted): From Quiz 6.9, the PMF P_W(w) of the independent sum of a binomial (100, 0.5) random variable and a discrete uniform (0, 100) random variable.]


Quiz Solutions – Chapter 7

Quiz 7.1

An exponential random variable with expected value 1 also has variance 1. By Theorem 7.1, M_n(X) has variance Var[M_n(X)] = 1/n. Hence, we need n = 100 samples.

Quiz 7.2

The arrival time of the third elevator is W = X₁ + X₂ + X₃. Since each Xᵢ is uniform (0, 30),

E[Xᵢ] = 15,   Var[Xᵢ] = (30 − 0)²/12 = 75.   (1)

Thus E[W] = 3E[Xᵢ] = 45, and Var[W] = 3 Var[Xᵢ] = 225.

(1) By the Markov inequality,

P[W > 75] ≤ E[W]/75 = 45/75 = 3/5   (2)

(2) By the Chebyshev inequality,

P[W > 75] = P[W − E[W] > 30]   (3)
          ≤ P[|W − E[W]| > 30] ≤ Var[W]/30² = 225/900 = 1/4   (4)
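Both inequalities are loose here, which a quick simulation makes plain. A minimal sketch using only built-in functions:

%simulate W = sum of three uniform (0,30) interarrival times
m=100000;
W=sum(30*rand(3,m),1);
sum(W>75)/m   %about 0.02, far below the bounds 3/5 and 1/4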

Quiz 7.3

Define the random variable W = (X − μ_X)². Observe that V₁₀₀(X) = M₁₀₀(W). By Theorem 7.6, the mean square error is

E[(M₁₀₀(W) − μ_W)²] = Var[W]/100   (1)

Observe that μ_X = 0 so that W = X². Thus,

μ_W = E[X²] = ∫_{−1}^{1} x² f_X(x) dx = 1/3   (2)
E[W²] = E[X⁴] = ∫_{−1}^{1} x⁴ f_X(x) dx = 1/5   (3)

Therefore Var[W] = E[W²] − μ_W² = 1/5 − (1/3)² = 4/45 and the mean square error is 4/4500 = 0.000889.


Quiz 7.4

Assuming the number n of samples is large, we can use a Gaussian approximation for M_n(X). Since E[X] = p and Var[X] = p(1 − p), we apply Theorem 7.13, which says that the interval estimate

M_n(X) − c ≤ p ≤ M_n(X) + c   (1)

has confidence coefficient 1 − α where

α = 2 − 2Φ(c√n / (p(1 − p))).   (2)

We must ensure for every value of p that 1 − α ≥ 0.9, or α ≤ 0.1. Equivalently, we must have

Φ(c√n / (p(1 − p))) ≥ 0.95   (3)

for every value of p. Since Φ(x) is an increasing function of x, we must satisfy c√n ≥ 1.65p(1 − p). Since p(1 − p) ≤ 1/4 for all p, we require that

c ≥ 1.65/(4√n) = 0.41/√n.   (4)

The 0.9 confidence interval estimate of p is

M_n(X) − 0.41/√n ≤ p ≤ M_n(X) + 0.41/√n.   (5)

For the 0.99 confidence interval, we have α ≤ 0.01, implying Φ(c√n / (p(1 − p))) ≥ 0.995. This implies c√n ≥ 2.58p(1 − p). Since p(1 − p) ≤ 1/4 for all p, we require that c ≥ (0.25)(2.58)/√n. In this case, the 0.99 confidence interval estimate is

M_n(X) − 0.645/√n ≤ p ≤ M_n(X) + 0.645/√n.   (6)

Note that if M₁₀₀(X) = 0.4, then the 0.99 confidence interval estimate is

0.3355 ≤ p ≤ 0.4645.   (7)

The interval is wide because the 0.99 confidence is high.

Quiz 7.5

Following the approach of bernoullitraces.m, we generate m = 1000 sample paths, each sample path having n = 100 Bernoulli traces. At time k, OK(k) counts the fraction of sample paths that have sample mean within one standard error of p. The program bernoullisample.m graphs the number of traces within one standard error as a function of the time, i.e., the number of trials in each trace.

function OK=bernoullisample(n,m,p);
x=reshape(bernoullirv(p,m*n),n,m);
nn=(1:n)'*ones(1,m);
MN=cumsum(x)./nn;
stderr=sqrt(p*(1-p))./sqrt((1:n)');
stderrmat=stderr*ones(1,m);
OK=sum(abs(MN-p)<stderrmat,2)/m;
plot(1:n,OK,'-s');

The following graph was generated by bernoullisample(100,5000,0.5):

[Figure (plot omitted): the fraction of traces within one standard error versus the number of trials, fluctuating around 0.68.]

As we would expect, as m gets large, the fraction of traces within one standard error approaches 2Φ(1) − 1 ≈ 0.68. The unusual sawtooth pattern, though perhaps unexpected, is examined in Problem 7.5.2.


Quiz Solutions – Chapter 8

Quiz 8.1

From the problem statement, each Xᵢ has PDF and CDF

f_{Xᵢ}(x) = { e^{−x}, x ≥ 0; 0 otherwise },   F_{Xᵢ}(x) = { 0, x < 0; 1 − e^{−x}, x ≥ 0 }   (1)

Hence, the CDF of the maximum of X₁, ..., X₁₅ obeys

F_X(x) = P[X ≤ x] = P[X₁ ≤ x, X₂ ≤ x, ..., X₁₅ ≤ x] = [P[Xᵢ ≤ x]]¹⁵.   (2)

This implies that for x ≥ 0,

F_X(x) = [F_{Xᵢ}(x)]¹⁵ = (1 − e^{−x})¹⁵   (3)

To design a significance test, we must choose a rejection region for X. A reasonable choice is to reject the hypothesis if X is too small. That is, let R = {X ≤ r}. For a significance level of α = 0.01, we obtain

α = P[X ≤ r] = (1 − e^{−r})¹⁵ = 0.01   (4)

It is straightforward to show that

r = −ln(1 − (0.01)^{1/15}) = 1.33   (5)

Hence, if we observe X < 1.33, then we reject the hypothesis.
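The rejection threshold in (5) is one line of MATLAB:

%alpha=0.01 rejection threshold for the max of 15 exponentials
r=-log(1-0.01^(1/15))   %r = 1.33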

Quiz 8.2

From the problem statement, the conditional PMFs of K are

P_{K|H₀}(k) = { 10^{4k} e^{−10⁴}/k!,   k = 0, 1, ...;
                0,                     otherwise }    (1)

P_{K|H₁}(k) = { 10^{6k} e^{−10⁶}/k!,   k = 0, 1, ...;
                0,                     otherwise }    (2)

Since the two hypotheses are equally likely, the MAP and ML tests are the same. From Theorem 8.6, the ML hypothesis rule is

k ∈ A₀ if P_{K|H₀}(k) ≥ P_{K|H₁}(k);   k ∈ A₁ otherwise.   (3)

This rule simplifies to

k ∈ A₀ if k ≤ k* = (10⁶ − 10⁴)/ln 100 = 214,975.7;   k ∈ A₁ otherwise.   (4)

Thus if we observe at least 214,976 photons, then we accept hypothesis H₁.
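The threshold k* is also a one-line computation:

%ML/MAP decision threshold for the Poisson hypothesis test
kstar=(1e6-1e4)/log(100)   %k* = 214975.7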


Quiz 8.3

For the QPSK system, a symbol error occurs when sᵢ is transmitted but (X₁, X₂) ∈ Aⱼ for some j ≠ i. For a QPSK system, it is easier to calculate the probability of a correct decision. Given H₀, the conditional probability of a correct decision is

P[C|H₀] = P[X₁ > 0, X₂ > 0|H₀] = P[√(E/2) + N₁ > 0, √(E/2) + N₂ > 0]   (1)

Because of the symmetry of the signals, P[C|H₀] = P[C|Hᵢ] for all i. This implies the probability of a correct decision is P[C] = P[C|H₀]. Since N₁ and N₂ are iid Gaussian (0, σ) random variables, we have

P[C] = P[C|H₀] = P[√(E/2) + N₁ > 0] P[√(E/2) + N₂ > 0]   (2)
     = (P[N₁ > −√(E/2)])²   (3)
     = (1 − Φ(−√(E/2)/σ))²   (4)

Since Φ(−x) = 1 − Φ(x), we have P[C] = Φ²(√(E/(2σ²))). Equivalently, the probability of error is

P_ERR = 1 − P[C] = 1 − Φ²(√(E/(2σ²)))   (5)

Quiz 8.4

To generate the ROC, the existing program sqdistor already calculates the miss probability P_MISS = P₀₁ and the false alarm probability P_FA = P₁₀. The modified program sqdistroc.m is essentially the same as sqdistor except that the output is a matrix FM whose columns are the false alarm and miss probabilities. Next, the program sqdistrocplot.m calls sqdistroc three times to generate a plot that compares the receiver performance for the three requested values of d. Here is the modified code:

function FM=sqdistroc(v,d,m,T)
%square law distortion recvr
%P(error) for m bits tested
%transmit v volts or -v volts,
%add N volts, N is Gauss(0,1)
%add d(v+N)^2 distortion
%receive 1 if x>T, otherwise 0
%FM = [P(FA) P(MISS)]
x=(v+randn(m,1));
[XX,TT]=ndgrid(x,T(:));
P01=sum((XX+d*(XX.^2)< TT),1)/m;
x= -v+randn(m,1);
[XX,TT]=ndgrid(x,T(:));
P10=sum((XX+d*(XX.^2)>TT),1)/m;
FM=[P10(:) P01(:)];

function FM=sqdistrocplot(v,m,T);
FM1=sqdistroc(v,0.1,m,T);
FM2=sqdistroc(v,0.2,m,T);
FM5=sqdistroc(v,0.3,m,T);
FM=[FM1 FM2 FM5];
loglog(FM1(:,1),FM1(:,2),'-k', ...
FM2(:,1),FM2(:,2),'--k', ...
FM5(:,1),FM5(:,2),':k');
legend('\it d=0.1','\it d=0.2',...
'\it d=0.3',3)
ylabel('P_{MISS}');
xlabel('P_{FA}');

To see the effect of d, the commands

T=-3:0.1:3; sqdistrocplot(3,100000,T);

generated the plot shown in Figure 3.

[Figure 3 (log-log plot of P_MISS versus P_FA for d = 0.1, 0.2, 0.3 omitted): The receiver operating curve for the communications system of Quiz 8.4 with squared distortion.]


Quiz Solutions – Chapter 9

Quiz 9.1

(1) First, we calculate the marginal PDF for 0 ≤ y ≤ 1:

f_Y(y) = ∫_0^y 2(y + x) dx = [2xy + x²]_{x=0}^{x=y} = 3y²   (1)

This implies the conditional PDF of X given Y is

f_{X|Y}(x|y) = f_{X,Y}(x, y)/f_Y(y) = { 2/(3y) + 2x/(3y²),   0 ≤ x ≤ y;
                                        0,                   otherwise }    (2)

(2) The minimum mean square error estimate of X given Y = y is

x̂_M(y) = E[X|Y = y] = ∫_0^y (2x/(3y) + 2x²/(3y²)) dx = 5y/9   (3)

Thus the MMSE estimator of X given Y is X̂_M(Y) = 5Y/9.

(3) To obtain the conditional PDF f_{Y|X}(y|x), we need the marginal PDF f_X(x). For 0 ≤ x ≤ 1,

f_X(x) = ∫_x^1 2(y + x) dy = [y² + 2xy]_{y=x}^{y=1} = 1 + 2x − 3x²   (4)

For 0 ≤ x ≤ 1, the conditional PDF of Y given X is

f_{Y|X}(y|x) = { 2(y + x)/(1 + 2x − 3x²),   x ≤ y ≤ 1;
                 0,                         otherwise }    (5)

(4) The MMSE estimate of Y given X = x is

ŷ_M(x) = E[Y|X = x] = ∫_x^1 (2y² + 2xy)/(1 + 2x − 3x²) dy   (6)
       = [(2y³/3 + xy²)/(1 + 2x − 3x²)]_{y=x}^{y=1}   (7)
       = (2 + 3x − 5x³)/(3 + 6x − 9x²)   (8)


Quiz 9.2

(1) Since the expectation of the sum equals the sum of the expectations,

E[R] = E[T] + E[X] = 0   (1)

(2) Since T and X are independent, the variance of the sum R = T + X is

Var[R] = Var[T] + Var[X] = 9 + 3 = 12   (2)

(3) Since T and R have expected values E[R] = E[T] = 0,

Cov[T, R] = E[TR] = E[T(T + X)] = E[T²] + E[TX]   (3)

Since T and X are independent and have zero expected value, E[TX] = E[T]E[X] = 0 and E[T²] = Var[T]. Thus Cov[T, R] = Var[T] = 9.

(4) From Definition 4.8, the correlation coefficient of T and R is

ρ_{T,R} = Cov[T, R]/√(Var[R] Var[T]) = σ_T/σ_R = √3/2   (4)

(5) From Theorem 9.4, the optimum linear estimate of T given R is

T̂_L(R) = ρ_{T,R}(σ_T/σ_R)(R − E[R]) + E[T]   (5)

Since E[R] = E[T] = 0 and ρ_{T,R} = σ_T/σ_R,

T̂_L(R) = (σ_T²/σ_R²)R = (σ_T²/(σ_T² + σ_X²))R = (3/4)R   (6)

Hence a* = 3/4 and b* = 0.

(6) By Theorem 9.4, the mean square error of the linear estimate is

e*_L = Var[T](1 − ρ²_{T,R}) = 9(1 − 3/4) = 9/4   (7)

Quiz 9.3

When R = r, X is Gaussian with expected value −40 − 40 log_{10} r and variance 64. The conditional PDF of X given R is

f_{X|R}(x|r) = \frac{1}{\sqrt{128π}} e^{−(x + 40 + 40 \log_{10} r)^2/128}   (1)

From the conditional PDF f_{X|R}(x|r), we can use Definition 9.2 to write the ML estimate of R given X = x as

\hat{r}_{ML}(x) = \arg\max_{r ≥ 0} f_{X|R}(x|r)   (2)

We observe that f_{X|R}(x|r) is maximized when the exponent (x + 40 + 40 \log_{10} r)^2 is minimized. This minimum occurs when the exponent is zero, yielding

\log_{10} r = −1 − x/40   (3)

or

\hat{r}_{ML}(x) = (0.1) 10^{−x/40} m   (4)

If the result doesn't look correct, note that a typical figure for the signal strength might be x = −120 dB. This corresponds to a distance estimate of \hat{r}_{ML}(−120) = 100 m.

For the MAP estimate, we observe that the joint PDF of X and R is

f_{X,R}(x, r) = f_{X|R}(x|r) f_R(r) = \frac{1}{10^6\sqrt{32π}} r e^{−(x + 40 + 40\log_{10} r)^2/128}   (5)

From Theorem 9.6, the MAP estimate of R given X = x is the value of r that maximizes f_{X,R}(x, r). That is,

\hat{r}_{MAP}(x) = \arg\max_{0 ≤ r ≤ 1000} f_{X,R}(x, r)   (6)

Note that we have included the constraint r ≤ 1000 in the maximization to highlight the fact that under our probability model, R ≤ 1000 m. Setting the derivative of f_{X,R}(x, r) with respect to r to zero yields

e^{−(x + 40 + 40\log_{10} r)^2/128} \left[1 − \frac{80 \log_{10} e}{128}(x + 40 + 40\log_{10} r)\right] = 0   (7)

Solving for r yields

r = 10^{\frac{1}{25\log_{10} e} − 1}\, 10^{−x/40} = (0.1236)\, 10^{−x/40}   (8)

This is the MAP estimate of R given X = x as long as r ≤ 1000 m. When x ≤ −156.3 dB, the above estimate will exceed 1000 m, which is not possible in our probability model. Hence, the complete description of the MAP estimate is

\hat{r}_{MAP}(x) = \begin{cases} 1000 & x < −156.3 \\ (0.1236)\, 10^{−x/40} & x ≥ −156.3 \end{cases}   (9)

For example, if x = −120 dB, then \hat{r}_{MAP}(−120) = 123.6 m. When the measured signal strength is not too low, the MAP estimate is 23.6% larger than the ML estimate. This reflects the fact that large values of R are a priori more probable than small values. However, for very low signal strengths, the MAP estimate takes into account that the distance can never exceed 1000 m.
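
The two estimators are easy to compare numerically. The following fragment is ours (not a program from the text) and evaluates both estimates for a measured signal strength x:

%Check of Quiz 9.3: ML and MAP distance estimates at x=-120 dB
x=-120;
rml=0.1*10^(-x/40)                 %ML estimate: 100 m
rmap=min(1000,0.1236*10^(-x/40))   %MAP estimate with the 1000 m cap: 123.6 m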

Quiz 9.4

(1) From Theorem 9.4, the LMSE estimate of X_2 given Y_2 is \hat{X}_2(Y_2) = a*Y_2 + b* where

a* = \frac{Cov[X_2, Y_2]}{Var[Y_2]},   b* = μ_{X_2} − a*μ_{Y_2}.   (1)

Because E[X] = E[Y] = 0,

Cov[X_2, Y_2] = E[X_2 Y_2] = E[X_2(X_2 + W_2)] = E[X_2^2] = 1   (2)

Var[Y_2] = Var[X_2] + Var[W_2] = E[X_2^2] + E[W_2^2] = 1.1   (3)

It follows that a* = 1/1.1. Because μ_{X_2} = μ_{Y_2} = 0, it follows that b* = 0. Finally, to compute the expected square error, we calculate the correlation coefficient

ρ_{X_2,Y_2} = \frac{Cov[X_2, Y_2]}{σ_{X_2}σ_{Y_2}} = \frac{1}{\sqrt{1.1}}   (4)

The expected square error is

e_L^* = Var[X_2](1 − ρ_{X_2,Y_2}^2) = 1 − \frac{1}{1.1} = \frac{1}{11} = 0.0909   (5)

(2) Since Y = X + W and E[X] = E[W] = 0, it follows that E[Y] = 0. Thus we can apply Theorem 9.7. Note that X and W have correlation matrices

R_X = \begin{bmatrix} 1 & −0.9 \\ −0.9 & 1 \end{bmatrix},   R_W = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}.   (6)

In terms of Theorem 9.7, n = 2 and we wish to estimate X_2 given the observation vector Y = [Y_1 Y_2]'. To apply Theorem 9.7, we need to find R_Y and R_{YX_2}.

R_Y = E[YY'] = E[(X + W)(X' + W')]   (7)
    = E[XX' + XW' + WX' + WW'].   (8)

Because X and W are independent, E[XW'] = E[X]E[W'] = 0. Similarly, E[WX'] = 0. This implies

R_Y = E[XX'] + E[WW'] = R_X + R_W = \begin{bmatrix} 1.1 & −0.9 \\ −0.9 & 1.1 \end{bmatrix}.   (9)

In addition, we need to find

R_{YX_2} = E[YX_2] = \begin{bmatrix} E[Y_1 X_2] \\ E[Y_2 X_2] \end{bmatrix} = \begin{bmatrix} E[(X_1 + W_1)X_2] \\ E[(X_2 + W_2)X_2] \end{bmatrix}.   (10)

Since X and W are independent vectors, E[W_1 X_2] = E[W_1]E[X_2] = 0 and E[W_2 X_2] = 0. Thus

R_{YX_2} = \begin{bmatrix} E[X_1 X_2] \\ E[X_2^2] \end{bmatrix} = \begin{bmatrix} −0.9 \\ 1 \end{bmatrix}.   (11)

By Theorem 9.7,

\hat{a} = R_Y^{−1} R_{YX_2} = \begin{bmatrix} −0.225 \\ 0.725 \end{bmatrix}   (12)

Therefore, the optimum linear estimator of X_2 given Y_1 and Y_2 is

\hat{X}_L = \hat{a}'Y = −0.225 Y_1 + 0.725 Y_2.   (13)

The mean square error is

Var[X_2] − \hat{a}'R_{YX_2} = Var[X_2] − a_1 r_{Y_1,X_2} − a_2 r_{Y_2,X_2} = 0.0725.   (14)
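
Since part (2) reduces to a pair of small matrix computations, it is easy to verify with a few MATLAB commands (ours, for checking only):

%Check of Quiz 9.4(2): ahat and the mean square error
RY=[1.1 -0.9; -0.9 1.1];
RYX2=[-0.9; 1];
ahat=RY\RYX2            %[-0.225; 0.725]
mse=1-ahat'*RYX2        %0.0725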

Quiz 9.5

Since X and W have zero expected value, Y also has zero expected value. Thus, by Theorem 9.7, \hat{X}_L(Y) = \hat{a}'Y where \hat{a} = R_Y^{−1} R_{YX}. Since X and W are independent, E[WX] = 0 and E[XW'] = 0'. This implies

R_{YX} = E[YX] = E[(1X + W)X] = 1 E[X^2] = 1.   (1)

By the same reasoning, the correlation matrix of Y is

R_Y = E[YY'] = E[(1X + W)(1'X + W')]   (2)
    = 11' E[X^2] + 1 E[XW'] + E[WX] 1' + E[WW']   (3)
    = 11' + R_W   (4)

Note that 11' is a 20 × 20 matrix with every entry equal to 1. Thus,

\hat{a} = R_Y^{−1} R_{YX} = (11' + R_W)^{−1} 1   (5)

and the optimal linear estimator is

\hat{X}_L(Y) = 1'(11' + R_W)^{−1} Y   (6)

The mean square error is

e_L^* = Var[X] − \hat{a}'R_{YX} = 1 − 1'(11' + R_W)^{−1} 1   (7)

Now we note that R_W has i, j-th entry R_W(i, j) = c^{|i−j|−1}. The question we must address is what value c minimizes e_L^*. This problem is atypical in that one does not usually get to choose the correlation structure of the noise. However, we will see that the answer is somewhat instructive.

We note that the answer is not obviously apparent from Equation (7). In particular, we observe that Var[W_i] = R_W(i, i) = 1/c. Thus, when c is small, the noises W_i have high variance and we would expect our estimator to be poor. On the other hand, if c is large, W_i and W_j are highly correlated and the separate measurements of X are very dependent. This would suggest that large values of c will also result in poor MSE. If this argument is not clear, consider the extreme case in which every W_i and W_j have correlation coefficient ρ_{ij} = 1. In this case, our 20 measurements will be all the same and one measurement is as good as 20 measurements.

To find the optimal value of c, we write a MATLAB function mquiz9(c) to calculate the MSE for a given c, and a second function mquiz9minc(c) that plots the MSE for a range of values of c and returns the minimizing value.

function [mse,af]=mquiz9(c);
v1=ones(20,1);
RW=toeplitz(c.^((0:19)-1));
RY=(v1*(v1')) +RW;
af=(inv(RY))*v1;
mse=1-((v1')*af);

function cmin=mquiz9minc(c);
msec=zeros(size(c));
for k=1:length(c),
   [msec(k),af]=mquiz9(c(k));
end
plot(c,msec);
xlabel('c');ylabel('e_L^*');
[msemin,optk]=min(msec);
cmin=c(optk);

Note in mquiz9 that v1 corresponds to the vector 1 of all ones. The following commands find the minimum c and also produce the following graph:

>> c=0.01:0.01:0.99;
>> mquiz9minc(c)
ans =
    0.4500

[Graph: e_L^* versus c for 0 < c < 1, showing a minimum near c = 0.45.]

As we see in the graph, both small values and large values of c result in large MSE.

Quiz Solutions – Chapter 10

Quiz 10.1

There are many correct answers to this question. A correct answer specifies enough random variables to specify the sample path exactly. One choice for an alternate set of random variables that would specify m(t, s) is

• m(0, s), the number of ongoing calls at the start of the experiment

• N, the number of new calls that arrive during the experiment

• X_1, ..., X_N, the interarrival times of the N new arrivals

• H, the number of calls that hang up during the experiment

• D_1, ..., D_H, the call completion times of the H calls that hang up

Quiz 10.2

(1) We obtain a continuous time, continuous valued process when we record the temperature as a continuous waveform over time.

(2) If at every moment in time, we round the temperature to the nearest degree, then we obtain a continuous time, discrete valued process.

(3) If we sample the process in part (1) every T seconds, then we obtain a discrete time, continuous valued process.

(4) Rounding the samples in part (3) to the nearest integer degree yields a discrete time, discrete valued process.

Quiz 10.3

(1) Each resistor has resistance R in ohms with uniform PDF

f_R(r) = \begin{cases} 0.01 & 950 ≤ r ≤ 1050 \\ 0 & \text{otherwise} \end{cases}   (1)

The probability that a test produces a 1% resistor is

p = P[990 ≤ R ≤ 1010] = \int_{990}^{1010} (0.01)\, dr = 0.2   (2)

(2) In t seconds, exactly t resistors are tested. Each resistor is a 1% resistor with probability p, independent of any other resistor. Consequently, the number of 1% resistors found has the binomial PMF

P_{N(t)}(n) = \begin{cases} \binom{t}{n} p^n (1 − p)^{t−n} & n = 0, 1, \ldots, t \\ 0 & \text{otherwise} \end{cases}   (3)

(3) First we will find the PMF of T_1. This problem is easy if we view each resistor test as an independent trial. A success occurs on a trial with probability p if we find a 1% resistor. The first 1% resistor is found at time T_1 = t if we observe failures on trials 1, ..., t − 1 followed by a success on trial t. Hence, just as in Example 2.11, T_1 has the geometric PMF

P_{T_1}(t) = \begin{cases} (1 − p)^{t−1} p & t = 1, 2, \ldots \\ 0 & \text{otherwise} \end{cases}   (4)

Since p = 0.2, the probability the first 1% resistor is found in exactly five seconds is P_{T_1}(5) = (0.8)^4 (0.2) = 0.08192.

(4) From Theorem 2.5, a geometric random variable with success probability p has expected value 1/p. In this problem, E[T_1] = 1/p = 5.

(5) Note that once we find the first 1% resistor, the number of additional trials needed to find the second 1% resistor once again has a geometric PMF with expected value 1/p since each independent trial is a success with probability p. That is, T_2 = T_1 + T′ where T′ is independent and identically distributed to T_1. Thus

E[T_2 | T_1 = 10] = E[T_1 | T_1 = 10] + E[T′ | T_1 = 10]   (5)
                  = 10 + E[T′] = 10 + 5 = 15   (6)
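
Since geometricrv is part of the matcode archive, parts (3) and (4) are easy to check by simulation. The script below is a sketch of ours, not a program from the text:

%Check of Quiz 10.3: simulate T1, the time of the first 1% resistor
p=0.2;
t1=geometricrv(p,10000);   %10,000 samples of the geometric (p) rv T1
mean(t1)                   %should be close to E[T1]=1/p=5
mean(t1==5)                %should be close to P_T1(5)=(0.8)^4(0.2)=0.08192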

Quiz 10.4

Since each X_i is a N(0, 1) random variable, each X_i has PDF

f_{X(i)}(x) = \frac{1}{\sqrt{2π}} e^{−x^2/2}   (1)

By Theorem 10.1, the joint PDF of X = [X_1 \cdots X_n]' is

f_X(x) = f_{X(1),\ldots,X(n)}(x_1, \ldots, x_n) = \prod_{i=1}^{n} f_X(x_i) = \frac{1}{(2π)^{n/2}} e^{−(x_1^2 + \cdots + x_n^2)/2}   (2)

Quiz 10.5

The first and second hours are nonoverlapping intervals. Since one hour equals 3600 sec and the Poisson process has a rate of 10 packets/sec, the expected number of packets in each hour is E[M_i] = α = 36,000. This implies M_1 and M_2 are independent Poisson random variables each with PMF

P_{M_i}(m) = \begin{cases} \frac{α^m e^{−α}}{m!} & m = 0, 1, 2, \ldots \\ 0 & \text{otherwise} \end{cases}   (1)

Since M_1 and M_2 are independent, the joint PMF of M_1 and M_2 is

P_{M_1,M_2}(m_1, m_2) = P_{M_1}(m_1) P_{M_2}(m_2) = \begin{cases} \frac{α^{m_1+m_2} e^{−2α}}{m_1!\, m_2!} & m_1 = 0, 1, \ldots;\; m_2 = 0, 1, \ldots \\ 0 & \text{otherwise.} \end{cases}   (2)
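
Because α = 36,000 is large, a direct evaluation of α^m e^{−α}/m! overflows, so any numerical check should work in the log domain. The fragment below is our sketch (gammaln(m+1) = log m! is a standard MATLAB function):

%Sketch for Quiz 10.5 (ours): evaluate the joint PMF in the log domain
alpha=36000; m1=36000; m2=35900;
logP=@(m) -alpha + m*log(alpha) - gammaln(m+1);  %log of alpha^m e^-alpha/m!
PM1M2=exp(logP(m1)+logP(m2))                     %about 3.8e-6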

Quiz 10.6

To answer whether N′(t) is a Poisson process, we look at the interarrival times. Let X_1, X_2, ... denote the interarrival times of the N(t) process. Since we count only even-numbered arrivals for N′(t), the time until the first arrival of the N′(t) process is Y_1 = X_1 + X_2. Since X_1 and X_2 are independent exponential (λ) random variables, Y_1 is an Erlang (n = 2, λ) random variable; see Theorem 6.11. Since Y_i, the ith interarrival time of the N′(t) process, has the same PDF as Y_1, we can conclude that the interarrival times of N′(t) are not exponential random variables. Thus N′(t) is not a Poisson process.

Quiz 10.7

First, we note that for t > s,

X(t) − X(s) = \frac{W(t) − W(s)}{\sqrt{α}}   (1)

Since W(t) − W(s) is a Gaussian random variable, Theorem 3.13 states that X(t) − X(s) is Gaussian with expected value

E[X(t) − X(s)] = \frac{E[W(t) − W(s)]}{\sqrt{α}} = 0   (2)

and variance

E[(X(t) − X(s))^2] = \frac{E[(W(t) − W(s))^2]}{α} = \frac{α(t − s)}{α} = t − s   (3)

Consider s′ ≤ s < t. Since s ≥ s′, W(t) − W(s) is independent of W(s′). This implies [W(t) − W(s)]/\sqrt{α} is independent of W(s′)/\sqrt{α} for all s ≥ s′. That is, X(t) − X(s) is independent of X(s′) for all s ≥ s′. Thus X(t) is a Brownian motion process with variance Var[X(t)] = t.

Quiz 10.8

First we find the expected value

μ_Y(t) = μ_X(t) + μ_N(t) = μ_X(t).   (1)

To find the autocorrelation, we observe that since X(t) and N(t) are independent and since N(t) has zero expected value, E[X(t)N(t′)] = E[X(t)]E[N(t′)] = 0. Since R_Y(t, τ) = E[Y(t)Y(t + τ)], we have

R_Y(t, τ) = E[(X(t) + N(t))(X(t + τ) + N(t + τ))]   (2)
          = E[X(t)X(t + τ)] + E[X(t)N(t + τ)] + E[X(t + τ)N(t)] + E[N(t)N(t + τ)]   (3)
          = R_X(t, τ) + R_N(t, τ).   (4)

Quiz 10.9

From Definition 10.14, X_1, X_2, ... is a stationary random sequence if for all sets of time instants n_1, ..., n_m and time offset k,

f_{X_{n_1},\ldots,X_{n_m}}(x_1, \ldots, x_m) = f_{X_{n_1+k},\ldots,X_{n_m+k}}(x_1, \ldots, x_m)   (1)

Since the random sequence is iid,

f_{X_{n_1},\ldots,X_{n_m}}(x_1, \ldots, x_m) = f_X(x_1) f_X(x_2) \cdots f_X(x_m)   (2)

Similarly, for time instants n_1 + k, ..., n_m + k,

f_{X_{n_1+k},\ldots,X_{n_m+k}}(x_1, \ldots, x_m) = f_X(x_1) f_X(x_2) \cdots f_X(x_m)   (3)

We can conclude that the iid random sequence is stationary.

Quiz 10.10

We must check whether each function R(τ) meets the conditions of Theorem 10.12:

R(0) ≥ 0,   R(τ) = R(−τ),   |R(τ)| ≤ R(0)   (1)

(1) R_1(τ) = e^{−|τ|} meets all three conditions and thus is valid.

(2) R_2(τ) = e^{−τ^2} also is valid.

(3) R_3(τ) = e^{−τ} cos τ is not valid because

R_3(−2π) = e^{2π} \cos 2π = e^{2π} > 1 = R_3(0)   (2)

(4) R_4(τ) = e^{−τ^2} sin τ also cannot be an autocorrelation function because

R_4(π/2) = e^{−(π/2)^2} \sin(π/2) = e^{−π^2/4} > 0 = R_4(0)   (3)
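
A quick numerical check of the failed conditions (our fragment, not from the text):

%Check of Quiz 10.10: R3 and R4 violate |R(tau)| <= R(0)
exp(2*pi)*cos(2*pi)        %R3(-2*pi)=535.49, far above R3(0)=1
exp(-(pi/2)^2)*sin(pi/2)   %R4(pi/2)=0.085, above R4(0)=0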

Quiz 10.11

(1) The autocorrelation of Y(t) is

R_Y(t, τ) = E[Y(t)Y(t + τ)]   (1)
          = E[X(−t)X(−t − τ)]   (2)
          = R_X(−t − (−t − τ)) = R_X(τ)   (3)

Since E[Y(t)] = E[X(−t)] = μ_X, we can conclude that Y(t) is a wide sense stationary process. In fact, we see that by viewing a process backwards in time, we see the same second order statistics.

(2) Since X(t) and Y(t) are both wide sense stationary processes, we can check whether they are jointly wide sense stationary by seeing if R_{XY}(t, τ) is just a function of τ. In this case,

R_{XY}(t, τ) = E[X(t)Y(t + τ)]   (4)
             = E[X(t)X(−t − τ)]   (5)
             = R_X(t − (−t − τ)) = R_X(2t + τ)   (6)

Since R_{XY}(t, τ) depends on both t and τ, we conclude that X(t) and Y(t) are not jointly wide sense stationary. To see why this is, suppose R_X(τ) = e^{−|τ|} so that samples of X(t) far apart in time have almost no correlation. In this case, as t gets larger, Y(t) = X(−t) and X(t) become less and less correlated.

Quiz 10.12

From the problem statement,

E[X(t)] = E[X(t + 1)] = 0   (1)
E[X(t)X(t + 1)] = 1/2   (2)
Var[X(t)] = Var[X(t + 1)] = 1   (3)

The Gaussian random vector X = [X(t) X(t + 1)]' has covariance matrix and corresponding inverse

C_X = \begin{bmatrix} 1 & 1/2 \\ 1/2 & 1 \end{bmatrix},   C_X^{−1} = \frac{4}{3}\begin{bmatrix} 1 & −1/2 \\ −1/2 & 1 \end{bmatrix}   (4)

Since

x'C_X^{−1}x = \begin{bmatrix} x_0 & x_1 \end{bmatrix} \frac{4}{3}\begin{bmatrix} 1 & −1/2 \\ −1/2 & 1 \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \end{bmatrix} = \frac{4}{3}\left(x_0^2 − x_0 x_1 + x_1^2\right)   (5)

the joint PDF of X(t) and X(t + 1) is the Gaussian vector PDF

f_{X(t),X(t+1)}(x_0, x_1) = \frac{1}{(2π)^{n/2}[\det(C_X)]^{1/2}} \exp\left(−\frac{1}{2} x'C_X^{−1}x\right)   (6)
                          = \frac{1}{\sqrt{3π^2}} e^{−\frac{2}{3}(x_0^2 − x_0 x_1 + x_1^2)}   (7)
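
For a numeric sanity check (ours), the PDF can be evaluated directly from C_X:

%Check of Quiz 10.12: evaluate the joint PDF at (x0,x1)=(0,0)
CX=[1 0.5; 0.5 1];
x=[0;0];
f=exp(-x'*inv(CX)*x/2)/(2*pi*sqrt(det(CX)))  %1/sqrt(3*pi^2)=0.1838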

Figure 4: Sample path of 100 minutes of the blocking switch of Quiz 10.13.

Quiz 10.13

The simple structure of the switch simulation of Example 10.28 admits a deceptively simple solution in terms of the vector of arrivals A and the vector of departures D. With the introduction of call blocking, we cannot generate these vectors all at once. In particular, when an arrival occurs at time t, we need to know that M(t), the number of ongoing calls, satisfies M(t) < c = 120. Otherwise, when M(t) = c, we must block the call. Call blocking can be implemented by setting the service time of the call to zero so that the call departs as soon as it arrives.

The blocking switch is an example of a discrete event system. The system evolves via a sequence of discrete events, namely arrivals and departures, at discrete time instances. A simulation of the system moves from one time instant to the next by maintaining a chronological schedule of future events (arrivals and departures) to be executed. The program simply executes the event at the head of the schedule. The logic of such a simulation is

1. Start at time t = 0 with an empty system. Schedule the first arrival to occur at S_1, an exponential (λ) random variable.

2. Examine the head-of-schedule event.

   • When the head-of-schedule event is the kth arrival, at time t, check the state M(t).

     – If M(t) < c, admit the arrival, increase the system state n by 1, and schedule a departure to occur at time t + S_n, where S_n is an exponential (μ) random variable.

     – If M(t) = c, block the arrival and do not schedule a departure event.

   • If the head-of-schedule event is a departure, reduce the system state n by 1.

3. Delete the head-of-schedule event and go to step 2.

After the head-of-schedule event is completed and any new events (departures in this system) are scheduled, we know the system state cannot change until the next scheduled event. Thus we know that M(t) will stay the same until then. In our simulation, we use the vector t as the set of time instances at which we inspect the system state. Thus for all times t(i) between the current head-of-schedule event and the next, we set m(i) to the current switch state.

The complete program is shown in Figure 5. In most programming languages, it is common to implement the event schedule as a linked list where each item in the list has a data structure indicating an event timestamp and the type of the event. In MATLAB, a simple (but not elegant) way to do this is to maintain two vectors: time is a list of timestamps of scheduled events and event is the list of event types. In this case, event(i)=1 if the ith scheduled event is an arrival, or event(i)=-1 if the ith scheduled event is a departure.

When the program is passed a vector t, the output [m a b] is such that m(i) is the number of ongoing calls at time t(i) while a and b are the number of admits and blocks. The following instructions

t=0:0.1:5000;
[m,a,b]=simblockswitch(10,0.1,120,t);
plot(t,m);

generated a simulation lasting 5,000 minutes. A sample path of the first 100 minutes of that simulation is shown in Figure 4. The 5,000 minute full simulation produced a=49658 admitted calls and b=239 blocked calls. We can estimate the probability a call is blocked as

\hat{P}_b = \frac{b}{a + b} = 0.0048.   (1)

In Chapter 12, we will learn that the exact blocking probability is given by Equation (12.93), a result known as the "Erlang-B formula." From the Erlang-B formula, we can calculate that the exact blocking probability is P_b = 0.0057. One reason our simulation underestimates the blocking probability is that in a 5,000 minute simulation, roughly the first 100 minutes are needed to load up the switch since the switch is idle when the simulation starts at time t = 0. However, this says that only roughly the first two percent of the simulation time was unusual, so it would account for only part of the disparity. The rest of the gap between 0.0048 and 0.0057 is that a simulation that includes only 239 blocks is not all that likely to give a very accurate result for the blocking probability.

Note that in Chapter 12, we will learn that the blocking switch is an example of an M/M/c/c queue, a kind of Markov chain. Chapter 12 develops techniques for analyzing and simulating systems described by Markov chains that are much simpler than the discrete event simulation technique shown here. Nevertheless, for very complicated systems, discrete event simulation is a widely used and often very efficient simulation method.

function [M,admits,blocks]=simblockswitch(lam,mu,c,t);
blocks=0; %total # blocks
admits=0; %total # admits
M=zeros(size(t));
n=0; % # in system
time=[ exponentialrv(lam,1) ];
event=[ 1 ]; %first event is an arrival
timenow=0;
tmax=max(t);
while (timenow<tmax)
   M((timenow<=t)&(t<time(1)))=n;
   timenow=time(1);
   eventnow=event(1);
   event(1)=[ ]; time(1)=[ ]; % clear current event
   if (eventnow==1) % arrival
      arrival=timenow+exponentialrv(lam,1); % next arrival
      b4arrival=time<arrival;
      event=[event(b4arrival) 1 event(~b4arrival)];
      time=[time(b4arrival) arrival time(~b4arrival)];
      if n<c %call admitted
         admits=admits+1;
         n=n+1;
         depart=timenow+exponentialrv(mu,1);
         b4depart=time<depart;
         event=[event(b4depart) -1 event(~b4depart)];
         time=[time(b4depart) depart time(~b4depart)];
      else
         blocks=blocks+1; %one more block, immed departure
         disp(sprintf('Time %10.3d Admits %10d Blocks %10d',...
            timenow,admits,blocks));
      end
   elseif (eventnow==-1) %departure
      n=n-1;
   end
end

Figure 5: Discrete event simulation of the blocking switch of Quiz 10.13.

Quiz Solutions – Chapter 11

Quiz 11.1

By Theorem 11.2,

μ_Y = μ_X \int_{−∞}^{∞} h(t)\, dt = 2 \int_{0}^{∞} e^{−t}\, dt = 2   (1)

Since R_X(τ) = δ(τ), the autocorrelation function of the output is

R_Y(τ) = \int_{−∞}^{∞} h(u) \int_{−∞}^{∞} h(v) δ(τ + u − v)\, dv\, du = \int_{−∞}^{∞} h(u) h(τ + u)\, du   (2)

For τ > 0, we have

R_Y(τ) = \int_{0}^{∞} e^{−u} e^{−τ−u}\, du = e^{−τ} \int_{0}^{∞} e^{−2u}\, du = \frac{1}{2} e^{−τ}   (3)

For τ < 0, we can deduce that R_Y(τ) = \frac{1}{2} e^{−|τ|} by symmetry. Just to be safe though, we can double check. For τ < 0,

R_Y(τ) = \int_{−τ}^{∞} h(u) h(τ + u)\, du = \int_{−τ}^{∞} e^{−u} e^{−τ−u}\, du = \frac{1}{2} e^{τ}   (4)

Hence,

R_Y(τ) = \frac{1}{2} e^{−|τ|}   (5)
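
The convolution integral in Equation (2) is also easy to check numerically. The fragment below is ours; the integral function assumes a recent MATLAB release (older releases would use quad):

%Check of Quiz 11.1: R_Y(1) should equal 0.5*exp(-1)=0.1839
h=@(t) exp(-t).*(t>=0);
tau=1;
RY=integral(@(u) h(u).*h(tau+u),0,50)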

Quiz 11.2

The expected value of the output is

μ_Y = μ_X \sum_{n=−∞}^{∞} h_n = 0.5(1 + (−1)) = 0   (1)

The autocorrelation of the output is

R_Y[n] = \sum_{i=0}^{1} \sum_{j=0}^{1} h_i h_j R_X[n + i − j]   (2)
       = 2R_X[n] − R_X[n − 1] − R_X[n + 1] = \begin{cases} 1 & n = 0 \\ 0 & \text{otherwise} \end{cases}   (3)

Since μ_Y = 0, the variance of Y_n is Var[Y_n] = E[Y_n^2] = R_Y[0] = 1.

Figure 6: The autocorrelation R_X(τ) and power spectral density S_X(f) for the process X(t) in Quiz 11.5, for (a) W = 10 and (b) W = 1000.

Quiz 11.3

By Theorem 11.8, Y = [Y_{33} Y_{34} Y_{35}]' is a Gaussian random vector since X_n is a Gaussian random process. Moreover, by Theorem 11.5, each Y_n has expected value E[Y_n] = μ_X \sum_{n=−∞}^{∞} h_n = 0. Thus E[Y] = 0. To find the PDF of the Gaussian vector Y, we need to find the covariance matrix C_Y, which equals the correlation matrix R_Y since Y has zero expected value. One way to find R_Y is to observe that R_Y has the Toeplitz structure of Theorem 11.6 and to use Theorem 11.5 to find the autocorrelation function

R_Y[n] = \sum_{i=−∞}^{∞} \sum_{j=−∞}^{∞} h_i h_j R_X[n + i − j].   (1)

Despite the fact that R_X[k] is an impulse, using Equation (1) is surprisingly tedious because we still need to sum over all i and j such that n + i − j = 0.

In this problem, it is simpler to observe that Y = HX where

X = [X_{30} X_{31} X_{32} X_{33} X_{34} X_{35}]'   (2)

and

H = \frac{1}{4}\begin{bmatrix} 1 & 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 & 1 & 1 \end{bmatrix}.   (3)

In this case, following Theorem 11.7, or by directly applying Theorem 5.13 with μ_X = 0 and A = H, we obtain R_Y = H R_X H'. Since R_X[n] = δ_n, R_X = I, the identity matrix. Thus

C_Y = R_Y = HH' = \frac{1}{16}\begin{bmatrix} 4 & 3 & 2 \\ 3 & 4 & 3 \\ 2 & 3 & 4 \end{bmatrix}.   (4)

It follows (very quickly if you use MATLAB for 3 × 3 matrix inversion) that

C_Y^{−1} = 16 \begin{bmatrix} 7/12 & −1/2 & 1/12 \\ −1/2 & 1 & −1/2 \\ 1/12 & −1/2 & 7/12 \end{bmatrix}.   (5)

Thus, the PDF of Y is

f_Y(y) = \frac{1}{(2π)^{3/2}[\det(C_Y)]^{1/2}} \exp\left(−\frac{1}{2} y'C_Y^{−1}y\right).   (6)

A disagreeable amount of algebra will show det(C_Y) = 3/1024 and that the PDF can be "simplified" to

f_Y(y) = \frac{16}{\sqrt{6π^3}} \exp\left(−8\left[\frac{7}{12} y_{33}^2 + y_{34}^2 + \frac{7}{12} y_{35}^2 − y_{33}y_{34} + \frac{1}{6} y_{33}y_{35} − y_{34}y_{35}\right]\right).   (7)

Equation (7) shows that one of the nicest features of the multivariate Gaussian distribution is that y'C_Y^{−1}y is a very concise representation of the cross-terms in the exponent of f_Y(y).
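
As the solution notes, MATLAB makes short work of the matrix algebra; these commands (ours) reproduce Equations (4) and (5):

%Check of Quiz 11.3: C_Y=HH' and its inverse
H=(1/4)*[1 1 1 1 0 0; 0 1 1 1 1 0; 0 0 1 1 1 1];
CY=H*H'          %(1/16)*[4 3 2; 3 4 3; 2 3 4]
CYinv=inv(CY)    %16*[7/12 -1/2 1/12; -1/2 1 -1/2; 1/12 -1/2 7/12]
det(CY)          %3/1024 = 0.0029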

Quiz 11.4

This quiz is solved using Theorem 11.9 for the case of k = 1 and M = 2. In this case, X_n = [X_{n−1} X_n]' and

R_{X_n} = \begin{bmatrix} R_X[0] & R_X[1] \\ R_X[1] & R_X[0] \end{bmatrix} = \begin{bmatrix} 1.1 & 0.9 \\ 0.9 & 1.1 \end{bmatrix}   (1)

and

R_{X_n X_{n+1}} = E\left[\begin{bmatrix} X_{n−1} \\ X_n \end{bmatrix} X_{n+1}\right] = \begin{bmatrix} R_X[2] \\ R_X[1] \end{bmatrix} = \begin{bmatrix} 0.81 \\ 0.9 \end{bmatrix}.   (2)

The MMSE linear first order filter for predicting X_{n+1} at time n is the filter h such that

\overleftarrow{h} = R_{X_n}^{−1} R_{X_n X_{n+1}} = \begin{bmatrix} 1.1 & 0.9 \\ 0.9 & 1.1 \end{bmatrix}^{−1} \begin{bmatrix} 0.81 \\ 0.9 \end{bmatrix} = \frac{1}{400}\begin{bmatrix} 81 \\ 261 \end{bmatrix}.   (3)

It follows that the filter is h = [261/400 81/400]' and the MMSE linear predictor is

\hat{X}_{n+1} = \frac{81}{400} X_{n−1} + \frac{261}{400} X_n.   (4)

To find the mean square error, one approach is to follow the method of Example 11.13 and to directly calculate

e_L^* = E[(X_{n+1} − \hat{X}_{n+1})^2].   (5)

This method is workable for this simple problem but becomes increasingly tedious for higher order filters. Instead, we can derive the mean square error for an arbitrary prediction filter h. Since \hat{X}_{n+1} = \overleftarrow{h}' X_n,

e_L^* = E\left[(X_{n+1} − \overleftarrow{h}' X_n)^2\right]   (6)
      = E\left[(X_{n+1} − \overleftarrow{h}' X_n)(X_{n+1} − X_n' \overleftarrow{h})\right]   (7)

After a bit of algebra, we obtain

e_L^* = R_X[0] − 2\overleftarrow{h}' R_{X_n X_{n+1}} + \overleftarrow{h}' R_{X_n} \overleftarrow{h}   (8)

With the substitution \overleftarrow{h} = R_{X_n}^{−1} R_{X_n X_{n+1}}, we obtain

e_L^* = R_X[0] − R_{X_n X_{n+1}}' R_{X_n}^{−1} R_{X_n X_{n+1}}   (9)
      = R_X[0] − \overleftarrow{h}' R_{X_n X_{n+1}}   (10)

Note that this is essentially the same result as Theorem 9.7 with Y = X_n, X = X_{n+1} and \hat{a} = \overleftarrow{h}. It is noteworthy that the result is derived in a much simpler way in the proof of Theorem 9.7 by using the orthogonality property of the LMSE estimator.

In any case, the mean square error is

e_L^* = R_X[0] − \overleftarrow{h}' R_{X_n X_{n+1}} = 1.1 − \frac{1}{400}\begin{bmatrix} 81 & 261 \end{bmatrix}\begin{bmatrix} 0.81 \\ 0.9 \end{bmatrix} = \frac{506}{1451} = 0.3487.   (11)

Recalling that the blind estimate would yield a mean square error of Var[X] = 1.1, we see that observing X_{n−1} and X_n improves the accuracy of our prediction of X_{n+1}.
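
The predictor and its MSE are two lines of MATLAB; this check is ours:

%Check of Quiz 11.4: MMSE first order predictor and MSE
RXn=[1.1 0.9; 0.9 1.1];
RXnX=[0.81; 0.9];
hrev=RXn\RXnX           %[81/400; 261/400] = [0.2025; 0.6525]
mse=1.1-hrev'*RXnX      %0.3487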

Quiz 11.5

(1) By Theorem 11.13(b), the average power of X(t) is

E[X^2(t)] = \int_{−∞}^{∞} S_X(f)\, df = \int_{−W}^{W} \frac{5}{W}\, df = 10 Watts   (1)

(2) The autocorrelation function is the inverse Fourier transform of S_X(f). Consulting Table 11.1, we note that

S_X(f) = 10 \frac{1}{2W} \operatorname{rect}\left(\frac{f}{2W}\right)   (2)

It follows that the inverse transform of S_X(f) is

R_X(τ) = 10 \operatorname{sinc}(2Wτ) = 10 \frac{\sin(2πWτ)}{2πWτ}   (3)

(3) For W = 10 Hz and W = 1 kHz, graphs of S_X(f) and R_X(τ) appear in Figure 6.

Quiz 11.6

In a sampled system, the discrete time impulse δ[n] has a flat discrete Fourier transform. That is, if R_X[n] = 10δ[n], then

S_X(φ) = \sum_{n=−∞}^{∞} 10δ[n] e^{−j2πφn} = 10   (1)

Thus, R_X[n] = 10δ[n]. (This quiz is really lame!)

Quiz 11.7

Since Y(t) = X(t − t_0),

R_{XY}(t, τ) = E[X(t)Y(t + τ)] = E[X(t)X(t + τ − t_0)] = R_X(τ − t_0)   (1)

We see that R_{XY}(t, τ) = R_{XY}(τ) = R_X(τ − t_0). From Table 11.1, we recall the property that g(τ − τ_0) has Fourier transform G(f) e^{−j2πfτ_0}. Thus, taking g(τ) = R_X(τ), the Fourier transform of R_{XY}(τ) = R_X(τ − t_0) is

S_{XY}(f) = S_X(f) e^{−j2πf t_0}.   (2)

Quiz 11.8

We solve this quiz using Theorem 11.17. First we need some preliminary facts. Let a_0 = 5,000 so that

R_X(τ) = \frac{1}{a_0} a_0 e^{−a_0|τ|}.   (1)

Consulting with the Fourier transforms in Table 11.1, we see that

S_X(f) = \frac{1}{a_0} \frac{2a_0^2}{a_0^2 + (2πf)^2} = \frac{2a_0}{a_0^2 + (2πf)^2}   (2)

The RC filter has impulse response h(t) = a_1 e^{−a_1 t} u(t), where u(t) is the unit step function and a_1 = 1/RC, where RC = 10^{−4} is the filter time constant. From Table 11.1,

H(f) = \frac{a_1}{a_1 + j2πf}   (3)

(1) By Theorem 11.17,

S_{XY}(f) = H(f) S_X(f) = \frac{2a_0 a_1}{[a_1 + j2πf]\left[a_0^2 + (2πf)^2\right]}.   (4)

(2) Again by Theorem 11.17,

S_Y(f) = H^*(f) S_{XY}(f) = |H(f)|^2 S_X(f).   (5)

Note that

|H(f)|^2 = H(f) H^*(f) = \frac{a_1}{a_1 + j2πf} \cdot \frac{a_1}{a_1 − j2πf} = \frac{a_1^2}{a_1^2 + (2πf)^2}   (6)

Thus,

S_Y(f) = |H(f)|^2 S_X(f) = \frac{2a_0 a_1^2}{\left[a_1^2 + (2πf)^2\right]\left[a_0^2 + (2πf)^2\right]}   (7)

(3) To find the average power at the filter output, we can either use basic calculus and calculate \int_{−∞}^{∞} S_Y(f)\, df directly or we can find R_Y(τ) as an inverse transform of S_Y(f). Using partial fractions and the Fourier transform table, the latter method is actually less algebra. In particular, some algebra will show that

S_Y(f) = \frac{K_0}{a_0^2 + (2πf)^2} + \frac{K_1}{a_1^2 + (2πf)^2}   (8)

where

K_0 = \frac{2a_0 a_1^2}{a_1^2 − a_0^2},   K_1 = \frac{−2a_0 a_1^2}{a_1^2 − a_0^2}.   (9)

Thus,

S_Y(f) = \frac{K_0}{2a_0^2} \frac{2a_0^2}{a_0^2 + (2πf)^2} + \frac{K_1}{2a_1^2} \frac{2a_1^2}{a_1^2 + (2πf)^2}.   (10)

Consulting with Table 11.1, we see that

R_Y(τ) = \frac{K_0}{2a_0^2} a_0 e^{−a_0|τ|} + \frac{K_1}{2a_1^2} a_1 e^{−a_1|τ|}   (11)

Substituting the values of K_0 and K_1, we obtain

R_Y(τ) = \frac{a_1^2 e^{−a_0|τ|} − a_0 a_1 e^{−a_1|τ|}}{a_1^2 − a_0^2}.   (12)

The average power of the Y(t) process is

R_Y(0) = \frac{a_1}{a_1 + a_0} = \frac{2}{3}.   (13)

Note that the input signal has average power R_X(0) = 1. Since the RC filter has a 3dB bandwidth of 10,000 rad/sec and the signal X(t) has most of its signal energy below 5,000 rad/sec, the output signal has almost as much power as the input.
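
The closed form R_Y(0) = 2/3 can be confirmed by integrating S_Y(f) numerically. This fragment is ours; integral assumes a recent MATLAB release:

%Check of Quiz 11.8: average output power
a0=5000; a1=1e4;
SY=@(f) 2*a0*a1^2./((a1^2+(2*pi*f).^2).*(a0^2+(2*pi*f).^2));
integral(SY,-inf,inf)   %approximately 2/3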

Quiz 11.9

This quiz implements an example of Equations (11.146) and (11.147) for a system in which we filter Y(t) = X(t) + N(t) to produce an optimal linear estimate of X(t). The solution to this quiz is just to find the filter \hat{H}(f) using Equation (11.146) and to calculate the mean square error e_L^* using Equation (11.147).

Comment: Since the text omitted the derivations of Equations (11.146) and (11.147), we note that Example 10.24 showed that

R_Y(τ) = R_X(τ) + R_N(τ),   R_{YX}(τ) = R_X(τ).   (1)

Taking Fourier transforms, it follows that

S_Y(f) = S_X(f) + S_N(f),   S_{YX}(f) = S_X(f).   (2)

Now we can go on to the quiz, at peace with the derivations.

(1) Since μ_N = 0, R_N(0) = Var[N] = 1. This implies

R_N(0) = \int_{−∞}^{∞} S_N(f)\, df = \int_{−B}^{B} N_0\, df = 2N_0 B   (3)

Thus N_0 = 1/(2B). Because the noise process N(t) has constant power R_N(0) = 1, decreasing the single-sided bandwidth B increases the power spectral density of the noise over frequencies |f| < B.

(2) Since R_X(τ) = sinc(2Wτ), where W = 5,000 Hz, we see from Table 11.1 that

S_X(f) = \frac{1}{10^4} \operatorname{rect}\left(\frac{f}{10^4}\right).   (4)

The noise power spectral density can be written as

S_N(f) = N_0 \operatorname{rect}\left(\frac{f}{2B}\right) = \frac{1}{2B} \operatorname{rect}\left(\frac{f}{2B}\right).   (5)

From Equation (11.146), the optimal filter is

\hat{H}(f) = \frac{S_X(f)}{S_X(f) + S_N(f)} = \frac{\frac{1}{10^4}\operatorname{rect}\left(\frac{f}{10^4}\right)}{\frac{1}{10^4}\operatorname{rect}\left(\frac{f}{10^4}\right) + \frac{1}{2B}\operatorname{rect}\left(\frac{f}{2B}\right)}.   (6)

(3) We produce the output \hat{X}(t) by passing the noisy signal Y(t) through the filter \hat{H}(f). From Equation (11.147), the mean square error of the estimate is

e_L^* = \int_{−∞}^{∞} \frac{S_X(f) S_N(f)}{S_X(f) + S_N(f)}\, df   (7)
      = \int_{−∞}^{∞} \frac{\frac{1}{10^4}\operatorname{rect}\left(\frac{f}{10^4}\right) \frac{1}{2B}\operatorname{rect}\left(\frac{f}{2B}\right)}{\frac{1}{10^4}\operatorname{rect}\left(\frac{f}{10^4}\right) + \frac{1}{2B}\operatorname{rect}\left(\frac{f}{2B}\right)}\, df.   (8)

To evaluate the MSE e_L^*, we need to know whether B ≤ W. Since the problem asks us to find the largest possible B, let's suppose B ≤ W. We can go back and consider the case B > W later. When B ≤ W, the MSE is

e_L^* = \int_{−B}^{B} \frac{\frac{1}{10^4}\frac{1}{2B}}{\frac{1}{10^4} + \frac{1}{2B}}\, df = \frac{\frac{1}{10^4}}{\frac{1}{10^4} + \frac{1}{2B}} = \frac{1}{1 + \frac{5{,}000}{B}}   (9)

To obtain MSE e_L^* ≤ 0.05 requires B ≤ 5,000/19 = 263.16 Hz.

Although this completes the solution to the quiz, what is happening may not be obvious. The noise power is always Var[N] = 1 Watt, for all values of B. As B is decreased, the PSD S_N(f) becomes increasingly tall, but only over a bandwidth B that is decreasing. Thus as B decreases, the filter \hat{H}(f) makes an increasingly deep and narrow notch at frequencies |f| ≤ B. Two examples of the filter \hat{H}(f) are shown in Figure 7. As B shrinks, the filter suppresses less of the signal of X(t). The result is that the MSE goes down.

Finally, we note that we can choose B very large and also achieve MSE e_L^* = 0.05. In particular, when B > W = 5000, S_N(f) = 1/(2B) over frequencies |f| < W. In this case, the Wiener filter \hat{H}(f) is an ideal (flat) lowpass filter

\hat{H}(f) = \begin{cases} \frac{\frac{1}{10^4}}{\frac{1}{10^4} + \frac{1}{2B}} & |f| < 5{,}000, \\ 0 & \text{otherwise.} \end{cases}   (10)

Thus increasing B spreads the constant 1 watt of power of N(t) over more bandwidth. The Wiener filter removes the noise that is outside the band of the desired signal. The mean square error is

e_L^* = \int_{−5000}^{5000} \frac{\frac{1}{10^4}\frac{1}{2B}}{\frac{1}{10^4} + \frac{1}{2B}}\, df = \frac{\frac{1}{2B}}{\frac{1}{10^4} + \frac{1}{2B}} = \frac{1}{\frac{B}{5000} + 1}   (11)

In this case, B ≥ 9.5 × 10^4 guarantees e_L^* ≤ 0.05.
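
To see how the MSE varies with B in the B ≤ W regime, Equation (9) can be plotted directly. This fragment is ours:

%Check of Quiz 11.9: e_L* as a function of B for B<=W
B=50:50:5000;
eL=1./(1+5000./B);
plot(B,eL); xlabel('B'); ylabel('e_L^*');
max(B(eL<=0.05))   %largest grid value meeting the target: 250 Hz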

Quiz 11.10

It is fairly straightforward to find S_X(φ) and S_Y(φ). The only thing to keep in mind is to use fftc to transform the autocorrelation R_X[n] into the power spectral density S_X(φ). The following MATLAB program generates and plots the functions shown in Figure 8.

Figure 7: The Wiener filter \hat{H}(f) of Quiz 11.9, for B = 500 (left) and B = 2500 (right).

%mquiz11.m
N=32;
rx=[2 4 2]; SX=fftc(rx,N); %autocorrelation and PSD
stem(0:N-1,abs(SX));
xlabel('n');ylabel('S_X(n/N)');
h2=0.5*[1 1]; H2=fft(h2,N); %impulse/filter response: M=2
SY2=SX.*((abs(H2)).^2);
figure; stem(0:N-1,abs(SY2)); %PSD of Y for M=2
xlabel('n');ylabel('S_{Y_2}(n/N)');
h10=0.1*ones(1,10); H10=fft(h10,N); %impulse/filter response: M=10
SY10=SX.*((abs(H10)).^2);
figure; stem(0:N-1,abs(SY10)); %PSD of Y for M=10
xlabel('n');ylabel('S_{Y_{10}}(n/N)');

Relative to M = 2, when M = 10 the filter H(φ) filters out almost all of the high frequency components of X(t). In the context of Example 11.26, the low pass moving average filter for M = 10 removes the high frequency components and results in a filter output that varies very slowly.

As an aside, note that the vectors SX, SY2 and SY10 in mquiz11 should all be real-valued vectors. However, the finite numerical precision of MATLAB results in tiny imaginary parts. Although these imaginary parts have no computational significance, they tend to confuse the stem function. Hence, we generate stem plots of the magnitude of each power spectral density.

Figure 8: For Quiz 11.10, graphs of S_X(n/N), S_{Y_2}(n/N) for M = 2, and S_{Y_{10}}(n/N) for M = 10, using an N = 32 point DFT.

Quiz Solutions – Chapter 12

Quiz 12.1

The system has two states depending on whether the previous packet was received in error. From the problem statement, we are given the conditional probabilities

P[X_{n+1} = 0 | X_n = 0] = 0.99,   P[X_{n+1} = 1 | X_n = 1] = 0.9   (1)

Since each X_n must be either 0 or 1, we can conclude that

P[X_{n+1} = 1 | X_n = 0] = 0.01,   P[X_{n+1} = 0 | X_n = 1] = 0.1   (2)

These conditional probabilities correspond to the following Markov chain and transition matrix:

[Chain: states 0 and 1, with self-loop probabilities 0.99 and 0.9, transition 0 → 1 with probability 0.01 and transition 1 → 0 with probability 0.1.]

P = \begin{bmatrix} 0.99 & 0.01 \\ 0.10 & 0.90 \end{bmatrix}   (3)

Quiz 12.2

From the problem statement, the Markov chain and the transition matrix are

[Chain: states 0, 1, 2; transitions 0 → 1 (0.6), 1 → 0 (0.2), 1 → 2 (0.2), 2 → 1 (0.6); self-loop probabilities 0.4, 0.6, 0.4.]

P = \begin{bmatrix} 0.4 & 0.6 & 0 \\ 0.2 & 0.6 & 0.2 \\ 0 & 0.6 & 0.4 \end{bmatrix}   (1)

The eigenvalues of P are

λ_1 = 0,   λ_2 = 0.4,   λ_3 = 1   (2)

We can diagonalize P into

P = S^{−1} D S = \begin{bmatrix} −0.6 & 0.5 & 1 \\ 0.4 & 0 & 1 \\ −0.6 & −0.5 & 1 \end{bmatrix} \begin{bmatrix} λ_1 & 0 & 0 \\ 0 & λ_2 & 0 \\ 0 & 0 & λ_3 \end{bmatrix} \begin{bmatrix} −0.5 & 1 & −0.5 \\ 1 & 0 & −1 \\ 0.2 & 0.6 & 0.2 \end{bmatrix}   (3)

where s_i, the ith row of S, is the left eigenvector of P satisfying s_i P = λ_i s_i. Algebra will verify that the n-step transition matrix is

P^n = S^{−1} D^n S = \begin{bmatrix} 0.2 & 0.6 & 0.2 \\ 0.2 & 0.6 & 0.2 \\ 0.2 & 0.6 & 0.2 \end{bmatrix} + (0.4)^n \begin{bmatrix} 0.5 & 0 & −0.5 \\ 0 & 0 & 0 \\ −0.5 & 0 & 0.5 \end{bmatrix}   (4)
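
Equation (4) is easy to verify against direct matrix powers; this check is ours:

%Check of Quiz 12.2: spectral formula for P^n
P=[0.4 0.6 0; 0.2 0.6 0.2; 0 0.6 0.4];
n=5;
Pn=ones(3,1)*[0.2 0.6 0.2]+(0.4)^n*[0.5 0 -0.5; 0 0 0; -0.5 0 0.5];
max(max(abs(Pn-P^n)))   %zero to within numerical precision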

Quiz 12.3

The Markov chain describing the factory status and the corresponding state transition matrix are

[Chain: state 0 has self-loop probability 0.9 and moves to state 1 with probability 0.1; state 1 moves to state 2 with probability 1; state 2 moves to state 0 with probability 1.]

P = \begin{bmatrix} 0.9 & 0.1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}   (1)

With π = [π_0 π_1 π_2], the system of equations π = πP yields π_1 = 0.1π_0 and π_2 = π_1. This implies

π_0 + π_1 + π_2 = π_0(1 + 0.1 + 0.1) = 1   (2)

It follows that the limiting state probabilities are

π_0 = 5/6,   π_1 = 1/12,   π_2 = 1/12.   (3)
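
The stationary probabilities can also be found numerically by appending the normalization constraint to π(P − I) = 0. This fragment is ours:

%Check of Quiz 12.3: limiting state probabilities
P=[0.9 0.1 0; 0 0 1; 1 0 0];
A=[(P'-eye(3)); ones(1,3)];   %stationarity plus normalization
piv=A\[zeros(3,1); 1]         %[5/6; 1/12; 1/12]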

Quiz 12.4

The communicating classes are

C_1 = {0, 1},   C_2 = {2, 3},   C_3 = {4, 5, 6}   (1)

The states in C_1 and C_3 are aperiodic. The states in C_2 have period 2. Once the system enters a state in C_1, the class C_1 is never left. Thus the states in C_1 are recurrent. That is, C_1 is a recurrent class. Similarly, the states in C_3 are recurrent. On the other hand, the states in C_2 are transient. Once the system exits C_2, the states in C_2 are never reentered.

Quiz 12.5

At any time t, the state n can take on the values 0, 1, 2, .... The state transition probabilities are

P_{n−1,n} = P[K > n | K > n − 1] = \frac{P[K > n]}{P[K > n − 1]}   (1)

P_{n−1,0} = P[K = n | K > n − 1] = \frac{P[K = n]}{P[K > n − 1]}   (2)

The Markov chain resembles

[Chain: states 0, 1, 2, 3, 4, ...; from each state n − 1 the system moves up to state n or resets to state 0, with the reset arcs labeled P[K = 1], P[K = 2], P[K = 3], ....]

The stationary probabilities satisfy

π_0 = π_0 P[K = 1] + π_1,   (3)
π_1 = π_0 P[K = 2] + π_2,   (4)
  ⋮
π_{k−1} = π_0 P[K = k] + π_k,   k = 1, 2, \ldots   (5)

From Equation (3), we obtain

π_1 = π_0(1 − P[K = 1]) = π_0 P[K > 1]   (6)

Similarly, Equation (4) implies

π_2 = π_1 − π_0 P[K = 2] = π_0(P[K > 1] − P[K = 2]) = π_0 P[K > 2]   (7)

This suggests that π_k = π_0 P[K > k]. We verify this pattern by showing that π_k = π_0 P[K > k] satisfies Equation (5):

π_0 P[K > k − 1] = π_0 P[K = k] + π_0 P[K > k].   (8)

When we apply \sum_{k=0}^{∞} π_k = 1, we obtain π_0 \sum_{k=0}^{∞} P[K > k] = 1. From Problem 2.5.11, we recall that \sum_{k=0}^{∞} P[K > k] = E[K]. This implies

π_n = \frac{P[K > n]}{E[K]}   (9)

This Markov chain models repeated random countdowns. The system state is the time until the counter expires. When the counter expires, the system is in state 0, and we randomly reset the counter to a new value K = k and then we count down k units of time. Since we spend one unit of time in each state, including state 0, we have k − 1 units of time left after the state 0 counter reset. If we have a random variable W such that the PMF of W satisfies P_W(n) = π_n, then W has a discrete PMF representing the remaining time of the counter at a time in the distant future.

Quiz 12.6

(1) By inspection, the number of transitions needed to return to state 0 is always a multiple of 2. Thus the period of state 0 is d = 2.

(2) To find the stationary probabilities, we solve the system of equations π = πP and \sum_{i=0}^{3} π_i = 1:

π_0 = (3/4)π_1 + (1/4)π_3   (1)
π_1 = (1/4)π_0 + (1/4)π_2   (2)
π_2 = (1/4)π_1 + (3/4)π_3   (3)
1 = π_0 + π_1 + π_2 + π_3   (4)

Solving the second and third equations for π_2 and π_3 yields

π_2 = 4π_1 − π_0,   π_3 = (4/3)π_2 − (1/3)π_1 = 5π_1 − (4/3)π_0   (5)

Substituting π_3 back into the first equation yields

π_0 = (3/4)π_1 + (1/4)π_3 = (3/4)π_1 + (5/4)π_1 − (1/3)π_0   (6)

This implies π_1 = (2/3)π_0. It follows from the first and second equations that π_2 = (5/3)π_0 and π_3 = 2π_0. Lastly, we choose π_0 so the state probabilities sum to 1:

1 = π_0 + π_1 + π_2 + π_3 = π_0\left(1 + \frac{2}{3} + \frac{5}{3} + 2\right) = \frac{16}{3}π_0   (7)

It follows that the state probabilities are

π_0 = \frac{3}{16},   π_1 = \frac{2}{16},   π_2 = \frac{5}{16},   π_3 = \frac{6}{16}   (8)

(3) Since the system starts in state 0 at time 0, we can use Theorem 12.14 to find the limiting probability that the system is in state 0 at time nd:

\lim_{n→∞} P_{00}(nd) = dπ_0 = \frac{3}{8}   (9)
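
With the transition matrix implied by the balance equations (our reconstruction, since the chain diagram is not reproduced here), the stationary probabilities check out numerically:

%Check of Quiz 12.6: pi=piP for the period 2 chain
P=[0 1/4 0 3/4; 3/4 0 1/4 0; 0 1/4 0 3/4; 1/4 0 3/4 0];
piv=[3 2 5 6]/16;
max(abs(piv*P-piv))   %zero: piv is stationary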

Quiz 12.7

The Markov chain has the same structure as that in Example 12.22. The only difference is the modified transition rates:

[Chain: state 0 moves to state 1 with probability 1; for n ≥ 1, state n moves to state n + 1 with probability (n/(n+1))^α and returns to state 0 with probability 1 − (n/(n+1))^α.]

The event T_{00} > n occurs if the system reaches state n before returning to state 0, which occurs with probability

P[T_{00} > n] = 1 × \left(\frac{1}{2}\right)^α × \left(\frac{2}{3}\right)^α × \cdots × \left(\frac{n−1}{n}\right)^α = \left(\frac{1}{n}\right)^α.   (1)

Thus the CDF of T_{00} satisfies F_{T_{00}}(n) = 1 − P[T_{00} > n] = 1 − 1/n^α. To determine whether state 0 is recurrent, we observe that for all α > 0

P[V_{00}] = \lim_{n→∞} F_{T_{00}}(n) = \lim_{n→∞} \left(1 − \frac{1}{n^α}\right) = 1.   (2)

Thus state 0 is recurrent for all α > 0. Since the chain has only one communicating class, all states are recurrent. (We also note that if α = 0, then all states are transient.)

To determine whether the chain is null recurrent or positive recurrent, we need to calculate E[T_{00}]. In Example 12.24, we did this by deriving the PMF P_{T_{00}}(n). In this problem, it will be simpler to use the result of Problem 2.5.11, which says that \sum_{k=0}^{∞} P[K > k] = E[K] for any non-negative integer-valued random variable K. Applying this result, the expected time to return to state 0 is

E[T_{00}] = \sum_{n=0}^{∞} P[T_{00} > n] = 1 + \sum_{n=1}^{∞} \frac{1}{n^α}.   (3)

For 0 < α ≤ 1, 1/n^α ≥ 1/n and it follows that

E[T_{00}] ≥ 1 + \sum_{n=1}^{∞} \frac{1}{n} = ∞.   (4)

We conclude that the Markov chain is null recurrent for 0 < α ≤ 1. On the other hand, for α > 1,

E[T_{00}] = 2 + \sum_{n=2}^{∞} \frac{1}{n^α}.   (5)

Note that for all n ≥ 2,

\frac{1}{n^α} ≤ \int_{n−1}^{n} \frac{dx}{x^α}   (6)

This implies

E[T_{00}] ≤ 2 + \sum_{n=2}^{∞} \int_{n−1}^{n} \frac{dx}{x^α}   (7)
         = 2 + \int_{1}^{∞} \frac{dx}{x^α}   (8)
         = 2 + \left.\frac{x^{−α+1}}{−α+1}\right|_{1}^{∞} = 2 + \frac{1}{α−1} < ∞   (9)

Thus for all α > 1, the Markov chain is positive recurrent.

Quiz 12.8

The number of customers in the "friendly" store is given by the Markov chain

[Chain: states 0, 1, ..., i, i + 1, ...; each state moves up by one with probability p; each state i ≥ 1 moves down by one with probability (1 − p)q; otherwise the state is unchanged.]

In the above chain, we note that (1 − p)q is the probability that no new customer arrives, an existing customer gets one unit of service, and then departs the store.

By applying Theorem 12.13 with state space partitioned between S = {0, 1, ..., i} and S′ = {i + 1, i + 2, ...}, we see that for any state i ≥ 0,

π_i p = π_{i+1}(1 − p)q.   (1)

This implies

π_{i+1} = \frac{p}{(1 − p)q} π_i.   (2)

Since Equation (2) holds for i = 0, 1, ..., we have that π_i = π_0 α^i where

α = \frac{p}{(1 − p)q}.   (3)

Requiring the state probabilities to sum to 1, we have that for α < 1,

\sum_{i=0}^{∞} π_i = π_0 \sum_{i=0}^{∞} α^i = \frac{π_0}{1 − α} = 1.   (4)

Thus for α < 1, the limiting state probabilities are

π_i = (1 − α)α^i,   i = 0, 1, 2, \ldots   (5)

In addition, for α ≥ 1 or, equivalently, p ≥ q/(1 + q), the limiting state probabilities do not exist.

Quiz 12.9

The continuous time Markov chain describing the processor is

[Chain: states 0 through 4; arrivals at rate 2 (n → n + 1), task completions at rate 3 (n → n − 1), and reboots at rate 0.1 (n → 0 for n ≥ 1).]

Note that q_{10} = 3.1 since the task completes at rate 3 per msec and the processor reboots at rate 0.1 per msec, and the rate to state 0 is the sum of those two rates. From the Markov chain, we obtain the following useful equations for the stationary distribution.

5.1 p_1 = 2p_0 + 3p_2
5.1 p_2 = 2p_1 + 3p_3
5.1 p_3 = 2p_2 + 3p_4
3.1 p_4 = 2p_3

We can solve these equations by working backward and solving for p_4 in terms of p_3, p_3 in terms of p_2, and so on, yielding

p_4 = \frac{20}{31} p_3,   p_3 = \frac{620}{981} p_2,   p_2 = \frac{19620}{31431} p_1,   p_1 = \frac{628{,}620}{1{,}014{,}381} p_0   (1)

Applying p_0 + p_1 + p_2 + p_3 + p_4 = 1 yields p_0 = 1,014,381/2,443,401 and the stationary probabilities are

p_0 = 0.4151,   p_1 = 0.2573,   p_2 = 0.1606,   p_3 = 0.1015,   p_4 = 0.0655   (2)
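
The stationary distribution is also quickly found by solving p'Q = 0 with the normalization constraint. The generator Q below is our reconstruction from the chain:

%Check of Quiz 12.9: stationary probabilities of the processor chain
Q=[-2 2 0 0 0; 3.1 -5.1 2 0 0; 0.1 3 -5.1 2 0; 0.1 0 3 -5.1 2; 0.1 0 0 3 -3.1];
A=[Q'; ones(1,5)];
p=A\[zeros(5,1); 1]   %[0.4151; 0.2573; 0.1606; 0.1015; 0.0655]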

Quiz 12.10

The M/M/c/∞ queue has Markov chain

[Chain: states 0, 1, ..., c, c + 1, ...; arrival rate λ from every state; departure rate nμ from state n for n ≤ c and rate cμ for n > c.]

From the Markov chain, the stationary probabilities must satisfy

p_n = \begin{cases} (ρ/n)\, p_{n−1} & n = 1, 2, \ldots, c \\ (ρ/c)\, p_{n−1} & n = c + 1, c + 2, \ldots \end{cases}   (1)

It is straightforward to show that this implies

p_n = \begin{cases} p_0\, ρ^n/n! & n = 1, 2, \ldots, c \\ p_0\, (ρ/c)^{n−c}\, ρ^c/c! & n = c + 1, c + 2, \ldots \end{cases}   (2)

The requirement that \sum_{n=0}^{∞} p_n = 1 yields

p_0 = \left(\sum_{n=0}^{c} \frac{ρ^n}{n!} + \frac{ρ^c}{c!} \frac{ρ/c}{1 − ρ/c}\right)^{−1}   (3)
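
Equation (3) is simple to evaluate for a specific load; the values of ρ and c below are our example, not from the quiz:

%Sketch for Quiz 12.10: evaluate p0 for rho=5, c=8 (requires rho/c<1)
rho=5; c=8;
n=0:c;
p0=1/(sum(rho.^n./factorial(n))+(rho^c/factorial(c))*(rho/c)/(1-rho/c))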

bignomialpmf

y=bignomialpmf(n,p,x) Input: n and p are the parameters of a binomial (n, p) random variable X , x is a vector of possible sample values Output: y is a vector with y(i) = PX (x(i)). Comment: This function should always produce the same output as binomialpmf(n,p,x); however, the function calculates the logarithm of the probability and thismay lead to small numerical innaccuracy.

function pmf=bignomialpmf(n,p,x) %binomial(n,p) rv X, %input = vector x %output= vector pmf: pmf(i)=Prob[X=x(i)] k=(0:n-1)’; a=log((p/(1-p))*((n-k)./(k+1))); L0=n*log(1-p); L=[L0; L0+cumsum(a)]; pb=exp(L); % pb=[P[X=0] ... P[X=n]]ˆt x=x(:); okx =(x>=0).*(x<=n).*(x==floor(x)); x=okx.*x; pmf=okx.*pb(x+1);

binomialcdf

y=binomialcdf(n,p,x) Input: n and p are the parameters of a binomial (n, p) random variable X , x is a vector of possible sample values Output: y is a vector with y(i) = FX (x(i)).

function cdf=binomialcdf(n,p,x) %Usage: cdf=binomialcdf(n,p,x) %For binomial(n,p) rv X, %and input vector x, output is %vector cdf: cdf(i)=P[X<=x(i)] x=floor(x(:)); %for noninteger x(i) allx=0:max(x); %calculate cdf from 0 to max(x) allcdf=cumsum(binomialpmf(n,p,allx)); okx=(x>=0); %x(i) < 0 are zero-prob values x=(okx.*x); %set zero-prob x(i)=0 cdf= okx.*allcdf(x+1); %zero for zero-prob x(i)

3

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

sigmaY.(2*rho*nx. okx =(x>=0). f=f/(2*pi*sigmax*sigmay*sqrt(1-rhoˆ2))..m) % m binomial(n.y.1).p) samples r=rand(m. AR.sigmaY.rho. Output: f the value of the bivariate Gaussian PDF at x. P[X=n]]ˆt x=x(:). x=count(cdf. % pb=[P[X=0] . p) random variable X .p.com Phone:5017621195 binomialpmf y=binomialpmf(n. bivariategausspdf function f=bivariategausspdf(muX. Input: Scalar parameters muX.p) rv X. cdf=binomialcdf(n. binomialrv x=binomialrv(n. f=exp(-((nx.sigmaY.rho) PDF nx=(x-muX)/sigmaX.rho of the bivariate Gaussian PDF.sigmaX.sigmaY. end i=0:n-1.Name:joey iwatsuru Email:joeyiwat@yahoo..muY.p. %input = vector x %output= vector pmf: pmf(i)=Prob[X=x(i)] if p<0.ˆ2) +(ny.rho. x=okx. m is a positive integer Output: x is a vector of m independent samples of random variable X function x=binomialrv(n. pb=((1-pp)ˆn)*cumprod([1 ip]).*ny))/(2*(1-rhoˆ2))).sigmaX.sigmaX.ˆ2) .x. us (United States) Zip Code:71901 .x) %binomial(n.sigmaX.x.*pb(x+1). if pp < p pb=fliplr(pb).p.p.x) Input: n and p are the parameters of a binomial (n.*x.muY.y) %Usage: f=bivariategausspdf(muX.0:n). pmf=okx. ip= ((n-i). ny=(y-muY)/sigmaY.5 pp=p. scalars x and y.r).y) %Evaluate the bivariate Gaussian (muX.p. 4 Address:104 pine meadows loop. end pb=pb(:).m) Input: n and p are the parameters of a binomial random variable X . function pmf=binomialpmf(n. else pp=1-p.muY. hot springs./(i+1))*(pp/(1-pp)).*(x<=n). x is a vector of possible sample values Output: y is a vector with y(i) = PX (x(i)).muY.*(x==floor(x)).

%allcdf = cdf values from 0 to max(x) allcdf=cumsum(duniformpmf(k.l. x is a vector of possible sample values Output: y is a vector with y(i) = PX (x(i)).*(x<=l). pmf=pmf(:)/(l-k+1).l. x is a vector of possible sample values Output: y is a vector with y(i) = FX (x(i)).1).Name:joey iwatsuru Email:joeyiwat@yahoo.*x). AR. output is % vector cdf: cdf(i)=Prob[X<=x(i)] x=floor(x(:)).m) Input: k and l are the parameters of a discrete uniform (k.allx)). us (United States) Zip Code:71901 .l. m is a positive integer Output: x is a vector of m independent samples of random variable X function x=duniformrv(k. l) random variable X .x) %discrete uniform(k. %input = vector x %output= vector pmf: pmf(i)=Prob[X=x(i)] pmf= (x>=k).k:l). %for noninteger x_i allx=k:max(x). %x_i < k are zero prob values okx=(x>=k).*(x==floor(x)). duniformrv x=duniformrv(k.l) random variable r=rand(m. hot springs. function pmf=duniformpmf(k. cdf=duniformcdf(k.x) % For discrete uniform (k.r).l.l. x=k+count(cdf. %x(i)=0 for zero prob x(i) cdf= okx. 5 Address:104 pine meadows loop.x) Input: k and l are the parameters of a discrete uniform (k. l) random variable X .l. l) random variable X .l.l.com Phone:5017621195 duniformcdf y=duniformcdf(k.x) Input: k and l are the parameters of a discrete uniform (k. function cdf=duniformcdf(k.l.*allcdf(x-k+1).x) %Usage: cdf=duniformcdf(k.l) rv X % and input vector x.l) rv X. %set zero prob x(i)=k x=((1-okx)*k)+(okx.m) %returns m samples of a discrete %uniform (k. duniformpmf y=duniformpmf(k.

x) F=1. *(x. AR.m..n). the blocking probability of the queue function pb=erlangb(rho.x) f=((lambdaˆn)/factorial(n)).ˆ(n-1)).m) Input: n and lambda are the parameters of an Erlang random variable X . function F=exponentialcdf(lambda. %Usage: pb=erlangb(rho. vector x Output: Vector y such that yi = f X (xi ) = λn xin−1 e−λxi /(n − 1)!. function f=erlangpdf(n.x) Input: n and lambda are the parameters of an Erlang random variable X . x=sum(reshape(y.c) Input: Offered load rho (ρ = λ/µ). function F=erlangcdf(n.0-poissoncdf(lambda*x.x) Input: n and lambda are the parameters of an Erlang random variable X .*exp(-lambda*x).m) y=exponentialrv(lambda. exponentialcdf y=exponentialcdf(lambda.lambda. pb=pn(c+1)/sum(pn).com Phone:5017621195 erlangb pb=erlangb(rho.lambda.c) %returns the Erlang-B blocking %probability for sn M/M/c/c %queue with load rho pn=exp(-rho)*poissonpmf(rho.lambda.lambda.lambda.c).0:c). 6 Address:104 pine meadows loop. vector x Output: Vector y such that yi = FX (xi ) = 1 − e−λxi .m*n). vector x Output: Vector y such that yi = FX (xi ).n-1). integer m Output: Length m vector x such that each xi is a sample of X function x=erlangrv(n. and the number of servers c of an M/M/c/c queue. erlangrv x=erlangrv(n.0-exp(-lambda*x).x) F=1. us (United States) Zip Code:71901 . erlangpdf y=erlangpdf(n. Output: pb.x) Input: lambda is the parameter of an exponential random variable X .Name:joey iwatsuru Email:joeyiwat@yahoo.2)..lambda. hot springs. erlangcdf y=erlangcdf(n.

x) % finite random variable X: % vector sx of sample space % elements {sx(1). vy=finitevar(SY. integer m Output: Length m vector x such that each xi is a sample of X function x=exponentialrv(lambda.*SY. px is the corresponding probability assignment. .SY. 7 Address:104 pine meadows loop.Name:joey iwatsuru Email:joeyiwat@yahoo.} % vector px of probabilities % px(i)=P[X=sx(i)] % Output is the vector % cdf: cdf(i)=P[X=x(i)] cdf=[].m) x=-(1/lambda)*log(1-rand(m.SY. hot springs. Output: rho. exponentialrv x=exponentialrv(lambda.PXY).PXY) %Calculate the correlation coefficient rho of %finite random variables X and Y ex=finiteexp(SX.PXY).m) Input: lambda is the parameter of an exponential random variable X . for i=1:length(x) pxi= sum(p(find(s<=x(i)))).com Phone:5017621195 exponentialpdf y=exponentialpdf(lambda..PXY) Input: Grids SX. vx=finitevar(SX.PXY). us (United States) Zip Code:71901 . AR.x) Input: sx is the range of a ﬁnite random variable X . function cdf=finitecdf(s.PXY). rho=(R-ex*ey)/sqrt(vx*vy).p. vector x Output: Vector y such that yi = f X (xi ) = λe−λxi . cdf=[cdf. finitecdf y=finitecdf(sx.p. ey=finiteexp(SY.SY. end finitecoeff rho=finitecoeff(SX. the correlation coefﬁcient of X and Y function rho=finitecoeff(SX.sx(2). SY and probability grid PXY describing the ﬁnite random variables X and Y . f=f.x) f=lambda*exp(-lambda*x). function f=exponentialpdf(lambda..x) Input: lambda is the parameter of an exponential random variable X . %Usage: rho=finitecoeff(SX.1)). pxi].PXY). x is a vector of possible sample values Output: y is a vector with y(i) = FX (x(i)).*(x>=0).PXY). R=finiteexp(SX.

x=s(1+count(cdf.px.*(px(:))).px) Input: Probability vector px. SY.p.x) % finite random variable X: % vector sx of sample space % elements {sx(1).PXY).p) rv %s=s(:). function covxy=finitecov(SX.SY. ey=finiteexp(SY. 8 Address:104 pine meadows loop.PXY).*SY. px is the corresponding probability assignment. function pmf=finitepmf(sx. AR.PXY) %returns the covariance of %finite random variables X and Y %given by grids SX.sx(2). for i=1:length(x) pmf(i)= sum(px(find(sx==x(i)))).r)).p. %Usage: ex=finiteexp(sx.PXY) Input: Grids SX. Output: covxy..px) %returns the expected value E[X] %of finite random variable X described %by samples sx and probabilities px ex=sum((sx(:))..Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195 finitecov covxy=finitecov(SX. Output: ex.p. the expected value E[X ]. vector of samples sx describing random variable X . . x is a vector of possible sample values Output: y is a vector with y(i) = P[X = x(i)].SY. r=rand(m. %Usage: cxy=finitecov(SX. us (United States) Zip Code:71901 .PXY). cdf=cumsum(p). function x=finiterv(s. the covariance of X and Y. function ex=finiteexp(sx. finiteexp ex=finiteexp(sx.PXY).1). R=finiteexp(SX.x) Input: sx is the range of a ﬁnite random variable X . p is the corresponding probability assignment. SY and probability grid PXY describing the ﬁnite random variables X and Y . end finiterv x=finiterv(sx.m) % returns m samples % of finite (s. finitepmf y=finitepmf(sx.p=p(:).px).SY. m is positive integer Output: x is a vector of m sample values y(i) = FX (x(i)).} % vector px of probabilities % px(i)=P[X=sx(i)] % Output is the vector % pmf: pmf(i)=P[X=x(i)] pmf=zeros(size(x(:))). covxy=R-ex*ey.m) Input: sx is the range of a ﬁnite random variable X . hot springs. and PXY ex=finiteexp(SX.

finitevar   v=finitevar(sx,px)

   function v=finitevar(sx,px)
   %Usage: v=finitevar(sx,px)
   % returns the variance Var[X]
   % of finite random variables X described by
   % samples sx and probabilities px
   ex2=finiteexp(sx.^2,px);
   ex=finiteexp(sx,px);
   v=ex2-(ex^2);

Input: Probability vector px and vector of samples sx describing random variable X.
Output: v, the variance Var[X].

gausscdf   y=gausscdf(mu,sigma,x)

   function f=gausscdf(mu,sigma,x)
   f=phi((x-mu)/sigma);

Input: mu and sigma are the parameters of a Gaussian random variable X, vector x.
Output: Vector y such that y(i) = FX(x(i)) = Φ((x(i) − µ)/σ).

gausspdf   y=gausspdf(mu,sigma,x)

   function f=gausspdf(mu,sigma,x)
   f=exp(-(x-mu).^2/(2*sigma^2))/ ...
       sqrt(2*pi*sigma^2);

Input: mu and sigma are the parameters of a Gaussian random variable X, vector x.
Output: Vector y such that y(i) = fX(x(i)).

gaussrv   x=gaussrv(mu,sigma,m)

   function x=gaussrv(mu,sigma,m)
   x=mu +(sigma*randn(m,1));

Input: mu and sigma are the parameters of a Gaussian random variable X, integer m.
Output: Length m vector x such that each x(i) is a sample of X.

gaussvector   x=gaussvector(mu,C,m)

   function x=gaussvector(mu,C,m)
   %output: m Gaussian vectors,
   %each with mean mu
   %and covariance matrix C
   if (min(size(C))==1)
       C=toeplitz(C);
   end
   n=size(C,2);
   if (length(mu)==1)
       mu=mu*ones(n,1);
   end
   [U,D,V]=svd(C);
   x=V*(D^(0.5))*randn(n,m) ...
       +(mu(:)*ones(1,m));

Input: For a Gaussian (µX, CX) random vector X, gaussvector can be called in two ways:
• C is the n × n covariance matrix, mu is either a length n vector or a length 1 scalar, m is an integer.
• C is the length n vector equal to the first row of a symmetric Toeplitz covariance matrix CX, mu is either a length n vector or a length 1 scalar, m is an integer.
If mu is a length n vector, then mu is the expected value vector; otherwise, each element of X is assumed to have mean mu.
Output: n × m matrix x such that each column x(:,i) is a sample vector of X.

gaussvectorpdf   f=gaussvectorpdf(mu,C,x)

   function f=gaussvectorpdf(mu,C,x)
   n=length(x);
   z=x(:)-mu(:);
   %fX(x)=exp(-z'C^(-1)z/2)/sqrt((2pi)^n det(C))
   f=exp(-z'*inv(C)*z/2)/ ...
       sqrt((2*pi)^n*det(C));

Input: For a Gaussian (µX, CX) random vector X, mu is a length n vector, C is the n × n covariance matrix, x is a length n vector.
Output: f is the Gaussian vector PDF fX(x) evaluated at x.

geometriccdf   y=geometriccdf(p,x)

   function cdf=geometriccdf(p,x)
   % for geometric(p) rv X,
   %For input vector x, output is
   %vector cdf such that
   %cdf_i=Prob(X<=x_i)
   x=(x(:)>=1).*floor(x(:));
   cdf=1-((1-p).^x);

Input: p is the parameter of a geometric random variable X, x is a vector of possible sample values.
Output: y is a vector with y(i) = FX(x(i)).
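A usage sketch, not from the text: it generates correlated Gaussian vectors from an assumed first row of a symmetric Toeplitz covariance matrix and checks the sample covariance of the output.

   % Sketch (not from the text): c is an assumed covariance row, so that
   % C = toeplitz([4 2 1]); the sample covariance should be close to C.
   c=[4 2 1];
   x=gaussvector(0,c,10000);       % 3 x 10000 zero-mean sample vectors
   C_hat=(x*x')/10000              % compare with toeplitz(c)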

geometricpmf   y=geometricpmf(p,x)

   function pmf=geometricpmf(p,x)
   %geometric(p) rv X
   %out: pmf(i)=Prob[X=x(i)]
   x=x(:);
   pmf= p*((1-p).^(x-1));
   pmf= (x>0).*(x==floor(x)).*pmf;

Input: p is the parameter of a geometric random variable X, x is a vector of possible sample values.
Output: y is a vector with y(i) = PX(x(i)).

geometricrv   x=geometricrv(p,m)

   function x=geometricrv(p,m)
   %Usage: x=geometricrv(p,m)
   % returns m samples of a geometric (p) rv
   r=rand(m,1);
   x=ceil(log(1-r)/log(1-p));

Input: p is the parameter of a geometric random variable X, m is a positive integer.
Output: x is a vector of m independent samples of random variable X.

icdfrv   x=icdfrv(@icdf,m)

   function x=icdfrv(icdfhandle,m)
   %Usage: x=icdfrv(@icdf,m)
   %returns m samples of rv X
   %with inverse CDF icdf.m
   u=rand(m,1);
   x=feval(icdfhandle,u);

Input: @icdf is a "handle" (a kind of pointer) to a MATLAB function icdf.m that is MATLAB's representation of an inverse CDF FX^(-1)(x) of a random variable X, integer m.
Output: Length m vector x such that each x(i) is a sample of X.
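A usage sketch, not from the text: it passes an inverse CDF to icdfrv. The anonymous handle and the parameter lambda = 2 are illustrative assumptions; the inverse CDF of an exponential (λ) random variable is F^(-1)(u) = −ln(1 − u)/λ.

   % Sketch (not from the text): exponential samples via icdfrv.
   lambda=2;
   icdf=@(u) -log(1-u)/lambda;     % inverse CDF of exponential(lambda)
   x=icdfrv(icdf,10000);
   mean(x)                         % should be close to 1/lambda = 0.5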

pascalcdf   y=pascalcdf(k,p,x)

   function cdf=pascalcdf(k,p,x)
   %Usage: cdf=pascalcdf(k,p,x)
   %For a pascal (k,p) rv X
   %and input vector x, the output
   %is a vector cdf such that
   % cdf(i)=Prob[X<=x(i)]
   x=floor(x(:));  % for noninteger x(i)
   allx=k:max(x);
   %allcdf holds all needed cdf values
   allcdf=cumsum(pascalpmf(k,p,allx));
   %x_i < k have zero-prob,
   % other values are OK
   okx=(x>=k);
   %set zero-prob x(i)=k,
   %just so indexing is not fouled up
   x=(okx.*x) + k*(1-okx);
   cdf= okx.*allcdf(x-k+1);

Input: k and p are the parameters of a Pascal (k, p) random variable X, x is a vector of possible sample values.
Output: y is a vector with y(i) = FX(x(i)).

pascalpmf   y=pascalpmf(k,p,x)

   function pmf=pascalpmf(k,p,x)
   %For Pascal (k,p) rv X, and
   %input vector x, output is a
   %vector pmf: pmf(i)=Prob[X=x(i)]
   x=x(:);
   n=max(x);
   i=(k:n-1)';
   ip= [1; (1-p)*(i./(i+1-k))];
   %pb=all n-k+1 pascal probs
   pb=(p^k)*cumprod(ip);
   okx=(x==floor(x)).*(x>=k);
   %set bad x(i)=k to stop bad indexing
   x=(okx.*x) +((1-okx)*k);
   % pmf(i)=0 unless x(i) >= k
   pmf=okx.*pb(x-k+1);

Input: k and p are the parameters of a Pascal (k, p) random variable X, x is a vector of possible sample values.
Output: y is a vector with y(i) = PX(x(i)).

pascalrv   x=pascalrv(k,p,m)

   function x=pascalrv(k,p,m)
   % return m samples of pascal(k,p) rv
   r=rand(m,1);
   rmax=max(r);
   xmin=k;
   xmax=ceil(2*(k/p));  %set max range
   sx=xmin:xmax;
   cdf=pascalcdf(k,p,sx);
   while cdf(length(cdf)) <=rmax
       xmax=2*xmax;
       sx=xmin:xmax;
       cdf=pascalcdf(k,p,sx);
   end
   x=xmin+countless(cdf,r);

Input: k and p are the parameters of a Pascal random variable X, m is a positive integer.
Output: x is a vector of m independent samples of random variable X.

phi   y=phi(x)

   function y=phi(x)
   sq2=sqrt(2);
   y= 0.5 + 0.5*erf(x/sq2);

Input: Vector x.
Output: Vector y such that y(i) = Φ(x(i)).

poissoncdf   y=poissoncdf(alpha,x)

   function cdf=poissoncdf(alpha,x)
   %output cdf(i)=Prob[X<=x(i)]
   x=floor(x(:));
   sx=0:max(x);
   cdf=cumsum(poissonpmf(alpha,sx));
   %cdf from 0 to max(x)
   okx=(x>=0);   %x(i)<0 -> cdf=0
   x=(okx.*x);   %set negative x(i)=0
   cdf= okx.*cdf(x+1);
   %cdf=0 for x(i)<0

Input: alpha is the parameter of a Poisson (α) random variable X, x is a vector of possible sample values.
Output: y is a vector with y(i) = FX(x(i)).

poissonpmf   y=poissonpmf(alpha,x)

   function pmf=poissonpmf(alpha,x)
   %Poisson (alpha) rv X,
   %out=vector pmf: pmf(i)=P[X=x(i)]
   x=x(:);
   k=(1:max(x))';
   logfacts =cumsum(log(k));
   pb=exp([-alpha; ...
       -alpha+ (k*log(alpha))-logfacts]);
   okx=(x>=0).*(x==floor(x));
   x=okx.*x;
   pmf=okx.*pb(x+1);
   %pmf(i)=0 for zero-prob x(i)

Input: alpha is the parameter of a Poisson (α) random variable X, x is a vector of possible sample values.
Output: y is a vector with y(i) = PX(x(i)).

poissonrv   x=poissonrv(alpha,m)

   function x=poissonrv(alpha,m)
   %return m samples of poisson(alpha) rv X
   r=rand(m,1);
   rmax=max(r);
   xmin=0;
   xmax=ceil(2*alpha);  %set max range
   sx=xmin:xmax;
   cdf=poissoncdf(alpha,sx);
   %while ( sum(cdf <=rmax) ==(xmax-xmin+1) )
   while cdf(length(cdf)) <=rmax
       xmax=2*xmax;
       sx=xmin:xmax;
       cdf=poissoncdf(alpha,sx);
   end
   x=xmin+countless(cdf,r);

Input: alpha is the parameter of a Poisson (α) random variable X, m is a positive integer.
Output: x is a vector of m independent samples of random variable X.

uniformcdf   y=uniformcdf(a,b,x)

   function F=uniformcdf(a,b,x)
   %Usage: F=uniformcdf(a,b,x)
   %returns the CDF of a continuous
   %uniform rv evaluated at x
   F=x.*((x>=a) & (x<b))/(b-a);
   F=F+1.0*(x>=b);

Input: a and b are the parameters of a continuous uniform random variable X, vector x.
Output: Vector y such that y(i) = FX(x(i)).
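A usage sketch, not from the text: it compares the relative frequencies of poissonrv samples against poissonpmf. The parameter alpha = 5 and the range k = 0..12 are illustrative assumptions.

   % Sketch (not from the text): empirical check of poissonrv/poissonpmf.
   alpha=5; m=10000;
   x=poissonrv(alpha,m);
   k=(0:12)';
   rel=hist(x,k)'/m;              % empirical frequencies of k=0..12
   % (samples above 12 land in the last bin, so k=12 is slightly biased)
   [k rel poissonpmf(alpha,k)]    % columns 2 and 3 should be close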

uniformpdf   y=uniformpdf(a,b,x)

   function f=uniformpdf(a,b,x)
   %Usage: f=uniformpdf(a,b,x)
   %returns the PDF of a continuous
   %uniform rv evaluated at x
   f=((x>=a) & (x<b))/(b-a);

Input: a and b are the parameters of a continuous uniform random variable X, vector x.
Output: Vector y such that y(i) = fX(x(i)).

uniformrv   x=uniformrv(a,b,m)

   function x=uniformrv(a,b,m)
   %Usage: x=uniformrv(a,b,m)
   %Returns m samples of a
   %uniform (a,b) random variable
   x=a+(b-a)*rand(m,1);

Input: a and b are the parameters of a continuous uniform random variable X, positive integer m.
Output: m element vector x such that each x(i) is a sample of X.

Functions for Stochastic Processes

brownian   w=brownian(alpha,t)

   function w=brownian(alpha,t)
   %Brownian motion process
   %sampled at t(1)<t(2)< ...
   t=t(:);
   n=length(t);
   delta=t-[0;t(1:n-1)];
   x=sqrt(alpha*delta).*gaussrv(0,1,n);
   w=cumsum(x);

Input: t is a vector holding an ordered sequence of inspection times, alpha is the scaling constant of a Brownian motion process such that the ith increment has variance α(t(i) − t(i−1)).
Output: w is a vector such that w(i) is the position at time t(i) of the particle in Brownian motion.

cmcprob   pv=cmcprob(Q,p0,t)

   function pv = cmcprob(Q,p0,t)
   %Q has zero diagonal rates
   %initial state probabilities p0
   K=size(Q,1)-1;  %max no. state
   %check for integer p0
   if (length(p0)==1)
       p0=((0:K)==p0);
   end
   R=Q-diag(sum(Q,2));
   pv= (p0(:)'*expm(R*t))';

Input: n × n state transition matrix Q for a continuous-time finite Markov chain, length n vector p0 denoting the initial state probabilities, nonnegative scalar t.
Output: Length n vector pv such that pv(t) is the state probability vector at time t of the Markov chain.
Comment: If p0 is a scalar integer, then the simulation starts in state p0.

cmcstatprob   pv=cmcstatprob(Q)

   function pv = cmcstatprob(Q)
   %Q has zero diagonal rates
   R=Q-diag(sum(Q,2));
   n=size(Q,1);
   R(:,1)=ones(n,1);
   pv=([1 zeros(1,n-1)]*R^(-1))';

Input: State transition matrix Q for a continuous-time finite Markov chain.
Output: pv is the stationary probability vector for the continuous-time Markov chain.

dmcstatprob   pv=dmcstatprob(P)

   function pv = dmcstatprob(P)
   n=size(P,1);
   A=(eye(n)-P);
   A(:,1)=ones(n,1);
   pv=([1 zeros(1,n-1)]*A^(-1))';

Input: n × n stochastic matrix P representing a discrete-time aperiodic irreducible finite Markov chain.
Output: pv is the stationary probability vector.

poissonarrivals   s=poissonarrivals(lambda,T)

   function s=poissonarrivals(lambda,T)
   %arrival times s=[s(1) ... s(n)]
   % s(n)<= T < s(n+1)
   n=ceil(1.1*lambda*T);
   s=cumsum(exponentialrv(lambda,n));
   while (s(length(s))< T),
       s_new=s(length(s))+ ...
           cumsum(exponentialrv(lambda,n));
       s=[s; s_new];
   end
   s=s(s<=T);

Input: lambda is the arrival rate of a Poisson process, T marks the end of an observation interval [0, T].
Output: s=[s(1), ..., s(n)]' is a vector such that s(i) is the ith arrival time. Note that the length n is a Poisson random variable with expected value λT.
Comment: This code is pretty stupid. There are decidedly better ways to create a set of arrival times; see Problem 10.13.5.

poissonprocess   N=poissonprocess(lambda,t)

   function N=poissonprocess(lambda,t)
   %input: rate lambda>0, vector t
   %For a sample function of a
   %Poisson process of rate lambda,
   %N(i) = no. of arrivals by t(i)
   s=poissonarrivals(lambda,max(t));
   N=count(s,t);

Input: lambda is the arrival rate of a Poisson process, t is a vector of "inspection times".
Output: N is a vector such that N(i) is the number of arrivals by inspection time t(i).

simcmc   ST=simcmc(Q,p0,T)

   function ST=simcmc(Q,p0,T)
   K=size(Q,1)-1;  %max no. state
   %calc average trans. rate
   ps=cmcstatprob(Q);
   v=sum(Q,2);
   R=ps'*v;
   n=ceil(0.6*T*R);
   ST=simcmcstep(Q,p0,2*n);
   while (sum(ST(:,2))<T),
       s=ST(size(ST,1),1);
       p00=Q(1+s,:)/v(1+s);
       S=simcmcstep(Q,p00,n);
       ST=[ST;S];
   end
   n=1+sum(cumsum(ST(:,2))<T);
   ST=ST(1:n,:);
   %truncate last holding time
   ST(n,2)=T-sum(ST(1:n-1,2));

Input: State transition matrix Q for a continuous-time finite Markov chain, vector p0 denoting the initial state probabilities, time T.
Output: A simulation of the Markov chain system over the time interval [0, T]: the output is an n × 2 matrix ST such that the first column ST(:,1) is the sequence of system states and the second column ST(:,2) is the amount of time spent in each state. That is, ST(i,2) is the amount of time the system spends in state ST(i,1). T marks the end of the observation interval [0, T]. Note that n, the number of state occupancy periods, is random.
Comment: If p0 is a scalar integer, then the simulation starts in state p0.
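A usage sketch, not from the text: it simulates a two-state continuous-time chain with assumed rates and compares the fraction of time spent in state 1 with the stationary probability from cmcstatprob.

   % Sketch (not from the text): assumed rates, 0->1 at rate 1, 1->0 at 3.
   Q=[0 1; 3 0];
   ST=simcmc(Q,0,1000);            % start in state 0, observe [0,1000]
   t1=sum(ST(ST(:,1)==1,2));       % total time spent in state 1
   [t1/1000 cmcstatprob(Q)']       % fraction vs stationary probabilities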

simcmcstep   S=simcmcstep(Q,p0,n)

   function S=simcmcstep(Q,p0,n)
   %S=simcmcstep(Q,p0,n)
   % Simulate n steps of a cts
   % Markov Chain, rate matrix Q,
   % init. state probabilities p0
   K=size(Q,1)-1;  %highest no. state
   S=zeros(n+1,2); %init allocation
   %check for integer p0
   if (length(p0)==1)
       p0=((0:K)==p0);
   end
   v=sum(Q,2);  %state dep. rates
   t=1./v;
   P=diag(t)*Q;
   S(:,1)=simdmc(P,p0,n);
   S(:,2)=t(1+S(:,1)) ...
       .*exponentialrv(1,n+1);

Input: State transition matrix Q for a continuous-time finite Markov chain, vector p0 denoting the initial state probabilities, integer n.
Output: A simulation of n steps of the continuous-time Markov chain: the output is an (n+1) × 2 matrix ST such that the first column ST(:,1) is the sequence of system states and the second column ST(:,2) is the amount of time spent in each state. That is, ST(i,2) is the amount of time the system spends in state ST(i,1).
Comment: If p0 is a scalar integer, then the simulation starts in state p0. This program is the basis for simcmc.

simdmc   x=simdmc(P,p0,n)

   function x=simdmc(P,p0,n)
   K=size(P,1)-1;   %highest no. state
   sx=0:K;          %state space
   x=zeros(n+1,1);  %initialization
   %convert integer p0 to prob vector
   if (length(p0)==1)
       p0=((0:K)==p0);
   end
   %x(m)= state at time m-1
   x(1)=finiterv(sx,p0,1);
   for m=1:n,
       x(m+1)=finiterv(sx,P(x(m)+1,:),1);
   end

Input: n × n stochastic matrix P which is the state transition matrix of a discrete-time finite Markov chain, vector p0 denoting the initial state probabilities, integer n.
Output: A simulation of the Markov chain such that for the vector x, x(m) is the state at time m−1 of the Markov chain.
Comment: If p0 is a scalar integer, then the simulation starts in state p0.

Random Utilities

count   n=count(x,y)

   function n=count(x,y)
   %Usage: n=count(x,y)
   %n(i)= # elements of x <= y(i)
   [MX,MY]=ndgrid(x,y);
   %each column of MX = x
   %each row of MY = y
   n=(sum((MX<=MY),1))';

Input: Vectors x and y.
Output: Vector n such that n(i) is the number of elements of x less than or equal to y(i).

countequal   n=countequal(x,y)

   function n=countequal(x,y)
   %Usage: n=countequal(x,y)
   %n(j)= # elements of x = y(j)
   [MX,MY]=ndgrid(x,y);
   %each column of MX = x
   %each row of MY = y
   n=(sum((MX==MY),1))';

Input: Vectors x and y.
Output: Vector n such that n(i) is the number of elements of x equal to y(i).

countless   n=countless(x,y)

   function n=countless(x,y)
   %Usage: n=countless(x,y)
   %n(i)= # elements of x < y(i)
   [MX,MY]=ndgrid(x,y);
   %each column of MX = x
   %each row of MY = y
   n=(sum((MX<MY),1))';

Input: Vectors x and y.
Output: Vector n such that n(i) is the number of elements of x strictly less than y(i).

dftmat   F=dftmat(N)

   function F = dftmat(N)
   %Usage: F=dftmat(N)
   %F is the N by N DFT matrix
   n=(0:N-1)';
   F=exp((-1.0j)*2*pi*(n*(n'))/N);

Input: Integer N.
Output: F is the N by N discrete Fourier transform matrix.
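A small usage sketch, not from the text, with assumed vectors to show the difference between the three counting functions:

   % Sketch (not from the text): counting utilities on small vectors.
   x=[1 2 2 3 5]; y=[2 4];
   count(x,y)        % # of x <= y(i): returns [3; 4]
   countequal(x,y)   % # of x == y(i): returns [2; 0]
   countless(x,y)    % # of x <  y(i): returns [1; 4]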

fftc   S=fftc(r,N); S=fftc(r)

   function S=fftc(varargin)
   %DFT for a signal r
   %centered at the origin
   %Usage:
   % fftc(r,N): N point DFT of r
   % fftc(r): length(r) DFT of r
   r=varargin{1};
   L=1+floor(length(r)/2);
   if (nargin>1)
       N=varargin{2}(1);
   else
       N=(2*L)-1;
   end
   R=fft(r,N);
   n=reshape(0:(N-1),size(R));
   phase=2*pi*(n/N)*(L-1);
   S=R.*exp((1.0j)*phase);

Input: Vector r=[r(1) ... r(2k+1)] holding the time sequence r(−k), ..., r(0), ..., r(k) centered around the origin.
Output: S is the DFT of r.
Comment: Supports the same calling conventions as fft.

freqxy   fxy=freqxy(xy,SX,SY)

   function fxy = freqxy(xy,SX,SY)
   %Usage: fxy = freqxy(xy,SX,SY)
   %xy is an m x 2 matrix:
   %xy(i,:)= ith sample pair X,Y
   %Output fxy is a K x 3 matrix:
   % [fxy(k,1) fxy(k,2)]
   % = kth unique pair [x y] and
   % fxy(k,3)= corresp. rel. freq.
   %extend xy to include a sample
   %for all possible (X,Y) pairs:
   xy=[xy; SX(:) SY(:)];
   [U,I,J]=unique(xy,'rows');
   N=hist(J,1:max(J))-1;
   N=N/sum(N);
   fxy=[U N(:)];
   %reorder fxy rows to match
   %rows of [SX(:) SY(:) PXY(:)]:
   fxy=sortrows(fxy,[2 1 3]);

Input: For random variables X and Y, xy is an m × 2 matrix holding a list of sample value pairs; grids SX and SY represent the sample space.
Output: fxy is a K × 3 matrix. In each row, [fxy(k,1) fxy(k,2)] is a unique (X, Y) pair with relative frequency fxy(k,3).
Comment: Given the grids SX, SY and the probability grid PXY, a list of random sample value pairs xy can be simulated by the commands

   S=[SX(:) SY(:)];
   xy=finiterv(S,PXY(:),m);

The output fxy is ordered so that the rows match the ordering of rows in the matrix [SX(:) SY(:) PXY(:)].

pmfplot   pmfplot(sx,px,'x axis text','y axis text')

   function h=pmfplot(sx,px,xls,yls)
   %Usage: pmfplot(sx,px,xls,yls)
   %sx and px are vectors, px is the PMF
   %xls and yls are x and y label strings
   nonzero=find(px);
   sx=sx(nonzero); px=px(nonzero);
   sx=(sx(:))'; px=(px(:))';
   XM = [sx; sx];
   PM=[zeros(size(px)); px];
   h=plot(XM,PM,'-k');
   set(h,'LineWidth',3);
   if (nargin==4)
       xlabel(xls);
       ylabel(yls,'VerticalAlignment','Bottom');
   end
   xmin=min(sx); xmax=max(sx);
   xborder=0.05*(xmax-xmin);
   xmin=xmin-xborder;
   xmax=xmax+xborder;
   ymax=1.1*max(px);
   axis([xmin xmax 0 ymax]);

Input: Sample space vector sx and PMF vector px for a finite random variable X, optional text strings xls and yls.
Output: A plot of the PMF PX(x) in the bar style used in the text.

rect   y=rect(x)

   function y=rect(x)
   %Usage: y=rect(x)
   y=1.0*(abs(x)<0.5);

Input: Vector x.
Output: Vector y such that y(i) = rect(x(i)) = 1 for |x(i)| < 0.5 and 0 otherwise.

sinc   y=sinc(x)

   function y=sinc(x)
   xx=x+(x==0);
   y=sin(pi*xx)./(pi*xx);
   y=((1.0-(x==0)).*y)+ (1.0*(x==0));

Input: Vector x.
Output: Vector y such that y(i) = sinc(x(i)) = sin(πx(i))/(πx(i)).
Comment: The code is ugly because it makes sure to produce the right limit value at x(i) = 0.

simplot   simplot(S,xlabel,ylabel)

   function h=simplot(S,xls,yls)
   %h=simplot(S,xlabel,ylabel)
   % Plots the output of a simulated state sequence
   % If S is N by 1, a discrete time chain is assumed
   % with visit times of one unit.
   % If S is an N by 2 matrix, a cts time Markov chain
   % is assumed where
   % S(:,1) = state sequence,
   % S(:,2) = state visit times.
   % The cumulative sum
   % of visit times are transition instances.
   % h is a handle to a stairs plot of the state sequence
   % vs state transition times
   %in case of discrete time simulation
   if (size(S,2)==1)
       S=[S ones(size(S))];
   end
   Y=[S(:,1); S(size(S,1),1)];
   X=cumsum([0; S(:,2)]);
   h=stairs(X,Y);
   if (nargin==3)
       xlabel(xls);
       ylabel(yls,'VerticalAlignment','Bottom');
   end

Input: The simulated state sequence vector S generated by S=simdmc(P,p0,n), or the n × 2 state/time matrix ST generated by either ST=simcmc(Q,p0,T) or ST=simcmcstep(Q,p0,n).
Output: A "stairs" plot showing the sequence of simulation states over time.
Comment: If S is just a state sequence vector, then each stair has equal width. If S is an n × 2 state/time matrix ST, then the width of each stair is proportional to the time spent in that state.
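A usage sketch, not from the text: it simulates a two-state discrete-time chain with an assumed transition matrix and plots the state sequence.

   % Sketch (not from the text): assumed two-state transition matrix.
   P=[0.9 0.1; 0.5 0.5];
   x=simdmc(P,0,100);             % 100 steps starting in state 0
   simplot(x,'step n','state');   % equal-width stairs (discrete time)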

Probability and Stochastic Processes
A Friendly Introduction for Electrical and Computer Engineers
Second Edition

Quiz Solutions

Roy D. Yates and David J. Goodman
May 22, 2004

• The MATLAB section quizzes at the end of each chapter use programs available for download as the archive matcode.zip. This archive has general purpose programs for solving probability problems as well as specific .m files associated with examples or quizzes in the text. Also available is a manual probmatlab.pdf describing the general purpose .m files in matcode.zip.

• We have made a substantial effort to check the solution to every quiz. Nevertheless, there is a nonzero probability (in fact, a probability close to unity) that errors will be found. If you find errors or have suggestions or comments, please send email to ryates@winlab.rutgers.edu. When errors are found, corrected solutions will be posted at the website.

Quiz Solutions – Chapter 1

Quiz 1.1
In the Venn diagrams for parts (a)-(g) below, the shaded area represents the indicated set. [Venn diagrams over the events M, T, O omitted; the indicated sets include R = T^c, M ∪ O, M ∩ O, R ∩ M, R ∪ M, and T^c − M.]

Quiz 1.2
(1) A1 = {vvv, vvd, vdv, vdd}
(2) B1 = {dvv, dvd, ddv, ddd}
(3) A2 = {vvv, vvd, dvv, dvd}
(4) B2 = {vdv, vdd, ddv, ddd}
(5) A3 = {vvv, ddd}
(6) B3 = {vdv, dvd}
(7) A4 = {vvv, vvd, vdv, vdd, dvv, dvd, ddv}
(8) B4 = {ddd, ddv, dvd, vdd}
Recall that Ai and Bi are collectively exhaustive if Ai ∪ Bi = S, and Ai and Bi are mutually exclusive if Ai ∩ Bi = φ. Since we have written down each pair Ai and Bi above, we can simply check for these properties.
The pair A1 and B1 are mutually exclusive and collectively exhaustive. The pair A2 and B2 are mutually exclusive and collectively exhaustive. The pair A3 and B3 are mutually exclusive but not collectively exhaustive. The pair A4 and B4 are not mutually exclusive since dvd belongs to both A4 and B4. However, A4 and B4 are collectively exhaustive.

Quiz 1.3
There are exactly 50 equally likely outcomes: s51 through s100. Each of these outcomes has probability 0.02.
(1) P[{s79}] = 0.02
(2) P[{s100}] = 0.02
(3) P[A] = P[{s90, ..., s100}] = 11 × 0.02 = 0.22
(4) P[F] = P[{s51, s52, ..., s59}] = 9 × 0.02 = 0.18
(5) P[T ≥ 80] = P[{s80, ..., s100}] = 21 × 0.02 = 0.42
(6) P[T < 90] = P[{s51, ..., s89}] = 39 × 0.02 = 0.78
(7) P[a C grade or better] = P[{s70, ..., s100}] = 31 × 0.02 = 0.62
(8) P[student passes] = P[{s60, ..., s100}] = 41 × 0.02 = 0.82

Quiz 1.4
We can describe this experiment by the event space consisting of the four possible events VB, VL, DB, and DL. We represent these events in the table:

          V       D
   L     0.35     ?
   B      ?       ?

In a roundabout way, the problem statement tells us how to fill in the table. In particular,
   P[V] = 0.7 = P[VL] + P[VB],
   P[L] = 0.6 = P[VL] + P[DL].
Since P[VL] = 0.35, we can conclude that P[VB] = 0.35 and that P[DL] = 0.6 − 0.35 = 0.25. This allows us to fill in two more table entries:

          V       D
   L     0.35    0.25
   B     0.35     ?

The remaining table entry is filled in by observing that the probabilities must sum to 1. This implies P[DB] = 0.05 and the complete table is

          V       D
   L     0.35    0.25
   B     0.35    0.05

Finding the various probabilities is now straightforward:
(1) P[DL] = 0.25
(2) P[D ∪ L] = P[VL] + P[DL] + P[DB] = 0.35 + 0.25 + 0.05 = 0.65
(3) P[VB] = 0.35
(4) P[V ∪ L] = P[V] + P[L] − P[VL] = 0.7 + 0.6 − 0.35 = 0.95
(5) P[V ∪ D] = P[S] = 1
(6) P[LB] = P[LL^c] = 0

Quiz 1.5
(1) The probability of exactly two voice calls is
   P[NV = 2] = P[{vvd, vdv, dvv}] = 0.3
(2) The probability of at least one voice call is
   P[NV ≥ 1] = P[{vdd, dvd, ddv, vvd, vdv, dvv, vvv}] = 6(0.1) + 0.2 = 0.8
An easier way to get the same answer is to observe that
   P[NV ≥ 1] = 1 − P[NV < 1] = 1 − P[NV = 0] = 1 − P[{ddd}] = 0.8
(3) The conditional probability of two voice calls followed by a data call, given that there were two voice calls, is
   P[{vvd} | NV = 2] = P[{vvd}, NV = 2]/P[NV = 2] = P[{vvd}]/P[NV = 2] = 0.1/0.3 = 1/3
(4) The conditional probability of two data calls followed by a voice call, given there were two voice calls, is
   P[{ddv} | NV = 2] = P[{ddv}, NV = 2]/P[NV = 2] = 0
The joint event of the outcome ddv and exactly two voice calls has probability zero since there is only one voice call in the outcome ddv.
(5) The conditional probability of exactly two voice calls given at least one voice call is
   P[NV = 2 | NV ≥ 1] = P[NV = 2, NV ≥ 1]/P[NV ≥ 1] = P[NV = 2]/P[NV ≥ 1] = 0.3/0.8 = 3/8
(6) The conditional probability of at least one voice call given there were exactly two voice calls is
   P[NV ≥ 1 | NV = 2] = P[NV ≥ 1, NV = 2]/P[NV = 2] = P[NV = 2]/P[NV = 2] = 1
Given that there were two voice calls, there must have been at least one voice call.

Quiz 1.6
In this experiment, there are four outcomes with probabilities
   P[{vv}] = (0.8)² = 0.64,   P[{vd}] = (0.8)(0.2) = 0.16,
   P[{dv}] = (0.2)(0.8) = 0.16,   P[{dd}] = (0.2)² = 0.04.
When checking the independence of any two events A and B, it's wise to avoid intuition and simply check whether P[AB] = P[A]P[B]. Using the probabilities of the outcomes, we now can test for the independence of events.
(1) First, we calculate the probability of the joint event:
   P[NV = 2, NV ≥ 1] = P[NV = 2] = P[{vv}] = 0.64
Next, we observe that P[NV ≥ 1] = P[{vd, dv, vv}] = 0.96. Finally, we make the comparison
   P[NV = 2] P[NV ≥ 1] = (0.64)(0.96) ≠ P[NV = 2, NV ≥ 1],
which shows the two events are dependent.
(2) The probability of the joint event is
   P[NV ≥ 1, C1 = v] = P[{vd, vv}] = 0.80
From part (1), P[NV ≥ 1] = 0.96. Further, P[C1 = v] = 0.8, so that
   P[NV ≥ 1] P[C1 = v] = (0.96)(0.8) = 0.768 ≠ P[NV ≥ 1, C1 = v].
Hence, the events are dependent.
(3) The problem statement that the calls were independent implies that the event the second call is a voice call, {C2 = v}, and the event the first call is a data call, {C1 = d}, are independent events. Just to be sure, we can do the calculations to check:
   P[C1 = d, C2 = v] = P[{dv}] = 0.16
Since P[C1 = d]P[C2 = v] = (0.2)(0.8) = 0.16, we confirm that the events are independent. Note that this shouldn't be surprising since we used the information that the calls were independent in the problem statement to determine the probabilities of the outcomes.
(4) The probability of the joint event is
   P[C2 = v, NV is even] = P[{vv}] = 0.64
Also, each event has probability
   P[C2 = v] = P[{dv, vv}] = 0.8,   P[NV is even] = P[{dd, vv}] = 0.68.
Thus P[C2 = v] P[NV is even] = (0.8)(0.68) = 0.544. Since P[C2 = v, NV is even] ≠ 0.544, the events are dependent.

Quiz 1.7
Let Fi denote the event that the user is found on page i. The tree for the experiment is
[Tree diagram: each paging attempt i = 1, 2, 3 branches to Fi with probability 0.8 and to Fi^c with probability 0.2; the experiment ends at the first Fi.]
The user is found unless all three paging attempts fail. Thus the probability the user is found is
   P[F] = 1 − P[F1^c F2^c F3^c] = 1 − (0.2)³ = 0.992

Quiz 1.8
(1) We can view choosing each bit in the code word as a subexperiment. Each subexperiment has two possible outcomes: 0 and 1. Thus by the fundamental principle of counting, there are 2 × 2 × 2 × 2 = 2⁴ = 16 possible code words.
(2) An experiment that can yield all possible code words with two zeroes is to choose which 2 bits (out of 4 bits) will be zero. The other two bits then must be ones. There are (4 choose 2) = 6 ways to do this. Hence, there are six code words with exactly two zeroes. For this problem, it is also possible to simply enumerate the six code words: 1100, 1010, 1001, 0101, 0110, 0011.
(3) When the first bit must be a zero, then the first subexperiment of choosing the first bit has only one outcome. For each of the next three bits, we have two choices. In this case, there are 1 × 2 × 2 × 2 = 8 ways of choosing a code word.
(4) For the constant ratio code, we can specify a code word by choosing M of the bits to be ones. The other N − M bits will be zeroes. The number of ways of choosing such a code word is (N choose M). For N = 8 and M = 3, there are (8 choose 3) = 56 code words.

Quiz 1.9
(1) In this problem, k bits received in error is the same as k failures in 100 trials. The failure probability is ε = 1 − p and the success probability is 1 − ε = p. That is, the probability of k bits in error and 100 − k correctly received bits is

   P[S_{k,100−k}] = (100 choose k) ε^k (1 − ε)^{100−k}
For ε = 0.01,
   P[S_{0,100}] = (1 − ε)^100 = (0.99)^100 = 0.3660
   P[S_{1,99}] = 100(0.01)(0.99)^99 = 0.3700
   P[S_{2,98}] = 4950(0.01)²(0.99)^98 = 0.1849
   P[S_{3,97}] = 161,700(0.01)³(0.99)^97 = 0.0610
(2) The probability a packet is decoded correctly is just
   P[C] = P[S_{0,100}] + P[S_{1,99}] + P[S_{2,98}] + P[S_{3,97}] = 0.9819

Quiz 1.10
Since the chip works only if all n transistors work, the transistors in the chip are like devices in series. The probability that a chip works is P[C] = p^n. The module works if either 8 chips work or 9 chips work. Since transistor failures are independent of each other, chip failures are also independent. Let Ck denote the event that exactly k chips work. Thus each P[Ck] has the binomial probability
   P[C8] = (9 choose 8)(P[C])^8 (1 − P[C])^{9−8} = 9 p^{8n}(1 − p^n),
   P[C9] = (P[C])^9 = p^{9n}.
The probability a memory module works is
   P[M] = P[C8] + P[C9] = p^{8n}(9 − 8p^n)

Quiz 1.11
For a MATLAB simulation, we first generate a vector R of 100 random numbers. Second, we generate vector X as a function of R to represent the 3 possible outcomes of a flip: X(i)=1 if flip i was heads, X(i)=2 if flip i was tails, and X(i)=3 if flip i landed on the edge. These three cases will have probabilities 0.4, 0.5 and 0.1. To see how this works, we note there are three cases:
• If R(i) <= 0.4, then X(i)=1.
• If 0.4 < R(i) and R(i) <= 0.9, then X(i)=2.
• If 0.9 < R(i), then X(i)=3.
Lastly, we use the hist function to count how many occurrences there are of each possible value of X(i).

   R=rand(1,100);
   X=(R<=0.4) ...
     + (2*(R>0.4).*(R<=0.9)) ...
     + (3*(R>0.9));
   Y=hist(X,1:3)

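A hedged companion sketch, not from the text: the same simulation with a larger (assumed) number of trials, so that the relative frequencies visibly approach (0.4, 0.5, 0.1):

   % Sketch (not from the text): m=10000 flips; relative frequencies
   % should approach [0.4 0.5 0.1].
   m=10000;
   R=rand(1,m);
   X=(R<=0.4) + (2*(R>0.4).*(R<=0.9)) + (3*(R>0.9));
   hist(X,1:3)/m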

Quiz Solutions – Chapter 2

Quiz 2.1
The sample space, probabilities and corresponding grades for the experiment are:

   Outcome   BB     BC     CB     CC
   P[·]      0.36   0.24   0.24   0.16
   G         3.0    2.5    2.5    2.0

Quiz 2.2
(1) To find c, we recall that the PMF must sum to 1. That is,
   Σ_{n=1}^{3} P_N(n) = c(1 + 1/2 + 1/3) = 1
This implies c = 6/11. Now that we have found c, the remaining parts are straightforward.
(2) P[N = 1] = P_N(1) = c = 6/11
(3) P[N ≥ 2] = P_N(2) + P_N(3) = c/2 + c/3 = 5/11
(4) P[N > 3] = Σ_{n=4}^{∞} P_N(n) = 0

Quiz 2.3
Decoding each transmitted bit is an independent trial where we call a bit error a "success." Each bit is in error, that is, the trial is a success, with probability p. Now we can interpret each experiment in the generic context of independent trials.
(1) The random variable X is the number of trials up to and including the first success. Similar to Example 2.11, X has the geometric PMF
   P_X(x) = p(1 − p)^{x−1} for x = 1, 2, ...; 0 otherwise.
(2) If p = 0.1, then the probability exactly 10 bits are sent is
   P[X = 10] = P_X(10) = (0.1)(0.9)⁹ = 0.0387
The probability that at least 10 bits are sent is P[X ≥ 10] = Σ_{x=10}^{∞} P_X(x). This sum is not too hard to calculate. However, it's even easier to observe that X ≥ 10 if the first 10 bits are transmitted correctly. That is,
   P[X ≥ 10] = P[first 10 bits are correct] = (1 − p)^10
For p = 0.1, P[X ≥ 10] = 0.9^10 = 0.3487.
(3) The random variable Y is the number of successes in 100 independent trials. Just as in Example 2.13, Y has the binomial PMF
   P_Y(y) = (100 choose y) p^y (1 − p)^{100−y}
If p = 0.01, the probability of exactly 2 errors is
   P[Y = 2] = P_Y(2) = (100 choose 2)(0.01)²(0.99)^98 = 0.1849
(4) The probability of no more than 2 errors is
   P[Y ≤ 2] = P_Y(0) + P_Y(1) + P_Y(2)
            = (0.99)^100 + 100(0.01)(0.99)^99 + (100 choose 2)(0.01)²(0.99)^98
            = 0.9207
(5) Random variable Z is the number of trials up to and including the third success. Thus Z has the Pascal PMF (see Example 2.15)
   P_Z(z) = ((z − 1) choose 2) p³(1 − p)^{z−3}
Note that P_Z(z) > 0 for z = 3, 4, 5, ....
(6) If p = 0.25, the probability that the third error occurs on bit 12 is
   P_Z(12) = (11 choose 2)(0.25)³(0.75)⁹ = 0.0645

Quiz 2.4
Each of these probabilities can be read off the CDF F_Y(y). However, we must keep in mind that when F_Y(y) has a discontinuity at y₀, F_Y(y) takes the upper value F_Y(y₀⁺).
(1) P[Y < 1] = F_Y(1⁻) = 0
(2) P[Y ≤ 1] = F_Y(1) = 0.6
(3) P[Y > 2] = 1 − P[Y ≤ 2] = 1 − F_Y(2) = 1 − 0.8 = 0.2
(4) P[Y ≥ 2] = 1 − P[Y < 2] = 1 − F_Y(2⁻) = 1 − 0.6 = 0.4
(5) P[Y = 1] = P[Y ≤ 1] − P[Y < 1] = F_Y(1⁺) − F_Y(1⁻) = 0.6 − 0 = 0.6
(6) P[Y = 3] = P[Y ≤ 3] − P[Y < 3] = F_Y(3⁺) − F_Y(3⁻) = 0.8 − 0.8 = 0

Quiz 2.5
(1) With probability 0.7, a call is a voice call and C = 25. Otherwise, with probability 0.3, we have a data call and C = 40. This corresponds to the PMF
   P_C(c) = 0.7 for c = 25; 0.3 for c = 40; 0 otherwise.
(2) The expected value of C is
   E[C] = 25(0.7) + 40(0.3) = 29.5 cents

Quiz 2.6
(1) As a function of N, the cost T is
   T = 25N + 40(3 − N) = 120 − 15N
(2) To find the PMF of T, we can draw the following tree:
[Tree diagram: N = 0 (T = 120) with probability 0.1; N = 1 (T = 105), N = 2 (T = 90), and N = 3 (T = 75) each with probability 0.3.]
From the tree, we can write down the PMF of T:
   P_T(t) = 0.3 for t = 75, 90, 105; 0.1 for t = 120; 0 otherwise.
From the PMF P_T(t), the expected value of T is
   E[T] = 75 P_T(75) + 90 P_T(90) + 105 P_T(105) + 120 P_T(120)
        = (75 + 90 + 105)(0.3) + 120(0.1) = 93

Quiz 2.7
(1) Using Definition 2.14, the expected number of applications is
   E[A] = Σ_{a=1}^{4} a P_A(a) = 1(0.4) + 2(0.3) + 3(0.2) + 4(0.1) = 2
(2) The number of memory chips is M = g(A) where
   g(A) = 4 for A = 1, 2;  g(A) = 6 for A = 3;  g(A) = 8 for A = 4.
(3) By Theorem 2.10, the expected number of memory chips is
   E[M] = Σ_{a=1}^{4} g(a) P_A(a) = 4(0.4) + 4(0.3) + 6(0.2) + 8(0.1) = 4.8
Since E[A] = 2, g(E[A]) = g(2) = 4. However, E[M] = 4.8 ≠ g(E[A]). The two quantities are different because g(A) is not of the form αA + β.

Quiz 2.8
The PMF P_N(n) allows us to calculate each of the desired quantities.
(1) The expected value of N is
   E[N] = Σ_{n=0}^{2} n P_N(n) = 0(0.1) + 1(0.4) + 2(0.5) = 1.4
(2) The second moment of N is
   E[N²] = Σ_{n=0}^{2} n² P_N(n) = 0²(0.1) + 1²(0.4) + 2²(0.5) = 2.4
(3) The variance of N is
   Var[N] = E[N²] − (E[N])² = 2.4 − (1.4)² = 0.44
(4) The standard deviation is σ_N = √Var[N] = √0.44 = 0.663

Quiz 2.9
(1) From the problem statement, we learn that the conditional PMF of N given the event I is
   P_{N|I}(n) = 0.02 for n = 1, 2, ..., 50; 0 otherwise.
(2) Also from the problem statement, the conditional PMF of N given the event T is
   P_{N|T}(n) = 0.2 for n = 1, 2, 3, 4, 5; 0 otherwise.
(3) The problem statement tells us that P[T] = 1 − P[I] = 3/4. From Theorem 1.10 (the law of total probability), we find the PMF of N is
   P_N(n) = P_{N|T}(n)P[T] + P_{N|I}(n)P[I]
          = 0.2(0.75) + 0.02(0.25) = 0.155 for n = 1, ..., 5;
          = 0.02(0.25) = 0.005 for n = 6, ..., 50;
          = 0 otherwise.
(4) First we find
   P[N ≤ 10] = Σ_{n=1}^{10} P_N(n) = (0.155)(5) + (0.005)(5) = 0.80
By Theorem 2.17, the conditional PMF of N given N ≤ 10 is
   P_{N|N≤10}(n) = P_N(n)/P[N ≤ 10]
                 = 0.155/0.8 = 0.19375 for n = 1, ..., 5;
                 = 0.005/0.8 = 0.00625 for n = 6, ..., 10;
                 = 0 otherwise.
(5) Once we have the conditional PMF, calculating conditional expectations is easy.
   E[N|N ≤ 10] = Σ_n n P_{N|N≤10}(n)
               = Σ_{n=1}^{5} n(0.19375) + Σ_{n=6}^{10} n(0.00625)
               = 15(0.19375) + 40(0.00625) = 3.15625
(6) To find the conditional variance, we first find the conditional second moment
   E[N²|N ≤ 10] = Σ_n n² P_{N|N≤10}(n)
                = Σ_{n=1}^{5} n²(0.19375) + Σ_{n=6}^{10} n²(0.00625)
                = 55(0.19375) + 330(0.00625) = 12.71875
The conditional variance is
   Var[N|N ≤ 10] = E[N²|N ≤ 10] − (E[N|N ≤ 10])²
                 = 12.71875 − (3.15625)² = 2.75684

Quiz 2.10
The function samplemean(k) generates and plots five m_n sequences for n = 1, 2, ..., k. The ith column M(:,i) of M holds a sequence m_1, m_2, ..., m_k.

   function M=samplemean(k)
   K=(1:k)';
   M=zeros(k,5);
   for i=1:5,
       X=duniformrv(0,10,k);
       M(:,i)=cumsum(X)./K;
   end;
   plot(K,M);

[Figure 1: two examples of the output of samplemean(k), (a) samplemean(100) and (b) samplemean(1000); each shows five sample-mean trajectories settling near 5.]

Each call to samplemean(k) produces a random output. Examples of the function calls (a) samplemean(100) and (b) samplemean(1000) are shown in Figure 1. What is observed in these figures is that for small n, m_n is fairly random but as n gets large, m_n gets close to E[X] = 5. Although each sequence m_1, m_2, ... that we generate is random, the sequences always converge to E[X]. This random convergence is analyzed in Chapter 7.

Quiz Solutions – Chapter 3

Quiz 3.1
The CDF of Y is
   F_Y(y) = 0 for y < 0; y/4 for 0 ≤ y ≤ 4; 1 for y > 4.
[Plot of F_Y(y) omitted.]
From the CDF F_Y(y), we can calculate the probabilities:
(1) P[Y ≤ −1] = F_Y(−1) = 0
(2) P[Y ≤ 1] = F_Y(1) = 1/4
(3) P[2 < Y ≤ 3] = F_Y(3) − F_Y(2) = 3/4 − 2/4 = 1/4
(4) P[Y > 1.5] = 1 − P[Y ≤ 1.5] = 1 − F_Y(1.5) = 1 − (1.5)/4 = 5/8

Quiz 3.2
(1) First we will find the constant c and then we will sketch the PDF. To find c, we use the fact that the PDF must integrate to 1. We will evaluate this integral using integration by parts:
   ∫₀^∞ c x e^{−x/2} dx = [−2cx e^{−x/2}]₀^∞ + ∫₀^∞ 2c e^{−x/2} dx = [−4c e^{−x/2}]₀^∞ = 4c
Thus c = 1/4 and X has the Erlang (n = 2, λ = 1/2) PDF
   f_X(x) = (x/4)e^{−x/2} for x ≥ 0; 0 otherwise.
[Plot of f_X(x) omitted.]
(2) To find the CDF F_X(x), we first note X is a nonnegative random variable so that F_X(x) = 0 for all x < 0. For x ≥ 0,
   F_X(x) = ∫₀^x (y/4)e^{−y/2} dy = [−(y/2)e^{−y/2}]₀^x + ∫₀^x (1/2)e^{−y/2} dy
          = 1 − (x/2)e^{−x/2} − e^{−x/2}
The complete expression for the CDF is
   F_X(x) = 1 − (x/2 + 1)e^{−x/2} for x ≥ 0; 0 otherwise.
[Plot of F_X(x) omitted.]
(3) From the CDF F_X(x),
   P[0 ≤ X ≤ 4] = F_X(4) − F_X(0) = 1 − 3e⁻²
(4) Similarly,
   P[−2 ≤ X ≤ 2] = F_X(2) − F_X(−2) = 1 − 2e⁻¹

Quiz 3.3
The PDF of Y is
   f_Y(y) = 3y²/2 for −1 ≤ y ≤ 1; 0 otherwise.
[Plot of f_Y(y) omitted.]
(1) The expected value of Y is
   E[Y] = ∫ y f_Y(y) dy = ∫_{−1}^{1} (3/2)y³ dy = [(3/8)y⁴]_{−1}^{1} = 0
Note that the above calculation wasn't really necessary because E[Y] = 0 whenever the PDF f_Y(y) is an even function (i.e., f_Y(y) = f_Y(−y)).
(2) The second moment of Y is
   E[Y²] = ∫ y² f_Y(y) dy = ∫_{−1}^{1} (3/2)y⁴ dy = [(3/10)y⁵]_{−1}^{1} = 3/5
(3) The variance of Y is
   Var[Y] = E[Y²] − (E[Y])² = 3/5
(4) The standard deviation of Y is σ_Y = √Var[Y] = √(3/5).
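The Erlang probability in Quiz 3.2 can be sanity-checked numerically with erlangcdf from matcode. A hedged sketch (the parameters simply repeat the quiz values n = 2, λ = 1/2):

   % Sketch (not from the text): check P[0<=X<=4] = 1-3*exp(-2) for the
   % Erlang (n=2, lambda=1/2) rv of Quiz 3.2.
   [erlangcdf(2,0.5,4) 1-3*exp(-2)]   % both approximately 0.5940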

Quiz 3.4
(1) When X is an exponential (λ) random variable, E[X] = 1/λ and Var[X] = 1/λ². Since E[X] = 3 and Var[X] = 9, we must have λ = 1/3. The PDF of X is
   f_X(x) = (1/3)e^{−x/3} for x ≥ 0; 0 otherwise.
(2) We know X is a uniform (a, b) random variable. To find a and b, we apply Theorem 3.6 to write
   E[X] = (a + b)/2 = 3,   Var[X] = (b − a)²/12 = 9.
This implies
   a + b = 6,   b − a = ±6√3.
The only valid solution with a < b is
   a = 3 − 3√3,   b = 3 + 3√3.
The complete expression for the PDF of X is
   f_X(x) = 1/(6√3) for 3 − 3√3 ≤ x < 3 + 3√3; 0 otherwise.

Quiz 3.5
Each of the requested probabilities can be calculated using the Φ(z) function or Q(z) and Table 3.1. We start with the sketches.
(1) The PDFs of X and Y are shown below. [Plot omitted: f_X(x) and the wider f_Y(y).] The fact that Y has twice the standard deviation of X is reflected in the greater spread of f_Y(y). However, it is important to remember that as the standard deviation increases, the peak value of the Gaussian PDF goes down.
(2) Since X is Gaussian (0, 1),
   P[−1 < X ≤ 1] = F_X(1) − F_X(−1) = Φ(1) − Φ(−1) = 2Φ(1) − 1 = 0.6826
(3) Since Y is Gaussian (0, 2),
   P[−1 < Y ≤ 1] = F_Y(1) − F_Y(−1) = Φ(1/σ_Y) − Φ(−1/σ_Y) = 2Φ(1/2) − 1 = 0.383
(4) Again, since X is Gaussian (0, 1),
   P[X > 3.5] = Q(3.5) = 2.33 × 10⁻⁴
(5) Since Y is Gaussian (0, 2),
   P[Y > 3.5] = Q(3.5/2) = Q(1.75) = 1 − Φ(1.75) = 0.0401

Quiz 3.6
The CDF of X is
   F_X(x) = 0 for x < −1; (x + 1)/4 for −1 ≤ x < 1; 1 for x ≥ 1.
[Plot of F_X(x) omitted.]
The following probabilities can be read directly from the CDF:
(1) P[X ≤ 1] = F_X(1) = 1
(2) P[X < 1] = F_X(1⁻) = 1/2
(3) P[X = 1] = F_X(1⁺) − F_X(1⁻) = 1 − 1/2 = 1/2
(4) We find the PDF f_X(x) by taking the derivative of F_X(x). The resulting PDF is
   f_X(x) = 1/4 for −1 ≤ x < 1; (1/2)δ(x − 1) at x = 1; 0 otherwise.
[Plot of f_X(x) omitted.]

Quiz 3.7
(1) Since X is always nonnegative, F_X(x) = 0 for x < 0. Also, F_X(x) = 1 for x ≥ 2 since it is always true that X ≤ 2. Lastly, for 0 ≤ x ≤ 2,
   F_X(x) = ∫₀^x (1 − y/2) dy = x − x²/4
The complete CDF of X is
   F_X(x) = 0 for x < 0; x − x²/4 for 0 ≤ x ≤ 2; 1 for x > 2.
[Plot of F_X(x) omitted.]
(2) The probability that Y = 1 is
   P[Y = 1] = P[X ≥ 1] = 1 − F_X(1) = 1 − 3/4 = 1/4
(3) Since X is nonnegative, Y is also nonnegative. Thus F_Y(y) = 0 for y < 0. Also, because Y ≤ 1, F_Y(y) = 1 for all y ≥ 1. Finally, for 0 < y < 1,
   F_Y(y) = P[Y ≤ y] = P[X ≤ y] = F_X(y) = y − y²/4
Using the CDF F_X(x), the complete expression for the CDF of Y is
   F_Y(y) = 0 for y < 0; y − y²/4 for 0 ≤ y < 1; 1 for y ≥ 1.
[Plot of F_Y(y) omitted.]
As expected, we see that the jump in F_Y(y) at y = 1 is exactly equal to P[Y = 1].
(4) By taking the derivative of F_Y(y), we obtain the PDF f_Y(y). Note that when y < 0 or y > 1, the PDF is zero.
   f_Y(y) = 1 − y/2 + (1/4)δ(y − 1) for 0 ≤ y ≤ 1; 0 otherwise.
[Plot of f_Y(y) omitted.]

Quiz 3.8
(1) P[Y ≤ 6] = ∫_{−∞}^{6} f_Y(y) dy = ∫₀^6 (1/10) dy = 0.6
(2) From Definition 3.15, the conditional PDF of Y given Y ≤ 6 is
   f_{Y|Y≤6}(y) = f_Y(y)/P[Y ≤ 6] for y ≤ 6; 0 otherwise
               = 1/6 for 0 ≤ y ≤ 6; 0 otherwise.
(3) The probability Y > 8 is
   P[Y > 8] = ∫₈^{10} (1/10) dy = 0.2
(4) From Definition 3.15, the conditional PDF of Y given Y > 8 is
   f_{Y|Y>8}(y) = f_Y(y)/P[Y > 8] for y > 8; 0 otherwise
               = 1/2 for 8 < y ≤ 10; 0 otherwise.
(5) From the conditional PDF f_{Y|Y≤6}(y), we can calculate the conditional expectation
   E[Y|Y ≤ 6] = ∫ y f_{Y|Y≤6}(y) dy = ∫₀^6 (y/6) dy = 3
(6) From the conditional PDF f_{Y|Y>8}(y), we can calculate the conditional expectation
   E[Y|Y > 8] = ∫ y f_{Y|Y>8}(y) dy = ∫₈^{10} (y/2) dy = 9

Quiz 3.9
A natural way to produce random variables with PDF f_{T|T>2}(t) is to generate samples of T with PDF f_T(t) and then to discard those samples which fail to satisfy the condition T > 2. Here is a MATLAB function that uses this method:

   function t=t2rv(m)
   i=0; lambda=1/3;
   t=zeros(m,1);
   while (i<m),
       x=exponentialrv(lambda,1);
       if (x>2)
           t(i+1)=x;
           i=i+1;
       end
   end

A second method exploits the fact that if T is an exponential (λ) random variable, then T' = T + 2 has PDF f_{T'}(t) = f_{T|T>2}(t). In this case the command

   t=2.0+exponentialrv(1/3,m)

generates the vector t.

Quiz Solutions – Chapter 4

Quiz 4.1
Each value of the joint CDF can be found by considering the corresponding probability.
(1) F_{X,Y}(−∞, 2) = P[X ≤ −∞, Y ≤ 2] ≤ P[X ≤ −∞] = 0 since X cannot take on the value −∞.
(2) F_{X,Y}(∞, ∞) = P[X ≤ ∞, Y ≤ ∞] = 1. This result is given in Theorem 4.1.
(3) F_{X,Y}(∞, y) = P[X ≤ ∞, Y ≤ y] = P[Y ≤ y] = F_Y(y).
(4) F_{X,Y}(∞, −∞) = P[X ≤ ∞, Y ≤ −∞] = 0 since Y cannot take on the value −∞.

Quiz 4.2
From the joint PMF of Q and G given in the table, we can calculate the requested probabilities by summing the PMF over those values of Q and G that correspond to the event.
(1) The probability that Q = 0 is
   P[Q = 0] = P_{Q,G}(0,0) + P_{Q,G}(0,1) + P_{Q,G}(0,2) + P_{Q,G}(0,3)
            = 0.06 + 0.18 + 0.24 + 0.12 = 0.6
(2) The probability that Q = G is
   P[Q = G] = P_{Q,G}(0,0) + P_{Q,G}(1,1) = 0.06 + 0.12 = 0.18
(3) The probability that G > 1 is
   P[G > 1] = Σ_{g=2}^{3} Σ_{q=0}^{1} P_{Q,G}(q,g) = 0.24 + 0.16 + 0.12 + 0.08 = 0.6
(4) The probability that G > Q is
   P[G > Q] = Σ_{q=0}^{1} Σ_{g=q+1}^{3} P_{Q,G}(q,g)
            = 0.18 + 0.24 + 0.12 + 0.16 + 0.08 = 0.78

Quiz 4.3
The easiest way to calculate the marginal PMFs is to simply sum each row and column of the table of the joint PMF:

   P_{H,B}(h,b)   b=0    b=2    b=4    P_H(h)
   h=−1           0      0.4    0.2    0.6
   h=0            0.1    0      0.1    0.2
   h=1            0.1    0.1    0      0.2
   P_B(b)         0.2    0.5    0.3

(1) For each value of h, the marginal PMF of H is P_H(h) = Σ_b P_{H,B}(h,b); this corresponds to calculating the row sum across the table of the joint PMF.
(2) For each value of b, the marginal PMF of B is P_B(b) = Σ_h P_{H,B}(h,b); this corresponds to the column sum down the table of the joint PMF.

Quiz 4.4
To find the constant c, we apply the requirement that the joint PDF integrate to 1:
   ∫₀² ∫₀¹ c xy dx dy = c ∫₀² y [x²/2]₀¹ dy = (c/2) ∫₀² y dy = (c/4)[y²]₀² = c
Thus c = 1. To calculate P[A], we write
   P[A] = ∫∫_A f_{X,Y}(x,y) dx dy
To integrate over A, we convert to polar coordinates using the substitutions x = r cos θ, y = r sin θ, and dx dy = r dr dθ, yielding
[Figure omitted: A is the quarter of the unit disk in the first quadrant.]
   P[A] = ∫₀^{π/2} ∫₀¹ r² sin θ cos θ · r dr dθ
        = (∫₀¹ r³ dr)(∫₀^{π/2} sin θ cos θ dθ) = [r⁴/4]₀¹ [sin²θ/2]₀^{π/2} = 1/8

Quiz 4.5
By Theorem 4.8, the marginal PDF of X is
   f_X(x) = ∫ f_{X,Y}(x,y) dy
For x < 0 or x > 1, f_X(x) = 0. For 0 ≤ x ≤ 1,
   f_X(x) = (6/5) ∫₀¹ (x + y²) dy = (6/5)[xy + y³/3]_{y=0}^{y=1} = (6/5)(x + 1/3) = (6x + 2)/5
The complete expression for the PDF of X is
   f_X(x) = (6x + 2)/5 for 0 ≤ x ≤ 1; 0 otherwise.
By the same method we obtain the marginal PDF for Y. For 0 ≤ y ≤ 1,
   f_Y(y) = (6/5) ∫₀¹ (x + y²) dx = (6/5)[x²/2 + xy²]_{x=0}^{x=1} = (6/5)(1/2 + y²) = (3 + 6y²)/5
Since f_Y(y) = 0 for y < 0 or y > 1, the complete expression for the PDF of Y is
   f_Y(y) = (3 + 6y²)/5 for 0 ≤ y ≤ 1; 0 otherwise.

Quiz 4.6
(A) The time required for the transfer is T = L/B. For each pair of values of L and B, we can calculate the time T needed for the transfer. We can write these down on the table for the joint PMF of L and B as follows:

   P_{L,B}(l,b)     b=14,400        b=21,600        b=28,800
   l=518,400        0.20 (T=36)     0.10 (T=24)     0.05 (T=18)
   l=2,592,000      0.05 (T=180)    0.10 (T=120)    0.20 (T=90)
   l=7,776,000      0.00 (T=540)    0.10 (T=360)    0.20 (T=270)

From the table, writing down the PMF of T is straightforward:
   P_T(t) = 0.05 for t = 18, 180;
          = 0.1 for t = 24, 120, 360;
          = 0.2 for t = 36, 90, 270;
          = 0 otherwise.
(B) First, we observe that since 0 ≤ X ≤ 1 and 0 ≤ Y ≤ 1, W = XY satisfies 0 ≤ W ≤ 1. Thus F_W(0) = 0 and F_W(1) = 1. To find the CDF F_W(w) = P[W ≤ w] for 0 < w < 1, integrating over the region W ≤ w is fairly complex. The calculus is simpler if we integrate over the region XY > w. Specifically,
[Figure omitted: the region XY > w lies above the curve xy = w in the unit square.]
   F_W(w) = 1 − P[XY > w] = 1 − ∫_w^1 ∫_{w/x}^1 dy dx = 1 − ∫_w^1 (1 − w/x) dx
          = 1 − [x − w ln x]_{x=w}^{x=1} = 1 − (1 − w + w ln w) = w − w ln w
The complete expression for the CDF is
   F_W(w) = 0 for w < 0; w − w ln w for 0 ≤ w ≤ 1; 1 for w > 1.
By taking the derivative of the CDF, we find the PDF is
   f_W(w) = −ln w for 0 < w ≤ 1; 0 otherwise.

Quiz 4.7
(A) It is helpful to first make a table that includes the marginal PMFs:

   P_{L,T}(l,t)   t=40    t=60    P_L(l)
   l=1            0.15    0.1     0.25
   l=2            0.3     0.2     0.5
   l=3            0.15    0.1     0.25
   P_T(t)         0.6     0.4

(1) The expected value of L is
   E[L] = 1(0.25) + 2(0.5) + 3(0.25) = 2
Since the second moment of L is
   E[L²] = 1²(0.25) + 2²(0.5) + 3²(0.25) = 4.5,
the variance of L is
   Var[L] = E[L²] − (E[L])² = 0.5
(2) The expected value of T is
   E[T] = 40(0.6) + 60(0.4) = 48
The second moment of T is
   E[T²] = 40²(0.6) + 60²(0.4) = 2400.
Thus Var[T] = E[T²] − (E[T])² = 2400 − 48² = 96.
(3) The correlation is
   E[LT] = Σ_{t=40,60} Σ_{l=1}^{3} lt P_{L,T}(l,t)
         = 1(40)(0.15) + 2(40)(0.3) + 3(40)(0.15)
           + 1(60)(0.1) + 2(60)(0.2) + 3(60)(0.1) = 96
(4) From Theorem 4.16(a), the covariance of L and T is
   Cov[L,T] = E[LT] − E[L]E[T] = 96 − 2(48) = 0
(5) Since Cov[L,T] = 0, the correlation coefficient is ρ_{L,T} = 0.

(B) As in the discrete case, the calculations become easier if we first calculate the marginal PDFs f_X(x) and f_Y(y). For 0 ≤ x ≤ 1,
   f_X(x) = ∫ f_{X,Y}(x,y) dy = ∫₀² xy dy = [xy²/2]_{y=0}^{y=2} = 2x
Similarly, for 0 ≤ y ≤ 2,
   f_Y(y) = ∫ f_{X,Y}(x,y) dx = ∫₀¹ xy dx = [x²y/2]_{x=0}^{x=1} = y/2
The complete expressions for the marginal PDFs are
   f_X(x) = 2x for 0 ≤ x ≤ 1; 0 otherwise.
   f_Y(y) = y/2 for 0 ≤ y ≤ 2; 0 otherwise.
From the marginal PDFs, it is straightforward to calculate the various expectations.
(1) The first and second moments of X are
   E[X] = ∫ x f_X(x) dx = ∫₀¹ 2x² dx = 2/3,
   E[X²] = ∫ x² f_X(x) dx = ∫₀¹ 2x³ dx = 1/2.
The variance of X is Var[X] = E[X²] − (E[X])² = 1/2 − 4/9 = 1/18.
(2) The first and second moments of Y are
   E[Y] = ∫ y f_Y(y) dy = ∫₀² (1/2)y² dy = 4/3,
   E[Y²] = ∫ y² f_Y(y) dy = ∫₀² (1/2)y³ dy = 2.
The variance of Y is Var[Y] = E[Y²] − (E[Y])² = 2 − 16/9 = 2/9.
(3) The correlation of X and Y is
   E[XY] = ∫∫ xy f_{X,Y}(x,y) dx dy = ∫₀¹ ∫₀² x²y² dy dx = [x³/3]₀¹ [y³/3]₀² = 8/9
(4) The covariance of X and Y is
   Cov[X,Y] = E[XY] − E[X]E[Y] = 8/9 − (2/3)(4/3) = 0
(5) Since Cov[X,Y] = 0, the correlation coefficient is ρ_{X,Y} = 0.

Quiz 4.8
(A) Since the event V > 80 occurs only for the pairs (L,T) = (2,60), (L,T) = (3,40), and (L,T) = (3,60),
   P[A] = P[V > 80] = P_{L,T}(2,60) + P_{L,T}(3,40) + P_{L,T}(3,60) = 0.45
By Definition 4.9, the conditional PMF of L and T given A is
   P_{L,T|A}(l,t) = P_{L,T}(l,t)/P[A] for lt > 80; 0 otherwise.
We can represent this conditional PMF in the following table:

   P_{L,T|A}(l,t)   t=40    t=60
   l=1              0       0
   l=2              0       4/9
   l=3              1/3     2/9

The conditional expectation of V can be found from the conditional PMF:
   E[V|A] = Σ_l Σ_t lt P_{L,T|A}(l,t)
          = (2·60)(4/9) + (3·40)(1/3) + (3·60)(2/9) = 133 1/3
For the conditional variance Var[V|A], we first find the conditional second moment
   E[V²|A] = (2·60)²(4/9) + (3·40)²(1/3) + (3·60)²(2/9) = 18,400
It follows that
   Var[V|A] = E[V²|A] − (E[V|A])² = 622 2/9

(B) For continuous random variables X and Y, we first calculate the probability of the conditioning event:
   P[B] = ∫∫_B f_{X,Y}(x,y) dx dy = ∫_{40}^{60} ∫_{80/y}^{3} (xy/4000) dx dy
        = ∫_{40}^{60} (y/4000)[x²/2]_{x=80/y}^{x=3} dy
        = ∫_{40}^{60} (9y/8000 − 3200/(4000y)) dy
        = 9/8 − (4/5) ln(3/2) ≈ 0.801
The conditional PDF of X and Y is
   f_{X,Y|B}(x,y) = f_{X,Y}(x,y)/P[B] for (x,y) ∈ B; 0 otherwise
                  = Kxy for 40 ≤ y ≤ 60, 80/y ≤ x ≤ 3; 0 otherwise,
where K = (4000 P[B])⁻¹.
The conditional expectation of W = XY given event B is
   E[W|B] = ∫∫ xy f_{X,Y|B}(x,y) dx dy = K ∫_{40}^{60} ∫_{80/y}^{3} x²y² dx dy
          = (K/3) ∫_{40}^{60} y² [x³]_{x=80/y}^{x=3} dy
          = (K/3) ∫_{40}^{60} (27y² − 80³/y) dy
          = (K/3)[9y³ − 80³ ln y]₄₀⁶⁰ ≈ 120.78
The conditional second moment of W given B is
   E[W²|B] = ∫∫ (xy)² f_{X,Y|B}(x,y) dx dy = K ∫_{40}^{60} ∫_{80/y}^{3} x³y³ dx dy
           = (K/4) ∫_{40}^{60} y³ [x⁴]_{x=80/y}^{x=3} dy
           = (K/4) ∫_{40}^{60} (81y³ − 80⁴/y) dy
           = (K/4)[(81/4)y⁴ − 80⁴ ln y]₄₀⁶⁰ ≈ 16,116.10
It follows that the conditional variance of W given B is
   Var[W|B] = E[W²|B] − (E[W|B])² ≈ 1528.30

Quiz 4.9
(A) (1) The joint PMF of A and B can be found from the marginal and conditional PMFs via
   P_{A,B}(a,b) = P_{B|A}(b|a) P_A(a).
Incorporating the information from the given conditional PMFs can be confusing, however. Consequently, we can note that A has range S_A = {0, 2} and B has range S_B = {0, 1}. A table of the joint PMF will include all four possible combinations of A and B. The general form of the table is:

   P_{A,B}(a,b)   b=0                   b=1
   a=0            P_{B|A}(0|0)P_A(0)    P_{B|A}(1|0)P_A(0)
   a=2            P_{B|A}(0|2)P_A(2)    P_{B|A}(1|2)P_A(2)
Substituting values from P_{B|A}(b|a) and P_A(a), we have

   P_{A,B}(a,b)   b=0           b=1
   a=0            (0.8)(0.4)    (0.2)(0.4)
   a=2            (0.5)(0.6)    (0.5)(0.6)

or

   P_{A,B}(a,b)   b=0     b=1
   a=0            0.32    0.08
   a=2            0.3     0.3

(2) Given the conditional PMF P_{B|A}(b|2), it is easy to calculate the conditional expectation
   E[B|A = 2] = Σ_{b=0}^{1} b P_{B|A}(b|2) = (0)(0.5) + (1)(0.5) = 0.5
(3) From the joint PMF P_{A,B}(a,b), we can calculate the conditional PMF
   P_{A|B}(a|0) = P_{A,B}(a,0)/P_B(0)
                = 0.32/0.62 = 16/31 for a = 0;
                = 0.3/0.62 = 15/31 for a = 2;
                = 0 otherwise.
(4) We can calculate the conditional variance Var[A|B = 0] using the conditional PMF P_{A|B}(a|0). First we calculate the conditional expected value
   E[A|B = 0] = Σ_a a P_{A|B}(a|0) = 0(16/31) + 2(15/31) = 30/31
The conditional second moment is
   E[A²|B = 0] = Σ_a a² P_{A|B}(a|0) = 0²(16/31) + 2²(15/31) = 60/31
The conditional variance is then
   Var[A|B = 0] = E[A²|B = 0] − (E[A|B = 0])² = 960/961

(B) (1) The joint PDF of X and Y is
   f_{X,Y}(x,y) = f_{Y|X}(y|x) f_X(x) = 6y for 0 ≤ y ≤ x, 0 ≤ x ≤ 1; 0 otherwise.
(2) From the given conditional PDF f_{Y|X}(y|x),
   f_{Y|X}(y|1/2) = 8y for 0 ≤ y ≤ 1/2; 0 otherwise.

16 0.2. we observe that PY (1) = 0. we can conclude that X and Y are dependent. PX. for 1/2 ≤ x ≤ 1. x2 ) = f X 1 (x1 ) f X 2 (x2 ) = (1 − x1 /2)(1 − x2 /2) 0 ≤ x1 ≤ 2. b) PDF.1. f Y (1/2) = Thus.X 2 (x1 . y) = PX (x)PY (y). g) = PQ (q)PG (g) for every pair q.1/2 ( ) d x = 1 1/2 6(1/2) d x = 3/2 (9) (4) From the pervious part. 1). In this case.10 0.2. f X 1 . AR. we calculate the marginal PMFs from the table of the joint PMF PQ.24 0. 1) = 0 = PX (0) PY (1) (1) (1 − 1/2)2 1 = 12 48 (11) Since we have found a pair x.Name:joey iwatsuru Email:joeyiwat@yahoo. (2) For random variables Q and G from Quiz 4. However.Y (x. Var [X |Y = 1/2] = Quiz 4. 0 ≤ x2 ≤ 2 0 otherwise 30 (2) (3) Address:104 pine meadows loop. Unlike X and Y in part (a).08 0.09 and PX (0) = 0.01. independence requires that either PX (x) = 0 or PY (y) = 0. f X |Y (x|1/2) = f X. y) = 0.12 0. we integrate the joint PDF. To ﬁnd f Y (1/2).60 0.12 0. Note that whenever PX. 1/2)/ f Y (1/2).04 0.40 q=1 PG (g) 0. we see that given Y = 1/2. PQ.18 0.10 (A) (1) For random variables X and Y from Example 4.20 Careful study of the table will verify that PQ.06 0.G (q.Y (x. by the deﬁnition of the uniform (a.30 0.Y (x. there are no obvious pairs q.G (q.40 0.Y (x. (B) (1) Since X 1 and X 2 are independent. the conditional PDF of X is uniform (1/2. g) in Quiz 4. hot springs.Y (0.com Phone:5017621195 (3) The conditional PDF of Y given X = 1/2 is f X |Y (x|1/2) = f X. g that fail the independence requirement.G (q. y such that PX. Hence Q and G are independent. g. 1/2) 6(1/2) =2 = f Y (1/2) 3/2 (10) ∞ −∞ f X. g) g = 0 g = 1 g = 2 g = 3 PQ (q) q=0 0. us (United States) Zip Code:71901 . it is not obvious whether they are independent. Thus.

17. us (United States) Zip Code:71901 . σ2 = σY = 1. (1) (2) By Theorem 4. and that σ1 = σ X = 1.Name:joey iwatsuru Email:joeyiwat@yahoo. from the problem statement. ˜ (4) When Y = y = 2. From the PDF f X (x). X 2 ) is found by observing that Z ≤ z iff X 1 ≤ z and X 2 ≤ z.29. The CDF of Z = max(X 1 . we need to ﬁnd the CDF of each X i . f X. FZ (z) = (z − z 2 /4)2 (7) The complete expression for the CDF of Z is ⎧ z<0 ⎨ 0 2 /4)2 0 ≤ z ≤ 2 FZ (z) = (z − z ⎩ 1 z>1 (8) Quiz 4.11 This problem just requires identifying the various terms in Deﬁnition 4.com Phone:5017621195 (2) Let FX (x) denote the CDF of both X 1 and X 2 . µ1 = µ X = 0. Speciﬁcally. y) = √ 3π 2 (3) µ2 = µY = 0. we see that E[X |Y = 2] = 1 and Var[X |Y = 2] = 3/4. The conditional PDF of X given Y = 2 is simply the Gaussian PDF 1 2 e−2(x−1) /3 . we know that ρ = 1/2. hot springs. the CDF is ⎧ x <0 ⎨ 0 x 2 /4 0 ≤ x ≤ 2 FX (x) = f X (y) dy = (6) x−x ⎩ −∞ 1 x >2 Thus for 0 ≤ z ≤ 2. P [Z ≤ z] = P [X 1 ≤ z. (2) (1) Applying these facts to Deﬁnition 4. AR.30. we have 1 2 2 e−2(x −x y+y )/3 . That is. X 2 ≤ z] = P [X 1 ≤ z] P [X 2 ≤ z] = [FX (z)]2 (4) (5) To complete the problem. f X |Y (x|2) = √ 3π/2 (5) 31 Address:104 pine meadows loop.17 and Theorem 4.Y (x. the conditional expected value and standard deviation of X given Y = y are 2 E [X |Y = y] = y/2 σ X = σ1 (1 − ρ 2 ) = 3/4.

PX (x) = 1/4 x = 1. px=0. hot springs. 32 Address:104 pine meadows loop.com Phone:5017621195 Quiz 4. 2. .2.12 One straightforward method is to follow the approach of Example 4.4]. and an independent uniform (0. Also. .1)).m). Y has a discrete uniform (1. given X = x.y’]. Instead. PY |X (y|x) = 1/x y = 1. 1) random variable U . y=ceil(x.Name:joey iwatsuru Email:joeyiwat@yahoo. x=finiterv(sx. 4. This observation prompts the following program: function xy=dtrianglerv(m) sx=[1. x 0 otherwise (1) Given X = x. x) PMF. xy=[x’.28. . us (United States) Zip Code:71901 . we can generate a sample value of Y with a discrete uniform (1. .1). AR. we use an alternate approach.25*ones(4. That is. 0 otherwise. x) PMF via Y = xU .*rand(m.px. First we observe that X has the discrete uniform (1.3. 3. 4) PMF.

x3 ) = ∞ −∞ ∞ −∞ ∞ −∞ f X (x) d x3 = f X (x) d x1 = f X (x) d x2 = 1 6 d x3 = 6(1 − x2 ).} 0 otherwise (5) Quiz 5.X 3 (x2 . Thus. .X 3 (x2 .2 By deﬁnition of A. y2 . Y1 = X 1 . f X 2 . each Yi must be a strictly positive integer. hot springs. 2.1 We ﬁnd P[C] by integrating the joint PDF over the region of interest. x2 ) = 0 unless 0 ≤ x 1 ≤ x2 ≤ 1. Y2 = X 2 − X 1 and Y3 = X 3 − X 2 . (1) (2) =4 0 y2 dy2 0 y4 dy4 Quiz 5. 6 d x1 = 6x2 . 2.X 2 (x1 . Within these constraints.}. X 2 − X 1 = y2 . for y1 . . we must keep in mind that f X 1 . x3 ) = 0 unless 0 ≤ x2 ≤ x3 ≤ 1. . Speciﬁcally. X 3 = y3 + y2 + y1 ] = (1 − p)3 p y1 +y2 +y3 (1) (2) (3) (4) By deﬁning the vector a = 1 1 1 . P [C] = 0 1/2 y2 1/2 y4 dy2 0 1/2 dy1 0 dy4 0 1/2 4dy3 = 1/4. us (United States) Zip Code:71901 . X 3 − X 2 = y3 ] = P [X 1 = y1 .X 3 (x1 .Name:joey iwatsuru Email:joeyiwat@yahoo. Y3 = y3 ] = P [X 1 = y1 .X 3 (x1 . x2 ) = f X 2 . AR.X 2 (x1 . we have f X 1 . Y2 = y2 . .3 First we note that each marginal PDF is nonzero only if any subset of the xi obeys the ordering contraints 0 ≤ x 1 ≤ x2 ≤ x3 ≤ 1. .com Phone:5017621195 Quiz Solutions – Chapter 5 Quiz 5. 6 d x2 = 6(x3 − x1 ). y2 . . PY (y) = P [Y1 = y1 . Since 0 < X 1 < X 2 < X 3 . and that f X 1 . the complete expression for the joint PMF of Y is PY (y) = (1 − p) p a y y1 . y3 ∈ {1. (1) (2) (3) x2 x2 0 x3 x1 In particular. y3 ∈ {1. x3 ) = f X 1 . x3 ) = 0 unless 0 ≤ x 1 ≤ 33 Address:104 pine meadows loop. X 2 = y2 + y1 .

w) = 4 0 ≤ v1 ≤ v2 ≤ 1. x3 ) = 6(1 − x2 ) 0 ≤ x1 ≤ x2 ≤ 1 0 otherwise 6x2 0 ≤ x2 ≤ x3 ≤ 1 0 otherwise 6(x3 − x1 ) 0 ≤ x1 ≤ x3 ≤ 1 0 otherwise (4) (5) (6) Now we can ﬁnd the marginal PDFs. x2 ) d x2 = f X 2 .com Phone:5017621195 x3 ≤ 1. x2 ) = f X 2 .X 3 (x2 .X 2 (x1 . We can separate these constraints by creating the vectors V= The joint PDF of V and W is f V. AR.X 3 (x2 . Y4 (1) 34 Address:104 pine meadows loop.Name:joey iwatsuru Email:joeyiwat@yahoo. x3 ) = f X 1 . the components have dependencies as a result of the ordering constraints Y1 ≤ Y2 and Y3 ≤ Y4 . When 0 ≤ xi ≤ 1 for each xi . hot springs. Y2 W= Y3 . x3 ) d x2 = 1 x1 1 6(1 − x2 ) d x2 = 3(1 − x1 )2 6x2 d x3 = 6x2 (1 − x2 ) 2 6x2 d x2 = 3x3 (7) (8) (9) x2 x3 0 The complete expressions are f X 1 (x1 ) = f X 2 (x2 ) = f X 3 (x3 ) = 3(1 − x1 )2 0 ≤ x1 ≤ 1 0 otherwise 6x2 (1 − x2 ) 0 ≤ x2 ≤ 1 0 otherwise 2 3x3 0 ≤ x3 ≤ 1 0 otherwise (10) (11) (12) Quiz 5. f X 1 (x1 ) = f X 2 (x2 ) = f X 3 (x3 ) = ∞ −∞ ∞ −∞ ∞ −∞ f X 1 .W (v.4 In the PDF f Y (y). The complete expressions are f X 1 . x3 ) d x3 = f X 2 .X 2 (x1 . us (United States) Zip Code:71901 . 0 ≤ w1 ≤ w2 ≤ 1 0 otherwise (2) Y1 .X 3 (x1 .X 3 (x2 .

1)x3 x1 + x2 + x3 = 5.3. . . . f W (w) = = 4(1 − w1 ) dw1 = 2 f V. 0. AR.x3 (0. us (United States) Zip Code:71901 . 0. If we view each test as a trial with success probability P[L] = 0. . 1. PX i (x) = pix (1 − pi )5−x x = 0. w) dv1 dv2 1 0 1 v1 (6) (7) 4 dv2 dv1 = 2 It follows that V and W have PDFs f V (v) = 2 0 ≤ v1 ≤ v2 ≤ 1 .W (v.3)x1 (0. . p2 = 0. each test is a subexperiment with three possible outcomes: L. For 0 ≤ v1 ≤ v2 ≤ 1. Quiz 5. hot springs. 0. In ﬁve trials.x2 . f V (v) = = 0 1 f V.1) random variable. for p1 = 0.6)x2 (0. p) = (5. for 0 ≤ w1 ≤ w2 ≤ 1. conﬁrming that V and W are independent vectors. we see that X 1 is a binomial (n. .Name:joey iwatsuru Email:joeyiwat@yahoo. .6 and p3 = 0.W (v. That is.1.com Phone:5017621195 We must verify that V and W are independent.W (v. 0 otherwise f W (w) = 2 0 ≤ w 1 ≤ w2 ≤ 1 0 otherwise (8) It is easy to verify that f V. 1. however it is simpler to just start from ﬁrst principles and observe that X 1 is the number of occurrences of L in ﬁve independent tests.3. PX (x) = (1) x1 .5 (A) Referring to Theorem 1.19. 5 0 otherwise 35 5 x (2) Address:104 pine meadows loop. X 2 is a binomial (5. w) = f V (v) f W (w). 5} ⎩ 0 otherwise We can ﬁnd the marginal PMF for each X i from the joint PMF PX (x). A and R. x3 ∈ {0. . Similarly.6) random variable and X 3 is a binomial (5. x2 . the vector X = X 1 X 2 X 3 indicating the number of outcomes of each subexperiment has the multinomial PMF ⎧ 5 ⎨ x1 . w) dw1 dw2 1 w1 1 0 (3) (4) (5) 4 dw2 dw1 = Similarly.3) random variable.

6)2 (0.10 to write f Y (y) = y1 − 4 y2 − 4 y3 − 4 1 . since X 1 + X 2 + X 3 = 5 and since each X i is non-negative. PW (2) = PX (1.6)(0. . we can apply Theorem 5.1458 = (3) (4) (5) In addition. 6x 2 (1 − x) d x = 1/2. 3x 3 d x = 3/4.3(0.32 (0. 2.com Phone:5017621195 From the marginal PMFs.32 (0. we use 3x(1 − x)2 d x = 1/4.6 We start by ﬁnding the components E[X i ] = the marginal PDFs f X i (x) found in Quiz 5.1)2 + 0. we see that X 1 . we must use Theorem 5.1)2 + 0. the constraints on y resulting from the constraints 0 ≤ X 1 ≤ X 2 ≤ X 3 can be much more complicated.6)2 (0. PW (0) = PW (1) = 0. and w = 5. Hence.486 PW (4) = PX 1 (4) + PX 2 (4) + PX 3 (4) = 0. us (United States) Zip Code:71901 . X 2 and X 3 are not independent.3: E [X 1 ] = 0 1 ∞ −∞ x f X i (x) d x of µ X . X 2 = w. 2) + PX (2. Furthermore.0802 (B) Since each Yi = 2X i + 4. or X 3 = w occurs. 1) 5![0.1)] 2!2!1! = 0. 1. Quiz 5. we need to ﬁnd E[X i X j ] for all i and j. We start with 36 Address:104 pine meadows loop. (1) (2) (3) E [X 2 ] = 0 1 E [X 3 ] = 0 1 To ﬁnd the correlation matrix R X . 2. In particular. the event W = w occurs if and only if one of the mutually exclusive events X 1 = w. w = 4.288 PW (5) = PX 1 (5) + PX 2 (5) + PX 3 (5) = 0. f 3 X 2 2 2 2 (1/8)e−(y3 −4)/2 4 ≤ y1 ≤ y2 ≤ y3 = 0 otherwise (9) (10) (6) (7) (8) Note that for other matrices A. To do so. AR.6 to ﬁnd the PMF of W . Thus. for w = 3. hot springs.Name:joey iwatsuru Email:joeyiwat@yahoo. PW (3) = PX 1 (3) + PX 2 (3) + PX 3 (3) = 0. 2) + PX (2.

6x 3 (1 − x) d x = 3/10. d x1 d x2 d x1 (7) (8) (9) (10) (11) (12) (13) (14) 6x1 x2 (1 − x2 ) d x2 E [X 2 X 3 ] = 0 1 = E [X 1 X 3 ] = 0 2 4 [3x2 − 3x2 ] d x2 = 2/5 1 x1 1 6x1 x3 (x3 − x1 ) d x3 d x1 .com Phone:5017621195 the second moments: E 2 X1 = 0 1 3x 2 (1 − x)2 d x = 1/10. hot springs. Summarizing the results. AR. x2 ) . 3x 4 d x = 3/5. 1 x2 1 0 2 6x2 x3 d x3 d x2 x1 x2 f X 1 . the cross terms are E [X 1 X 2 ] = = = 0 ∞ ∞ −∞ −∞ 1 1 0 1 x1 3 4 [x1 − 3x1 + 2x1 ] d x1 = 3/20. us (United States) Zip Code:71901 .3. 1/4 3/8 ⎦ = 80 1 2 3 3/8 9/16 (17) (18) Address:104 pine meadows loop. 1/5 2/5 3/5 Vector X has covariance matrix C X = R X − E [X] E [X] ⎡ ⎤ ⎡ ⎤ 1/10 3/20 1/5 1/4 ⎣3/20 3/10 2/5⎦ − ⎣1/2⎦ = 1/5 2/5 3/5 3/4 ⎡ ⎤ ⎡ 1/10 3/20 1/5 1/16 ⎣3/20 3/10 2/5⎦ − ⎣ 1/8 = 1/5 2/5 3/5 3/16 37 (15) (16) 1/4 1/2 3/4 ⎤ ⎡ ⎤ 3 2 1 1/8 3/16 1 ⎣ 2 4 2⎦ . x3 =1 x3 =x1 = 0 1 3 2 2 (2x1 x3 − 3x1 x3 ) d x1 = 0 1 2 4 [2x1 − 3x1 + x1 ] d x1 = 1/5. (4) (5) (6) 2 E X2 = 2 E X3 = 1 0 1 0 Using marginal PDFs from Quiz 5. X has correlation matrix ⎡ ⎤ 1/10 3/20 1/5 R X = ⎣3/20 3/10 2/5⎦ .Name:joey iwatsuru Email:joeyiwat@yahoo.X 2 (x1 .

just a Gaussian random variable.00002844263128 0. Thus.16 tells us that Y is a 1 dimensional Gaussian vector.0000 0.18 that µ X = b and that C X = AA = 2 1 1 −1 2 1 5 1 = . the ﬁrst two lines generate the 31 × 31 covariance matrix CT.7 We observe that X = AZ + b where A= 2 1 .m: >> julytemps([70 75 80 85 90 95]) ans = 0. we observe that Y = AT where A = 1/31 1/31 · · · 1/31 .16.e.com Phone:5017621195 This problem shows that even for fairly simple joint PDFs. In julytemps.99999999922010 0. us (United States) Zip Code:71901 . A=ones(31. Var[Y ] = ACT A . Since T is a Gaussian random vector. or CT .m. [D1 D2]=ndgrid((1:31). 1 −1 b= 2 .1)/31.99997155736872 0.0221 0.0000 Note that P[T ≤ 70] is not actually zero and that P[T ≤ 90] is not actually 1.0. The ﬁnal step is to use the (·) function to calculate P[Y < T ].5000 0. function p=julytemps(T). Its just that the M ATLAB’s short format output.8 First./(1+abs(D1-D2)).Name:joey iwatsuru Email:joeyiwat@yahoo. p=phi((T-80)/sqrt(CY)).0000.9779 1. Here is the long format output: >> format long >> julytemps([70 75 80 85 90 95]) ans = Columns 1 through 4 0.0000 1. rounds off those probabilities. Next we calculate Var[Y ]. by Theorem 5. 0 (1) It follows from Theorem 5. CY=(A’)*CT*A. CT=36.50000000000000 0. hot springs. The expected value of Y is µY = µT = 80. Quiz 5. 1 −1 1 2 (2) Quiz 5.97792616932396 38 Address:104 pine meadows loop. The covariance matrix of Y is 1 × 1 and is just equal to Var[Y ]. i.02207383067604 Columns 5 through 6 0. Theorem 5.. invoked with the command format short. AR. Here is the output of julytemps. computing the covariance matrix by calculus can be a time consuming task.(1:31)).

. 1 + |i − j| (1) If we write out the elements of the covariance matrix. j) = c|i− j| = 36 . .Name:joey iwatsuru Email:joeyiwat@yahoo. A=ones(31. in this problem. .1)/31.. . However./(1+abs(0:30)).0. ⎢ c1 c0 CT = ⎢ . ⎥. the i. ⎥ ⎢ . c=36. AR. . ⎥ . p=phi((T-80)/sqrt(CY)). CY=(A’)*CT*A. function p=julytemps2(T).com Phone:5017621195 The ndgrid function is a useful to way calculate many covariance matrices.. c30 · · · c1 c0 (2) This covariance matrix is known as a symmetric Toeplitz matrix. c1 ⎦ . we see that ⎡ ⎤ c0 c1 · · · c30 . We will see in Chapters 9 and 11 that Toeplitz covariance matrices are quite common. hot springs. us (United States) Zip Code:71901 . ⎣ . C X has a special structure. jth element is CT (i. The function julytemps2 use the toeplitz to generate the correlation matrix CT . In fact.. CT=toeplitz(c).. 39 Address:104 pine meadows loop. M ATLAB has a toeplitz function for generating them.

5. the random variables K 1 . this integral is easy to evaluate.5.Name:joey iwatsuru Email:joeyiwat@yahoo. First. For w > 0. K n are independent. hot springs.com Phone:5017621195 Quiz Solutions – Chapter 6 Quiz 6.1 Let K 1 . . (4) 40 Address:104 pine meadows loop. . Hence.5 − (2. K n denote a sequence of iid random variables each with PMF PK (k) = 1/4 k = 1. Var[Wn ] = Var[K 1 ] + · · · + Var[K n ] = 1. By Theorem 6.2 Random variables X and Y have PDFs f X (x) = 3e−3x x ≥ 0 0 otherwise f Y (y) = 2e−2y y ≥ 0 0 otherwise (1) (6) (4) (2) (3) Since X and Y are nonnegative. . . . That is.3. . us (United States) Zip Code:71901 . . by Theorem 6. f W (w) = e−3w e y w 0 = 6 e−2w − e−3w (3) Since f W (w) = 0 for w < 0.5 Thus the variance of K i is Var[K i ] = E K i2 − (E [K i ])2 = 7. AR. the PDF of W = X + Y is f W (w) = ∞ −∞ f X (w − y) f Y (y) dy = 6 0 w e−3(w−y) e−2y dy (2) Fortunately. . a conmplete expression for the PDF of W is f W (w) = 6e−2w 1 − e−w 0 w ≥ 0. . . 4 0 otherwise (1) We can write Wn in the form of Wn = K 1 + · · · + K n . W = X + Y is nonnegative. we note that the ﬁrst two moments of K i are E [K i ] = (1 + 2 + 3 + 4)/4 = 2.5)2 = 1. the expected value of Wn is E [Wn ] = E [K 1 ] + · · · + E [K n ] = n E [K i ] = 2. .25n Quiz 6. otherwise.25 Since E[K i ] = 2.5 E K i2 = (12 + 22 + 32 + 42 )/4 = 7.5n (5) Since the rolls are independent. . the variance of the sum equals the sum of the variances.

we continue to take derivatives: E K2 = E K3 E K4 d 2 φ K (s) ds 2 d 3 φ K (s) = ds 3 d 4 φ K (s) = ds 4 = 0. Theorem 6. hot springs.3 The MGF of K is 4 φ K (s) = E es K == k=0 (0.2(es + 16e2s + 81e3s + 256e4s ) s=0 s=0 Quiz 6. us (United States) Zip Code:71901 .8 says the MGF of J is φ J (s) = (φ K (s))m = (2) (B) Since the set of α j X j are independent Gaussian random variables.4 (A) Each K i has MGF φ K (s) = E es K i = es (1 − ens ) es + e2s + · · · + ens = n n(1 − es ) ems (1 − ens )m n m (1 − es )m (1) Since the sequence of K i is independent.com Phone:5017621195 Quiz 6.2(1 + 2 + 3 + 4) = 2 s=0 (2) (3) To ﬁnd higher-order moments.Name:joey iwatsuru Email:joeyiwat@yahoo. AR.10 says that W is a Gaussian random variable.2)esk = 0. Since the expectation of the sum equals the sum of the expectations: E [W ] = α E [X 1 ] + α 2 E [X 2 ] + · · · + α n E [X n ] = 0 41 (3) Address:104 pine meadows loop.2 1 + es + e2s + e3s + e4s (1) We ﬁnd the moments by taking derivatives. Thus to ﬁnd the PDF of W . Theorem 6.8 (4) (5) (6) (7) = 0. we need only ﬁnd the expected value and variance.2(es + 4e2s + 9e3s + 16e4s ) s=0 s=0 =6 = 20 = 70.2(es + 8e2s + 27e3s + 64e4s ) s=0 s=0 = 0.2(es + 2e2s + 3e3s + 4e4s ) ds Evaluating the derivative at s = 0 yields E [K ] = d φ K (s) ds = 0. The ﬁrst derivative of φ K (s) is d φ K (s) = 0.

6 to write Var[W ] = α 2 − α 2n+2 [1 + n(1 − α 2 )] (1 − α 2 )2 (6) (4) (5) 2 With E[W ] = 0 and σW = Var[W ]. hot springs. each X i has MGF φ X (s) and random variable N has MGF φ N (s) where φ X (s) = 1 . The corresponding PDF is f R (r ) = (1/5)e−r/5 r ≥ 0 0 otherwise (4) This quiz is an example of the general result that a geometric sum of exponential random variables is an exponential random variable. AR. R has MGF φ R (s) = φ N (ln φ X (s)) = Substituting the expression for φ X (s) yields φ R (s) = 1 5 1 5 1 5 φ X (s) 1 − 4 φ X (s) 5 (2) −s . 1 − 4 es 5 (1) From Theorem 6. the variance of the sum equals the sum of the variances: Var[W ] = α 2 Var[X 1 ] + α 4 Var[X 2 ] + · · · + α 2n Var[X n ] = α 2 + 2(α 2 )2 + 3(α 2 )3 + · · · + n(α 2 )n Deﬁning q = α 2 . we can write the PDF of W as f W (w) = 1 2 2π σW e−w 2 /2σ 2 W (7) Quiz 6. 42 Address:104 pine meadows loop.1. 1−s φ N (s) = 1 s 5e . we see that R has the MGF of an exponential (1/5) random variable.1.12. us (United States) Zip Code:71901 .com Phone:5017621195 Since the α j X j are independent.Name:joey iwatsuru Email:joeyiwat@yahoo.5 (1) From Table 6. (3) (2) From Table 6. we can use Math Fact B.

1 to look up (0. hot springs.Name:joey iwatsuru Email:joeyiwat@yahoo.4013 Note that we used Table 3. the standard deviation of A is σ A = 12 (5) To use the central limit theorem.6 (1) The expected access time is E [X ] = ∞ −∞ x f X (x) d x = 0 12 x d x = 6 msec 12 (1) (2) The second moment of the access time is E X2 = ∞ −∞ x 2 f X (x) d x = 0 12 x2 d x = 48 12 (2) The variance of the access time is Var[X ] = E[X 2 ] − (E[X ])2 = 48 − 36 = 12. we use the central limit theorem and Table 3. (6) (7) (8) (9) (5) (4) (3) (6) Once again.0227 (10) (11) (12) 43 Address:104 pine meadows loop. we can write A = X 1 + X 2 + · · · + X 12 Since the expectation of the sum equals the sum of the expectations.5987 = 0.1 to estimate P [A < 48] = P 48 − E [A] A − E [A] < σA σA 48 − 72 ≈ 12 = 1 − (2) = 1 − 0. E [A] = E [X 1 ] + · · · + E [X 12 ] = 12E [X ] = 72 msec (4) Since the X i are independent. we write P [A > 75] = 1 − P [A ≤ 75] 75 − E [A] A − E [A] ≤ =1− P σA σA 75 − 72 ≈1− 12 = 1 − 0. us (United States) Zip Code:71901 . AR.25).9773 = 0.com Phone:5017621195 Quiz 6. (3) Using X i to denote the access time of block i. Var[A] = Var[X 1 ] + · · · + Var[X 12 ] = 12 Var[X ] = 144 Hence.

X 2 .16666) − 1 = 0.11. we found that the sum of three iid exponential (λ) random variables is an Erlang (n = 3. (1) In Theorem 6. The arrival time of the third train is W = X 1 + X 2 + X 3.Name:joey iwatsuru Email:joeyiwat@yahoo. X 3 are iid exponential (λ) random variables.66 × 10−5 √ 12 12 (3) 44 Address:104 pine meadows loop. From Appendix A.5 − 36 − 3 3 = 2 (2. λ) random variable.7 Random variable K n has a binomial distribution for n trials and success probability P[V ] = 3/4.com Phone:5017621195 Quiz 6.8 The train interarrival times X 1 .1 yields P [30 ≤ K 48 ≤ 42] ≈ Recalling that (−x) = 1 − 42 − 36 − 3 (x). we can use the De Moivre-Laplace approximation to estimate P [30 ≤ K 48 ≤ 42] ≈ 42 + 0. we have (3) 30 − 36 3 = (2) − (−2) (2) (1) P [30 ≤ K 48 ≤ 42] ≈ 2 (2) − 1 = 0. we ﬁnd that W has expected value and variance E [W ] = 3/λ = 6 Var[W ] = 3/λ2 = 12 (2) (1) By the Central Limit Theorem.9545 (4) Since K 48 is a discrete random variable. us (United States) Zip Code:71901 .5 − 36 30 − 0. (1) The expected number of voice calls out of 48 calls is E[K 48 ] = 48P[V ] = 36. P [W > 20] = P √ W −6 20 − 6 > √ ≈ Q(7/ 3) = 2. hot springs. AR. (2) The variance of K 48 is Var[K 48 ] = 48P [V ] (1 − P [V ]) = 48(3/4)(1/4) = 9 Thus K 48 has standard deviation σ K 48 = 3.9687 (4) (5) Quiz 6. (3) Using the ordinary central limit theorem and Table 3.

PW.0028 (9) (10) Although the Chernoff bound is relatively weak in that it overestimates the probability by roughly a factor of 12.m sx=0:100.5. AR. P [W > 20] = 1 − FW (20) = e−10 1 + 10 102 + 1! 2! = 61e−10 = 0. it should be apparent that the finitepmf function is implementing the convolution of the two PMFs. hot springs.py). [PX.SY]=ndgrid(sx. A graph of the PMF PW (w) appears in Figure 2 With some thought. PW=PX. for λ = 1/2 and w = 20.0338 s=7/20 (7) (3) Theorem 3.*PY.PY]=ndgrid(px. we set the derivative of h(s) to zero: −20(1 − 2s)3 e−20s + 6e−20s (1 − 2s)2 d h(s) = =0 ds (1 − 2s)6 (6) This implies 20(1 − 2s) = 6 or s = 7/20. the CDF of the Erlang (λ. [SX.11 says that for any w > 0.sx). we note that the MGF of W is φW (s) = The Chernoff bound states that P [W > 20] ≤ min e−20s φ X (s) = min s≥0 s≥0 λ λ−s 3 = 1 (1 − 2s)3 e−20s (1 − 2s)3 (4) (5) To minimize h(s) = e−20s /(1 − 2s)3 .’\itP_W(w)’). it is a valid bound. pmfplot(sw.9 One solution to this problem is to follow the approach of Example 6. pw=finitepmf(SW. the Central Limit Theorem approximation grossly underestimates the true probability.pw. Quiz 6.19: %unifbinom100. Applying s = 7/20 into the Chernoff bound yields P [W > 20] ≤ e−20s (1 − 2s)3 = (10/3)3 e−7 = 0.Name:joey iwatsuru Email:joeyiwat@yahoo.sy=0:100. px=binomialpmf(100.sy).’\itw’. sw=unique(SW).0. SW=SX+SY.sw). py=duniformpmf(0. By contrast. 45 Address:104 pine meadows loop.100.com Phone:5017621195 (2) To use the Chernoff bound. us (United States) Zip Code:71901 .sy). 3) random variable W satisﬁes 2 (λw)k e−λw FW (w) = 1 − (8) k! k=0 Equivalently.

Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195

0.01 0.008 PW(w) 0.006 0.004 0.002 0 0 20 40 60 80 100 w 120 140 160 180 200

Figure 2: From Quiz 6.9, the PMF PW (w) of the independent sum of a binomial (100, 0.5) random variable and a discrete uniform (0, 100) random variable.

46

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195

**Quiz Solutions – Chapter 7
**

Quiz 7.1 An exponential random variable with expected value 1 also has variance 1. By Theorem 7.1, Mn (X ) has variance Var[Mn (X )] = 1/n. Hence, we need n = 100 samples. Quiz 7.2 The arrival time of the third elevator is W = X 1 + X 2 + X 3 . Since each X i is uniform (0, 30), (30 − 0)2 Var [X i ] = = 75. (1) E [X i ] = 15, 12 Thus E[W ] = 3E[X i ] = 45, and Var[W ] = 3 Var[X i ] = 225. (1) By the Markov inequality, P [W > 75] ≤ (2) By the Chebyshev inequality, P [W > 75] = P [W − E [W ] > 30] ≤ P [|W − E [W ]| > 30] ≤ 225 Var [W ] 1 = = 2 900 4 30 (3) (4) E [W ] 45 3 = = 75 75 5 (2)

Quiz 7.3 Deﬁne the random variable W = (X − µ X )2 . Observe that V100 (X ) = M100 (W ). By Theorem 7.6, the mean square error is E (M100 (W ) − µW )2 = Observe that µ X = 0 so that W = X 2 . Thus, µW = E X

2

Var[W ] 100

(1)

=

1 −1 1 −1

x 2 f X (x) d x = 1/3 x 4 f X (x) d x = 1/5

(2) (3)

E W2 = E X4 =

Therefore Var[W ] = E[W 2 ] − µ2 = 1/5 − (1/3)2 = 4/45 and the mean square error is W 4/4500 = 0.000889.

47

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195

Quiz 7.4 Assuming the number n of samples is large, we can use a Gaussian approximation for Mn (X ). SinceE[X ] = p and Var[X ] = p(1 − p), we apply Theorem 7.13 which says that the interval estimate Mn (X ) − c ≤ p ≤ Mn (X ) + c (1) has conﬁdence coefﬁcient 1 − α where α =2−2 √ c n . p(1 − p)

(2)

We must ensure for every value of p that 1 − α ≥ 0.9 or α ≤ 0.1. Equivalently, we must have √ c n ≥ 0.95 (3) p(1 − p) √ for every value of p. Since (x) is an increasing function of x, we must satisfy c n ≥ 1.65 p(1 − p). Since p(1 − p) ≤ 1/4 for all p, we require that 1.65 0.41 c≥ √ = √ . 4 n n The 0.9 conﬁdence interval estimate of p is 0.41 0.41 Mn (X ) − √ ≤ p ≤ Mn (X ) + √ . n n (5) (4)

√ For the 0.99 conﬁdence interval, we have α ≤ 0.01, implying (c n/( p(1− p))) ≥ 0.995. √ This implies c n√ 2.58 p(1 − p). Since p(1 − p) ≤ 1/4 for all p, we require that ≥ c ≥ (0.25)(2.58)/ n. In this case, the 0.99 conﬁdence interval estimate is 0.645 0.645 Mn (X ) − √ ≤ p ≤ Mn (X ) + √ . n n Note that if M100 (X ) = 0.4, then the 0.99 conﬁdence interval estimate is 0.3355 ≤ p ≤ 0.4645. The interval is wide because the 0.99 conﬁdence is high. Quiz 7.5 Following the approach of bernoullitraces.m, we generate m = 1000 sample paths, each sample path having n = 100 Bernoulli traces. at time k, OK(k) counts the fraction of sample paths that have sample mean within one standard error of p. The program bernoullisample.m generates graphs the number of traces within one standard error as a function of the time, i.e. the number of trials in each trace. 48 (7) (6)

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

2.0.p). 49 Address:104 pine meadows loop. x=reshape(bernoullirv(p.5): 1 0.Name:joey iwatsuru Email:joeyiwat@yahoo. us (United States) Zip Code:71901 .4 0 10 20 30 40 50 60 70 80 90 100 As we would expect.m. MN=cumsum(x). The following graph was generated by bernoullisample(100./nn.7 0.n. The unusual sawtooth pattern. stderrmat=stderr*ones(1. nn=(1:n)’*ones(1. the fraction of traces within one standard error approaches 2 (1) − 1 ≈ 0. OK=sum(abs(MN-p)<stderrmat.8 0.68. though perhaps unexpected. plot(1:n.’-s’). is examined in Problem 7.6 0. stderr=sqrt(p*(1-p)).9 0. AR.5.m). as m gets large. hot springs.m).2)/m.5000.com Phone:5017621195 function OK=bernoullisample(n.OK./sqrt((1:n)’).m*n).5 0.m).

FX (x) = FX i (x) 15 (2) = 1 − e−x 15 (3) To design a signiﬁcance test. This rule simpliﬁes to 106 − 104 k ∈ A0 if k ≤ k = = 214. (3) k ∈ A1 otherwise. hot springs. A reasonable choice is to reject the hypothesis if X is too small. · · · . otherwise (1) (2) 0 Since the two hypotheses are equally likely. X 15 ≤ x] = [P [X i ≤ x]]15 . if we observe X < 1. 976 photons. . That is.33 Hence. For a signiﬁcance level of α = 0. the CDF of the maximum of X 1 .2 From the problem statement.6. . the MAP and ML tests are the same. (4) Thus if we observe at least 214. . 50 Address:104 pine meadows loop.Name:joey iwatsuru Email:joeyiwat@yahoo. AR. then we accept hypothesis H1 .7. each X i has PDF and CDF f X i (x) = e−x x ≥ 0 0 otherwise FX i (x) = 0 x <0 1 − e−x x ≥ 0 (1) Hence. From Theorem 8. .01 It is straightforward to show that r = − ln 1 − (0. . . the ML hypothesis rule is k ∈ A0 if PK |H0 (k) ≥ PK |H1 (k) . we obtain α = P [X ≤ r ] = (1 − e−r )15 = 0. This implies that for x ≥ 0. .1 From the problem statement. 1. then we reject the hypothesis. otherwise k = 0. Quiz 8. X 15 obeys FX (x) = P [X ≤ x] = P [X 1 ≤ x. . the conditional PMFs of K are PK |H0 (k) = PK |H1 (k) = 104k e−10 k! 4 (4) (5) 0 106k e−10 k! 6 k = 0.01. we must choose a rejection region for X . let R = {X ≤ r }. 975. . .33.01)1/15 = 1. 1.com Phone:5017621195 Quiz Solutions – Chapter 8 Quiz 8. X 2 ≤ x. ln 100 ∗ k ∈ A1 otherwise. us (United States) Zip Code:71901 .

FM2(:. FM5(:.0.1)).1). Given H0 .m. a symbol error occurs when si is transmitted but (X 1 .2’.3) ylabel(’P_{MISS}’). FM5=sqdistroc(v. .3.T). the existing program sqdistor already calculates this miss probability PMISS = P01 and the false alarm probability PFA = P10 .0.TT]=ndgrid(x.T). xlabel(’P_{FA}’).m. ’\it d=0.2. %add N volts.m. The modiﬁed program.’:k’).1). 51 Address:104 pine meadows loop. x= -v+randn(m. [XX. X 2 > 0|H0 ] = P E/2 + N1 > 0. the probability 2 PERR = 1 − P [C] = 1 − E 2σ 2 (5) Quiz 8. Here is the modiﬁed code: function FM=sqdistroc(v.T).1).1’.4 To generate the ROC.FM5(:. P01=sum((XX+d*(XX.0.m.’\it d=0. Since N1 and N2 are iid Gaussian (0.m. FM=[P10(:) P01(:)]. σ ) random variables.1. FM1=sqdistroc(v. P[C|H0 ] = P[C|Hi ] for all i..T) %square law distortion recvr %P(error) for m bits tested %transmit v volts or -v volts.T(:)). FM2(:. E/2 + N2 > 0 (1) Because of the symmetry of the signals.2). FM2=sqdistroc(v. N is Gauss(0. we have √ √ P [C] = P [C|H0 ] = P E/2 + N1 > 0 P E/2 + N2 > 0 (2) √ 2 (3) = P N1 > − E/2 √ 2 − E/2 (4) = 1− σ Since (−x) = 1 − of error is (x)..1).m calls sqdistroc three times to generate a plot that compares the receiver performance for the three requested values of d. legend(’\it d=0. P10=sum((XX+d*(XX. loglog(FM1(:.com Phone:5017621195 Quiz 8.. it is easier to calculate the probability of a correct decision.’-k’.2).m is essentially the same as sqdistor except the output is a matrix FM whose columns are the false alarm and miss probabilities.3 For the QPSK system. .2). the conditional probability of a correct decision is √ √ P [C|H0 ] = P [X 1 > 0.T(:))..Name:joey iwatsuru Email:joeyiwat@yahoo. Next.TT]=ndgrid(x. hot springs. us (United States) Zip Code:71901 .. AR.ˆ2)< TT). sqdistroc.FM1(:..1)/m. function FM=sqdistrocplot(v. otherwise 0 %FM = [P(FA) P(MISS)] x=(v+randn(m.ˆ2)>TT). Equivalently.’--k’. This implies the probability of a correct decision is P[C] = P[C|H0 ]. FM=[FM1 FM2 FM5].3’.1)/m. [XX.. For a QPSK system.T).1) %add d(v+N)ˆ2 distortion %receive 1 if x>T. X 2 ) ∈ A j for some j = i. we have P[C] = 2( E/2σ 2 ). the program sqdistrocplot.d.

Name:joey iwatsuru Email:joeyiwat@yahoo.1 d=0. us (United States) Zip Code:71901 . hot springs. Figure 3: The receiver operating curve for the communications system of Quiz 8.4 with squared distortion.T).2 d=0.100000.3 −5 10 10 −4 10 −3 10 PFA −2 10 −1 10 0 T=-3:0.T). sqdistrocplot(3.1:3.com Phone:5017621195 To see the effect of d. AR. sqdistrocplot(3. the commands T=-3:0. generated the plot shown in Figure 3.1:3.100000. 52 Address:104 pine meadows loop. 10 0 10 −1 10 PMISS 10 10 −2 −3 −4 10 −5 d=0.

us (United States) Zip Code:71901 . For 0 ≤ x ≤ 1. AR.com Phone:5017621195 Quiz Solutions – Chapter 9 Quiz 9. f X (x) = x 1 2(y + x) dy = y 2 + 2x y y=1 y=x = 1 + 2x − 3x 2 (4) (5) For 0 ≤ x ≤ 1. the conditional PDF of Y given X is f Y |X (y|x) = 2(y+x) 1+2x−3x 2 0 x ≤y≤1 otherwise (6) (4) The MMSE estimate of Y given X = x is y M (x) = E [Y |X = x] = ˆ x 1 2y 2 + 2x y dy 1 + 2x − 3x 2 y=1 y=x (7) (8) (9) 2y 3 /3 + x y 2 = 1 + 2x − 3x 2 = 2 + 3x − 5x 3 3 + 6x − 9x 2 53 Address:104 pine meadows loop. we need the marginal PDF f X (x).Name:joey iwatsuru Email:joeyiwat@yahoo. we calculate the marginal PDF for 0 ≤ y ≤ 1: f Y (y) = 0 y 2(y + x) d x = 2x y + x 2 x=y x=0 = 3y 2 (1) This implies the conditional PDF of X given Y is f X |Y (x|y) = f X. (3) To obtain the conditional PDF f Y |X (y|x).Y (x. hot springs.1 (1) First. y) = f Y (y) 2 3y + 2x 3y 2 0 0≤x ≤y otherwise (2) (2) The minimum mean square error estimate of X given Y = y is x M (y) = E [X |Y = y] = ˆ 0 y 2x 2 2x + 2 3y 3y d x = 5y/9 (3) ˆ Thus the MMSE estimator of X given Y is X M (Y ) = 5Y /9.

R = √ √ σT Cov [T. the optimum linear estimate of T given R is σT ˆ TL (R) = ρT.4.3 When R = r . AR. hot springs.R (R − E [R]) + E [T ] σR Since E[R] = E[T ] = 0 and ρT.4. R] = Var[T ] = 9. (6) By Theorem 9. the mean square error of the linear estimate is 2 e∗ = Var[T ](1 − ρT. us (United States) Zip Code:71901 . (4) From Deﬁnition 4.com Phone:5017621195 Quiz 9. ˆ TL (R) = Hence a ∗ = 3/4 and b∗ = 0.R = σT /σ R . the conditional PDF of X = Y −40−40 log10 r is Gaussian with expected value −40 − 40 log10 r and variance 64. The conditional PDF of X given R is 1 2 f X |R (x|r ) = √ e−(x+40+40 log10 r ) /128 128π 54 (1) Address:104 pine meadows loop.R ) = 9(1 − 3/4) = 9/4 L 2 σT (5) σR R= 2 2 σT 2 2 σT + σ X R= 3 R 4 (6) (7) Quiz 9. Cov [T. the correlation coefﬁcient of T and R is ρT. R] = E [T R] = E [T (T + X )] = E T 2 + E [T X ] (3) (2) (1) Since T and X are independent and have zero expected value.Name:joey iwatsuru Email:joeyiwat@yahoo.2 (1) Since the expectation of the sum equals the sum of the expectations. E[T X ] = E[T ]E[X ] = 0 and E[T 2 ] = Var[T ].8. R] = = 3/2 σR Var[R] Var[T ] (4) (5) From Theorem 9. Thus Cov[T. the variance of the sum R = T + X is Var[R] = Var[T ] + Var[X ] = 9 + 3 = 12 (3) Since T and R have expected values E[R] = E[T ] = 0. E [R] = E [T ] + E [X ] = 0 (2) Since T and X are independent.

1)10−x/40 m ˆ (3) (4) If the result doesn’t look correct.3 (0.2 to write the ML estimate of R given X = x as rML (x) = arg max f X |R (x|r ) ˆ r ≥0 (2) We observe that f X |R (x|r ) is maximized when the exponent (x + 40 + 40 log10 r )2 is minimized. This minimum occurs when the exponent is zero.R (x. if x = −120dB. the MAP estimate of R given X = x is the value of r that maximizes f X.6 m. for very low signal strengths.R (x. That is. r ) = f X |R (x|r ) f R (r ) = 106 32π 1 √ r e−(x+40+40 log10 r ) 2 /128 (5) From Theorem 9. r ). When the measured signal ˆ strength is not too low.3 −x/40 x ≥ −156. the complete description of the MAP estimate is rMAP (x) = ˆ 1000 x < −156. hot springs. Hence.R (x. Setting the derivative of f X.6% larger than the ML estimate. This reﬂects the fact that large values of R are a priori more probable than small values.6. yielding log10 r = −1 − x/40 or rML (x) = (0. (6) rMAP (x) = arg max f X. us (United States) Zip Code:71901 .R (x. ˆ For the MAP estimate. we can use Deﬁnition 9. r ) ˆ 0≤r ≤1000 Note that we have included the constraint r ≤ 1000 in the maximization to highlight the fact that under our probability model.3 dB. then rMAP (−120) = 123. r ) with respect to r to zero yields e−(x+40+40 log10 r ) Solving for r yields r = 10 1 25 log10 e −1 2 /128 1− 80 log10 e (x + 40 + 40 log10 r ) = 0 128 (7) 10−x/40 = (0. When x ≤ −156.1236)10 (9) For example. the above estimate will exceed 1000 m. However. we observe that the joint PDF of X and R is f X. the MAP estimate is 23.1236)10−x/40 (8) This is the MAP estimate of R given X = x as long as r ≤ 1000 m. AR. R ≤ 1000 m. This corresponds to a distance estimate of rML (−120) = 100 m. the MAP estimate takes into account that the distance can never exceed 1000 m.com Phone:5017621195 From the conditional PDF f X |R (x|r ). note that a typical ﬁgure for the signal strength might be x = −120 dB. which is not possible in our probability model.Name:joey iwatsuru Email:joeyiwat@yahoo. 55 Address:104 pine meadows loop.

9 1.1 (2) (3) It follows that a ∗ = 1/1. it follows that E[Y] = 0. AR.com Phone:5017621195 Quiz 9. −0. Y2 ] 1 =√ σ X 2 σY2 1.7. E[XW ] = E[X]E[W ] = 0. Note that X and W have correlation matrices RX = 1 −0.1 11 (5) (2) Since Y = X + W and E[X] = E[W] = 0. Var[Y2 ] b ∗ = µ X 2 − a ∗ µ Y2 . Finally.4. To apply Theorem 9. we calculate the correlation coefﬁcient ρ X 2 . RY = E YY = E (X + W)(X + W ) = E XX + XW + WX + WW . Y2 ] = E [X 2 Y2 ] = E [X 2 (X 2 + W2 )] = E X 2 = 1 2 2 Var[Y2 ] = Var[X 2 ] + Var[W2 ] = E X 2 + E W2 = 1. (7) (8) Because X and W are independent. we need to ﬁnd RY and RYX 2 . Because µ X 2 = µY2 = 0.Name:joey iwatsuru Email:joeyiwat@yahoo. hot springs.9 . Y2 ] . us (United States) Zip Code:71901 .Y2 = The expected square error is 2 e∗ = Var[X 2 ](1 − ρ X 2 . it follows that b∗ = 0. 0 0. the LMSE estimate of X 2 given Y2 is X 2 (Y2 ) = a ∗ Y2 + b∗ where a∗ = Cov [X 2 .Y2 ) = 1 − L Cov [X 2 . −0.7. E[WX ] = 0. n = 2 and we wish to estimate X 2 given the observation vector Y = Y1 Y2 . (1) Because E[X] = E[Y] = 0.1 (9) Address:104 pine meadows loop. to compute the expected square error. we need to ﬁnd RYX 2 = E [YX 2 ] = E [Y1 X 2 ] E [(X 1 + W1 )X 2 ] = .9 1 RW = 0.1 0 .1 (4) 1 1 = = 0.1 −0. E [Y2 X 2 ] E [(X 2 + W2 )X 2 ] 56 (10) 1. Thus we can apply Theorem 9. 2 Cov [X 2 .0909 1.1 (6) In terms of Theorem 9. Similarly.1.7.9 . This implies RY = E XX + E WW = RX + RW = In addition.4 ˆ (1) From Theorem 9.

hot springs.9 RYX 2 = = .com Phone:5017621195 Since X and W are independent vectors.725 (12) Therefore.225Y1 + 0. Thus. This implies RYX = E [YX ] = E [(1X + W)X ] = 1E X 2 = 1.725Y2 .X 2 − a2rY2 . E[W1 X 2 ] = E[W1 ]E[X 2 ] = 0 and E[W2 X 2 ] = 0.Name:joey iwatsuru Email:joeyiwat@yahoo. ˆ a = R−1 RYX 2 = Y −0. (11) 2 1 E X2 By Theorem 9. j) = c|i− j|−1 . by ˆ ˆ ˆ Theorem 9. us (United States) Zip Code:71901 .0725. Y also has zero expected value. ˆ a = R−1 RYX = 11 + RW Y and the optimal linear estimator is ˆ X L (Y) = 1 11 + RW The mean square error is ˆ e∗ = Var[X ] − a RYX = 1 − 1 11 + RW L −1 −1 −1 (1) (2) (3) (4) 1 (5) Y (6) 1 (7) Now we note that RW has i. Since X and W are independent.7. This problem is atypical in that one does not usually get L 57 Address:104 pine meadows loop. the optimum linear estimator of X 2 given Y1 and Y2 is ˆ ˆ X L = a Y = −0. AR.7.X 2 = 0. The mean square error is ˆ Var [X 2 ] − a RYX 2 = Var [X ] − a1rY1 . X L (Y) = a Y where a = R−1 RYX . Thus. Thus E[X 1 X 2 ] −0. By the same reasoning.225 0. The question we must address is what value c minimizes e∗ . Y E[WX ] = 0 and E[X W ] = 0 . the correlation matrix of Y is RY = E YY = E (1X + W)(1 X + W ) = 11 E X 2 + 1E X W + E [WX ] 1 + E WW = 11 + RW Note that 11 is a 20 × 20 matrix with every entry equal to 1.5 Since X and W have zero expected value. jth entry RW (i. (14) (13) Quiz 9.

RY=(v1*(v1’)) +RW. for k=1:length(c).01:0. We note that the answer is not obviously apparent from Equation (7).01:0. If this argument is not clear. xlabel(’c’). the noises Wi have high variance and we would expect our estimator to be poor. us (United States) Zip Code:71901 .99. However. v1=ones(20.6 0. To ﬁnd the optimal value of c.5 c 1 As we see in the graph. On the other hand. >> mquiz9minc(c) ans = 0.Name:joey iwatsuru Email:joeyiwat@yahoo. RW=toeplitz(c. we observe that Var[Wi ] = RW (i.af]=mquiz9(c). In this case. when c is small. hot springs. i) = 1/c.2 0 0. we will see that the answer is somewhat instructive. mse=1-((v1’)*af). Note in mquiz9 that v1 corresponds to the vector 1 of all ones.optk]=min(msec).msec). Thus.af]=mquiz9(c(k)). AR. [msec(k). In particular.ˆ((0:19)-1)).8 e* L 0. our 20 measurements will be all the same and one measurement is as good as 20 measurements. The following commands ﬁnds the minimum c and also produces the following graph: >> c=0. [msemin. function [mse. we write a M ATLAB function mquiz9(c) to calculate the MSE for a given c and second function that ﬁnds plots the MSE for a range of values of c. function cmin=mquiz9minc(c).ylabel(’e_Lˆ*’). msec=zeros(size(c)). end plot(c.4500 1 0.com Phone:5017621195 to choose the correlation structure of the noise. cmin=c(optk). af=(inv(RY))*v1. both small values and large values of c result in large MSE. consider the extreme case in which every Wi and W j have correlation coefﬁcient ρi j = 1.1). if c is large Wi and W j are highly correlated and the separate measurements of X are very dependent. This would suggest that large values of c will also result in poor MSE.4 0. 58 Address:104 pine meadows loop.

discrete valued process.01) dr = 0. continuous valued process.2 (2) 59 Address:104 pine meadows loop. discrete valued process. (2) If at every moment in time. X N . Quiz 10.2 (1) We obtain a continuous time. the call completion times of the H calls that hang up Quiz 10. continuous valued process when we record the temperature as a continuous waveform over time. s) is • m(0. . the number of calls that hang up during the experiment • D1 . (4) Rounding the samples in part (c) to the nearest integer degree yields a discrete time. . A correct answer speciﬁes enough random variables to specify the sample path exactly. the interarrival times of the N new arrivals • H . (3) If we sample the process in part (a) every T seconds. the number of ongoing calls at the start of the experiment • N . D H . we round the temperature to the nearest degree. One choice for an alternate set of random variables that would specify m(t. . then we obtain a continuous time. . .Name:joey iwatsuru Email:joeyiwat@yahoo. AR. .01 950 ≤ r ≤ 1050 0 otherwise (1) The probability that a test produces a 1% resistor is p = P [990 ≤ R ≤ 1010] = 1010 990 (0. hot springs.1 There are many correct answers to this question. us (United States) Zip Code:71901 . then we obtain a discrete time.3 (1) Each resistor has resistance R in ohms with uniform PDF f R (r ) = 0. . . s).com Phone:5017621195 Quiz Solutions – Chapter 10 Quiz 10. the number of new calls that arrive during the experiment • X 1 .

Thus E [T2 |T1 = 10] = E [T1 |T1 = 10] + E T |T1 = 10 = 10 + E T = 10 + 5 = 15 (5) (6) Quiz 10.1. independent of any other resistor. the joint PDF of X = X 1 · · · X n is k (1) f X (x) = f X (1). . .8)4 (0. E[T1 ] = 1/ p = 5. exactly t resistors are tested. Consequently. the number of additional trials needed to ﬁnd the second 1% resistor once again has a geometric PMF with expected value 1/ p since each independent trial is a success with probability p.08192. T2 = T1 + T where T is independent and identically distributed to T1 .2.5. the probability the ﬁrst 1% resistor is found in exactly ﬁve seconds is PT1 (5) = (0. . Each resistor is a 1% resistor with probability p. . . In this problem. . each X i has PDF 1 2 f X (i) (x) = √ e−x /2 2π By Theorem 10. (5) Note that once we ﬁnd the ﬁrst 1% resistor. 9 otherwise (4) Since p = 0. t − 1 followed by a success on trial t. AR.Name:joey iwatsuru Email:joeyiwat@yahoo. . the number of 1% resistors found has the binomial PMF PN (t) (n) = p n (1 − p)t−n n = 0. Hence.11. hot springs. A success occurs on a trial with probability p if we ﬁnd a 1% resistor.. us (United States) Zip Code:71901 . . . 1) random variable. a geometric random variable with success probability p has expected value 1/ p. . .2) = 0. t 0 otherwise t n (3) (3) First we will ﬁnd the PMF of T1 .4 Since each X i is a N (0.. 2. . (4) From Theorem 2. The ﬁrst 1% resistor is found at time T1 = t if we observe failures on trials 1.com Phone:5017621195 (2) In t seconds. That is. . .. This problem is easy if we view each resistor test as an independent trial. T1 has the geometric PMF PT1 (t) = (1 − p)t−1 p t = 1. 1. xn ) = i=1 f X (xi ) = 1 2 2 e−(x1 +···+xn )/2 n/2 (2π ) (2) 60 Address:104 pine meadows loop..X (n) (x1 . . just as in Example 2.

61 Address:104 pine meadows loop. . .com Phone:5017621195 Quiz 10. (2) Quiz 10. Thus X (t) is a Brownian motion process with variance Var[X (t)] = t. . otherwise (1) Since M1 and M2 are independent. Let X 1 . AR. denote the interarrival times of the N (t) process.13 states that W (t) − W (s) is Gaussian with expected value E [X (t) − X (s)] = and variance E (W (t) − W (s))2 = E (W (t) − W (s))2 α(t − s) = α α (3) E [W (t) − W (s)] =0 √ α (2) Consider s ≤ s √ t.11. λ) random variable. we note that for t > s. . has the same PDF as Y1 (t). This implies < √ [W (t) − W (s)]/ α is independent of W (s )/ α for all s ≥ s . X 2 . Since X 1 and X 2 are independent exponential (λ) random variables. 000. 1. see Theorem 6.7 First. the time until the ﬁrst arrival of the N (t) is Y1 = X 1 + X 2 . PM1 . Y1 is an Erlang (n = 2.M2 (m 1 . us (United States) Zip Code:71901 . . . . This implies M1 and M2 are independent Poisson random variables each with PMF PMi (m) = α m e−α m! 0 m = 0. . m 2 ) = PM1 (m 1 ) PM2 (m 2 ) = ⎪ ⎪ ⎩ 0 otherwise. Thus N (t) is not a Poisson process. . . the expected number of packets in each hour is E[Mi ] = α = 36. 2. Quiz 10. Since Yi (t). X (t) − X (s) = W (t) − W (s) √ α (1) Since W (t) − W (s) is a Gaussian random variable. Since one hour equals 3600 sec and the Poisson process has a rate of 10 packets/sec.Name:joey iwatsuru Email:joeyiwat@yahoo. ⎪ m 1 !m 2 ! ⎪ ⎨ m 2 = 0. Since s ≥ s . . the joint PMF of M1 and M2 is ⎧ α m 1 +m 2 e−2α m 1 = 0.6 To answer whether N (t) is a Poisson process. . . hot springs. we look at the interarrival times. 1. the ith interarrival time of the N (t) process.5 The ﬁrst and second hours are nonoverlapping intervals. W (t) − W (s) is independent of W (s ). . X (t) − X (s) is independent of X (s ) for all s ≥ s . Theorem 3. we can conclude that the interarrival times of N (t) are not exponential random variables. That is. Since we count only evennumbered arrival for N (t). 1.

. . . (2) R2 (τ ) = e−τ also is valid. Quiz 10. Since RY (t. we have RY (t. .... . .. xm ) = f X n1 +k . . .. f X n1 . . X 1 . . E[X (t)N (t )] = E[X (t)]E[N (t )] = 0.X nm +k (x1 ... 2 (3) R3 (τ ) = e−τ cos τ is not valid because R3 (−2π ) = e2π cos 2π = e2π > 1 = R3 (0) (4) R4 (τ ) = e−τ sin τ also cannot be an autocorrelation function because 2 (2) R4 (π/2) = e−π/2 sin π/2 = e−π/2 > 0 = R4 (0) (3) 62 Address:104 pine meadows loop. . hot springs. . .14..Name:joey iwatsuru Email:joeyiwat@yahoo. τ ) + R N (t.. xm ) = f X (x1 ) f X (x2 ) · · · f X (xm ) We can conclude that the iid random sequence is stationary. we observe that since X (t) and N (t) are independent and since N (t) has zero expected value.. xm ) = f X (x1 ) f X (x2 ) · · · f X (xm ) Similarly. . . f X n1 +k . τ ) = E[Y (t)Y (t + τ )]. AR.9 From Deﬁnition 10. . .8 First we ﬁnd the expected value µY (t) = µ X (t) + µ N (t) = µ X (t).com Phone:5017621195 Quiz 10.. . X 2 . .12: R(τ ) ≥ 0 R(τ ) = R(−τ ) |R(τ )| ≤ R(0) (1) (3) (2) (1) (1) R1 (τ ) = e−|τ | meets all three conditions and thus is valid. (1) To ﬁnd the autocorrelation..10 We must check whether each function R(τ ) meets the conditions of Theorem 10. .X nm (x1 . .. (2) (3) (4) Quiz 10. xm ) Since the random sequence is iid. . is a stationary random sequence if for all sets of time instants n 1 . . .X nm +k (x1 . .X nm (x1 . τ ).. . us (United States) Zip Code:71901 . f X n1 . n m and time offset k.. for time instants n 1 + k. . τ ) = E [(X (t) + N (t)) (X (t + τ ) + N (t + τ ))] = E [X (t)X (t + τ )] + E [X (t)N (t + τ )] + E [X (t + τ )N (t)] + E [N (t)N (t + τ )] = R X (t. n m + k. .

12 From the problem statement. In this case. we can check whether they are jointly wide sense stationary by seeing if R X Y (t. x1 ) = 1 (2π )n/2 [det (CX )]1/2 1 3π 2 e− 3 2 2 2 x0 −x0 x1 +x1 (5) 1 exp − x C−1 x X 2 (6) (7) =√ 63 Address:104 pine meadows loop. τ ) = E [X (t)Y (t + τ )] = E [X (t)X (−t − τ )] = R X (t − (−t − τ )) = R X (2t + τ ) (4) (5) (6) Since R X Y (t. suppose R X (τ ) = e−|τ | so that samples of X (t) far apart in time have almost no correlation.X (t+1) (x0 . we conclude that X (t) and Y (t) are not jointly wide sense stationary.Name:joey iwatsuru Email:joeyiwat@yahoo. AR. In this case.11 (1) The autocorrelation of Y (t) is RY (t. we can conclude that Y (t) is a wide sense stationary process. (2) Since X (t) and Y (t) are both wide sense stationary processes. τ ) = E [Y (t)Y (t + τ )] = E [X (−t)X (−t − τ )] = R X (−t − (−t − τ )) = R X (τ ) (1) (2) (3) Since E[Y (t)] = E[X (−t)] = µ X . Y (t) = X (−t) and X (t) become less and less correlated. as t gets larger. Quiz 10. us (United States) Zip Code:71901 . τ ) depends on both t and τ . hot springs. In fact. R X Y (t. we see that by viewing a process backwards in time.com Phone:5017621195 Quiz 10. τ ) is just a function of τ . To see why this is. E [X (t)] = E [X (t + 1)] = 0 E [X (t)X (t + 1)] = 1/2 Var[X (t)] = Var[X (t + 1)] = 1 The Gaussian random vector X = X (t) X (t + 1) sponding inverse CX = Since 1 1/2 1/2 1 C−1 = X (1) (2) (3) has covariance matrix and corre- 4 1 −1/2 1 3 −1/2 (4) 4 4 2 1 −1/2 x0 2 x − x0 x+ x1 = 1 x1 3 −1/2 3 0 the joint PDF of X (t) and X (t + 1) is the Gaussian vector PDF x C−1 x = x0 x1 X f X (t). we see the same second order statistics.

Call blocking can be implemented by setting the service time of the call to zero so that the call departs as soon as it arrives. the number of ongoing calls. Examine the head-of-schedule event. and schedule a departure to occur at time t + Sn . A simulation of the system moves from one time instant to the next by maintaining a chronological schedule of future events (arrivals and departures) to be executed. at discrete time instances. AR. In particular. we need to know that M(t). we know the system state cannot change until the next scheduled event. Delete the head-of-schedule event and go to step 2. • If the head of schedule event is a departure.28 admits a deceptively simple solution in terms of the vector of arrivals A and the vector of departures D. Quiz 10. 2.13 The simple structure of the switch simulation of Example 10. hot springs. 3. 64 Address:104 pine meadows loop. do not schedule a departure event. namely arrivals and departures. admit the arrival. we cannot generate these vectors all at once. reduce the system state n by 1. The blocking switch is an example of a discrete event system. we must block the call. – If M(t) = c. – If M(t) < c. check the state M(t). increase the system state n by 1. After the head-of-schedule event is completed and any new events (departures in this system) are scheduled. Start at time t = 0 with an empty system. us (United States) Zip Code:71901 . where Sk is an exponential (λ) random variable. • When the head-of-schedule event is the kth arrival is at time t. Otherwise. when an arrival occurs at time t.Name:joey iwatsuru Email:joeyiwat@yahoo. The system evolves via a sequence of discrete events. satisﬁes M(t) < c = 120. block the arrival. With the introduction of call blocking.13. an exponential (λ) random variable. The program simply executes the event at the head of the schedule. The logic of such a simulation is 1. when M(t) = c. Schedule the ﬁrst arrival to occur at S1 .com Phone:5017621195 120 100 80 M(t) 60 40 20 0 0 10 20 30 40 50 t 60 70 80 90 100 Figure 4: Sample path of 100 minutes of the blocking switch of Quiz 10.

The rest of the gap between 0. Thus for all times t(i) between the current head-of-schedule event and the next.000 minutes. the output [m a b] is such that m(i) is the number of ongoing calls at time t(i) while a and b are the number of admits and blocks. generated a simulation lasting 5.m). When the program is passed a vector t. it is common to implement the event schedule as a linked list where each item in the list has a data structure indicating an event timestamp and the type of the event. we use the vector t as the set of time instances at which we inspect the system state. the discrete event simulation is widely-used and often very efﬁcient simulation method.b]=simblockswitch(10.000 minute simulation. we set m(i) to the current switch state.t).0. AR.” From the Erlang-B formula.com Phone:5017621195 Thus we know that M(t) will stay the same until then. In our simulation. we will learn that the exact blocking probability is given by Equation (12. The 5. In this case. (1) Pb = a+b In Chapter 12.1. One reason our simulation underestimates the blocking probability is that in a 5. this says that roughly the ﬁrst two percent of the simulation time was unusual. roughly the ﬁrst 100 minutes are needed to load up the switch since the switch is idle when the simulation starts at time t = 0. Note that in Chapter 12. Thus this would account for only part of the disparity.000 minute full simulation produced a=49658 admitted calls and b=239 blocked calls. Chapter 12 develops techniques for analyzing and simulating systems described by Markov chains that are much simpler than the discrete event simulation technique shown here. hot springs. [m. we will learn that the blocking switch is an example of an M/M/c/c queue. for very complicated systems. Nevertheless.a. or event(i)=-1 if the ith scheduled event is a departure. The complete program is shown in Figure 5.0048 and 0. In M ATLAB.1:5000. a result known as the “Erlang-B formula. event(i)=1 if the ith scheduled event is an arrival.120. A sample path of the ﬁrst 100 minutes of that simulation is shown in Figure 4.0057 is that a simulation that includes only 239 blocks is not all that likely to give a very accurate result for the blocking probability.Name:joey iwatsuru Email:joeyiwat@yahoo. us (United States) Zip Code:71901 . We can estimate the probability a call is blocked as b ˆ = 0. a simple (but not elegant) way to do this is to have maintain two vectors: time is a list of timestamps of scheduled events and event is a the list of event types. we can calculate that the exact blocking probability is Pb = 0. However. In most programming languages. a kind of Markov chain. plot(t. 65 Address:104 pine meadows loop.0057.0048.93). The following instructions t=0:0.

event(1)=[ ]. n=0.1) ].admits. timenow=time(1). b4depart=time<depart. % next arrival b4arrival=time<arrival.Name:joey iwatsuru Email:joeyiwat@yahoo.mu. immed departure disp(sprintf(’Time %10. us (United States) Zip Code:71901 . hot springs. else blocks=blocks+1.1).3d Admits %10d Blocks %10d’. time=[time(b4arrival) arrival time(˜b4arrival)]. end elseif (eventnow==-1) %departure n=n-1. timenow. %first event is an arrival timenow=0. %one more block.13. event=[ 1 ]. AR. 66 Address:104 pine meadows loop. n=n+1. %total # blocks admits=0. tmax=max(t). end end Figure 5: Discrete event simulation of the blocking switch of Quiz 10.blocks]=simblockswitch(lam.. time=[time(b4depart) depart time(˜b4depart)]. event=[event(b4arrival) 1 event(˜b4arrival)]. eventnow=event(1).admits. while (timenow<tmax) M((timenow<=t)&(t<time(1)))=n.t).. event=[event(b4depart) -1 event(˜b4depart)]. depart=timenow+exponentialrv(mu. % clear current event if (eventnow==1) % arrival arrival=timenow+exponentialrv(lam. if n<c %call admitted admits=admits+1.c. % # in system time=[ exponentialrv(lam. blocks=0.com Phone:5017621195 function [M. %total # admits M=zeros(size(t))..blocks)). time(1)= [ ].1).

Just to be safe though. hot springs. we 2 can double check. The variance of Yn is Var[Yn ] = E[Yn ] = RY [0] = 1. µY = µ X ∞ −∞ h(t)dt = 2 0 ∞ e−t dt = 2 (1) Since R X (τ ) = δ(τ ).2. For τ < 0. we have RY (τ ) = 0 ∞ e−u e−τ −u du = e−τ 0 ∞ 1 e−2u du = e−τ 2 (3) For τ < 0. us (United States) Zip Code:71901 .com Phone:5017621195 Quiz Solutions – Chapter 11 Quiz 11. we can deduce that RY (τ ) = 1 e−|τ | by symmetry. AR. 67 Address:104 pine meadows loop.1 By Theorem 11.5(1 + −1) = 0 (1) The autocorrelation of the output is 1 1 RY [n] = i=0 j=0 h i h j R X [n + i − j] 1 n=0 0 otherwise (2) (3) = 2R X [n] − R X [n − 1] − R X [n + 1] = 2 Since µY = 0.2 The expected value of the output is ∞ ∞ −τ h(u)h(τ + u) du = ∞ −τ 1 e−u e−τ −u du = eτ 2 (4) (5) µY = µ X n=−∞ h n = 0. the autocorrelation function of the output is RY (τ ) = ∞ −∞ ∞ h(u) −∞ h(v)δ(τ + u − v) dv du = ∞ −∞ h(u)h(τ + u) du (2) For τ > 0. RY (τ ) = Hence.Name:joey iwatsuru Email:joeyiwat@yahoo. 1 RY (τ ) = e−|τ | 2 Quiz 11.

In this problem. RX = I.13 with µX = 0 and A = H. we obtain RY = HRX H . 68 Address:104 pine meadows loop.5. Fo ﬁnd the PDF of the Gaussian vector Y. hot springs.8. which equals the correlation matrix RY since Y has zero expected value. AR.7. Y = Y33 Y34 Y35 is a Gaussian random vector since X n is a Gaussian random process. we need to ﬁnd the covariance matrix CY . following Theorem 11. it is simpler to observe that Y = HX where X = X 30 X 31 X 32 X 33 X 34 X 35 and ⎡ ⎤ 1 1 1 1 0 0 1 H = ⎣0 1 1 1 1 0⎦ . each Yn has expected value E[Yn ] = µ X ∞ n=−∞ h n = 0.2 −0. using Equation (1) is surprisingly tedious because we still need to sum over all i and j such that n + i − j = 0. (1) Despite the fact that R X [k] is an impulse.5 to ﬁnd the autocorrelation function ∞ ∞ RY [n] = i=−∞ j=−∞ h i h j R X [n + i − j].6 SX(f) 0. Thus E[Y] = 0.6 and to use Theorem 11. by Theorem 11. Since R X [n] = δn .Name:joey iwatsuru Email:joeyiwat@yahoo.2 X (a) W = 10 (b) W = 1000 Figure 6: The autocorrelation R X (τ ) and power spectral density S X ( f ) for process X (t) in Quiz 11. the identity matrix. Moreover. Quiz 11.1 0 τ 0.4 0. 4 0 0 1 1 1 1 (2) (3) In this case.com Phone:5017621195 x 10 8 0.2 0 −15 −10 −5 0 f 5 10 15 SX(f) 6 4 2 0 −1500−1000 −500 10 R (τ) 5 0 −5 −2 −1 0 τ 1 x 10 2 −3 0 f 500 1000 1500 10 RX(τ) 5 0 −5 −0.1 0.5.3 By Theorem 11. One way to ﬁnd the RY is to observe that RY has the Toeplitz structure of Theorem 11. or by directly applying Theorem 5. us (United States) Zip Code:71901 .

CY = RY = HH = 16 2 3 4 (4) It follows (very quickly if you use M ATLAB for 3 × 3 matrix inversion) that ⎡ ⎤ 7/12 −1/2 1/12 1 −1/2⎦ . Xn = X n−1 X n and RXn = and RXn X n+1 = E 1.13 and to directly calculate ˆ (5) e∗ = E (X n+1 − X n+1 )2 .1 0. Y 2 (5) (6) A disagreeable amount of algebra will show det(CY ) = 3/1024 and that the PDF can be “simpliﬁed” to 16 7 2 7 2 1 2 y33 + y34 + y35 − y33 y34 + y33 y35 − y34 y35 exp −8 f Y (y) = √ 3 12 12 6 6π .9 1. hot springs.4 This quiz is solved using Theorem 11. one approach is to follow the method of Example 11. Y Quiz 11. X n+1 = Xn 0. us (United States) Zip Code:71901 . the PDF of Y is f Y (y) = 1 (2π )3/2 [det (CY )]1/2 1 exp − y C−1 y .9 h = R−1 RXn X n+1 = Xn 0. AR. L 69 Address:104 pine meadows loop. In this case.Name:joey iwatsuru Email:joeyiwat@yahoo. (7) Equation (7) shows that one of the nicest features of the multivariate Gaussian distribution is that y C−1 y is a very concise representation of the cross-terms in the exponent of f Y (y).81 X n−1 R X [2] = . X n+1 = 400 400 (4) to ﬁnd the mean square error.1 0.9 400 261 (3) It follows that the ﬁlter is h = 261/400 81/400 and the MMSE linear predictor is 81 261 ˆ X n−1 + Xn.9 R X [1] −1 (1) (2) The MMSE linear ﬁrst order ﬁlter for predicting X n+1 at time n is the ﬁlter h such that ← − 1. C−1 = 16 ⎣−1/2 Y 1/12 −1/2 7/12 Thus.1 R X [1] R X [0] 0. 0.1 1 0.9 R X [0] R X [1] = 0.81 81 = .9 for the case of k = 1 and M = 2.9 1.com Phone:5017621195 Thus ⎡ ⎤ 4 3 2 1 ⎣ 3 4 3⎦ .

Returning to Quiz 11.4, the direct calculation of Equation (5) is workable for this simple problem but becomes increasingly tedious for higher order filters. Instead, we can derive the mean square error for an arbitrary prediction filter ←h. Since X̂_{n+1} = ←h′ Xn,

    e*_L = E[(X_{n+1} − ←h′ Xn)²]   (6)
         = E[(X_{n+1} − ←h′ Xn)(X_{n+1} − Xn′ ←h)].   (7)

After a bit of algebra, we obtain

    e*_L = RX[0] − 2 ←h′ R_{Xn X_{n+1}} + ←h′ R_Xn ←h.   (8)

With the substitution ←h = R_Xn^{−1} R_{Xn X_{n+1}}, we obtain

    e*_L = RX[0] − R′_{Xn X_{n+1}} R_Xn^{−1} R_{Xn X_{n+1}}   (9)
         = RX[0] − ←h′ R_{Xn X_{n+1}}.   (10)

Note that this is essentially the same result as Theorem 9.7 with Y = Xn, X = X_{n+1} and â = ←h. It is noteworthy that the result is derived in a much simpler way in the proof of Theorem 9.7 by using the orthogonality property of the LMSE estimator. In any case, the mean square error is

    e*_L = RX[0] − ←h′ R_{Xn X_{n+1}} = 1.1 − [81/400 261/400] [0.81; 0.9] = 0.3487.   (11)

Recalling that the blind estimate would yield a mean square error of Var[X] = 1.1, we see that observing X_{n−1} and X_n improves the accuracy of our prediction of X_{n+1}.

Quiz 11.5 (1) By Theorem 11.13(b), the average power of X(t) is

    E[X²(t)] = ∫_{−∞}^{∞} SX(f) df = ∫_{−W}^{W} (5/W) df = 10 Watts.   (1)

(2) The autocorrelation function is the inverse Fourier transform of SX(f). Consulting Table 11.1, we note that

    SX(f) = 10 (1/2W) rect(f/2W).   (2)

It follows that the inverse transform of SX(f) is

    RX(τ) = 10 sinc(2Wτ) = 10 sin(2πWτ)/(2πWτ).   (3)

(3) For W = 10 Hz and W = 1 kHz, graphs of SX(f) and RX(τ) appear in Figure 6.

[Figure 6: The autocorrelation RX(τ) and power spectral density SX(f) for the process X(t) in Quiz 11.5: (a) W = 10, (b) W = 1000.]
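To reproduce the curves summarized in Figure 6, a short MATLAB program along the following lines can be used. This is a minimal sketch for the W = 10 Hz case; it assumes the sinc function of the Signal Processing Toolbox, for which sinc(x) = sin(πx)/(πx):

%sketch for Quiz 11.5: RX(tau) = 10 sinc(2W tau) for W=10
W=10;
tau=-0.5:0.001:0.5;
RX=10*sinc(2*W*tau);      %MATLAB sinc(x)=sin(pi*x)/(pi*x)
plot(tau,RX);
xlabel('\tau'); ylabel('R_X(\tau)');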

Quiz 11.6 In a sampled system, the discrete time impulse δ[n] has a flat discrete Fourier transform. That is, if RX[n] = 10δ[n], then

    SX(φ) = Σ_{n=−∞}^{∞} 10 δ[n] e^{−j2πφn} = 10.   (1)

Thus SX(φ) = 10 for all φ. (This quiz is really lame!)

Quiz 11.7 Since Y(t) = X(t − t0),

    RXY(t, τ) = E[X(t) Y(t + τ)] = E[X(t) X(t + τ − t0)] = RX(τ − t0).   (1)

We see that RXY(t, τ) = RXY(τ) = RX(τ − t0). From Table 11.1, we recall the property that g(τ − τ0) has Fourier transform G(f) e^{−j2πfτ0}. Thus the Fourier transform of RXY(τ) = RX(τ − t0) is

    SXY(f) = SX(f) e^{−j2πft0}.   (2)

Quiz 11.8 We solve this quiz using Theorem 11.17. First we need some preliminary facts. Let a0 = 5,000 so that

    RX(τ) = e^{−a0|τ|}.   (1)

Consulting with the Fourier transforms in Table 11.1, we see that

    SX(f) = 2a0 / (a0² + (2πf)²).   (2)

The RC filter has impulse response h(t) = a1 e^{−a1 t} u(t), where u(t) is the unit step function and a1 = 1/RC, where RC = 10^{−4} s is the filter time constant. From Table 11.1,

    H(f) = a1 / (a1 + j2πf).   (3)

(1) By Theorem 11.17,

    SXY(f) = H(f) SX(f) = 2a0 a1 / ([a1 + j2πf][a0² + (2πf)²]).   (4)

(2) Again by Theorem 11.17,

    SY(f) = H*(f) SXY(f) = |H(f)|² SX(f).   (5)

Note that

    |H(f)|² = H(f) H*(f) = [a1/(a1 + j2πf)] [a1/(a1 − j2πf)] = a1² / (a1² + (2πf)²).   (6)

Thus,

    SY(f) = |H(f)|² SX(f) = [a1² / (a1² + (2πf)²)] [2a0 / (a0² + (2πf)²)].   (7)

(3) To find the average power at the filter output, we can either use basic calculus and calculate ∫_{−∞}^{∞} SY(f) df directly, or we can find RY(τ) as an inverse transform of SY(f); the latter method is actually less algebra. Using partial fractions and the Fourier transform table, some algebra will show that

    SY(f) = K0 / (a0² + (2πf)²) + K1 / (a1² + (2πf)²),   (8)

where

    K0 = 2a0 a1² / (a1² − a0²),    K1 = −2a0 a1² / (a1² − a0²).   (9)

Consulting with Table 11.1, we see that

    RY(τ) = (K0 / 2a0) e^{−a0|τ|} + (K1 / 2a1) e^{−a1|τ|}.   (10)

Substituting the values of K0 and K1, we obtain

    RY(τ) = [a1² e^{−a0|τ|} − a0 a1 e^{−a1|τ|}] / (a1² − a0²).   (11)

The average power of the Y(t) process is

    RY(0) = a1 / (a1 + a0) = 2/3.   (12)

Note that the input signal has average power RX(0) = 1. Since the RC filter has a 3 dB bandwidth of 10,000 rad/sec and the signal X(t) has most of its signal energy below 5,000 rad/sec, the output signal has almost as much power as the input.
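As a numerical sanity check of Equation (12), we can integrate SY(f) with a simple Riemann sum. This is a minimal sketch of ours; the grid spacing and limits are our choices, and the truncated tails are negligible:

%check of Quiz 11.8(3): RY(0) = a1/(a0+a1) = 2/3
a0=5000; a1=1e4;
f=-1e5:0.5:1e5;                 %frequency grid (Hz)
SX=2*a0./(a0^2+(2*pi*f).^2);    %input PSD
H2=a1^2./(a1^2+(2*pi*f).^2);    %|H(f)|^2 of the RC filter
Py=0.5*sum(H2.*SX)              %Riemann sum, approx 0.6667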

Quiz 11.9 This quiz implements an example of Equations (11.146) and (11.147) for a system in which we filter Y(t) = X(t) + N(t) to produce an optimal linear estimate of X(t). The solution to this quiz is just to find the filter Ĥ(f) using Equation (11.146) and to calculate the mean square error e*_L using Equation (11.147). Comment: Since the text omitted the derivations of Equations (11.146) and (11.147), we first note that Example 10.24 showed that RY(τ) = RX(τ) + RN(τ). Taking Fourier transforms, it follows that SY(f) = SX(f) + SN(f). (1) Since µN = 0, RYX(τ) = RX(τ) and SYX(f) = SX(f). (2) Now we can go on to the quiz, at peace with the derivations.

(1) Since µN = 0, RN(0) = Var[N] = 1. The noise power spectral density can be written as

    SN(f) = N0 rect(f/2B) = (1/2B) rect(f/2B).   (3)

This follows because

    RN(0) = ∫_{−∞}^{∞} SN(f) df = ∫_{−B}^{B} N0 df = 2 N0 B,   (4)

so that N0 = 1/(2B). Because the noise process N(t) has constant power RN(0) = 1, decreasing the single-sided bandwidth B increases the power spectral density of the noise over frequencies |f| < B.

(2) Since RX(τ) = sinc(2Wτ), where W = 5,000 Hz, we see from Table 11.1 that

    SX(f) = (1/10⁴) rect(f/10⁴).   (5)

From Equation (11.146), the optimal filter is

    Ĥ(f) = SX(f) / (SX(f) + SN(f)) = (1/10⁴) rect(f/10⁴) / [ (1/10⁴) rect(f/10⁴) + (1/2B) rect(f/2B) ].   (6)
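The notch shape of Equation (6) is what Figure 7 displays. A minimal plotting sketch of ours, taking B = 500 as in one panel of Figure 7 (the guard term in the denominator is our device to avoid 0/0 outside both bands):

%sketch of the Quiz 11.9 Wiener filter H(f) for B=500
B=500;
f=-6000:1:6000;
SX=(abs(f)<5000)/1e4;           %SX(f)=(1/10^4)rect(f/10^4)
SN=(abs(f)<B)/(2*B);            %SN(f)=(1/2B)rect(f/2B)
H=SX./(SX+SN+((SX+SN)==0));     %guard term avoids 0/0
plot(f,H);
xlabel('f'); ylabel('H(f)');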

(3) We produce the output X̂(t) by passing the noisy signal Y(t) through the filter Ĥ(f). From Equation (11.147), the mean square error of the estimate is

    e*_L = ∫_{−∞}^{∞} SX(f) SN(f) / (SX(f) + SN(f)) df
         = ∫_{−∞}^{∞} (1/10⁴) rect(f/10⁴) (1/2B) rect(f/2B) / [ (1/10⁴) rect(f/10⁴) + (1/2B) rect(f/2B) ] df.   (7)

To evaluate the MSE e*_L, we need to know whether B ≤ W. Since the problem asks us to find the largest possible B, let's suppose B ≤ W; we can go back and consider the case B > W later. When B ≤ W, the MSE is

    e*_L = ∫_{−B}^{B} [ (1/10⁴)(1/2B) / ((1/10⁴) + (1/2B)) ] df = (1/10⁴) / ((1/10⁴) + (1/2B)) = 1 / (1 + 5,000/B).   (8)

To obtain MSE e*_L ≤ 0.05 requires B ≤ 5,000/19 = 263.16 Hz.

Although this completes the solution to the quiz, what is happening may not be obvious. The noise power is always Var[N] = 1 Watt, for all values of B. As B is decreased, the PSD SN(f) becomes increasingly tall, but only over a bandwidth B that is decreasing. The Wiener filter removes the noise that is outside the band of the desired signal. Thus as B decreases, the filter Ĥ(f) makes an increasingly deep and narrow notch at frequencies |f| ≤ B. Two examples of the filter Ĥ(f) are shown in Figure 7. As B shrinks, the filter suppresses less of the signal of X(t). The result is that the MSE goes down.

[Figure 7: The Wiener filter Ĥ(f) of Quiz 11.9 for B = 500 and B = 2500.]

Finally, we note that we can choose B very large and also achieve MSE e*_L = 0.05. Increasing B spreads the constant 1 Watt of power of N(t) over more bandwidth. In particular, when B > W = 5,000, SN(f) = 1/2B over all frequencies |f| < W. In this case, the Wiener filter Ĥ(f) is an ideal (flat) lowpass filter:

    Ĥ(f) = { (1/10⁴) / ((1/10⁴) + (1/2B)),  |f| < 5,000;  0, otherwise. }   (9)

The mean square error is

    e*_L = ∫_{−5000}^{5000} [ (1/10⁴)(1/2B) / ((1/10⁴) + (1/2B)) ] df = (1/2B) / ((1/10⁴) + (1/2B)) = 1 / (B/5,000 + 1).   (10)

In this case, B ≥ 9.5 × 10⁴ guarantees e*_L ≤ 0.05.

Quiz 11.10 It is fairly straightforward to find SX(φ) and SY(φ). The only thing to keep in mind is to use fftc to transform the autocorrelation RX[n] into the power spectral density SX(φ). The following MATLAB program generates and plots the functions shown in Figure 8.

%mquiz11.m
N=32;
rx=[2 4 2];
SX=fftc(rx,N);            %autocorrelation and PSD
stem(0:N-1,abs(SX));
xlabel('n'); ylabel('S_X(n/N)');
h2=0.5*[1 1];             %impulse/filter response: M=2
H2=fft(h2,N);
SY2=SX.*((abs(H2)).^2);   %PSD of Y for M=2
figure;
stem(0:N-1,abs(SY2));
xlabel('n'); ylabel('S_{Y_2}(n/N)');
h10=0.1*ones(1,10);       %impulse/filter response: M=10
H10=fft(h10,N);
SY10=SX.*((abs(H10)).^2); %PSD of Y for M=10
figure;
stem(0:N-1,abs(SY10));
xlabel('n'); ylabel('S_{Y_{10}}(n/N)');

In the context of Example 11.26, when M = 10, the filter H(φ) filters out almost all of the high frequency components of X(t). Relative to M = 2, the low pass moving average filter for M = 10 removes the high frequency components and results in a filter output that varies very slowly. As an aside, note that the vectors SX, SY2 and SY10 in mquiz11 should all be real-valued vectors. However, the finite numerical precision of MATLAB results in tiny imaginary parts. Although these imaginary parts have no computational significance, they tend to confuse the stem function. Hence, we generate stem plots of the magnitude of each power spectral density.


01 0.5 1 λ1 0 0 0 −1 ⎦ 0 1 ⎦ ⎣ 0 λ2 0 ⎦ ⎣ 1 P = S−1 DS = ⎣ 0.2⎦ + (0.5 1 −0.6 0. the Markov chain and the transition matrix are ⎡ ⎤ 0.2 0.2 The eigenvalues of P are λ1 = 0 λ2 = 0.6 −0. AR.6 0. hot springs.1 1 P= 0.4 0.3 The Markov chain describing the factory status and the corresponding state transition matrix are 77 Address:104 pine meadows loop.4 λ3 = 1 (1) (2) We can diagonalize P into ⎤⎡ ⎡ ⎤ ⎤⎡ −0.99 P X n+1 = 1|X n = 1 = 0.6 0.5 0.10 0.6 0.5 1 (3) where si .6 0.6 0.5 0. is the left eigenvector of P satisfying si P = λi si . Algebra will verify that the n-step transition matrix is ⎡ ⎡ ⎤ ⎤ 0.01 0.6 0. the ith row of S.5 0 −0.01 0 0.Name:joey iwatsuru Email:joeyiwat@yahoo.9 (1) Since each X n must be either 0 or 1.99 0.4 0.2 −0.2 0.2 0.2 Quiz 12. From the problem statement.4 0 0 λ3 0.90 (3) Quiz 12.2 0.5 −0.2 From the problem statement.com Phone:5017621195 Quiz Solutions – Chapter 12 Quiz 12. we can conclude that P X n+1 = 1|X n = 0 = 0.6 0 0.2 0 0 ⎦ Pn = S−1 Dn S = ⎣0.2⎦ 1 0 1 0 0.1 The system has two states depending on whether the previous packet was received in error.6 0.1 (2) These conditional probabilities correspond to the transition matrix and Markov chain: 0.2 0.4)n ⎣ 0 (4) −0.6 0.5 0 0.2 0.4 0. us (United States) Zip Code:71901 . we are given the conditional probabilities P X n+1 = 0|X n = 0 = 0.6 P = ⎣0.4 0.9 P X n+1 = 0|X n = 1 = 0.99 0.

That is.0 P [K > n] P [K > n − 1] P [K = n] = P [K = n|K > n − 1] = P [K > n − 1] (1) (2) (3) The Markov chain resembles P[K=5] P[K=4] P[K= 1] P[K=2] P[K=3] 0 1 1 1 2 1 3 1 4 . the state n can take on the values 0. This implies π0 + π1 + π2 = π0 (1 + 0. Once the system enters a state in C1 . .1 0 0 1⎦ P=⎣ 0 1 0 0 (1) 2 With π = π0 π1 π2 ..5 At any time t.1 0 1 1 1 ⎡ ⎤ 0.. . . The state transition probabilities are Pn−1. the class C1 is never left. 5. us (United States) Zip Code:71901 .1 + 0. the states in C2 are transient. the states in C3 are recurrent. Once the system exits C2 . AR.com Phone:5017621195 0. 1. Quiz 12. The states in C2 have period 2. 1} C2 = {2.n = P [K > n|K > n − 1] = Pn−1. the system of equations π = π P yields π1 = 0. 6} (1) π1 = 1/12. Thus the states in C1 are recurrent. 3} C3 = {4.9 0. (3) (2) The states in C1 and C3 are aperiodic. π2 = 1/12.1) = 1 It follows that the limiting state probabilities are π0 = 5/6.1π0 and π2 = π1 .Name:joey iwatsuru Email:joeyiwat@yahoo. the states in C2 are never reentered. 2.9 0. C1 is a recurrent class.4 The communicating classes are C1 = {0. 1 … 78 Address:104 pine meadows loop. Quiz 12.. Similarly. hot springs. On the other hand.

the number of transitions need to return to state 0 is always a multiple of 2. Thus the period of state 0 is d = 2.6 (1) By inspection. πk−1 = π0 P [K = k] + πk . hot springs. π1 = π0 P [K = 2] + π2 .com Phone:5017621195 The stationary probabilities satisfy π0 = π0 P [K = 1] + π1 . we have k − 1 units of time left after the state 0 counter reset. n=0 > k] = E[K ]. From Problem 2. . . AR. This implies πn = P [K > n] E [K ] (10) This Markov chain models repeated random countdowns. If we have a random variable W such that the PMF of W satisﬁes PW (n) = πn .5. The system state is the time until the counter expires. we obtain π0 ∞ P[K > k] = 1. . the system is in state 0. When we apply we recall that ∞ k=0 πk ∞ k=0 P[K (9) = 1. . From Equation (4). 2. Since we spend one unit of time in each state. including state 0. When the counter expires. . and we randomly reset the counter to a new value K = k and then we count down k units of time.11. Quiz 12. Equation (5) implies π2 = π1 − π0 P [K = 2] = π0 (P [K > 1] − P [K = 2]) = π0 P [K > 2] (8) (7) (4) (5) k = 1. . we obtain π1 = π0 (1 − P [K = 1]) = π0 P [K > 1] Similarly. then W has a discrete PMF representing the remaining time of the counter at a time in the distant future. we solve the system of equations π = πP and 3 i=0 πi = 1: π0 = (3/4)π1 + (1/4)π3 π1 = (1/4)π0 + (1/4)π2 π2 = (1/4)π1 + (3/4)π3 1 = π0 + π1 + π2 + π3 79 (1) (2) (3) (4) Address:104 pine meadows loop. us (United States) Zip Code:71901 . (6) This suggests that πk = π0 P[K > k]. (2) To ﬁnd the stationary probabilities. We verify this pattern by showing that πk = π0 P[K > k] satisﬁes Equation (6): π0 P [K > k − 1] = π0 P [K = k] + π0 P [K > k] .Name:joey iwatsuru Email:joeyiwat@yahoo.

we observe that for all α > 0 P [V00 ] = lim FT00 (n) = lim 1 − n→∞ n→∞ 1 = 1. The only difference is the modiﬁed transition rates: 1 (1/2)a (2/3)a (3/4) a (4/5) a 0 1. hot springs.7 The Markov chain has the same structure as that in Example 12.22. AR.Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195 Solving the second and third equations for π2 and π3 yields π2 = 4π1 − π0 π3 = (4/3)π2 − (1/3)π1 = 5π1 − (4/3)π0 (5) Substituting π3 back into the ﬁrst equation yields π0 = (3/4)π1 + (1/4)π3 = (3/4)π1 + (5/4)π1 − (1/3)π0 (6) This implies π1 = (2/3)π0 . we can use Theorem 12. nα (2) 80 Address:104 pine meadows loop.14 to ﬁnd the limiting probability that the system is in state 0 at time nd: lim P00 (nd) = dπ0 = 3 8 (9) n→∞ Quiz 12.(1/2) a 1 1 .(2/3) a 1 . It follows from the ﬁrst and second equations that π2 = (5/3)π0 and π3 = 2π0 . Lastly. we choose π0 so the state probabilities sum to 1: 16 2 5 1 = π0 + π1 + π2 + π3 = π0 1 + + + 2 = π0 (7) 3 3 3 It follows that the state probabilities are π0 = 3 16 π1 = 2 16 π2 = 5 16 π3 = 6 16 (8) (3) Since the system starts in state 0 at time 0.(3/4) 1 .(4/5)a a 2 3 4 … The event T00 > n occurs if the system reaches state n before returning to state 0. (1) Thus the CDF of T00 satisﬁes FT00 (n) = 1− P[T00 > n] = 1−1/n α . which occurs with probability P [T00 1 > n] = 1 × 2 α 2 × 3 α n−1 × ··· × n α = 1 n α . us (United States) Zip Code:71901 . To determine whether state 0 is recurrent.

11 which says that ∞ P[K > k] = k=0 E[K ] for any non-negative integer-valued random variable K . ∞ 1 E [T00 ] = 2 + . ( We also note that if α = 0. Applying this result. In this problem. On the other hand. then all states are transient. In Example 12. we did this by deriving the PMF PT00 (n).) To determine whether the chain is null recurrent or positive recurrent. the Markov chain is positive recurrent.24. n (4) We conclude that the Markov chain is null recurrent for 0 < α ≤ 1.8 The number of customers in the ”friendly” store is given by the Markov chain (1-p)(1-q) p (1-p)(1-q) p (1-p)(1-q) p (1-p)(1-q) 0 (1-p)q 1 (1-p)q ××× i (1-p)q (1-p)q i+1 ××× 81 Address:104 pine meadows loop. AR. 1/n α ≥ 1/n and it follows that ∞ E [T00 ] ≥ 1 + n=1 1 = ∞. (5) nα n=2 Note that for all n ≥ 2 1 ≤ nα ∞ n n−1 dx xα (6) This implies E [T00 ] ≤ 2 + =2+ n n=2 n−1 ∞ dx 1 dx xα (7) (8) xα x −α+1 =2+ −α + 1 ∞ =2+ 1 1 <∞ α−1 (9) Thus for all α > 1.Name:joey iwatsuru Email:joeyiwat@yahoo. Since the chain has only one communicating class. hot springs.com Phone:5017621195 Thus state 0 is recurrent for all α > 0. us (United States) Zip Code:71901 . we need to calculate E[T00 ]. Quiz 12. for α > 1.5. all states are recurrent. it will be simpler to use the result of Problem 2. the expected time to return to state 0 is ∞ ∞ E [T00 ] = n=0 P [T00 > n] = 1 + n=1 1 . nα (3) For 0 < α ≤ 1.

we obtain the following useful equations for the stationary distribution.Name:joey iwatsuru Email:joeyiwat@yahoo. . 014. 2. (1 − p)q (1) (2) Since Equation (2) holds for i = 0. . yielding p4 = 20 p3 31 p3 = 620 p2 981 p2 = 82 19620 p1 31431 p1 = 628. By applying Theorem 12.01 p2 = 2 p1 + 3 p3 3.1 per msec and the rate to state 0 is the sum of those two rates. for α ≥ 1 or. p3 in terms of p2 and so on. 1. p ≥ q/(1 − q).com Phone:5017621195 In the above chain.13 with state space partitioned between S = {0. . . πi p = πi+1 (1 − p)q.01 0. we have that for α < 1. From the Markov chain. us (United States) Zip Code:71901 . we note that (1 − p)q is the probability that no new customer arrives.01 p4 = 2 p3 We can solve these equations by working backward and solving for p4 in terms of p3 .1 since the task completes at rate 3 per msec and the processor reboots at rate 0.01 1 3 0.9 The continuous time Markov chain describing the processor is 2 2 2 2 0 3. the limiting state probabilities do not exist. ∞ ∞ (3) πi = π0 i=0 i=0 αi = π0 = 1. 5. AR. (5) In addition. the limiting state probabilities are πi = (1 − α)α i . i} and S = {i + 1. α= (1 − p)q Requiring the state probabilities to sum to 1. i + 2. .01 0. 381 (1) Address:104 pine meadows loop. . 1−α (4) Thus for α < 1. . i = 0. we see that for any state i ≥ 0. . . Quiz 12.. 620 p0 1. .01 2 3 3 3 4 Note that q10 = 3.01 p3 = 2 p2 + 3 p4 5. hot springs. . equivalently. This implies πi+1 = p πi . 1. . an existing customer gets one unit of service and then departs the store.}. 1. .01 p1 = 2 p0 + 3 p2 5. we have that πi = π0 α i where p .

c n−c c p0 (ρ/c) ρ /c! n = c + 1. . (1) It is straightforward to show that this implies pn = The requirement that ∞ n=0 p0 ρ n /n! n = 1. AR. . us (United States) Zip Code:71901 . .0655 Quiz 12. pn = 1 yields c (2) p0 = n=0 ρ c ρ/c ρ /n! + c! 1 − ρ/c n −1 (3) 83 Address:104 pine meadows loop.1606 p3 = 0. 443.10 The M/M/c/∞ queue has Markov chain λ λ λ λ λ (2) 0 µ 1 2µ cµ c cµ c+1 cµ From the Markov chain. c (ρ/c) pn−1 n = c + 1. . . . the stationary probabilities must satisfy pn = (ρ/n) pn−1 n = 1. . 014.1015 p4 = 0.2573 p2 = 0. . 2. 401 and the stationary probabilities are p0 = 0.4151 p1 = 0. 2. hot springs. c + 2.Name:joey iwatsuru Email:joeyiwat@yahoo. c + 2. .com Phone:5017621195 Applying p0 + p1 + p2 + p3 + p4 = 1 yields p0 = 1. . . . 381/2. . .

Sign up to vote on this title

UsefulNot useful- [솔루션] Probability and Stochastic Processes 2nd Roy D. Yates and David J. Goodman 2판 확률과 통계 솔루션 433 4000
- Probability and Stochastic Processes 2nd Roy D Yates and David J Goodman
- lectureAll_ece5325_6325_f11
- Sol Manual
- Probability and Stochastic Processes
- 1
- Signal Detection and Estimation - Solution Manual
- Yates - Probability and Stochastic Processes (2nd Edition)
- Instrumentos de Medição Eletrica
- Propagation Study
- Planing a Microwave Radio Link
- Matlab Ref
- 1.10 Sums and Other Functions of Random Variables
- Prob Notes
- r059210401 Probability Theory and Stochastic Process
- Conditional Expectation and Martingales
- Conditional ion
- Zolotarev_Uchaykin
- Week5-Homework.pdf
- Lecture10 Asymptotics Fixed
- Section 6
- kernlab
- Midterm One 6711 f 13
- CH7 Prob Supp
- Random Variable - Probability
- 12 Lecture 2
- Using Ode 45 in matlab
- Vectorize the Code in Matlab
- using ode45
- Brandt.PRD.77
- Probability and Stochastic Processes 2E, By Roy D. Yates , David J. Goodman

Copyright © 2017 Scribd Inc. .Términos de servicio.Accesibilidad.Privacidad.Sitio móvil.Idioma del sitio:

Copyright © 2017 Scribd Inc. .Términos de servicio.Accesibilidad.Privacidad.Sitio móvil.Idioma del sitio: