Probability and Stochastic Processes
A Friendly Introduction for Electrical and Computer Engineers

SECOND EDITION

MATLAB Function Reference

Roy D. Yates and David J. Goodman

May 22, 2004

This document is a supplemental reference for MATLAB functions described in the text Probability and Stochastic Processes: A Friendly Introduction for Electrical and Computer Engineers. This document should be accompanied by matcode.zip, an archive of the corresponding MATLAB .m files. Here are some points to keep in mind in using these functions.

• The actual programs can be found in the archive matcode.zip or in a directory matcode. To use the functions, you will need to use the MATLAB command addpath to add this directory to the path that MATLAB searches for executable .m files.

• The matcode archive has both general purpose programs for solving probability problems and specific .m files associated with examples or quizzes in the text. This manual describes only the general purpose .m files in matcode.zip. Other programs in the archive are described in the main text or in the Quiz Solution Manual.

• The MATLAB functions described here are intended as a supplement to the text. The code is not fully commented. Many comments and explanations relating to the code appear in the text, the Quiz Solution Manual (available on the web), or in the Problem Solution Manual (available on the web for instructors).

• The code is instructional. The focus is on MATLAB programming techniques to solve probability problems and to simulate experiments. The code is definitely not bulletproof; for example, input range checking is generally neglected.

• This is a work in progress. At the moment (May 2004), the homework solution manual has a number of unsolved homework problems. As these solutions require the development of additional MATLAB functions, these functions will be added to this reference manual.

• There is a nonzero probability (in fact, a probability close to unity) that errors will be found. If you find errors or have suggestions or comments, please send email to ryates@winlab.rutgers.edu. When errors are found, revisions to both this document and the collection of MATLAB functions will be posted.


Functions for Random Variables

bernoullipmf y=bernoullipmf(p,x)

function pv=bernoullipmf(p,x)
%For Bernoulli (p) rv X
%input = vector x
%output = vector pv
%such that pv(i)=Prob(X=x(i))
pv=(1-p)*(x==0) + p*(x==1);
pv=pv(:);

Input: p is the success probability of a Bernoulli random variable X, x is a vector of possible sample values.

Output: y is a vector with y(i) = P_X(x(i)).

bernoullicdf y=bernoullicdf(p,x)

function cdf=bernoullicdf(p,x)
%Usage: cdf=bernoullicdf(p,x)
%For Bernoulli (p) rv X,
%given input vector x, output is
%vector cdf such that cdf(i)=Prob[X<=x(i)]
x=floor(x(:));
allx=0:1;
allcdf=cumsum(bernoullipmf(p,allx));
okx=(x>=0);           %x_i < 0 are bad values
x=(okx.*x);           %set bad x_i=0
x=min(x,1);           %x_i > 1 have cdf 1, same as x_i=1
cdf=okx.*allcdf(x+1); %zeroes out bad x_i

Input: p is the success probability of a Bernoulli random variable X, x is a vector of possible sample values.

Output: y is a vector with y(i) = F_X(x(i)).

bernoullirv x=bernoullirv(p,m)

function x=bernoullirv(p,m)
%return m samples of bernoulli (p) rv
r=rand(m,1);
x=(r>=(1-p));

Input: p is the success probability of a Bernoulli random variable X, m is a positive integer.

Output: x is a vector of m independent sample values of X.
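
As a quick sanity check (a sketch, assuming the matcode directory has been added to the MATLAB path with addpath):

x=bernoullirv(0.3,10000);
sum(x)/10000   %relative frequency of ones; should be close to p=0.3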


bignomialpmf y=bignomialpmf(n,p,x)

function pmf=bignomialpmf(n,p,x)
%binomial(n,p) rv X,
%input = vector x
%output= vector pmf: pmf(i)=Prob[X=x(i)]
k=(0:n-1)';
a=log((p/(1-p))*((n-k)./(k+1)));
L0=n*log(1-p);
L=[L0; L0+cumsum(a)];
pb=exp(L);
% pb=[P[X=0] ... P[X=n]]^t
x=x(:);
okx =(x>=0).*(x<=n).*(x==floor(x));
x=okx.*x;
pmf=okx.*pb(x+1);

Input: n and p are the parameters of a binomial (n, p) random variable X, x is a vector of possible sample values.

Output: y is a vector with y(i) = P_X(x(i)).

Comment: This function should always produce the same output as binomialpmf(n,p,x); however, the function calculates the logarithm of the probability, and this may lead to small numerical inaccuracy.

binomialcdf y=binomialcdf(n,p,x)

function cdf=binomialcdf(n,p,x)
%Usage: cdf=binomialcdf(n,p,x)
%For binomial(n,p) rv X,
%and input vector x, output is
%vector cdf: cdf(i)=P[X<=x(i)]
x=floor(x(:)); %for noninteger x(i)
allx=0:max(x);
%calculate cdf from 0 to max(x)
allcdf=cumsum(binomialpmf(n,p,allx));
okx=(x>=0); %x(i) < 0 are zero-prob values
x=(okx.*x); %set zero-prob x(i)=0
cdf= okx.*allcdf(x+1); %zero for zero-prob x(i)

Input: n and p are the parameters of a binomial (n, p) random variable X, x is a vector of possible sample values.

Output: y is a vector with y(i) = F_X(x(i)).


binomialpmf y=binomialpmf(n,p,x)

function pmf=binomialpmf(n,p,x)
%binomial(n,p) rv X,
%input = vector x
%output= vector pmf: pmf(i)=Prob[X=x(i)]
if p<0.5
   pp=p;
else
   pp=1-p;
end
i=0:n-1;
ip= ((n-i)./(i+1))*(pp/(1-pp));
pb=((1-pp)^n)*cumprod([1 ip]);
if pp < p
   pb=fliplr(pb);
end
pb=pb(:); % pb=[P[X=0] ... P[X=n]]^t
x=x(:);
okx =(x>=0).*(x<=n).*(x==floor(x));
x=okx.*x;
pmf=okx.*pb(x+1);

Input: n and p are the parameters of a binomial (n, p) random variable X, x is a vector of possible sample values.

Output: y is a vector with y(i) = P_X(x(i)).
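
For instance, evaluating the PMF of a binomial (5, 0.5) random variable over its whole range (a sketch; the transpose just prints the column output as a row):

binomialpmf(5,0.5,0:5)'
%ans = 0.0313 0.1563 0.3125 0.3125 0.1563 0.0313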

binomialrv x=binomialrv(n,p,m)

function x=binomialrv(n,p,m)
% m binomial(n,p) samples
r=rand(m,1);
cdf=binomialcdf(n,p,0:n);
x=count(cdf,r);

Input: n and p are the parameters of a binomial random variable X, m is a positive integer.

Output: x is a vector of m independent samples of random variable X.

bivariategausspdf f=bivariategausspdf(muX,muY,sigmaX,sigmaY,rho,x,y)

function f=bivariategausspdf(muX,muY,sigmaX,sigmaY,rho,x,y)
%Usage: f=bivariategausspdf(muX,muY,sigmaX,sigmaY,rho,x,y)
%Evaluate the bivariate Gaussian (muX,muY,sigmaX,sigmaY,rho) PDF
nx=(x-muX)/sigmaX;
ny=(y-muY)/sigmaY;
f=exp(-((nx.^2) +(ny.^2) - (2*rho*nx.*ny))/(2*(1-rho^2)));
f=f/(2*pi*sigmaX*sigmaY*sqrt(1-rho^2));

Input: Scalar parameters muX, muY, sigmaX, sigmaY, rho of the bivariate Gaussian PDF, scalars x and y.

Output: f, the value of the bivariate Gaussian PDF at x,y.


duniformcdf y=duniformcdf(k,l,x)

function cdf=duniformcdf(k,l,x)
%Usage: cdf=duniformcdf(k,l,x)
%For discrete uniform (k,l) rv X
%and input vector x, output is
%vector cdf: cdf(i)=Prob[X<=x(i)]
x=floor(x(:)); %for noninteger x_i
allx=k:max(x);
%allcdf = cdf values from k to max(x)
allcdf=cumsum(duniformpmf(k,l,allx));
%x_i < k are zero prob values
okx=(x>=k);
%set zero prob x(i)=k
x=((1-okx)*k)+(okx.*x);
%cdf(i)=0 for zero prob x(i)
cdf= okx.*allcdf(x-k+1);

Input: k and l are the parameters of a discrete uniform (k, l) random variable X, x is a vector of possible sample values.

Output: y is a vector with y(i) = F_X(x(i)).

duniformpmf y=duniformpmf(k,l,x)

function pmf=duniformpmf(k,l,x)
%discrete uniform(k,l) rv X,
%input = vector x
%output= vector pmf: pmf(i)=Prob[X=x(i)]
pmf= (x>=k).*(x<=l).*(x==floor(x));
pmf=pmf(:)/(l-k+1);

Input: k and l are the parameters of a discrete uniform (k, l) random variable X, x is a vector of possible sample values.

Output: y is a vector with y(i) = P_X(x(i)).

duniformrv x=duniformrv(k,l,m)

function x=duniformrv(k,l,m)
%returns m samples of a discrete
%uniform (k,l) random variable
r=rand(m,1);
cdf=duniformcdf(k,l,k:l);
x=k+count(cdf,r);

Input: k and l are the parameters of a discrete uniform (k, l) random variable X, m is a positive integer.

Output: x is a vector of m independent samples of random variable X.
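
For example, a fair six-sided die can be simulated as a discrete uniform (1, 6) random variable (a sketch):

x=duniformrv(1,6,1000);
mean(x)   %should be close to E[X]=3.5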


erlangb pb=erlangb(rho,c)

function pb=erlangb(rho,c);
%Usage: pb=erlangb(rho,c)
%returns the Erlang-B blocking
%probability for an M/M/c/c
%queue with load rho
pn=exp(-rho)*poissonpmf(rho,0:c);
pb=pn(c+1)/sum(pn);

Input: Offered load rho (ρ = λ/µ) and the number of servers c of an M/M/c/c queue.

Output: pb, the blocking probability of the queue.

erlangcdf y=erlangcdf(n,lambda,x)

function F=erlangcdf(n,lambda,x)
F=1.0-poissoncdf(lambda*x,n-1);

Input: n and lambda are the parameters of an Erlang random variable X, vector x.

Output: Vector y such that y_i = F_X(x_i).

erlangpdf y=erlangpdf(n,lambda,x)

function f=erlangpdf(n,lambda,x)
f=((lambda^n)/factorial(n-1))...
   *(x.^(n-1)).*exp(-lambda*x);

Input: n and lambda are the parameters of an Erlang random variable X, vector x.

Output: Vector y such that y_i = f_X(x_i) = λ^n x_i^(n−1) e^(−λx_i)/(n − 1)!.

erlangrv x=erlangrv(n,lambda,m)

function x=erlangrv(n,lambda,m)
y=exponentialrv(lambda,m*n);
x=sum(reshape(y,m,n),2);

Input: n and lambda are the parameters of an Erlang random variable X, integer m.

Output: Length m vector x such that each x_i is a sample of X.
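
The code exploits the fact that an Erlang (n, λ) random variable is the sum of n independent exponential (λ) random variables. A quick check (sketch):

x=erlangrv(3,0.5,1000);
mean(x)   %should be close to E[X]=n/lambda=6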

exponentialcdf y=exponentialcdf(lambda,x)

function F=exponentialcdf(lambda,x)
F=1.0-exp(-lambda*x);

Input: lambda is the parameter of an exponential random variable X, vector x.

Output: Vector y such that y_i = F_X(x_i) = 1 − e^(−λx_i).


exponentialpdf y=exponentialpdf(lambda,x)

function f=exponentialpdf(lambda,x)
f=lambda*exp(-lambda*x);
f=f.*(x>=0);

Input: lambda is the parameter of an exponential random variable X, vector x.

Output: Vector y such that y_i = f_X(x_i) = λe^(−λx_i).

exponentialrv x=exponentialrv(lambda,m)

function x=exponentialrv(lambda,m)
x=-(1/lambda)*log(1-rand(m,1));

Input: lambda is the parameter of an exponential random variable X, integer m.

Output: Length m vector x such that each x_i is a sample of X.
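
The one-liner is an instance of the inverse transform method: if U is a uniform (0, 1) random variable, then X = −ln(1 − U)/λ satisfies P[X ≤ x] = 1 − e^(−λx). A quick check (sketch):

x=exponentialrv(2,10000);
mean(x)   %should be close to E[X]=1/lambda=0.5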

finitecdf y=finitecdf(sx,px,x)

function cdf=finitecdf(sx,px,x)
% finite random variable X:
% vector sx of sample space
% elements {sx(1),sx(2), ...}
% vector px of probabilities
% px(i)=P[X=sx(i)]
% Output is the vector
% cdf: cdf(i)=P[X<=x(i)]
cdf=[];
for i=1:length(x)
   pxi= sum(px(find(sx<=x(i))));
   cdf=[cdf; pxi];
end

Input: sx is the range of a finite random variable X, px is the corresponding probability assignment, x is a vector of possible sample values.

Output: y is a vector with y(i) = F_X(x(i)).

finitecoeff rho=finitecoeff(SX,SY,PXY)

function rho=finitecoeff(SX,SY,PXY);
%Usage: rho=finitecoeff(SX,SY,PXY)
%Calculate the correlation coefficient rho of
%finite random variables X and Y
ex=finiteexp(SX,PXY); vx=finitevar(SX,PXY);
ey=finiteexp(SY,PXY); vy=finitevar(SY,PXY);
R=finiteexp(SX.*SY,PXY);
rho=(R-ex*ey)/sqrt(vx*vy);

Input: Grids SX, SY and probability grid PXY describing the finite random variables X and Y.

Output: rho, the correlation coefficient of X and Y.


finitecov covxy=finitecov(SX,SY,PXY)

function covxy=finitecov(SX,SY,PXY);
%Usage: covxy=finitecov(SX,SY,PXY)
%returns the covariance of
%finite random variables X and Y
%given by grids SX, SY, and PXY
ex=finiteexp(SX,PXY);
ey=finiteexp(SY,PXY);
R=finiteexp(SX.*SY,PXY);
covxy=R-ex*ey;

Input: Grids SX, SY and probability grid PXY describing the finite random variables X and Y.

Output: covxy, the covariance of X and Y.

finiteexp ex=finiteexp(sx,px)

function ex=finiteexp(sx,px);
%Usage: ex=finiteexp(sx,px)
%returns the expected value E[X]
%of finite random variable X described
%by samples sx and probabilities px
ex=sum((sx(:)).*(px(:)));

Input: Probability vector px, vector of samples sx describing random variable X.

Output: ex, the expected value E[X].

finitepmf y=finitepmf(sx,px,x)

function pmf=finitepmf(sx,px,x)
% finite random variable X:
% vector sx of sample space
% elements {sx(1),sx(2), ...}
% vector px of probabilities
% px(i)=P[X=sx(i)]
% Output is the vector
% pmf: pmf(i)=P[X=x(i)]
pmf=zeros(size(x(:)));
for i=1:length(x)
   pmf(i)= sum(px(find(sx==x(i))));
end

Input: sx is the range of a finite random variable X, px is the corresponding probability assignment, x is a vector of possible sample values.

Output: y is a vector with y(i) = P[X = x(i)].
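
For example, for a finite random variable taking the values 1, 2, 3 with probabilities 0.5, 0.3, 0.2 (a sketch):

sx=[1 2 3]; px=[0.5 0.3 0.2];
finitepmf(sx,px,[2 5])'   %returns [0.3 0]; 5 is outside the range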

finiterv x=finiterv(sx,p,m)

function x=finiterv(s,p,m)
% returns m samples
% of finite (s,p) rv
%s=s(:);p=p(:);
r=rand(m,1);
cdf=cumsum(p);
x=s(1+count(cdf,r));

Input: sx is the range of a finite random variable X, p is the corresponding probability assignment, m is a positive integer.

Output: x is a vector of m independent sample values of X.


finitevar v=finitevar(sx,px)

function v=finitevar(sx,px);
%Usage: v=finitevar(sx,px)
%returns the variance Var[X]
%of finite random variable X described by
%samples sx and probabilities px
ex2=finiteexp(sx.^2,px);
ex=finiteexp(sx,px);
v=ex2-(ex^2);

Input: Probability vector px and vector of samples sx describing random variable X.

Output: v, the variance Var[X].

gausscdf y=gausscdf(mu,sigma,x)

function f=gausscdf(mu,sigma,x)
f=phi((x-mu)/sigma);

Input: mu and sigma are the parameters of a Gaussian random variable X, vector x.

Output: Vector y such that y_i = F_X(x_i) = Φ((x_i − µ)/σ).

gausspdf y=gausspdf(mu,sigma,x)

function f=gausspdf(mu,sigma,x)
f=exp(-(x-mu).^2/(2*sigma^2))/...
   sqrt(2*pi*sigma^2);

Input: mu and sigma are the parameters of a Gaussian random variable X, vector x.

Output: Vector y such that y_i = f_X(x_i).

gaussrv x=gaussrv(mu,sigma,m)

function x=gaussrv(mu,sigma,m)
x=mu +(sigma*randn(m,1));

Input: mu and sigma are the parameters of a Gaussian random variable X, integer m.

Output: Length m vector x such that each x_i is a sample of X.


gaussvector x=gaussvector(mu,C,m)

function x=gaussvector(mu,C,m)
%output: m Gaussian vectors,
%each with mean mu
%and covariance matrix C
if (min(size(C))==1)
   C=toeplitz(C);
end
n=size(C,2);
if (length(mu)==1)
   mu=mu*ones(n,1);
end
[U,D,V]=svd(C);
x=V*(D^(0.5))*randn(n,m)...
   +(mu(:)*ones(1,m));

Input: For a Gaussian (µ_X, C_X) random vector X, gaussvector can be called in two ways:

• C is the n × n covariance matrix, mu is either a length n vector or a scalar, m is an integer.

• C is the length n vector equal to the first row of a symmetric Toeplitz covariance matrix C_X, mu is either a length n vector or a scalar, m is an integer.

If mu is a length n vector, then mu is the expected value vector; otherwise, each element of X is assumed to have mean mu.

Output: n × m matrix x such that each column x(:,i) is a sample vector of X.
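
A brief sketch of the two calling conventions, for a length 2 Gaussian vector with zero mean, unit variances, and correlation coefficient 0.5:

C=[1 0.5; 0.5 1];
x=gaussvector(0,C,1000);         %C given as the full covariance matrix
y=gaussvector(0,[1 0.5],1000);   %same C given as the first Toeplitz row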

gaussvectorpdf f=gaussvectorpdf(mu,C,x)

function f=gaussvectorpdf(mu,C,x)
n=length(x);
z=x(:)-mu(:);
f=exp(-z'*inv(C)*z/2)/...
   sqrt((2*pi)^n*det(C));

Input: For a Gaussian (µ_X, C_X) random vector X, mu is a length n vector, C is the n × n covariance matrix, x is a length n vector.

Output: f, the Gaussian vector PDF f_X(x) evaluated at x.

geometriccdf y=geometriccdf(p,x)

function cdf=geometriccdf(p,x)
%For geometric(p) rv X and
%input vector x, output is vector
%cdf such that cdf_i=Prob(X<=x_i)
x=(x(:)>=1).*floor(x(:));
cdf=1-((1-p).^x);

Input: p is the parameter of a geometric random variable X, x is a vector of possible sample values.

Output: y is a vector with y(i) = F_X(x(i)).


geometricpmf y=geometricpmf(p,x)

function pmf=geometricpmf(p,x)
%geometric(p) rv X
%out: pmf(i)=Prob[X=x(i)]
x=x(:);
pmf= p*((1-p).^(x-1));
pmf= (x>0).*(x==floor(x)).*pmf;

Input: p is the parameter of a geometric random variable X, x is a vector of possible sample values.

Output: y is a vector with y(i) = P_X(x(i)).

geometricrv x=geometricrv(p,m)

function x=geometricrv(p,m)
%Usage: x=geometricrv(p,m)
%returns m samples of a geometric (p) rv
r=rand(m,1);
x=ceil(log(1-r)/log(1-p));

Input: p is the parameter of a geometric random variable X, m is a positive integer.

Output: x is a vector of m independent samples of random variable X.

icdfrv x=icdfrv(@icdf,m)

function x=icdfrv(icdfhandle,m)
%Usage: x=icdfrv(@icdf,m)
%returns m samples of rv X
%with inverse CDF icdf.m
u=rand(m,1);
x=feval(icdfhandle,u);

Input: @icdf is a "handle" (a kind of pointer) to a MATLAB function icdf.m that is MATLAB's representation of an inverse CDF F_X^(−1)(x) of a random variable X, integer m.

Output: Length m vector x such that each x_i is a sample of X.
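
For example (a sketch, assuming a MATLAB version that supports anonymous function handles), exponential (1) samples via the inverse CDF F_X^(−1)(u) = −ln(1 − u):

expicdf=@(u) -log(1-u);   %illustrative inverse CDF, not part of matcode
x=icdfrv(expicdf,1000);
mean(x)                   %should be close to E[X]=1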


pascalcdf y=pascalcdf(k,p,x)

function cdf=pascalcdf(k,p,x)
%Usage: cdf=pascalcdf(k,p,x)
%For a pascal (k,p) rv X
%and input vector x, the output
%is a vector cdf such that
% cdf(i)=Prob[X<=x(i)]
x=floor(x(:)); % for noninteger x(i)
allx=k:max(x);
%allcdf holds all needed cdf values
allcdf=cumsum(pascalpmf(k,p,allx));
%x_i < k have zero-prob,
% other values are OK
okx=(x>=k);
%set zero-prob x(i)=k,
%just so indexing is not fouled up
x=(okx.*x) +((1-okx)*k);
cdf= okx.*allcdf(x-k+1);

Input: k and p are the parameters of a Pascal (k, p) random variable X, x is a vector of possible sample values.

Output: y is a vector with y(i) = F_X(x(i)).

pascalpmf y=pascalpmf(k,p,x)

function pmf=pascalpmf(k,p,x)
%For Pascal (k,p) rv X, and
%input vector x, output is a
%vector pmf: pmf(i)=Prob[X=x(i)]
x=x(:);
n=max(x);
i=(k:n-1)';
ip= [1 ;(1-p)*(i./(i+1-k))];
%pb=all n-k+1 pascal probs
pb=(p^k)*cumprod(ip);
okx=(x==floor(x)).*(x>=k);
%set bad x(i)=k to stop bad indexing
x=(okx.*x) + k*(1-okx);
% pmf(i)=0 unless x(i) >= k
pmf=okx.*pb(x-k+1);

Input: k and p are the parameters of a Pascal (k, p) random variable X, x is a vector of possible sample values.

Output: y is a vector with y(i) = P_X(x(i)).


pascalrv x=pascalrv(k,p,m)

function x=pascalrv(k,p,m)
% return m samples of pascal(k,p) rv
r=rand(m,1);
rmax=max(r);
xmin=k;
xmax=ceil(2*(k/p)); %set max range
sx=xmin:xmax;
cdf=pascalcdf(k,p,sx);
while cdf(length(cdf)) <=rmax
   xmax=2*xmax;
   sx=xmin:xmax;
   cdf=pascalcdf(k,p,sx);
end
x=xmin+countless(cdf,r);

Input: k and p are the parameters of a Pascal random variable X, m is a positive integer.

Output: x is a vector of m independent samples of random variable X.

phi y=phi(x)

function y=phi(x)
sq2=sqrt(2);
y= 0.5 + 0.5*erf(x/sq2);

Input: Vector x.

Output: Vector y such that y(i) = Φ(x(i)).

poissoncdf y=poissoncdf(alpha,x)

function cdf=poissoncdf(alpha,x)
%output cdf(i)=Prob[X<=x(i)]
x=floor(x(:));
sx=0:max(x);
cdf=cumsum(poissonpmf(alpha,sx));
%cdf from 0 to max(x)
okx=(x>=0); %x(i)<0 -> cdf=0
x=(okx.*x); %set negative x(i)=0
cdf= okx.*cdf(x+1);
%cdf=0 for x(i)<0

Input: alpha is the parameter of a Poisson (α) random variable X, x is a vector of possible sample values.

Output: y is a vector with y(i) = F_X(x(i)).


poissonpmf y=poissonpmf(alpha,x)

function pmf=poissonpmf(alpha,x)
%Poisson (alpha) rv X,
%out=vector pmf: pmf(i)=P[X=x(i)]
x=x(:);
k=(1:max(x))';
logfacts =cumsum(log(k));
pb=exp([-alpha; ...
   -alpha+ (k*log(alpha))-logfacts]);
okx=(x>=0).*(x==floor(x));
x=okx.*x;
pmf=okx.*pb(x+1);
%pmf(i)=0 for zero-prob x(i)

Input: alpha is the parameter of a Poisson (α) random variable X, x is a vector of possible sample values.

Output: y is a vector with y(i) = P_X(x(i)).

poissonrv x=poissonrv(alpha,m)

function x=poissonrv(alpha,m)
%return m samples of poisson(alpha) rv X
r=rand(m,1);
rmax=max(r);
xmin=0;
xmax=ceil(2*alpha); %set max range
sx=xmin:xmax;
cdf=poissoncdf(alpha,sx);
while cdf(length(cdf)) <=rmax
   xmax=2*xmax;
   sx=xmin:xmax;
   cdf=poissoncdf(alpha,sx);
end
x=xmin+countless(cdf,r);

Input: alpha is the parameter of a Poisson (α) random variable X, m is a positive integer.

Output: x is a vector of m independent samples of random variable X.

uniformcdf y=uniformcdf(a,b,x)

function F=uniformcdf(a,b,x)
%Usage: F=uniformcdf(a,b,x)
%returns the CDF of a continuous
%uniform rv evaluated at x
F=(x-a).*((x>=a) & (x<b))/(b-a);
F=F+1.0*(x>=b);

Input: a and b are parameters for continuous uniform random variable X, vector x.

Output: Vector y such that y_i = F_X(x_i).


uniformpdf y=uniformpdf(a,b,x)

function f=uniformpdf(a,b,x)
%Usage: f=uniformpdf(a,b,x)
%returns the PDF of a continuous
%uniform rv evaluated at x
f=((x>=a) & (x<b))/(b-a);

Input: a and b are parameters for continuous uniform random variable X, vector x.

Output: Vector y such that y_i = f_X(x_i).

uniformrv x=uniformrv(a,b,m)

function x=uniformrv(a,b,m)
%Usage: x=uniformrv(a,b,m)
%Returns m samples of a
%uniform (a,b) random variable
x=a+(b-a)*rand(m,1);

Input: a and b are parameters for continuous uniform random variable X, positive integer m.

Output: m element vector x such that each x(i) is a sample of X.


Functions for Stochastic Processes

brownian w=brownian(alpha,t)

function w=brownian(alpha,t)
%Brownian motion process
%sampled at t(1)<t(2)< ...
t=t(:);
n=length(t);
delta=t-[0;t(1:n-1)];
x=sqrt(alpha*delta).*gaussrv(0,1,n);
w=cumsum(x);

Input: t is a vector holding an ordered sequence of inspection times, alpha is the scaling constant of a Brownian motion process such that the ith increment has variance α(t_i − t_(i−1)).

Output: w is a vector such that w(i) is the position at time t(i) of the particle in Brownian motion.

cmcprob pv=cmcprob(Q,p0,t)

function pv = cmcprob(Q,p0,t)
%Q has zero diagonal rates
%initial state probabilities p0
K=size(Q,1)-1; %max no. state
%check for integer p0
if (length(p0)==1)
   p0=((0:K)==p0);
end
R=Q-diag(sum(Q,2));
pv= (p0(:)'*expm(R*t))';

Input: n × n state transition matrix Q for a continuous-time finite Markov chain, length n vector p0 denoting the initial state probabilities, nonnegative scalar t.

Output: Length n vector pv such that pv is the state probability vector at time t of the Markov chain.

Comment: If p0 is a scalar integer, then the simulation starts in state p0.

cmcstatprob pv=cmcstatprob(Q)

function pv = cmcstatprob(Q)
%Q has zero diagonal rates
R=Q-diag(sum(Q,2));
n=size(Q,1);
R(:,1)=ones(n,1);
pv=([1 zeros(1,n-1)]*R^(-1))';

Input: State transition matrix Q for a continuous-time finite Markov chain.

Output: pv is the stationary probability vector for the continuous-time Markov chain.

dmcstatprob pv=dmcstatprob(P)

function pv = dmcstatprob(P)
n=size(P,1);
A=(eye(n)-P);
A(:,1)=ones(n,1);
pv=([1 zeros(1,n-1)]*A^(-1))';

Input: n × n stochastic matrix P representing a discrete-time aperiodic irreducible finite Markov chain.

Output: pv is the stationary probability vector.


poissonarrivals s=poissonarrivals(lambda,T)

function s=poissonarrivals(lambda,T)
%arrival times s=[s(1) ... s(n)]
% s(n)<= T < s(n+1)
n=ceil(1.1*lambda*T);
s=cumsum(exponentialrv(lambda,n));
while (s(length(s))< T),
   s_new=s(length(s))+ ...
      cumsum(exponentialrv(lambda,n));
   s=[s; s_new];
end
s=s(s<=T);

Input: lambda is the arrival rate of a Poisson process, T marks the end of an observation interval [0, T].

Output: s=[s(1), ..., s(n)]' is a vector such that s(i) is the ith arrival time. Note that the length n is a Poisson random variable with expected value λT.

Comment: This code is pretty stupid. There are decidedly better ways to create a set of arrival times; see Problem 10.13.5.

poissonprocess N=poissonprocess(lambda,t)

function N=poissonprocess(lambda,t)
%input: rate lambda>0, vector t
%For a sample function of a
%Poisson process of rate lambda,
%N(i) = no. of arrivals by t(i)
s=poissonarrivals(lambda,max(t));
N=count(s,t);

Input: lambda is the arrival rate of a Poisson process, t is a vector of "inspection times."

Output: N is a vector such that N(i) is the number of arrivals by inspection time t(i).
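
A short sketch: count the arrivals of a rate λ = 2 Poisson process at inspection times 1 through 5:

t=1:5;
N=poissonprocess(2,t);   %N(i) counts arrivals by t(i); E[N(i)]=2*t(i)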

simcmc ST=simcmc(Q,p0,T)

function ST=simcmc(Q,p0,T);
K=size(Q,1)-1; %max no. state
%calc average trans. rate
ps=cmcstatprob(Q);
v=sum(Q,2); R=ps'*v;
n=ceil(0.6*T/R);
ST=simcmcstep(Q,p0,2*n);
while (sum(ST(:,2))<T),
   s=ST(size(ST,1),1);
   p00=Q(1+s,:)/v(1+s);
   S=simcmcstep(Q,p00,n);
   ST=[ST;S];
end
n=1+sum(cumsum(ST(:,2))<T);
ST=ST(1:n,:);
%truncate last holding time
ST(n,2)=T-sum(ST(1:n-1,2));

Input: State transition matrix Q for a continuous-time finite Markov chain, vector p0 denoting the initial state probabilities, time duration T.

Output: A simulation of the Markov chain system over the time interval [0, T]: The output is an n × 2 matrix ST such that the first column ST(:,1) is the sequence of system states and the second column ST(:,2) is the amount of time spent in each state. That is, ST(i,2) is the amount of time the system spends in state ST(i,1).

Comment: If p0 is a scalar integer, then the simulation starts in state p0. Note that n, the number of state occupancy periods, is random.


simcmcstep S=simcmcstep(Q,p0,n)

function S=simcmcstep(Q,p0,n);
%S=simcmcstep(Q,p0,n)
% Simulate n steps of a cts
% Markov Chain, rate matrix Q,
% init. state probabilities p0
K=size(Q,1)-1; %max no. state
S=zeros(n+1,2); %init allocation
%check for integer p0
if (length(p0)==1)
   p0=((0:K)==p0);
end
v=sum(Q,2); %state dep. rates
t=1./v;
P=diag(t)*Q;
S(:,1)=simdmc(P,p0,n);
S(:,2)=t(1+S(:,1)) ...
   .*exponentialrv(1,n+1);

Input: State transition matrix Q for a continuous-time finite Markov chain, vector p0 denoting the initial state probabilities, integer n.

Output: A simulation of n steps of the continuous-time Markov chain system: The output is an n × 2 matrix S such that the first column S(:,1) is the length n sequence of system states and the second column S(:,2) is the amount of time spent in each state. That is, S(i,2) is the amount of time the system spends in state S(i,1).

Comment: If p0 is a scalar integer, then the simulation starts in state p0. This program is the basis for simcmc.

simdmc x=simdmc(P,p0,n)

function x=simdmc(P,p0,n)
K=size(P,1)-1; %highest no. state
sx=0:K; %state space
x=zeros(n+1,1); %initialization
if (length(p0)==1) %convert integer p0 to prob vector
   p0=((0:K)==p0);
end
x(1)=finiterv(sx,p0,1); %x(m)= state at time m-1
for m=1:n,
   x(m+1)=finiterv(sx,P(x(m)+1,:),1);
end

Input: Stochastic matrix P which is the state transition matrix of a discrete-time finite Markov chain, vector p0 denoting the initial state probabilities, integer n.

Output: A simulation of the Markov chain system such that for the vector x, x(m) is the state at time m-1 of the Markov chain.

Comment: If p0 is a scalar integer, then the simulation starts in state p0.
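
A sketch: simulate 50 steps of a two-state chain on state space {0, 1} that starts in state 0, then plot the state sequence with simplot (described under Random Utilities):

P=[0.9 0.1; 0.2 0.8];
x=simdmc(P,0,50);
simplot(x);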


Random Utilities

count n=count(x,y)

function n=count(x,y)
%Usage n=count(x,y)
%n(i)= # elements of x <= y(i)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX<=MY),1))';

Input: Vectors x and y.

Output: Vector n such that n(i) is the number of elements of x less than or equal to y(i).
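
For example (a sketch; the transpose prints the column result as a row):

count([1 2 3 4],[0 2.5 10])'   %returns [0 2 4]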

countequal n=countequal(x,y)

function n=countequal(x,y)
%Usage: n=countequal(x,y)
%n(j)= # elements of x = y(j)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX==MY),1))';

Input: Vectors x and y.

Output: Vector n such that n(i) is the number of elements of x equal to y(i).

countless n=countless(x,y)

function n=countless(x,y)
%Usage: n=countless(x,y)
%n(i)= # elements of x < y(i)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX<MY),1))';

Input: Vectors x and y.

Output: Vector n such that n(i) is the number of elements of x strictly less than y(i).

dftmat F=dftmat(N)

function F = dftmat(N);
%Usage: F=dftmat(N)
%F is the N by N DFT matrix
n=(0:N-1)';
F=exp((-1.0j)*2*pi*(n*(n'))/N);

Input: Integer N.

Output: F is the N by N discrete Fourier transform matrix.


freqxy fxy=freqxy(xy,SX,SY)

function fxy = freqxy(xy,SX,SY)
%Usage: fxy = freqxy(xy,SX,SY)
%xy is an m x 2 matrix:
%xy(i,:)= ith sample pair X,Y
%Output fxy is a K x 3 matrix:
% [fxy(k,1) fxy(k,2)]
% = kth unique pair [x y] and
% fxy(k,3)= corresp. rel. freq.
%extend xy to include a sample
%for all possible (X,Y) pairs:
xy=[xy; SX(:) SY(:)];
[U,I,J]=unique(xy,'rows');
N=hist(J,1:max(J))-1;
N=N/sum(N);
fxy=[U N(:)];
%reorder fxy rows to match
%rows of [SX(:) SY(:) PXY(:)]:
fxy=sortrows(fxy,[2 1 3]);

Input: For random variables X and Y, xy is an m × 2 matrix holding a list of sample value pairs; xy(i,:) is the ith sample pair (X, Y). Grids SX and SY represent the sample space.

Output: fxy is a K × 3 matrix. In each row [fxy(k,1) fxy(k,2) fxy(k,3)], [fxy(k,1) fxy(k,2)] is a unique (X, Y) pair with relative frequency fxy(k,3).

Comment: Given the grids SX, SY and the probability grid PXY, a list of random sample value pairs xy can be simulated by the commands

S=[SX(:) SY(:)];
xy=finiterv(S,PXY(:),m);

The output fxy is ordered so that the rows match the ordering of rows in the matrix [SX(:) SY(:) PXY(:)].

fftc S=fftc(r,N); S=fftc(r)

function S=fftc(varargin);
%DFT for a signal r
%centered at the origin
%Usage:
% fftc(r,N): N point DFT of r
% fftc(r): length(r) DFT of r
r=varargin{1};
L=1+floor(length(r)/2);
if (nargin>1)
   N=varargin{2}(1);
else
   N=(2*L)-1;
end
R=fft(r,N);
n=reshape(0:(N-1),size(R));
phase=2*pi*(n/N)*(L-1);
S=R.*exp((1.0j)*phase);

Input: Vector r=[r(1) ... r(2k+1)] holding the time sequence r_(−k), . . . , r_0, . . . , r_k centered around the origin.

Output: S is the DFT of r.

Comment: Supports the same calling conventions as fft.


pmfplot pmfplot(sx,px,'x axis text','y axis text')

function h=pmfplot(sx,px,xls,yls)
%Usage: pmfplot(sx,px,xls,yls)
%sx and px are vectors, px is the PMF
%xls and yls are x and y label strings
nonzero=find(px);
sx=sx(nonzero); px=px(nonzero);
sx=(sx(:))'; px=(px(:))';
XM = [sx; sx];
PM=[zeros(size(px)); px];
h=plot(XM,PM,'-k');
set(h,'LineWidth',3);
if (nargin==4)
   xlabel(xls);
   ylabel(yls,'VerticalAlignment','Bottom');
end
xmin=min(sx); xmax=max(sx);
xborder=0.05*(xmax-xmin);
xmax=xmax+xborder;
xmin=xmin-xborder;
ymax=1.1*max(px);
axis([xmin xmax 0 ymax]);

Input: Sample space vector sx and PMF vector px for finite random variable X, optional text strings xls and yls.

Output: A plot of the PMF P_X(x) in the bar style used in the text.
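
A sketch combining functions from this manual: plot the PMF of a binomial (5, 0.5) random variable.

sx=0:5;
px=binomialpmf(5,0.5,sx);
pmfplot(sx,px,'x','P_X(x)');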

rect y=rect(x)

function y=rect(x);
%Usage:y=rect(x);
y=1.0*(abs(x)<0.5);

Input: Vector x.

Output: Vector y such that y_i = rect(x_i) = 1 if |x_i| < 0.5, and 0 otherwise.

sinc y=sinc(x)

function y=sinc(x);
xx=x+(x==0);
y=sin(pi*xx)./(pi*xx);
y=((1.0-(x==0)).*y)+ (1.0*(x==0));

Input: Vector x.

Output: Vector y such that y_i = sinc(x_i) = sin(πx_i)/(πx_i).

Comment: The code is ugly because it makes sure to produce the right limit value at x_i = 0.


simplot simplot(S,xlabel,ylabel)

function h=simplot(S,xls,yls);
%h=simplot(S,xlabel,ylabel)
% Plots the output of a simulated state sequence
% If S is N by 1, a discrete time chain is assumed
% with visit times of one unit.
% If S is an N by 2 matrix, a cts time Markov chain
% is assumed where
% S(:,1) = state sequence
% S(:,2) = state visit times
% The cumulative sums of the visit times
% are the transition instants.
% h is a handle to a stairs plot of the state
% sequence vs state transition times
%in case of discrete time simulation
if (size(S,2)==1)
   S=[S ones(size(S))];
end
Y=[S(:,1) ; S(size(S,1),1)];
X=cumsum([0 ; S(:,2)]);
h=stairs(X,Y);
if (nargin==3)
   xlabel(xls);
   ylabel(yls,'VerticalAlignment','Bottom');
end

Input: The simulated state sequence vector S generated by S=simdmc(P,p0,n), or the n × 2 state/time matrix ST generated by either ST=simcmc(Q,p0,T) or ST=simcmcstep(Q,p0,n).

Output: A "stairs" plot showing the sequence of simulation states over time.

Comment: If S is just a state sequence vector, then each stair has equal width. If S is an n × 2 state/time matrix ST, then the width of a stair is proportional to the time spent in that state.


Probability and Stochastic Processes

A Friendly Introduction for Electrical and Computer Engineers

Second Edition

Quiz Solutions

Roy D. Yates and David J. Goodman

May 22, 2004

• The MATLAB section quizzes at the end of each chapter use programs available for download as the archive matcode.zip. This archive has general purpose programs for solving probability problems as well as specific .m files associated with examples or quizzes in the text. Also available is a manual probmatlab.pdf describing the general purpose .m files in matcode.zip.

• We have made a substantial effort to check the solution to every quiz. Nevertheless, there is a nonzero probability (in fact, a probability close to unity) that errors will be found. If you find errors or have suggestions or comments, please send email to ryates@winlab.rutgers.edu. When errors are found, corrected solutions will be posted at the website.


Quiz Solutions – Chapter 1

Quiz 1.1

In the Venn diagrams for parts (1)-(6) below, the shaded area represents the indicated set. [Venn diagrams omitted: each shows three circles labeled M, O, and T.]

(1) R = T^c   (2) M ∪ O   (3) M ∩ O

(4) R ∪ M   (5) R ∩ M   (6) T^c − M

Quiz 1.2

(1) A_1 = {vvv, vvd, vdv, vdd}
(2) B_1 = {dvv, dvd, ddv, ddd}
(3) A_2 = {vvv, vvd, dvv, dvd}
(4) B_2 = {vdv, vdd, ddv, ddd}
(5) A_3 = {vvv, ddd}
(6) B_3 = {vdv, dvd}
(7) A_4 = {vvv, vvd, vdv, dvv, vdd, dvd, ddv}
(8) B_4 = {ddd, ddv, dvd, vdd}

Recall that A_i and B_i are collectively exhaustive if A_i ∪ B_i = S. Also, A_i and B_i are mutually exclusive if A_i ∩ B_i = φ. Since we have written down each pair A_i and B_i above, we can simply check for these properties.

The pair A_1 and B_1 are mutually exclusive and collectively exhaustive. The pair A_2 and B_2 are mutually exclusive and collectively exhaustive. The pair A_3 and B_3 are mutually exclusive but not collectively exhaustive. The pair A_4 and B_4 are not mutually exclusive since dvd belongs to A_4 and B_4. However, A_4 and B_4 are collectively exhaustive.


Quiz 1.3

There are exactly 50 equally likely outcomes: s_51 through s_100. Each of these outcomes has probability 0.02.

(1) P[{s_79}] = 0.02

(2) P[{s_100}] = 0.02

(3) P[A] = P[{s_90, . . . , s_100}] = 11 × 0.02 = 0.22

(4) P[F] = P[{s_51, . . . , s_59}] = 9 × 0.02 = 0.18

(5) P[T ≥ 80] = P[{s_80, . . . , s_100}] = 21 × 0.02 = 0.42

(6) P[T < 90] = P[{s_51, s_52, . . . , s_89}] = 39 × 0.02 = 0.78

(7) P[a C grade or better] = P[{s_70, . . . , s_100}] = 31 × 0.02 = 0.62

(8) P[student passes] = P[{s_60, . . . , s_100}] = 41 × 0.02 = 0.82

Quiz 1.4

We can describe this experiment by the event space consisting of the four possible events VB, VL, DB, and DL. We represent these events in the table:

        V      D
L      0.35    ?
B       ?      ?

In a roundabout way, the problem statement tells us how to fill in the table. In particular,

P[V] = 0.7 = P[VL] + P[VB]   (1)
P[L] = 0.6 = P[VL] + P[DL]   (2)

Since P[VL] = 0.35, we can conclude that P[VB] = 0.35 and that P[DL] = 0.6 − 0.35 = 0.25. This allows us to fill in two more table entries:

        V      D
L      0.35   0.25
B      0.35    ?

The remaining table entry is filled in by observing that the probabilities must sum to 1. This implies P[DB] = 0.05 and the complete table is

        V      D
L      0.35   0.25
B      0.35   0.05

Finding the various probabilities is now straightforward:


(1) P[DL] = 0.25

(2) P[D ∪ L] = P[VL] + P[DL] + P[DB] = 0.35 + 0.25 + 0.05 = 0.65

(3) P[VB] = 0.35

(4) P[V ∪ L] = P[V] + P[L] − P[VL] = 0.7 + 0.6 − 0.35 = 0.95

(5) P[V ∪ D] = P[S] = 1

(6) P[LB] = P[LL^c] = 0

Quiz 1.5

(1) The probability of exactly two voice calls is

P[N_V = 2] = P[{vvd, vdv, dvv}] = 0.3   (1)

(2) The probability of at least one voice call is

P[N_V ≥ 1] = P[{vdd, dvd, ddv, vvd, vdv, dvv, vvv}]   (2)
= 6(0.1) + 0.2 = 0.8   (3)

An easier way to get the same answer is to observe that

P[N_V ≥ 1] = 1 − P[N_V < 1] = 1 − P[N_V = 0] = 1 − P[{ddd}] = 0.8   (4)

(3) The conditional probability of two voice calls followed by a data call given that there were two voice calls is

P[{vvd}|N_V = 2] = P[{vvd}, N_V = 2]/P[N_V = 2] = P[{vvd}]/P[N_V = 2] = 0.1/0.3 = 1/3   (5)

(4) The conditional probability of two data calls followed by a voice call given there were two voice calls is

P[{ddv}|N_V = 2] = P[{ddv}, N_V = 2]/P[N_V = 2] = 0   (6)

The joint event of the outcome ddv and exactly two voice calls has probability zero since there is only one voice call in the outcome ddv.

(5) The conditional probability of exactly two voice calls given at least one voice call is

P[N_V = 2|N_V ≥ 1] = P[N_V = 2, N_V ≥ 1]/P[N_V ≥ 1] = P[N_V = 2]/P[N_V ≥ 1] = 0.3/0.8 = 3/8   (7)

(6) The conditional probability of at least one voice call given there were exactly two voice calls is

P[N_V ≥ 1|N_V = 2] = P[N_V ≥ 1, N_V = 2]/P[N_V = 2] = P[N_V = 2]/P[N_V = 2] = 1   (8)

Given that there were two voice calls, there must have been at least one voice call.


Quiz 1.6

In this experiment, there are four outcomes with probabilities

P[{vv}] = (0.8)^2 = 0.64   P[{vd}] = (0.8)(0.2) = 0.16
P[{dv}] = (0.2)(0.8) = 0.16   P[{dd}] = (0.2)^2 = 0.04

When checking the independence of any two events A and B, it's wise to avoid intuition and simply check whether P[AB] = P[A]P[B]. Using the probabilities of the outcomes, we now can test for the independence of events.

(1) First, we calculate the probability of the joint event:

P[N_V = 2, N_V ≥ 1] = P[N_V = 2] = P[{vv}] = 0.64   (1)

Next, we observe that

P[N_V ≥ 1] = P[{vd, dv, vv}] = 0.96   (2)

Finally, we make the comparison

P[N_V = 2]P[N_V ≥ 1] = (0.64)(0.96) ≠ P[N_V = 2, N_V ≥ 1]   (3)

which shows the two events are dependent.

(2) The probability of the joint event is

P[N_V ≥ 1, C_1 = v] = P[{vd, vv}] = 0.80   (4)

From part (a), P[N_V ≥ 1] = 0.96. Further, P[C_1 = v] = 0.8, so that

P[N_V ≥ 1]P[C_1 = v] = (0.96)(0.8) = 0.768 ≠ P[N_V ≥ 1, C_1 = v]   (5)

Hence, the events are dependent.

(3) The problem statement that the calls were independent implies that the events the second call is a voice call, {C_2 = v}, and the first call is a data call, {C_1 = d}, are independent events. Just to be sure, we can do the calculations to check:

P[C_1 = d, C_2 = v] = P[{dv}] = 0.16   (6)

Since P[C_1 = d]P[C_2 = v] = (0.2)(0.8) = 0.16, we confirm that the events are independent. Note that this shouldn't be surprising since we used the information that the calls were independent in the problem statement to determine the probabilities of the outcomes.

(4) The probability of the joint event is

P[C_2 = v, N_V is even] = P[{vv}] = 0.64   (7)

Also, each event has probability

P[C_2 = v] = P[{dv, vv}] = 0.8,   P[N_V is even] = P[{dd, vv}] = 0.68   (8)

Thus, P[C_2 = v]P[N_V is even] = (0.8)(0.68) = 0.544. Since P[C_2 = v, N_V is even] = 0.64 ≠ 0.544, the events are dependent.


Quiz 1.7

Let F_i denote the event that the user is found on page i. The tree for the experiment is

[Tree diagram omitted: three stages; at stage i the branches are F_i with probability 0.8 and F_i^c with probability 0.2.]

The user is found unless all three paging attempts fail. Thus the probability the user is found is

P[F] = 1 − P[F_1^c F_2^c F_3^c] = 1 − (0.2)^3 = 0.992   (1)

Quiz 1.8

(1) We can view choosing each bit in the code word as a subexperiment. Each subexperiment has two possible outcomes: 0 and 1. Thus by the fundamental principle of counting, there are 2 × 2 × 2 × 2 = 2^4 = 16 possible code words.

(2) An experiment that can yield all possible code words with two zeroes is to choose which 2 bits (out of 4 bits) will be zero. The other two bits then must be ones. There are (4 choose 2) = 6 ways to do this. Hence, there are six code words with exactly two zeroes. For this problem, it is also possible to simply enumerate the six code words: 1100, 1010, 1001, 0101, 0110, 0011.

(3) When the first bit must be a zero, then the first subexperiment of choosing the first bit has only one outcome. For each of the next three bits, we have two choices. In this case, there are 1 × 2 × 2 × 2 = 8 ways of choosing a code word.

(4) For the constant ratio code, we can specify a code word by choosing M of the bits to be ones. The other N − M bits will be zeroes. The number of ways of choosing such a code word is (N choose M). For N = 8 and M = 3, there are (8 choose 3) = 56 code words.

Quiz 1.9

(1) In this problem, k bits received in error is the same as k failures in 100 trials. The failure probability is ε = 1 − p and the success probability is 1 − ε = p. That is, the probability of k bits in error and 100 − k correctly received bits is

P[S_{k,100−k}] = (100 choose k) ε^k (1 − ε)^(100−k)   (1)

For ε = 0.01,

P[S_{0,100}] = (1 − ε)^100 = (0.99)^100 = 0.3660   (2)
P[S_{1,99}] = 100(0.01)(0.99)^99 = 0.3700   (3)
P[S_{2,98}] = 4950(0.01)^2(0.99)^98 = 0.1849   (4)
P[S_{3,97}] = 161,700(0.01)^3(0.99)^97 = 0.0610   (5)

(2) The probability a packet is decoded correctly is just

P[C] = P[S_{0,100}] + P[S_{1,99}] + P[S_{2,98}] + P[S_{3,97}] = 0.9819   (6)

Quiz 1.10

Since the chip works only if all n transistors work, the transistors in the chip are like devices in series. The probability that a chip works is P[C] = p^n.

The module works if either 8 chips work or 9 chips work. Let C_k denote the event that exactly k chips work. Since transistor failures are independent of each other, chip failures are also independent. Thus each P[C_k] has the binomial probability

P[C_8] = (9 choose 8)(P[C])^8 (1 − P[C])^(9−8) = 9p^(8n)(1 − p^n),   (1)
P[C_9] = (P[C])^9 = p^(9n).   (2)

The probability a memory module works is

P[M] = P[C_8] + P[C_9] = p^(8n)(9 − 8p^n)   (3)

Quiz 1.11

R=rand(1,100);
X=(R<= 0.4) ...
   + (2*(R>0.4).*(R<=0.9)) ...
   + (3*(R>0.9));
Y=hist(X,1:3)

For a MATLAB simulation, we first generate a vector R of 100 random numbers. Second, we generate vector X as a function of R to represent the 3 possible outcomes of a flip. That is, X(i)=1 if flip i was heads, X(i)=2 if flip i was tails, and X(i)=3 if flip i landed on the edge.

To see how this works, we note there are three cases:

• If R(i) <= 0.4, then X(i)=1.
• If 0.4 < R(i) and R(i)<=0.9, then X(i)=2.
• If 0.9 < R(i), then X(i)=3.

These three cases will have probabilities 0.4, 0.5 and 0.1. Lastly, we use the hist function to count the occurrences of each possible value of X(i).


Quiz Solutions – Chapter 2

Quiz 2.1

The sample space, probabilities and corresponding grades for the experiment are

Outcome   P[·]    G
BB        0.36   3.0
BC        0.24   2.5
CB        0.24   2.5
CC        0.16   2.0

Quiz 2.2

(1) To find c, we recall that the PMF must sum to 1. That is,

Σ_{n=1}^{3} P_N(n) = c(1 + 1/2 + 1/3) = 1   (1)

This implies c = 6/11. Now that we have found c, the remaining parts are straightforward.

(2) P[N = 1] = P_N(1) = c = 6/11

(3) P[N ≥ 2] = P_N(2) + P_N(3) = c/2 + c/3 = 5/11

(4) P[N > 3] = Σ_{n=4}^{∞} P_N(n) = 0

Quiz 2.3

Decoding each transmitted bit is an independent trial where we call a bit error a "success." Each bit is in error, that is, the trial is a success, with probability p. Now we can interpret each experiment in the generic context of independent trials.

(1) The random variable X is the number of trials up to and including the first success. Similar to Example 2.11, X has the geometric PMF

P_X(x) = p(1 − p)^(x−1) for x = 1, 2, . . ., and 0 otherwise.   (1)

(2) If p = 0.1, then the probability exactly 10 bits are sent is

P[X = 10] = P_X(10) = (0.1)(0.9)^9 = 0.0387   (2)

The probability that at least 10 bits are sent is P[X ≥ 10] = Σ_{x=10}^{∞} P_X(x). This sum is not too hard to calculate. However, it's even easier to observe that X ≥ 10 if and only if the first 9 bits are transmitted correctly. That is,

P[X ≥ 10] = P[first 9 bits are correct] = (1 − p)^9   (3)

For p = 0.1, P[X ≥ 10] = (0.9)^9 = 0.3874.

(3) The random variable Y is the number of successes in 100 independent trials. Just as in Example 2.13, Y has the binomial PMF

P_Y(y) = (100 choose y) p^y (1 − p)^(100−y)   (4)

If p = 0.01, the probability of exactly 2 errors is

P[Y = 2] = P_Y(2) = (100 choose 2)(0.01)^2(0.99)^98 = 0.1849   (5)

(4) The probability of no more than 2 errors is

P[Y ≤ 2] = P_Y(0) + P_Y(1) + P_Y(2)   (6)
= (0.99)^100 + 100(0.01)(0.99)^99 + (100 choose 2)(0.01)^2(0.99)^98   (7)
= 0.9207   (8)

(5) Random variable Z is the number of trials up to and including the third success. Thus Z has the Pascal PMF (see Example 2.15)

P_Z(z) = (z − 1 choose 2) p^3 (1 − p)^(z−3)   (9)

Note that P_Z(z) > 0 for z = 3, 4, 5, . . ..

(6) If p = 0.25, the probability that the third error occurs on bit 12 is

P_Z(12) = (11 choose 2)(0.25)^3(0.75)^9 = 0.0645   (10)

Quiz 2.4

Each of these probabilities can be read off the CDF F_Y(y). However, we must keep in mind that when F_Y(y) has a discontinuity at y_0, F_Y(y) takes the upper value F_Y(y_0^+).

(1) P[Y < 1] = F_Y(1^−) = 0

(2) P[Y ≤ 1] = F_Y(1) = 0.6

(3) P[Y > 2] = 1 − P[Y ≤ 2] = 1 − F_Y(2) = 1 − 0.8 = 0.2

(4) P[Y ≥ 2] = 1 − P[Y < 2] = 1 − F_Y(2^−) = 1 − 0.6 = 0.4

(5) P[Y = 1] = P[Y ≤ 1] − P[Y < 1] = F_Y(1^+) − F_Y(1^−) = 0.6

(6) P[Y = 3] = P[Y ≤ 3] − P[Y < 3] = F_Y(3^+) − F_Y(3^−) = 0.8 − 0.8 = 0

Quiz 2.5

(1) With probability 0.7, a call is a voice call and C = 25. Otherwise, with probability 0.3, we have a data call and C = 40. This corresponds to the PMF

P_C(c) = 0.7 for c = 25, 0.3 for c = 40, and 0 otherwise.   (1)

(2) The expected value of C is

E[C] = 25(0.7) + 40(0.3) = 29.5 cents   (2)

Quiz 2.6

(1) As a function of N, the cost T is

T = 25N + 40(3 − N) = 120 − 15N   (1)

(2) To find the PMF of T, we can draw the following tree:

[Tree diagram omitted: branch N=0 with probability 0.1 leading to T=120, and branches N=1, N=2, N=3 each with probability 0.3 leading to T=105, T=90, T=75.]

From the tree, we can write down the PMF of T:

P_T(t) = 0.3 for t = 75, 90, 105, 0.1 for t = 120, and 0 otherwise.   (2)

From the PMF P_T(t), the expected value of T is

E[T] = 75P_T(75) + 90P_T(90) + 105P_T(105) + 120P_T(120)   (3)
= (75 + 90 + 105)(0.3) + 120(0.1) = 93   (4)


Quiz 2.7

(1) Using Definition 2.14, the expected number of applications is

E[A] = Σ_{a=1}^{4} a P_A(a) = 1(0.4) + 2(0.3) + 3(0.2) + 4(0.1) = 2   (1)

(2) The number of memory chips is M = g(A) where

g(A) = 4 for A = 1, 2; 6 for A = 3; 8 for A = 4.   (2)

(3) By Theorem 2.10, the expected number of memory chips is

E[M] = Σ_{a=1}^{4} g(a) P_A(a) = 4(0.4) + 4(0.3) + 6(0.2) + 8(0.1) = 4.8   (3)

Since E[A] = 2, g(E[A]) = g(2) = 4. However, E[M] = 4.8 ≠ g(E[A]). The two quantities are different because g(A) is not of the form αA + β.

Quiz 2.8

The PMF P_N(n) allows us to calculate each of the desired quantities.

(1) The expected value of N is

E[N] = Σ_{n=0}^{2} n P_N(n) = 0(0.1) + 1(0.4) + 2(0.5) = 1.4   (1)

(2) The second moment of N is

E[N^2] = Σ_{n=0}^{2} n^2 P_N(n) = 0^2(0.1) + 1^2(0.4) + 2^2(0.5) = 2.4   (2)

(3) The variance of N is

Var[N] = E[N^2] − (E[N])^2 = 2.4 − (1.4)^2 = 0.44   (3)

(4) The standard deviation is σ_N = √Var[N] = √0.44 = 0.663.


Quiz 2.9

(1) From the problem statement, we learn that the conditional PMF of N given the event I is

P_{N|I}(n) = 0.02 for n = 1, 2, . . . , 50, and 0 otherwise.   (1)

(2) Also from the problem statement, the conditional PMF of N given the event T is

P_{N|T}(n) = 0.2 for n = 1, 2, 3, 4, 5, and 0 otherwise.   (2)

(3) The problem statement tells us that P[T] = 1 − P[I] = 3/4. From Theorem 1.10 (the law of total probability), we find the PMF of N is

P_N(n) = P_{N|T}(n)P[T] + P_{N|I}(n)P[I]   (3)
= 0.2(0.75) + 0.02(0.25) for n = 1, . . . , 5; 0(0.75) + 0.02(0.25) for n = 6, 7, . . . , 50; 0 otherwise   (4)
= 0.155 for n = 1, . . . , 5; 0.005 for n = 6, 7, . . . , 50; 0 otherwise   (5)

(4) First we find

P[N ≤ 10] = Σ_{n=1}^{10} P_N(n) = (0.155)(5) + (0.005)(5) = 0.80   (6)

By Theorem 2.17, the conditional PMF of N given N ≤ 10 is

P_{N|N≤10}(n) = P_N(n)/P[N ≤ 10] for n ≤ 10, and 0 otherwise   (7)
= 0.155/0.8 for n = 1, . . . , 5; 0.005/0.8 for n = 6, . . . , 10; 0 otherwise   (8)
= 0.19375 for n = 1, . . . , 5; 0.00625 for n = 6, . . . , 10; 0 otherwise   (9)

(5) Once we have the conditional PMF, calculating conditional expectations is easy.

E[N|N ≤ 10] = Σ_n n P_{N|N≤10}(n)   (10)
= Σ_{n=1}^{5} n(0.19375) + Σ_{n=6}^{10} n(0.00625)   (11)
= 3.15625   (12)


[Figure 1: Two examples of the output of samplemean(k): (a) samplemean(100) and (b) samplemean(1000).]

(6) To find the conditional variance, we first find the conditional second moment

E[N^2|N ≤ 10] = Σ_n n^2 P_{N|N≤10}(n)   (13)
= Σ_{n=1}^{5} n^2(0.19375) + Σ_{n=6}^{10} n^2(0.00625)   (14)
= 55(0.19375) + 330(0.00625) = 12.71875   (15)

The conditional variance is

Var[N|N ≤ 10] = E[N^2|N ≤ 10] − (E[N|N ≤ 10])^2   (16)
= 12.71875 − (3.15625)^2 = 2.75684   (17)

Quiz 2.10

The function samplemean(k) generates and plots five m_n sequences for n = 1, 2, . . . , k. The ith column M(:,i) of M holds a sequence m_1, m_2, . . . , m_k.

function M=samplemean(k);
K=(1:k)';
M=zeros(k,5);
for i=1:5,
   X=duniformrv(0,10,k);
   M(:,i)=cumsum(X)./K;
end;
plot(K,M);

Examples of the function calls (a) samplemean(100) and (b) samplemean(1000) are shown in Figure 1. Each call to samplemean(k) produces a random output. What is observed in these figures is that for small n, m_n is fairly random, but as n gets large, m_n gets close to E[X] = 5. Although each sequence m_1, m_2, . . . that we generate is random, the sequences always converge to E[X]. This random convergence is analyzed in Chapter 7.


Quiz Solutions – Chapter 3

Quiz 3.1

The CDF of Y is

F_Y(y) = 0 for y < 0, y/4 for 0 ≤ y ≤ 4, and 1 for y > 4.   (1)

[Sketch of F_Y(y) omitted.]

From the CDF F_Y(y), we can calculate the probabilities:

(1) P[Y ≤ −1] = F_Y(−1) = 0

(2) P[Y ≤ 1] = F_Y(1) = 1/4

(3) P[2 < Y ≤ 3] = F_Y(3) − F_Y(2) = 3/4 − 2/4 = 1/4

(4) P[Y > 1.5] = 1 − P[Y ≤ 1.5] = 1 − F_Y(1.5) = 1 − (1.5)/4 = 5/8

(1.5) = 1 −(1.5)/4 = 5/8

Quiz 3.2

(1) First we will ﬁnd the constant c and then we will sketch the PDF. To ﬁnd c, we use

the fact that

_

∞

−∞

f

X

(x) dx = 1. We will evaluate this integral using integration by

parts:

_

∞

−∞

f

X

(x) dx =

_

∞

0

cxe

−x/2

dx (1)

= −2cxe

−x/2

¸

¸

¸

∞

0

. ,, .

=0

+

_

∞

0

2ce

−x/2

dx (2)

= −4ce

−x/2

¸

¸

¸

∞

0

= 4c (3)

Thus c = 1/4 and X has the Erlang (n = 2, λ = 1/2) PDF

0 5 10 15

0

0.1

0.2

x

f

X

(

x

)

f

X

(x) =

_

(x/4)e

−x/2

x ≥ 0

0 otherwise

(4)

15

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195

(2) To ﬁnd the CDF F

X

(x), we ﬁrst note X is a nonnegative random variable so that

F

X

(x) = 0 for all x < 0. For x ≥ 0,

F

X

(x) =

_

x

0

f

X

(y) dy =

_

x

0

y

4

e

−y/2

dy (5)

= −

y

2

e

−y/2

¸

¸

¸

x

0

−

_

x

0

−

1

2

e

−y/2

dy (6)

= 1 −

x

2

e

−x/2

−e

−x/2

(7)

The complete expression for the CDF is

0 5 10 15

0

0.5

1

x

F

X

(

x

)

F

X

(x) =

_

1 −

_

x

2

+1

_

e

−x/2

x ≥ 0

0 otherwise

(8)

(3) From the CDF F

X

(x),

P [0 ≤ X ≤ 4] = F

X

(4) − F

X

(0) = 1 −3e

−2

. (9)

(4) Similarly,

P [−2 ≤ X ≤ 2] = F

X

(2) − F

X

(−2) = 1 −3e

−1

. (10)

Quiz 3.3

The PDF of Y is

f_Y(y) = 3y^2/2 for −1 ≤ y ≤ 1, and 0 otherwise.   (1)

[Sketch of f_Y(y) omitted.]

(1) The expected value of Y is

E[Y] = ∫_{−∞}^{∞} y f_Y(y) dy = ∫_{−1}^{1} (3/2)y^3 dy = (3/8)y^4 |_{−1}^{1} = 0.   (2)

Note that the above calculation wasn't really necessary because E[Y] = 0 whenever the PDF f_Y(y) is an even function (i.e., f_Y(y) = f_Y(−y)).

(2) The second moment of Y is

E[Y^2] = ∫_{−∞}^{∞} y^2 f_Y(y) dy = ∫_{−1}^{1} (3/2)y^4 dy = (3/10)y^5 |_{−1}^{1} = 3/5.   (3)

(3) The variance of Y is

Var[Y] = E[Y^2] − (E[Y])^2 = 3/5.   (4)

(4) The standard deviation of Y is σ_Y = √Var[Y] = √(3/5).

Quiz 3.4

(1) When X is an exponential (λ) random variable, E[X] = 1/λ and Var[X] = 1/λ². Since E[X] = 3 and Var[X] = 9, we must have λ = 1/3. The PDF of X is

f_X(x) = { (1/3)e^{−x/3}, x ≥ 0;  0, otherwise }   (1)

(2) We know X is a uniform (a, b) random variable. To find a and b, we apply Theorem 3.6 to write

E[X] = (a + b)/2 = 3,   Var[X] = (b − a)²/12 = 9.   (2)

This implies

a + b = 6,   b − a = ±6√3.   (3)

The only valid solution with a < b is

a = 3 − 3√3,   b = 3 + 3√3.   (4)

The complete expression for the PDF of X is

f_X(x) = { 1/(6√3), 3 − 3√3 ≤ x < 3 + 3√3;  0, otherwise }   (5)

Quiz 3.5

Each of the requested probabilities can be calculated using the Φ(z) function and Table 3.1 or Q(z) and Table 3.2. We start with the sketches.

(1) The PDFs of X and Y are shown below. The fact that Y has twice the standard deviation of X is reflected in the greater spread of f_Y(y). However, it is important to remember that as the standard deviation increases, the peak value of the Gaussian PDF goes down. (Sketch of f_X(x) and f_Y(y) omitted.)

(2) Since X is Gaussian (0, 1),

P[−1 < X ≤ 1] = F_X(1) − F_X(−1)   (1)
= Φ(1) − Φ(−1) = 2Φ(1) − 1 = 0.6826.   (2)

(3) Since Y is Gaussian (0, 2),

P[−1 < Y ≤ 1] = F_Y(1) − F_Y(−1)   (3)
= Φ(1/σ_Y) − Φ(−1/σ_Y) = 2Φ(1/2) − 1 = 0.383.   (4)

(4) Again, since X is Gaussian (0, 1), P[X > 3.5] = Q(3.5) = 2.33 × 10⁻⁴.

(5) Since Y is Gaussian (0, 2), P[Y > 3.5] = Q(3.5/2) = Q(1.75) = 1 − Φ(1.75) = 0.0401.
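These values are also easy to reproduce numerically. A minimal sketch, assuming the phi.m function from the matcode archive is on the path (phi(z) computes Φ(z)):

>> [2*phi(1)-1, 2*phi(1/2)-1, 1-phi(3.5), 1-phi(1.75)]

The four results should match 0.6826, 0.383, 2.33e-4 and 0.0401 up to rounding.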

Quiz 3.6

The CDF of X is

F_X(x) = { 0, x < −1;  (x + 1)/4, −1 ≤ x < 1;  1, x ≥ 1 }   (1)

(Sketch of F_X(x) omitted.)

The following probabilities can be read directly from the CDF:

(1) P[X ≤ 1] = F_X(1) = 1.

(2) P[X < 1] = F_X(1⁻) = 1/2.

(3) P[X = 1] = F_X(1⁺) − F_X(1⁻) = 1 − 1/2 = 1/2.

(4) We find the PDF f_X(x) by taking the derivative of F_X(x). The resulting PDF is

f_X(x) = { 1/4, −1 ≤ x < 1;  (1/2)δ(x − 1), x = 1;  0, otherwise }   (2)

(Sketch of f_X(x), including the impulse of weight 0.5 at x = 1, omitted.)

Quiz 3.7


(1) Since X is always nonnegative, F_X(x) = 0 for x < 0. Also, F_X(x) = 1 for x ≥ 2 since it is always true that X ≤ 2. Lastly, for 0 ≤ x ≤ 2,

F_X(x) = ∫_{−∞}^{x} f_X(y) dy = ∫_0^x (1 − y/2) dy = x − x²/4.   (1)

The complete CDF of X is

F_X(x) = { 0, x < 0;  x − x²/4, 0 ≤ x ≤ 2;  1, x > 2 }   (2)

(Sketch of F_X(x) omitted.)

(2) The probability that Y = 1 is

P[Y = 1] = P[X ≥ 1] = 1 − F_X(1) = 1 − 3/4 = 1/4.   (3)

(3) Since X is nonnegative, Y is also nonnegative. Thus F_Y(y) = 0 for y < 0. Also, because Y ≤ 1, F_Y(y) = 1 for all y ≥ 1. Finally, for 0 < y < 1,

F_Y(y) = P[Y ≤ y] = P[X ≤ y] = F_X(y).   (4)

Using the CDF F_X(x), the complete expression for the CDF of Y is

F_Y(y) = { 0, y < 0;  y − y²/4, 0 ≤ y < 1;  1, y ≥ 1 }   (5)

(Sketch of F_Y(y) omitted.)

As expected, we see that the jump in F_Y(y) at y = 1 is exactly equal to P[Y = 1].

(4) By taking the derivative of F_Y(y), we obtain the PDF f_Y(y). Note that when y < 0 or y > 1, the PDF is zero.

f_Y(y) = { 1 − y/2 + (1/4)δ(y − 1), 0 ≤ y ≤ 1;  0, otherwise }   (6)

(Sketch of f_Y(y), including the impulse of weight 0.25 at y = 1, omitted.)

Quiz 3.8

(1) P[Y ≤ 6] = ∫_{−∞}^{6} f_Y(y) dy = ∫_0^6 (1/10) dy = 0.6.

(2) From Definition 3.15, the conditional PDF of Y given Y ≤ 6 is

f_{Y|Y≤6}(y) = { f_Y(y)/P[Y ≤ 6], y ≤ 6;  0, otherwise } = { 1/6, 0 ≤ y ≤ 6;  0, otherwise }   (1)

(3) The probability Y > 8 is

P[Y > 8] = ∫_8^{10} (1/10) dy = 0.2.   (2)

(4) From Definition 3.15, the conditional PDF of Y given Y > 8 is

f_{Y|Y>8}(y) = { f_Y(y)/P[Y > 8], y > 8;  0, otherwise } = { 1/2, 8 < y ≤ 10;  0, otherwise }   (3)

(5) From the conditional PDF f_{Y|Y≤6}(y), we can calculate the conditional expectation

E[Y|Y ≤ 6] = ∫_{−∞}^{∞} y f_{Y|Y≤6}(y) dy = ∫_0^6 (y/6) dy = 3.   (4)

(6) From the conditional PDF f_{Y|Y>8}(y), we can calculate the conditional expectation

E[Y|Y > 8] = ∫_{−∞}^{∞} y f_{Y|Y>8}(y) dy = ∫_8^{10} (y/2) dy = 9.   (5)

Quiz 3.9

A natural way to produce random variables with PDF f_{T|T>2}(t) is to generate samples of T with PDF f_T(t) and then to discard those samples which fail to satisfy the condition T > 2. Here is a MATLAB function that uses this method:

function t=t2rv(m)
i=0;lambda=1/3;
t=zeros(m,1);
while (i<m),
  x=exponentialrv(lambda,1);
  if (x>2)
    t(i+1)=x;
    i=i+1;
  end
end

A second method exploits the fact that if T is an exponential (λ) random variable, then T′ = T + 2 has PDF f_{T′}(t) = f_{T|T>2}(t). In this case the command

t=2.0+exponentialrv(1/3,m)

generates the vector t.
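As a usage sketch, the two methods can be compared empirically:

>> t1=t2rv(10000); t2=2.0+exponentialrv(1/3,10000);
>> [mean(t1) mean(t2)]

Both sample means should be near E[T|T > 2] = 2 + 3 = 5, by the memoryless property of the exponential random variable. The second method is far faster because it never discards samples.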


Quiz Solutions – Chapter 4

Quiz 4.1

Each value of the joint CDF can be found by considering the corresponding probability.

(1) F_{X,Y}(−∞, 2) = P[X ≤ −∞, Y ≤ 2] ≤ P[X ≤ −∞] = 0 since X cannot take on the value −∞.

(2) F_{X,Y}(∞, ∞) = P[X ≤ ∞, Y ≤ ∞] = 1. This result is given in Theorem 4.1.

(3) F_{X,Y}(∞, y) = P[X ≤ ∞, Y ≤ y] = P[Y ≤ y] = F_Y(y).

(4) F_{X,Y}(∞, −∞) = P[X ≤ ∞, Y ≤ −∞] = 0 since Y cannot take on the value −∞.

Quiz 4.2

From the joint PMF of Q and G given in the table, we can calculate the requested

probabilities by summing the PMF over those values of Q and G that correspond to the

event.

(1) The probability that Q = 0 is

P[Q = 0] = P_{Q,G}(0, 0) + P_{Q,G}(0, 1) + P_{Q,G}(0, 2) + P_{Q,G}(0, 3)   (1)
= 0.06 + 0.18 + 0.24 + 0.12 = 0.6   (2)

(2) The probability that Q = G is

P[Q = G] = P_{Q,G}(0, 0) + P_{Q,G}(1, 1) = 0.06 + 0.12 = 0.18   (3)

(3) The probability that G > 1 is

P[G > 1] = Σ_{g=2}^{3} Σ_{q=0}^{1} P_{Q,G}(q, g)   (4)
= 0.24 + 0.16 + 0.12 + 0.08 = 0.6   (5)

(4) The probability that G > Q is

P[G > Q] = Σ_{q=0}^{1} Σ_{g=q+1}^{3} P_{Q,G}(q, g)   (6)
= 0.18 + 0.24 + 0.12 + 0.16 + 0.08 = 0.78   (7)


Quiz 4.3

By Theorem 4.3, the marginal PMF of H is

P_H(h) = Σ_{b=0,2,4} P_{H,B}(h, b)   (1)

For each value of h, this corresponds to calculating the row sum across the table of the joint PMF. Similarly, the marginal PMF of B is

P_B(b) = Σ_{h=−1}^{1} P_{H,B}(h, b)   (2)

For each value of b, this corresponds to the column sum down the table of the joint PMF. The easiest way to calculate these marginal PMFs is to simply sum each row and column:

P_{H,B}(h,b)   b = 0   b = 2   b = 4   P_H(h)
h = −1         0       0.4     0.2     0.6
h = 0          0.1     0       0.1     0.2
h = 1          0.1     0.1     0       0.2
P_B(b)         0.2     0.5     0.3            (3)

Quiz 4.4

To find the constant c, we apply ∫_{−∞}^{∞} ∫_{−∞}^{∞} f_{X,Y}(x, y) dx dy = 1. Specifically,

∫_{−∞}^{∞} ∫_{−∞}^{∞} f_{X,Y}(x, y) dx dy = ∫_0^2 ∫_0^1 cxy dx dy   (1)
= c ∫_0^2 y [x²/2 |_0^1] dy   (2)
= (c/2) ∫_0^2 y dy = (c/4) y² |_0^2 = c   (3)

Thus c = 1. To calculate P[A], we write

P[A] = ∫∫_A f_{X,Y}(x, y) dx dy   (4)

To integrate over A, we convert to polar coordinates using the substitutions x = r cos θ, y = r sin θ and dx dy = r dr dθ. (Sketch of the quarter-disk region A of radius 1 omitted.) This yields

P[A] = ∫_0^{π/2} ∫_0^1 r² sin θ cos θ  r dr dθ   (5)
= (∫_0^1 r³ dr)(∫_0^{π/2} sin θ cos θ dθ)   (6)
= (r⁴/4 |_0^1)(sin²θ/2 |_0^{π/2}) = 1/8   (7)
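A quick Monte Carlo sketch in plain MATLAB confirms this answer; here A is taken to be the quarter disk x² + y² ≤ 1 in the positive quadrant, as in the sketch:

>> m=1e6; x=rand(m,1); y=2*rand(m,1); fxy=x.*y;
>> 2*mean(fxy.*((x.^2+y.^2)<=1))

The factor 2 is the area of the rectangle [0,1] × [0,2] over which (x,y) is generated uniformly; the result should be near 1/8 = 0.125.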


Quiz 4.5

By Theorem 4.8, the marginal PDF of X is

f_X(x) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dy   (1)

For x < 0 or x > 1, f_X(x) = 0. For 0 ≤ x ≤ 1,

f_X(x) = (6/5) ∫_0^1 (x + y²) dy = (6/5)[xy + y³/3] |_{y=0}^{y=1} = (6/5)(x + 1/3) = (6x + 2)/5   (2)

The complete expression for the PDF of X is

f_X(x) = { (6x + 2)/5, 0 ≤ x ≤ 1;  0, otherwise }   (3)

By the same method we obtain the marginal PDF for Y. For 0 ≤ y ≤ 1,

f_Y(y) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dx   (4)
= (6/5) ∫_0^1 (x + y²) dx = (6/5)[x²/2 + xy²] |_{x=0}^{x=1} = (6/5)(1/2 + y²) = (3 + 6y²)/5   (5)

Since f_Y(y) = 0 for y < 0 or y > 1, the complete expression for the PDF of Y is

f_Y(y) = { (3 + 6y²)/5, 0 ≤ y ≤ 1;  0, otherwise }   (6)

Quiz 4.6

(A) The time required for the transfer is T = L/B. For each pair of values of L and B, we can calculate the time T needed for the transfer. We can write these down on the table for the joint PMF of L and B as follows:

P_{L,B}(l,b)     b = 14,400     b = 21,600     b = 28,800
l = 518,400      0.20 (T=36)    0.10 (T=24)    0.05 (T=18)
l = 2,592,000    0.05 (T=180)   0.10 (T=120)   0.20 (T=90)
l = 7,776,000    0.00 (T=540)   0.10 (T=360)   0.20 (T=270)

From the table, writing down the PMF of T is straightforward:

P_T(t) = { 0.05, t = 18;  0.1, t = 24;  0.2, t = 36, 90;  0.1, t = 120;  0.05, t = 180;  0.2, t = 270;  0.1, t = 360;  0, otherwise }   (1)
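This kind of "function of two random variables" PMF is also easy to compute with the finitepmf function from the matcode archive, in the style of Example 6.19. A sketch, assuming finitepmf is on the path:

>> SL=[518400; 2592000; 7776000]*[1 1 1];
>> SB=[1;1;1]*[14400 21600 28800];
>> PLB=[0.20 0.10 0.05; 0.05 0.10 0.20; 0.00 0.10 0.20];
>> ST=SL./SB; st=unique(ST); pt=finitepmf(ST,PLB,st); [st pt]

The output lists each sample value t of T next to P_T(t), matching (1); the impossible value t = 540 appears with probability zero.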


(B) First, we observe that since 0 ≤ X ≤ 1 and 0 ≤ Y ≤ 1, W = XY satisfies 0 ≤ W ≤ 1. Thus F_W(0) = 0 and F_W(1) = 1. For 0 < w < 1, we calculate the CDF F_W(w) = P[W ≤ w]. Integrating directly over the region W ≤ w is fairly complex; the calculus is simpler if we integrate over the region XY > w. (Sketch of the unit square and the curve XY = w omitted.) Specifically,

F_W(w) = 1 − P[XY > w]   (2)
= 1 − ∫_w^1 ∫_{w/x}^1 dy dx   (3)
= 1 − ∫_w^1 (1 − w/x) dx   (4)
= 1 − [x − w ln x] |_{x=w}^{x=1}   (5)
= 1 − (1 − w + w ln w) = w − w ln w   (6)

The complete expression for the CDF is

F_W(w) = { 0, w < 0;  w − w ln w, 0 ≤ w ≤ 1;  1, w > 1 }   (7)

By taking the derivative of the CDF, we find the PDF is

f_W(w) = dF_W(w)/dw = { 0, w < 0;  −ln w, 0 ≤ w ≤ 1;  0, w > 1 }   (8)

Quiz 4.7

(A) It is helpful to first make a table that includes the marginal PMFs.

P_{L,T}(l,t)   t = 40   t = 60   P_L(l)
l = 1          0.15     0.1      0.25
l = 2          0.3      0.2      0.5
l = 3          0.15     0.1      0.25
P_T(t)         0.6      0.4

(1) The expected value of L is

E[L] = 1(0.25) + 2(0.5) + 3(0.25) = 2.   (1)

Since the second moment of L is

E[L²] = 1²(0.25) + 2²(0.5) + 3²(0.25) = 4.5,   (2)

the variance of L is

Var[L] = E[L²] − (E[L])² = 0.5.   (3)


(2) The expected value of T is

E[T] = 40(0.6) + 60(0.4) = 48.   (4)

The second moment of T is

E[T²] = 40²(0.6) + 60²(0.4) = 2400.   (5)

Thus

Var[T] = E[T²] − (E[T])² = 2400 − 48² = 96.   (6)

(3) The correlation is

E[LT] = Σ_{t=40,60} Σ_{l=1}^{3} l t P_{L,T}(l, t)   (7)
= 1(40)(0.15) + 2(40)(0.3) + 3(40)(0.15)   (8)
+ 1(60)(0.1) + 2(60)(0.2) + 3(60)(0.1)   (9)
= 96   (10)

(4) From Theorem 4.16(a), the covariance of L and T is

Cov[L, T] = E[LT] − E[L]E[T] = 96 − 2(48) = 0   (11)

(5) Since Cov[L, T] = 0, the correlation coefficient is ρ_{L,T} = 0.

(B) As in the discrete case, the calculations become easier if we first calculate the marginal PDFs f_X(x) and f_Y(y). For 0 ≤ x ≤ 1,

f_X(x) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dy = ∫_0^2 xy dy = (1/2)xy² |_{y=0}^{y=2} = 2x   (12)

Similarly, for 0 ≤ y ≤ 2,

f_Y(y) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dx = ∫_0^1 xy dx = (1/2)x²y |_{x=0}^{x=1} = y/2   (13)

The complete expressions for the marginal PDFs are

f_X(x) = { 2x, 0 ≤ x ≤ 1;  0, otherwise }   f_Y(y) = { y/2, 0 ≤ y ≤ 2;  0, otherwise }   (14)

From the marginal PDFs, it is straightforward to calculate the various expectations.


(1) The first and second moments of X are

E[X] = ∫_{−∞}^{∞} x f_X(x) dx = ∫_0^1 2x² dx = 2/3   (15)
E[X²] = ∫_{−∞}^{∞} x² f_X(x) dx = ∫_0^1 2x³ dx = 1/2   (16)

The variance of X is Var[X] = E[X²] − (E[X])² = 1/18.

(2) The first and second moments of Y are

E[Y] = ∫_{−∞}^{∞} y f_Y(y) dy = ∫_0^2 (1/2)y² dy = 4/3   (18)
E[Y²] = ∫_{−∞}^{∞} y² f_Y(y) dy = ∫_0^2 (1/2)y³ dy = 2   (19)

The variance of Y is Var[Y] = E[Y²] − (E[Y])² = 2 − 16/9 = 2/9.

(3) The correlation of X and Y is

E[XY] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy f_{X,Y}(x, y) dx dy   (20)
= ∫_0^1 ∫_0^2 x²y² dy dx = (x³/3 |_0^1)(y³/3 |_0^2) = 8/9   (21)

(4) The covariance of X and Y is

Cov[X, Y] = E[XY] − E[X]E[Y] = 8/9 − (2/3)(4/3) = 0.   (22)

(5) Since Cov[X, Y] = 0, the correlation coefficient is ρ_{X,Y} = 0.

Quiz 4.8

(A) Since the event V > 80 occurs only for the pairs (L, T) = (2, 60), (L, T) = (3, 40) and (L, T) = (3, 60),

P[A] = P[V > 80] = P_{L,T}(2, 60) + P_{L,T}(3, 40) + P_{L,T}(3, 60) = 0.45   (1)

By Definition 4.9,

P_{L,T|A}(l, t) = { P_{L,T}(l, t)/P[A], lt > 80;  0, otherwise }   (2)

We can represent this conditional PMF in the following table:

P_{L,T|A}(l,t)   t = 40   t = 60
l = 1            0        0
l = 2            0        4/9
l = 3            1/3      2/9

The conditional expectation of V can be found from the conditional PMF.

E[V|A] = Σ_l Σ_t l t P_{L,T|A}(l, t)   (3)
= (2 · 60)(4/9) + (3 · 40)(1/3) + (3 · 60)(2/9) = 133 1/3   (4)

For the conditional variance Var[V|A], we first find the conditional second moment

E[V²|A] = Σ_l Σ_t (lt)² P_{L,T|A}(l, t)   (5)
= (2 · 60)²(4/9) + (3 · 40)²(1/3) + (3 · 60)²(2/9) = 18,400   (6)

It follows that

Var[V|A] = E[V²|A] − (E[V|A])² = 622 2/9   (7)

(B) For continuous random variables X and Y, we first calculate the probability of the conditioning event.

P[B] = ∫∫_B f_{X,Y}(x, y) dx dy = ∫_{40}^{60} ∫_{80/y}^{3} (xy/4000) dx dy   (8)
= ∫_{40}^{60} (y/4000)[x²/2 |_{80/y}^{3}] dy   (9)
= ∫_{40}^{60} (y/4000)[9/2 − 3200/y²] dy   (10)
= 9/8 − (4/5) ln(3/2) ≈ 0.801   (11)

The conditional PDF of X and Y is

f_{X,Y|B}(x, y) = { f_{X,Y}(x, y)/P[B], (x, y) ∈ B;  0, otherwise }   (12)
= { Kxy, 40 ≤ y ≤ 60, 80/y ≤ x ≤ 3;  0, otherwise }   (13)

where K = (4000P[B])⁻¹. The conditional expectation of W given event B is

E[W|B] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy f_{X,Y|B}(x, y) dx dy   (14)
= ∫_{40}^{60} ∫_{80/y}^{3} Kx²y² dx dy   (15)
= (K/3) ∫_{40}^{60} y² x³ |_{x=80/y}^{x=3} dy   (16)
= (K/3) ∫_{40}^{60} (27y² − 80³/y) dy   (17)
= (K/3)[9y³ − 80³ ln y] |_{40}^{60} ≈ 120.78   (18)

The conditional second moment of W given B is

E[W²|B] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (xy)² f_{X,Y|B}(x, y) dx dy   (19)
= ∫_{40}^{60} ∫_{80/y}^{3} Kx³y³ dx dy   (20)
= (K/4) ∫_{40}^{60} y³ x⁴ |_{x=80/y}^{x=3} dy   (21)
= (K/4) ∫_{40}^{60} (81y³ − 80⁴/y) dy   (22)
= (K/4)[(81/4)y⁴ − 80⁴ ln y] |_{40}^{60} ≈ 16,116.10   (23)

It follows that the conditional variance of W given B is

Var[W|B] = E[W²|B] − (E[W|B])² ≈ 1528.30   (24)

Quiz 4.9

(A) (1) The joint PMF of A and B can be found from the marginal and conditional PMFs via P_{A,B}(a, b) = P_{B|A}(b|a)P_A(a). Incorporating the information from the given conditional PMFs can be confusing, however. Consequently, we can note that A has range S_A = {0, 2} and B has range S_B = {0, 1}. A table of the joint PMF will include all four possible combinations of A and B. The general form of the table is

P_{A,B}(a,b)   b = 0                b = 1
a = 0          P_{B|A}(0|0)P_A(0)   P_{B|A}(1|0)P_A(0)
a = 2          P_{B|A}(0|2)P_A(2)   P_{B|A}(1|2)P_A(2)

Substituting values from P_{B|A}(b|a) and P_A(a), we have

P_{A,B}(a,b)   b = 0        b = 1
a = 0          (0.8)(0.4)   (0.2)(0.4)
a = 2          (0.5)(0.6)   (0.5)(0.6)

or

P_{A,B}(a,b)   b = 0   b = 1
a = 0          0.32    0.08
a = 2          0.3     0.3

(2) Given the conditional PMF P_{B|A}(b|2), it is easy to calculate the conditional expectation

E[B|A = 2] = Σ_{b=0}^{1} b P_{B|A}(b|2) = (0)(0.5) + (1)(0.5) = 0.5   (1)

(3) From the joint PMF P_{A,B}(a, b), we can calculate the conditional PMF

P_{A|B}(a|0) = P_{A,B}(a, 0)/P_B(0) = { 0.32/0.62, a = 0;  0.3/0.62, a = 2;  0, otherwise }   (2)
= { 16/31, a = 0;  15/31, a = 2;  0, otherwise }   (3)

(4) We can calculate the conditional variance Var[A|B = 0] using the conditional PMF P_{A|B}(a|0). First we calculate the conditional expected value

E[A|B = 0] = Σ_a a P_{A|B}(a|0) = 0(16/31) + 2(15/31) = 30/31   (4)

The conditional second moment is

E[A²|B = 0] = Σ_a a² P_{A|B}(a|0) = 0²(16/31) + 2²(15/31) = 60/31   (5)

The conditional variance is then

Var[A|B = 0] = E[A²|B = 0] − (E[A|B = 0])² = 960/961   (6)

(B) (1) The joint PDF of X and Y is

f_{X,Y}(x, y) = f_{Y|X}(y|x) f_X(x) = { 6y, 0 ≤ y ≤ x, 0 ≤ x ≤ 1;  0, otherwise }   (7)

(2) From the given conditional PDF f_{Y|X}(y|x),

f_{Y|X}(y|1/2) = { 8y, 0 ≤ y ≤ 1/2;  0, otherwise }   (8)

(3) The conditional PDF of X given Y = 1/2 is f_{X|Y}(x|1/2) = f_{X,Y}(x, 1/2)/f_Y(1/2). To find f_Y(1/2), we integrate the joint PDF.

f_Y(1/2) = ∫_{−∞}^{∞} f_{X,Y}(x, 1/2) dx = ∫_{1/2}^{1} 6(1/2) dx = 3/2   (9)

Thus, for 1/2 ≤ x ≤ 1,

f_{X|Y}(x|1/2) = f_{X,Y}(x, 1/2)/f_Y(1/2) = 6(1/2)/(3/2) = 2   (10)

(4) From the previous part, we see that given Y = 1/2, the conditional PDF of X is uniform (1/2, 1). Thus, by the definition of the uniform (a, b) PDF,

Var[X|Y = 1/2] = (1 − 1/2)²/12 = 1/48   (11)

Quiz 4.10

(A) (1) For random variables X and Y from Example 4.1, we observe that P_Y(1) = 0.09 and P_X(0) = 0.01. However,

P_{X,Y}(0, 1) = 0 ≠ P_X(0)P_Y(1)   (1)

Since we have found a pair x, y such that P_{X,Y}(x, y) ≠ P_X(x)P_Y(y), we can conclude that X and Y are dependent. Note that whenever P_{X,Y}(x, y) = 0, independence requires that either P_X(x) = 0 or P_Y(y) = 0.

(2) For random variables Q and G from Quiz 4.2, it is not obvious whether they are independent. Unlike X and Y in part (a), there are no obvious pairs q, g that fail the independence requirement. In this case, we calculate the marginal PMFs from the table of the joint PMF P_{Q,G}(q, g) in Quiz 4.2.

P_{Q,G}(q,g)   g = 0   g = 1   g = 2   g = 3   P_Q(q)
q = 0          0.06    0.18    0.24    0.12    0.60
q = 1          0.04    0.12    0.16    0.08    0.40
P_G(g)         0.10    0.30    0.40    0.20

Careful study of the table will verify that P_{Q,G}(q, g) = P_Q(q)P_G(g) for every pair q, g. Hence Q and G are independent.

(B) (1) Since X_1 and X_2 are independent,

f_{X_1,X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2)   (2)
= { (1 − x_1/2)(1 − x_2/2), 0 ≤ x_1 ≤ 2, 0 ≤ x_2 ≤ 2;  0, otherwise }   (3)

(2) Let F_X(x) denote the CDF of both X_1 and X_2. The CDF of Z = max(X_1, X_2) is found by observing that Z ≤ z iff X_1 ≤ z and X_2 ≤ z. That is,

P[Z ≤ z] = P[X_1 ≤ z, X_2 ≤ z]   (4)
= P[X_1 ≤ z] P[X_2 ≤ z] = [F_X(z)]²   (5)

To complete the problem, we need to find the CDF of each X_i. From the PDF f_X(x), the CDF is

F_X(x) = ∫_{−∞}^{x} f_X(y) dy = { 0, x < 0;  x − x²/4, 0 ≤ x ≤ 2;  1, x > 2 }   (6)

Thus for 0 ≤ z ≤ 2,

F_Z(z) = (z − z²/4)²   (7)

The complete expression for the CDF of Z is

F_Z(z) = { 0, z < 0;  (z − z²/4)², 0 ≤ z ≤ 2;  1, z > 2 }   (8)

Quiz 4.11

This problem just requires identifying the various terms in Definition 4.17 and Theorem 4.29. Specifically, from the problem statement, we know that ρ = 1/2,

µ_1 = µ_X = 0,   µ_2 = µ_Y = 0,   (1)

and that

σ_1 = σ_X = 1,   σ_2 = σ_Y = 1.   (2)

(1) Applying these facts to Definition 4.17, we have

f_{X,Y}(x, y) = (1/(√3 π)) e^{−2(x² − xy + y²)/3}.   (3)

(2) By Theorem 4.30, the conditional expected value and standard deviation of X given Y = y are

E[X|Y = y] = y/2,   σ̃_X = √(σ_1²(1 − ρ²)) = √(3/4).   (4)

When Y = y = 2, we see that E[X|Y = 2] = 1 and Var[X|Y = 2] = 3/4. The conditional PDF of X given Y = 2 is simply the Gaussian PDF

f_{X|Y}(x|2) = (1/√(3π/2)) e^{−2(x−1)²/3}.   (5)


Quiz 4.12

One straightforward method is to follow the approach of Example 4.28. Instead, we use an alternate approach. First we observe that X has the discrete uniform (1, 4) PMF. Also, given X = x, Y has a discrete uniform (1, x) PMF. That is,

P_X(x) = { 1/4, x = 1, 2, 3, 4;  0, otherwise },   P_{Y|X}(y|x) = { 1/x, y = 1, . . . , x;  0, otherwise }   (1)

Given X = x, and an independent uniform (0, 1) random variable U, we can generate a sample value of Y with a discrete uniform (1, x) PMF via Y = ⌈xU⌉. This observation prompts the following program:

function xy=dtrianglerv(m)
sx=[1;2;3;4];
px=0.25*ones(4,1);
x=finiterv(sx,px,m);
y=ceil(x.*rand(m,1));
xy=[x';y'];
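As a usage sketch,

>> xy=dtrianglerv(5)

returns a 2 × 5 matrix whose first row holds samples of X and whose second row holds the corresponding samples of Y. Averaging many columns of the products xy(1,:).*xy(2,:) would, for example, estimate E[XY].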


Quiz Solutions – Chapter 5

Quiz 5.1

We ﬁnd P[C] by integrating the joint PDF over the region of interest. Speciﬁcally,

P[C] = ∫_0^{1/2} dy_2 ∫_0^{y_2} dy_1 ∫_0^{1/2} dy_4 ∫_0^{y_4} 4 dy_3   (1)
= 4 [∫_0^{1/2} y_2 dy_2][∫_0^{1/2} y_4 dy_4] = 1/4.   (2)

Quiz 5.2

By definition of A, Y_1 = X_1, Y_2 = X_2 − X_1 and Y_3 = X_3 − X_2. Since 0 < X_1 < X_2 < X_3, each Y_i must be a strictly positive integer. Thus, for y_1, y_2, y_3 ∈ {1, 2, . . .},

P_Y(y) = P[Y_1 = y_1, Y_2 = y_2, Y_3 = y_3]   (1)
= P[X_1 = y_1, X_2 − X_1 = y_2, X_3 − X_2 = y_3]   (2)
= P[X_1 = y_1, X_2 = y_2 + y_1, X_3 = y_3 + y_2 + y_1]   (3)
= (1 − p)³ p^{y_1 + y_2 + y_3}   (4)

By defining the vector a = [1 1 1]′, the complete expression for the joint PMF of Y is

P_Y(y) = { (1 − p)³ p^{a′y}, y_1, y_2, y_3 ∈ {1, 2, . . .};  0, otherwise }   (5)

Quiz 5.3

First we note that each marginal PDF is nonzero only if the corresponding subset of the x_i obeys the ordering constraints 0 ≤ x_1 ≤ x_2 ≤ x_3 ≤ 1. Within these constraints, we have

f_{X_1,X_2}(x_1, x_2) = ∫_{−∞}^{∞} f_X(x) dx_3 = ∫_{x_2}^{1} 6 dx_3 = 6(1 − x_2),   (1)
f_{X_2,X_3}(x_2, x_3) = ∫_{−∞}^{∞} f_X(x) dx_1 = ∫_0^{x_2} 6 dx_1 = 6x_2,   (2)
f_{X_1,X_3}(x_1, x_3) = ∫_{−∞}^{∞} f_X(x) dx_2 = ∫_{x_1}^{x_3} 6 dx_2 = 6(x_3 − x_1).   (3)

In particular, we must keep in mind that f_{X_1,X_2}(x_1, x_2) = 0 unless 0 ≤ x_1 ≤ x_2 ≤ 1, f_{X_2,X_3}(x_2, x_3) = 0 unless 0 ≤ x_2 ≤ x_3 ≤ 1, and that f_{X_1,X_3}(x_1, x_3) = 0 unless 0 ≤ x_1 ≤ x_3 ≤ 1. The complete expressions are

f_{X_1,X_2}(x_1, x_2) = { 6(1 − x_2), 0 ≤ x_1 ≤ x_2 ≤ 1;  0, otherwise }   (4)
f_{X_2,X_3}(x_2, x_3) = { 6x_2, 0 ≤ x_2 ≤ x_3 ≤ 1;  0, otherwise }   (5)
f_{X_1,X_3}(x_1, x_3) = { 6(x_3 − x_1), 0 ≤ x_1 ≤ x_3 ≤ 1;  0, otherwise }   (6)

Now we can find the marginal PDFs. When 0 ≤ x_i ≤ 1 for each x_i,

f_{X_1}(x_1) = ∫_{−∞}^{∞} f_{X_1,X_2}(x_1, x_2) dx_2 = ∫_{x_1}^{1} 6(1 − x_2) dx_2 = 3(1 − x_1)²   (7)
f_{X_2}(x_2) = ∫_{−∞}^{∞} f_{X_2,X_3}(x_2, x_3) dx_3 = ∫_{x_2}^{1} 6x_2 dx_3 = 6x_2(1 − x_2)   (8)
f_{X_3}(x_3) = ∫_{−∞}^{∞} f_{X_2,X_3}(x_2, x_3) dx_2 = ∫_0^{x_3} 6x_2 dx_2 = 3x_3²   (9)

The complete expressions are

f_{X_1}(x_1) = { 3(1 − x_1)², 0 ≤ x_1 ≤ 1;  0, otherwise }   (10)
f_{X_2}(x_2) = { 6x_2(1 − x_2), 0 ≤ x_2 ≤ 1;  0, otherwise }   (11)
f_{X_3}(x_3) = { 3x_3², 0 ≤ x_3 ≤ 1;  0, otherwise }   (12)

Quiz 5.4

In the PDF f_Y(y), the components have dependencies as a result of the ordering constraints Y_1 ≤ Y_2 and Y_3 ≤ Y_4. We can separate these constraints by creating the vectors

V = [Y_1; Y_2],   W = [Y_3; Y_4].   (1)

The joint PDF of V and W is

f_{V,W}(v, w) = { 4, 0 ≤ v_1 ≤ v_2 ≤ 1, 0 ≤ w_1 ≤ w_2 ≤ 1;  0, otherwise }   (2)

We must verify that V and W are independent. For 0 ≤ v_1 ≤ v_2 ≤ 1,

f_V(v) = ∫∫ f_{V,W}(v, w) dw_1 dw_2   (3)
= ∫_0^1 [∫_{w_1}^{1} 4 dw_2] dw_1   (4)
= ∫_0^1 4(1 − w_1) dw_1 = 2   (5)

Similarly, for 0 ≤ w_1 ≤ w_2 ≤ 1,

f_W(w) = ∫∫ f_{V,W}(v, w) dv_1 dv_2   (6)
= ∫_0^1 [∫_{v_1}^{1} 4 dv_2] dv_1 = 2   (7)

It follows that V and W have PDFs

f_V(v) = { 2, 0 ≤ v_1 ≤ v_2 ≤ 1;  0, otherwise },   f_W(w) = { 2, 0 ≤ w_1 ≤ w_2 ≤ 1;  0, otherwise }   (8)

It is easy to verify that f_{V,W}(v, w) = f_V(v) f_W(w), confirming that V and W are independent vectors.

Quiz 5.5

(A) Referring to Theorem 1.19, each test is a subexperiment with three possible outcomes: L, A and R. In five trials, the vector X = [X_1 X_2 X_3]′ indicating the number of outcomes of each subexperiment has the multinomial PMF

P_X(x) = { (5 choose x_1, x_2, x_3)(0.3)^{x_1}(0.6)^{x_2}(0.1)^{x_3},  x_1 + x_2 + x_3 = 5; x_1, x_2, x_3 ∈ {0, 1, . . . , 5};  0, otherwise }   (1)

We can find the marginal PMF for each X_i from the joint PMF P_X(x); however it is simpler to just start from first principles and observe that X_1 is the number of occurrences of L in five independent tests. If we view each test as a trial with success probability P[L] = 0.3, we see that X_1 is a binomial (n, p) = (5, 0.3) random variable. Similarly, X_2 is a binomial (5, 0.6) random variable and X_3 is a binomial (5, 0.1) random variable. That is, for p_1 = 0.3, p_2 = 0.6 and p_3 = 0.1,

P_{X_i}(x) = { (5 choose x) p_i^x (1 − p_i)^{5−x}, x = 0, 1, . . . , 5;  0, otherwise }   (2)

From the marginal PMFs, we see that X_1, X_2 and X_3 are not independent. Hence, we must use Theorem 5.6 to find the PMF of W. In particular, since X_1 + X_2 + X_3 = 5 and since each X_i is non-negative, P_W(0) = P_W(1) = 0. Furthermore,

P_W(2) = P_X(1, 2, 2) + P_X(2, 1, 2) + P_X(2, 2, 1)   (3)
= 5![0.3(0.6)²(0.1)² + 0.3²(0.6)(0.1)² + 0.3²(0.6)²(0.1)]/(2! 2! 1!)   (4)
= 0.1458   (5)

In addition, for w = 3, w = 4, and w = 5, the event W = w occurs if and only if one of the mutually exclusive events X_1 = w, X_2 = w, or X_3 = w occurs. Thus,

P_W(3) = P_{X_1}(3) + P_{X_2}(3) + P_{X_3}(3) = 0.486   (6)
P_W(4) = P_{X_1}(4) + P_{X_2}(4) + P_{X_3}(4) = 0.288   (7)
P_W(5) = P_{X_1}(5) + P_{X_2}(5) + P_{X_3}(5) = 0.0802   (8)

(B) Since each Y_i = 2X_i + 4, we can apply Theorem 5.10 to write

f_Y(y) = (1/2³) f_X((y_1 − 4)/2, (y_2 − 4)/2, (y_3 − 4)/2)   (9)
= { (1/8)e^{−(y_3 − 4)/2}, 4 ≤ y_1 ≤ y_2 ≤ y_3;  0, otherwise }   (10)

Note that for other matrices A, the constraints on y resulting from the constraints 0 ≤ X_1 ≤ X_2 ≤ X_3 can be much more complicated.

Quiz 5.6

We start by finding the components E[X_i] = ∫_{−∞}^{∞} x f_{X_i}(x) dx of µ_X. To do so, we use the marginal PDFs f_{X_i}(x) found in Quiz 5.3:

E[X_1] = ∫_0^1 3x(1 − x)² dx = 1/4,   (1)
E[X_2] = ∫_0^1 6x²(1 − x) dx = 1/2,   (2)
E[X_3] = ∫_0^1 3x³ dx = 3/4.   (3)

To find the correlation matrix R_X, we need to find E[X_i X_j] for all i and j. We start with the second moments:

E[X_1²] = ∫_0^1 3x²(1 − x)² dx = 1/10.   (4)
E[X_2²] = ∫_0^1 6x³(1 − x) dx = 3/10.   (5)
E[X_3²] = ∫_0^1 3x⁴ dx = 3/5.   (6)

Using the marginal PDFs from Quiz 5.3, the cross terms are

E[X_1X_2] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x_1x_2 f_{X_1,X_2}(x_1, x_2) dx_1 dx_2   (7)
= ∫_0^1 [∫_{x_1}^{1} 6x_1x_2(1 − x_2) dx_2] dx_1   (8)
= ∫_0^1 [x_1 − 3x_1³ + 2x_1⁴] dx_1 = 3/20.   (9)
E[X_2X_3] = ∫_0^1 ∫_{x_2}^{1} 6x_2²x_3 dx_3 dx_2   (10)
= ∫_0^1 [3x_2² − 3x_2⁴] dx_2 = 2/5   (11)
E[X_1X_3] = ∫_0^1 ∫_{x_1}^{1} 6x_1x_3(x_3 − x_1) dx_3 dx_1   (12)
= ∫_0^1 [(2x_1x_3³ − 3x_1²x_3²) |_{x_3=x_1}^{x_3=1}] dx_1   (13)
= ∫_0^1 [2x_1 − 3x_1² + x_1⁴] dx_1 = 1/5.   (14)

Summarizing the results, X has correlation matrix

R_X = [ 1/10  3/20  1/5
        3/20  3/10  2/5
        1/5   2/5   3/5 ].   (15)

Vector X has covariance matrix

C_X = R_X − E[X]E[X]′   (16)
= R_X − [1/4; 1/2; 3/4][1/4 1/2 3/4]   (17)
= R_X − [ 1/16  1/8  3/16
          1/8   1/4  3/8
          3/16  3/8  9/16 ] = (1/80)[ 3 2 1
                                      2 4 2
                                      1 2 3 ].   (18)

This problem shows that even for fairly simple joint PDFs, computing the covariance matrix by calculus can be a time-consuming task.

Quiz 5.7

We observe that X = AZ + b where

A = [ 2  1
      1 −1 ],   b = [ 2
                      0 ].   (1)

It follows from Theorem 5.18 that µ_X = b and that

C_X = AA′ = [ 2  1; 1 −1 ][ 2  1; 1 −1 ]′ = [ 5 1; 1 2 ].   (2)
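This is easy to verify by simulation. A minimal sketch, assuming Z is a pair of iid Gaussian (0, 1) components:

>> m=1e5; Z=randn(2,m); A=[2 1;1 -1]; b=[2;0];
>> X=A*Z+b*ones(1,m); mean(X,2), cov(X')

The sample mean should be near b and the sample covariance near [5 1; 1 2].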

Quiz 5.8

First, we observe that Y = AT where A = [1/31 1/31 · · · 1/31]. Since T is a Gaussian random vector, Theorem 5.16 tells us that Y is a 1-dimensional Gaussian vector, i.e., just a Gaussian random variable. The expected value of Y is µ_Y = µ_T = 80. The covariance matrix of Y is 1 × 1 and is just equal to Var[Y]. Thus, by Theorem 5.16, Var[Y] = A C_T A′.

function p=julytemps(T);
[D1 D2]=ndgrid((1:31),(1:31));
CT=36./(1+abs(D1-D2));
A=ones(31,1)/31.0;
CY=(A')*CT*A;
p=phi((T-80)/sqrt(CY));

In julytemps.m, the first two lines generate the 31 × 31 covariance matrix CT, or C_T. Next we calculate Var[Y]. The final step is to use the Φ(·) function to calculate P[Y < T].

Here is the output of julytemps.m:

>> julytemps([70 75 80 85 90 95])

ans =

0.0000 0.0221 0.5000 0.9779 1.0000 1.0000

Note that P[T ≤ 70] is not actually zero and that P[T ≤ 90] is not actually 1.0000. It's just that MATLAB's short format output, invoked with the command format short, rounds off those probabilities. Here is the long format output:

>> format long

>> julytemps([70 75 80 85 90 95])

ans =

Columns 1 through 4

0.00002844263128 0.02207383067604 0.50000000000000 0.97792616932396

Columns 5 through 6

0.99997155736872 0.99999999922010


The ndgrid function is a useful way to calculate many covariance matrices. However, in this problem, C_T has a special structure; the i, j-th element is

C_T(i, j) = c_{|i−j|} = 36/(1 + |i − j|).   (1)

If we write out the elements of the covariance matrix, we see that

C_T = [ c_0   c_1   · · ·  c_30
        c_1   c_0   ⋱      ⋮
        ⋮     ⋱     ⋱      c_1
        c_30  · · · c_1    c_0 ].   (2)

This covariance matrix is known as a symmetric Toeplitz matrix. We will see in Chapters 9 and 11 that Toeplitz covariance matrices are quite common. In fact, MATLAB has a toeplitz function for generating them. The function julytemps2 uses toeplitz to generate the covariance matrix C_T.

function p=julytemps2(T);
c=36./(1+abs(0:30));
CT=toeplitz(c);
A=ones(31,1)/31.0;
CY=(A')*CT*A;
p=phi((T-80)/sqrt(CY));


Quiz Solutions – Chapter 6

Quiz 6.1

Let K_1, . . . , K_n denote a sequence of iid random variables each with PMF

P_K(k) = { 1/4, k = 1, . . . , 4;  0, otherwise }   (1)

We can write W_n in the form W_n = K_1 + · · · + K_n. First, we note that the first two moments of K_i are

E[K_i] = (1 + 2 + 3 + 4)/4 = 2.5   (2)
E[K_i²] = (1² + 2² + 3² + 4²)/4 = 7.5   (3)

Thus the variance of K_i is

Var[K_i] = E[K_i²] − (E[K_i])² = 7.5 − (2.5)² = 1.25   (4)

Since E[K_i] = 2.5, the expected value of W_n is

E[W_n] = E[K_1] + · · · + E[K_n] = nE[K_i] = 2.5n   (5)

Since the rolls are independent, the random variables K_1, . . . , K_n are independent. Hence, by Theorem 6.3, the variance of the sum equals the sum of the variances. That is,

Var[W_n] = Var[K_1] + · · · + Var[K_n] = 1.25n   (6)

Quiz 6.2

Random variables X and Y have PDFs

f_X(x) = { 3e^{−3x}, x ≥ 0;  0, otherwise }   f_Y(y) = { 2e^{−2y}, y ≥ 0;  0, otherwise }   (1)

Since X and Y are nonnegative, W = X + Y is nonnegative. By Theorem 6.5, the PDF of W = X + Y is

f_W(w) = ∫_{−∞}^{∞} f_X(w − y) f_Y(y) dy = 6 ∫_0^w e^{−3(w−y)} e^{−2y} dy   (2)

Fortunately, this integral is easy to evaluate. For w > 0,

f_W(w) = 6e^{−3w} e^{y} |_0^w = 6(e^{−2w} − e^{−3w})   (3)

Since f_W(w) = 0 for w < 0, a complete expression for the PDF of W is

f_W(w) = { 6e^{−2w}(1 − e^{−w}), w ≥ 0;  0, otherwise }   (4)


Quiz 6.3

The MGF of K is

φ_K(s) = E[e^{sK}] = Σ_{k=0}^{4} (0.2)e^{sk} = 0.2(1 + e^s + e^{2s} + e^{3s} + e^{4s})   (1)

We find the moments by taking derivatives. The first derivative of φ_K(s) is

dφ_K(s)/ds = 0.2(e^s + 2e^{2s} + 3e^{3s} + 4e^{4s})   (2)

Evaluating the derivative at s = 0 yields

E[K] = dφ_K(s)/ds |_{s=0} = 0.2(1 + 2 + 3 + 4) = 2   (3)

To find higher-order moments, we continue to take derivatives:

E[K²] = d²φ_K(s)/ds² |_{s=0} = 0.2(e^s + 4e^{2s} + 9e^{3s} + 16e^{4s}) |_{s=0} = 6   (4)
E[K³] = d³φ_K(s)/ds³ |_{s=0} = 0.2(e^s + 8e^{2s} + 27e^{3s} + 64e^{4s}) |_{s=0} = 20   (5)
E[K⁴] = d⁴φ_K(s)/ds⁴ |_{s=0} = 0.2(e^s + 16e^{2s} + 81e^{3s} + 256e^{4s}) |_{s=0} = 70.8   (6)
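Since K is a discrete uniform random variable on {0, 1, 2, 3, 4}, the moments can also be checked directly from the PMF; a minimal MATLAB sketch:

>> k=0:4; pk=0.2*ones(1,5);
>> [sum(k.*pk) sum(k.^2.*pk) sum(k.^3.*pk) sum(k.^4.*pk)]

which returns 2, 6, 20 and 70.8, matching the MGF calculations.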

Quiz 6.4

(A) Each K_i has MGF

φ_K(s) = E[e^{sK_i}] = (e^s + e^{2s} + · · · + e^{ns})/n = e^s(1 − e^{ns})/(n(1 − e^s))   (1)

Since the sequence of K_i is independent, Theorem 6.8 says the MGF of J is

φ_J(s) = (φ_K(s))^m = e^{ms}(1 − e^{ns})^m/(n^m(1 − e^s)^m)   (2)

(B) Since the set of α^j X_j are independent Gaussian random variables, Theorem 6.10 says that W is a Gaussian random variable. Thus to find the PDF of W, we need only find the expected value and variance. Since the expectation of the sum equals the sum of the expectations:

E[W] = αE[X_1] + α²E[X_2] + · · · + α^n E[X_n] = 0   (3)

Since the α^j X_j are independent, the variance of the sum equals the sum of the variances:

Var[W] = α² Var[X_1] + α⁴ Var[X_2] + · · · + α^{2n} Var[X_n]   (4)
= α² + 2(α²)² + 3(α²)³ + · · · + n(α²)^n   (5)

Defining q = α², we can use Math Fact B.6 to write

Var[W] = [α² − α^{2n+2}[1 + n(1 − α²)]]/(1 − α²)²   (6)

With E[W] = 0 and σ_W² = Var[W], we can write the PDF of W as

f_W(w) = (1/√(2πσ_W²)) e^{−w²/(2σ_W²)}   (7)

Quiz 6.5

(1) From Table 6.1, each X_i has MGF φ_X(s) and random variable N has MGF φ_N(s) where

φ_X(s) = 1/(1 − s),   φ_N(s) = (1/5)e^s / (1 − (4/5)e^s).   (1)

From Theorem 6.12, R has MGF

φ_R(s) = φ_N(ln φ_X(s)) = (1/5)φ_X(s) / (1 − (4/5)φ_X(s))   (2)

Substituting the expression for φ_X(s) yields

φ_R(s) = (1/5) / (1/5 − s).   (3)

(2) From Table 6.1, we see that R has the MGF of an exponential (1/5) random variable. The corresponding PDF is

f_R(r) = { (1/5)e^{−r/5}, r ≥ 0;  0, otherwise }   (4)

This quiz is an example of the general result that a geometric sum of exponential random variables is an exponential random variable.


Quiz 6.6

(1) The expected access time is

E[X] = ∫_{−∞}^{∞} x f_X(x) dx = ∫_0^{12} (x/12) dx = 6 msec   (1)

(2) The second moment of the access time is

E[X²] = ∫_{−∞}^{∞} x² f_X(x) dx = ∫_0^{12} (x²/12) dx = 48   (2)

The variance of the access time is Var[X] = E[X²] − (E[X])² = 48 − 36 = 12.

(3) Using X_i to denote the access time of block i, we can write

A = X_1 + X_2 + · · · + X_12   (3)

Since the expectation of the sum equals the sum of the expectations,

E[A] = E[X_1] + · · · + E[X_12] = 12E[X] = 72 msec   (4)

(4) Since the X_i are independent,

Var[A] = Var[X_1] + · · · + Var[X_12] = 12 Var[X] = 144   (5)

Hence, the standard deviation of A is σ_A = 12.

(5) To use the central limit theorem, we write

P[A > 75] = 1 − P[A ≤ 75]   (6)
= 1 − P[(A − E[A])/σ_A ≤ (75 − E[A])/σ_A]   (7)
≈ 1 − Φ((75 − 72)/12)   (8)
= 1 − Φ(0.25) = 1 − 0.5987 = 0.4013   (9)

Note that we used Table 3.1 to look up Φ(0.25).

(6) Once again, we use the central limit theorem and Table 3.1 to estimate

P[A < 48] = P[(A − E[A])/σ_A < (48 − E[A])/σ_A]   (10)
≈ Φ((48 − 72)/12)   (11)
= 1 − Φ(2) = 1 − 0.9773 = 0.0227   (12)
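The CLT approximation can be compared against simulation with a short sketch in plain MATLAB:

>> m=1e5; A=sum(12*rand(12,m));
>> [mean(A>75) mean(A<48)]

Here each column of 12*rand(12,m) holds twelve independent uniform (0, 12) access times. The empirical probabilities should be near, though not exactly equal to, the Gaussian approximations 0.4013 and 0.0227.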


Quiz 6.7

Random variable K_n has a binomial distribution for n trials and success probability P[V] = 3/4.

(1) The expected number of voice calls out of 48 calls is E[K_48] = 48P[V] = 36.

(2) The variance of K_48 is

Var[K_48] = 48P[V](1 − P[V]) = 48(3/4)(1/4) = 9   (1)

Thus K_48 has standard deviation σ_{K_48} = 3.

(3) Using the ordinary central limit theorem and Table 3.1 yields

P[30 ≤ K_48 ≤ 42] ≈ Φ((42 − 36)/3) − Φ((30 − 36)/3) = Φ(2) − Φ(−2)   (2)

Recalling that Φ(−x) = 1 − Φ(x), we have

P[30 ≤ K_48 ≤ 42] ≈ 2Φ(2) − 1 = 0.9545   (3)

(4) Since K_48 is a discrete random variable, we can use the De Moivre-Laplace approximation to estimate

P[30 ≤ K_48 ≤ 42] ≈ Φ((42 + 0.5 − 36)/3) − Φ((30 − 0.5 − 36)/3)   (4)
= 2Φ(2.16666) − 1 = 0.9687   (5)

Quiz 6.8

The train interarrival times X_1, X_2, X_3 are iid exponential (λ) random variables. The arrival time of the third train is

W = X_1 + X_2 + X_3.   (1)

In Theorem 6.11, we found that the sum of three iid exponential (λ) random variables is an Erlang (n = 3, λ) random variable. From Appendix A, we find that W has expected value and variance

E[W] = 3/λ = 6,   Var[W] = 3/λ² = 12   (2)

(1) By the Central Limit Theorem,

P[W > 20] = P[(W − 6)/√12 > (20 − 6)/√12] ≈ Q(7/√3) = 2.66 × 10⁻⁵   (3)

(2) To use the Chernoff bound, we note that the MGF of W is

φ_W(s) = (λ/(λ − s))³ = 1/(1 − 2s)³   (4)

The Chernoff bound states that

P[W > 20] ≤ min_{s≥0} e^{−20s} φ_W(s) = min_{s≥0} e^{−20s}/(1 − 2s)³   (5)

To minimize h(s) = e^{−20s}/(1 − 2s)³, we set the derivative of h(s) to zero:

dh(s)/ds = [−20(1 − 2s)³ e^{−20s} + 6e^{−20s}(1 − 2s)²]/(1 − 2s)⁶ = 0   (6)

This implies 20(1 − 2s) = 6 or s = 7/20. Applying s = 7/20 into the Chernoff bound yields

P[W > 20] ≤ e^{−20s}/(1 − 2s)³ |_{s=7/20} = (10/3)³ e^{−7} = 0.0338   (7)

(3) Theorem 3.11 says that for any w > 0, the CDF of the Erlang (λ, 3) random variable W satisfies

F_W(w) = 1 − Σ_{k=0}^{2} (λw)^k e^{−λw}/k!   (8)

Equivalently, for λ = 1/2 and w = 20,

P[W > 20] = 1 − F_W(20)   (9)
= e^{−10}(1 + 10/1! + 10²/2!) = 61e^{−10} = 0.0028   (10)

Although the Chernoff bound is relatively weak in that it overestimates the probability by roughly a factor of 12, it is a valid bound. By contrast, the Central Limit Theorem approximation grossly underestimates the true probability.
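The minimizing s in the Chernoff bound can also be found numerically. A minimal sketch using the standard MATLAB function fminbnd:

>> h=@(s) exp(-20*s)./(1-2*s).^3;
>> s=fminbnd(h,0,0.49), bound=h(s)

This should return s near 7/20 = 0.35 and a bound near 0.0338. The exact probability 61e^{−10} ≈ 0.0028 then quantifies how loose the bound is.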

Quiz 6.9

One solution to this problem is to follow the approach of Example 6.19:

%unifbinom100.m

sx=0:100;sy=0:100;

px=binomialpmf(100,0.5,sx); py=duniformpmf(0,100,sy);

[SX,SY]=ndgrid(sx,sy); [PX,PY]=ndgrid(px,py);

SW=SX+SY; PW=PX.*PY;

sw=unique(SW); pw=finitepmf(SW,PW,sw);

pmfplot(sw,pw,'\itw','\itP_W(w)');

A graph of the PMF P_W(w) appears in Figure 2. With some thought, it should be apparent that the finitepmf function is implementing the convolution of the two PMFs.


Figure 2: From Quiz 6.9, the PMF P_W(w) of the independent sum of a binomial (100, 0.5) random variable and a discrete uniform (0, 100) random variable. (Plot omitted; the horizontal axis w runs from 0 to 200 and the vertical axis P_W(w) from 0 to 0.01.)


Quiz Solutions – Chapter 7

Quiz 7.1

An exponential random variable with expected value 1 also has variance 1. By Theorem 7.1, M_n(X) has variance Var[M_n(X)] = 1/n. Hence, we need n = 100 samples.

Quiz 7.2

The arrival time of the third elevator is W = X_1 + X_2 + X_3. Since each X_i is uniform (0, 30),

E[X_i] = 15,   Var[X_i] = (30 − 0)²/12 = 75.   (1)

Thus E[W] = 3E[X_i] = 45, and Var[W] = 3 Var[X_i] = 225.

(1) By the Markov inequality,

P[W > 75] ≤ E[W]/75 = 45/75 = 3/5   (2)

(2) By the Chebyshev inequality,

P[W > 75] = P[W − E[W] > 30]   (3)
≤ P[|W − E[W]| > 30] ≤ Var[W]/30² = 225/900 = 1/4   (4)
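Both bounds are quite loose here, as a quick simulation sketch shows:

>> m=1e6; W=sum(30*rand(3,m)); mean(W>75)

The empirical P[W > 75] is about 1/48 ≈ 0.02, well below the Markov bound 3/5 and the Chebyshev bound 1/4.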

Quiz 7.3

Define the random variable W = (X − µ_X)². Observe that V_100(X) = M_100(W). By Theorem 7.6, the mean square error is

E[(M_100(W) − µ_W)²] = Var[W]/100   (1)

Observe that µ_X = 0 so that W = X². Thus,

µ_W = E[X²] = ∫_{−1}^{1} x² f_X(x) dx = 1/3   (2)
E[W²] = E[X⁴] = ∫_{−1}^{1} x⁴ f_X(x) dx = 1/5   (3)

Therefore Var[W] = E[W²] − µ_W² = 1/5 − (1/3)² = 4/45 and the mean square error is 4/4500 = 0.000889.


Quiz 7.4

Assuming the number n of samples is large, we can use a Gaussian approximation for M_n(X). Since E[X] = p and Var[X] = p(1 − p), we apply Theorem 7.13, which says that the interval estimate

M_n(X) − c ≤ p ≤ M_n(X) + c   (1)

has confidence coefficient 1 − α where

α = 2 − 2Φ(c√n/(p(1 − p))).   (2)

We must ensure for every value of p that 1 − α ≥ 0.9 or α ≤ 0.1. Equivalently, we must have

Φ(c√n/(p(1 − p))) ≥ 0.95   (3)

for every value of p. Since Φ(x) is an increasing function of x, we must satisfy c√n ≥ 1.65p(1 − p). Since p(1 − p) ≤ 1/4 for all p, we require that

c ≥ 1.65/(4√n) = 0.41/√n.   (4)

The 0.9 confidence interval estimate of p is

M_n(X) − 0.41/√n ≤ p ≤ M_n(X) + 0.41/√n.   (5)

For the 0.99 confidence interval, we have α ≤ 0.01, implying Φ(c√n/(p(1 − p))) ≥ 0.995. This implies c√n ≥ 2.58p(1 − p). Since p(1 − p) ≤ 1/4 for all p, we require that c ≥ (0.25)(2.58)/√n. In this case, the 0.99 confidence interval estimate is

M_n(X) − 0.645/√n ≤ p ≤ M_n(X) + 0.645/√n.   (6)

Note that if M_100(X) = 0.4, then the 0.99 confidence interval estimate is

0.3355 ≤ p ≤ 0.4645.   (7)

The interval is wide because the 0.99 confidence is high.

Quiz 7.5

Following the approach of bernoullitraces.m, we generate m = 1000 sample paths, each sample path having n = 100 Bernoulli trials. At time k, OK(k) counts the fraction of sample paths that have sample mean within one standard error of p. The program bernoullisample.m graphs the number of traces within one standard error as a function of the time, i.e., the number of trials in each trace.


function OK=bernoullisample(n,m,p);
x=reshape(bernoullirv(p,m*n),n,m);
nn=(1:n)'*ones(1,m);
MN=cumsum(x)./nn;
stderr=sqrt(p*(1-p))./sqrt((1:n)');
stderrmat=stderr*ones(1,m);
OK=sum(abs(MN-p)<stderrmat,2)/m;
plot(1:n,OK,'-s');

The following graph was generated by bernoullisample(100,5000,0.5):

(Plot omitted; the fraction of traces within one standard error fluctuates between roughly 0.4 and 1 as the number of trials runs from 0 to 100.)

As we would expect, as m gets large, the fraction of traces within one standard error approaches 2Φ(1) − 1 ≈ 0.68. The unusual sawtooth pattern, though perhaps unexpected, is examined in Problem 7.5.2.


Quiz Solutions – Chapter 8

Quiz 8.1

From the problem statement, each X_i has PDF and CDF

f_{X_i}(x) = { e^{−x}, x ≥ 0;  0, otherwise }   F_{X_i}(x) = { 0, x < 0;  1 − e^{−x}, x ≥ 0 }   (1)

Hence, the CDF of the maximum of X_1, . . . , X_15 obeys

F_X(x) = P[X ≤ x] = P[X_1 ≤ x, X_2 ≤ x, · · · , X_15 ≤ x] = [P[X_i ≤ x]]^{15}.   (2)

This implies that for x ≥ 0,

F_X(x) = [F_{X_i}(x)]^{15} = (1 − e^{−x})^{15}   (3)

To design a significance test, we must choose a rejection region for X. A reasonable choice is to reject the hypothesis if X is too small. That is, let R = {X ≤ r}. For a significance level of α = 0.01, we obtain

α = P[X ≤ r] = (1 − e^{−r})^{15} = 0.01   (4)

It is straightforward to show that

r = −ln(1 − (0.01)^{1/15}) = 1.33   (5)

Hence, if we observe X < 1.33, then we reject the hypothesis.
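The threshold in (5) is a one-liner to confirm:

>> r=-log(1-0.01^(1/15))

which returns approximately 1.33; as a check, (1-exp(-r))^15 evaluates back to 0.01.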

Quiz 8.2

From the problem statement, the conditional PMFs of K are

P_{K|H_0}(k) = { 10^{4k} e^{−10⁴}/k!, k = 0, 1, . . .;  0, otherwise }   (1)
P_{K|H_1}(k) = { 10^{6k} e^{−10⁶}/k!, k = 0, 1, . . .;  0, otherwise }   (2)

Since the two hypotheses are equally likely, the MAP and ML tests are the same. From Theorem 8.6, the ML hypothesis rule is

k ∈ A_0 if P_{K|H_0}(k) ≥ P_{K|H_1}(k);   k ∈ A_1 otherwise.   (3)

This rule simplifies to

k ∈ A_0 if k ≤ k* = (10⁶ − 10⁴)/ln 100 = 214,975.7;   k ∈ A_1 otherwise.   (4)

Thus if we observe at least 214,976 photons, then we accept hypothesis H_1.


Quiz 8.3

For the QPSK system, a symbol error occurs when s_i is transmitted but (X_1, X_2) ∈ A_j for some j ≠ i. For a QPSK system, it is easier to calculate the probability of a correct decision. Given H_0, the conditional probability of a correct decision is

P[C|H_0] = P[X_1 > 0, X_2 > 0|H_0] = P[√(E/2) + N_1 > 0, √(E/2) + N_2 > 0]   (1)

Because of the symmetry of the signals, P[C|H_0] = P[C|H_i] for all i. This implies the probability of a correct decision is P[C] = P[C|H_0]. Since N_1 and N_2 are iid Gaussian (0, σ) random variables, we have

P[C] = P[C|H_0] = P[√(E/2) + N_1 > 0] P[√(E/2) + N_2 > 0]   (2)
= (P[N_1 > −√(E/2)])²   (3)
= (1 − Φ(−√(E/2)/σ))²   (4)

Since Φ(−x) = 1 − Φ(x), we have P[C] = Φ²(√(E/(2σ²))). Equivalently, the probability of error is

P_ERR = 1 − P[C] = 1 − Φ²(√(E/(2σ²)))   (5)

Quiz 8.4

To generate the ROC, the existing program sqdistor already calculates this miss probability P_MISS = P_01 and the false alarm probability P_FA = P_10. The modified program, sqdistroc.m, is essentially the same as sqdistor except the output is a matrix FM whose columns are the false alarm and miss probabilities. Next, the program sqdistrocplot.m calls sqdistroc three times to generate a plot that compares the receiver performance for the three requested values of d. Here is the modified code:

function FM=sqdistroc(v,d,m,T)
%square law distortion recvr
%P(error) for m bits tested
%transmit v volts or -v volts,
%add N volts, N is Gauss(0,1)
%add d(v+N)^2 distortion
%receive 1 if x>T, otherwise 0
%FM = [P(FA) P(MISS)]
x=(v+randn(m,1));
[XX,TT]=ndgrid(x,T(:));
P01=sum((XX+d*(XX.^2)< TT),1)/m;
x= -v+randn(m,1);
[XX,TT]=ndgrid(x,T(:));
P10=sum((XX+d*(XX.^2)>TT),1)/m;
FM=[P10(:) P01(:)];

function FM=sqdistrocplot(v,m,T);
FM1=sqdistroc(v,0.1,m,T);
FM2=sqdistroc(v,0.2,m,T);
FM5=sqdistroc(v,0.3,m,T);
FM=[FM1 FM2 FM5];
loglog(FM1(:,1),FM1(:,2),'-k', ...
FM2(:,1),FM2(:,2),'--k', ...
FM5(:,1),FM5(:,2),':k');
legend('\it d=0.1','\it d=0.2',...
'\it d=0.3',3)
ylabel('P_{MISS}');
xlabel('P_{FA}');


To see the effect of d, the commands

T=-3:0.1:3; sqdistrocplot(3,100000,T);

generated the plot shown in Figure 3.

Figure 3: The receiver operating curve for the communications system of Quiz 8.4 with squared distortion, generated by T=-3:0.1:3; sqdistrocplot(3,100000,T);. (Log-log plot of P_MISS versus P_FA omitted, with both axes running from 10⁻⁵ to 10⁰ and one curve each for d = 0.1, d = 0.2 and d = 0.3.)


Quiz Solutions – Chapter 9

Quiz 9.1

(1) First, we calculate the marginal PDF for 0 ≤ y ≤ 1:

f_Y(y) = ∫_0^y 2(y + x) dx = 2xy + x² |_{x=0}^{x=y} = 3y²   (1)

This implies the conditional PDF of X given Y is

f_{X|Y}(x|y) = f_{X,Y}(x, y)/f_Y(y) = { 2/(3y) + 2x/(3y²), 0 ≤ x ≤ y;  0, otherwise }   (2)

(2) The minimum mean square error estimate of X given Y = y is

x̂_M(y) = E[X|Y = y] = ∫_0^y [2x/(3y) + 2x²/(3y²)] dx = 5y/9   (3)

Thus the MMSE estimator of X given Y is X̂_M(Y) = 5Y/9.

(3) To obtain the conditional PDF f_{Y|X}(y|x), we need the marginal PDF f_X(x). For 0 ≤ x ≤ 1,

f_X(x) = ∫_x^1 2(y + x) dy = y² + 2xy |_{y=x}^{y=1} = 1 + 2x − 3x²   (4)

For 0 ≤ x ≤ 1, the conditional PDF of Y given X is

f_{Y|X}(y|x) = { 2(y + x)/(1 + 2x − 3x²), x ≤ y ≤ 1;  0, otherwise }   (6)

(4) The MMSE estimate of Y given X = x is

ŷ_M(x) = E[Y|X = x] = ∫_x^1 (2y² + 2xy)/(1 + 2x − 3x²) dy   (7)
= [2y³/3 + xy²]/(1 + 2x − 3x²) |_{y=x}^{y=1}   (8)
= (2 + 3x − 5x³)/(3 + 6x − 9x²)   (9)


Quiz 9.2

(1) Since the expectation of the sum equals the sum of the expectations,

E[R] = E[T] + E[X] = 0   (1)

(2) Since T and X are independent, the variance of the sum R = T + X is

Var[R] = Var[T] + Var[X] = 9 + 3 = 12   (2)

(3) Since T and R have expected values E[R] = E[T] = 0,

Cov[T, R] = E[TR] = E[T(T + X)] = E[T²] + E[TX]   (3)

Since T and X are independent and have zero expected value, E[TX] = E[T]E[X] = 0 and E[T²] = Var[T]. Thus Cov[T, R] = Var[T] = 9.

(4) From Definition 4.8, the correlation coefficient of T and R is

ρ_{T,R} = Cov[T, R]/√(Var[R] Var[T]) = σ_T/σ_R = √3/2   (4)

(5) From Theorem 9.4, the optimum linear estimate of T given R is

T̂_L(R) = ρ_{T,R}(σ_T/σ_R)(R − E[R]) + E[T]   (5)

Since E[R] = E[T] = 0 and ρ_{T,R} = σ_T/σ_R,

T̂_L(R) = (σ_T²/σ_R²)R = [σ_T²/(σ_T² + σ_X²)]R = (3/4)R   (6)

Hence a* = 3/4 and b* = 0.

(6) By Theorem 9.4, the mean square error of the linear estimate is

e*_L = Var[T](1 − ρ²_{T,R}) = 9(1 − 3/4) = 9/4   (7)

Quiz 9.3

When R = r, the conditional PDF of X = Y − 40 − 40 log_10 r is Gaussian with expected value −40 − 40 log_10 r and variance 64. The conditional PDF of X given R is

f_{X|R}(x|r) = (1/√(128π)) e^{−(x + 40 + 40 log_10 r)²/128}   (1)

From the conditional PDF f_{X|R}(x|r), we can use Definition 9.2 to write the ML estimate of R given X = x as

r̂_ML(x) = arg max_{r≥0} f_{X|R}(x|r)   (2)

We observe that f_{X|R}(x|r) is maximized when the exponent (x + 40 + 40 log_10 r)² is minimized. This minimum occurs when the exponent is zero, yielding

log_10 r = −1 − x/40   (3)

or

r̂_ML(x) = (0.1)10^{−x/40} m   (4)

If the result doesn't look correct, note that a typical figure for the signal strength might be x = −120 dB. This corresponds to a distance estimate of r̂_ML(−120) = 100 m.

For the MAP estimate, we observe that the joint PDF of X and R is

f_{X,R}(x, r) = f_{X|R}(x|r) f_R(r) = (1/(10⁶√(32π))) r e^{−(x + 40 + 40 log_10 r)²/128}   (5)

From Theorem 9.6, the MAP estimate of R given X = x is the value of r that maximizes f_{X,R}(x, r). That is,

r̂_MAP(x) = arg max_{0≤r≤1000} f_{X,R}(x, r)   (6)

Note that we have included the constraint r ≤ 1000 in the maximization to highlight the fact that under our probability model, R ≤ 1000 m. Setting the derivative of f_{X,R}(x, r) with respect to r to zero yields

e^{−(x + 40 + 40 log_10 r)²/128} [1 − (80 log_10 e/128)(x + 40 + 40 log_10 r)] = 0   (7)

Solving for r yields

r = 10^{[1/(25 log_10 e)] − 1} 10^{−x/40} = (0.1236)10^{−x/40}   (8)

This is the MAP estimate of R given X = x as long as r ≤ 1000 m. When x ≤ −156.3 dB, the above estimate will exceed 1000 m, which is not possible in our probability model. Hence, the complete description of the MAP estimate is

r̂_MAP(x) = { 1000, x < −156.3;  (0.1236)10^{−x/40}, x ≥ −156.3 }   (9)

For example, if x = −120 dB, then r̂_MAP(−120) = 123.6 m. When the measured signal strength is not too low, the MAP estimate is 23.6% larger than the ML estimate. This reflects the fact that large values of R are a priori more probable than small values. However, for very low signal strengths, the MAP estimate takes into account that the distance can never exceed 1000 m.

never exceed 1000 m.
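The two estimators are easy to compare numerically. The lines below are a sketch (our own variable names, not part of matcode.zip) that evaluates both estimates for a measured signal strength x; the min() implements the 1000 m clipping in Equation (9):

%ML and MAP distance estimates of Quiz 9.3 (sketch)
x=-120;                           %measured signal strength in dB
rml=0.1*10^(-x/40)                %ML estimate: 100 m
rmap=min(1000,0.1236*10^(-x/40))  %MAP estimate: 123.6 m, clipped at 1000 m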


Quiz 9.4

(1) From Theorem 9.4, the LMSE estimate of X₂ given Y₂ is X̂₂(Y₂) = a*Y₂ + b* where

a* = Cov[X₂, Y₂]/Var[Y₂],  b* = µ_{X₂} − a*µ_{Y₂}. (1)

Because E[X] = E[Y] = 0,

Cov[X₂, Y₂] = E[X₂Y₂] = E[X₂(X₂ + W₂)] = E[X₂²] = 1 (2)

Var[Y₂] = Var[X₂] + Var[W₂] = E[X₂²] + E[W₂²] = 1.1 (3)

It follows that a* = 1/1.1. Because µ_{X₂} = µ_{Y₂} = 0, it follows that b* = 0. Finally, to compute the expected square error, we calculate the correlation coefficient

ρ_{X₂,Y₂} = Cov[X₂, Y₂]/(σ_{X₂}σ_{Y₂}) = 1/√1.1 (4)

The expected square error is

e*_L = Var[X₂](1 − ρ²_{X₂,Y₂}) = 1 − 1/1.1 = 1/11 = 0.0909 (5)

(2) Since Y = X + W and E[X] = E[W] = 0, it follows that E[Y] = 0. Thus we can apply Theorem 9.7. Note that X and W have correlation matrices

R_X = [1 −0.9; −0.9 1],  R_W = [0.1 0; 0 0.1]. (6)

In terms of Theorem 9.7, n = 2 and we wish to estimate X₂ given the observation vector Y = [Y₁ Y₂]'. To apply Theorem 9.7, we need to find R_Y and R_{YX₂}.

R_Y = E[YY'] = E[(X + W)(X' + W')] (7)
    = E[XX' + XW' + WX' + WW']. (8)

Because X and W are independent, E[XW'] = E[X]E[W'] = 0. Similarly, E[WX'] = 0. This implies

R_Y = E[XX'] + E[WW'] = R_X + R_W = [1.1 −0.9; −0.9 1.1]. (9)

In addition, we need to find

R_{YX₂} = E[YX₂] = [E[Y₁X₂]; E[Y₂X₂]] = [E[(X₁ + W₁)X₂]; E[(X₂ + W₂)X₂]]. (10)

Since X and W are independent vectors, E[W₁X₂] = E[W₁]E[X₂] = 0 and E[W₂X₂] = 0. Thus

R_{YX₂} = [E[X₁X₂]; E[X₂²]] = [−0.9; 1]. (11)

By Theorem 9.7,

â = R_Y^{−1} R_{YX₂} = [−0.225; 0.725] (12)

Therefore, the optimum linear estimator of X₂ given Y₁ and Y₂ is

X̂_L = â'Y = −0.225Y₁ + 0.725Y₂. (13)

The mean square error is

Var[X₂] − â'R_{YX₂} = Var[X] − a₁r_{Y₁,X₂} − a₂r_{Y₂,X₂} = 0.0725. (14)
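The part (2) calculation is a direct application of Theorem 9.7 and is easily checked in MATLAB. This is only a sketch with our own variable names:

%Check of Quiz 9.4(2) (sketch)
RX=[1 -0.9; -0.9 1]; RW=0.1*eye(2);
RY=RX+RW;          %R_Y = R_X + R_W
RYX2=[-0.9; 1];    %R_YX2
ahat=inv(RY)*RYX2  %ahat = [-0.225; 0.725]
mse=1-ahat'*RYX2   %mean square error = 0.0725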

Quiz 9.5

Since X and W have zero expected value, Y also has zero expected value. Thus, by Theorem 9.7, X̂_L(Y) = â'Y where â = R_Y^{−1}R_{YX}. Since X and W are independent, E[WX] = 0 and E[XW'] = 0'. This implies

R_{YX} = E[YX] = E[(1X + W)X] = 1E[X²] = 1. (1)

By the same reasoning, the correlation matrix of Y is

R_Y = E[YY'] = E[(1X + W)(X1' + W')] (2)
    = 11'E[X²] + 1E[XW'] + E[WX]1' + E[WW'] (3)
    = 11' + R_W (4)

Note that 11' is a 20 × 20 matrix with every entry equal to 1. Thus,

â = R_Y^{−1}R_{YX} = (11' + R_W)^{−1}1 (5)

and the optimal linear estimator is

X̂_L(Y) = 1'(11' + R_W)^{−1}Y (6)

The mean square error is

e*_L = Var[X] − â'R_{YX} = 1 − 1'(11' + R_W)^{−1}1 (7)

Now we note that R_W has i, jth entry R_W(i, j) = c^{|i−j|−1}. The question we must address is what value c minimizes e*_L. This problem is atypical in that one does not usually get to choose the correlation structure of the noise. However, we will see that the answer is somewhat instructive.

We note that the answer is not obviously apparent from Equation (7). In particular, we observe that Var[W_i] = R_W(i, i) = 1/c. Thus, when c is small, the noises W_i have high variance and we would expect our estimator to be poor. On the other hand, if c is large, W_i and W_j are highly correlated and the separate measurements of X are very dependent. This would suggest that large values of c will also result in poor MSE. If this argument is not clear, consider the extreme case in which every W_i and W_j have correlation coefficient ρ_{ij} = 1. In this case, our 20 measurements will be all the same and one measurement is as good as 20 measurements.

To find the optimal value of c, we write a MATLAB function mquiz9(c) to calculate the MSE for a given c and a second function that finds and plots the MSE for a range of values of c.

function [mse,af]=mquiz9(c);
v1=ones(20,1);
RW=toeplitz(c.^((0:19)-1));
RY=(v1*(v1')) +RW;
af=(inv(RY))*v1;
mse=1-((v1')*af);

function cmin=mquiz9minc(c);
msec=zeros(size(c));
for k=1:length(c),
    [msec(k),af]=mquiz9(c(k));
end
plot(c,msec);
xlabel('c');ylabel('e_L^*');
[msemin,optk]=min(msec);
cmin=c(optk);

Note in mquiz9 that v1 corresponds to the vector 1 of all ones. The following commands find the minimum c and also produce the following graph:

>> c=0.01:0.01:0.99;
>> mquiz9minc(c)
ans =
    0.4500

[Figure: plot of e_L^* versus c for 0 ≤ c ≤ 1; the MSE is large for c near 0 and near 1, with a minimum near c = 0.45.]

As we see in the graph, both small values and large values of c result in large MSE.


Quiz Solutions – Chapter 10

Quiz 10.1

There are many correct answers to this question. A correct answer specifies enough random variables to specify the sample path exactly. One choice for an alternate set of random variables that would specify m(t, s) is

• m(0, s), the number of ongoing calls at the start of the experiment

• N, the number of new calls that arrive during the experiment

• X₁, ..., X_N, the interarrival times of the N new arrivals

• H, the number of calls that hang up during the experiment

• D₁, ..., D_H, the call completion times of the H calls that hang up

Quiz 10.2

(1) We obtain a continuous time, continuous valued process when we record the temper-

ature as a continuous waveform over time.

(2) If at every moment in time, we round the temperature to the nearest degree, then we

obtain a continuous time, discrete valued process.

(3) If we sample the process in part (a) every T seconds, then we obtain a discrete time,

continuous valued process.

(4) Rounding the samples in part (c) to the nearest integer degree yields a discrete time,

discrete valued process.

Quiz 10.3

(1) Each resistor has resistance R in ohms with uniform PDF

f_R(r) = { 0.01, 950 ≤ r ≤ 1050; 0, otherwise } (1)

The probability that a test produces a 1% resistor is

p = P[990 ≤ R ≤ 1010] = ∫_{990}^{1010} (0.01) dr = 0.2 (2)

(2) In t seconds, exactly t resistors are tested. Each resistor is a 1% resistor with probability p, independent of any other resistor. Consequently, the number of 1% resistors found has the binomial PMF

P_{N(t)}(n) = { \binom{t}{n} pⁿ(1 − p)^{t−n}, n = 0, 1, ..., t; 0, otherwise } (3)

(3) First we will find the PMF of T₁. This problem is easy if we view each resistor test as an independent trial. A success occurs on a trial with probability p if we find a 1% resistor. The first 1% resistor is found at time T₁ = t if we observe failures on trials 1, ..., t − 1 followed by a success on trial t. Hence, just as in Example 2.11, T₁ has the geometric PMF

P_{T₁}(t) = { (1 − p)^{t−1} p, t = 1, 2, ...; 0, otherwise } (4)

Since p = 0.2, the probability the first 1% resistor is found in exactly five seconds is P_{T₁}(5) = (0.8)⁴(0.2) = 0.08192.

(4) From Theorem 2.5, a geometric random variable with success probability p has expected value 1/p. In this problem, E[T₁] = 1/p = 5.

(5) Note that once we find the first 1% resistor, the number of additional trials needed to find the second 1% resistor once again has a geometric PMF with expected value 1/p since each independent trial is a success with probability p. That is, T₂ = T₁ + T' where T' is independent and identically distributed to T₁. Thus

E[T₂|T₁ = 10] = E[T₁|T₁ = 10] + E[T'|T₁ = 10] (5)
             = 10 + E[T'] = 10 + 5 = 15 (6)
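Parts (3) and (4) can be checked with the geometric routines in matcode.zip. The snippet below is a sketch that assumes geometricpmf and geometricrv are on the MATLAB path:

%Check of Quiz 10.3(3)-(4) (sketch; assumes matcode on the path)
p=0.2;
pt5=geometricpmf(p,5)    %P_T1(5)=0.08192
t1=geometricrv(p,10000); %10,000 samples of T1
et1=mean(t1)             %should be close to E[T1]=1/p=5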

Quiz 10.4

Since each X_i is a N(0, 1) random variable, each X_i has PDF

f_{X_i}(x) = (1/√(2π)) e^{−x²/2} (1)

By Theorem 10.1, the joint PDF of X = [X₁ ··· X_n]' is

f_X(x) = f_{X(1),...,X(n)}(x₁, ..., x_n) = ∏_{i=1}^{n} f_X(x_i) = (1/(2π)^{n/2}) e^{−(x₁² + ··· + x_n²)/2} (2)


Quiz 10.5

The first and second hours are nonoverlapping intervals. Since one hour equals 3600 sec and the Poisson process has a rate of 10 packets/sec, the expected number of packets in each hour is E[M_i] = α = 36,000. This implies M₁ and M₂ are independent Poisson random variables each with PMF

P_{M_i}(m) = { α^m e^{−α}/m!, m = 0, 1, 2, ...; 0, otherwise } (1)

Since M₁ and M₂ are independent, the joint PMF of M₁ and M₂ is

P_{M₁,M₂}(m₁, m₂) = P_{M₁}(m₁)P_{M₂}(m₂) = { α^{m₁+m₂}e^{−2α}/(m₁! m₂!), m₁ = 0, 1, ...; m₂ = 0, 1, ...; 0, otherwise. } (2)

Quiz 10.6

To answer whether N′(t) is a Poisson process, we look at the interarrival times. Let X₁, X₂, ... denote the interarrival times of the N(t) process. Since we count only even-numbered arrivals for N′(t), the time until the first arrival of the N′(t) process is Y₁ = X₁ + X₂. Since X₁ and X₂ are independent exponential (λ) random variables, Y₁ is an Erlang (n = 2, λ) random variable; see Theorem 6.11. Since Y_i, the ith interarrival time of the N′(t) process, has the same PDF as Y₁, we can conclude that the interarrival times of N′(t) are not exponential random variables. Thus N′(t) is not a Poisson process.

Quiz 10.7

First, we note that for t > s,

X(t) − X(s) = (W(t) − W(s))/√α (1)

Since W(t) − W(s) is a Gaussian random variable, Theorem 3.13 states that X(t) − X(s) is Gaussian with expected value

E[X(t) − X(s)] = E[W(t) − W(s)]/√α = 0 (2)

and variance

E[(X(t) − X(s))²] = E[(W(t) − W(s))²]/α = α(t − s)/α = t − s (3)

Consider s′ ≤ s < t. Since s ≥ s′, W(t) − W(s) is independent of W(s′). This implies [W(t) − W(s)]/√α is independent of W(s′)/√α for all s ≥ s′. That is, X(t) − X(s) is independent of X(s′) for all s ≥ s′. Thus X(t) is a Brownian motion process with variance Var[X(t)] = t.


Quiz 10.8

First we find the expected value

µ_Y(t) = µ_X(t) + µ_N(t) = µ_X(t). (1)

To find the autocorrelation, we observe that since X(t) and N(t) are independent and since N(t) has zero expected value, E[X(t)N(t′)] = E[X(t)]E[N(t′)] = 0. Since R_Y(t, τ) = E[Y(t)Y(t + τ)], we have

R_Y(t, τ) = E[(X(t) + N(t))(X(t + τ) + N(t + τ))] (2)
= E[X(t)X(t + τ)] + E[X(t)N(t + τ)] + E[X(t + τ)N(t)] + E[N(t)N(t + τ)] (3)
= R_X(t, τ) + R_N(t, τ). (4)

Quiz 10.9

From Definition 10.14, X₁, X₂, ... is a stationary random sequence if for all sets of time instants n₁, ..., n_m and time offset k,

f_{X_{n₁},...,X_{n_m}}(x₁, ..., x_m) = f_{X_{n₁+k},...,X_{n_m+k}}(x₁, ..., x_m) (1)

Since the random sequence is iid,

f_{X_{n₁},...,X_{n_m}}(x₁, ..., x_m) = f_X(x₁)f_X(x₂)···f_X(x_m) (2)

Similarly, for time instants n₁ + k, ..., n_m + k,

f_{X_{n₁+k},...,X_{n_m+k}}(x₁, ..., x_m) = f_X(x₁)f_X(x₂)···f_X(x_m) (3)

We can conclude that the iid random sequence is stationary.

Quiz 10.10

We must check whether each function R(τ) meets the conditions of Theorem 10.12:

R(τ) ≥ 0,  R(τ) = R(−τ),  |R(τ)| ≤ R(0) (1)

(1) R₁(τ) = e^{−|τ|} meets all three conditions and thus is valid.

(2) R₂(τ) = e^{−τ²} also is valid.

(3) R₃(τ) = e^{−τ} cos τ is not valid because

R₃(−2π) = e^{2π} cos 2π = e^{2π} > 1 = R₃(0) (2)

(4) R₄(τ) = e^{−τ²} sin τ also cannot be an autocorrelation function because

R₄(π/2) = e^{−π²/4} sin(π/2) = e^{−π²/4} > 0 = R₄(0) (3)


Quiz 10.11

(1) The autocorrelation of Y(t) is

R_Y(t, τ) = E[Y(t)Y(t + τ)] (1)
= E[X(−t)X(−t − τ)] (2)
= R_X(−t − (−t − τ)) = R_X(τ) (3)

Since E[Y(t)] = E[X(−t)] = µ_X, we can conclude that Y(t) is a wide sense stationary process. In fact, we see that by viewing a process backwards in time, we see the same second order statistics.

(2) Since X(t) and Y(t) are both wide sense stationary processes, we can check whether they are jointly wide sense stationary by seeing if R_XY(t, τ) is just a function of τ. In this case,

R_XY(t, τ) = E[X(t)Y(t + τ)] (4)
= E[X(t)X(−t − τ)] (5)
= R_X(t − (−t − τ)) = R_X(2t + τ) (6)

Since R_XY(t, τ) depends on both t and τ, we conclude that X(t) and Y(t) are not jointly wide sense stationary. To see why this is, suppose R_X(τ) = e^{−|τ|} so that samples of X(t) far apart in time have almost no correlation. In this case, as t gets larger, Y(t) = X(−t) and X(t) become less and less correlated.

Quiz 10.12

From the problem statement,

E[X(t)] = E[X(t + 1)] = 0 (1)
E[X(t)X(t + 1)] = 1/2 (2)
Var[X(t)] = Var[X(t + 1)] = 1 (3)

The Gaussian random vector X = [X(t) X(t + 1)]' has covariance matrix and corresponding inverse

C_X = [1 1/2; 1/2 1],  C_X^{−1} = (4/3)[1 −1/2; −1/2 1] (4)

Since

x'C_X^{−1}x = [x₀ x₁] (4/3)[1 −1/2; −1/2 1] [x₀; x₁] = (4/3)(x₀² − x₀x₁ + x₁²) (5)

the joint PDF of X(t) and X(t + 1) is the Gaussian vector PDF

f_{X(t),X(t+1)}(x₀, x₁) = (1/((2π)^{n/2}[det(C_X)]^{1/2})) exp(−(1/2)x'C_X^{−1}x) (6)
= (1/√(3π²)) e^{−(2/3)(x₀² − x₀x₁ + x₁²)} (7)
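Equation (7) can be sanity checked against the general Gaussian vector PDF routine gaussvectorpdf in matcode.zip. The comparison below is a sketch that assumes that function is on the path:

%Check of Quiz 10.12 (sketch; assumes matcode on the path)
CX=[1 0.5; 0.5 1];
x=[0.7; -0.3];
f1=gaussvectorpdf([0;0],CX,x)
f2=exp(-(2/3)*(x(1)^2-x(1)*x(2)+x(2)^2))/sqrt(3*pi^2)  %Equation (7)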


[Figure: plot of M(t) versus t for 0 ≤ t ≤ 100 minutes, with M(t) between 0 and 120.]

Figure 4: Sample path of 100 minutes of the blocking switch of Quiz 10.13.

Quiz 10.13

The simple structure of the switch simulation of Example 10.28 admits a deceptively simple solution in terms of the vector of arrivals A and the vector of departures D. With the introduction of call blocking, we cannot generate these vectors all at once. In particular, when an arrival occurs at time t, we need to know that M(t), the number of ongoing calls, satisfies M(t) < c = 120. Otherwise, when M(t) = c, we must block the call. Call blocking can be implemented by setting the service time of the call to zero so that the call departs as soon as it arrives.

The blocking switch is an example of a discrete event system. The system evolves via a sequence of discrete events, namely arrivals and departures, at discrete time instances. A simulation of the system moves from one time instant to the next by maintaining a chronological schedule of future events (arrivals and departures) to be executed. The program simply executes the event at the head of the schedule. The logic of such a simulation is

1. Start at time t = 0 with an empty system. Schedule the first arrival to occur at S₁, an exponential (λ) random variable.

2. Examine the head-of-schedule event.

   • When the head-of-schedule event is the kth arrival at time t, check the state M(t).

     – If M(t) < c, admit the arrival, increase the system state n by 1, and schedule a departure to occur at time t + S_k, where S_k is an exponential (µ) random variable.

     – If M(t) = c, block the arrival and do not schedule a departure event.

   • If the head-of-schedule event is a departure, reduce the system state n by 1.

3. Delete the head-of-schedule event and go to step 2.

After the head-of-schedule event is completed and any new events (departures in this system) are scheduled, we know the system state cannot change until the next scheduled event. Thus we know that M(t) will stay the same until then. In our simulation, we use the vector t as the set of time instances at which we inspect the system state. Thus for all times t(i) between the current head-of-schedule event and the next, we set m(i) to the current switch state.

The complete program is shown in Figure 5. In most programming languages, it is common to implement the event schedule as a linked list where each item in the list has a data structure indicating an event timestamp and the type of the event. In MATLAB, a simple (but not elegant) way to do this is to maintain two vectors: time is a list of timestamps of scheduled events and event is the list of event types. In this case, event(i)=1 if the ith scheduled event is an arrival, or event(i)=-1 if the ith scheduled event is a departure.

When the program is passed a vector t, the output [m a b] is such that m(i) is the number of ongoing calls at time t(i) while a and b are the number of admits and blocks. The following instructions

t=0:0.1:5000;
[m,a,b]=simblockswitch(10,0.1,120,t);
plot(t,m);

generated a simulation lasting 5,000 minutes. A sample path of the first 100 minutes of that simulation is shown in Figure 4. The 5,000 minute full simulation produced a=49658 admitted calls and b=239 blocked calls. We can estimate the probability a call is blocked as

P̂_b = b/(a + b) = 0.0048. (1)

In Chapter 12, we will learn that the exact blocking probability is given by Equation (12.93), a result known as the "Erlang-B formula." From the Erlang-B formula, we can calculate that the exact blocking probability is P_b = 0.0057. One reason our simulation underestimates the blocking probability is that in a 5,000 minute simulation, roughly the first 100 minutes are needed to load up the switch since the switch is idle when the simulation starts at time t = 0. However, this says that roughly the first two percent of the simulation time was unusual. Thus this would account for only part of the disparity. The rest of the gap between 0.0048 and 0.0057 is that a simulation that includes only 239 blocks is not all that likely to give a very accurate result for the blocking probability.

Note that in Chapter 12, we will learn that the blocking switch is an example of an M/M/c/c queue, a kind of Markov chain. Chapter 12 develops techniques for analyzing and simulating systems described by Markov chains that are much simpler than the discrete event simulation technique shown here. Nevertheless, for very complicated systems, the discrete event simulation is a widely used and often very efficient simulation method.


function [M,admits,blocks]=simblockswitch(lam,mu,c,t);
blocks=0; %total # blocks
admits=0; %total # admits
M=zeros(size(t));
n=0;      % # in system
time=[ exponentialrv(lam,1) ];
event=[ 1 ]; %first event is an arrival
timenow=0;
tmax=max(t);
while (timenow<tmax)
    M((timenow<=t)&(t<time(1)))=n;
    timenow=time(1);
    eventnow=event(1);
    event(1)=[ ]; time(1)=[ ]; % clear current event
    if (eventnow==1) % arrival
        arrival=timenow+exponentialrv(lam,1); % next arrival
        b4arrival=time<arrival;
        event=[event(b4arrival) 1 event(~b4arrival)];
        time=[time(b4arrival) arrival time(~b4arrival)];
        if n<c %call admitted
            admits=admits+1;
            n=n+1;
            depart=timenow+exponentialrv(mu,1);
            b4depart=time<depart;
            event=[event(b4depart) -1 event(~b4depart)];
            time=[time(b4depart) depart time(~b4depart)];
        else
            blocks=blocks+1; %one more block, immed departure
            disp(sprintf('Time %10.3d Admits %10d Blocks %10d',...
                timenow,admits,blocks));
        end
    elseif (eventnow==-1) %departure
        n=n-1;
    end
end

Figure 5: Discrete event simulation of the blocking switch of Quiz 10.13.


Quiz Solutions – Chapter 11

Quiz 11.1

By Theorem 11.2,

µ_Y = µ_X ∫_{−∞}^{∞} h(t) dt = 2∫_{0}^{∞} e^{−t} dt = 2 (1)

Since R_X(τ) = δ(τ), the autocorrelation function of the output is

R_Y(τ) = ∫_{−∞}^{∞} h(u) ∫_{−∞}^{∞} h(v)δ(τ + u − v) dv du = ∫_{−∞}^{∞} h(u)h(τ + u) du (2)

For τ > 0, we have

R_Y(τ) = ∫_{0}^{∞} e^{−u}e^{−τ−u} du = e^{−τ}∫_{0}^{∞} e^{−2u} du = (1/2)e^{−τ} (3)

For τ < 0, we can deduce that R_Y(τ) = (1/2)e^{−|τ|} by symmetry. Just to be safe though, we can double check. For τ < 0,

R_Y(τ) = ∫_{−τ}^{∞} h(u)h(τ + u) du = ∫_{−τ}^{∞} e^{−u}e^{−τ−u} du = (1/2)e^{τ} (4)

Hence,

R_Y(τ) = (1/2)e^{−|τ|} (5)

Quiz 11.2

The expected value of the output is

µ_Y = µ_X Σ_{n=−∞}^{∞} h_n = 0.5(1 + (−1)) = 0 (1)

The autocorrelation of the output is

R_Y[n] = Σ_{i=0}^{1} Σ_{j=0}^{1} h_i h_j R_X[n + i − j] (2)
= 2R_X[n] − R_X[n − 1] − R_X[n + 1] = { 1, n = 0; 0, otherwise } (3)

Since µ_Y = 0, the variance of Y_n is Var[Y_n] = E[Y_n²] = R_Y[0] = 1.


[Figure: four panels showing S_X(f) and R_X(τ) for (a) W = 10 and (b) W = 1000.]

Figure 6: The autocorrelation R_X(τ) and power spectral density S_X(f) for process X(t) in Quiz 11.5.

Quiz 11.3

By Theorem 11.8, Y = [Y₃₃ Y₃₄ Y₃₅]' is a Gaussian random vector since X_n is a Gaussian random process. Moreover, by Theorem 11.5, each Y_n has expected value E[Y_n] = µ_X Σ_{n=−∞}^{∞} h_n = 0. Thus E[Y] = 0. To find the PDF of the Gaussian vector Y, we need to find the covariance matrix C_Y, which equals the correlation matrix R_Y since Y has zero expected value. One way to find R_Y is to observe that R_Y has the Toeplitz structure of Theorem 11.6 and to use Theorem 11.5 to find the autocorrelation function

R_Y[n] = Σ_{i=−∞}^{∞} Σ_{j=−∞}^{∞} h_i h_j R_X[n + i − j]. (1)

Despite the fact that R_X[k] is an impulse, using Equation (1) is surprisingly tedious because we still need to sum over all i and j such that n + i − j = 0.

In this problem, it is simpler to observe that Y = HX where

X = [X₃₀ X₃₁ X₃₂ X₃₃ X₃₄ X₃₅]' (2)

and

H = (1/4)[1 1 1 1 0 0; 0 1 1 1 1 0; 0 0 1 1 1 1]. (3)

In this case, following Theorem 11.7, or by directly applying Theorem 5.13 with µ_X = 0 and A = H, we obtain R_Y = HR_X H'. Since R_X[n] = δ_n, R_X = I, the identity matrix. Thus

C_Y = R_Y = HH' = (1/16)[4 3 2; 3 4 3; 2 3 4]. (4)

It follows (very quickly if you use MATLAB for 3 × 3 matrix inversion) that

C_Y^{−1} = 16[7/12 −1/2 1/12; −1/2 1 −1/2; 1/12 −1/2 7/12]. (5)

Thus, the PDF of Y is

f_Y(y) = (1/((2π)^{3/2}[det(C_Y)]^{1/2})) exp(−(1/2)y'C_Y^{−1}y). (6)

A disagreeable amount of algebra will show det(C_Y) = 3/1024 and that the PDF can be "simplified" to

f_Y(y) = (16/√(6π³)) exp(−8[(7/12)y₃₃² + y₃₄² + (7/12)y₃₅² − y₃₃y₃₄ + (1/6)y₃₃y₃₅ − y₃₄y₃₅]). (7)

Equation (7) shows that one of the nicest features of the multivariate Gaussian distribution is that y'C_Y^{−1}y is a very concise representation of the cross-terms in the exponent of f_Y(y).
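The matrix algebra is quickly verified in MATLAB. This is a sketch with our own variable names:

%Check of Quiz 11.3 (sketch)
H=(1/4)*[1 1 1 1 0 0; 0 1 1 1 1 0; 0 0 1 1 1 1];
CY=H*H'        %equals (1/16)*[4 3 2; 3 4 3; 2 3 4]
CYinv=inv(CY)  %equals 16*[7/12 -1/2 1/12; -1/2 1 -1/2; 1/12 -1/2 7/12]
d=det(CY)      %equals 3/1024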

Quiz 11.4

This quiz is solved using Theorem 11.9 for the case of k = 1 and M = 2. In this case, X_n = [X_{n−1} X_n]' and

R_{X_n} = [R_X[0] R_X[1]; R_X[1] R_X[0]] = [1.1 0.9; 0.9 1.1] (1)

and

R_{X_n X_{n+1}} = E[[X_{n−1}; X_n] X_{n+1}] = [R_X[2]; R_X[1]] = [0.81; 0.9]. (2)

The MMSE linear first order filter for predicting X_{n+1} at time n is the filter h such that the order-reversed filter vector ←h satisfies

←h = R_{X_n}^{−1} R_{X_n X_{n+1}} = [1.1 0.9; 0.9 1.1]^{−1} [0.81; 0.9] = (1/400)[81; 261]. (3)

It follows that the filter is h = [261/400 81/400] and the MMSE linear predictor is

X̂_{n+1} = (81/400)X_{n−1} + (261/400)X_n. (4)

To find the mean square error, one approach is to follow the method of Example 11.13 and to directly calculate

e*_L = E[(X_{n+1} − X̂_{n+1})²]. (5)

This method is workable for this simple problem but becomes increasingly tedious for higher order filters. Instead, we can derive the mean square error for an arbitrary prediction filter h. Since X̂_{n+1} = ←h'X_n,

e*_L = E[(X_{n+1} − ←h'X_n)²] (6)
= E[(X_{n+1} − ←h'X_n)(X_{n+1} − X_n'←h)] (7)

After a bit of algebra, we obtain

e*_L = R_X[0] − 2←h'R_{X_nX_{n+1}} + ←h'R_{X_n}←h (8)

With the substitution ←h = R_{X_n}^{−1}R_{X_nX_{n+1}}, we obtain

e*_L = R_X[0] − R_{X_nX_{n+1}}'R_{X_n}^{−1}R_{X_nX_{n+1}} (9)
= R_X[0] − ←h'R_{X_nX_{n+1}} (10)

Note that this is essentially the same result as Theorem 9.7 with Y = X_n, X = X_{n+1} and â' = ←h'. It is noteworthy that the result is derived in a much simpler way in the proof of Theorem 9.7 by using the orthogonality property of the LMSE estimator.

In any case, the mean square error is

e*_L = R_X[0] − ←h'R_{X_nX_{n+1}} = 1.1 − (1/400)[81 261][0.81; 0.9] = 506/1451 = 0.3487. (11)

Recalling that the blind estimate would yield a mean square error of Var[X] = 1.1, we see that observing X_{n−1} and X_n improves the accuracy of our prediction of X_{n+1}.
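Numerically, the filter and its mean square error follow from two matrix operations. A sketch (our own variable names):

%Check of Quiz 11.4 (sketch)
RXn=[1.1 0.9; 0.9 1.1];
RXnXn1=[0.81; 0.9];
hrev=RXn\RXnXn1     %reversed filter: [81; 261]/400
eL=1.1-hrev'*RXnXn1 %mean square error = 0.3487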

Quiz 11.5

(1) By Theorem 11.13(b), the average power of X(t) is

E[X²(t)] = ∫_{−∞}^{∞} S_X(f) df = ∫_{−W}^{W} (5/W) df = 10 Watts (1)

(2) The autocorrelation function is the inverse Fourier transform of S_X(f). Consulting Table 11.1, we note that

S_X(f) = 10 (1/2W) rect(f/2W) (2)

It follows that the inverse transform of S_X(f) is

R_X(τ) = 10 sinc(2Wτ) = 10 sin(2πWτ)/(2πWτ) (3)

(3) For W = 10 Hz and W = 1 kHz, graphs of S_X(f) and R_X(τ) appear in Figure 6.


Quiz 11.6

In a sampled system, the discrete time impulse δ[n] has a flat discrete Fourier transform. That is, if R_X[n] = 10δ[n], then

S_X(φ) = Σ_{n=−∞}^{∞} 10δ[n]e^{−j2πφn} = 10 (1)

Thus, R_X[n] = 10δ[n]. (This quiz is really lame!)

Quiz 11.7

Since Y(t) = X(t − t₀),

R_XY(t, τ) = E[X(t)Y(t + τ)] = E[X(t)X(t + τ − t₀)] = R_X(τ − t₀) (1)

We see that R_XY(t, τ) = R_XY(τ) = R_X(τ − t₀). From Table 11.1, we recall the property that g(τ − τ₀) has Fourier transform G(f)e^{−j2πfτ₀}. Thus the Fourier transform of R_XY(τ) = R_X(τ − t₀) = g(τ − t₀) is

S_XY(f) = S_X(f)e^{−j2πft₀}. (2)

Quiz 11.8

We solve this quiz using Theorem 11.17. First we need some preliminary facts. Let a₀ = 5,000 so that

R_X(τ) = (1/a₀) a₀e^{−a₀|τ|}. (1)

Consulting the Fourier transforms in Table 11.1, we see that

S_X(f) = (1/a₀) · 2a₀²/(a₀² + (2πf)²) = 2a₀/(a₀² + (2πf)²) (2)

The RC filter has impulse response h(t) = a₁e^{−a₁t}u(t), where u(t) is the unit step function and a₁ = 1/RC, where RC = 10⁻⁴ is the filter time constant. From Table 11.1,

H(f) = a₁/(a₁ + j2πf) (3)

(1) By Theorem 11.17,

S_XY(f) = H(f)S_X(f) = 2a₀a₁/([a₁ + j2πf][a₀² + (2πf)²]). (4)

(2) Again by Theorem 11.17,

S_Y(f) = H*(f)S_XY(f) = |H(f)|²S_X(f). (5)

Note that

|H(f)|² = H(f)H*(f) = [a₁/(a₁ + j2πf)][a₁/(a₁ − j2πf)] = a₁²/(a₁² + (2πf)²) (6)

Thus,

S_Y(f) = |H(f)|²S_X(f) = 2a₀a₁²/([a₁² + (2πf)²][a₀² + (2πf)²]) (7)

(3) To find the average power at the filter output, we can either use basic calculus and calculate ∫_{−∞}^{∞} S_Y(f) df directly or we can find R_Y(τ) as an inverse transform of S_Y(f). Using partial fractions and the Fourier transform table, the latter method is actually less algebra. In particular, some algebra will show that

S_Y(f) = K₀/(a₀² + (2πf)²) + K₁/(a₁² + (2πf)²) (8)

where

K₀ = 2a₀a₁²/(a₁² − a₀²),  K₁ = −2a₀a₁²/(a₁² − a₀²). (9)

Thus,

S_Y(f) = (K₀/(2a₀²)) · 2a₀²/(a₀² + (2πf)²) + (K₁/(2a₁²)) · 2a₁²/(a₁² + (2πf)²). (10)

Consulting with Table 11.1, we see that

R_Y(τ) = (K₀/(2a₀²)) a₀e^{−a₀|τ|} + (K₁/(2a₁²)) a₁e^{−a₁|τ|} (11)

Substituting the values of K₀ and K₁, we obtain

R_Y(τ) = (a₁²e^{−a₀|τ|} − a₀a₁e^{−a₁|τ|})/(a₁² − a₀²). (12)

The average power of the Y(t) process is

R_Y(0) = a₁/(a₁ + a₀) = 2/3. (13)

Note that the input signal has average power R_X(0) = 1. Since the RC filter has a 3 dB bandwidth of 10,000 rad/sec and the signal X(t) has most of its signal energy below 5,000 rad/sec, the output signal has almost as much power as the input.
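The closed form R_Y(0) = 2/3 can also be confirmed by numerically integrating S_Y(f). A sketch using a simple trapezoidal sum:

%Numerical check of R_Y(0) in Quiz 11.8 (sketch)
a0=5000; a1=10^4;
f=-10^6:10:10^6;
SY=2*a0*a1^2./((a1^2+(2*pi*f).^2).*(a0^2+(2*pi*f).^2));
power=trapz(f,SY) %approximately 2/3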


Quiz 11.9

This quiz implements an example of Equations (11.146) and (11.147) for a system in which we filter Y(t) = X(t) + N(t) to produce an optimal linear estimate of X(t). The solution to this quiz is just to find the filter Ĥ(f) using Equation (11.146) and to calculate the mean square error e*_L using Equation (11.147).

Comment: Since the text omitted the derivations of Equations (11.146) and (11.147), we note that Example 10.24 showed that

R_Y(τ) = R_X(τ) + R_N(τ),  R_YX(τ) = R_X(τ). (1)

Taking Fourier transforms, it follows that

S_Y(f) = S_X(f) + S_N(f),  S_YX(f) = S_X(f). (2)

Now we can go on to the quiz, at peace with the derivations.

(1) Since µ_N = 0, R_N(0) = Var[N] = 1. This implies

R_N(0) = ∫_{−∞}^{∞} S_N(f) df = ∫_{−B}^{B} N₀ df = 2N₀B (3)

Thus N₀ = 1/(2B). Because the noise process N(t) has constant power R_N(0) = 1, decreasing the single-sided bandwidth B increases the power spectral density of the noise over frequencies |f| < B.

(2) Since R_X(τ) = sinc(2Wτ), where W = 5,000 Hz, we see from Table 11.1 that

S_X(f) = (1/10⁴) rect(f/10⁴). (4)

The noise power spectral density can be written as

S_N(f) = N₀ rect(f/2B) = (1/2B) rect(f/2B). (5)

From Equation (11.146), the optimal filter is

Ĥ(f) = S_X(f)/(S_X(f) + S_N(f)) = [(1/10⁴) rect(f/10⁴)] / [(1/10⁴) rect(f/10⁴) + (1/2B) rect(f/2B)]. (6)

(3) We produce the output X̂(t) by passing the noisy signal Y(t) through the filter Ĥ(f). From Equation (11.147), the mean square error of the estimate is

e*_L = ∫_{−∞}^{∞} [S_X(f)S_N(f)/(S_X(f) + S_N(f))] df. (7)

To evaluate the MSE e*_L, we need to know whether B ≤ W. Since the problem asks us to find the largest possible B, let's suppose B ≤ W. We can go back and consider the case B > W later. When B ≤ W, the MSE is

e*_L = ∫_{−B}^{B} [(1/10⁴)(1/2B)]/[(1/10⁴) + (1/2B)] df = (1/10⁴)/[(1/10⁴) + (1/2B)] = 1/(1 + 5,000/B) (8)

To obtain MSE e*_L ≤ 0.05 requires B ≤ 5,000/19 = 263.16 Hz.

Although this completes the solution to the quiz, what is happening may not be obvious. The noise power is always Var[N] = 1 Watt, for all values of B. As B is decreased, the PSD S_N(f) becomes increasingly tall, but only over a bandwidth B that is decreasing. Thus as B decreases, the filter Ĥ(f) makes an increasingly deep and narrow notch at frequencies |f| ≤ B. Two examples of the filter Ĥ(f) are shown in Figure 7. As B shrinks, the filter suppresses less of the signal of X(t). The result is that the MSE goes down.

Finally, we note that we can choose B very large and also achieve MSE e*_L = 0.05. In particular, when B > W = 5,000, S_N(f) = 1/2B over frequencies |f| < W. In this case, the Wiener filter Ĥ(f) is an ideal (flat) lowpass filter

Ĥ(f) = { [(1/10⁴)]/[(1/10⁴) + (1/2B)], |f| < 5,000; 0, otherwise. } (9)

Thus increasing B spreads the constant 1 watt of power of N(t) over more bandwidth. The Wiener filter removes the noise that is outside the band of the desired signal. The mean square error is

e*_L = ∫_{−5000}^{5000} [(1/10⁴)(1/2B)]/[(1/10⁴) + (1/2B)] df = (1/2B)/[(1/10⁴) + (1/2B)] = 1/(B/5,000 + 1) (10)

In this case, B ≥ 9.5 × 10⁴ guarantees e*_L ≤ 0.05.
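The two regimes can be visualized by plotting the MSE as a function of B. The lines below are a sketch using the closed forms just derived:

%MSE of Quiz 11.9 versus bandwidth B (sketch)
B=10:10:10^5; W=5000;
eL=(B<=W)./(1+W./B) + (B>W)./(B/W+1); %Equations (8) and (10)
semilogx(B,eL); xlabel('B'); ylabel('e_L^*');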

Quiz 11.10

It is fairly straightforward to find S_X(φ) and S_Y(φ). The only thing to keep in mind is to use fftc to transform the autocorrelation R_X[n] into the power spectral density S_X(φ). The following MATLAB program generates and plots the functions shown in Figure 8.


[Figure: two plots of the Wiener filter magnitude H(f) versus f, for B = 500 and B = 2500.]

Figure 7: Wiener filter for Quiz 11.9.

%mquiz11.m
N=32;
rx=[2 4 2]; SX=fftc(rx,N); %autocorrelation and PSD
stem(0:N-1,abs(SX));
xlabel('n');ylabel('S_X(n/N)');
h2=0.5*[1 1]; H2=fft(h2,N); %impulse/filter response: M=2
SY2=SX.*((abs(H2)).^2);
figure; stem(0:N-1,abs(SY2)); %PSD of Y for M=2
xlabel('n');ylabel('S_{Y_2}(n/N)');
h10=0.1*ones(1,10); H10=fft(h10,N); %impulse/filter response: M=10
SY10=SX.*((abs(H10)).^2);
figure; stem(0:N-1,abs(SY10)); %PSD of Y for M=10
xlabel('n');ylabel('S_{Y_{10}}(n/N)');

Relative to M = 2, when M = 10, the filter H(φ) filters out almost all of the high frequency components of X(t). In the context of Example 11.26, the low pass moving average filter for M = 10 removes the high frequency components and results in a filter output that varies very slowly.

As an aside, note that the vectors SX, SY2 and SY10 in mquiz11 should all be real-valued vectors. However, the finite numerical precision of MATLAB results in tiny imaginary parts. Although these imaginary parts have no computational significance, they tend to confuse the stem function. Hence, we generate stem plots of the magnitude of each power spectral density.


[Figure: three stem plots, of S_X(n/N), S_{Y₂}(n/N), and S_{Y₁₀}(n/N), versus n for 0 ≤ n ≤ 31.]

Figure 8: For Quiz 11.10, graphs of S_X(n/N), S_{Y₂}(n/N) for M = 2, and S_{Y₁₀}(n/N) for M = 10 using an N = 32 point DFT.


Quiz Solutions – Chapter 12

Quiz 12.1

The system has two states depending on whether the previous packet was received in error. From the problem statement, we are given the conditional probabilities

P[X_{n+1} = 0|X_n = 0] = 0.99,  P[X_{n+1} = 1|X_n = 1] = 0.9 (1)

Since each X_n must be either 0 or 1, we can conclude that

P[X_{n+1} = 1|X_n = 0] = 0.01,  P[X_{n+1} = 0|X_n = 1] = 0.1 (2)

These conditional probabilities correspond to the Markov chain with states 0 and 1, self-transition probabilities 0.99 and 0.9, cross transitions 0.01 (from 0 to 1) and 0.1 (from 1 to 0), and transition matrix

P = [0.99 0.01; 0.10 0.90] (3)

Quiz 12.2

From the problem statement, the Markov chain has states 0, 1, 2 and transition matrix

P = [0.4 0.6 0; 0.2 0.6 0.2; 0 0.6 0.4] (1)

The eigenvalues of P are

λ₁ = 0,  λ₂ = 0.4,  λ₃ = 1 (2)

We can diagonalize P into

P = S^{−1}DS = [−0.6 0.5 1; 0.4 0 1; −0.6 −0.5 1] diag(λ₁, λ₂, λ₃) [−0.5 1 −0.5; 1 0 −1; 0.2 0.6 0.2] (3)

where s_i, the ith row of S, is the left eigenvector of P satisfying s_iP = λ_is_i. Algebra will verify that the n-step transition matrix is

Pⁿ = S^{−1}DⁿS = [0.2 0.6 0.2; 0.2 0.6 0.2; 0.2 0.6 0.2] + (0.4)ⁿ[0.5 0 −0.5; 0 0 0; −0.5 0 0.5] (4)
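The diagonalization is easy to reproduce with MATLAB's eig. A sketch (our own variable names):

%Check of Quiz 12.2 (sketch)
P=[0.4 0.6 0; 0.2 0.6 0.2; 0 0.6 0.4];
[S,D]=eig(P') %columns of S are left eigenvectors of P
Pn=P^10       %each row is already near [0.2 0.6 0.2]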

Quiz 12.3

The Markov chain describing the factory status has states 0, 1, 2 with transition probabilities P₀₀ = 0.9, P₀₁ = 0.1, P₁₂ = 1, and P₂₀ = 1. The corresponding state transition matrix is

P = [0.9 0.1 0; 0 0 1; 1 0 0] (1)

With π = [π₀ π₁ π₂], the system of equations π = πP yields π₁ = 0.1π₀ and π₂ = π₁. This implies

π₀ + π₁ + π₂ = π₀(1 + 0.1 + 0.1) = 1 (2)

It follows that the limiting state probabilities are

π₀ = 5/6,  π₁ = 1/12,  π₂ = 1/12. (3)

Quiz 12.4

The communicating classes are

C₁ = {0, 1},  C₂ = {2, 3},  C₃ = {4, 5, 6} (1)

The states in C₁ and C₃ are aperiodic. The states in C₂ have period 2. Once the system enters a state in C₁, the class C₁ is never left. Thus the states in C₁ are recurrent. That is, C₁ is a recurrent class. Similarly, the states in C₃ are recurrent. On the other hand, the states in C₂ are transient. Once the system exits C₂, the states in C₂ are never reentered.

are never reentered.

Quiz 12.5

At any time t, the state n can take on the values 0, 1, 2, .... The state transition probabilities are

P_{n−1,n} = P[K > n|K > n − 1] = P[K > n]/P[K > n − 1] (1)

P_{n−1,0} = P[K = n|K > n − 1] = P[K = n]/P[K > n − 1] (2)

The Markov chain resembles a countdown: from state 0 the system jumps to state k − 1 with probability P[K = k] for k = 1, 2, ..., and from each state n ≥ 1 the system moves to state n − 1 with probability 1.

The stationary probabilities satisfy

π₀ = π₀P[K = 1] + π₁, (3)
π₁ = π₀P[K = 2] + π₂, (4)
⋮
π_{k−1} = π₀P[K = k] + π_k,  k = 1, 2, ... (5)

From Equation (3), we obtain

π₁ = π₀(1 − P[K = 1]) = π₀P[K > 1] (6)

Similarly, Equation (4) implies

π₂ = π₁ − π₀P[K = 2] = π₀(P[K > 1] − P[K = 2]) = π₀P[K > 2] (7)

This suggests that π_k = π₀P[K > k]. We verify this pattern by showing that π_k = π₀P[K > k] satisfies Equation (5):

π₀P[K > k − 1] = π₀P[K = k] + π₀P[K > k]. (8)

When we apply Σ_{k=0}^{∞} π_k = 1, we obtain π₀ Σ_{k=0}^{∞} P[K > k] = 1. From Problem 2.5.11, we recall that Σ_{k=0}^{∞} P[K > k] = E[K]. This implies

π_n = P[K > n]/E[K] (9)

This Markov chain models repeated random countdowns. The system state is the time until the counter expires. When the counter expires, the system is in state 0, and we randomly reset the counter to a new value K = k and then we count down k units of time. Since we spend one unit of time in each state, including state 0, we have k − 1 units of time left after the state 0 counter reset. If we have a random variable W such that the PMF of W satisfies P_W(n) = π_n, then W has a discrete PMF representing the remaining time of the counter at a time in the distant future.

Quiz 12.6

(1) By inspection, the number of transitions needed to return to state 0 is always a multiple of 2. Thus the period of state 0 is d = 2.

(2) To find the stationary probabilities, we solve the system of equations π = πP and Σ_{i=0}^{3} π_i = 1:

π₀ = (3/4)π₁ + (1/4)π₃ (1)
π₁ = (1/4)π₀ + (1/4)π₂ (2)
π₂ = (1/4)π₁ + (3/4)π₃ (3)
1 = π₀ + π₁ + π₂ + π₃ (4)

Solving the second and third equations for π₂ and π₃ yields

π₂ = 4π₁ − π₀,  π₃ = (4/3)π₂ − (1/3)π₁ = 5π₁ − (4/3)π₀ (5)

Substituting π₃ back into the first equation yields

π₀ = (3/4)π₁ + (1/4)π₃ = (3/4)π₁ + (5/4)π₁ − (1/3)π₀ (6)

This implies π₁ = (2/3)π₀. It follows from the first and second equations that π₂ = (5/3)π₀ and π₃ = 2π₀. Lastly, we choose π₀ so the state probabilities sum to 1:

1 = π₀ + π₁ + π₂ + π₃ = π₀(1 + 2/3 + 5/3 + 2) = (16/3)π₀ (7)

It follows that the state probabilities are

π₀ = 3/16,  π₁ = 2/16,  π₂ = 5/16,  π₃ = 6/16 (8)

(3) Since the system starts in state 0 at time 0, we can use Theorem 12.14 to find the limiting probability that the system is in state 0 at time nd:

lim_{n→∞} P₀₀(nd) = dπ₀ = 3/8 (9)
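The stationary vector can be double checked by solving π(P − I) = 0 numerically. A sketch, in which the transition matrix is reconstructed from the balance equations above (each even state goes to state 1 with probability 1/4 and to state 3 with probability 3/4; each odd state goes to its lower-probability neighbor accordingly):

%Check of Quiz 12.6 (sketch; P reconstructed from the balance equations)
P=[0 1/4 0 3/4; 3/4 0 1/4 0; 0 1/4 0 3/4; 1/4 0 3/4 0];
v=null(P'-eye(4)); %left eigenvector of P for eigenvalue 1
piv=(v/sum(v))'    %normalized: [3 2 5 6]/16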

Quiz 12.7

The Markov chain has the same structure as that in Example 12.22. The only difference is the modified transition rates: from state 0 the chain always moves to state 1, and from each state n ≥ 1 the chain advances to state n + 1 with probability (n/(n + 1))^α and returns to state 0 with probability 1 − (n/(n + 1))^α.

The event T₀₀ > n occurs if the system reaches state n before returning to state 0, which occurs with probability

P[T₀₀ > n] = 1 × (1/2)^α × (2/3)^α × ··· × ((n − 1)/n)^α = (1/n)^α. (1)

Thus the CDF of T₀₀ satisfies F_{T₀₀}(n) = 1 − P[T₀₀ > n] = 1 − 1/n^α. To determine whether state 0 is recurrent, we observe that for all α > 0

P[V₀₀] = lim_{n→∞} F_{T₀₀}(n) = lim_{n→∞} 1 − 1/n^α = 1. (2)

Thus state 0 is recurrent for all α > 0. Since the chain has only one communicating class, all states are recurrent. (We also note that if α = 0, then all states are transient.)

To determine whether the chain is null recurrent or positive recurrent, we need to calculate E[T₀₀]. In Example 12.24, we did this by deriving the PMF P_{T₀₀}(n). In this problem, it will be simpler to use the result of Problem 2.5.11, which says that Σ_{k=0}^{∞} P[K > k] = E[K] for any non-negative integer-valued random variable K. Applying this result, the expected time to return to state 0 is

E[T₀₀] = Σ_{n=0}^{∞} P[T₀₀ > n] = 1 + Σ_{n=1}^{∞} 1/n^α. (3)

For 0 < α ≤ 1, 1/n^α ≥ 1/n and it follows that

E[T₀₀] ≥ 1 + Σ_{n=1}^{∞} 1/n = ∞. (4)

We conclude that the Markov chain is null recurrent for 0 < α ≤ 1. On the other hand, for α > 1,

E[T₀₀] = 2 + Σ_{n=2}^{∞} 1/n^α. (5)

Note that for all n ≥ 2,

1/n^α ≤ ∫_{n−1}^{n} dx/x^α (6)

This implies

E[T₀₀] ≤ 2 + Σ_{n=2}^{∞} ∫_{n−1}^{n} dx/x^α = 2 + ∫_{1}^{∞} dx/x^α = 2 + [x^{−α+1}/(−α + 1)]₁^∞ = 2 + 1/(α − 1) < ∞ (7)

Thus for all α > 1, the Markov chain is positive recurrent.

Quiz 12.8

The number of customers in the "friendly" store is given by a Markov chain on states 0, 1, 2, ... in which the state increases by 1 with probability p and, from any state i ≥ 1, decreases by 1 with probability (1 − p)q. In this chain, we note that (1 − p)q is the probability that no new customer arrives, an existing customer gets one unit of service and then departs the store.

By applying Theorem 12.13 with state space partitioned between S = {0, 1, ..., i} and S′ = {i + 1, i + 2, ...}, we see that for any state i ≥ 0,

π_i p = π_{i+1}(1 − p)q. (1)

This implies

π_{i+1} = [p/((1 − p)q)]π_i. (2)

Since Equation (2) holds for i = 0, 1, ..., we have that π_i = π₀α^i where

α = p/((1 − p)q). (3)

Requiring the state probabilities to sum to 1, we have that for α < 1,

Σ_{i=0}^{∞} π_i = π₀ Σ_{i=0}^{∞} α^i = π₀/(1 − α) = 1. (4)

Thus for α < 1, the limiting state probabilities are

π_i = (1 − α)α^i,  i = 0, 1, 2, ... (5)

In addition, for α ≥ 1 or, equivalently, p ≥ q/(1 + q), the limiting state probabilities do not exist.

Quiz 12.9

The continuous time Markov chain describing the processor has states 0, 1, 2, 3, 4. New tasks arrive at rate 2 per msec (raising the state by 1, up to state 4), tasks complete at rate 3 per msec (lowering the state by 1), and from every nonzero state the processor reboots at rate 0.1 per msec, returning the system to state 0.

Note that q₁₀ = 3.1 since the task completes at rate 3 per msec and the processor reboots at rate 0.1 per msec and the rate to state 0 is the sum of those two rates. From the Markov chain, we obtain the following useful equations for the stationary distribution.

5.1p₁ = 2p₀ + 3p₂
5.1p₂ = 2p₁ + 3p₃
5.1p₃ = 2p₂ + 3p₄
3.1p₄ = 2p₃

We can solve these equations by working backward and solving for p₄ in terms of p₃, p₃ in terms of p₂ and so on, yielding

p₄ = (20/31)p₃,  p₃ = (620/981)p₂,  p₂ = (19620/31431)p₁,  p₁ = (628,620/1,014,381)p₀ (1)

Applying p₀ + p₁ + p₂ + p₃ + p₄ = 1 yields p₀ = 1,014,381/2,443,401 and the stationary probabilities are

p₀ = 0.4151,  p₁ = 0.2573,  p₂ = 0.1606,  p₃ = 0.1015,  p₄ = 0.0655 (2)
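The backward substitution can be checked by solving the full set of balance equations numerically. A sketch, in which Q is the generator matrix built from the rates above:

%Check of Quiz 12.9 (sketch)
Q=[-2 2 0 0 0; 3.1 -5.1 2 0 0; 0.1 3 -5.1 2 0; ...
   0.1 0 3 -5.1 2; 0.1 0 0 3 -3.1];
p=null(Q');   %solve p*Q=0
p=(p/sum(p))' %equals [0.4151 0.2573 0.1606 0.1015 0.0655]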

Quiz 12.10

The M/M/c/∞ queue has a birth-death Markov chain in which arrivals occur at rate λ in every state, the departure rate from state n is nµ for n = 1, 2, ..., c, and the departure rate is cµ for all states n ≥ c.

From the Markov chain, the stationary probabilities must satisfy

p_n = { (ρ/n)p_{n−1}, n = 1, 2, ..., c; (ρ/c)p_{n−1}, n = c + 1, c + 2, ... } (1)

It is straightforward to show that this implies

p_n = { p₀ρⁿ/n!, n = 1, 2, ..., c; p₀(ρ/c)^{n−c}ρ^c/c!, n = c + 1, c + 2, ... } (2)

The requirement that Σ_{n=0}^{∞} p_n = 1 yields

p₀ = [Σ_{n=0}^{c} ρⁿ/n! + (ρ^c/c!)(ρ/c)/(1 − ρ/c)]^{−1} (3)
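For numerical values of ρ and c with ρ/c < 1, Equation (3) takes one line of MATLAB. A sketch with example values of our own choosing (this code is not part of matcode.zip):

%p0 of the M/M/c/infinity queue in Quiz 12.10 (sketch)
rho=10; c=15; %example load and number of servers, rho/c<1
n=0:c;
p0=1/(sum((rho.^n)./factorial(n)) + (rho^c/factorial(c))*(rho/c)/(1-rho/c))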



bignomialpmf

y=bignomialpmf(n,p,x) Input: n and p are the parameters of a binomial (n, p) random variable X , x is a vector of possible sample values Output: y is a vector with y(i) = PX (x(i)). Comment: This function should always produce the same output as binomialpmf(n,p,x); however, the function calculates the logarithm of the probability and thismay lead to small numerical innaccuracy.

function pmf=bignomialpmf(n,p,x) %binomial(n,p) rv X, %input = vector x %output= vector pmf: pmf(i)=Prob[X=x(i)] k=(0:n-1)’; a=log((p/(1-p))*((n-k)./(k+1))); L0=n*log(1-p); L=[L0; L0+cumsum(a)]; pb=exp(L); % pb=[P[X=0] ... P[X=n]]ˆt x=x(:); okx =(x>=0).*(x<=n).*(x==floor(x)); x=okx.*x; pmf=okx.*pb(x+1);

binomialcdf

y=binomialcdf(n,p,x) Input: n and p are the parameters of a binomial (n, p) random variable X , x is a vector of possible sample values Output: y is a vector with y(i) = FX (x(i)).

function cdf=binomialcdf(n,p,x) %Usage: cdf=binomialcdf(n,p,x) %For binomial(n,p) rv X, %and input vector x, output is %vector cdf: cdf(i)=P[X<=x(i)] x=floor(x(:)); %for noninteger x(i) allx=0:max(x); %calculate cdf from 0 to max(x) allcdf=cumsum(binomialpmf(n,p,allx)); okx=(x>=0); %x(i) < 0 are zero-prob values x=(okx.*x); %set zero-prob x(i)=0 cdf= okx.*allcdf(x+1); %zero for zero-prob x(i)

3

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

hot springs.m) Input: n and p are the parameters of a binomial random variable X . % pb=[P[X=0] .ˆ2) +(ny.y.rho.ˆ2) .r).*pb(x+1). pb=((1-pp)ˆn)*cumprod([1 ip]).muY. end pb=pb(:).. P[X=n]]ˆt x=x(:).p) samples r=rand(m..x) Input: n and p are the parameters of a binomial (n. f=exp(-((nx. bivariategausspdf function f=bivariategausspdf(muX.muY.sigmaX.y) %Usage: f=bivariategausspdf(muX.sigmaX.*(x<=n).*ny))/(2*(1-rhoˆ2))).rho) PDF nx=(x-muX)/sigmaX. cdf=binomialcdf(n.muY.sigmaY.sigmaX.sigmaY. 4 Address:104 pine meadows loop.*(x==floor(x)). pmf=okx. us (United States) Zip Code:71901 .p.1). else pp=1-p.(2*rho*nx.0:n).sigmaY.muY.p.x.sigmaY.rho of the bivariate Gaussian PDF.Name:joey iwatsuru Email:joeyiwat@yahoo. Input: Scalar parameters muX. binomialrv x=binomialrv(n. if pp < p pb=fliplr(pb).p) rv X. p) random variable X . m is a positive integer Output: x is a vector of m independent samples of random variable X function x=binomialrv(n.sigmaX./(i+1))*(pp/(1-pp)). scalars x and y. okx =(x>=0).x) %binomial(n. Output: f the value of the bivariate Gaussian PDF at x.p.y) %Evaluate the bivariate Gaussian (muX. %input = vector x %output= vector pmf: pmf(i)=Prob[X=x(i)] if p<0.rho. end i=0:n-1.p. x=okx.p.x. AR.*x. function pmf=binomialpmf(n. x is a vector of possible sample values Output: y is a vector with y(i) = PX (x(i)).5 pp=p.m) % m binomial(n. ny=(y-muY)/sigmaY.com Phone:5017621195 binomialpmf y=binomialpmf(n. f=f/(2*pi*sigmax*sigmay*sqrt(1-rhoˆ2)). ip= ((n-i). x=count(cdf.

l.l.*(x<=l). duniformrv x=duniformrv(k.com Phone:5017621195 duniformcdf y=duniformcdf(k. x=k+count(cdf. l) random variable X .x) Input: k and l are the parameters of a discrete uniform (k. m is a positive integer Output: x is a vector of m independent samples of random variable X function x=duniformrv(k.x) %Usage: cdf=duniformcdf(k. function pmf=duniformpmf(k.l.l.m) Input: k and l are the parameters of a discrete uniform (k. AR. hot springs. function cdf=duniformcdf(k. x is a vector of possible sample values Output: y is a vector with y(i) = PX (x(i)).Name:joey iwatsuru Email:joeyiwat@yahoo.l. output is % vector cdf: cdf(i)=Prob[X<=x(i)] x=floor(x(:)).l) rv X % and input vector x. cdf=duniformcdf(k. duniformpmf y=duniformpmf(k. %set zero prob x(i)=k x=((1-okx)*k)+(okx.l) rv X. %input = vector x %output= vector pmf: pmf(i)=Prob[X=x(i)] pmf= (x>=k).k:l). %x_i < k are zero prob values okx=(x>=k). l) random variable X .*(x==floor(x)).l.l) random variable r=rand(m. x is a vector of possible sample values Output: y is a vector with y(i) = FX (x(i)).*allcdf(x-k+1). %x(i)=0 for zero prob x(i) cdf= okx. %for noninteger x_i allx=k:max(x).r). l) random variable X .x) % For discrete uniform (k. %allcdf = cdf values from 0 to max(x) allcdf=cumsum(duniformpmf(k.1). us (United States) Zip Code:71901 . 5 Address:104 pine meadows loop.l.m) %returns m samples of a discrete %uniform (k.x) Input: k and l are the parameters of a discrete uniform (k.x) %discrete uniform(k. pmf=pmf(:)/(l-k+1).allx)).*x).l.l.

2). vector x Output: Vector y such that yi = FX (xi ) = 1 − e−λxi .lambda. vector x Output: Vector y such that yi = f X (xi ) = λn xin−1 e−λxi /(n − 1)!.. %Usage: pb=erlangb(rho.lambda.c).m) Input: n and lambda are the parameters of an Erlang random variable X .x) Input: n and lambda are the parameters of an Erlang random variable X .lambda. and the number of servers c of an M/M/c/c queue. exponentialcdf y=exponentialcdf(lambda. us (United States) Zip Code:71901 .c) %returns the Erlang-B blocking %probability for sn M/M/c/c %queue with load rho pn=exp(-rho)*poissonpmf(rho.0-poissoncdf(lambda*x.ˆ(n-1)).n-1).lambda.m) y=exponentialrv(lambda. integer m Output: Length m vector x such that each xi is a sample of X function x=erlangrv(n.c) Input: Offered load rho (ρ = λ/µ). erlangcdf y=erlangcdf(n.*exp(-lambda*x). function f=erlangpdf(n. x=sum(reshape(y.m.x) Input: lambda is the parameter of an exponential random variable X .x) f=((lambdaˆn)/factorial(n)). function F=exponentialcdf(lambda. function F=erlangcdf(n. Output: pb. *(x.lambda.n). erlangrv x=erlangrv(n. AR.lambda.com Phone:5017621195 erlangb pb=erlangb(rho.0-exp(-lambda*x).Name:joey iwatsuru Email:joeyiwat@yahoo.x) Input: n and lambda are the parameters of an Erlang random variable X .x) F=1. erlangpdf y=erlangpdf(n. hot springs. the blocking probability of the queue function pb=erlangb(rho.x) F=1. vector x Output: Vector y such that yi = FX (xi ). pb=pn(c+1)/sum(pn)..m*n).0:c). 6 Address:104 pine meadows loop.

SY and probability grid PXY describing the ﬁnite random variables X and Y . pxi]. end finitecoeff rho=finitecoeff(SX. the correlation coefﬁcient of X and Y function rho=finitecoeff(SX. Output: rho.PXY).PXY). 7 Address:104 pine meadows loop.com Phone:5017621195 exponentialpdf y=exponentialpdf(lambda.1)).SY.sx(2). ey=finiteexp(SY. function f=exponentialpdf(lambda. %Usage: rho=finitecoeff(SX.m) x=-(1/lambda)*log(1-rand(m.PXY).x) f=lambda*exp(-lambda*x).} % vector px of probabilities % px(i)=P[X=sx(i)] % Output is the vector % cdf: cdf(i)=P[X=x(i)] cdf=[].SY.x) Input: sx is the range of a ﬁnite random variable X .PXY) Input: Grids SX. AR.*SY. px is the corresponding probability assignment.PXY). finitecdf y=finitecdf(sx. for i=1:length(x) pxi= sum(p(find(s<=x(i)))).. hot springs. .PXY). cdf=[cdf.p. function cdf=finitecdf(s. vy=finitevar(SY.p. x is a vector of possible sample values Output: y is a vector with y(i) = FX (x(i)).x) Input: lambda is the parameter of an exponential random variable X . rho=(R-ex*ey)/sqrt(vx*vy).PXY) %Calculate the correlation coefficient rho of %finite random variables X and Y ex=finiteexp(SX. R=finiteexp(SX.*(x>=0).Name:joey iwatsuru Email:joeyiwat@yahoo. integer m Output: Length m vector x such that each xi is a sample of X function x=exponentialrv(lambda. vector x Output: Vector y such that yi = f X (xi ) = λe−λxi . exponentialrv x=exponentialrv(lambda.SY.x) % finite random variable X: % vector sx of sample space % elements {sx(1).PXY). f=f. us (United States) Zip Code:71901 ..m) Input: lambda is the parameter of an exponential random variable X . vx=finitevar(SX.

end finiterv x=finiterv(sx..p) rv %s=s(:).p.Name:joey iwatsuru Email:joeyiwat@yahoo. hot springs. the covariance of X and Y.PXY).px. SY. cdf=cumsum(p). ey=finiteexp(SY. r=rand(m.PXY).SY. finiteexp ex=finiteexp(sx.m) Input: sx is the range of a ﬁnite random variable X . px is the corresponding probability assignment.x) Input: sx is the range of a ﬁnite random variable X .PXY).m) % returns m samples % of finite (s. function covxy=finitecov(SX.PXY) %returns the covariance of %finite random variables X and Y %given by grids SX.PXY) Input: Grids SX.sx(2). 8 Address:104 pine meadows loop..} % vector px of probabilities % px(i)=P[X=sx(i)] % Output is the vector % pmf: pmf(i)=P[X=x(i)] pmf=zeros(size(x(:))). vector of samples sx describing random variable X . function pmf=finitepmf(sx.SY.px) %returns the expected value E[X] %of finite random variable X described %by samples sx and probabilities px ex=sum((sx(:)).*(px(:))). . function x=finiterv(s. and PXY ex=finiteexp(SX. the expected value E[X ]. x=s(1+count(cdf.x) % finite random variable X: % vector sx of sample space % elements {sx(1). covxy=R-ex*ey.SY. finitepmf y=finitepmf(sx. R=finiteexp(SX.*SY.p.px) Input: Probability vector px. us (United States) Zip Code:71901 . m is positive integer Output: x is a vector of m sample values y(i) = FX (x(i)). SY and probability grid PXY describing the ﬁnite random variables X and Y . AR. Output: ex. for i=1:length(x) pmf(i)= sum(px(find(sx==x(i)))). %Usage: ex=finiteexp(sx. p is the corresponding probability assignment. function ex=finiteexp(sx.com Phone:5017621195 finitecov covxy=finitecov(SX.p.PXY).r)).px).p=p(:). x is a vector of possible sample values Output: y is a vector with y(i) = P[X = x(i)].1). %Usage: cxy=finitecov(SX. Output: covxy.

finitevar v=finitevar(sx,px)

function v=finitevar(sx,px)
%Usage: v=finitevar(sx,px)
% returns the variance Var[X]
% of finite random variables X described by
% samples sx and probabilities px
ex2=finiteexp(sx.^2,px);
ex=finiteexp(sx,px);
v=ex2-(ex^2);

Input: Probability vector px and vector of samples sx describing random variable X.

Output: v, the variance Var[X].

gausscdf y=gausscdf(mu,sigma,x)

function f=gausscdf(mu,sigma,x)
f=phi((x-mu)/sigma);

Input: mu and sigma are the parameters of a Gaussian random variable X, vector x

Output: Vector y such that yi = FX(xi) = Φ((xi − µ)/σ).

gausspdf y=gausspdf(mu,sigma,x)

function f=gausspdf(mu,sigma,x)
f=exp(-(x-mu).^2/(2*sigma^2))/...
    sqrt(2*pi*sigma^2);

Input: mu and sigma are the parameters of a Gaussian random variable X, vector x

Output: Vector y such that yi = fX(xi).

gaussrv x=gaussrv(mu,sigma,m)

function x=gaussrv(mu,sigma,m)
x=mu+(sigma*randn(m,1));

Input: mu and sigma are the parameters of a Gaussian random variable X, integer m

Output: Length m vector x such that each xi is a sample of X
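For example (an illustrative check; the parameter values are arbitrary): for a Gaussian (µ = 3, σ = 2) random variable X, P[1 < X ≤ 5] = Φ(1) − Φ(−1) ≈ 0.6827:

gausscdf(3,2,5)-gausscdf(3,2,1)     %approximately 0.6827
finitevar([0 1 2],[0.1 0.4 0.5])    %returns 2.4 - 1.4^2 = 0.44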

gaussvector x=gaussvector(mu,C,m)

function x=gaussvector(mu,C,m)
%output: m Gaussian vectors,
%each with mean mu
%and covariance matrix C
if (min(size(C))==1)
    C=toeplitz(C);
end
n=size(C,2);
if (length(mu)==1)
    mu=mu*ones(n,1);
end
[U,D,V]=svd(C);
x=V*(D^(0.5))*randn(n,m)...
    +(mu(:)*ones(1,m));

Input: For a Gaussian (µX, CX) random vector X, gaussvector can be called in two ways:
• C is the n × n covariance matrix, mu is either a length n vector, or a length 1 scalar, m is an integer.
• C is the length n vector equal to the first row of a symmetric Toeplitz covariance matrix CX, mu is either a length n vector, or a length 1 scalar, m is an integer.
If mu is a length n vector, then mu is the expected value vector; otherwise, each element of X is assumed to have mean mu.

Output: n × m matrix x such that each column x(:,i) is a sample vector of X

gaussvectorpdf f=gaussvectorpdf(mu,C,x)

function f=gaussvectorpdf(mu,C,x)
n=length(x);
z=x(:)-mu(:);
f=exp(-z'*inv(C)*z/2)/...
    sqrt((2*pi)^n*det(C));

Input: For a Gaussian (µX, CX) random vector X, mu is a length n vector, C is the n × n covariance matrix, x is a length n vector.

Output: f is the Gaussian vector PDF fX(x) evaluated at x.

geometriccdf y=geometriccdf(p,x)

function cdf=geometriccdf(p,x)
%for geometric(p) rv X,
%input vector x, output is vector
%cdf such that cdf_i=Prob(X<=x_i)
x=(x(:)>=1).*floor(x(:));
cdf=1-((1-p).^x);

Input: p is the parameter of a geometric random variable X, x is a vector of possible sample values

Output: y is a vector with y(i) = FX(x(i)).
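A brief sketch of gaussvector (illustrative; the covariance values are arbitrary): generate correlated Gaussian pairs and compare the sample covariance of the columns with C:

C=[1 0.5; 0.5 1];         %covariance matrix with correlation 0.5
x=gaussvector(0,C,10000); %2 x 10000 matrix; each column is one sample vector
cov(x')                   %sample covariance matrix, close to C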

geometricpmf y=geometricpmf(p,x)

function pmf=geometricpmf(p,x)
%geometric(p) rv X
%out: pmf(i)=Prob[X=x(i)]
x=x(:);
pmf=p*((1-p).^(x-1));
pmf=(x>0).*(x==floor(x)).*pmf;

Input: p is the parameter of a geometric random variable X, x is a vector of possible sample values

Output: y is a vector with y(i) = PX(x(i)).

geometricrv x=geometricrv(p,m)

function x=geometricrv(p,m)
%Usage: x=geometricrv(p,m)
% returns m samples of a geometric (p) rv
r=rand(m,1);
x=ceil(log(1-r)/log(1-p));

Input: p is the parameter of a geometric random variable X, m is a positive integer

Output: x is a vector of m independent samples of random variable X

icdfrv x=icdfrv(@icdf,m)

function x=icdfrv(icdfhandle,m)
%Usage: x=icdfrv(@icdf,m)
%returns m samples of rv X
%with inverse CDF icdf.m
u=rand(m,1);
x=feval(icdfhandle,u);

Input: @icdf is a "handle" (a kind of pointer) to a MATLAB function icdf.m that is MATLAB's representation of an inverse CDF FX^(−1)(x) of a random variable X, integer m

Output: Length m vector x such that each xi is a sample of X
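As a usage sketch for icdfrv (the helper file iexpcdf.m below is hypothetical and not part of matcode): the inverse CDF of an exponential (1) random variable is F^(−1)(u) = −ln(1 − u), so with the file

function x=iexpcdf(u)
%hypothetical inverse CDF of an exponential (1) rv
x=-log(1-u);

on the path, x=icdfrv(@iexpcdf,1000) produces 1000 exponential (1) samples, and mean(x) should be close to E[X] = 1.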

pascalcdf y=pascalcdf(k,p,x)

function cdf=pascalcdf(k,p,x)
%Usage: cdf=pascalcdf(k,p,x)
%For a pascal (k,p) rv X
%and input vector x, the output
%is a vector cdf such that
% cdf(i)=Prob[X<=x(i)]
x=floor(x(:)); %for noninteger x(i)
allx=k:max(x);
%allcdf holds all needed cdf values
allcdf=cumsum(pascalpmf(k,p,allx));
%x_i < k have zero-prob,
% other values are OK
okx=(x>=k);
%set zero-prob x(i)=k,
%just so indexing is not fouled up
x=(okx.*x) +((1-okx)*k);
cdf=okx.*allcdf(x-k+1);

Input: k and p are the parameters of a Pascal (k, p) random variable X, x is a vector of possible sample values

Output: y is a vector with y(i) = FX(x(i)).

pascalpmf y=pascalpmf(k,p,x)

function pmf=pascalpmf(k,p,x)
%For Pascal (k,p) rv X, and
%input vector x, output is a
%vector pmf: pmf(i)=Prob[X=x(i)]
x=x(:);
n=max(x);
i=(k:n-1)';
ip=[1 ;(1-p)*(i./(i+1-k))];
%pb=all n-k+1 pascal probs
pb=(p^k)*cumprod(ip);
okx=(x==floor(x)).*(x>=k);
%set bad x(i)=k to stop bad indexing
x=(okx.*x) + k*(1-okx);
% pmf(i)=0 unless x(i) >= k
pmf=okx.*pb(x-k+1);

Input: k and p are the parameters of a Pascal (k, p) random variable X, x is a vector of possible sample values

Output: y is a vector with y(i) = PX(x(i)).

pascalrv x=pascalrv(k,p,m)

function x=pascalrv(k,p,m)
% return m samples of pascal(k,p) rv
r=rand(m,1);
rmax=max(r);
xmin=k;
xmax=ceil(2*(k/p)); %set max range
sx=xmin:xmax;
cdf=pascalcdf(k,p,sx);
while cdf(length(cdf)) <=rmax
    xmax=2*xmax;
    sx=xmin:xmax;
    cdf=pascalcdf(k,p,sx);
end
x=xmin+countless(cdf,r);

Input: k and p are the parameters of a Pascal random variable X, m is a positive integer

Output: x is a vector of m independent samples of random variable X

phi y=phi(x)

function y=phi(x)
sq2=sqrt(2);
y=0.5 + 0.5*erf(x/sq2);

Input: Vector x

Output: Vector y such that y(i) = Φ(x(i)).

poissoncdf y=poissoncdf(alpha,x)

function cdf=poissoncdf(alpha,x)
%output cdf(i)=Prob[X<=x(i)]
x=floor(x(:));
sx=0:max(x);
cdf=cumsum(poissonpmf(alpha,sx));
%cdf from 0 to max(x)
okx=(x>=0);   %x(i)<0 -> cdf=0
x=(okx.*x);   %set negative x(i)=0
cdf=okx.*cdf(x+1);
%cdf=0 for x(i)<0

Input: alpha is the parameter of a Poisson (α) random variable X, x is a vector of possible sample values

Output: y is a vector with y(i) = FX(x(i)).
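Quick illustrative checks (parameter values arbitrary; assumes matcode is on the path):

phi(1)                    %Phi(1), approximately 0.8413
1-poissoncdf(5,9)         %P[X>9] for a Poisson (alpha=5) rv
x=pascalrv(3,0.25,10000);
mean(x)                   %close to E[X] = k/p = 12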

poissonpmf y=poissonpmf(alpha,x)

function pmf=poissonpmf(alpha,x)
%Poisson (alpha) rv X,
%out=vector pmf: pmf(i)=P[X=x(i)]
x=x(:);
k=(1:max(x))';
logfacts=cumsum(log(k));
pb=exp([-alpha; ...
    -alpha+ (k*log(alpha))-logfacts]);
okx=(x>=0).*(x==floor(x));
x=okx.*x;
pmf=okx.*pb(x+1);
%pmf(i)=0 for zero-prob x(i)

Input: alpha is the parameter of a Poisson (α) random variable X, x is a vector of possible sample values

Output: y is a vector with y(i) = PX(x(i)).

poissonrv x=poissonrv(alpha,m)

function x=poissonrv(alpha,m)
%return m samples of poisson(alpha) rv X
r=rand(m,1);
rmax=max(r);
xmin=0;
xmax=ceil(2*alpha); %set max range
sx=xmin:xmax;
cdf=poissoncdf(alpha,sx);
%while ( sum(cdf <=rmax) ==(xmax-xmin+1) )
while cdf(length(cdf)) <=rmax
    xmax=2*xmax;
    sx=xmin:xmax;
    cdf=poissoncdf(alpha,sx);
end
x=xmin+countless(cdf,r);

Input: alpha is the parameter of a Poisson (α) random variable X, m is a positive integer

Output: x is a vector of m independent samples of random variable X

uniformcdf y=uniformcdf(a,b,x)

function F=uniformcdf(a,b,x)
%Usage: F=uniformcdf(a,b,x)
%returns the CDF of a continuous
%uniform rv evaluated at x
F=(x-a).*((x>=a) & (x<b))/(b-a);
F=F+1.0*(x>=b);

Input: a and b are the parameters of a continuous uniform random variable X, vector x

Output: Vector y such that yi = FX(xi).
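For example (illustrative):

x=poissonrv(5,10000);
[mean(x) var(x)]            %both close to alpha=5
uniformcdf(0,10,[-1 5 12])  %returns [0 0.5 1]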

uniformpdf y=uniformpdf(a,b,x)

function f=uniformpdf(a,b,x)
%Usage: f=uniformpdf(a,b,x)
%returns the PDF of a continuous
%uniform rv evaluated at x
f=((x>=a) & (x<b))/(b-a);

Input: a and b are the parameters of a continuous uniform random variable X, vector x

Output: Vector y such that yi = fX(xi).

uniformrv x=uniformrv(a,b,m)

function x=uniformrv(a,b,m)
%Usage: x=uniformrv(a,b,m)
%Returns m samples of a
%uniform (a,b) random variable
x=a+(b-a)*rand(m,1);

Input: a and b are the parameters of a continuous uniform random variable X, positive integer m

Output: m element vector x such that each x(i) is a sample of X.

Functions for Stochastic Processes

brownian w=brownian(alpha,t)

function w=brownian(alpha,t)
%Brownian motion process
%sampled at t(1)<t(2)< ...
t=t(:);
n=length(t);
delta=t-[0;t(1:n-1)];
x=sqrt(alpha*delta).*gaussrv(0,1,n);
w=cumsum(x);

Input: t is a vector holding an ordered sequence of inspection times, alpha is the scaling constant of a Brownian motion process such that the ith increment has variance α(ti − ti−1).

Output: w is a vector such that w(i) is the position at time t(i) of the particle in Brownian motion.

cmcprob pv=cmcprob(Q,p0,t)

function pv = cmcprob(Q,p0,t)
%Q has zero diagonal rates
%initial state probabilities p0
K=size(Q,1)-1;  %max no. state
%check for integer p0
if (length(p0)==1)
    p0=((0:K)==p0);
end
R=Q-diag(sum(Q,2));
pv= (p0(:)'*expm(R*t))';

Input: n × n state transition matrix Q for a continuous-time finite Markov chain, length n vector p0 denoting the initial state probabilities, nonnegative scalar t

Output: Length n vector pv, the state probability vector at time t of the Markov chain.

Comment: If p0 is a scalar integer, then the simulation starts in state p0.

cmcstatprob pv=cmcstatprob(Q)

function pv = cmcstatprob(Q)
%Q has zero diagonal rates
R=Q-diag(sum(Q,2));
n=size(Q,1);
R(:,1)=ones(n,1);
pv=([1 zeros(1,n-1)]*R^(-1))';

Input: State transition matrix Q for a continuous-time finite Markov chain

Output: pv is the stationary probability vector for the continuous-time Markov chain.

dmcstatprob pv=dmcstatprob(P)

function pv = dmcstatprob(P)
n=size(P,1);
A=(eye(n)-P);
A(:,1)=ones(n,1);
pv=([1 zeros(1,n-1)]*A^(-1))';

Input: n × n stochastic matrix P representing a discrete-time aperiodic irreducible finite Markov chain

Output: pv is the stationary probability vector.
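A two-state sketch (the rates are chosen arbitrarily for illustration): with transition rates q01 = 1 and q10 = 2, the stationary probabilities balance π0 q01 = π1 q10, giving [2/3 1/3]:

Q=[0 1; 2 0];      %zero-diagonal rate matrix
cmcstatprob(Q)     %returns [2/3 1/3]'
cmcprob(Q,0,0.5)   %state probabilities at t=0.5, starting in state 0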

poissonarrivals s=poissonarrivals(lambda,T)

function s=poissonarrivals(lambda,T)
%arrival times s=[s(1) ... s(n)]
% s(n)<= T < s(n+1)
n=ceil(1.1*lambda*T);
s=cumsum(exponentialrv(lambda,n));
while (s(length(s))< T),
    s_new=s(length(s))+ ...
        cumsum(exponentialrv(lambda,n));
    s=[s; s_new];
end
s=s(s<=T);

Input: lambda is the arrival rate of a Poisson process, T marks the end of an observation interval [0, T].

Output: s=[s(1), ..., s(n)]' is a vector such that s(i) is the ith arrival time. Note that the length n is a Poisson random variable with expected value λT.

Comment: This code is pretty stupid. There are decidedly better ways to create a set of arrival times; see Problem 10.13.5.

poissonprocess N=poissonprocess(lambda,t)

function N=poissonprocess(lambda,t)
%input: rate lambda>0, vector t
%For a sample function of a
%Poisson process of rate lambda,
%N(i) = no. of arrivals by t(i)
s=poissonarrivals(lambda,max(t));
N=count(s,t);

Input: lambda is the arrival rate of a Poisson process, t is a vector of "inspection times."

Output: N is a vector such that N(i) is the number of arrivals by inspection time t(i).

simcmc ST=simcmc(Q,p0,T)

function ST=simcmc(Q,p0,T);
K=size(Q,1)-1;  %max no. state
%calc average trans. rate
ps=cmcstatprob(Q);
v=sum(Q,2);
R=ps'*v;
n=ceil(0.6*T*R);
ST=simcmcstep(Q,p0,2*n);
while (sum(ST(:,2))<T),
    s=ST(size(ST,1),1);
    p00=Q(1+s,:)/v(1+s);
    S=simcmcstep(Q,p00,n);
    ST=[ST;S];
end
n=1+sum(cumsum(ST(:,2))<T);
ST=ST(1:n,:);
%truncate last holding time
ST(n,2)=T-sum(ST(1:n-1,2));

Input: State transition matrix Q for a continuous-time finite Markov chain, vector p0 denoting the initial state probabilities, T the end of an observation interval [0, T].

Output: A simulation of the Markov chain system over the time interval [0, T]: The output is an n × 2 matrix ST such that the first column ST(:,1) is the sequence of system states and the second column ST(:,2) is the amount of time spent in each state. That is, ST(i,2) is the amount of time the system spends in state ST(i,1). Note that n, the number of state occupancy periods, is random.

Comment: If p0 is a scalar integer, then the simulation starts in state p0.

simcmcstep S=simcmcstep(Q,p0,n)

function S=simcmcstep(Q,p0,n);
%S=simcmcstep(Q,p0,n)
% Simulate n steps of a cts
% Markov Chain, rate matrix Q,
% init. state probabilities p0
K=size(Q,1)-1;  %max no. state
S=zeros(n+1,2); %init allocation
%check for integer p0
if (length(p0)==1)
    p0=((0:K)==p0);
end
v=sum(Q,2); %state dep. rates
t=1./v;
P=diag(t)*Q;
S(:,1)=simdmc(P,p0,n);
S(:,2)=t(1+S(:,1)) ...
    .*exponentialrv(1,n+1);

Input: State transition matrix Q for a continuous-time finite Markov chain, vector p0 denoting the initial state probabilities, integer n.

Output: A simulation of n steps of the continuous-time Markov chain system: The output is an n × 2 matrix ST such that the first column ST(:,1) is the length n sequence of system states and the second column ST(:,2) is the amount of time spent in each state. That is, ST(i,2) is the amount of time the system spends in state ST(i,1).

Comment: If p0 is a scalar integer, then the simulation starts in state p0. This program is the basis for simcmc.

simdmc x=simdmc(P,p0,n)

function x=simdmc(P,p0,n)
K=size(P,1)-1;  %highest no. state
sx=0:K;         %state space
x=zeros(n+1,1); %x(m)= state at time m-1
%check for integer p0
if (length(p0)==1)
    p0=((0:K)==p0);
end
x(1)=finiterv(sx,p0,1); %x(1) from p0
for m=1:n,
    x(m+1)=finiterv(sx,P(x(m)+1,:),1);
end

Input: n × n stochastic matrix P which is the state transition matrix of a discrete-time finite Markov chain, vector p0 denoting the initial state probabilities, integer n.

Output: A simulation of the Markov chain system such that for the length n+1 vector x, x(m) is the state at time m-1 of the Markov chain.

Comment: If p0 is a scalar integer, then the simulation starts in state p0.
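For example (an illustrative two-state chain):

P=[0.9 0.1; 0.2 0.8];  %stochastic matrix
x=simdmc(P,0,1000);    %states x(1),...,x(1001), starting in state 0
dmcstatprob(P)         %stationary vector [2/3 1/3]'; compare with the
                       %fraction of time x spends in each state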

Random Utilities

count n=count(x,y)

function n=count(x,y)
%Usage n=count(x,y)
%n(i)= # elements of x <= y(i)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX<=MY),1))';

Input: Vectors x and y

Output: Vector n such that n(i) is the number of elements of x less than or equal to y(i).

countequal n=countequal(x,y)

function n=countequal(x,y)
%Usage: n=countequal(x,y)
%n(j)= # elements of x = y(j)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX==MY),1))';

Input: Vectors x and y

Output: Vector n such that n(i) is the number of elements of x equal to y(i).

countless n=countless(x,y)

function n=countless(x,y)
%Usage: n=countless(x,y)
%n(i)= # elements of x < y(i)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX<MY),1))';

Input: Vectors x and y

Output: Vector n such that n(i) is the number of elements of x strictly less than y(i).

dftmat F=dftmat(N)

function F = dftmat(N);
%Usage: F=dftmat(N)
%F is the N by N DFT matrix
n=(0:N-1)';
F=exp((-1.0j)*2*pi*(n*(n'))/N);

Input: Integer N.

Output: F is the N by N discrete Fourier transform matrix.
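For example:

count([1 2 2 5],[2 4])       %returns [3 3]'
countequal([1 2 2 5],[2 4])  %returns [2 0]'
countless([1 2 2 5],[2 4])   %returns [1 3]'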

fftc S=fftc(r,N); S=fftc(r)

function S=fftc(varargin);
%DFT for a signal r
%centered at the origin
%Usage:
% fftc(r,N): N point DFT of r
% fftc(r): length(r) DFT of r
r=varargin{1};
L=1+floor(length(r)/2);
if (nargin>1)
    N=varargin{2}(1);
else
    N=(2*L)-1;
end
R=fft(r,N);
n=reshape(0:(N-1),size(R));
phase=2*pi*(n/N)*(L-1);
S=R.*exp((1.0j)*phase);

Input: Vector r=[r(1) ... r(2k+1)] holding the time sequence r(−k), ..., r(0), ..., r(k) centered around the origin.

Output: S is the DFT of r.

Comment: Supports the same calling conventions as fft.

freqxy fxy=freqxy(xy,SX,SY)

function fxy = freqxy(xy,SX,SY)
%Usage: fxy = freqxy(xy,SX,SY)
%xy is an m x 2 matrix:
%xy(i,:)= ith sample pair X,Y
%Output fxy is a K x 3 matrix:
% [fxy(k,1) fxy(k,2)]
% = kth unique pair [x y] and
% fxy(k,3)= corresp. rel. freq.
%extend xy to include a sample
%for all possible (X,Y) pairs:
xy=[xy; SX(:) SY(:)];
[U,I,J]=unique(xy,'rows');
N=hist(J,1:max(J))-1;
N=N/sum(N);
fxy=[U N(:)];
%reorder fxy rows to match
%rows of [SX(:) SY(:) PXY(:)]:
fxy=sortrows(fxy,[2 1 3]);

Input: For random variables X and Y, xy is an m × 2 matrix holding a list of sample value pairs; grids SX and SY represent the sample space.

Output: fxy is a K × 3 matrix. In each row [fxy(k,1) fxy(k,2) fxy(k,3)], the pair [fxy(k,1) fxy(k,2)] is a unique (X, Y) pair with relative frequency fxy(k,3).

Comment: Given the grids SX, SY and the probability grid PXY, a list of random sample value pairs xy can be simulated by the commands
S=[SX(:) SY(:)];
xy=finiterv(S,PXY(:),m);
The output fxy is ordered so that the rows match the ordering of rows in the matrix [SX(:) SY(:) PXY(:)].

pmfplot pmfplot(sx,px,'x axis text','y axis text')

function h=pmfplot(sx,px,xls,yls)
%Usage: pmfplot(sx,px,xls,yls)
%sx and px are vectors, px is the PMF
%xls and yls are x and y label strings
nonzero=find(px);
sx=sx(nonzero); px=px(nonzero);
sx=(sx(:))'; px=(px(:))';
XM=[sx; sx];
PM=[zeros(size(px)); px];
h=plot(XM,PM,'-k');
set(h,'LineWidth',3);
if (nargin==4)
    xlabel(xls);
    ylabel(yls,'VerticalAlignment','Bottom');
end
xmin=min(sx); xmax=max(sx);
xborder=0.05*(xmax-xmin);
xmin=xmin-xborder;
xmax=xmax+xborder;
ymax=1.1*max(px);
axis([xmin xmax 0 ymax]);

Input: Sample space vector sx and PMF vector px for finite random variable X, optional text strings xls and yls.

Output: A plot of the PMF PX(x) in the bar style used in the text.

rect y=rect(x)

function y=rect(x);
%Usage:y=rect(x);
y=1.0*(abs(x)<0.5);

Input: Vector x

Output: Vector y such that yi = rect(xi) = 1 for |xi| < 0.5; 0 otherwise.

sinc y=sinc(x)

function y=sinc(x);
xx=x+(x==0);
y=sin(pi*xx)./(pi*xx);
y=((1.0-(x==0)).*y)+ (1.0*(x==0));

Input: Vector x

Output: Vector y such that yi = sinc(xi) = sin(πxi)/(πxi).

Comment: The code is ugly because it makes sure to produce the right limit value at xi = 0.

simplot simplot(S,xlabel,ylabel)

function h=simplot(S,xls,yls)
% Plots the output of a simulated state sequence
% If S is N by 1, a discrete time chain is assumed
% with visit times of one unit.
% If S is an N by 2 matrix, a cts time Markov chain
% is assumed where
% S(:,1) = state sequence.
% S(:,2) = state visit times.
% The cumulative sum
% of visit times are transition instances.
% h is a handle to a stairs plot of the state sequence
% vs state transition times
%in case of discrete time simulation
if (size(S,2)==1)
    S=[S ones(size(S))];
end
Y=[S(:,1); S(size(S,1),1)];
X=cumsum([0; S(:,2)]);
h=stairs(X,Y);
if (nargin==3)
    xlabel(xls);
    ylabel(yls,'VerticalAlignment','Bottom');
end

Input: The simulated state sequence vector S generated by S=simdmc(P,p0,n), or the n × 2 state/time matrix ST generated by either ST=simcmc(Q,p0,T) or ST=simcmcstep(Q,p0,n).

Output: A "stairs" plot showing the sequence of simulation states over time.

Comment: If S is just a state sequence vector, then each stair has equal width. If S is an n × 2 state/time matrix ST, then the width of the stair is proportional to the time spent in that state.
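For example (illustrative), the state sequence of a simulated two-state discrete-time chain can be plotted with:

P=[0.9 0.1; 0.2 0.8];
x=simdmc(P,0,50);
simplot(x,'time n','state')   %stairs plot with unit-width stairs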

Probability and Stochastic Processes
A Friendly Introduction for Electrical and Computer Engineers
Second Edition

Quiz Solutions

Roy D. Yates and David J. Goodman
May 22, 2004

• The MATLAB section quizzes at the end of each chapter use programs available for download as the archive matcode.zip. This archive has both general purpose programs for solving probability problems as well as specific .m files associated with examples or quizzes in the text. Also available is a manual probmatlab.pdf describing the general purpose .m files in matcode.zip.

• We have made a substantial effort to check the solution to every quiz. Nevertheless, there is a nonzero probability (in fact, a probability close to unity) that errors will be found. If you find errors or have suggestions or comments, please send email to ryates@winlab.rutgers.edu. When errors are found, corrected solutions will be posted at the website.

Quiz Solutions – Chapter 1

Quiz 1.1
In the Venn diagrams for parts (a)-(g) below, the shaded area represents the indicated set.

[Figure: Venn diagrams on the events M, T and O, with shaded regions for (1) R = T^c, (2) M ∪ O, (3) M ∩ O, (4) R ∪ M, (5) R ∩ M and (6) T^c − M.]

Quiz 1.2
(1) A1 = {vvv, vvd, vdv, vdd}
(2) B1 = {dvv, dvd, ddv, ddd}
(3) A2 = {vvv, vvd, dvv, dvd}
(4) B2 = {vdv, vdd, ddv, ddd}
(5) A3 = {vvv, ddd}
(6) B3 = {vdv, dvd}
(7) A4 = {vvv, vvd, vdv, dvv, vdd, dvd, ddv}
(8) B4 = {ddd, ddv, dvd, vdd}

Recall that Ai and Bi are collectively exhaustive if Ai ∪ Bi = S. Also, Ai and Bi are mutually exclusive if Ai ∩ Bi = φ. Since we have written down each pair Ai and Bi above, we can simply check for these properties.

The pair A1 and B1 are mutually exclusive and collectively exhaustive. The pair A2 and B2 are mutually exclusive and collectively exhaustive. The pair A3 and B3 are mutually exclusive but not collectively exhaustive. The pair A4 and B4 are not mutually exclusive since dvd belongs to A4 and B4. However, A4 and B4 are collectively exhaustive.

Quiz 1.3
There are exactly 50 equally likely outcomes: s51 through s100. Each of these outcomes has probability 0.02.

(1) P[{s79}] = 0.02
(2) P[{s100}] = 0.02
(3) P[A] = P[{s90, . . . , s100}] = 11 × 0.02 = 0.22
(4) P[F] = P[{s51, . . . , s59}] = 9 × 0.02 = 0.18
(5) P[T ≥ 80] = P[{s80, . . . , s100}] = 21 × 0.02 = 0.42
(6) P[T < 90] = P[{s51, . . . , s89}] = 39 × 0.02 = 0.78
(7) P[a C grade or better] = P[{s70, . . . , s100}] = 31 × 0.02 = 0.62
(8) P[student passes] = P[{s60, . . . , s100}] = 41 × 0.02 = 0.82

Quiz 1.4
We can describe this experiment by the event space consisting of the four possible events V B, V L, D B and DL. We represent these events in the table:

            V      D
     L    0.35     ?
     B      ?      ?

In a roundabout way, the problem statement tells us how to fill in the table. In particular,

P[V] = 0.7 = P[V L] + P[V B]        (1)
P[L] = 0.6 = P[V L] + P[DL]         (2)

Since P[V L] = 0.35, we can conclude that P[V B] = 0.7 − 0.35 = 0.35 and that P[DL] = 0.6 − 0.35 = 0.25. This allows us to fill in two more table entries:

            V      D
     L    0.35   0.25
     B    0.35     ?

The remaining table entry is filled in by observing that the probabilities must sum to 1. This implies P[D B] = 0.05 and the complete table is

            V      D
     L    0.35   0.25
     B    0.35   0.05

Finding the various probabilities is now straightforward:


(1) P[DL] = 0.25
(2) P[D ∪ L] = P[V L] + P[DL] + P[D B] = 0.35 + 0.25 + 0.05 = 0.65
(3) P[V B] = 0.35
(4) P[V ∪ L] = P[V] + P[L] − P[V L] = 0.7 + 0.6 − 0.35 = 0.95
(5) P[V ∪ D] = P[S] = 1
(6) P[L B] = P[L L^c] = 0

Quiz 1.5
(1) The probability of exactly two voice calls is

P[NV = 2] = P[{vvd, vdv, dvv}] = 0.3        (1)

(2) The probability of at least one voice call is

P[NV ≥ 1] = P[{vdd, dvd, ddv, vvd, vdv, dvv, vvv}] = 6(0.1) + 0.2 = 0.8        (2)

An easier way to get the same answer is to observe that

P[NV ≥ 1] = 1 − P[NV < 1] = 1 − P[NV = 0]        (3)
          = 1 − P[{ddd}] = 0.8        (4)

(3) The conditional probability of two voice calls followed by a data call given that there were two voice calls is

P[{vvd}|NV = 2] = P[{vvd}, NV = 2]/P[NV = 2] = P[{vvd}]/P[NV = 2] = 0.1/0.3 = 1/3        (5)

(4) The conditional probability of two data calls followed by a voice call given there were two voice calls is

P[{ddv}|NV = 2] = P[{ddv}, NV = 2]/P[NV = 2] = 0        (6)

The joint event of the outcome ddv and exactly two voice calls has probability zero since there is only one voice call in the outcome ddv.

(5) The conditional probability of exactly two voice calls given at least one voice call is

P[NV = 2|NV ≥ 1] = P[NV = 2, NV ≥ 1]/P[NV ≥ 1] = P[NV = 2]/P[NV ≥ 1] = 0.3/0.8 = 3/8        (7)

(6) The conditional probability of at least one voice call given there were exactly two voice calls is

P[NV ≥ 1|NV = 2] = P[NV ≥ 1, NV = 2]/P[NV = 2] = P[NV = 2]/P[NV = 2] = 1        (8)

Given that there were two voice calls, there must have been at least one voice call.



Quiz 1.6
In this experiment, there are four outcomes with probabilities

P[{vv}] = (0.8)^2 = 0.64        P[{vd}] = (0.8)(0.2) = 0.16
P[{dv}] = (0.2)(0.8) = 0.16     P[{dd}] = (0.2)^2 = 0.04

When checking the independence of any two events A and B, it's wise to avoid intuition and simply check whether P[AB] = P[A]P[B]. Using the probabilities of the outcomes, we now can test for the independence of events.

(1) First, we calculate the probability of the joint event:

P[NV = 2, NV ≥ 1] = P[NV = 2] = P[{vv}] = 0.64        (1)

Next, we observe that

P[NV ≥ 1] = P[{vd, dv, vv}] = 0.96        (2)

Finally, we make the comparison

P[NV = 2]P[NV ≥ 1] = (0.64)(0.96) ≠ P[NV = 2, NV ≥ 1]        (3)

which shows the two events are dependent.

(2) The probability of the joint event is

P[NV ≥ 1, C1 = v] = P[{vd, vv}] = 0.80        (4)

From part (1), P[NV ≥ 1] = 0.96. Further, P[C1 = v] = 0.8 so that

P[NV ≥ 1]P[C1 = v] = (0.96)(0.8) = 0.768 ≠ P[NV ≥ 1, C1 = v]        (5)

Hence, the events are dependent.

(3) The problem statement that the calls were independent implies that the events the second call is a voice call, {C2 = v}, and the first call is a data call, {C1 = d}, are independent events. Just to be sure, we can do the calculations to check:

P[C1 = d, C2 = v] = P[{dv}] = 0.16        (6)

Since P[C1 = d]P[C2 = v] = (0.2)(0.8) = 0.16, we confirm that the events are independent. Note that this shouldn't be surprising since we used the information that the calls were independent in the problem statement to determine the probabilities of the outcomes.

(4) The probability of the joint event is

P[C2 = v, NV is even] = P[{vv}] = 0.64        (7)

Also, each event has probability

P[C2 = v] = P[{dv, vv}] = 0.8        P[NV is even] = P[{dd, vv}] = 0.68        (8)

Thus, P[C2 = v]P[NV is even] = (0.8)(0.68) = 0.544. Since P[C2 = v, NV is even] = 0.64 ≠ 0.544, the events are dependent.



Quiz 1.7
Let Fi denote the event that the user is found on page i. The tree for the experiment is

[Tree diagram: three successive stages; at stage i the user is found (Fi, probability 0.8) or not found (Fi^c, probability 0.2), with the search continuing to stage i + 1 only if Fi^c occurs.]

The user is found unless all three paging attempts fail. Thus the probability the user is found is

P[F] = 1 − P[F1^c F2^c F3^c] = 1 − (0.2)^3 = 0.992        (1)

Quiz 1.8
(1) We can view choosing each bit in the code word as a subexperiment. Each subexperiment has two possible outcomes: 0 and 1. Thus by the fundamental principle of counting, there are 2 × 2 × 2 × 2 = 2^4 = 16 possible code words.

(2) An experiment that can yield all possible code words with two zeroes is to choose which 2 bits (out of 4 bits) will be zero. The other two bits then must be ones. There are (4 choose 2) = 6 ways to do this. Hence, there are six code words with exactly two zeroes. For this problem, it is also possible to simply enumerate the six code words: 1100, 1010, 1001, 0101, 0110, 0011.

(3) When the first bit must be a zero, then the first subexperiment of choosing the first bit has only one outcome. For each of the next three bits, we have two choices. In this case, there are 1 × 2 × 2 × 2 = 8 ways of choosing a code word.

(4) For the constant ratio code, we can specify a code word by choosing M of the bits to be ones. The other N − M bits will be zeroes. The number of ways of choosing such a code word is (N choose M). For N = 8 and M = 3, there are (8 choose 3) = 56 code words.

Quiz 1.9
(1) In this problem, k bits received in error is the same as k failures in 100 trials. The failure probability is ε = 1 − p and the success probability is 1 − ε = p. That is, the probability of k bits in error and 100 − k correctly received bits is

P[S_k,100−k] = (100 choose k) ε^k (1 − ε)^(100−k)        (1)



For ε = 0.01,

P[S_0,100] = (1 − ε)^100 = (0.99)^100 = 0.3660        (2)
P[S_1,99] = 100(0.01)(0.99)^99 = 0.3697        (3)
P[S_2,98] = 4950(0.01)^2(0.99)^98 = 0.1849        (4)
P[S_3,97] = 161700(0.01)^3(0.99)^97 = 0.0610        (5)

(2) The probability a packet is decoded correctly is just

P[C] = P[S_0,100] + P[S_1,99] + P[S_2,98] + P[S_3,97] = 0.9816        (6)

Quiz 1.10
Since the chip works only if all n transistors work, the transistors in the chip are like devices in series. The probability that a chip works is P[C] = p^n. Since transistor failures are independent of each other, chip failures are also independent. The module works if either 8 chips work or 9 chips work. Let Ck denote the event that exactly k chips work. Each P[Ck] is a binomial probability:

P[C8] = (9 choose 8) (P[C])^8 (1 − P[C])^(9−8) = 9 p^8n (1 − p^n)        (1)
P[C9] = (P[C])^9 = p^9n        (2)

The probability a memory module works is

P[M] = P[C8] + P[C9] = p^8n (9 − 8 p^n)        (3)

Quiz 1.11

R=rand(1,100);
X=(R<=0.4) ...
    + (2*(R>0.4).*(R<=0.9)) ...
    + (3*(R>0.9));
Y=hist(X,1:3)

For a MATLAB simulation, we first generate a vector R of 100 random numbers. Second, we generate the vector X as a function of R to represent the 3 possible outcomes of a flip. That is, X(i)=1 if flip i was heads, X(i)=2 if flip i was tails, and X(i)=3 if flip i landed on the edge. To see how this works, we note there are three cases:

• If R(i) <= 0.4, then X(i)=1.
• If 0.4 < R(i) and R(i) <= 0.9, then X(i)=2.
• If 0.9 < R(i), then X(i)=3.

These three cases will have probabilities 0.4, 0.5 and 0.1. Lastly, we use the hist function to count how many occurrences of each possible value of X(i) appear in X.
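A quick numerical check of the Quiz 1.9 probabilities (an illustrative sketch; it uses only the core MATLAB functions nchoosek and sum):

e=0.01;
p=zeros(1,4);
for k=0:3,
    %binomial probability of k bit errors in 100 bits
    p(k+1)=nchoosek(100,k)*(e^k)*((1-e)^(100-k));
end
p        %approximately [0.3660 0.3697 0.1849 0.0610]
sum(p)   %P[C], approximately 0.9816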

Quiz Solutions – Chapter 2

Quiz 2.1
The sample space, probabilities and corresponding grades for the experiment are

Outcome    BB    BC    CB    CC
P[·]      0.36  0.24  0.24  0.16
G         3.0   2.5   2.5   2.0

Quiz 2.2
(1) To find c, we recall that the PMF must sum to 1. That is,

Σ(n=1 to 3) PN(n) = c (1 + 1/2 + 1/3) = 1        (1)

This implies c = 6/11. Now that we have found c, the remaining parts are straightforward.
(2) P[N = 1] = PN(1) = c = 6/11
(3) P[N ≥ 2] = PN(2) + PN(3) = c/2 + c/3 = 5/11
(4) P[N > 3] = Σ(n=4 to ∞) PN(n) = 0

Quiz 2.3
Decoding each transmitted bit is an independent trial where we call a bit error a "success." Each bit is in error, that is, the trial is a success, with probability p. Now we can interpret each experiment in the generic context of independent trials.

(1) The random variable X is the number of trials up to and including the first success. Similar to Example 2.11, X has the geometric PMF

PX(x) = p(1 − p)^(x−1) for x = 1, 2, . . .; 0 otherwise        (1)

(2) If p = 0.1, then the probability exactly 10 bits are sent is

P[X = 10] = PX(10) = (0.1)(0.9)^9 = 0.0387        (2)

The probability that at least 10 bits are sent is P[X ≥ 10] = Σ(x=10 to ∞) PX(x). This sum is not too hard to calculate. However, it's even easier to observe that X ≥ 10 whenever the first 9 bits are transmitted correctly. That is,

P[X ≥ 10] = P[first 9 bits correct] = (1 − p)^9        (3)

For p = 0.1, P[X ≥ 10] = (0.9)^9 = 0.3874.

(3) The random variable Y is the number of successes in 100 independent trials. Just as in Example 2.13, Y has the binomial PMF

PY(y) = (100 choose y) p^y (1 − p)^(100−y)        (4)

If p = 0.01, the probability of exactly 2 errors is

P[Y = 2] = PY(2) = (100 choose 2)(0.01)^2 (0.99)^98 = 0.1849        (5)

(4) The probability of no more than 2 errors is

P[Y ≤ 2] = PY(0) + PY(1) + PY(2)
         = (0.99)^100 + 100(0.01)(0.99)^99 + (100 choose 2)(0.01)^2 (0.99)^98 = 0.9206        (6)

(5) Random variable Z is the number of trials up to and including the third success. Thus Z has the Pascal PMF (see Example 2.15)

PZ(z) = (z−1 choose 2) p^3 (1 − p)^(z−3)        (7)

Note that PZ(z) > 0 for z = 3, 4, 5, . . ..

(6) If p = 0.25, the probability that the third error occurs on bit 12 is

PZ(12) = (11 choose 2)(0.25)^3 (0.75)^9 = 0.0645        (8)

Quiz 2.4
Each of these probabilities can be read off the CDF FY(y). However, we must keep in mind that when FY(y) has a discontinuity at y0, FY(y) takes the upper value FY(y0+).

(1) P[Y < 1] = FY(1−) = 0
(2) P[Y ≤ 1] = FY(1) = 0.6
(3) P[Y > 2] = 1 − P[Y ≤ 2] = 1 − FY(2) = 1 − 0.8 = 0.2
(4) P[Y ≥ 2] = 1 − P[Y < 2] = 1 − FY(2−) = 1 − 0.6 = 0.4
(5) P[Y = 1] = P[Y ≤ 1] − P[Y < 1] = FY(1+) − FY(1−) = 0.6 − 0 = 0.6
(6) P[Y = 3] = P[Y ≤ 3] − P[Y < 3] = FY(3+) − FY(3−) = 0.8 − 0.8 = 0

Quiz 2.5
(1) With probability 0.7, a call is a voice call and C = 25. Otherwise, with probability 0.3, we have a data call and C = 40. This corresponds to the PMF

PC(c) = 0.7 for c = 25; 0.3 for c = 40; 0 otherwise        (1)

(2) The expected value of C is

E[C] = 25(0.7) + 40(0.3) = 29.5 cents        (2)

Quiz 2.6
(1) As a function of N, the cost T is

T = 25N + 40(3 − N) = 120 − 15N        (1)

(2) To find the PMF of T, we can draw the following tree:

[Tree diagram: N = 0 (probability 0.1) gives T = 120; N = 1, 2, 3 (probability 0.3 each) give T = 105, 90, 75.]

From the tree, we can write down the PMF of T:

PT(t) = 0.3 for t = 75, 90, 105; 0.1 for t = 120; 0 otherwise        (2)

From the PMF PT(t), the expected value of T is

E[T] = 75PT(75) + 90PT(90) + 105PT(105) + 120PT(120)        (3)
     = (75 + 90 + 105)(0.3) + 120(0.1) = 93        (4)

Quiz 2.7
(1) Using Definition 2.14, the expected number of applications is

E[A] = Σ(a=1 to 4) a PA(a) = 1(0.4) + 2(0.3) + 3(0.2) + 4(0.1) = 2        (1)

(2) The number of memory chips is M = g(A) where

g(A) = 4 for A = 1, 2; 6 for A = 3; 8 for A = 4        (2)

(3) By Theorem 2.10, the expected number of memory chips is

E[M] = Σ(a=1 to 4) g(a) PA(a) = 4(0.4) + 4(0.3) + 6(0.2) + 8(0.1) = 4.8        (3)

Since E[A] = 2, g(E[A]) = g(2) = 4. However, E[M] = 4.8 ≠ g(E[A]). The two quantities are different because g(A) is not of the form αA + β.

Quiz 2.8
The PMF PN(n) allows us to calculate each of the desired quantities.

(1) The expected value of N is

E[N] = Σ(n=0 to 2) n PN(n) = 0(0.1) + 1(0.4) + 2(0.5) = 1.4        (1)

(2) The second moment of N is

E[N^2] = Σ(n=0 to 2) n^2 PN(n) = 0^2(0.1) + 1^2(0.4) + 2^2(0.5) = 2.4        (2)

(3) The variance of N is

Var[N] = E[N^2] − (E[N])^2 = 2.4 − (1.4)^2 = 0.44        (3)

(4) The standard deviation is σN = sqrt(Var[N]) = sqrt(0.44) = 0.663.        (4)

Quiz 2.9
(1) From the problem statement, we learn that the conditional PMF of N given the event I is

PN|I(n) = 0.02 for n = 1, 2, . . . , 50; 0 otherwise        (1)

(2) Also from the problem statement, the conditional PMF of N given the event T is

PN|T(n) = 0.2 for n = 1, . . . , 5; 0 otherwise        (2)

(3) The problem statement tells us that P[T] = 1 − P[I] = 3/4. From Theorem 1.10 (the law of total probability), we find the PMF of N is

PN(n) = PN|T(n)P[T] + PN|I(n)P[I]        (3)
      = 0.2(0.75) + 0.02(0.25) = 0.155 for n = 1, 2, . . . , 5;
        0.02(0.25) = 0.005 for n = 6, 7, . . . , 50; 0 otherwise

(4) First we find

P[N ≤ 10] = Σ(n=1 to 10) PN(n) = (0.155)(5) + (0.005)(5) = 0.80        (4)

By Theorem 2.17, the conditional PMF of N given N ≤ 10 is

PN|N≤10(n) = PN(n)/P[N ≤ 10] for n ≤ 10; 0 otherwise        (5)
           = 0.155/0.8 = 0.19375 for n = 1, . . . , 5;
             0.005/0.8 = 0.00625 for n = 6, . . . , 10; 0 otherwise

(5) Once we have the conditional PMF, calculating conditional expectations is easy.

E[N|N ≤ 10] = Σn n PN|N≤10(n)
            = (1 + 2 + 3 + 4 + 5)(0.19375) + (6 + 7 + 8 + 9 + 10)(0.00625) = 3.15625        (6)

(6) To find the conditional variance, we first find the conditional second moment

E[N^2|N ≤ 10] = Σn n^2 PN|N≤10(n)        (7)
              = (1 + 4 + 9 + 16 + 25)(0.19375) + (36 + 49 + 64 + 81 + 100)(0.00625)
              = 55(0.19375) + 330(0.00625) = 12.71875        (8)

The conditional variance is

Var[N|N ≤ 10] = E[N^2|N ≤ 10] − (E[N|N ≤ 10])^2 = 12.71875 − (3.15625)^2 = 2.75684        (9)

00625) n=6 = n=1 n (0.71875 − (3. . Each time samplemean(k) is called produces a random output.5). . . What is observed in these ﬁgures is that for small n. plot(K./K.15625)2 = 2.i) of M holds a sequence m 1 . . M=zeros(k. 2.75684 (16) (17) Quiz 2. . The ith column M(:. Examples of the function calls (a) samplemean(100) and (b) samplemean(1000) are shown in Figure 1. we ﬁrst ﬁnd the conditional second moment E N 2 |N ≤ 10 = n 5 n 2 PN |N ≤10 (n) 10 (13) n 2 (0.M). end. M(:.10 The function samplemean(k) generates and plots ﬁve m n sequences for n = 1.19375) + 2 (14) (15) = 55(0. k.10. X=duniformrv(0.com Phone:5017621195 10 8 6 4 2 0 0 50 100 10 8 6 4 2 0 0 500 1000 (a) samplemean(100) (b) samplemean(1000) Figure 1: Two examples of the output of samplemean(k) (6) To ﬁnd the conditional variance. AR. function M=samplemean(k).00625) = 12. . us (United States) Zip Code:71901 . m 2 . . K=(1:k)’.k). for i=1:5. m n is fairly random but as n gets 13 Address:104 pine meadows loop.i)=cumsum(X). hot springs. m k .71875 The conditional variance is Var[N |N ≤ 10] = E N 2 |N ≤ 10 − (E [N |N ≤ 10])2 = 12.Name:joey iwatsuru Email:joeyiwat@yahoo.19375) + 330(0. .

hot springs. 14 Address:104 pine meadows loop. . Although each sequence m 1 .Name:joey iwatsuru Email:joeyiwat@yahoo. AR. . us (United States) Zip Code:71901 . . the sequences always converges to E[X ].com Phone:5017621195 large. m n gets close to E[X ] = 5. that we generate is random. This random convergence is analyzed in Chapter 7. m 2 .

λ = 1/2) PDF 0. AR.5] = 1 − P[Y ≤ 1.5)/4 = 5/8 Quiz 3. To ﬁnd c.com Phone:5017621195 Quiz Solutions – Chapter 3 Quiz 3.1 The CDF of Y is 1 FY(y) 0.5) = 1 − (1.5 0 0 2 y 4 ⎧ y<0 ⎨ 0 y/4 0 ≤ y ≤ 4 FY (y) = ⎩ 1 y>4 (1) From the CDF FY (y). we can calculate the probabilities: (1) P[Y ≤ −1] = FY (−1) = 0 (2) P[Y ≤ 1] = FY (1) = 1/4 (3) P[2 < Y ≤ 3] = FY (3) − FY (2) = 3/4 − 2/4 = 1/4 (4) P[Y > 1.Name:joey iwatsuru Email:joeyiwat@yahoo. We will evaluate this integral using integration by parts: ∞ −∞ f X (x) d x = 0 ∞ cxe−x/2 d x ∞ 0 (1) ∞ 0 = −2cxe−x/2 =0 + 2ce−x/2 d x (2) = −4ce−x/2 ∞ 0 = 4c (3) Thus c = 1/4 and X has the Erlang (n = 2. us (United States) Zip Code:71901 . we use ∞ the fact that −∞ f X (x) d x = 1.2 (1) First we will ﬁnd the constant c and then we will sketch the PDF.1 0 0 5 x 10 15 f X (x) = (x/4)e−x/2 x ≥ 0 0 otherwise fX(x) (4) 15 Address:104 pine meadows loop.2 0. hot springs.5] = 1 − FY (1.

Quiz Solutions – Chapter 3

Quiz 3.1
The CDF of Y is

FY(y) = 0 for y < 0; y/4 for 0 ≤ y ≤ 4; 1 for y > 4        (1)

[Figure: FY(y) rises linearly from 0 at y = 0 to 1 at y = 4.]

From the CDF FY(y), we can calculate the probabilities:

(1) P[Y ≤ −1] = FY(−1) = 0
(2) P[Y ≤ 1] = FY(1) = 1/4
(3) P[2 < Y ≤ 3] = FY(3) − FY(2) = 3/4 − 2/4 = 1/4
(4) P[Y > 1.5] = 1 − P[Y ≤ 1.5] = 1 − FY(1.5) = 1 − (1.5)/4 = 5/8

Quiz 3.2
(1) First we will find the constant c and then we will sketch the PDF. To find c, we use the fact that the PDF must integrate to 1. We will evaluate this integral using integration by parts:

∫(−∞,∞) fX(x) dx = ∫(0,∞) c x e^(−x/2) dx        (1)
  = [−2cx e^(−x/2)](0,∞) + ∫(0,∞) 2c e^(−x/2) dx        (2)
  = [−4c e^(−x/2)](0,∞) = 4c        (3)

Thus c = 1/4 and X has the Erlang (n = 2, λ = 1/2) PDF

fX(x) = (x/4) e^(−x/2) for x ≥ 0; 0 otherwise        (4)

[Figure: fX(x) rises from 0, peaks near x = 2, and decays for larger x.]

(2) To find the CDF FX(x), we first note that X is a nonnegative random variable so that FX(x) = 0 for all x < 0. For x ≥ 0, integration by parts yields

FX(x) = ∫(0,x) (y/4) e^(−y/2) dy = 1 − (x/2) e^(−x/2) − e^(−x/2)        (5)

The complete expression for the CDF is

FX(x) = 1 − (x/2 + 1) e^(−x/2) for x ≥ 0; 0 otherwise        (6)

[Figure: the CDF FX(x) increasing from 0 toward 1.]

(3) From the CDF FX(x), P[0 ≤ X ≤ 4] = FX(4) − FX(0) = 1 − 3e^(−2).        (7)

(4) Similarly, P[−2 ≤ X ≤ 2] = FX(2) − FX(−2) = 1 − 2e^(−1).        (8)

Quiz 3.3
The PDF of Y is

fY(y) = 3y^2/2 for −1 ≤ y ≤ 1; 0 otherwise        (1)

[Figure: fY(y), a parabola on −1 ≤ y ≤ 1 with minimum 0 at y = 0.]

(1) The expected value of Y is

E[Y] = ∫ y fY(y) dy = ∫(−1,1) (3/2) y^3 dy = (3/8) y^4 |(−1,1) = 0        (2)

Note that the above calculation wasn't really necessary because E[Y] = 0 whenever the PDF fY(y) is an even function (i.e., fY(y) = fY(−y)).

(2) The second moment of Y is

E[Y^2] = ∫ y^2 fY(y) dy = ∫(−1,1) (3/2) y^4 dy = (3/10) y^5 |(−1,1) = 3/5        (3)

(3) The variance of Y is Var[Y] = E[Y^2] − (E[Y])^2 = 3/5.
(4) The standard deviation of Y is σY = sqrt(3/5).

Quiz 3.4
(1) When X is an exponential (λ) random variable, E[X] = 1/λ and Var[X] = 1/λ^2. Since E[X] = 3, we must have λ = 1/3. The PDF of X is

fX(x) = (1/3) e^(−x/3) for x ≥ 0; 0 otherwise        (1)

(2) We know X is a uniform (a, b) random variable. Since E[X] = 3 and Var[X] = 9, to find a and b we apply Theorem 3.6 to write

E[X] = (a + b)/2 = 3        Var[X] = (b − a)^2/12 = 9        (2)

This implies a + b = 6 and b − a = ±6√3. The only valid solution with a < b is

a = 3 − 3√3        b = 3 + 3√3        (3)

The complete expression for the PDF of X is

fX(x) = 1/(6√3) for 3 − 3√3 ≤ x < 3 + 3√3; 0 otherwise        (4)

Quiz 3.5
Each of the requested probabilities can be calculated using the Φ(z) function or Q(z) and Table 3.1 or Table 3.2. We start with the sketches.

(1) The PDFs of X and Y are shown below. The fact that Y has twice the standard deviation of X is reflected in the greater spread of fY(y). However, it is important to remember that as the standard deviation increases, the peak value of the Gaussian PDF goes down.

[Figure: the Gaussian PDFs fX(x) and fY(y); fY is wider with a lower peak.]

(2) Since X is Gaussian (0, 1),

P[−1 < X ≤ 1] = FX(1) − FX(−1) = Φ(1) − Φ(−1) = 2Φ(1) − 1 = 0.6826        (1)

(3) Since Y is Gaussian (0, 2),

P[−1 < Y ≤ 1] = FY(1) − FY(−1) = Φ(1/2) − Φ(−1/2) = 2Φ(1/2) − 1 = 0.383        (2)

(4) Again, since X is Gaussian (0, 1), P[X > 3.5] = Q(3.5) = 2.33 × 10^−4.        (3)

(5) Since Y is Gaussian (0, 2), P[Y > 3.5] = Q(3.5/2) = Q(1.75) = 0.0401.        (4)

Quiz 3.6
The CDF of X is

FX(x) = 0 for x < −1; (x + 1)/4 for −1 ≤ x < 1; 1 for x ≥ 1        (1)

[Figure: FX(x) rises linearly from 0 at x = −1 to 1/2 just below x = 1, then jumps to 1.]

The following probabilities can be read directly from the CDF:

(1) P[X ≤ 1] = FX(1) = 1
(2) P[X < 1] = FX(1−) = 1/2
(3) P[X = 1] = FX(1+) − FX(1−) = 1 − 1/2 = 1/2
(4) We find the PDF fX(x) by taking the derivative of FX(x). The resulting PDF is

fX(x) = 1/4 for −1 ≤ x < 1; (1/2)δ(x − 1) at x = 1; 0 otherwise        (2)

[Figure: fX(x), a flat level of 1/4 on −1 ≤ x < 1 plus an impulse of area 1/2 at x = 1.]

Quiz 3.7
(1) Since X is always nonnegative, FX(x) = 0 for x < 0. Also, FX(x) = 1 for x ≥ 2 since it is always true that X ≤ 2. Lastly, for 0 ≤ x ≤ 2,

FX(x) = ∫(−∞,x) fX(y) dy = ∫(0,x) (1 − y/2) dy = x − x^2/4        (1)

The complete CDF of X is

FX(x) = 0 for x < 0; x − x^2/4 for 0 ≤ x ≤ 2; 1 for x > 2        (2)

[Figure: the CDF FX(x).]

(2) The probability that Y = 1 is

P[Y = 1] = P[X ≥ 1] = 1 − FX(1) = 1 − 3/4 = 1/4        (3)

(3) Since X is nonnegative, Y is also nonnegative. Thus FY(y) = 0 for y < 0. Also, because Y ≤ 1, FY(y) = 1 for all y ≥ 1. Finally, for 0 < y < 1,

FY(y) = P[Y ≤ y] = P[X ≤ y] = FX(y)        (4)

Using the CDF FX(x), the complete expression for the CDF of Y is

FY(y) = 0 for y < 0; y − y^2/4 for 0 ≤ y < 1; 1 for y ≥ 1        (5)

[Figure: the CDF FY(y), with a jump at y = 1.]

As expected, we see that the jump in FY(y) at y = 1 is exactly equal to P[Y = 1].

(4) By taking the derivative of FY(y), we obtain the PDF fY(y). Note that when y < 0 or y > 1, the PDF is zero.

fY(y) = 1 − y/2 + (1/4)δ(y − 1) for 0 ≤ y ≤ 1; 0 otherwise        (6)

[Figure: the PDF fY(y), with an impulse of area 1/4 at y = 1.]

Quiz 3.8
(1) P[Y ≤ 6] = ∫(−∞,6) fY(y) dy = ∫(0,6) (1/10) dy = 0.6        (1)

(2) From Definition 3.15, the conditional PDF of Y given Y ≤ 6 is

fY|Y≤6(y) = fY(y)/P[Y ≤ 6] for y ≤ 6; 0 otherwise
          = 1/6 for 0 ≤ y ≤ 6; 0 otherwise        (2)

(3) The probability Y > 8 is

P[Y > 8] = ∫(8,10) (1/10) dy = 0.2        (3)

(4) From Definition 3.15, the conditional PDF of Y given Y > 8 is

fY|Y>8(y) = fY(y)/P[Y > 8] for y > 8; 0 otherwise
          = 1/2 for 8 < y ≤ 10; 0 otherwise        (4)

(5) From the conditional PDF fY|Y≤6(y), we can calculate the conditional expectation

E[Y|Y ≤ 6] = ∫ y fY|Y≤6(y) dy = ∫(0,6) (y/6) dy = 3        (5)

(6) From the conditional PDF fY|Y>8(y), we can calculate the conditional expectation

E[Y|Y > 8] = ∫ y fY|Y>8(y) dy = ∫(8,10) (y/2) dy = 9        (6)

Quiz 3.9
A natural way to produce random variables with PDF fT|T>2(t) is to generate samples of T with PDF fT(t) and then to discard those samples which fail to satisfy the condition T > 2. Here is a MATLAB function that uses this method:

function t=t2rv(m)
i=0;lambda=1/3;
t=zeros(m,1);
while (i<m),
    x=exponentialrv(lambda,1);
    if (x>2)
        t(i+1)=x;
        i=i+1;
    end
end

A second method exploits the fact that if T is an exponential (λ) random variable, then T' = T + 2 has PDF fT'(t) = fT|T>2(t). In this case the command

t=2.0+exponentialrv(1/3,m)

generates the vector t.
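As a quick check of either method (illustrative; it requires exponentialrv from matcode on the path), the memoryless property gives E[T|T > 2] = 2 + E[T] = 5:

t=2.0+exponentialrv(1/3,10000);
mean(t)     %should be close to 5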

Quiz Solutions – Chapter 4

Quiz 4.1
Each value of the joint CDF can be found by considering the corresponding probability.

(1) FX,Y(−∞, 2) = P[X ≤ −∞, Y ≤ 2] ≤ P[X ≤ −∞] = 0 since X cannot take on the value −∞.
(2) FX,Y(∞, ∞) = P[X ≤ ∞, Y ≤ ∞] = 1. This result is given in Theorem 4.1.
(3) FX,Y(∞, y) = P[X ≤ ∞, Y ≤ y] = P[Y ≤ y] = FY(y).
(4) FX,Y(∞, −∞) = P[X ≤ ∞, Y ≤ −∞] = 0 since Y cannot take on the value −∞.

Quiz 4.2
From the joint PMF of Q and G given in the table, we can calculate the requested probabilities by summing the PMF over those values of Q and G that correspond to the event.

(1) The probability that Q = 0 is

P[Q = 0] = PQ,G(0,0) + PQ,G(0,1) + PQ,G(0,2) + PQ,G(0,3) = 0.06 + 0.18 + 0.24 + 0.12 = 0.6        (1)

(2) The probability that Q = G is

P[Q = G] = PQ,G(0,0) + PQ,G(1,1) = 0.06 + 0.12 = 0.18        (2)

(3) The probability that G > 1 is

P[G > 1] = Σ(g=2,3) Σ(q=0,1) PQ,G(q,g) = 0.24 + 0.12 + 0.16 + 0.08 = 0.6        (3)

(4) The probability that G > Q is

P[G > Q] = Σ(q=0,1) Σ(g>q) PQ,G(q,g) = 0.18 + 0.24 + 0.12 + 0.16 + 0.08 = 0.78        (4)

Quiz 4.3
The easiest way to calculate these marginal PMFs is to simply sum each row and column of the joint PMF table:

PH,B(h,b)    b=0    b=2    b=4  |  PH(h)
h=−1           0    0.4    0.2  |   0.6
h=0          0.1      0    0.1  |   0.2
h=1          0.1    0.1      0  |   0.2
PB(b)        0.2    0.5    0.3  |

(1) For each value of h, the marginal PMF of H is PH(h) = Σ(b=0,2,4) PH,B(h,b); this corresponds to the row sum across the table of the joint PMF.
(2) Similarly, for each value of b, the marginal PMF of B is PB(b) = Σ(h=−1,0,1) PH,B(h,b); this corresponds to the column sum down the table of the joint PMF.

Quiz 4.4
To find the constant c, we apply ∫∫ fX,Y(x,y) dx dy = 1. Specifically,

∫∫ fX,Y(x,y) dx dy = ∫(0,2) ∫(0,1) c x y dx dy        (1)
  = c ∫(0,2) y [x^2/2](x=0,1) dy = (c/2) ∫(0,2) y dy = (c/4) y^2 |(0,2) = c        (2)

Thus c = 1. To calculate P[A], we write

P[A] = ∫∫(A) fX,Y(x,y) dx dy        (3)

[Figure: the quarter disk A = {x ≥ 0, y ≥ 0, x^2 + y^2 ≤ 1}.]

To integrate over A, we convert to polar coordinates using the substitutions x = r cos θ, y = r sin θ and dx dy = r dr dθ, yielding

P[A] = ∫(0,π/2) ∫(0,1) r^2 sin θ cos θ r dr dθ = [r^4/4](0,1) [sin^2 θ / 2](0,π/2) = 1/8        (4)

Quiz 4.5
By Theorem 4.8, the marginal PDF of X is

fX(x) = ∫(−∞,∞) fX,Y(x,y) dy        (1)

For x < 0 or x > 1, fX(x) = 0. For 0 ≤ x ≤ 1,

fX(x) = (6/5) ∫(0,1) (x + y^2) dy = (6/5)[xy + y^3/3](y=0,1) = (6/5)(x + 1/3) = (6x + 2)/5        (2)

The complete expression for the PDF of X is

fX(x) = (6x + 2)/5 for 0 ≤ x ≤ 1; 0 otherwise        (3)

By the same method we obtain the marginal PDF for Y. For 0 ≤ y ≤ 1,

fY(y) = ∫(−∞,∞) fX,Y(x,y) dx = (6/5) ∫(0,1) (x + y^2) dx = (6/5)(1/2 + y^2) = (3 + 6y^2)/5        (4)

Since fY(y) = 0 for y < 0 or y > 1, the complete expression for the PDF of Y is

fY(y) = (3 + 6y^2)/5 for 0 ≤ y ≤ 1; 0 otherwise        (5)

Quiz 4.6
(A) The time required for the transfer is T = L/B. For each pair of values of L and B, we can calculate the time T needed for the transfer. We can write these down on the table for the joint PMF of L and B as follows:

PL,B(l,b)       b=14,400       b=21,600       b=28,800
l=518,400     0.20 (T=36)    0.10 (T=24)    0.05 (T=18)
l=2,592,000   0.05 (T=180)   0.10 (T=120)   0.20 (T=90)
l=7,776,000   0.00 (T=540)   0.10 (T=360)   0.20 (T=270)

From the table, writing down the PMF of T is straightforward:

PT(t) = 0.05 for t = 18; 0.1 for t = 24; 0.2 for t = 36, 90; 0.1 for t = 120; 0.05 for t = 180; 0.2 for t = 270; 0.1 for t = 360; 0 otherwise        (1)

(B) First, we observe that since 0 ≤ X ≤ 1 and 0 ≤ Y ≤ 1, W = XY satisfies 0 ≤ W ≤ 1. Thus FW(0) = 0 and FW(1) = 1. For 0 < w < 1, we calculate the CDF FW(w) = P[W ≤ w]. Integrating over the region W ≤ w is fairly complex; the calculus is simpler if we integrate over the region XY > w.

[Figure: the unit square with the curve XY = w; the region XY > w lies above the curve.]

FW(w) = 1 − P[XY > w]        (2)
      = 1 − ∫(w,1) ∫(w/x,1) dy dx        (3)
      = 1 − ∫(w,1) (1 − w/x) dx        (4)
      = 1 − [x − w ln x](x=w,1)        (5)
      = 1 − (1 − w + w ln w) = w − w ln w        (6)

The complete expression for the CDF is

FW(w) = 0 for w < 0; w − w ln w for 0 ≤ w ≤ 1; 1 for w > 1        (7)

By taking the derivative of the CDF, we find the PDF is

fW(w) = −ln w for 0 ≤ w ≤ 1; 0 otherwise        (8)

Quiz 4.7
(A) It is helpful to first make a table that includes the marginal PMFs:

PL,T(l,t)    t=40    t=60  |  PL(l)
l=1          0.15    0.1   |  0.25
l=2          0.3     0.2   |  0.5
l=3          0.15    0.1   |  0.25
PT(t)        0.6     0.4   |

(1) The expected value of L is

E[L] = 1(0.25) + 2(0.5) + 3(0.25) = 2        (1)

Since the second moment of L is

E[L^2] = 1^2(0.25) + 2^2(0.5) + 3^2(0.25) = 4.5        (2)

the variance of L is

Var[L] = E[L^2] − (E[L])^2 = 0.5        (3)

(2) The expected value of T is

E[T] = 40(0.6) + 60(0.4) = 48        (4)

The second moment of T is E[T^2] = 40^2(0.6) + 60^2(0.4) = 2400. Thus

Var[T] = E[T^2] − (E[T])^2 = 2400 − 48^2 = 96        (5)

(3) The correlation is

E[LT] = Σ(t=40,60) Σ(l=1,2,3) l t PL,T(l,t)        (6)
      = 1(40)(0.15) + 2(40)(0.3) + 3(40)(0.15) + 1(60)(0.1) + 2(60)(0.2) + 3(60)(0.1) = 96        (7)

(4) From Theorem 4.16(a), the covariance of L and T is

Cov[L,T] = E[LT] − E[L]E[T] = 96 − 2(48) = 0        (8)

(5) Since Cov[L,T] = 0, the correlation coefficient is ρL,T = 0.

(B) As in the discrete case, the calculations become easier if we first calculate the marginal PDFs fX(x) and fY(y). For 0 ≤ x ≤ 1,

fX(x) = ∫(0,2) x y dy = (1/2) x y^2 |(y=0,2) = 2x        (9)

Similarly, for 0 ≤ y ≤ 2,

fY(y) = ∫(0,1) x y dx = (1/2) x^2 y |(x=0,1) = y/2        (10)

The complete expressions for the marginal PDFs are

fX(x) = 2x for 0 ≤ x ≤ 1; 0 otherwise        fY(y) = y/2 for 0 ≤ y ≤ 2; 0 otherwise        (11)

From the marginal PDFs, it is straightforward to calculate the various expectations.

(1) The first and second moments of X are

E[X] = ∫(0,1) 2x^2 dx = 2/3        E[X^2] = ∫(0,1) 2x^3 dx = 1/2        (12)

The variance of X is Var[X] = E[X^2] − (E[X])^2 = 1/2 − 4/9 = 1/18.

(2) The first and second moments of Y are

E[Y] = ∫(0,2) (y^2/2) dy = 4/3        E[Y^2] = ∫(0,2) (y^3/2) dy = 2        (13)

The variance of Y is Var[Y] = E[Y^2] − (E[Y])^2 = 2 − 16/9 = 2/9.

(3) The correlation of X and Y is

E[XY] = ∫(0,1) ∫(0,2) (xy)(xy) dy dx = ∫(0,1) x^2 dx ∫(0,2) y^2 dy = (1/3)(8/3) = 8/9        (14)

(4) The covariance of X and Y is

Cov[X,Y] = E[XY] − E[X]E[Y] = 8/9 − (2/3)(4/3) = 0        (15)

(5) Since Cov[X,Y] = 0, the correlation coefficient is ρX,Y = 0.

Quiz 4.8
(A) Since the event V > 80 occurs only for the pairs (L,T) = (2,60), (3,40) and (3,60),

P[A] = P[V > 80] = PL,T(2,60) + PL,T(3,40) + PL,T(3,60) = 0.45        (1)

By Definition 4.9, the conditional PMF of L and T given A is

PL,T|A(l,t) = PL,T(l,t)/P[A] for lt > 80; 0 otherwise        (2)

We can represent this conditional PMF in the following table:

PL,T|A(l,t)    t=40    t=60
l=1              0       0
l=2              0      4/9
l=3             1/3     2/9

The conditional expectation of V = LT can be found from the conditional PMF:

E[V|A] = Σl Σt l t PL,T|A(l,t) = (2·60)(4/9) + (3·40)(1/3) + (3·60)(2/9) = 133 1/3        (3)

For the conditional variance Var[V|A], we first find the conditional second moment

E[V^2|A] = Σl Σt (l t)^2 PL,T|A(l,t) = (2·60)^2(4/9) + (3·40)^2(1/3) + (3·60)^2(2/9) = 18,400        (4)

It follows that

Var[V|A] = E[V^2|A] − (E[V|A])^2 = 622 2/9        (5)

(B) For continuous random variables X and Y, we first calculate the probability of the conditioning event B:

P[B] = ∫∫(B) fX,Y(x,y) dx dy = ∫(40,60) ∫(80/y,3) (xy/4000) dx dy        (6)
     = ∫(40,60) (y/4000) [x^2/2](x=80/y,3) dy        (7)
     = (1/4000) ∫(40,60) (9y/2 − 3200/y) dy        (8)
     = 9/8 − (4/5) ln(3/2) ≈ 0.801        (9)

The conditional PDF of X and Y is

fX,Y|B(x,y) = fX,Y(x,y)/P[B] for (x,y) ∈ B; 0 otherwise
            = K x y for 40 ≤ y ≤ 60, 80/y ≤ x ≤ 3; 0 otherwise        (10)

where K = (4000 P[B])^−1.
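The zero covariance in Quiz 4.7 part (A) can also be checked numerically with the matcode functions (an illustrative sketch):

[SL,ST]=ndgrid([1 2 3],[40 60]);     %grids for L and T
PLT=[0.15 0.1; 0.3 0.2; 0.15 0.1];   %joint PMF from the table above
finiteexp(SL.*ST,PLT)                %E[LT] = 96
finitecov(SL,ST,PLT)                 %Cov[L,T] = 0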

2} and B has range S B = {0.78 −∞ −∞ 60 3 40 (x y)2 f X. 116. A table of the joint PMF will include all four possible combinations of A and B. AR. we can note that A has range S A = {0.Y |B (x. b) b=0 b=1 a=0 PB|A (0|0)PA (0) PB|A (1|0)PA (0) PB|A (0|2)PA (2) PB|A (1|2)PA (2) a=2 28 Address:104 pine meadows loop. us (United States) Zip Code:71901 .com Phone:5017621195 where K = (4000P[B])−1 . y) d x d y K x 2 y2 d x d y y2 x 3 x=3 x=80/y (14) (15) = (K /3) = (K /3) 80/y 60 40 60 40 dy (16) (17) (18) 27y 2 − 803 /y dy 60 40 = (K /3) 9y 3 − 803 ln y The conditional second moment of K given B is E W 2 |B = = ∞ ∞ ≈ 120.9 (24) (A) (1) The joint PMF of A and B can be found from the marginal and conditional PMFs via PA.B (a.30 Quiz 4. y) d x d y K x 3 y3 d x d y y3 x 4 x=3 x=80/y (19) (20) = (K /4) 80/y 60 40 60 40 dy (21) (22) ≈ 16.Name:joey iwatsuru Email:joeyiwat@yahoo. however. The conditional expectation of W given event B is E [W |B] = = ∞ ∞ −∞ −∞ 60 3 40 x y f X.B (a.Y |B (x. Incorporating the information from the given conditional PMFs can be confusing. The general form of the table is PA. hot springs. b) = PB|A (b|a)PA (a). 1}. Consequently.10 (23) = (K /4) 81y 3 − 804 /y dy 60 40 = (K /4) (81/4)y 4 − 804 ln y It follows that the conditional variance of W given B is Var [W |B] = E W 2 |B − (E [W |B])2 ≈ 1528.

we have b=0 b=1 PA.5)(0.3/0.4) (0.Y (x.62 a = 0 PA.6) (0. y) = f Y |X (y|x) f X (x) = (2) From the given conditional PDF f Y |X (y|x). AR. we can calculate the the conditional PMF ⎧ 0. us (United States) Zip Code:71901 . 0) ⎨ PA|B (a|0) = = 0.5 (1) (3) From the joint PMF PA.8)(0.B (a.4) (0. b) b = 0 b = 1 a=0 0.62 a = 2 (2) ⎩ PB (0) 0 otherwise ⎧ ⎨ 16/31 a = 0 = 15/31 a = 2 (3) ⎩ 0 otherwise (4) We can calculate the conditional variance Var[A|B = 0] using the conditional PMF PA|B (a|0).5)(0.32 0.5) = 0.Name:joey iwatsuru Email:joeyiwat@yahoo. b) a=0 (0.com Phone:5017621195 Substituting values from PB|A (b|a) and PA (a).B (a.2)(0. it is easy to calculate the conditional expectation 1 E [B|A = 2] = b=0 b PB|A (b|2) = (0)(0. hot springs. First we calculate the conditional expected value E [A|B = 0] = a a PA|B (a|0) = 0(16/31) + 2(15/31) = 30/31 (4) The conditional second moment is E A2 |B = 0 = a a 2 PA|B (a|0) = 02 (16/31) + 22 (15/31) = 60/31 (5) The conditional variance is then Var[A|B = 0] = E A2 |B = 0 − (E [A|B = 0])2 = (B) (1) The joint PDF of X and Y is f X.B (a.32/0.08 0.B (a.6) a=2 or PA. b). 0 ≤ x ≤ 1 0 otherwise (7) 960 961 (6) Address:104 pine meadows loop. f Y |X (y|1/2) = 29 8y 0 ≤ y ≤ 1/2 0 otherwise (8) 6y 0 ≤ y ≤ x.3 a=2 (2) Given the conditional PMF PB|A (b|2).5) + (1)(0.3 0.

16 0.12 0. there are no obvious pairs q. (2) For random variables Q and G from Quiz 4. y) = 0. 1). we see that given Y = 1/2. 1/2) 6(1/2) =2 = f Y (1/2) 3/2 (10) ∞ −∞ f X.X 2 (x1 .08 0. To ﬁnd f Y (1/2).60 0. hot springs.40 0. f X |Y (x|1/2) = f X.1.06 0.Y (x.04 0. for 1/2 ≤ x ≤ 1.40 q=1 PG (g) 0. b) PDF. 0 ≤ x2 ≤ 2 0 otherwise 30 (2) (3) Address:104 pine meadows loop. PX. y) = PX (x)PY (y).2.com Phone:5017621195 (3) The conditional PDF of Y given X = 1/2 is f X |Y (x|1/2) = f X. we integrate the joint PDF. g that fail the independence requirement. we can conclude that X and Y are dependent. independence requires that either PX (x) = 0 or PY (y) = 0. y such that PX. PQ.Y (x.Y (x.10 0. we observe that PY (1) = 0. Unlike X and Y in part (a). us (United States) Zip Code:71901 .Y (x.1/2 ( ) d x = 1 1/2 6(1/2) d x = 3/2 (9) (4) From the pervious part.12 0. g) g = 0 g = 1 g = 2 g = 3 PQ (q) q=0 0.01. Hence Q and G are independent. g) = PQ (q)PG (g) for every pair q. (B) (1) Since X 1 and X 2 are independent. g) in Quiz 4.10 (A) (1) For random variables X and Y from Example 4. Note that whenever PX. AR.Y (0. However. Thus.30 0.24 0. f X 1 . the conditional PDF of X is uniform (1/2. x2 ) = f X 1 (x1 ) f X 2 (x2 ) = (1 − x1 /2)(1 − x2 /2) 0 ≤ x1 ≤ 2.09 and PX (0) = 0.2.G (q. In this case.G (q.20 Careful study of the table will verify that PQ. 1) = 0 = PX (0) PY (1) (1) (1 − 1/2)2 1 = 12 48 (11) Since we have found a pair x. by the deﬁnition of the uniform (a.Name:joey iwatsuru Email:joeyiwat@yahoo. Var [X |Y = 1/2] = Quiz 4.18 0.G (q. g. f Y (1/2) = Thus. we calculate the marginal PMFs from the table of the joint PMF PQ. it is not obvious whether they are independent. 1/2)/ f Y (1/2).

That is.Y (x. we have 1 2 2 e−2(x −x y+y )/3 . us (United States) Zip Code:71901 . the conditional expected value and standard deviation of X given Y = y are 2 E [X |Y = y] = y/2 σ X = σ1 (1 − ρ 2 ) = 3/4. P [Z ≤ z] = P [X 1 ≤ z. we see that E[X |Y = 2] = 1 and Var[X |Y = 2] = 3/4. (1) (2) By Theorem 4. we need to ﬁnd the CDF of each X i . σ2 = σY = 1.29. FZ (z) = (z − z 2 /4)2 (7) The complete expression for the CDF of Z is ⎧ z<0 ⎨ 0 2 /4)2 0 ≤ z ≤ 2 FZ (z) = (z − z ⎩ 1 z>1 (8) Quiz 4. The conditional PDF of X given Y = 2 is simply the Gaussian PDF 1 2 e−2(x−1) /3 .17 and Theorem 4. AR. hot springs. f X |Y (x|2) = √ 3π/2 (5) 31 Address:104 pine meadows loop. y) = √ 3π 2 (3) µ2 = µY = 0. from the problem statement. From the PDF f X (x). Speciﬁcally. X 2 ≤ z] = P [X 1 ≤ z] P [X 2 ≤ z] = [FX (z)]2 (4) (5) To complete the problem.11 This problem just requires identifying the various terms in Deﬁnition 4. f X. and that σ1 = σ X = 1. ˜ (4) When Y = y = 2.17. µ1 = µ X = 0. X 2 ) is found by observing that Z ≤ z iff X 1 ≤ z and X 2 ≤ z. (2) (1) Applying these facts to Deﬁnition 4. the CDF is ⎧ x <0 ⎨ 0 x 2 /4 0 ≤ x ≤ 2 FX (x) = f X (y) dy = (6) x−x ⎩ −∞ 1 x >2 Thus for 0 ≤ z ≤ 2. The CDF of Z = max(X 1 .Name:joey iwatsuru Email:joeyiwat@yahoo.30.com Phone:5017621195 (2) Let FX (x) denote the CDF of both X 1 and X 2 . we know that ρ = 1/2.

px=0. 0 otherwise. and an independent uniform (0. x) PMF. PY |X (y|x) = 1/x y = 1.y’]. 1) random variable U .*rand(m.25*ones(4. Instead. we use an alternate approach. First we observe that X has the discrete uniform (1.px. .4]. . x) PMF via Y = xU .3. 32 Address:104 pine meadows loop. Also.com Phone:5017621195 Quiz 4.1)). . That is. Y has a discrete uniform (1. PX (x) = 1/4 x = 1.m).12 One straightforward method is to follow the approach of Example 4.28. AR.1). 3. 4) PMF. xy=[x’. hot springs.Name:joey iwatsuru Email:joeyiwat@yahoo. 4. x=finiterv(sx. x 0 otherwise (1) Given X = x. . us (United States) Zip Code:71901 . This observation prompts the following program: function xy=dtrianglerv(m) sx=[1. given X = x. 2.2. we can generate a sample value of Y with a discrete uniform (1. y=ceil(x.

. PY (y) = P [Y1 = y1 .com Phone:5017621195 Quiz Solutions – Chapter 5 Quiz 5. . x3 ) = 0 unless 0 ≤ x 1 ≤ 33 Address:104 pine meadows loop. 6 d x1 = 6x2 .2 By deﬁnition of A. we have f X 1 . x2 ) = f X 2 .X 3 (x1 . Within these constraints.1 We ﬁnd P[C] by integrating the joint PDF over the region of interest. X 3 − X 2 = y3 ] = P [X 1 = y1 . 2. Speciﬁcally. x2 ) = 0 unless 0 ≤ x 1 ≤ x2 ≤ 1. 2. . hot springs.X 3 (x2 . AR.X 3 (x2 . x3 ) = 0 unless 0 ≤ x2 ≤ x3 ≤ 1. x3 ) = f X 1 . we must keep in mind that f X 1 . Y3 = y3 ] = P [X 1 = y1 . Since 0 < X 1 < X 2 < X 3 . y3 ∈ {1. X 2 = y2 + y1 . f X 2 .X 2 (x1 . (1) (2) =4 0 y2 dy2 0 y4 dy4 Quiz 5. Y1 = X 1 . us (United States) Zip Code:71901 . X 2 − X 1 = y2 . 6 d x2 = 6(x3 − x1 ). Y2 = X 2 − X 1 and Y3 = X 3 − X 2 . P [C] = 0 1/2 y2 1/2 y4 dy2 0 1/2 dy1 0 dy4 0 1/2 4dy3 = 1/4. . each Yi must be a strictly positive integer. y3 ∈ {1. the complete expression for the joint PMF of Y is PY (y) = (1 − p) p a y y1 . X 3 = y3 + y2 + y1 ] = (1 − p)3 p y1 +y2 +y3 (1) (2) (3) (4) By deﬁning the vector a = 1 1 1 .Name:joey iwatsuru Email:joeyiwat@yahoo. . y2 . x3 ) = ∞ −∞ ∞ −∞ ∞ −∞ f X (x) d x3 = f X (x) d x1 = f X (x) d x2 = 1 6 d x3 = 6(1 − x2 ).} 0 otherwise (5) Quiz 5.3 First we note that each marginal PDF is nonzero only if any subset of the xi obeys the ordering contraints 0 ≤ x 1 ≤ x2 ≤ x3 ≤ 1.X 3 (x1 .X 2 (x1 . and that f X 1 . Y2 = y2 .}. Thus. . (1) (2) (3) x2 x2 0 x3 x1 In particular. y2 . for y1 .

x3 ) d x2 = 1 x1 1 6(1 − x2 ) d x2 = 3(1 − x1 )2 6x2 d x3 = 6x2 (1 − x2 ) 2 6x2 d x2 = 3x3 (7) (8) (9) x2 x3 0 The complete expressions are f X 1 (x1 ) = f X 2 (x2 ) = f X 3 (x3 ) = 3(1 − x1 )2 0 ≤ x1 ≤ 1 0 otherwise 6x2 (1 − x2 ) 0 ≤ x2 ≤ 1 0 otherwise 2 3x3 0 ≤ x3 ≤ 1 0 otherwise (10) (11) (12) Quiz 5. x3 ) = 6(1 − x2 ) 0 ≤ x1 ≤ x2 ≤ 1 0 otherwise 6x2 0 ≤ x2 ≤ x3 ≤ 1 0 otherwise 6(x3 − x1 ) 0 ≤ x1 ≤ x3 ≤ 1 0 otherwise (4) (5) (6) Now we can ﬁnd the marginal PDFs. AR.X 3 (x1 .X 2 (x1 . x2 ) = f X 2 .Name:joey iwatsuru Email:joeyiwat@yahoo. The complete expressions are f X 1 .com Phone:5017621195 x3 ≤ 1. f X 1 (x1 ) = f X 2 (x2 ) = f X 3 (x3 ) = ∞ −∞ ∞ −∞ ∞ −∞ f X 1 . w) = 4 0 ≤ v1 ≤ v2 ≤ 1. x3 ) d x3 = f X 2 .X 2 (x1 .4 In the PDF f Y (y).X 3 (x2 . hot springs. Y2 W= Y3 . When 0 ≤ xi ≤ 1 for each xi . the components have dependencies as a result of the ordering constraints Y1 ≤ Y2 and Y3 ≤ Y4 .X 3 (x2 . 0 ≤ w1 ≤ w2 ≤ 1 0 otherwise (2) Y1 .X 3 (x2 . Y4 (1) 34 Address:104 pine meadows loop. us (United States) Zip Code:71901 . We can separate these constraints by creating the vectors V= The joint PDF of V and W is f V. x2 ) d x2 = f X 2 .W (v. x3 ) = f X 1 .

3. . . x3 ∈ {0.19. .W (v.6) random variable and X 3 is a binomial (5. 5} ⎩ 0 otherwise We can ﬁnd the marginal PMF for each X i from the joint PMF PX (x). however it is simpler to just start from ﬁrst principles and observe that X 1 is the number of occurrences of L in ﬁve independent tests. A and R. PX (x) = (1) x1 .3) random variable.Name:joey iwatsuru Email:joeyiwat@yahoo. f W (w) = = 4(1 − w1 ) dw1 = 2 f V. Similarly. we see that X 1 is a binomial (n. conﬁrming that V and W are independent vectors. w) = f V (v) f W (w). x2 . w) dv1 dv2 1 0 1 v1 (6) (7) 4 dv2 dv1 = 2 It follows that V and W have PDFs f V (v) = 2 0 ≤ v1 ≤ v2 ≤ 1 . . If we view each test as a trial with success probability P[L] = 0. .6)x2 (0. w) dw1 dw2 1 w1 1 0 (3) (4) (5) 4 dw2 dw1 = Similarly. X 2 is a binomial (5.3.x2 . 0. 1. Quiz 5. the vector X = X 1 X 2 X 3 indicating the number of outcomes of each subexperiment has the multinomial PMF ⎧ 5 ⎨ x1 . each test is a subexperiment with three possible outcomes: L. 5 0 otherwise 35 5 x (2) Address:104 pine meadows loop. . for 0 ≤ w1 ≤ w2 ≤ 1.1) random variable. 0. For 0 ≤ v1 ≤ v2 ≤ 1. AR. p) = (5. That is.x3 (0.W (v.1)x3 x1 + x2 + x3 = 5. f V (v) = = 0 1 f V. us (United States) Zip Code:71901 . 0.com Phone:5017621195 We must verify that V and W are independent. .1.3)x1 (0.5 (A) Referring to Theorem 1. p2 = 0.W (v. for p1 = 0. 1. 0 otherwise f W (w) = 2 0 ≤ w 1 ≤ w2 ≤ 1 0 otherwise (8) It is easy to verify that f V. . hot springs.6 and p3 = 0. PX i (x) = pix (1 − pi )5−x x = 0. In ﬁve trials.

1)2 + 0. X 2 = w. 2.1)2 + 0. (1) (2) (3) E [X 2 ] = 0 1 E [X 3 ] = 0 1 To ﬁnd the correlation matrix R X . w = 4.6)2 (0. 2) + PX (2. Thus. 1) 5![0. or X 3 = w occurs. hot springs.3: E [X 1 ] = 0 1 ∞ −∞ x f X i (x) d x of µ X . PW (2) = PX (1. the constraints on y resulting from the constraints 0 ≤ X 1 ≤ X 2 ≤ X 3 can be much more complicated. f 3 X 2 2 2 2 (1/8)e−(y3 −4)/2 4 ≤ y1 ≤ y2 ≤ y3 = 0 otherwise (9) (10) (6) (7) (8) Note that for other matrices A.1458 = (3) (4) (5) In addition. we need to ﬁnd E[X i X j ] for all i and j. we must use Theorem 5. 1. X 2 and X 3 are not independent.6)2 (0.3(0.6 to ﬁnd the PMF of W . us (United States) Zip Code:71901 .1)] 2!2!1! = 0.10 to write f Y (y) = y1 − 4 y2 − 4 y3 − 4 1 . . for w = 3. In particular.Name:joey iwatsuru Email:joeyiwat@yahoo.6 We start by ﬁnding the components E[X i ] = the marginal PDFs f X i (x) found in Quiz 5. AR. 3x 3 d x = 3/4.288 PW (5) = PX 1 (5) + PX 2 (5) + PX 3 (5) = 0. Furthermore.6)(0.com Phone:5017621195 From the marginal PMFs. 2. we can apply Theorem 5. PW (0) = PW (1) = 0. 2) + PX (2. since X 1 + X 2 + X 3 = 5 and since each X i is non-negative. we see that X 1 . Quiz 5.0802 (B) Since each Yi = 2X i + 4. 6x 2 (1 − x) d x = 1/2. PW (3) = PX 1 (3) + PX 2 (3) + PX 3 (3) = 0. To do so. Hence. We start with 36 Address:104 pine meadows loop. and w = 5.32 (0.486 PW (4) = PX 1 (4) + PX 2 (4) + PX 3 (4) = 0. the event W = w occurs if and only if one of the mutually exclusive events X 1 = w. we use 3x(1 − x)2 d x = 1/4.32 (0.

1 x2 1 0 2 6x2 x3 d x3 d x2 x1 x2 f X 1 . x3 =1 x3 =x1 = 0 1 3 2 2 (2x1 x3 − 3x1 x3 ) d x1 = 0 1 2 4 [2x1 − 3x1 + x1 ] d x1 = 1/5.Name:joey iwatsuru Email:joeyiwat@yahoo. 6x 3 (1 − x) d x = 3/10.3. us (United States) Zip Code:71901 .X 2 (x1 . 1/4 3/8 ⎦ = 80 1 2 3 3/8 9/16 (17) (18) Address:104 pine meadows loop. hot springs. AR. X has correlation matrix ⎡ ⎤ 1/10 3/20 1/5 R X = ⎣3/20 3/10 2/5⎦ . (4) (5) (6) 2 E X2 = 2 E X3 = 1 0 1 0 Using marginal PDFs from Quiz 5. x2 ) . 1/5 2/5 3/5 Vector X has covariance matrix C X = R X − E [X] E [X] ⎡ ⎤ ⎡ ⎤ 1/10 3/20 1/5 1/4 ⎣3/20 3/10 2/5⎦ − ⎣1/2⎦ = 1/5 2/5 3/5 3/4 ⎡ ⎤ ⎡ 1/10 3/20 1/5 1/16 ⎣3/20 3/10 2/5⎦ − ⎣ 1/8 = 1/5 2/5 3/5 3/16 37 (15) (16) 1/4 1/2 3/4 ⎤ ⎡ ⎤ 3 2 1 1/8 3/16 1 ⎣ 2 4 2⎦ . 3x 4 d x = 3/5.com Phone:5017621195 the second moments: E 2 X1 = 0 1 3x 2 (1 − x)2 d x = 1/10. Summarizing the results. the cross terms are E [X 1 X 2 ] = = = 0 ∞ ∞ −∞ −∞ 1 1 0 1 x1 3 4 [x1 − 3x1 + 2x1 ] d x1 = 3/20. d x1 d x2 d x1 (7) (8) (9) (10) (11) (12) (13) (14) 6x1 x2 (1 − x2 ) d x2 E [X 2 X 3 ] = 0 1 = E [X 1 X 3 ] = 0 2 4 [3x2 − 3x2 ] d x2 = 2/5 1 x1 1 6x1 x3 (x3 − x1 ) d x3 d x1 .

0000. Here is the output of julytemps. The covariance matrix of Y is 1 × 1 and is just equal to Var[Y ].18 that µ X = b and that C X = AA = 2 1 1 −1 2 1 5 1 = .(1:31)). Next we calculate Var[Y ]. the ﬁrst two lines generate the 31 × 31 covariance matrix CT. computing the covariance matrix by calculus can be a time consuming task.97792616932396 38 Address:104 pine meadows loop. p=phi((T-80)/sqrt(CY)).Name:joey iwatsuru Email:joeyiwat@yahoo. A=ones(31.02207383067604 Columns 5 through 6 0. 0 (1) It follows from Theorem 5.50000000000000 0.0000 Note that P[T ≤ 70] is not actually zero and that P[T ≤ 90] is not actually 1.16. us (United States) Zip Code:71901 .0. just a Gaussian random variable.0000 1. 1 −1 b= 2 .00002844263128 0. [D1 D2]=ndgrid((1:31). rounds off those probabilities. AR.7 We observe that X = AZ + b where A= 2 1 .8 First. Here is the long format output: >> format long >> julytemps([70 75 80 85 90 95]) ans = Columns 1 through 4 0. The expected value of Y is µY = µT = 80. Since T is a Gaussian random vector. Its just that the M ATLAB’s short format output.16 tells us that Y is a 1 dimensional Gaussian vector./(1+abs(D1-D2)). function p=julytemps(T).0221 0.99999999922010 0. Theorem 5. invoked with the command format short. i.com Phone:5017621195 This problem shows that even for fairly simple joint PDFs.1)/31.9779 1. by Theorem 5..m. or CT . we observe that Y = AT where A = 1/31 1/31 · · · 1/31 . 1 −1 1 2 (2) Quiz 5. CY=(A’)*CT*A.m: >> julytemps([70 75 80 85 90 95]) ans = 0. CT=36. Var[Y ] = ACT A . Thus.5000 0. The ﬁnal step is to use the (·) function to calculate P[Y < T ]. hot springs. In julytemps.0000 0.99997155736872 0.e. Quiz 5.

we see that ⎡ ⎤ c0 c1 · · · c30 . .Name:joey iwatsuru Email:joeyiwat@yahoo. The function julytemps2 use the toeplitz to generate the correlation matrix CT .com Phone:5017621195 The ndgrid function is a useful to way calculate many covariance matrices. function p=julytemps2(T). . However. In fact. us (United States) Zip Code:71901 .. CY=(A’)*CT*A.. ⎥ ⎢ . A=ones(31. 1 + |i − j| (1) If we write out the elements of the covariance matrix. C X has a special structure. 39 Address:104 pine meadows loop.. jth element is CT (i./(1+abs(0:30)). . ⎢ c1 c0 CT = ⎢ .1)/31. AR.0. the i. CT=toeplitz(c). p=phi((T-80)/sqrt(CY)). c30 · · · c1 c0 (2) This covariance matrix is known as a symmetric Toeplitz matrix. . in this problem. c1 ⎦ . We will see in Chapters 9 and 11 that Toeplitz covariance matrices are quite common. c=36. j) = c|i− j| = 36 . hot springs. ⎥.. M ATLAB has a toeplitz function for generating them. . ⎥ . ⎣ .

f W (w) = e−3w e y w 0 = 6 e−2w − e−3w (3) Since f W (w) = 0 for w < 0.5 Thus the variance of K i is Var[K i ] = E K i2 − (E [K i ])2 = 7. by Theorem 6. For w > 0. . By Theorem 6. hot springs. K n denote a sequence of iid random variables each with PMF PK (k) = 1/4 k = 1. Var[Wn ] = Var[K 1 ] + · · · + Var[K n ] = 1.1 Let K 1 . . . .com Phone:5017621195 Quiz Solutions – Chapter 6 Quiz 6. . us (United States) Zip Code:71901 . . . the variance of the sum equals the sum of the variances. we note that the ﬁrst two moments of K i are E [K i ] = (1 + 2 + 3 + 4)/4 = 2.25n Quiz 6. AR. the PDF of W = X + Y is f W (w) = ∞ −∞ f X (w − y) f Y (y) dy = 6 0 w e−3(w−y) e−2y dy (2) Fortunately. the expected value of Wn is E [Wn ] = E [K 1 ] + · · · + E [K n ] = n E [K i ] = 2. . First.3. Hence.5 E K i2 = (12 + 22 + 32 + 42 )/4 = 7.Name:joey iwatsuru Email:joeyiwat@yahoo.5n (5) Since the rolls are independent. this integral is easy to evaluate. 4 0 otherwise (1) We can write Wn in the form of Wn = K 1 + · · · + K n .25 Since E[K i ] = 2. (4) 40 Address:104 pine meadows loop.2 Random variables X and Y have PDFs f X (x) = 3e−3x x ≥ 0 0 otherwise f Y (y) = 2e−2y y ≥ 0 0 otherwise (1) (6) (4) (2) (3) Since X and Y are nonnegative. K n are independent. a conmplete expression for the PDF of W is f W (w) = 6e−2w 1 − e−w 0 w ≥ 0. That is. . the random variables K 1 .5. . W = X + Y is nonnegative. . .5.5)2 = 1. otherwise.5 − (2.

2 1 + es + e2s + e3s + e4s (1) We ﬁnd the moments by taking derivatives. we need only ﬁnd the expected value and variance.Name:joey iwatsuru Email:joeyiwat@yahoo.8 (4) (5) (6) (7) = 0.2(es + 4e2s + 9e3s + 16e4s ) s=0 s=0 =6 = 20 = 70. Theorem 6. Theorem 6. Since the expectation of the sum equals the sum of the expectations: E [W ] = α E [X 1 ] + α 2 E [X 2 ] + · · · + α n E [X n ] = 0 41 (3) Address:104 pine meadows loop.2(es + 8e2s + 27e3s + 64e4s ) s=0 s=0 = 0. Thus to ﬁnd the PDF of W .com Phone:5017621195 Quiz 6. The ﬁrst derivative of φ K (s) is d φ K (s) = 0. we continue to take derivatives: E K2 = E K3 E K4 d 2 φ K (s) ds 2 d 3 φ K (s) = ds 3 d 4 φ K (s) = ds 4 = 0.8 says the MGF of J is φ J (s) = (φ K (s))m = (2) (B) Since the set of α j X j are independent Gaussian random variables. us (United States) Zip Code:71901 .2)esk = 0.3 The MGF of K is 4 φ K (s) = E es K == k=0 (0. hot springs.10 says that W is a Gaussian random variable.2(es + 2e2s + 3e3s + 4e4s ) ds Evaluating the derivative at s = 0 yields E [K ] = d φ K (s) ds = 0.2(1 + 2 + 3 + 4) = 2 s=0 (2) (3) To ﬁnd higher-order moments.2(es + 16e2s + 81e3s + 256e4s ) s=0 s=0 Quiz 6.4 (A) Each K i has MGF φ K (s) = E es K i = es (1 − ens ) es + e2s + · · · + ens = n n(1 − es ) ems (1 − ens )m n m (1 − es )m (1) Since the sequence of K i is independent. AR.

hot springs. (3) (2) From Table 6.1. The corresponding PDF is f R (r ) = (1/5)e−r/5 r ≥ 0 0 otherwise (4) This quiz is an example of the general result that a geometric sum of exponential random variables is an exponential random variable. each X i has MGF φ X (s) and random variable N has MGF φ N (s) where φ X (s) = 1 .Name:joey iwatsuru Email:joeyiwat@yahoo.5 (1) From Table 6.12. 1−s φ N (s) = 1 s 5e .6 to write Var[W ] = α 2 − α 2n+2 [1 + n(1 − α 2 )] (1 − α 2 )2 (6) (4) (5) 2 With E[W ] = 0 and σW = Var[W ]. we see that R has the MGF of an exponential (1/5) random variable. us (United States) Zip Code:71901 . we can use Math Fact B. 1 − 4 es 5 (1) From Theorem 6. AR. 42 Address:104 pine meadows loop.1. we can write the PDF of W as f W (w) = 1 2 2π σW e−w 2 /2σ 2 W (7) Quiz 6. R has MGF φ R (s) = φ N (ln φ X (s)) = Substituting the expression for φ X (s) yields φ R (s) = 1 5 1 5 1 5 φ X (s) 1 − 4 φ X (s) 5 (2) −s . the variance of the sum equals the sum of the variances: Var[W ] = α 2 Var[X 1 ] + α 4 Var[X 2 ] + · · · + α 2n Var[X n ] = α 2 + 2(α 2 )2 + 3(α 2 )3 + · · · + n(α 2 )n Deﬁning q = α 2 .com Phone:5017621195 Since the α j X j are independent.

6 (1) The expected access time is E [X ] = ∞ −∞ x f X (x) d x = 0 12 x d x = 6 msec 12 (1) (2) The second moment of the access time is E X2 = ∞ −∞ x 2 f X (x) d x = 0 12 x2 d x = 48 12 (2) The variance of the access time is Var[X ] = E[X 2 ] − (E[X ])2 = 48 − 36 = 12. we can write A = X 1 + X 2 + · · · + X 12 Since the expectation of the sum equals the sum of the expectations.4013 Note that we used Table 3.Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195 Quiz 6. Var[A] = Var[X 1 ] + · · · + Var[X 12 ] = 12 Var[X ] = 144 Hence. the standard deviation of A is σ A = 12 (5) To use the central limit theorem. AR.1 to look up (0. (6) (7) (8) (9) (5) (4) (3) (6) Once again. (3) Using X i to denote the access time of block i. us (United States) Zip Code:71901 .25).5987 = 0.0227 (10) (11) (12) 43 Address:104 pine meadows loop.9773 = 0. hot springs. we use the central limit theorem and Table 3. we write P [A > 75] = 1 − P [A ≤ 75] 75 − E [A] A − E [A] ≤ =1− P σA σA 75 − 72 ≈1− 12 = 1 − 0.1 to estimate P [A < 48] = P 48 − E [A] A − E [A] < σA σA 48 − 72 ≈ 12 = 1 − (2) = 1 − 0. E [A] = E [X 1 ] + · · · + E [X 12 ] = 12E [X ] = 72 msec (4) Since the X i are independent.

1 yields P [30 ≤ K 48 ≤ 42] ≈ Recalling that (−x) = 1 − 42 − 36 − 3 (x). X 2 .66 × 10−5 √ 12 12 (3) 44 Address:104 pine meadows loop. we ﬁnd that W has expected value and variance E [W ] = 3/λ = 6 Var[W ] = 3/λ2 = 12 (2) (1) By the Central Limit Theorem. λ) random variable.5 − 36 30 − 0.16666) − 1 = 0. we have (3) 30 − 36 3 = (2) − (−2) (2) (1) P [30 ≤ K 48 ≤ 42] ≈ 2 (2) − 1 = 0. X 3 are iid exponential (λ) random variables. The arrival time of the third train is W = X 1 + X 2 + X 3. hot springs.8 The train interarrival times X 1 . we can use the De Moivre-Laplace approximation to estimate P [30 ≤ K 48 ≤ 42] ≈ 42 + 0. (3) Using the ordinary central limit theorem and Table 3. (1) In Theorem 6. (1) The expected number of voice calls out of 48 calls is E[K 48 ] = 48P[V ] = 36. we found that the sum of three iid exponential (λ) random variables is an Erlang (n = 3.9545 (4) Since K 48 is a discrete random variable. us (United States) Zip Code:71901 .5 − 36 − 3 3 = 2 (2.11. (2) The variance of K 48 is Var[K 48 ] = 48P [V ] (1 − P [V ]) = 48(3/4)(1/4) = 9 Thus K 48 has standard deviation σ K 48 = 3.com Phone:5017621195 Quiz 6. From Appendix A. P [W > 20] = P √ W −6 20 − 6 > √ ≈ Q(7/ 3) = 2. AR.Name:joey iwatsuru Email:joeyiwat@yahoo.9687 (4) (5) Quiz 6.7 Random variable K n has a binomial distribution for n trials and success probability P[V ] = 3/4.

us (United States) Zip Code:71901 .0338 s=7/20 (7) (3) Theorem 3.19: %unifbinom100.py). A graph of the PMF PW (w) appears in Figure 2 With some thought.9 One solution to this problem is to follow the approach of Example 6. hot springs. px=binomialpmf(100. [PX.’\itP_W(w)’).0. P [W > 20] = 1 − FW (20) = e−10 1 + 10 102 + 1! 2! = 61e−10 = 0. PW=PX. Quiz 6. pmfplot(sw.com Phone:5017621195 (2) To use the Chernoff bound. By contrast. SW=SX+SY.sy). 45 Address:104 pine meadows loop. we note that the MGF of W is φW (s) = The Chernoff bound states that P [W > 20] ≤ min e−20s φ X (s) = min s≥0 s≥0 λ λ−s 3 = 1 (1 − 2s)3 e−20s (1 − 2s)3 (4) (5) To minimize h(s) = e−20s /(1 − 2s)3 . AR. pw=finitepmf(SW.PW. for λ = 1/2 and w = 20. it should be apparent that the finitepmf function is implementing the convolution of the two PMFs.’\itw’.sy=0:100. we set the derivative of h(s) to zero: −20(1 − 2s)3 e−20s + 6e−20s (1 − 2s)2 d h(s) = =0 ds (1 − 2s)6 (6) This implies 20(1 − 2s) = 6 or s = 7/20.sx). sw=unique(SW). py=duniformpmf(0.100.sw).Name:joey iwatsuru Email:joeyiwat@yahoo. the Central Limit Theorem approximation grossly underestimates the true probability.PY]=ndgrid(px.*PY.sy).0028 (9) (10) Although the Chernoff bound is relatively weak in that it overestimates the probability by roughly a factor of 12.pw. Applying s = 7/20 into the Chernoff bound yields P [W > 20] ≤ e−20s (1 − 2s)3 = (10/3)3 e−7 = 0. 3) random variable W satisﬁes 2 (λw)k e−λw FW (w) = 1 − (8) k! k=0 Equivalently. [SX.5. it is a valid bound.11 says that for any w > 0.m sx=0:100.SY]=ndgrid(sx. the CDF of the Erlang (λ.

Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195

0.01 0.008 PW(w) 0.006 0.004 0.002 0 0 20 40 60 80 100 w 120 140 160 180 200

Figure 2: From Quiz 6.9, the PMF PW (w) of the independent sum of a binomial (100, 0.5) random variable and a discrete uniform (0, 100) random variable.

46

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195

**Quiz Solutions – Chapter 7
**

Quiz 7.1 An exponential random variable with expected value 1 also has variance 1. By Theorem 7.1, Mn (X ) has variance Var[Mn (X )] = 1/n. Hence, we need n = 100 samples. Quiz 7.2 The arrival time of the third elevator is W = X 1 + X 2 + X 3 . Since each X i is uniform (0, 30), (30 − 0)2 Var [X i ] = = 75. (1) E [X i ] = 15, 12 Thus E[W ] = 3E[X i ] = 45, and Var[W ] = 3 Var[X i ] = 225. (1) By the Markov inequality, P [W > 75] ≤ (2) By the Chebyshev inequality, P [W > 75] = P [W − E [W ] > 30] ≤ P [|W − E [W ]| > 30] ≤ 225 Var [W ] 1 = = 2 900 4 30 (3) (4) E [W ] 45 3 = = 75 75 5 (2)

Quiz 7.3 Deﬁne the random variable W = (X − µ X )2 . Observe that V100 (X ) = M100 (W ). By Theorem 7.6, the mean square error is E (M100 (W ) − µW )2 = Observe that µ X = 0 so that W = X 2 . Thus, µW = E X

2

Var[W ] 100

(1)

=

1 −1 1 −1

x 2 f X (x) d x = 1/3 x 4 f X (x) d x = 1/5

(2) (3)

E W2 = E X4 =

Therefore Var[W ] = E[W 2 ] − µ2 = 1/5 − (1/3)2 = 4/45 and the mean square error is W 4/4500 = 0.000889.

47

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195

Quiz 7.4 Assuming the number n of samples is large, we can use a Gaussian approximation for Mn (X ). SinceE[X ] = p and Var[X ] = p(1 − p), we apply Theorem 7.13 which says that the interval estimate Mn (X ) − c ≤ p ≤ Mn (X ) + c (1) has conﬁdence coefﬁcient 1 − α where α =2−2 √ c n . p(1 − p)

(2)

We must ensure for every value of p that 1 − α ≥ 0.9 or α ≤ 0.1. Equivalently, we must have √ c n ≥ 0.95 (3) p(1 − p) √ for every value of p. Since (x) is an increasing function of x, we must satisfy c n ≥ 1.65 p(1 − p). Since p(1 − p) ≤ 1/4 for all p, we require that 1.65 0.41 c≥ √ = √ . 4 n n The 0.9 conﬁdence interval estimate of p is 0.41 0.41 Mn (X ) − √ ≤ p ≤ Mn (X ) + √ . n n (5) (4)

√ For the 0.99 conﬁdence interval, we have α ≤ 0.01, implying (c n/( p(1− p))) ≥ 0.995. √ This implies c n√ 2.58 p(1 − p). Since p(1 − p) ≤ 1/4 for all p, we require that ≥ c ≥ (0.25)(2.58)/ n. In this case, the 0.99 conﬁdence interval estimate is 0.645 0.645 Mn (X ) − √ ≤ p ≤ Mn (X ) + √ . n n Note that if M100 (X ) = 0.4, then the 0.99 conﬁdence interval estimate is 0.3355 ≤ p ≤ 0.4645. The interval is wide because the 0.99 conﬁdence is high. Quiz 7.5 Following the approach of bernoullitraces.m, we generate m = 1000 sample paths, each sample path having n = 100 Bernoulli traces. at time k, OK(k) counts the fraction of sample paths that have sample mean within one standard error of p. The program bernoullisample.m generates graphs the number of traces within one standard error as a function of the time, i.e. the number of trials in each trace. 48 (7) (6)

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

4 0 10 20 30 40 50 60 70 80 90 100 As we would expect. though perhaps unexpected.p). hot springs.8 0.m). stderr=sqrt(p*(1-p)).2)/m. plot(1:n. x=reshape(bernoullirv(p.7 0.5 0./sqrt((1:n)’). The following graph was generated by bernoullisample(100.9 0.m*n)./nn. The unusual sawtooth pattern. stderrmat=stderr*ones(1.6 0.Name:joey iwatsuru Email:joeyiwat@yahoo. AR. OK=sum(abs(MN-p)<stderrmat.m).m).5): 1 0. 49 Address:104 pine meadows loop.’-s’).OK. us (United States) Zip Code:71901 .0.m.5000.2.com Phone:5017621195 function OK=bernoullisample(n. nn=(1:n)’*ones(1.68.n. is examined in Problem 7.5. MN=cumsum(x). as m gets large. the fraction of traces within one standard error approaches 2 (1) − 1 ≈ 0.

. . X 15 ≤ x] = [P [X i ≤ x]]15 . the ML hypothesis rule is k ∈ A0 if PK |H0 (k) ≥ PK |H1 (k) . otherwise k = 0. This implies that for x ≥ 0. This rule simpliﬁes to 106 − 104 k ∈ A0 if k ≤ k = = 214. we must choose a rejection region for X . . the CDF of the maximum of X 1 .01)1/15 = 1. let R = {X ≤ r }. then we accept hypothesis H1 . . Quiz 8. ln 100 ∗ k ∈ A1 otherwise.7. . 1. X 15 obeys FX (x) = P [X ≤ x] = P [X 1 ≤ x. the MAP and ML tests are the same. . 976 photons. 1. then we reject the hypothesis. FX (x) = FX i (x) 15 (2) = 1 − e−x 15 (3) To design a signiﬁcance test. . us (United States) Zip Code:71901 . AR.01.6.33 Hence.1 From the problem statement. we obtain α = P [X ≤ r ] = (1 − e−r )15 = 0.33. X 2 ≤ x. hot springs. From Theorem 8. the conditional PMFs of K are PK |H0 (k) = PK |H1 (k) = 104k e−10 k! 4 (4) (5) 0 106k e−10 k! 6 k = 0. (3) k ∈ A1 otherwise. · · · . if we observe X < 1. That is. (4) Thus if we observe at least 214.Name:joey iwatsuru Email:joeyiwat@yahoo. . .01 It is straightforward to show that r = − ln 1 − (0. otherwise (1) (2) 0 Since the two hypotheses are equally likely. 975. each X i has PDF and CDF f X i (x) = e−x x ≥ 0 0 otherwise FX i (x) = 0 x <0 1 − e−x x ≥ 0 (1) Hence. For a signiﬁcance level of α = 0. A reasonable choice is to reject the hypothesis if X is too small.com Phone:5017621195 Quiz Solutions – Chapter 8 Quiz 8. 50 Address:104 pine meadows loop.2 From the problem statement. .

X 2 ) ∈ A j for some j = i.T(:)). FM=[P10(:) P01(:)]. σ ) random variables.1)/m.3) ylabel(’P_{MISS}’).3. the conditional probability of a correct decision is √ √ P [C|H0 ] = P [X 1 > 0.Name:joey iwatsuru Email:joeyiwat@yahoo.m calls sqdistroc three times to generate a plot that compares the receiver performance for the three requested values of d.m. . legend(’\it d=0.m.3’.m is essentially the same as sqdistor except the output is a matrix FM whose columns are the false alarm and miss probabilities.’-k’. FM5(:.’--k’.1). FM5=sqdistroc(v.0. a symbol error occurs when si is transmitted but (X 1 .1)).1). The modiﬁed program.1) %add d(v+N)ˆ2 distortion %receive 1 if x>T.T). hot springs.3 For the QPSK system. Equivalently.FM2(:.2). it is easier to calculate the probability of a correct decision. . P[C|H0 ] = P[C|Hi ] for all i. we have √ √ P [C] = P [C|H0 ] = P E/2 + N1 > 0 P E/2 + N2 > 0 (2) √ 2 (3) = P N1 > − E/2 √ 2 − E/2 (4) = 1− σ Since (−x) = 1 − of error is (x). the existing program sqdistor already calculates this miss probability PMISS = P01 and the false alarm probability PFA = P10 . FM1=sqdistroc(v. [XX.1).2..2’. xlabel(’P_{FA}’).0. we have P[C] = 2( E/2σ 2 ). 51 Address:104 pine meadows loop.4 To generate the ROC.T). Next. FM2(:.d. P10=sum((XX+d*(XX.1)/m.. the program sqdistrocplot. %add N volts.T(:))..1. otherwise 0 %FM = [P(FA) P(MISS)] x=(v+randn(m. P01=sum((XX+d*(XX. E/2 + N2 > 0 (1) Because of the symmetry of the signals. the probability 2 PERR = 1 − P [C] = 1 − E 2σ 2 (5) Quiz 8..TT]=ndgrid(x.m. For a QPSK system.2). [XX. sqdistroc.ˆ2)< TT).. loglog(FM1(:.2).ˆ2)>TT).T).FM1(:. Since N1 and N2 are iid Gaussian (0. Given H0 .1).0.m. function FM=sqdistrocplot(v. FM2=sqdistroc(v. FM=[FM1 FM2 FM5].. N is Gauss(0.’:k’).m.T) %square law distortion recvr %P(error) for m bits tested %transmit v volts or -v volts. X 2 > 0|H0 ] = P E/2 + N1 > 0.com Phone:5017621195 Quiz 8.FM5(:.1’. Here is the modiﬁed code: function FM=sqdistroc(v. AR. us (United States) Zip Code:71901 .T).’\it d=0. ’\it d=0.TT]=ndgrid(x.. x= -v+randn(m. This implies the probability of a correct decision is P[C] = P[C|H0 ].

4 with squared distortion. 52 Address:104 pine meadows loop. 10 0 10 −1 10 PMISS 10 10 −2 −3 −4 10 −5 d=0. sqdistrocplot(3.1 d=0. the commands T=-3:0. us (United States) Zip Code:71901 .2 d=0.1:3.100000. Figure 3: The receiver operating curve for the communications system of Quiz 8.com Phone:5017621195 To see the effect of d. AR.T).Name:joey iwatsuru Email:joeyiwat@yahoo.1:3.3 −5 10 10 −4 10 −3 10 PFA −2 10 −1 10 0 T=-3:0.100000. sqdistrocplot(3.T). generated the plot shown in Figure 3. hot springs.

f X (x) = x 1 2(y + x) dy = y 2 + 2x y y=1 y=x = 1 + 2x − 3x 2 (4) (5) For 0 ≤ x ≤ 1.Y (x.Name:joey iwatsuru Email:joeyiwat@yahoo. For 0 ≤ x ≤ 1.com Phone:5017621195 Quiz Solutions – Chapter 9 Quiz 9. we need the marginal PDF f X (x). y) = f Y (y) 2 3y + 2x 3y 2 0 0≤x ≤y otherwise (2) (2) The minimum mean square error estimate of X given Y = y is x M (y) = E [X |Y = y] = ˆ 0 y 2x 2 2x + 2 3y 3y d x = 5y/9 (3) ˆ Thus the MMSE estimator of X given Y is X M (Y ) = 5Y /9. AR. hot springs. us (United States) Zip Code:71901 .1 (1) First. (3) To obtain the conditional PDF f Y |X (y|x). the conditional PDF of Y given X is f Y |X (y|x) = 2(y+x) 1+2x−3x 2 0 x ≤y≤1 otherwise (6) (4) The MMSE estimate of Y given X = x is y M (x) = E [Y |X = x] = ˆ x 1 2y 2 + 2x y dy 1 + 2x − 3x 2 y=1 y=x (7) (8) (9) 2y 3 /3 + x y 2 = 1 + 2x − 3x 2 = 2 + 3x − 5x 3 3 + 6x − 9x 2 53 Address:104 pine meadows loop. we calculate the marginal PDF for 0 ≤ y ≤ 1: f Y (y) = 0 y 2(y + x) d x = 2x y + x 2 x=y x=0 = 3y 2 (1) This implies the conditional PDF of X given Y is f X |Y (x|y) = f X.

R ) = 9(1 − 3/4) = 9/4 L 2 σT (5) σR R= 2 2 σT 2 2 σT + σ X R= 3 R 4 (6) (7) Quiz 9. the optimum linear estimate of T given R is σT ˆ TL (R) = ρT. E[T X ] = E[T ]E[X ] = 0 and E[T 2 ] = Var[T ].R = σT /σ R . Cov [T.4. the variance of the sum R = T + X is Var[R] = Var[T ] + Var[X ] = 9 + 3 = 12 (3) Since T and R have expected values E[R] = E[T ] = 0. ˆ TL (R) = Hence a ∗ = 3/4 and b∗ = 0. Thus Cov[T. (4) From Deﬁnition 4. us (United States) Zip Code:71901 . the mean square error of the linear estimate is 2 e∗ = Var[T ](1 − ρT. R] = = 3/2 σR Var[R] Var[T ] (4) (5) From Theorem 9. The conditional PDF of X given R is 1 2 f X |R (x|r ) = √ e−(x+40+40 log10 r ) /128 128π 54 (1) Address:104 pine meadows loop. E [R] = E [T ] + E [X ] = 0 (2) Since T and X are independent.3 When R = r .4.com Phone:5017621195 Quiz 9. R] = E [T R] = E [T (T + X )] = E T 2 + E [T X ] (3) (2) (1) Since T and X are independent and have zero expected value. AR.Name:joey iwatsuru Email:joeyiwat@yahoo.2 (1) Since the expectation of the sum equals the sum of the expectations.8. hot springs. the correlation coefﬁcient of T and R is ρT. (6) By Theorem 9. the conditional PDF of X = Y −40−40 log10 r is Gaussian with expected value −40 − 40 log10 r and variance 64.R (R − E [R]) + E [T ] σR Since E[R] = E[T ] = 0 and ρT.R = √ √ σT Cov [T. R] = Var[T ] = 9.

us (United States) Zip Code:71901 . ˆ For the MAP estimate.3 −x/40 x ≥ −156. When x ≤ −156. AR. note that a typical ﬁgure for the signal strength might be x = −120 dB. r ) ˆ 0≤r ≤1000 Note that we have included the constraint r ≤ 1000 in the maximization to highlight the fact that under our probability model. 55 Address:104 pine meadows loop.3 (0. However.6 m. r ) = f X |R (x|r ) f R (r ) = 106 32π 1 √ r e−(x+40+40 log10 r ) 2 /128 (5) From Theorem 9. then rMAP (−120) = 123. we observe that the joint PDF of X and R is f X.R (x. r ). the MAP estimate of R given X = x is the value of r that maximizes f X. hot springs.1)10−x/40 m ˆ (3) (4) If the result doesn’t look correct. which is not possible in our probability model. for very low signal strengths.R (x.1236)10−x/40 (8) This is the MAP estimate of R given X = x as long as r ≤ 1000 m. (6) rMAP (x) = arg max f X. This minimum occurs when the exponent is zero.Name:joey iwatsuru Email:joeyiwat@yahoo. the MAP estimate takes into account that the distance can never exceed 1000 m. When the measured signal ˆ strength is not too low.2 to write the ML estimate of R given X = x as rML (x) = arg max f X |R (x|r ) ˆ r ≥0 (2) We observe that f X |R (x|r ) is maximized when the exponent (x + 40 + 40 log10 r )2 is minimized.R (x. we can use Deﬁnition 9. This reﬂects the fact that large values of R are a priori more probable than small values. the complete description of the MAP estimate is rMAP (x) = ˆ 1000 x < −156. That is. if x = −120dB. Hence.R (x.1236)10 (9) For example.com Phone:5017621195 From the conditional PDF f X |R (x|r ).3 dB.6% larger than the ML estimate. Setting the derivative of f X. the MAP estimate is 23.6. R ≤ 1000 m. This corresponds to a distance estimate of rML (−120) = 100 m. r ) with respect to r to zero yields e−(x+40+40 log10 r ) Solving for r yields r = 10 1 25 log10 e −1 2 /128 1− 80 log10 e (x + 40 + 40 log10 r ) = 0 128 (7) 10−x/40 = (0. the above estimate will exceed 1000 m. yielding log10 r = −1 − x/40 or rML (x) = (0.

1 (4) 1 1 = = 0.1.7.9 1. hot springs.7.4 ˆ (1) From Theorem 9. E[WX ] = 0. E [Y2 X 2 ] E [(X 2 + W2 )X 2 ] 56 (10) 1.4. (7) (8) Because X and W are independent. we need to ﬁnd RYX 2 = E [YX 2 ] = E [Y1 X 2 ] E [(X 1 + W1 )X 2 ] = .0909 1.1 (9) Address:104 pine meadows loop. −0. Var[Y2 ] b ∗ = µ X 2 − a ∗ µ Y2 .1 0 . it follows that E[Y] = 0. Similarly. −0. it follows that b∗ = 0. n = 2 and we wish to estimate X 2 given the observation vector Y = Y1 Y2 .com Phone:5017621195 Quiz 9. Y2 ] = E [X 2 Y2 ] = E [X 2 (X 2 + W2 )] = E X 2 = 1 2 2 Var[Y2 ] = Var[X 2 ] + Var[W2 ] = E X 2 + E W2 = 1. Finally.1 (6) In terms of Theorem 9.9 .9 1 RW = 0.Name:joey iwatsuru Email:joeyiwat@yahoo. Because µ X 2 = µY2 = 0. E[XW ] = E[X]E[W ] = 0.Y2 = The expected square error is 2 e∗ = Var[X 2 ](1 − ρ X 2 . RY = E YY = E (X + W)(X + W ) = E XX + XW + WX + WW .1 (2) (3) It follows that a ∗ = 1/1. the LMSE estimate of X 2 given Y2 is X 2 (Y2 ) = a ∗ Y2 + b∗ where a∗ = Cov [X 2 .1 11 (5) (2) Since Y = X + W and E[X] = E[W] = 0. This implies RY = E XX + E WW = RX + RW = In addition. 2 Cov [X 2 . we calculate the correlation coefﬁcient ρ X 2 . Y2 ] 1 =√ σ X 2 σY2 1.1 −0.Y2 ) = 1 − L Cov [X 2 . we need to ﬁnd RY and RYX 2 . To apply Theorem 9.7. 0 0. Thus we can apply Theorem 9. AR. to compute the expected square error. Note that X and W have correlation matrices RX = 1 −0. (1) Because E[X] = E[Y] = 0.9 . Y2 ] . us (United States) Zip Code:71901 .

E[W1 X 2 ] = E[W1 ]E[X 2 ] = 0 and E[W2 X 2 ] = 0. The mean square error is ˆ Var [X 2 ] − a RYX 2 = Var [X ] − a1rY1 .X 2 − a2rY2 .9 RYX 2 = = . j) = c|i− j|−1 . The question we must address is what value c minimizes e∗ . hot springs. (14) (13) Quiz 9. Y E[WX ] = 0 and E[X W ] = 0 . By the same reasoning.com Phone:5017621195 Since X and W are independent vectors.725Y2 . (11) 2 1 E X2 By Theorem 9.0725. ˆ a = R−1 RYX = 11 + RW Y and the optimal linear estimator is ˆ X L (Y) = 1 11 + RW The mean square error is ˆ e∗ = Var[X ] − a RYX = 1 − 1 11 + RW L −1 −1 −1 (1) (2) (3) (4) 1 (5) Y (6) 1 (7) Now we note that RW has i. X L (Y) = a Y where a = R−1 RYX .X 2 = 0.7. This problem is atypical in that one does not usually get L 57 Address:104 pine meadows loop.Name:joey iwatsuru Email:joeyiwat@yahoo. This implies RYX = E [YX ] = E [(1X + W)X ] = 1E X 2 = 1. Since X and W are independent. ˆ a = R−1 RYX 2 = Y −0.225 0. by ˆ ˆ ˆ Theorem 9. Thus E[X 1 X 2 ] −0.5 Since X and W have zero expected value. Y also has zero expected value.725 (12) Therefore. AR. us (United States) Zip Code:71901 . jth entry RW (i. the correlation matrix of Y is RY = E YY = E (1X + W)(1 X + W ) = 11 E X 2 + 1E X W + E [WX ] 1 + E WW = 11 + RW Note that 11 is a 20 × 20 matrix with every entry equal to 1.7.225Y1 + 0. the optimum linear estimator of X 2 given Y1 and Y2 is ˆ ˆ X L = a Y = −0. Thus. Thus.

end plot(c. for k=1:length(c). i) = 1/c.01:0. RW=toeplitz(c. xlabel(’c’). We note that the answer is not obviously apparent from Equation (7). v1=ones(20. cmin=c(optk). If this argument is not clear.8 e* L 0. msec=zeros(size(c)). The following commands ﬁnds the minimum c and also produces the following graph: >> c=0. consider the extreme case in which every Wi and W j have correlation coefﬁcient ρi j = 1. In this case. when c is small. >> mquiz9minc(c) ans = 0.optk]=min(msec). we observe that Var[Wi ] = RW (i. we will see that the answer is somewhat instructive. In particular. us (United States) Zip Code:71901 .Name:joey iwatsuru Email:joeyiwat@yahoo. 58 Address:104 pine meadows loop. af=(inv(RY))*v1.1).com Phone:5017621195 to choose the correlation structure of the noise. Note in mquiz9 that v1 corresponds to the vector 1 of all ones. our 20 measurements will be all the same and one measurement is as good as 20 measurements. On the other hand.6 0. However. if c is large Wi and W j are highly correlated and the separate measurements of X are very dependent.ylabel(’e_Lˆ*’). [msemin. function cmin=mquiz9minc(c). the noises Wi have high variance and we would expect our estimator to be poor.ˆ((0:19)-1)). To ﬁnd the optimal value of c.af]=mquiz9(c).4500 1 0. function [mse.2 0 0. both small values and large values of c result in large MSE. [msec(k).4 0.99. mse=1-((v1’)*af).5 c 1 As we see in the graph.01:0.msec). AR. This would suggest that large values of c will also result in poor MSE. RY=(v1*(v1’)) +RW. Thus. hot springs. we write a M ATLAB function mquiz9(c) to calculate the MSE for a given c and second function that ﬁnds plots the MSE for a range of values of c.af]=mquiz9(c(k)).

3 (1) Each resistor has resistance R in ohms with uniform PDF f R (r ) = 0. continuous valued process. A correct answer speciﬁes enough random variables to specify the sample path exactly.Name:joey iwatsuru Email:joeyiwat@yahoo.2 (2) 59 Address:104 pine meadows loop.1 There are many correct answers to this question. discrete valued process. . . discrete valued process. the interarrival times of the N new arrivals • H .01 950 ≤ r ≤ 1050 0 otherwise (1) The probability that a test produces a 1% resistor is p = P [990 ≤ R ≤ 1010] = 1010 990 (0. s). . hot springs. the number of ongoing calls at the start of the experiment • N . we round the temperature to the nearest degree. . . (3) If we sample the process in part (a) every T seconds. then we obtain a continuous time. X N . Quiz 10. (4) Rounding the samples in part (c) to the nearest integer degree yields a discrete time. D H . (2) If at every moment in time. the call completion times of the H calls that hang up Quiz 10.com Phone:5017621195 Quiz Solutions – Chapter 10 Quiz 10. . One choice for an alternate set of random variables that would specify m(t. AR. . s) is • m(0. then we obtain a discrete time.2 (1) We obtain a continuous time. . the number of calls that hang up during the experiment • D1 . the number of new calls that arrive during the experiment • X 1 .01) dr = 0. continuous valued process when we record the temperature as a continuous waveform over time. us (United States) Zip Code:71901 .

. exactly t resistors are tested. (5) Note that once we ﬁnd the ﬁrst 1% resistor..X (n) (x1 . the probability the ﬁrst 1% resistor is found in exactly ﬁve seconds is PT1 (5) = (0. the number of 1% resistors found has the binomial PMF PN (t) (n) = p n (1 − p)t−n n = 0. .com Phone:5017621195 (2) In t seconds. t − 1 followed by a success on trial t. AR. hot springs. (4) From Theorem 2. just as in Example 2. .2) = 0. xn ) = i=1 f X (xi ) = 1 2 2 e−(x1 +···+xn )/2 n/2 (2π ) (2) 60 Address:104 pine meadows loop. .2. independent of any other resistor. Consequently. us (United States) Zip Code:71901 . This problem is easy if we view each resistor test as an independent trial. . A success occurs on a trial with probability p if we ﬁnd a 1% resistor. T2 = T1 + T where T is independent and identically distributed to T1 . t 0 otherwise t n (3) (3) First we will ﬁnd the PMF of T1 ..08192. E[T1 ] = 1/ p = 5. In this problem. .11. each X i has PDF 1 2 f X (i) (x) = √ e−x /2 2π By Theorem 10. a geometric random variable with success probability p has expected value 1/ p. 1. .. . . Each resistor is a 1% resistor with probability p..5. 1) random variable.Name:joey iwatsuru Email:joeyiwat@yahoo. That is. Thus E [T2 |T1 = 10] = E [T1 |T1 = 10] + E T |T1 = 10 = 10 + E T = 10 + 5 = 15 (5) (6) Quiz 10.4 Since each X i is a N (0. . the number of additional trials needed to ﬁnd the second 1% resistor once again has a geometric PMF with expected value 1/ p since each independent trial is a success with probability p. . . 9 otherwise (4) Since p = 0. . T1 has the geometric PMF PT1 (t) = (1 − p)t−1 p t = 1. the joint PDF of X = X 1 · · · X n is k (1) f X (x) = f X (1).1. . Hence. 2. The ﬁrst 1% resistor is found at time T1 = t if we observe failures on trials 1.8)4 (0. .

This implies < √ [W (t) − W (s)]/ α is independent of W (s )/ α for all s ≥ s . . we can conclude that the interarrival times of N (t) are not exponential random variables.7 First. has the same PDF as Y1 (t). m 2 ) = PM1 (m 1 ) PM2 (m 2 ) = ⎪ ⎪ ⎩ 0 otherwise. we note that for t > s. denote the interarrival times of the N (t) process. X 2 . . 1. Let X 1 .11. Thus X (t) is a Brownian motion process with variance Var[X (t)] = t. 1. the ith interarrival time of the N (t) process.5 The ﬁrst and second hours are nonoverlapping intervals. . X (t) − X (s) is independent of X (s ) for all s ≥ s . 000. Since we count only evennumbered arrival for N (t).com Phone:5017621195 Quiz 10.M2 (m 1 . . otherwise (1) Since M1 and M2 are independent. see Theorem 6. Since one hour equals 3600 sec and the Poisson process has a rate of 10 packets/sec. hot springs. . . .Name:joey iwatsuru Email:joeyiwat@yahoo. Y1 is an Erlang (n = 2. ⎪ m 1 !m 2 ! ⎪ ⎨ m 2 = 0. . Theorem 3. . the time until the ﬁrst arrival of the N (t) is Y1 = X 1 + X 2 . 2. we look at the interarrival times. 1. (2) Quiz 10. .13 states that W (t) − W (s) is Gaussian with expected value E [X (t) − X (s)] = and variance E (W (t) − W (s))2 = E (W (t) − W (s))2 α(t − s) = α α (3) E [W (t) − W (s)] =0 √ α (2) Consider s ≤ s √ t. Quiz 10. . Since s ≥ s . Since Yi (t). X (t) − X (s) = W (t) − W (s) √ α (1) Since W (t) − W (s) is a Gaussian random variable. W (t) − W (s) is independent of W (s ). Thus N (t) is not a Poisson process. the expected number of packets in each hour is E[Mi ] = α = 36. us (United States) Zip Code:71901 . 61 Address:104 pine meadows loop. This implies M1 and M2 are independent Poisson random variables each with PMF PMi (m) = α m e−α m! 0 m = 0. . . Since X 1 and X 2 are independent exponential (λ) random variables. λ) random variable. the joint PMF of M1 and M2 is ⎧ α m 1 +m 2 e−2α m 1 = 0.6 To answer whether N (t) is a Poisson process. That is. AR. . PM1 .

we have RY (t.com Phone:5017621195 Quiz 10.Name:joey iwatsuru Email:joeyiwat@yahoo. τ ) + R N (t. X 2 . .. ..12: R(τ ) ≥ 0 R(τ ) = R(−τ ) |R(τ )| ≤ R(0) (1) (3) (2) (1) (1) R1 (τ ) = e−|τ | meets all three conditions and thus is valid. Since RY (t. . . τ ) = E [(X (t) + N (t)) (X (t + τ ) + N (t + τ ))] = E [X (t)X (t + τ )] + E [X (t)N (t + τ )] + E [X (t + τ )N (t)] + E [N (t)N (t + τ )] = R X (t.X nm +k (x1 . .14..10 We must check whether each function R(τ ) meets the conditions of Theorem 10.8 First we ﬁnd the expected value µY (t) = µ X (t) + µ N (t) = µ X (t). . xm ) = f X (x1 ) f X (x2 ) · · · f X (xm ) We can conclude that the iid random sequence is stationary. . for time instants n 1 + k. ..X nm (x1 . X 1 . . hot springs. . n m + k. E[X (t)N (t )] = E[X (t)]E[N (t )] = 0.. ..X nm (x1 .. us (United States) Zip Code:71901 . .9 From Deﬁnition 10. . (1) To ﬁnd the autocorrelation.. we observe that since X (t) and N (t) are independent and since N (t) has zero expected value..X nm +k (x1 . f X n1 +k . (2) (3) (4) Quiz 10. . f X n1 . . . . . Quiz 10. . xm ) = f X (x1 ) f X (x2 ) · · · f X (xm ) Similarly.. (2) R2 (τ ) = e−τ also is valid.. τ ) = E[Y (t)Y (t + τ )]. 2 (3) R3 (τ ) = e−τ cos τ is not valid because R3 (−2π ) = e2π cos 2π = e2π > 1 = R3 (0) (4) R4 (τ ) = e−τ sin τ also cannot be an autocorrelation function because 2 (2) R4 (π/2) = e−π/2 sin π/2 = e−π/2 > 0 = R4 (0) (3) 62 Address:104 pine meadows loop. .. . . AR. . is a stationary random sequence if for all sets of time instants n 1 . τ ). xm ) = f X n1 +k . ... .. f X n1 . . n m and time offset k.. . xm ) Since the random sequence is iid.

we conclude that X (t) and Y (t) are not jointly wide sense stationary. τ ) depends on both t and τ . we see the same second order statistics.X (t+1) (x0 .Name:joey iwatsuru Email:joeyiwat@yahoo. τ ) is just a function of τ . Quiz 10. R X Y (t. In this case. τ ) = E [Y (t)Y (t + τ )] = E [X (−t)X (−t − τ )] = R X (−t − (−t − τ )) = R X (τ ) (1) (2) (3) Since E[Y (t)] = E[X (−t)] = µ X . Y (t) = X (−t) and X (t) become less and less correlated. suppose R X (τ ) = e−|τ | so that samples of X (t) far apart in time have almost no correlation.12 From the problem statement. us (United States) Zip Code:71901 . hot springs. τ ) = E [X (t)Y (t + τ )] = E [X (t)X (−t − τ )] = R X (t − (−t − τ )) = R X (2t + τ ) (4) (5) (6) Since R X Y (t. as t gets larger. In fact. we can check whether they are jointly wide sense stationary by seeing if R X Y (t.11 (1) The autocorrelation of Y (t) is RY (t. E [X (t)] = E [X (t + 1)] = 0 E [X (t)X (t + 1)] = 1/2 Var[X (t)] = Var[X (t + 1)] = 1 The Gaussian random vector X = X (t) X (t + 1) sponding inverse CX = Since 1 1/2 1/2 1 C−1 = X (1) (2) (3) has covariance matrix and corre- 4 1 −1/2 1 3 −1/2 (4) 4 4 2 1 −1/2 x0 2 x − x0 x+ x1 = 1 x1 3 −1/2 3 0 the joint PDF of X (t) and X (t + 1) is the Gaussian vector PDF x C−1 x = x0 x1 X f X (t). we can conclude that Y (t) is a wide sense stationary process. x1 ) = 1 (2π )n/2 [det (CX )]1/2 1 3π 2 e− 3 2 2 2 x0 −x0 x1 +x1 (5) 1 exp − x C−1 x X 2 (6) (7) =√ 63 Address:104 pine meadows loop.com Phone:5017621195 Quiz 10. In this case. AR. we see that by viewing a process backwards in time. (2) Since X (t) and Y (t) are both wide sense stationary processes. To see why this is.

With the introduction of call blocking. The program simply executes the event at the head of the schedule. and schedule a departure to occur at time t + Sn . The blocking switch is an example of a discrete event system. The logic of such a simulation is 1. • When the head-of-schedule event is the kth arrival is at time t. Quiz 10. check the state M(t). • If the head of schedule event is a departure. increase the system state n by 1. Schedule the ﬁrst arrival to occur at S1 . – If M(t) = c. In particular. 64 Address:104 pine meadows loop. at discrete time instances. hot springs.13. where Sk is an exponential (λ) random variable. After the head-of-schedule event is completed and any new events (departures in this system) are scheduled. Call blocking can be implemented by setting the service time of the call to zero so that the call departs as soon as it arrives. the number of ongoing calls. block the arrival.13 The simple structure of the switch simulation of Example 10.com Phone:5017621195 120 100 80 M(t) 60 40 20 0 0 10 20 30 40 50 t 60 70 80 90 100 Figure 4: Sample path of 100 minutes of the blocking switch of Quiz 10. Delete the head-of-schedule event and go to step 2. Examine the head-of-schedule event. Otherwise. when an arrival occurs at time t. AR. The system evolves via a sequence of discrete events. us (United States) Zip Code:71901 . an exponential (λ) random variable. satisﬁes M(t) < c = 120. do not schedule a departure event. we know the system state cannot change until the next scheduled event. we need to know that M(t). admit the arrival. A simulation of the system moves from one time instant to the next by maintaining a chronological schedule of future events (arrivals and departures) to be executed. namely arrivals and departures. 2. 3.Name:joey iwatsuru Email:joeyiwat@yahoo.28 admits a deceptively simple solution in terms of the vector of arrivals A and the vector of departures D. reduce the system state n by 1. Start at time t = 0 with an empty system. we cannot generate these vectors all at once. – If M(t) < c. when M(t) = c. we must block the call.

t). a result known as the “Erlang-B formula.0048 and 0. we use the vector t as the set of time instances at which we inspect the system state. In most programming languages. A sample path of the ﬁrst 100 minutes of that simulation is shown in Figure 4. AR.93). In M ATLAB. plot(t.0.120. we will learn that the exact blocking probability is given by Equation (12. The 5.1:5000. hot springs.0048. we can calculate that the exact blocking probability is Pb = 0. Nevertheless. In this case. The rest of the gap between 0. for very complicated systems. 65 Address:104 pine meadows loop. we set m(i) to the current switch state. The following instructions t=0:0.a. We can estimate the probability a call is blocked as b ˆ = 0. event(i)=1 if the ith scheduled event is an arrival.000 minute simulation.” From the Erlang-B formula. One reason our simulation underestimates the blocking probability is that in a 5.000 minutes. (1) Pb = a+b In Chapter 12. [m. a kind of Markov chain.000 minute full simulation produced a=49658 admitted calls and b=239 blocked calls. or event(i)=-1 if the ith scheduled event is a departure. it is common to implement the event schedule as a linked list where each item in the list has a data structure indicating an event timestamp and the type of the event. Thus this would account for only part of the disparity. When the program is passed a vector t. generated a simulation lasting 5. this says that roughly the ﬁrst two percent of the simulation time was unusual.0057. Chapter 12 develops techniques for analyzing and simulating systems described by Markov chains that are much simpler than the discrete event simulation technique shown here.b]=simblockswitch(10. a simple (but not elegant) way to do this is to have maintain two vectors: time is a list of timestamps of scheduled events and event is a the list of event types. The complete program is shown in Figure 5. However. we will learn that the blocking switch is an example of an M/M/c/c queue. Thus for all times t(i) between the current head-of-schedule event and the next.0057 is that a simulation that includes only 239 blocks is not all that likely to give a very accurate result for the blocking probability. us (United States) Zip Code:71901 .com Phone:5017621195 Thus we know that M(t) will stay the same until then. the discrete event simulation is widely-used and often very efﬁcient simulation method. In our simulation.1. the output [m a b] is such that m(i) is the number of ongoing calls at time t(i) while a and b are the number of admits and blocks.m). Note that in Chapter 12. roughly the ﬁrst 100 minutes are needed to load up the switch since the switch is idle when the simulation starts at time t = 0.Name:joey iwatsuru Email:joeyiwat@yahoo.

end elseif (eventnow==-1) %departure n=n-1.3d Admits %10d Blocks %10d’.admits. immed departure disp(sprintf(’Time %10. us (United States) Zip Code:71901 .mu. else blocks=blocks+1.13.admits. tmax=max(t). time(1)= [ ].1). while (timenow<tmax) M((timenow<=t)&(t<time(1)))=n. depart=timenow+exponentialrv(mu. %total # admits M=zeros(size(t)). % next arrival b4arrival=time<arrival. end end Figure 5: Discrete event simulation of the blocking switch of Quiz 10. eventnow=event(1).blocks]=simblockswitch(lam. %one more block.1) ]...com Phone:5017621195 function [M. event=[ 1 ]. n=n+1. AR. time=[time(b4arrival) arrival time(˜b4arrival)]. timenow.Name:joey iwatsuru Email:joeyiwat@yahoo. %total # blocks admits=0. %first event is an arrival timenow=0..1). time=[time(b4depart) depart time(˜b4depart)]. 66 Address:104 pine meadows loop. % clear current event if (eventnow==1) % arrival arrival=timenow+exponentialrv(lam. hot springs. timenow=time(1). blocks=0. if n<c %call admitted admits=admits+1. b4depart=time<depart.blocks)). event=[event(b4arrival) 1 event(˜b4arrival)].t). event=[event(b4depart) -1 event(˜b4depart)].c. % # in system time=[ exponentialrv(lam. n=0. event(1)=[ ].

AR. RY (τ ) = Hence.5(1 + −1) = 0 (1) The autocorrelation of the output is 1 1 RY [n] = i=0 j=0 h i h j R X [n + i − j] 1 n=0 0 otherwise (2) (3) = 2R X [n] − R X [n − 1] − R X [n + 1] = 2 Since µY = 0.com Phone:5017621195 Quiz Solutions – Chapter 11 Quiz 11.2 The expected value of the output is ∞ ∞ −τ h(u)h(τ + u) du = ∞ −τ 1 e−u e−τ −u du = eτ 2 (4) (5) µY = µ X n=−∞ h n = 0. µY = µ X ∞ −∞ h(t)dt = 2 0 ∞ e−t dt = 2 (1) Since R X (τ ) = δ(τ ). For τ < 0. 67 Address:104 pine meadows loop. hot springs.2. 1 RY (τ ) = e−|τ | 2 Quiz 11. we have RY (τ ) = 0 ∞ e−u e−τ −u du = e−τ 0 ∞ 1 e−2u du = e−τ 2 (3) For τ < 0. The variance of Yn is Var[Yn ] = E[Yn ] = RY [0] = 1. we can deduce that RY (τ ) = 1 e−|τ | by symmetry. Just to be safe though.Name:joey iwatsuru Email:joeyiwat@yahoo. us (United States) Zip Code:71901 . the autocorrelation function of the output is RY (τ ) = ∞ −∞ ∞ h(u) −∞ h(v)δ(τ + u − v) dv du = ∞ −∞ h(u)h(τ + u) du (2) For τ > 0. we 2 can double check.1 By Theorem 11.

6 and to use Theorem 11.2 X (a) W = 10 (b) W = 1000 Figure 6: The autocorrelation R X (τ ) and power spectral density S X ( f ) for process X (t) in Quiz 11.5. which equals the correlation matrix RY since Y has zero expected value.2 0 −15 −10 −5 0 f 5 10 15 SX(f) 6 4 2 0 −1500−1000 −500 10 R (τ) 5 0 −5 −2 −1 0 τ 1 x 10 2 −3 0 f 500 1000 1500 10 RX(τ) 5 0 −5 −0.5 to ﬁnd the autocorrelation function ∞ ∞ RY [n] = i=−∞ j=−∞ h i h j R X [n + i − j].8. 4 0 0 1 1 1 1 (2) (3) In this case.2 −0. (1) Despite the fact that R X [k] is an impulse. Since R X [n] = δn . In this problem. us (United States) Zip Code:71901 .1 0 τ 0. we need to ﬁnd the covariance matrix CY .3 By Theorem 11. Y = Y33 Y34 Y35 is a Gaussian random vector since X n is a Gaussian random process. 68 Address:104 pine meadows loop. following Theorem 11.13 with µX = 0 and A = H.4 0. or by directly applying Theorem 5. it is simpler to observe that Y = HX where X = X 30 X 31 X 32 X 33 X 34 X 35 and ⎡ ⎤ 1 1 1 1 0 0 1 H = ⎣0 1 1 1 1 0⎦ .5.1 0. the identity matrix.7. Fo ﬁnd the PDF of the Gaussian vector Y.Name:joey iwatsuru Email:joeyiwat@yahoo. by Theorem 11. Quiz 11. each Yn has expected value E[Yn ] = µ X ∞ n=−∞ h n = 0. we obtain RY = HRX H .6 SX(f) 0. RX = I. Moreover.com Phone:5017621195 x 10 8 0. Thus E[Y] = 0. using Equation (1) is surprisingly tedious because we still need to sum over all i and j such that n + i − j = 0. One way to ﬁnd the RY is to observe that RY has the Toeplitz structure of Theorem 11. hot springs. AR.

4 This quiz is solved using Theorem 11.81 81 = .com Phone:5017621195 Thus ⎡ ⎤ 4 3 2 1 ⎣ 3 4 3⎦ . C−1 = 16 ⎣−1/2 Y 1/12 −1/2 7/12 Thus. (7) Equation (7) shows that one of the nicest features of the multivariate Gaussian distribution is that y C−1 y is a very concise representation of the cross-terms in the exponent of f Y (y).Name:joey iwatsuru Email:joeyiwat@yahoo.9 1. In this case.1 0.1 1 0. X n+1 = 400 400 (4) to ﬁnd the mean square error. us (United States) Zip Code:71901 . 0. L 69 Address:104 pine meadows loop. AR. X n+1 = Xn 0. the PDF of Y is f Y (y) = 1 (2π )3/2 [det (CY )]1/2 1 exp − y C−1 y .9 h = R−1 RXn X n+1 = Xn 0.81 X n−1 R X [2] = .9 400 261 (3) It follows that the ﬁlter is h = 261/400 81/400 and the MMSE linear predictor is 81 261 ˆ X n−1 + Xn. one approach is to follow the method of Example 11.13 and to directly calculate ˆ (5) e∗ = E (X n+1 − X n+1 )2 .1 0. Xn = X n−1 X n and RXn = and RXn X n+1 = E 1.9 1.9 R X [0] R X [1] = 0. hot springs. CY = RY = HH = 16 2 3 4 (4) It follows (very quickly if you use M ATLAB for 3 × 3 matrix inversion) that ⎡ ⎤ 7/12 −1/2 1/12 1 −1/2⎦ . Y Quiz 11. Y 2 (5) (6) A disagreeable amount of algebra will show det(CY ) = 3/1024 and that the PDF can be “simpliﬁed” to 16 7 2 7 2 1 2 y33 + y34 + y35 − y33 y34 + y33 y35 − y34 y35 exp −8 f Y (y) = √ 3 12 12 6 6π .9 R X [1] −1 (1) (2) The MMSE linear ﬁrst order ﬁlter for predicting X n+1 at time n is the ﬁlter h such that ← − 1.9 for the case of k = 1 and M = 2.1 R X [1] R X [0] 0.

Quiz 11.4  This quiz is solved using Theorem 11.9 for the case of k = 1 and M = 2. In this case, Xn = [Xn−1 Xn]′ and

RXn = [RX[0] RX[1]; RX[1] RX[0]] = [1.1 0.9; 0.9 1.1],   (1)

RXnXn+1 = E[Xn Xn+1] = [RX[2]; RX[1]] = [0.81; 0.9].   (2)

The MMSE linear first order filter for predicting Xn+1 at time n is the filter h such that

←h = RXn^{−1} RXnXn+1 = [1.1 0.9; 0.9 1.1]^{−1} [0.81; 0.9] = (1/400) [81; 261].   (3)

It follows that the filter is h = [261/400 81/400]′ and the MMSE linear predictor is

X̂n+1 = (81/400) Xn−1 + (261/400) Xn.   (4)

To find the mean square error, one approach is to follow the method of Example 11.13 and to directly calculate

e*_L = E[(Xn+1 − X̂n+1)²].   (5)

This method is workable for this simple problem but becomes increasingly tedious for higher order filters. Instead, we can derive the mean square error for an arbitrary prediction filter ←h. Since X̂n+1 = ←h′ Xn,

e*_L = E[(Xn+1 − ←h′ Xn)²] = E[(Xn+1 − ←h′ Xn)(Xn+1 − Xn′ ←h)].   (6,7)

After a bit of algebra, we obtain

e*_L = RX[0] − 2 ←h′ RXnXn+1 + ←h′ RXn ←h.   (8)

With the substitution ←h = RXn^{−1} RXnXn+1, we obtain

e*_L = RX[0] − RXnXn+1′ RXn^{−1} RXnXn+1 = RX[0] − ←h′ RXnXn+1.   (9,10)

Note that this is essentially the same result as Theorem 9.7 with Y = Xn, X = Xn+1 and â = ←h. It is noteworthy that the result is derived in a much simpler way in the proof of Theorem 9.7 by using the orthogonality property of the LMSE estimator. In any case, the mean square error is

e*_L = RX[0] − ←h′ RXnXn+1 = 1.1 − [81/400 261/400] [0.81; 0.9] = 0.3487.   (11)

Recalling that the blind estimate would yield a mean square error of Var[X] = 1.1, we see that observing Xn−1 and Xn improves the accuracy of our prediction of Xn+1.
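The filter and the mean square error are equally quick to reproduce numerically; a sketch, not part of matcode.zip:

R=[1.1 0.9; 0.9 1.1];   %R_Xn for Xn=[X(n-1); X(n)]
r=[0.81; 0.9];          %R_XnXn+1=[RX(2); RX(1)]
h=R\r                   %the reversed filter, [81/400; 261/400]
e=1.1-h'*r              %mean square error, approximately 0.3487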

Quiz 11.5  (1) The average power of X(t) is

E[X²(t)] = ∫_{−∞}^{∞} SX(f) df = ∫_{−W}^{W} (5/W) df = 10 Watts.   (1)

(2) The autocorrelation function is the inverse Fourier transform of SX(f). Consulting Table 11.1, we note that

SX(f) = (10/(2W)) rect(f/(2W)).   (2)

It follows that the inverse transform of SX(f) is

RX(τ) = 10 sinc(2Wτ) = 10 sin(2πWτ)/(2πWτ).   (3)

(3) For W = 10 Hz and W = 1 kHz, graphs of SX(f) and RX(τ) appear in Figure 6.

Quiz 11.6  In a sampled system, the discrete time impulse δ[n] has a flat discrete Fourier transform. That is, if RX[n] = 10δ[n], then

SX(φ) = Σ_{n=−∞}^{∞} 10δ[n] e^{−j2πφn} = 10.   (1)

Thus, RX[n] = 10δ[n]. (This quiz is really lame!)

Quiz 11.7  Since Y(t) = X(t − t0),

RXY(t, τ) = E[X(t)Y(t + τ)] = E[X(t)X(t + τ − t0)] = RX(τ − t0).   (1)

We see that RXY(t, τ) = RXY(τ) = RX(τ − t0). From Table 11.1, we recall the property that g(τ − τ0) has Fourier transform G(f) e^{−j2πfτ0}. Thus the Fourier transform of RXY(τ) = RX(τ − t0) = g(τ − t0) is

SXY(f) = SX(f) e^{−j2πf t0}.   (2)
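The time-shift property used in Quiz 11.7 has a discrete counterpart that is easy to verify numerically: circularly delaying a length-N sequence by k samples multiplies its DFT by e^{−j2πkn/N}. The sketch below is an illustration by analogy, not part of the original solution; the sequence and delay are arbitrary.

N=16; k=3; x=randn(1,N);                    %arbitrary test sequence and delay
X=fft(x); Xd=fft(circshift(x,[0 k]));
max(abs(Xd-X.*exp(-j*2*pi*k*(0:N-1)/N)))    %zero up to roundoff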

Quiz 11.8  We solve this quiz using Theorem 11.17. First we need some preliminary facts. Let a0 = 5,000, so that

RX(τ) = (1/a0) [a0 e^{−a0|τ|}].   (1)

Consulting with the Fourier transforms in Table 11.1, we see that

SX(f) = (1/a0) [2a0²/(a0² + (2πf)²)] = 2a0/(a0² + (2πf)²).   (2)

The RC filter has impulse response h(t) = a1 e^{−a1 t} u(t), where u(t) is the unit step function and a1 = 1/(RC), with RC = 10^{−4} the filter time constant. From Table 11.1,

H(f) = a1/(a1 + j2πf).   (3)

(1) From Theorem 11.17,

SXY(f) = H(f) SX(f) = 2a0 a1 / [(a1 + j2πf)(a0² + (2πf)²)].   (4)

(2) Again by Theorem 11.17, SY(f) = H*(f) SXY(f) = |H(f)|² SX(f). Note that

|H(f)|² = H(f) H*(f) = [a1/(a1 + j2πf)] [a1/(a1 − j2πf)] = a1²/(a1² + (2πf)²).   (5)

Thus,

SY(f) = |H(f)|² SX(f) = [a1²/(a1² + (2πf)²)] [2a0/(a0² + (2πf)²)].   (6,7)

(3) To find the average power at the filter output, we can either use basic calculus and calculate ∫_{−∞}^{∞} SY(f) df directly, or we can find RY(τ) as an inverse transform of SY(f). Using partial fractions and the Fourier transform table, the latter method is actually less algebra. In particular, some algebra will show that

SY(f) = K0/(a0² + (2πf)²) + K1/(a1² + (2πf)²),   (8)

where

K0 = 2a0 a1²/(a1² − a0²),   K1 = −2a0 a1²/(a1² − a0²).   (9,10)

Consulting with Table 11.1, we see that

RY(τ) = (K0/(2a0)) e^{−a0|τ|} + (K1/(2a1)) e^{−a1|τ|}.   (11)

Substituting the values of K0 and K1, we obtain

RY(τ) = [a1² e^{−a0|τ|} − a0 a1 e^{−a1|τ|}] / (a1² − a0²).   (12)

The average power of the Y(t) process is

RY(0) = a1/(a1 + a0) = 2/3.   (13)

Note that the input signal has average power RX(0) = 1. Since the RC filter has a 3dB bandwidth of 10,000 rad/sec and the signal X(t) has most of its signal energy below 5,000 rad/sec, the output signal has almost as much power as the input.
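The output power RY(0) = 2/3 can be double-checked by integrating SY(f) numerically; the grid below is an arbitrary choice (a sketch, not from the text):

a0=5000; a1=1e4;
f=-1e6:10:1e6;        %SY(f) decays like 1/f^4, so this range suffices
SY=2*a0*a1^2./((a1^2+(2*pi*f).^2).*(a0^2+(2*pi*f).^2));
trapz(f,SY)           %approximately 2/3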

Quiz 11.9  This quiz implements an example of Equations (11.146) and (11.147) for a system in which we filter Y(t) = X(t) + N(t) to produce an optimal linear estimate of X(t). The solution to this quiz is just to find the filter Ĥ(f) using Equation (11.146) and to calculate the mean square error e*_L using Equation (11.147).

Comment: Since the text omitted the derivations of Equations (11.146) and (11.147), we note, so that the reader can be at peace with the derivations, that Example 10.24 showed that

RY(τ) = RX(τ) + RN(τ),   RYX(τ) = RX(τ).   (1)

Taking Fourier transforms, it follows that

SY(f) = SX(f) + SN(f),   SYX(f) = SX(f).   (2)

Now we can go on to the quiz.

(1) Since RX(τ) = sinc(2Wτ), where W = 5,000 Hz, we see from Table 11.1 that

SX(f) = (1/10^4) rect(f/10^4).   (3)

(2) Because the noise process N(t) has constant power and µN = 0, RN(0) = Var[N] = 1. This implies

RN(0) = ∫_{−∞}^{∞} SN(f) df = ∫_{−B}^{B} N0 df = 2 N0 B.   (4)

Thus N0 = 1/(2B), and the noise power spectral density can be written as

SN(f) = N0 rect(f/(2B)) = (1/(2B)) rect(f/(2B)).   (5)

From Equation (11.146), the optimal filter is

Ĥ(f) = SX(f)/(SX(f) + SN(f)) = [(1/10^4) rect(f/10^4)] / [(1/10^4) rect(f/10^4) + (1/(2B)) rect(f/(2B))].   (6)

(3) We produce the output X̂(t) by passing the noisy signal Y(t) through the filter Ĥ(f). From Equation (11.147), the mean square error of the estimate is

e*_L = ∫_{−∞}^{∞} SX(f) SN(f)/(SX(f) + SN(f)) df.   (7)

To evaluate the MSE e*_L, we need to know whether B ≤ W. Since the problem asks us to find the largest possible B, let's suppose B ≤ W; we can go back and consider the case B > W later. When B ≤ W, the mean square error is

e*_L = ∫_{−B}^{B} [(1/10^4)(1/(2B))] / [(1/10^4) + (1/(2B))] df = (1/10^4) / [(1/10^4) + (1/(2B))] = 1/(1 + 5,000/B).   (8)

To obtain MSE e*_L ≤ 0.05 requires B ≤ 5,000/19 = 263.16 Hz.
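Equation (8) can be confirmed numerically for any particular bandwidth; a sketch with the arbitrary choice B = 2500, for which 1/(1 + 5,000/B) = 1/3:

B=2500; f=-6000:0.5:6000;
SX=(abs(f)<=5000)/1e4; SN=(abs(f)<=B)/(2*B);
g=SX.*SN./(SX+SN); g(SX+SN==0)=0;   %integrand of Eq. (7); avoid 0/0 outside both bands
trapz(f,g)                          %approximately 1/3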

Two examples of the filter Ĥ(f) are shown in Figure 7.

[Figure 7: The Wiener filter Ĥ(f) for Quiz 11.9, for B = 500 and B = 2500.]

Although this completes the solution to the quiz, what is happening may not be obvious. The noise power is always Var[N] = 1 Watt, for all values of B. Increasing B spreads the constant 1 watt of power of N(t) over more bandwidth; conversely, decreasing the single-sided bandwidth B increases the power spectral density of the noise over frequencies |f| < B. As B shrinks, the filter Ĥ(f) makes an increasingly deep and narrow notch at frequencies |f| ≤ B; outside this notch, the filter suppresses less of the signal of X(t). The result is that the MSE goes down.

Finally, we note that we can choose B very large and also achieve MSE e*_L = 0.05. In particular, when B > W = 5,000, SN(f) = 1/(2B) over all frequencies |f| < W. In this case, the Wiener filter Ĥ(f) is an ideal (flat) lowpass filter

Ĥ(f) = (1/10^4) / [(1/10^4) + (1/(2B))] for |f| < 5,000, and 0 otherwise.   (9)

That is, the Wiener filter removes the noise that is outside the band of the desired signal. The mean square error is

e*_L = ∫_{−5,000}^{5,000} [(1/10^4)(1/(2B))] / [(1/10^4) + (1/(2B))] df = (1/(2B)) / [(1/10^4) + (1/(2B))] = 1/(B/5,000 + 1).   (10)

In this case, B ≥ 9.5 × 10^4 guarantees e*_L ≤ 0.05.

Quiz 11.10  It is fairly straightforward to find SX(φ) and SY(φ). The only thing to keep in mind is to use fftc to transform the autocorrelation RX[n] into the power spectral density SX(φ). The following MATLAB program generates and plots the functions shown in Figure 8.

stem(0:N-1.ylabel(’S_{Y_{10}}(n/N)’).abs(sx)).N).5*[1 1]. the low pass moving average ﬁlter for M = 10 removes the high frquency components and results in a ﬁlter output that varies very slowly. rx=[2 4 2]. stem(0:N-1. %impulse/filter response: M=10 SY10=sx.com Phone:5017621195 1 H(f) 0. H10=fft(h10.10).ˆ2). figure. As an aside.N). h2=0. 75 Address:104 pine meadows loop. %autocorrelation and PSD stem(0:N-1. the ﬁlter H (φ) ﬁlters out almost all of the high frequency components of X (t).*((abs(H10)).9. AR. Hence.ylabel(’S_X(n/N)’). Although these imaginary parts have no computational signiﬁcance.abs(SY2)). In the context of Example 11. SY2 and SY10 in mquiz11 should all be realvalued vectors.ˆ2). h10=0.26.* ((abs(H2)). we generate stem plots of the magnitude of each power spectral density. xlabel(’n’).N). us (United States) Zip Code:71901 . %mquiz11. xlabel(’n’).5 0 H(f) −5000 −2000 0 f 2000 5000 1 0.5 0 −5000 −2000 0 f 2000 5000 B = 500 B = 2500 Figure 7: Wiener ﬁlter for Quiz 11. they tend to confuse the stem function. when M = 10.abs(SY10)). %PSD of Y for M=2 xlabel(’n’). the ﬁnite numerical precision of M ATLAB results in tiny imaginary parts. %impulse/filter response: M=2 SY2=SX. However.Name:joey iwatsuru Email:joeyiwat@yahoo. figure.ylabel(’S_{Y_2}(n/N)’). SX=fftc(rx.1*ones(1. hot springs. Relative to M = 2. note that the vectors SX.m N=32. H2=fft(h2.

[Figure 8: For Quiz 11.10, graphs of SX(n/N), SY2(n/N) for M = 2, and SY10(n/N) for M = 10, using an N = 32 point DFT.]
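Since rx=[2 4 2] corresponds to RX[n] = 4δ[n] + 2δ[n−1] + 2δ[n+1], the DTFT is SX(φ) = 4 + 4 cos(2πφ). This gives an analytic cross-check of the top panel of Figure 8; the sketch below assumes that fftc returns samples of this DTFT at φ = n/N, which is its role in the matcode archive.

N=32; phi=(0:N-1)/N;
SXcheck=4+4*cos(2*pi*phi);   %DTFT of RX[n]=4d[n]+2d[n-1]+2d[n+1]
stem(0:N-1,SXcheck)          %should match the S_X(n/N) panel of Figure 8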

Quiz Solutions – Chapter 12

Quiz 12.1  The system has two states depending on whether the previous packet was received in error. From the problem statement, we are given the conditional probabilities

P[Xn+1 = 0 | Xn = 0] = 0.99,   P[Xn+1 = 1 | Xn = 1] = 0.9.   (1)

Since each Xn must be either 0 or 1, we can conclude that

P[Xn+1 = 1 | Xn = 0] = 0.01,   P[Xn+1 = 0 | Xn = 1] = 0.1.   (2)

These conditional probabilities correspond to the Markov chain [two states 0 and 1, with cross transition probabilities 0.01 and 0.1] and the transition matrix

P = [0.99 0.01; 0.10 0.90].   (3)

Quiz 12.2  From the problem statement, we have the three-state Markov chain and its transition matrix P. The eigenvalues of P are

λ1 = 0,  λ2 = 0.4,  λ3 = 1.   (1)

We can diagonalize P into P = S^{−1} D S, where D = diag(λ1, λ2, λ3) and si, the ith row of S, is the left eigenvector of P satisfying si P = λi si.   (2)

Algebra will verify that the n-step transition matrix is

P^n = S^{−1} D^n S = [0.6 0.2 0.2; 0.6 0.2 0.2; 0.6 0.2 0.2] + (0.4)^n B,   (3)

where each row of the first matrix holds the limiting state probabilities and B is the outer product of the right and left eigenvectors associated with λ2 = 0.4 (the λ1 = 0 term contributes nothing for n ≥ 1).
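Numerically, the eigenvector machinery of Quiz 12.2 is a few lines of MATLAB; a sketch using the two-state matrix of Quiz 12.1 (the same commands apply to the 3 × 3 chain):

P=[0.99 0.01; 0.10 0.90];
[S,D]=eig(P');               %columns of S: left eigenvectors of P
[dmax,i]=max(diag(D));       %locate the eigenvalue lambda=1
pv=S(:,i)'/sum(S(:,i))       %stationary probabilities, [10/11 1/11]
P^100                        %every row approaches pv as n grows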

Quiz 12.3  The Markov chain describing the factory status and the corresponding state transition matrix are

[chain diagram: state 0 loops with probability 0.9 and moves to state 1 with probability 0.1; state 1 moves to state 2 with probability 1; state 2 returns to state 0 with probability 1]

P = [0.9 0.1 0; 0 0 1; 1 0 0].   (1)

With π = [π0 π1 π2], the system of equations π = πP yields π1 = 0.1π0 and π2 = π1. This implies

π0 + π1 + π2 = π0 (1 + 0.1 + 0.1) = 1.   (2)

It follows that the limiting state probabilities are

π0 = 5/6,  π1 = 1/12,  π2 = 1/12.   (3)
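A quick numerical confirmation that these limiting probabilities solve π = πP (a sketch, not from the text):

P=[0.9 0.1 0; 0 0 1; 1 0 0];   %factory chain of Quiz 12.3
pv=[5/6 1/12 1/12];
max(abs(pv*P-pv))              %zero: pv is the stationary vector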

Quiz 12.4  The communicating classes are

C1 = {0, 1},  C2 = {2, 3},  C3 = {4, 5, 6}.   (1)

The states in C1 and C3 are aperiodic, while the states in C2 have period 2. (2) Once the system enters a state in C1, the class C1 is never left. Thus the states in C1 are recurrent; that is, C1 is a recurrent class. Similarly, the states in C3 are recurrent. On the other hand, the states in C2 are transient: once the system exits C2, the states in C2 are never reentered. (3)

Quiz 12.5  At any time t, the state n can take on the values 0, 1, 2, .... The state transition probabilities are

P_{n−1,n} = P[K > n | K > n−1] = P[K > n]/P[K > n−1],   (1,2)

P_{n−1,0} = P[K = n | K > n−1] = P[K = n]/P[K > n−1].   (3)

The Markov chain resembles

[chain diagram: states 0, 1, 2, 3, 4, …, with transitions from each state n−1 forward to state n and back to state 0; the returns to state 0 are governed by P[K = 1], P[K = 2], P[K = 3], P[K = 4], P[K = 5], ….]

The stationary probabilities satisfy

π0 = π0 P[K = 1] + π1,   (4)
π1 = π0 P[K = 2] + π2,   (5)
...
π_{k−1} = π0 P[K = k] + πk,  k = 1, 2, ....   (6)

From Equation (4), we obtain

π1 = π0 (1 − P[K = 1]) = π0 P[K > 1].   (7)

Similarly, Equation (5) implies

π2 = π1 − π0 P[K = 2] = π0 (P[K > 1] − P[K = 2]) = π0 P[K > 2].   (8)

This suggests that πk = π0 P[K > k]. We verify this pattern by showing that πk = π0 P[K > k] satisfies Equation (6):

π0 P[K > k−1] = π0 P[K = k] + π0 P[K > k].   (9)

When we apply Σ_{k=0}^{∞} πk = 1, we recall from Problem 2.5.11 that Σ_{k=0}^{∞} P[K > k] = E[K] for a non-negative integer-valued random variable K. Thus π0 E[K] = 1, which implies

πn = P[K > n]/E[K].   (10)

This Markov chain models repeated random countdowns. The system state is the time until the counter expires. When the counter expires, we randomly reset the counter to a new value K = k and then we count down k units of time. Since we spend one unit of time in each state, including state 0, we have k − 1 units of time left after the state 0 counter reset. If we have a random variable W such that the PMF of W satisfies PW(n) = πn, then W has a discrete PMF representing the remaining time of the counter at a time in the distant future.
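To make the result concrete, here is a check with one hypothetical countdown distribution, K equally likely on {1, 2, 3, 4} (an assumed example, not from the quiz), for which P[K > n]/E[K] = [0.4 0.3 0.2 0.1]:

PK=[0.25 0.25 0.25 0.25];   %hypothetical PMF of K on {1,2,3,4}
P=zeros(4,4);
P(1,:)=PK;                  %from state 0, go to state k-1 w.p. P[K=k]
for k=2:4, P(k,k-1)=1; end  %states 1,2,3 count down with probability 1
Pn=P^100;
Pn(1,:)                     %approaches P[K>n]/E[K]=[0.4 0.3 0.2 0.1]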

Quiz 12.6  (1) By inspection, the number of transitions needed to return to state 0 is always a multiple of 2. Thus the period of state 0 is d = 2.

(2) To find the stationary probabilities, we solve the system of equations π = πP and Σ_{i=0}^{3} πi = 1:

π0 = (3/4)π1 + (1/4)π3,   (1)
π1 = (1/4)π0 + (1/4)π2,   (2)
π2 = (1/4)π1 + (3/4)π3,   (3)
1 = π0 + π1 + π2 + π3.   (4)

Solving the second and third equations for π2 and π3 yields

π2 = 4π1 − π0,   π3 = (4/3)π2 − (1/3)π1 = 5π1 − (4/3)π0.   (5)

Substituting π3 back into the first equation yields

π0 = (3/4)π1 + (1/4)(5π1 − (4/3)π0) = (3/4)π1 + (5/4)π1 − (1/3)π0.   (6)

This implies π1 = (2/3)π0. It follows from the first and second equations that π2 = (5/3)π0 and π3 = 2π0. Lastly, we choose π0 so the state probabilities sum to 1:

1 = π0 + π1 + π2 + π3 = π0 (1 + 2/3 + 5/3 + 2) = π0 (16/3).   (7)

It follows that the state probabilities are

π0 = 3/16,  π1 = 2/16,  π2 = 5/16,  π3 = 6/16.   (8)

(3) Since the system starts in state 0 at time 0, we can use Theorem 12.14 to find the limiting probability that the system is in state 0 at time nd:

lim_{n→∞} P00(nd) = d π0 = 3/8.   (9)
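The hand algebra can be bypassed by solving the same linear system directly (a sketch, not from the text):

A=[1 -3/4 0 -1/4;    %pi0=(3/4)pi1+(1/4)pi3
  -1/4 1 -1/4 0;     %pi1=(1/4)pi0+(1/4)pi2
   0 -1/4 1 -3/4;    %pi2=(1/4)pi1+(3/4)pi3
   1 1 1 1];         %probabilities sum to 1
p=A\[0;0;0;1]        %[3/16; 2/16; 5/16; 6/16]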

Quiz 12.7  The Markov chain has the same structure as that in Example 12.22. The only difference is the modified transition rates:

[chain diagram: states 0, 1, 2, 3, 4, …; state n moves forward to n+1 with probability ((n+1)/(n+2))^α, that is (1/2)^α, (2/3)^α, (3/4)^α, (4/5)^α, …, and otherwise returns to state 0.]

The event T00 > n occurs if the system reaches state n before returning to state 0, which occurs with probability

P[T00 > n] = (1/2)^α (2/3)^α ⋯ ((n−1)/n)^α = 1/n^α.   (1)

Thus the CDF of T00 satisfies FT00(n) = 1 − P[T00 > n] = 1 − 1/n^α. To determine whether state 0 is recurrent, we observe that for all α > 0,

P[V00] = lim_{n→∞} FT00(n) = lim_{n→∞} (1 − 1/n^α) = 1.   (2)

Thus state 0 is recurrent for all α > 0, and since the chain has only one communicating class, all states are recurrent. (We also note that if α = 0, then all states are transient.)

To determine whether the chain is null recurrent or positive recurrent, we need to calculate E[T00]. In Example 12.24, we did this by deriving the PMF PT00(n). In this problem, it will be simpler to use the result of Problem 2.5.11, which says that Σ_{k=0}^{∞} P[K > k] = E[K] for any non-negative integer-valued random variable K. Applying this result, the expected time to return to state 0 is

E[T00] = Σ_{n=0}^{∞} P[T00 > n] = 1 + Σ_{n=1}^{∞} 1/n^α.   (3)

For 0 < α ≤ 1, 1/n^α ≥ 1/n and it follows that

E[T00] ≥ 1 + Σ_{n=1}^{∞} 1/n = ∞.   (4)

We conclude that the Markov chain is null recurrent for 0 < α ≤ 1. On the other hand, for α > 1,

E[T00] = 2 + Σ_{n=2}^{∞} 1/n^α.   (5)

Note that for all n ≥ 2,

1/n^α ≤ ∫_{n−1}^{n} dx/x^α.   (6)

This implies

E[T00] ≤ 2 + Σ_{n=2}^{∞} ∫_{n−1}^{n} dx/x^α = 2 + ∫_{1}^{∞} dx/x^α = 2 + [x^{−α+1}/(−α+1)]_{1}^{∞} = 2 + 1/(α−1) < ∞.   (7,8,9)

Thus for all α > 1, the Markov chain is positive recurrent.
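For α > 1, the bound E[T00] ≤ 2 + 1/(α−1) is easy to check numerically; a sketch with the arbitrary example α = 2:

alpha=2; n=1:1e6;          %alpha>1; truncate the infinite sum
ET=1+sum(1./n.^alpha)      %1+pi^2/6=2.6449, under the bound 2+1/(alpha-1)=3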

Quiz 12.8  The number of customers in the "friendly" store is given by the Markov chain

[chain diagram: states 0, 1, …, i, i+1, …; from each state the chain moves up with probability p, down with probability (1−p)q, and stays put with probability (1−p)(1−q).]

In the above chain, we note that (1−p)q is the probability that no new customer arrives while an existing customer gets one unit of service and then departs the store. By applying Theorem 12.13 with state space partitioned between S = {0, 1, …, i} and S′ = {i+1, i+2, …}, we see that for any state i ≥ 0,

πi p = πi+1 (1−p)q.   (1)

This implies

πi+1 = [p/((1−p)q)] πi.   (2)

Since Equation (2) holds for i = 0, 1, 2, …, we have that πi = π0 α^i where

α = p/((1−p)q).   (3)

Requiring the state probabilities to sum to 1, we have that for α < 1,

Σ_{i=0}^{∞} πi = π0 Σ_{i=0}^{∞} α^i = π0/(1−α) = 1.   (4)

Thus for α < 1, the limiting state probabilities are

πi = (1−α) α^i,  i = 0, 1, 2, ….   (5)

In addition, for α ≥ 1 or, equivalently, p ≥ q/(1+q), the limiting state probabilities do not exist.
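For example, with the hypothetical values p = 0.2 and q = 0.5 (so that α = 1/2 < 1; these numbers are an assumption for illustration, not from the quiz), the geometric limiting probabilities satisfy the balance equation (1):

p=0.2; q=0.5;
a=p/((1-p)*q);                               %alpha=0.5<1, so the limit exists
pv=(1-a)*a.^(0:20);                          %limiting probabilities (1-alpha)*alpha^i
max(abs(pv(1:end-1)*p-pv(2:end)*((1-p)*q)))  %balance holds up to roundoff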

Quiz 12.9  The continuous time Markov chain describing the processor is

[chain diagram: states 0, 1, 2, 3, 4; new tasks arrive at rate 2 (transitions n → n+1), tasks complete at rate 3 (transitions n → n−1), and the processor reboots at rate 0.1 (transitions n → 0 from every state n ≥ 1).]

Note that q10 = 3.1 since the task completes at rate 3 per msec and the processor reboots at rate 0.1 per msec, and the rate to state 0 is the sum of those two rates. From the Markov chain, we obtain the following useful equations for the stationary distribution:

5.1 p1 = 2 p0 + 3 p2,
5.1 p2 = 2 p1 + 3 p3,
5.1 p3 = 2 p2 + 3 p4,
3.1 p4 = 2 p3.

We can solve these equations by working backward, solving for p4 in terms of p3, p3 in terms of p2, and so on, yielding

p4 = (20/31) p3,  p3 = (620/981) p2,  p2 = (19,620/31,431) p1,  p1 = (628,620/1,014,381) p0.   (1)

Applying p0 + p1 + p2 + p3 + p4 = 1 yields p0 = 1,014,381/2,443,401 and the stationary probabilities are

p0 = 0.4151,  p1 = 0.2573,  p2 = 0.1606,  p3 = 0.1015,  p4 = 0.0655.   (2)

Quiz 12.10  The M/M/c/∞ queue has Markov chain

[chain diagram: states 0, 1, …, c, c+1, …; arrivals at rate λ throughout; departures at rate µ, 2µ, …, cµ, and at rate cµ for every state above c.]

From the Markov chain, the stationary probabilities must satisfy

pn = (ρ/n) p_{n−1},  n = 1, 2, …, c,
pn = (ρ/c) p_{n−1},  n = c+1, c+2, ….   (1)

It is straightforward to show that this implies

pn = p0 ρ^n/n!,  n = 1, 2, …, c,
pn = p0 (ρ/c)^{n−c} ρ^c/c!,  n = c+1, c+2, ….   (2)

The requirement that Σ_{n=0}^{∞} pn = 1 yields

p0 = [Σ_{n=0}^{c} ρ^n/n! + (ρ^c/c!) (ρ/c)/(1 − ρ/c)]^{−1}.   (3)
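Equation (3) translates directly into MATLAB. A sketch with the hypothetical values c = 4 servers and ρ = 3 (any ρ < c works; the values are an assumption for illustration):

c=4; rho=3;                 %hypothetical example values; requires rho<c
n=0:c;
p0=1/(sum(rho.^n./factorial(n))+(rho^c/factorial(c))*(rho/c)/(1-rho/c))
                            %p0 of an M/M/4 queue with rho=3, about 0.0377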
