**Probability and Stochastic Processes: A Friendly Introduction for Electrical and Computer Engineers**

SECOND EDITION

MATLAB Function Reference

Roy D. Yates and David J. Goodman

May 22, 2004

This document is a supplemental reference for MATLAB functions described in the text Probability and Stochastic Processes: A Friendly Introduction for Electrical and Computer Engineers. This document should be accompanied by matcode.zip, an archive of the corresponding MATLAB .m files. Here are some points to keep in mind in using these functions.

• The actual programs can be found in the archive matcode.zip or in a directory matcode. To use the functions, you will need to use the MATLAB command addpath to add this directory to the path that MATLAB searches for executable .m files (see the example following this list).

• The matcode archive has both general purpose programs for solving probability problems as well as specific .m files associated with examples or quizzes in the text. This manual describes only the general purpose .m files in matcode.zip. Other programs in the archive are described in the main text or in the Quiz Solution Manual.

• The MATLAB functions described here are intended as a supplement to the text. The code is not fully commented. Many comments and explanations relating to the code appear in the text, the Quiz Solution Manual (available on the web), or in the Problem Solution Manual (available on the web for instructors).

• The code is instructional. The focus is on MATLAB programming techniques to solve probability problems and to simulate experiments. The code is definitely not bulletproof; for example, input range checking is generally neglected.

• This is a work in progress. At the moment (May 2004), the homework solution manual has a number of unsolved homework problems. As these solutions require the development of additional MATLAB functions, these functions will be added to this reference manual.

• There is a nonzero probability (in fact, a probability close to unity) that errors will be found. If you find errors or have suggestions or comments, please send email to ryates@winlab.rutgers.edu. When errors are found, revisions to both this document and the collection of MATLAB functions will be posted.
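For example, assuming the archive was unpacked into a local directory named matcode (the directory name here is illustrative), the path can be set and checked as follows:

>> addpath('matcode');    % add the directory of .m files to MATLAB's search path
>> which count            % verify that a matcode function is now visible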


Functions for Random Variables

bernoullipmf y=bernoullipmf(p,x)

function pv=bernoullipmf(p,x)

%For Bernoulli (p) rv X

%input = vector x

%output = vector pv

%such that pv(i)=Prob(X=x(i))

pv=(1-p)*(x==0) + p*(x==1);

pv=pv(:);

Input: p is the success probability of a Bernoulli random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = P_X(x(i)).
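As a usage sketch (the argument values below are illustrative, not from the text):

>> pv=bernoullipmf(0.3,[-1 0 1 2])   % returns the column vector [0; 0.7; 0.3; 0]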

bernoullicdf y=bernoullicdf(p,x)

function cdf=bernoullicdf(p,x)
%Usage: cdf=bernoullicdf(p,x)
%For Bernoulli (p) rv X and input
%vector x, output is the vector
%cdf such that cdf(i)=Prob[X<=x(i)]
x=floor(x(:));   %for noninteger x(i)
x=min(x,1);      %x(i)>1 has the same cdf as x(i)=1
allx=0:1;
allcdf=cumsum(bernoullipmf(p,allx));
okx=(x>=0);      %x(i)<0 are zero-prob values
x=(okx.*x);      %set zero-prob x(i)=0
cdf=okx.*allcdf(x+1); %zero for zero-prob x(i)

Input: p is the success probability of a Bernoulli random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = F_X(x(i)).

bernoullirv x=bernoullirv(p,m)

function x=bernoullirv(p,m)

%return m samples of bernoulli (p) rv

r=rand(m,1);

x=(r>=(1-p));

Input: p is the success probability of a Bernoulli random variable X, m is a positive integer
Output: x is a vector of m independent sample values of X
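As a quick sanity check (sample size illustrative), the relative frequency of ones should be near p:

>> x=bernoullirv(0.3,10000);
>> mean(x)    % typically close to 0.3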


bignomialpmf y=bignomialpmf(n,p,x)

function pmf=bignomialpmf(n,p,x)
%binomial(n,p) rv X,
%input = vector x
%output= vector pmf: pmf(i)=Prob[X=x(i)]
k=(0:n-1)';
a=log((p/(1-p))*((n-k)./(k+1)));
L0=n*log(1-p);
L=[L0; L0+cumsum(a)];
pb=exp(L);
% pb=[P[X=0] ... P[X=n]]^t
x=x(:);
okx =(x>=0).*(x<=n).*(x==floor(x));
x=okx.*x;
pmf=okx.*pb(x+1);

Input: n and p are the parameters of a binomial (n, p) random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = P_X(x(i)).
Comment: This function should always produce the same output as binomialpmf(n,p,x); however, the function calculates the logarithm of the probability and this may lead to small numerical inaccuracy.

binomialcdf y=binomialcdf(n,p,x)

function cdf=binomialcdf(n,p,x)

%Usage: cdf=binomialcdf(n,p,x)

%For binomial(n,p) rv X,

%and input vector x, output is

%vector cdf: cdf(i)=P[X<=x(i)]

x=floor(x(:)); %for noninteger x(i)

allx=0:max(x);

%calculate cdf from 0 to max(x)

allcdf=cumsum(binomialpmf(n,p,allx));

okx=(x>=0); %x(i) < 0 are zero-prob values

x=(okx.*x); %set zero-prob x(i)=0

cdf= okx.*allcdf(x+1); %zero for zero-prob x(i)

Input: n and p are the parameters of a binomial (n, p) random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = F_X(x(i)).


binomialpmf y=binomialpmf(n,p,x)

function pmf=binomialpmf(n,p,x)
%binomial(n,p) rv X,
%input = vector x
%output= vector pmf: pmf(i)=Prob[X=x(i)]
if p<0.5
   pp=p;
else
   pp=1-p;
end
i=0:n-1;
ip= ((n-i)./(i+1))*(pp/(1-pp));
pb=((1-pp)^n)*cumprod([1 ip]);
if pp < p
   pb=fliplr(pb);
end
pb=pb(:); % pb=[P[X=0] ... P[X=n]]^t
x=x(:);
okx =(x>=0).*(x<=n).*(x==floor(x));
x=okx.*x;
pmf=okx.*pb(x+1);

Input: n and p are the parameters of a binomial (n, p) random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = P_X(x(i)).

binomialrv x=binomialrv(n,p,m)

function x=binomialrv(n,p,m)

% m binomial(n,p) samples

r=rand(m,1);

cdf=binomialcdf(n,p,0:n);

x=count(cdf,r);

Input: n and p are the parameters of a binomial random variable X, m is a positive integer
Output: x is a vector of m independent samples of random variable X
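For example (parameters illustrative), each sample lies in {0, 1, ..., n} and the sample mean should be near np:

>> x=binomialrv(100,0.5,1000);
>> mean(x)    % typically close to np = 50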

bivariategausspdf

function f=bivariategausspdf(muX,muY,sigmaX,sigmaY,rho,x,y)
%Usage: f=bivariategausspdf(muX,muY,sigmaX,sigmaY,rho,x,y)
%Evaluate the bivariate Gaussian (muX,muY,sigmaX,sigmaY,rho) PDF
nx=(x-muX)/sigmaX;
ny=(y-muY)/sigmaY;
f=exp(-((nx.^2) +(ny.^2) - (2*rho*nx.*ny))/(2*(1-rho^2)));
f=f/(2*pi*sigmaX*sigmaY*sqrt(1-rho^2)); %sigmaX,sigmaY are case sensitive

Input: Scalar parameters muX,muY,sigmaX,sigmaY,rho of the bivariate Gaussian PDF, scalars x and y.
Output: f, the value of the bivariate Gaussian PDF at x,y.
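As a spot check (values illustrative): with zero means, unit variances and rho = 0, the PDF at the origin is 1/(2π):

>> f=bivariategausspdf(0,0,1,1,0,0,0)   % returns 1/(2*pi) = 0.1592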


duniformcdf y=duniformcdf(k,l,x)

function cdf=duniformcdf(k,l,x)

%Usage: cdf=duniformcdf(k,l,x)

% For discrete uniform (k,l) rv X

% and input vector x, output is

% vector cdf: cdf(i)=Prob[X<=x(i)]

x=floor(x(:)); %for noninteger x_i

allx=k:max(x);

%allcdf = cdf values from 0 to max(x)

allcdf=cumsum(duniformpmf(k,l,allx));

%x_i < k are zero prob values

okx=(x>=k);

%set zero prob x(i)=k

x=((1-okx)*k)+(okx.*x);

%x(i)=0 for zero prob x(i)

cdf= okx.*allcdf(x-k+1);

Input: k and l are the parameters of a discrete uniform (k, l) random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = F_X(x(i)).

duniformpmf y=duniformpmf(k,l,x)

function pmf=duniformpmf(k,l,x)

%discrete uniform(k,l) rv X,

%input = vector x

%output= vector pmf: pmf(i)=Prob[X=x(i)]

pmf= (x>=k).*(x<=l).*(x==floor(x));

pmf=pmf(:)/(l-k+1);

Input: k and l are the parameters of a discrete uniform (k, l) random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = P_X(x(i)).

duniformrv x=duniformrv(k,l,m)

function x=duniformrv(k,l,m)

%returns m samples of a discrete

%uniform (k,l) random variable

r=rand(m,1);

cdf=duniformcdf(k,l,k:l);

x=k+count(cdf,r);

Input: k and l are the parameters of a discrete uniform (k, l) random variable X, m is a positive integer
Output: x is a vector of m independent samples of random variable X
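For example (parameters illustrative), 1000 samples of a discrete uniform (0, 10) random variable should hit each of the 11 values roughly equally often:

>> x=duniformrv(0,10,1000);
>> hist(x,0:10)    % roughly 1000/11 = 91 counts per value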


erlangb pb=erlangb(rho,c)

function pb=erlangb(rho,c);
%Usage: pb=erlangb(rho,c)
%returns the Erlang-B blocking
%probability for an M/M/c/c
%queue with load rho
pn=exp(-rho)*poissonpmf(rho,0:c);
pb=pn(c+1)/sum(pn);

Input: Offered load rho (ρ = λ/µ), and the number of servers c of an M/M/c/c queue.
Output: pb, the blocking probability of the queue
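As a usage sketch (load and server count illustrative):

>> pb=erlangb(5,10)    % blocking probability of an M/M/10/10 queue with load rho=5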

erlangcdf y=erlangcdf(n,lambda,x)

function F=erlangcdf(n,lambda,x)

F=1.0-poissoncdf(lambda*x,n-1);

Input: n and lambda are the parameters of an Erlang random variable X, vector x
Output: Vector y such that y(i) = F_X(x(i)).

erlangpdf y=erlangpdf(n,lambda,x)

function f=erlangpdf(n,lambda,x)
%Erlang (n,lambda) PDF: lambda^n x^(n-1) e^(-lambda x)/(n-1)!
f=((lambda^n)/factorial(n-1))...
   *(x.^(n-1)).*exp(-lambda*x);

Input: n and lambda are the parameters of an Erlang random variable X, vector x
Output: Vector y such that y(i) = f_X(x(i)) = λ^n x(i)^(n−1) e^(−λx(i))/(n − 1)!.

erlangrv x=erlangrv(n,lambda,m)

function x=erlangrv(n,lambda,m)

y=exponentialrv(lambda,m*n);

x=sum(reshape(y,m,n),2);

Input: n and lambda are the parameters of an Erlang random variable X, integer m
Output: Length m vector x such that each x(i) is a sample of X
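Since an Erlang (n, λ) random variable is a sum of n independent exponential (λ) random variables, a quick check (parameters illustrative) is that the sample mean approaches n/λ:

>> x=erlangrv(3,2,10000);
>> mean(x)    % typically close to n/lambda = 1.5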

exponentialcdf y=exponentialcdf(lambda,x)

function F=exponentialcdf(lambda,x)

F=1.0-exp(-lambda*x);

Input: lambda is the parameter of an exponential random variable X, vector x
Output: Vector y such that y(i) = F_X(x(i)) = 1 − e^(−λx(i)).


exponentialpdf y=exponentialpdf(lambda,x)

function f=exponentialpdf(lambda,x)

f=lambda*exp(-lambda*x);

f=f.*(x>=0);

Input: lambda is the parameter of an exponential random variable X, vector x
Output: Vector y such that y(i) = f_X(x(i)) = λe^(−λx(i)).

exponentialrv x=exponentialrv(lambda,m)

function x=exponentialrv(lambda,m)

x=-(1/lambda)*log(1-rand(m,1));

Input: lambda is the parameter of an exponential random variable X, integer m
Output: Length m vector x such that each x(i) is a sample of X
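The one-liner above is the inverse-CDF (inverse transform) method: if U is a uniform (0, 1) random variable, then X = −ln(1 − U)/λ has CDF F_X(x) = 1 − e^(−λx). A quick check (parameters illustrative):

>> x=exponentialrv(0.5,10000);
>> mean(x)    % typically close to 1/lambda = 2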

finitecdf y=finitecdf(sx,p,x)

function cdf=finitecdf(s,p,x)
% finite random variable X:
% vector s of sample space
% elements {s(1),s(2), ...}
% vector p of probabilities
% p(i)=P[X=s(i)]
% Output is the vector
% cdf: cdf(i)=P[X<=x(i)]
cdf=[];
for i=1:length(x)
   pxi= sum(p(find(s<=x(i))));
   cdf=[cdf; pxi];
end

Input: sx is the range of a finite random variable X, p is the corresponding probability assignment, x is a vector of possible sample values
Output: y is a vector with y(i) = F_X(x(i)).

finitecoeff rho=finitecoeff(SX,SY,PXY)

function rho=finitecoeff(SX,SY,PXY);

%Usage: rho=finitecoeff(SX,SY,PXY)

%Calculate the correlation coefficient rho of

%finite random variables X and Y

ex=finiteexp(SX,PXY); vx=finitevar(SX,PXY);

ey=finiteexp(SY,PXY); vy=finitevar(SY,PXY);

R=finiteexp(SX.*SY,PXY);

rho=(R-ex*ey)/sqrt(vx*vy);

Input: Grids SX, SY and probability grid PXY describing the finite random variables X and Y.
Output: rho, the correlation coefficient of X and Y


finitecov covxy=finitecov(SX,SY,PXY)

function covxy=finitecov(SX,SY,PXY);

%Usage: cxy=finitecov(SX,SY,PXY)

%returns the covariance of

%finite random variables X and Y

%given by grids SX, SY, and PXY

ex=finiteexp(SX,PXY);

ey=finiteexp(SY,PXY);

R=finiteexp(SX.*SY,PXY);

covxy=R-ex*ey;

Input: Grids SX, SY and probability grid PXY describing the finite random variables X and Y.
Output: covxy, the covariance of X and Y.

finiteexp ex=finiteexp(sx,px)

function ex=finiteexp(sx,px);

%Usage: ex=finiteexp(sx,px)

%returns the expected value E[X]

%of finite random variable X described

%by samples sx and probabilities px

ex=sum((sx(:)).*(px(:)));

Input: Probability vector px, vector of samples sx describing random variable X.
Output: ex, the expected value E[X].

finitepmf y=finitepmf(sx,p,x)

function pmf=finitepmf(sx,px,x)

% finite random variable X:

% vector sx of sample space

% elements {sx(1),sx(2), ...}

% vector px of probabilities

% px(i)=P[X=sx(i)]

% Output is the vector

% pmf: pmf(i)=P[X=x(i)]

pmf=zeros(size(x(:)));

for i=1:length(x)

pmf(i)= sum(px(find(sx==x(i))));

end

Input: sx is the range of a finite random variable X, px is the corresponding probability assignment, x is a vector of possible sample values
Output: y is a vector with y(i) = P[X = x(i)].
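As a usage sketch with an illustrative sample space and PMF (not from the text):

>> sx=[1 2 3]; px=[0.5 0.3 0.2];
>> pmf=finitepmf(sx,px,[2 5])   % returns [0.3; 0] since 5 is not in the range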

finiterv x=finiterv(sx,p,m)

function x=finiterv(s,p,m)

% returns m samples

% of finite (s,p) rv

%s=s(:);p=p(:);

r=rand(m,1);

cdf=cumsum(p);

x=s(1+count(cdf,r));

Input: sx is the range of a finite random variable X, p is the corresponding probability assignment, m is a positive integer
Output: x is a vector of m independent sample values of random variable X


finitevar v=finitevar(sx,px)

function v=finitevar(sx,px);
%Usage: v=finitevar(sx,px)
% returns the variance Var[X]
% of finite random variable X described by
% samples sx and probabilities px
ex2=finiteexp(sx.^2,px);
ex=finiteexp(sx,px);
v=ex2-(ex^2);

Input: Probability vector px and vector of samples sx describing random variable X.
Output: v, the variance Var[X].

gausscdf y=gausscdf(mu,sigma,x)

function f=gausscdf(mu,sigma,x)

f=phi((x-mu)/sigma);

Input: mu and sigma are the parameters of a Gaussian random variable X, vector x
Output: Vector y such that y(i) = F_X(x(i)) = Φ((x(i) − µ)/σ).

gausspdf y=gausspdf(mu,sigma,x)

function f=gausspdf(mu,sigma,x)
f=exp(-(x-mu).^2/(2*sigma^2))/...
   sqrt(2*pi*sigma^2);

Input: mu and sigma are the parameters of a Gaussian random variable X, vector x
Output: Vector y such that y(i) = f_X(x(i)).

gaussrv x=gaussrv(mu,sigma,m)

function x=gaussrv(mu,sigma,m)

x=mu +(sigma*randn(m,1));

Input: mu and sigma are the parameters of a Gaussian random variable X, integer m
Output: Length m vector x such that each x(i) is a sample of X


gaussvector x=gaussvector(mu,C,m)

function x=gaussvector(mu,C,m)
%output: m Gaussian vectors,
%each with mean mu
%and covariance matrix C
if (min(size(C))==1)
   C=toeplitz(C);
end
n=size(C,2);
if (length(mu)==1)
   mu=mu*ones(n,1);
end
[U,D,V]=svd(C);
x=V*(D^(0.5))*randn(n,m)...
   +(mu(:)*ones(1,m));

Input: For a Gaussian (µ_X, C_X) random vector X, gaussvector can be called in two ways:
• C is the n × n covariance matrix, mu is either a length n vector or a scalar, m is an integer.
• C is the length n vector equal to the first row of a symmetric Toeplitz covariance matrix C_X, mu is either a length n vector or a scalar, m is an integer.
If mu is a length n vector, then mu is the expected value vector; otherwise, each element of X is assumed to have mean mu.
Output: n × m matrix x such that each column x(:,i) is a sample vector of X
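As a usage sketch with an illustrative 2 × 2 covariance matrix (not from the text), the sample covariance should approach C:

>> C=[1 0.5; 0.5 1];
>> x=gaussvector(0,C,10000);   % 2 x 10000 matrix of sample vectors
>> cov(x')                     % typically close to C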

gaussvectorpdf f=gaussvectorpdf(mu,C,x)

function f=gaussvectorpdf(mu,C,x)
n=length(x);
z=x(:)-mu(:);
%note the factor 1/2 in the exponent of the Gaussian PDF
f=exp(-z'*inv(C)*z/2)/...
   sqrt((2*pi)^n*det(C));

Input: For a Gaussian (µ_X, C_X) random vector X, mu is a length n vector, C is the n × n covariance matrix, x is a length n vector.
Output: f is the Gaussian vector PDF f_X(x) evaluated at x.

geometriccdf y=geometriccdf(p,x)

function cdf=geometriccdf(p,x)
%For geometric(p) rv X and
%input vector x, output is vector
%cdf such that cdf(i)=Prob[X<=x(i)]
x=(x(:)>=1).*floor(x(:));
cdf=1-((1-p).^x);

Input: p is the parameter of a geometric random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = F_X(x(i)).


geometricpmf y=geometricpmf(p,x)

function pmf=geometricpmf(p,x)
%geometric(p) rv X
%out: pmf(i)=Prob[X=x(i)]
x=x(:);
pmf= p*((1-p).^(x-1));
pmf= (x>0).*(x==floor(x)).*pmf;

Input: p is the parameter of a geometric random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = P_X(x(i)).

geometricrv x=geometricrv(p,m)

function x=geometricrv(p,m)

%Usage: x=geometricrv(p,m)

% returns m samples of a geometric (p) rv

r=rand(m,1);

x=ceil(log(1-r)/log(1-p));

Input: p is the parameter of a geometric random variable X, m is a positive integer
Output: x is a vector of m independent samples of random variable X
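This is again the inverse-CDF method: X = ceil(ln(1 − U)/ln(1 − p)) is a geometric (p) random variable. A quick check (parameter illustrative):

>> x=geometricrv(0.25,10000);
>> mean(x)    % typically close to 1/p = 4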

icdfrv x=icdfrv(@icdf,m)

function x=icdfrv(icdfhandle,m)

%Usage: x=icdfrv(@icdf,m)

%returns m samples of rv X

%with inverse CDF icdf.m

u=rand(m,1);

x=feval(icdfhandle,u);

Input: @icdf is a "handle" (a kind of pointer) to a MATLAB function icdf.m that is MATLAB's representation of an inverse CDF F_X^(-1)(x) of a random variable X, integer m
Output: Length m vector x such that each x(i) is a sample of X
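For example, to sample an exponential (λ = 1) random variable via its inverse CDF F_X^(-1)(u) = −ln(1 − u), one might write a hypothetical helper file expicdf.m:

function x=expicdf(u)
%inverse CDF of an exponential (1) rv
x=-log(1-u);

and then call

>> x=icdfrv(@expicdf,100);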


pascalcdf y=pascalcdf(k,p,x)

function cdf=pascalcdf(k,p,x)

%Usage: cdf=pascalcdf(k,p,x)

%For a pascal (k,p) rv X

%and input vector x, the output

%is a vector cdf such that

% cdf(i)=Prob[X<=x(i)]

x=floor(x(:)); % for noninteger x(i)

allx=k:max(x);

%allcdf holds all needed cdf values

allcdf=cumsum(pascalpmf(k,p,allx));

%x_i < k have zero-prob,

% other values are OK

okx=(x>=k);

%set zero-prob x(i)=k,

%just so indexing is not fouled up

x=(okx.*x) +((1-okx)*k);

cdf= okx.*allcdf(x-k+1);

Input: k and p are the parameters of a Pascal (k, p) random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = F_X(x(i)).

pascalpmf y=pascalpmf(k,p,x)

function pmf=pascalpmf(k,p,x)
%For Pascal (k,p) rv X, and
%input vector x, output is a
%vector pmf: pmf(i)=Prob[X=x(i)]
x=x(:);
n=max(x);
i=(k:n-1)';
ip= [1 ;(1-p)*(i./(i+1-k))];
%pb=all n-k+1 pascal probs
pb=(p^k)*cumprod(ip);
okx=(x==floor(x)).*(x>=k);
%set bad x(i)=k to stop bad indexing
x=(okx.*x) + k*(1-okx);
% pmf(i)=0 unless x(i) >= k
pmf=okx.*pb(x-k+1);

Input: k and p are the parameters of a Pascal (k, p) random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = P_X(x(i)).


pascalrv x=pascalrv(k,p,m)

function x=pascalrv(k,p,m)

% return m samples of pascal(k,p) rv

r=rand(m,1);

rmax=max(r);

xmin=k;

xmax=ceil(2*(k/p)); %set max range

sx=xmin:xmax;

cdf=pascalcdf(k,p,sx);

while cdf(length(cdf)) <=rmax

xmax=2*xmax;

sx=xmin:xmax;

cdf=pascalcdf(k,p,sx);

end

x=xmin+countless(cdf,r);

Input: k and p are the parameters of a Pascal random variable X, m is a positive integer
Output: x is a vector of m independent samples of random variable X

phi y=phi(x)

function y=phi(x)

sq2=sqrt(2);

y= 0.5 + 0.5*erf(x/sq2);

Input: Vector x
Output: Vector y such that y(i) = Φ(x(i)).

poissoncdf y=poissoncdf(alpha,x)

function cdf=poissoncdf(alpha,x)

%output cdf(i)=Prob[X<=x(i)]

x=floor(x(:));

sx=0:max(x);

cdf=cumsum(poissonpmf(alpha,sx));

%cdf from 0 to max(x)

okx=(x>=0);%x(i)<0 -> cdf=0

x=(okx.*x);%set negative x(i)=0

cdf= okx.*cdf(x+1);

%cdf=0 for x(i)<0

Input: alpha is the parameter of a Poisson (α) random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = F_X(x(i)).


poissonpmf y=poissonpmf(alpha,x)

function pmf=poissonpmf(alpha,x)
%Poisson (alpha) rv X,
%out=vector pmf: pmf(i)=P[X=x(i)]
x=x(:);
k=(1:max(x))';
logfacts =cumsum(log(k));
pb=exp([-alpha; ...
   -alpha+ (k*log(alpha))-logfacts]);
okx=(x>=0).*(x==floor(x));
x=okx.*x;
pmf=okx.*pb(x+1);
%pmf(i)=0 for zero-prob x(i)

Input: alpha is the parameter of a Poisson (α) random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = P_X(x(i)).

poissonrv x=poissonrv(alpha,m)

function x=poissonrv(alpha,m)

%return m samples of poisson(alpha) rv X

r=rand(m,1);

rmax=max(r);

xmin=0;

xmax=ceil(2*alpha); %set max range

sx=xmin:xmax;

cdf=poissoncdf(alpha,sx);

%while ( sum(cdf <=rmax) ==(xmax-xmin+1) )

while cdf(length(cdf)) <=rmax

xmax=2*xmax;

sx=xmin:xmax;

cdf=poissoncdf(alpha,sx);

end

x=xmin+countless(cdf,r);

Input: alpha is the parameter of a Poisson (α) random variable X, m is a positive integer
Output: x is a vector of m independent samples of random variable X

uniformcdf y=uniformcdf(a,b,x)

function F=uniformcdf(a,b,x)
%Usage: F=uniformcdf(a,b,x)
%returns the CDF of a continuous
%uniform (a,b) rv evaluated at x
%F(x)=(x-a)/(b-a) on [a,b)
F=(x-a).*((x>=a) & (x<b))/(b-a);
F=F+1.0*(x>=b);

Input: a and b are the parameters of a continuous uniform random variable X, vector x
Output: Vector y such that y(i) = F_X(x(i))


uniformpdf y=uniformpdf(a,b,x)

function f=uniformpdf(a,b,x)

%Usage: f=uniformpdf(a,b,x)

%returns the PDF of a continuous

%uniform rv evaluated at x

f=((x>=a) & (x<b))/(b-a);

Input: a and b are the parameters of a continuous uniform random variable X, vector x
Output: Vector y such that y(i) = f_X(x(i))

uniformrv x=uniformrv(a,b,m)

function x=uniformrv(a,b,m)
%Usage: x=uniformrv(a,b,m)
%Returns m samples of a
%uniform (a,b) random variable
x=a+(b-a)*rand(m,1);

Input: a and b are the parameters of a continuous uniform random variable X, positive integer m
Output: m element vector x such that each x(i) is a sample of X.


Functions for Stochastic Processes

brownian w=brownian(alpha,t)

function w=brownian(alpha,t)

%Brownian motion process

%sampled at t(1)<t(2)< ...

t=t(:);

n=length(t);

delta=t-[0;t(1:n-1)];

x=sqrt(alpha*delta).*gaussrv(0,1,n);

w=cumsum(x);

Input: t is a vector holding an ordered sequence of inspection times, alpha is the scaling constant of a Brownian motion process such that the i-th increment has variance α(t_i − t_(i−1)).
Output: w is a vector such that w(i) is the position at time t(i) of the particle in Brownian motion.

cmcprob pv=cmcprob(Q,p0,t)

function pv = cmcprob(Q,p0,t)
%Q has zero diagonal rates
%initial state probabilities p0
K=size(Q,1)-1; %max no. state
%check for integer p0
if (length(p0)==1)
   p0=((0:K)==p0);
end
R=Q-diag(sum(Q,2));
pv= (p0(:)'*expm(R*t))';

Input: n × n state transition matrix Q for a continuous-time finite Markov chain, length n vector p0 denoting the initial state probabilities, nonnegative scalar t
Output: Length n vector pv, the state probability vector at time t of the Markov chain
Comment: If p0 is a scalar integer, then the simulation starts in state p0

cmcstatprob pv=cmcstatprob(Q)

function pv = cmcstatprob(Q)
%Q has zero diagonal rates
R=Q-diag(sum(Q,2));
n=size(Q,1);
R(:,1)=ones(n,1);
pv=([1 zeros(1,n-1)]*R^(-1))';

Input: State transition matrix Q for a continuous-time finite Markov chain
Output: pv is the stationary probability vector for the continuous-time Markov chain

dmcstatprob pv=dmcstatprob(P)

function pv = dmcstatprob(P)
n=size(P,1);
A=(eye(n)-P);
A(:,1)=ones(n,1);
pv=([1 zeros(1,n-1)]*A^(-1))';

Input: n × n stochastic matrix P representing a discrete-time aperiodic irreducible finite Markov chain
Output: pv is the stationary probability vector.
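As a usage sketch for an illustrative two-state chain (not from the text), the output satisfies pv' = pv'*P:

>> P=[0.9 0.1; 0.2 0.8];
>> pv=dmcstatprob(P)    % returns [2/3; 1/3]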


poissonarrivals s=poissonarrivals(lambda,T)

function s=poissonarrivals(lambda,T)

%arrival times s=[s(1) ... s(n)]

% s(n)<= T < s(n+1)

n=ceil(1.1*lambda*T);

s=cumsum(exponentialrv(lambda,n));

while (s(length(s))< T),

s_new=s(length(s))+ ...

cumsum(exponentialrv(lambda,n));

s=[s; s_new];

end

s=s(s<=T);

Input: lambda is the arrival rate of a Poisson process, T marks the end of an observation interval [0, T].
Output: s=[s(1), ..., s(n)]' is a vector such that s(i) is the i-th arrival time. Note that the length n is a Poisson random variable with expected value λT.
Comment: This code is pretty stupid. There are decidedly better ways to create a set of arrival times; see Problem 10.13.5.

poissonprocess N=poissonprocess(lambda,t)

function N=poissonprocess(lambda,t)

%input: rate lambda>0, vector t

%For a sample function of a

%Poisson process of rate lambda,

%N(i) = no. of arrivals by t(i)

s=poissonarrivals(lambda,max(t));

N=count(s,t);

Input: lambda is the arrival rate of a Poisson process, t is a vector of "inspection times."
Output: N is a vector such that N(i) is the number of arrivals by inspection time t(i).
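For example (rate and inspection times illustrative), one sample path of the counting process can be generated and plotted with:

>> t=0:0.1:10;
>> N=poissonprocess(5,t);
>> plot(t,N)    % staircase-shaped sample path; N(end) should be near lambda*T = 50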

simcmc ST=simcmc(Q,p0,T)

function ST=simcmc(Q,p0,T);
K=size(Q,1)-1; %max no. state
%calc average trans. rate
ps=cmcstatprob(Q);
v=sum(Q,2); R=ps'*v;
n=ceil(0.6*T/R);
ST=simcmcstep(Q,p0,2*n);
while (sum(ST(:,2))<T),
   s=ST(size(ST,1),1);
   p00=Q(1+s,:)/v(1+s);
   S=simcmcstep(Q,p00,n);
   ST=[ST;S];
end
n=1+sum(cumsum(ST(:,2))<T);
ST=ST(1:n,:);
%truncate last holding time
ST(n,2)=T-sum(ST(1:n-1,2));

Input: state transition matrix Q for a continuous-time finite Markov chain, vector p0 denoting the initial state probabilities, time T
Output: A simulation of the Markov chain system over the time interval [0, T]: The output is an n × 2 matrix ST such that the first column ST(:,1) is the sequence of system states and the second column ST(:,2) is the amount of time spent in each state. That is, ST(i,2) is the amount of time the system spends in state ST(i,1).
Comment: If p0 is a scalar integer, then the simulation starts in state p0. Note that n, the number of state occupancy periods, is random.


simcmcstep S=simcmcstep(Q,p0,n)

function S=simcmcstep(Q,p0,n);

%S=simcmcstep(Q,p0,n)

% Simulate n steps of a cts

% Markov Chain, rate matrix Q,

% init. state probabilities p0

K=size(Q,1)-1; %max no. state

S=zeros(n+1,2);%init allocation

%check for integer p0

if (length(p0)==1)

p0=((0:K)==p0);

end

v=sum(Q,2); %state dep. rates

t=1./v;

P=diag(t)*Q;

S(:,1)=simdmc(P,p0,n);

S(:,2)=t(1+S(:,1)) ...

.*exponentialrv(1,n+1);

Input: State transition matrix Q for a continuous-time finite Markov chain, vector p0 denoting the initial state probabilities, integer n
Output: A simulation of n steps of the continuous-time Markov chain: the output is an (n+1) × 2 matrix S such that the first column S(:,1) is the sequence of system states and the second column S(:,2) is the amount of time spent in each state. That is, S(i,2) is the amount of time the system spends in state S(i,1).
Comment: If p0 is a scalar integer, then the simulation starts in state p0. This program is the basis for simcmc.

simdmc x=simdmc(P,p0,n)

function x=simdmc(P,p0,n)

K=size(P,1)-1; %highest no. state

sx=0:K; %state space

x=zeros(n+1,1); %initialization

if (length(p0)==1) %convert integer p0 to prob vector

p0=((0:K)==p0);

end

x(1)=finiterv(sx,p0,1); %x(m)= state at time m-1

for m=1:n,

x(m+1)=finiterv(sx,P(x(m)+1,:),1);

end

Input: Stochastic matrix P, the state transition matrix of a discrete-time finite Markov chain; vector p0 denoting the initial state probabilities; integer n, the number of steps to simulate.
Output: A simulation of the Markov chain such that, for the length n+1 vector x, x(m) is the state at time m-1 of the Markov chain.
Comment: If p0 is a scalar integer, then the simulation starts in state p0
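As a usage sketch tying simdmc to dmcstatprob (the chain below is illustrative): the long-run fraction of time spent in each state should approach the stationary probabilities.

>> P=[0.9 0.1; 0.2 0.8];
>> x=simdmc(P,0,10000);
>> [sum(x==0) sum(x==1)]/length(x)   % typically close to [2/3 1/3]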


Random Utilities

count n=count(x,y)

function n=count(x,y)
%Usage n=count(x,y)
%n(i)= # elements of x <= y(i)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX<=MY),1))';

Input: Vectors x and y
Output: Vector n such that n(i) is the number of elements of x less than or equal to y(i).
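For example:

>> n=count([1 2 3 4],[0 2.5 10])   % returns [0; 2; 4]

This is the workhorse behind several of the random sample generators above: x=count(cdf,r) inverts a discrete CDF at the uniform samples r.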

countequal n=countequal(x,y)

function n=countequal(x,y)
%Usage: n=countequal(x,y)
%n(j)= # elements of x = y(j)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX==MY),1))';

Input: Vectors x and y
Output: Vector n such that n(i) is the number of elements of x equal to y(i).

countless n=countless(x,y)

function n=countless(x,y)
%Usage: n=countless(x,y)
%n(i)= # elements of x < y(i)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX<MY),1))';

Input: Vectors x and y
Output: Vector n such that n(i) is the number of elements of x strictly less than y(i).

dftmat F=dftmat(N)

function F = dftmat(N);
%Usage: F=dftmat(N)
%F is the N by N DFT matrix
n=(0:N-1)';
F=exp((-1.0j)*2*pi*(n*(n'))/N);

Input: Integer N.
Output: F is the N by N discrete Fourier transform matrix


freqxy fxy=freqxy(xy,SX,SY)

function fxy = freqxy(xy,SX,SY)
%Usage: fxy = freqxy(xy,SX,SY)
%xy is an m x 2 matrix:
%xy(i,:)= ith sample pair X,Y
%Output fxy is a K x 3 matrix:
% [fxy(k,1) fxy(k,2)]
% = kth unique pair [x y] and
% fxy(k,3)= corresp. rel. freq.
%extend xy to include a sample
%for all possible (X,Y) pairs:
xy=[xy; SX(:) SY(:)];
[U,I,J]=unique(xy,'rows');
N=hist(J,1:max(J))-1;
N=N/sum(N);
fxy=[U N(:)];
%reorder fxy rows to match
%rows of [SX(:) SY(:) PXY(:)]:
fxy=sortrows(fxy,[2 1 3]);

Input: For random variables X and Y, xy is an m × 2 matrix holding a list of sample value pairs; xy(i,:) is the i-th sample pair (X, Y). Grids SX and SY represent the sample space.
Output: fxy is a K × 3 matrix. In each row [fxy(k,1) fxy(k,2) fxy(k,3)], [fxy(k,1) fxy(k,2)] is a unique (X, Y) pair with relative frequency fxy(k,3).
Comment: Given the grids SX, SY and the probability grid PXY, a list of random sample value pairs xy can be simulated by the commands
S=[SX(:) SY(:)];
xy=finiterv(S,PXY(:),m);
The output fxy is ordered so that the rows match the ordering of rows in the matrix [SX(:) SY(:) PXY(:)].

fftc S=fftc(r,N); S=fftc(r)

function S=fftc(varargin);

%DFT for a signal r

%centered at the origin

%Usage:

% fftc(r,N): N point DFT of r

% fftc(r): length(r) DFT of r

r=varargin{1};

L=1+floor(length(r)/2);

if (nargin>1)

N=varargin{2}(1);

else

N=(2*L)-1;

end

R=fft(r,N);

n=reshape(0:(N-1),size(R));

phase=2*pi*(n/N)*(L-1);

S=R.*exp((1.0j)*phase);

Input: Vector r=[r(1) ... r(2k+1)] holding the time sequence r_(−k), ..., r_0, ..., r_k centered around the origin.
Output: S is the DFT of r
Comment: Supports the same calling conventions as fft.


pmfplot pmfplot(sx,px,’x’,’y axis text’)

function h=pmfplot(sx,px,xls,yls)
%Usage: pmfplot(sx,px,xls,yls)
%sx and px are vectors, px is the PMF
%xls and yls are x and y label strings
nonzero=find(px);
sx=sx(nonzero); px=px(nonzero);
sx=(sx(:))'; px=(px(:))';
XM = [sx; sx];
PM=[zeros(size(px)); px];
h=plot(XM,PM,'-k');
set(h,'LineWidth',3);
if (nargin==4)
   xlabel(xls);
   ylabel(yls,'VerticalAlignment','Bottom');
end
xmin=min(sx); xmax=max(sx);
xborder=0.05*(xmax-xmin);
xmax=xmax+xborder;
xmin=xmin-xborder;
ymax=1.1*max(px);
axis([xmin xmax 0 ymax]);

Input: Sample space vector sx and PMF vector px for finite random variable X, optional text strings xls and yls
Output: A plot of the PMF P_X(x) in the bar style used in the text.

rect y=rect(x)

function y=rect(x);

%Usage:y=rect(x);

y=1.0*(abs(x)<0.5);

Input: Vector x
Output: Vector y such that y(i) = rect(x(i)) = 1 if |x(i)| < 0.5, and 0 otherwise.

sinc y=sinc(x)

function y=sinc(x);

xx=x+(x==0);

y=sin(pi*xx)./(pi*xx);

y=((1.0-(x==0)).*y)+ (1.0*(x==0));

Input: Vector x
Output: Vector y such that y(i) = sinc(x(i)) = sin(πx(i))/(πx(i)).
Comment: The code is ugly because it makes sure to produce the right limit value at x(i) = 0.


simplot simplot(S,xlabel,ylabel)

function h=simplot(S,xls,yls);
%h=simplot(S,xlabel,ylabel)
% Plots the output of a simulated state sequence
% If S is N by 1, a discrete time chain is assumed
% with visit times of one unit.
% If S is an N by 2 matrix, a cts time Markov chain
% is assumed where
% S(:,1) = state sequence.
% S(:,2) = state visit times.
% The cumulative sum of visit times
% are transition instances.
% h is a handle to a stairs plot of the state
% sequence vs state transition times
%in case of discrete time simulation
if (size(S,2)==1)
   S=[S ones(size(S))];
end
Y=[S(:,1) ; S(size(S,1),1)];
X=cumsum([0 ; S(:,2)]);
h=stairs(X,Y);
if (nargin==3)
   xlabel(xls);
   ylabel(yls,'VerticalAlignment','Bottom');
end

Input: The simulated state sequence vector S generated by S=simdmc(P,p0,n) or the n × 2 state/time matrix ST generated by either ST=simcmc(Q,p0,T) or ST=simcmcstep(Q,p0,n).
Output: A "stairs" plot showing the sequence of simulation states over time.
Comment: If S is just a state sequence vector, then each stair has equal width. If S is an n × 2 state/time matrix ST, then the width of the stair is proportional to the time spent in that state.


Probability and Stochastic Processes

A Friendly Introduction for Electrical and Computer Engineers

Second Edition

Quiz Solutions

Roy D. Yates and David J. Goodman

May 22, 2004

• The MATLAB section quizzes at the end of each chapter use programs available for download as the archive matcode.zip. This archive has general purpose programs for solving probability problems as well as specific .m files associated with examples or quizzes in the text. Also available is a manual probmatlab.pdf describing the general purpose .m files in matcode.zip.

• We have made a substantial effort to check the solution to every quiz. Nevertheless, there is a nonzero probability (in fact, a probability close to unity) that errors will be found. If you find errors or have suggestions or comments, please send email to ryates@winlab.rutgers.edu. When errors are found, corrected solutions will be posted at the website.


Quiz Solutions – Chapter 1

Quiz 1.1

In the Venn diagrams for parts (1)-(6) below, the shaded area represents the indicated set. [The six Venn diagrams over the sets M, O, and T are omitted here.]

(1) R = T^c   (2) M ∪ O   (3) M ∩ O
(4) R ∪ M   (5) R ∩ M   (6) T^c − M

Quiz 1.2

(1) A_1 = {vvv, vvd, vdv, vdd}
(2) B_1 = {dvv, dvd, ddv, ddd}
(3) A_2 = {vvv, vvd, dvv, dvd}
(4) B_2 = {vdv, vdd, ddv, ddd}
(5) A_3 = {vvv, ddd}
(6) B_3 = {vdv, dvd}
(7) A_4 = {vvv, vvd, vdv, dvv, vdd, dvd, ddv}
(8) B_4 = {ddd, ddv, dvd, vdd}

Recall that A_i and B_i are collectively exhaustive if A_i ∪ B_i = S. Also, A_i and B_i are mutually exclusive if A_i ∩ B_i = φ. Since we have written down each pair A_i and B_i above, we can simply check for these properties.

The pair A_1 and B_1 are mutually exclusive and collectively exhaustive. The pair A_2 and B_2 are mutually exclusive and collectively exhaustive. The pair A_3 and B_3 are mutually exclusive but not collectively exhaustive. The pair A_4 and B_4 are not mutually exclusive since dvd belongs to A_4 and B_4. However, A_4 and B_4 are collectively exhaustive.


Quiz 1.3

There are exactly 50 equally likely outcomes: s_51 through s_100. Each of these outcomes has probability 0.02.

(1) P[{s_79}] = 0.02
(2) P[{s_100}] = 0.02
(3) P[A] = P[{s_90, ..., s_100}] = 11 × 0.02 = 0.22
(4) P[F] = P[{s_51, ..., s_59}] = 9 × 0.02 = 0.18
(5) P[T ≥ 80] = P[{s_80, ..., s_100}] = 21 × 0.02 = 0.42
(6) P[T < 90] = P[{s_51, s_52, ..., s_89}] = 39 × 0.02 = 0.78
(7) P[a C grade or better] = P[{s_70, ..., s_100}] = 31 × 0.02 = 0.62
(8) P[student passes] = P[{s_60, ..., s_100}] = 41 × 0.02 = 0.82

Quiz 1.4

We can describe this experiment by the event space consisting of the four possible events V B, V L, DB, and DL. We represent these events in the table:

        V     D
  L   0.35    ?
  B    ?      ?

In a roundabout way, the problem statement tells us how to fill in the table. In particular,

P[V] = 0.7 = P[V L] + P[V B]   (1)
P[L] = 0.6 = P[V L] + P[DL]   (2)

Since P[V L] = 0.35, we can conclude that P[V B] = 0.35 and that P[DL] = 0.6 − 0.35 = 0.25. This allows us to fill in two more table entries:

        V     D
  L   0.35  0.25
  B   0.35    ?

The remaining table entry is filled in by observing that the probabilities must sum to 1. This implies P[DB] = 0.05 and the complete table is

        V     D
  L   0.35  0.25
  B   0.35  0.05

Finding the various probabilities is now straightforward:


(1) P[DL] = 0.25
(2) P[D ∪ L] = P[V L] + P[DL] + P[DB] = 0.35 + 0.25 + 0.05 = 0.65.
(3) P[V B] = 0.35
(4) P[V ∪ L] = P[V] + P[L] − P[V L] = 0.7 + 0.6 − 0.35 = 0.95
(5) P[V ∪ D] = P[S] = 1
(6) P[LB] = P[L L^c] = 0

Quiz 1.5

(1) The probability of exactly two voice calls is

P[N_V = 2] = P[{vvd, vdv, dvv}] = 0.3   (1)

(2) The probability of at least one voice call is

P[N_V ≥ 1] = P[{vdd, dvd, ddv, vvd, vdv, dvv, vvv}]   (2)
           = 6(0.1) + 0.2 = 0.8   (3)

An easier way to get the same answer is to observe that

P[N_V ≥ 1] = 1 − P[N_V < 1] = 1 − P[N_V = 0] = 1 − P[{ddd}] = 0.8   (4)

(3) The conditional probability of two voice calls followed by a data call given that there were two voice calls is

P[{vvd} | N_V = 2] = P[{vvd}, N_V = 2]/P[N_V = 2] = P[{vvd}]/P[N_V = 2] = 0.1/0.3 = 1/3   (5)

(4) The conditional probability of two data calls followed by a voice call given there were two voice calls is

P[{ddv} | N_V = 2] = P[{ddv}, N_V = 2]/P[N_V = 2] = 0   (6)

The joint event of the outcome ddv and exactly two voice calls has probability zero since there is only one voice call in the outcome ddv.

(5) The conditional probability of exactly two voice calls given at least one voice call is

P[N_V = 2 | N_V ≥ 1] = P[N_V = 2, N_V ≥ 1]/P[N_V ≥ 1] = P[N_V = 2]/P[N_V ≥ 1] = 0.3/0.8 = 3/8   (7)

(6) The conditional probability of at least one voice call given there were exactly two voice calls is

P[N_V ≥ 1 | N_V = 2] = P[N_V ≥ 1, N_V = 2]/P[N_V = 2] = P[N_V = 2]/P[N_V = 2] = 1   (8)

Given that there were two voice calls, there must have been at least one voice call.


Quiz 1.6

In this experiment, there are four outcomes with probabilities

P[{vv}] = (0.8)^2 = 0.64    P[{vd}] = (0.8)(0.2) = 0.16
P[{dv}] = (0.2)(0.8) = 0.16    P[{dd}] = (0.2)^2 = 0.04

When checking the independence of any two events A and B, it's wise to avoid intuition and simply check whether P[AB] = P[A]P[B]. Using the probabilities of the outcomes, we now can test for the independence of events.

(1) First, we calculate the probability of the joint event:

P[N_V = 2, N_V ≥ 1] = P[N_V = 2] = P[{vv}] = 0.64   (1)

Next, we observe that

P[N_V ≥ 1] = P[{vd, dv, vv}] = 0.96   (2)

Finally, we make the comparison

P[N_V = 2] P[N_V ≥ 1] = (0.64)(0.96) ≠ P[N_V = 2, N_V ≥ 1]   (3)

which shows the two events are dependent.

(2) The probability of the joint event is

P[N_V ≥ 1, C_1 = v] = P[{vd, vv}] = 0.80   (4)

From part (a), P[N_V ≥ 1] = 0.96. Further, P[C_1 = v] = 0.8 so that

P[N_V ≥ 1] P[C_1 = v] = (0.96)(0.8) = 0.768 ≠ P[N_V ≥ 1, C_1 = v]   (5)

Hence, the events are dependent.

(3) The problem statement that the calls were independent implies that the events the second call is a voice call, {C_2 = v}, and the first call is a data call, {C_1 = d}, are independent events. Just to be sure, we can do the calculations to check:

P[C_1 = d, C_2 = v] = P[{dv}] = 0.16   (6)

Since P[C_1 = d]P[C_2 = v] = (0.2)(0.8) = 0.16, we confirm that the events are independent. Note that this shouldn't be surprising since we used the information that the calls were independent in the problem statement to determine the probabilities of the outcomes.

(4) The probability of the joint event is

P[C_2 = v, N_V is even] = P[{vv}] = 0.64   (7)

Also, each event has probability

P[C_2 = v] = P[{dv, vv}] = 0.8,    P[N_V is even] = P[{dd, vv}] = 0.68   (8)

Thus, P[C_2 = v]P[N_V is even] = (0.8)(0.68) = 0.544. Since P[C_2 = v, N_V is even] ≠ 0.544, the events are dependent.


Quiz 1.7

Let F_i denote the event that the user is found on page i. The tree for the experiment is

[Tree diagram omitted: at each stage i = 1, 2, 3, the user is found (F_i, probability 0.8) or not found (F_i^c, probability 0.2).]

The user is found unless all three paging attempts fail. Thus the probability the user is found is

P[F] = 1 − P[F_1^c F_2^c F_3^c] = 1 − (0.2)^3 = 0.992   (1)

Quiz 1.8

(1) We can view choosing each bit in the code word as a subexperiment. Each subexperiment has two possible outcomes: 0 and 1. Thus by the fundamental principle of counting, there are 2 × 2 × 2 × 2 = 2^4 = 16 possible code words.

(2) An experiment that can yield all possible code words with two zeroes is to choose which 2 bits (out of 4 bits) will be zero. The other two bits then must be ones. There are (4 choose 2) = 6 ways to do this. Hence, there are six code words with exactly two zeroes. For this problem, it is also possible to simply enumerate the six code words: 1100, 1010, 1001, 0101, 0110, 0011.

(3) When the first bit must be a zero, then the first subexperiment of choosing the first bit has only one outcome. For each of the next three bits, we have two choices. In this case, there are 1 × 2 × 2 × 2 = 8 ways of choosing a code word.

(4) For the constant ratio code, we can specify a code word by choosing M of the bits to be ones. The other N − M bits will be zeroes. The number of ways of choosing such a code word is (N choose M). For N = 8 and M = 3, there are (8 choose 3) = 56 code words.

Quiz 1.9

(1) In this problem, k bits received in error is the same as k failures in 100 trials. The failure probability is ε = 1 − p and the success probability is 1 − ε = p. That is, the probability of k bits in error and 100 − k correctly received bits is

P[S_(k,100−k)] = (100 choose k) ε^k (1 − ε)^(100−k)   (1)


For ε = 0.01,

P[S_(0,100)] = (1 − ε)^100 = (0.99)^100 = 0.3660   (2)
P[S_(1,99)] = 100(0.01)(0.99)^99 = 0.3700   (3)
P[S_(2,98)] = 4950(0.01)^2 (0.99)^98 = 0.1849   (4)
P[S_(3,97)] = 161,700(0.01)^3 (0.99)^97 = 0.0610   (5)

(2) The probability a packet is decoded correctly is just

P[C] = P[S_(0,100)] + P[S_(1,99)] + P[S_(2,98)] + P[S_(3,97)] = 0.9819   (6)

Quiz 1.10

Since the chip works only if all n transistors work, the transistors in the chip are like devices in series. The probability that a chip works is P[C] = p^n.

The module works if either 8 chips work or 9 chips work. Let C_k denote the event that exactly k chips work. Since transistor failures are independent of each other, chip failures are also independent. Thus each P[C_k] has the binomial probability

P[C_8] = (9 choose 8)(P[C])^8 (1 − P[C])^(9−8) = 9p^(8n)(1 − p^n),   (1)
P[C_9] = (P[C])^9 = p^(9n).   (2)

The probability a memory module works is

P[M] = P[C_8] + P[C_9] = p^(8n)(9 − 8p^n)   (3)

Quiz 1.11

R=rand(1,100);

X=(R<= 0.4) ...

+ (2*(R>0.4).*(R<=0.9)) ...

+ (3*(R>0.9));

Y=hist(X,1:3)

For a MATLAB simulation, we first generate a vector R of 100 random numbers. Second, we generate vector X as a function of R to represent the 3 possible outcomes of a flip. That is, X(i)=1 if flip i was heads, X(i)=2 if flip i was tails, and X(i)=3 if flip i landed on the edge.

To see how this works, we note there are three cases:

• If R(i) <= 0.4, then X(i)=1.
• If 0.4 < R(i) and R(i)<=0.9, then X(i)=2.
• If 0.9 < R(i), then X(i)=3.

These three cases have probabilities 0.4, 0.5 and 0.1, respectively. Lastly, we use the hist function to count the occurrences of each possible value of X(i).


Quiz Solutions – Chapter 2

Quiz 2.1

The sample space, probabilities and corresponding grades for the experiment are

Outcome   P[·]   G
  BB      0.36   3.0
  BC      0.24   2.5
  CB      0.24   2.5
  CC      0.16   2

Quiz 2.2

(1) To find c, we recall that the PMF must sum to 1. That is,

Σ_(n=1)^3 P_N(n) = c(1 + 1/2 + 1/3) = 1   (1)

This implies c = 6/11. Now that we have found c, the remaining parts are straightforward.

(2) P[N = 1] = P_N(1) = c = 6/11
(3) P[N ≥ 2] = P_N(2) + P_N(3) = c/2 + c/3 = 5/11
(4) P[N > 3] = Σ_(n=4)^∞ P_N(n) = 0

Quiz 2.3

Decoding each transmitted bit is an independent trial where we call a bit error a "success." Each bit is in error, that is, the trial is a success, with probability p. Now we can interpret each experiment in the generic context of independent trials.

(1) The random variable X is the number of trials up to and including the first success. Similar to Example 2.11, X has the geometric PMF

P_X(x) = p(1 − p)^(x−1) if x = 1, 2, ...; 0 otherwise.   (1)

(2) If p = 0.1, then the probability exactly 10 bits are sent is

P[X = 10] = P_X(10) = (0.1)(0.9)^9 = 0.0387   (2)


The probability that at least 10 bits are sent is P[X ≥ 10] = Σ_(x=10)^∞ P_X(x). This sum is not too hard to calculate. However, it's even easier to observe that X ≥ 10 if the first 10 bits are transmitted correctly. That is,

P[X ≥ 10] = P[first 10 bits are correct] = (1 − p)^10   (3)

For p = 0.1, P[X ≥ 10] = 0.9^10 = 0.3487.

(3) The random variable Y is the number of successes in 100 independent trials. Just as in Example 2.13, Y has the binomial PMF

P_Y(y) = (100 choose y) p^y (1 − p)^(100−y)   (4)

If p = 0.01, the probability of exactly 2 errors is

P[Y = 2] = P_Y(2) = (100 choose 2)(0.01)^2 (0.99)^98 = 0.1849   (5)

(4) The probability of no more than 2 errors is

P[Y ≤ 2] = P_Y(0) + P_Y(1) + P_Y(2)   (6)
         = (0.99)^100 + 100(0.01)(0.99)^99 + (100 choose 2)(0.01)^2 (0.99)^98   (7)
         = 0.9207   (8)

(5) Random variable Z is the number of trials up to and including the third success. Thus Z has the Pascal PMF (see Example 2.15)

P_Z(z) = (z − 1 choose 2) p^3 (1 − p)^(z−3)   (9)

Note that P_Z(z) > 0 for z = 3, 4, 5, ....

(6) If p = 0.25, the probability that the third error occurs on bit 12 is

P_Z(12) = (11 choose 2)(0.25)^3 (0.75)^9 = 0.0645   (10)

Quiz 2.4

Each of these probabilities can be read off the CDF F_Y(y). However, we must keep in mind that when F_Y(y) has a discontinuity at y_0, F_Y(y) takes the upper value F_Y(y_0^+).

(1) P[Y < 1] = F_Y(1^−) = 0


(2) P[Y ≤ 1] = F_Y(1) = 0.6
(3) P[Y > 2] = 1 − P[Y ≤ 2] = 1 − F_Y(2) = 1 − 0.8 = 0.2
(4) P[Y ≥ 2] = 1 − P[Y < 2] = 1 − F_Y(2^−) = 1 − 0.6 = 0.4
(5) P[Y = 1] = P[Y ≤ 1] − P[Y < 1] = F_Y(1^+) − F_Y(1^−) = 0.6
(6) P[Y = 3] = P[Y ≤ 3] − P[Y < 3] = F_Y(3^+) − F_Y(3^−) = 0.8 − 0.8 = 0

Quiz 2.5

(1) With probability 0.7, a call is a voice call and C = 25. Otherwise, with probability 0.3, we have a data call and C = 40. This corresponds to the PMF

P_C(c) = 0.7 if c = 25; 0.3 if c = 40; 0 otherwise.   (1)

(2) The expected value of C is

E[C] = 25(0.7) + 40(0.3) = 29.5 cents   (2)

Quiz 2.6

(1) As a function of N, the cost T is

T = 25N + 40(3 − N) = 120 − 15N   (1)

(2) To find the PMF of T, we can draw the following tree:

[Tree diagram omitted: N = 0 with probability 0.1 gives T = 120; N = 1, 2, 3 each with probability 0.3 give T = 105, 90, 75 respectively.]

From the tree, we can write down the PMF of T:

P_T(t) = 0.3 if t = 75, 90, 105; 0.1 if t = 120; 0 otherwise.   (2)

From the PMF P_T(t), the expected value of T is

E[T] = 75 P_T(75) + 90 P_T(90) + 105 P_T(105) + 120 P_T(120)   (3)
     = (75 + 90 + 105)(0.3) + 120(0.1) = 93   (4)


Quiz 2.7

(1) Using Definition 2.14, the expected number of applications is

E[A] = Σ_(a=1)^4 a P_A(a) = 1(0.4) + 2(0.3) + 3(0.2) + 4(0.1) = 2   (1)

(2) The number of memory chips is M = g(A) where

g(A) = 4 if A = 1, 2; 6 if A = 3; 8 if A = 4.   (2)

(3) By Theorem 2.10, the expected number of memory chips is

E[M] = Σ_(a=1)^4 g(a) P_A(a) = 4(0.4) + 4(0.3) + 6(0.2) + 8(0.1) = 4.8   (3)

Since E[A] = 2, g(E[A]) = g(2) = 4. However, E[M] = 4.8 ≠ g(E[A]). The two quantities are different because g(A) is not of the form αA + β.

Quiz 2.8

The PMF P_N(n) allows us to calculate each of the desired quantities.

(1) The expected value of N is

E[N] = Σ_(n=0)^2 n P_N(n) = 0(0.1) + 1(0.4) + 2(0.5) = 1.4   (1)

(2) The second moment of N is

E[N^2] = Σ_(n=0)^2 n^2 P_N(n) = 0^2(0.1) + 1^2(0.4) + 2^2(0.5) = 2.4   (2)

(3) The variance of N is

Var[N] = E[N^2] − (E[N])^2 = 2.4 − (1.4)^2 = 0.44   (3)

(4) The standard deviation is σ_N = √Var[N] = √0.44 = 0.663.


Quiz 2.9

(1) From the problem statement, we learn that the conditional PMF of N given the event I is

P_(N|I)(n) = 0.02 if n = 1, 2, ..., 50; 0 otherwise.   (1)

(2) Also from the problem statement, the conditional PMF of N given the event T is

P_(N|T)(n) = 0.2 if n = 1, 2, 3, 4, 5; 0 otherwise.   (2)

(3) The problem statement tells us that P[T] = 1 − P[I] = 3/4. From Theorem 1.10 (the law of total probability), we find the PMF of N is

P_N(n) = P_(N|T)(n) P[T] + P_(N|I)(n) P[I]   (3)
       = 0.2(0.75) + 0.02(0.25) = 0.155 if n = 1, 2, 3, 4, 5;
         0(0.75) + 0.02(0.25) = 0.005 if n = 6, 7, ..., 50;
         0 otherwise.   (4)-(5)

(4) First we find

P[N ≤ 10] = Σ_(n=1)^10 P_N(n) = (0.155)(5) + (0.005)(5) = 0.80   (6)

By Theorem 2.17, the conditional PMF of N given N ≤ 10 is

P_(N|N≤10)(n) = P_N(n)/P[N ≤ 10] if n ≤ 10; 0 otherwise   (7)
             = 0.155/0.8 = 0.19375 if n = 1, 2, 3, 4, 5;
               0.005/0.8 = 0.00625 if n = 6, 7, 8, 9, 10;
               0 otherwise.   (8)-(9)

(5) Once we have the conditional PMF, calculating conditional expectations is easy.

E[N|N ≤ 10] = Σ_n n P_(N|N≤10)(n)   (10)
            = Σ_(n=1)^5 n(0.19375) + Σ_(n=6)^10 n(0.00625)   (11)
            = 3.15625   (12)


(a) samplemean(100)   (b) samplemean(1000)

Figure 1: Two examples of the output of samplemean(k) [plots omitted]

(6) To find the conditional variance, we first find the conditional second moment

E[N^2|N ≤ 10] = Σ_n n^2 P_(N|N≤10)(n)   (13)
              = Σ_(n=1)^5 n^2(0.19375) + Σ_(n=6)^10 n^2(0.00625)   (14)
              = 55(0.19375) + 330(0.00625) = 12.71875   (15)

The conditional variance is

Var[N|N ≤ 10] = E[N^2|N ≤ 10] − (E[N|N ≤ 10])^2   (16)
              = 12.71875 − (3.15625)^2 = 2.75684   (17)

Quiz 2.10

The function samplemean(k) generates and plots five m_n sequences for n = 1, 2, ..., k. The i-th column M(:,i) of M holds a sequence m_1, m_2, ..., m_k.

function M=samplemean(k);
K=(1:k)';
M=zeros(k,5);
for i=1:5,
   X=duniformrv(0,10,k);
   M(:,i)=cumsum(X)./K;
end;
plot(K,M);

Examples of the function calls (a) samplemean(100) and (b) samplemean(1000) are shown in Figure 1. Each call to samplemean(k) produces a random output. What is observed in these figures is that for small n, m_n is fairly random, but as n gets large, m_n gets close to E[X] = 5. Although each sequence m_1, m_2, ... that we generate is random, the sequences always converge to E[X]. This random convergence is analyzed in Chapter 7.


Quiz Solutions – Chapter 3

Quiz 3.1

The CDF of Y is

[Plot of F_Y(y) omitted.]

F_Y(y) = 0 if y < 0; y/4 if 0 ≤ y ≤ 4; 1 if y > 4.   (1)

From the CDF F_Y(y), we can calculate the probabilities:

(1) P[Y ≤ −1] = F_Y(−1) = 0
(2) P[Y ≤ 1] = F_Y(1) = 1/4
(3) P[2 < Y ≤ 3] = F_Y(3) − F_Y(2) = 3/4 − 2/4 = 1/4
(4) P[Y > 1.5] = 1 − P[Y ≤ 1.5] = 1 − F_Y(1.5) = 1 − (1.5)/4 = 5/8

Quiz 3.2

(1) First we will find the constant c and then we will sketch the PDF. To find c, we use the fact that ∫_(−∞)^∞ f_X(x) dx = 1. We will evaluate this integral using integration by parts:

∫_(−∞)^∞ f_X(x) dx = ∫_0^∞ cx e^(−x/2) dx   (1)
  = −2cx e^(−x/2) |_0^∞ (which equals 0) + ∫_0^∞ 2c e^(−x/2) dx   (2)
  = −4c e^(−x/2) |_0^∞ = 4c   (3)

Thus c = 1/4 and X has the Erlang (n = 2, λ = 1/2) PDF

[Plot of f_X(x) omitted.]

f_X(x) = (x/4) e^(−x/2) if x ≥ 0; 0 otherwise.   (4)


(2) To find the CDF F_X(x), we first note X is a nonnegative random variable so that F_X(x) = 0 for all x < 0. For x ≥ 0,

F_X(x) = ∫_0^x f_X(y) dy = ∫_0^x (y/4) e^(−y/2) dy   (5)
       = −(y/2) e^(−y/2) |_0^x − ∫_0^x −(1/2) e^(−y/2) dy   (6)
       = 1 − (x/2) e^(−x/2) − e^(−x/2)   (7)

The complete expression for the CDF is

[Plot of F_X(x) omitted.]

F_X(x) = 1 − (x/2 + 1) e^(−x/2) if x ≥ 0; 0 otherwise.   (8)

(3) From the CDF F_X(x),

P[0 ≤ X ≤ 4] = F_X(4) − F_X(0) = 1 − 3e^(−2).   (9)

(4) Similarly,

P[−2 ≤ X ≤ 2] = F_X(2) − F_X(−2) = 1 − 2e^(−1).   (10)

Quiz 3.3

The PDF of Y is

[Plot of f_Y(y) omitted.]

f_Y(y) = 3y^2/2 if −1 ≤ y ≤ 1; 0 otherwise.   (1)

(1) The expected value of Y is

E[Y] = ∫_(−∞)^∞ y f_Y(y) dy = ∫_(−1)^1 (3/2)y^3 dy = (3/8)y^4 |_(−1)^1 = 0.   (2)

Note that the above calculation wasn't really necessary because E[Y] = 0 whenever the PDF f_Y(y) is an even function (i.e., f_Y(y) = f_Y(−y)).

(2) The second moment of Y is

E[Y^2] = ∫_(−∞)^∞ y^2 f_Y(y) dy = ∫_(−1)^1 (3/2)y^4 dy = (3/10)y^5 |_(−1)^1 = 3/5.   (3)

(3) The variance of Y is

Var[Y] = E[Y^2] − (E[Y])^2 = 3/5.   (4)

(4) The standard deviation of Y is σ_Y = √Var[Y] = √(3/5).

Quiz 3.4

(1) When X is an exponential (\lambda) random variable, E[X] = 1/\lambda and Var[X] = 1/\lambda^2. Since E[X] = 3 and Var[X] = 9, we must have \lambda = 1/3. The PDF of X is

f_X(x) = \begin{cases} (1/3)e^{-x/3} & x \ge 0, \\ 0 & \text{otherwise.} \end{cases} (1)

(2) We know X is a uniform (a, b) random variable. To find a and b, we apply Theorem 3.6 to write

E[X] = \frac{a+b}{2} = 3, \qquad Var[X] = \frac{(b-a)^2}{12} = 9. (2)

This implies

a + b = 6, \qquad b - a = \pm 6\sqrt{3}. (3)

The only valid solution with a < b is

a = 3 - 3\sqrt{3}, \qquad b = 3 + 3\sqrt{3}. (4)

The complete expression for the PDF of X is

f_X(x) = \begin{cases} 1/(6\sqrt{3}) & 3 - 3\sqrt{3} \le x < 3 + 3\sqrt{3}, \\ 0 & \text{otherwise.} \end{cases} (5)

Quiz 3.5

Each of the requested probabilities can be calculated using the \Phi(z) function and Table 3.1 or Q(z) and Table 3.2. We start with the sketches.

(1) The PDFs of X and Y are shown below. The fact that Y has twice the standard deviation of X is reflected in the greater spread of f_Y(y). However, it is important to remember that as the standard deviation increases, the peak value of the Gaussian PDF goes down. (The sketch shows f_X(x) as the taller, narrower curve and f_Y(y) as the shorter, wider curve, both centered at zero.)

(2) Since X is Gaussian (0, 1),

P[-1 < X \le 1] = F_X(1) - F_X(-1) (1)
= \Phi(1) - \Phi(-1) = 2\Phi(1) - 1 = 0.6826. (2)

(3) Since Y is Gaussian (0, 2),

P[-1 < Y \le 1] = F_Y(1) - F_Y(-1) (3)
= \Phi(1/\sigma_Y) - \Phi(-1/\sigma_Y) = 2\Phi(1/2) - 1 = 0.383. (4)

(4) Again, since X is Gaussian (0, 1), P[X > 3.5] = Q(3.5) = 2.33 \times 10^{-4}.

(5) Since Y is Gaussian (0, 2), P[Y > 3.5] = Q(3.5/2) = Q(1.75) = 1 - \Phi(1.75) = 0.0401.
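These table lookups are easy to reproduce numerically. The sketch below builds \Phi(z) from MATLAB's built-in erfc function rather than assuming any matcode helper is available:

Phi = @(z) 0.5*erfc(-z/sqrt(2));   % standard normal CDF via erfc
Q   = @(z) 1 - Phi(z);             % standard normal complementary CDF
disp(2*Phi(1)-1);                  % part (2): 0.6827
disp(2*Phi(0.5)-1);                % part (3): 0.3829
disp(Q(3.5));                      % part (4): 2.33e-04
disp(Q(1.75));                     % part (5): 0.0401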

Quiz 3.6

The CDF of X is

F_X(x) = \begin{cases} 0 & x < -1, \\ (x+1)/4 & -1 \le x < 1, \\ 1 & x \ge 1. \end{cases} (1)

The following probabilities can be read directly from the CDF:

(1) P[X \le 1] = F_X(1) = 1.

(2) P[X < 1] = F_X(1^-) = 1/2.

(3) P[X = 1] = F_X(1^+) - F_X(1^-) = 1 - 1/2 = 1/2.

(4) We find the PDF f_X(x) by taking the derivative of F_X(x). The resulting PDF is

f_X(x) = \begin{cases} 1/4 & -1 \le x < 1, \\ (1/2)\delta(x-1) & x = 1, \\ 0 & \text{otherwise.} \end{cases} (2)

Quiz 3.7

(1) Since X is always nonnegative, F_X(x) = 0 for x < 0. Also, F_X(x) = 1 for x \ge 2 since it is always true that X \le 2. Lastly, for 0 \le x \le 2,

F_X(x) = \int_{-\infty}^{x} f_X(y)\,dy = \int_0^x (1 - y/2)\,dy = x - x^2/4. (1)

The complete CDF of X is

F_X(x) = \begin{cases} 0 & x < 0, \\ x - x^2/4 & 0 \le x \le 2, \\ 1 & x > 2. \end{cases} (2)

(2) The probability that Y = 1 is

P[Y = 1] = P[X \ge 1] = 1 - F_X(1) = 1 - 3/4 = 1/4. (3)

(3) Since X is nonnegative, Y is also nonnegative. Thus F_Y(y) = 0 for y < 0. Also, because Y \le 1, F_Y(y) = 1 for all y \ge 1. Finally, for 0 < y < 1,

F_Y(y) = P[Y \le y] = P[X \le y] = F_X(y). (4)

Using the CDF F_X(x), the complete expression for the CDF of Y is

F_Y(y) = \begin{cases} 0 & y < 0, \\ y - y^2/4 & 0 \le y < 1, \\ 1 & y \ge 1. \end{cases} (5)

As expected, we see that the jump in F_Y(y) at y = 1 is exactly equal to P[Y = 1].

(4) By taking the derivative of F_Y(y), we obtain the PDF f_Y(y). Note that when y < 0 or y > 1, the PDF is zero.

f_Y(y) = \begin{cases} 1 - y/2 + (1/4)\delta(y-1) & 0 \le y \le 1 \\ 0 & \text{otherwise} \end{cases} (6)

Quiz 3.8

(1) P[Y \le 6] = \int_{-\infty}^{6} f_Y(y)\,dy = \int_0^6 (1/10)\,dy = 0.6.

(2) From Definition 3.15, the conditional PDF of Y given Y \le 6 is

f_{Y|Y\le 6}(y) = \begin{cases} f_Y(y)/P[Y \le 6] & y \le 6, \\ 0 & \text{otherwise,} \end{cases} = \begin{cases} 1/6 & 0 \le y \le 6, \\ 0 & \text{otherwise.} \end{cases} (1)

(3) The probability Y > 8 is

P[Y > 8] = \int_8^{10} \frac{1}{10}\,dy = 0.2. (2)

(4) From Definition 3.15, the conditional PDF of Y given Y > 8 is

f_{Y|Y>8}(y) = \begin{cases} f_Y(y)/P[Y > 8] & y > 8, \\ 0 & \text{otherwise,} \end{cases} = \begin{cases} 1/2 & 8 < y \le 10, \\ 0 & \text{otherwise.} \end{cases} (3)

(5) From the conditional PDF f_{Y|Y\le 6}(y), we can calculate the conditional expectation

E[Y|Y \le 6] = \int_{-\infty}^{\infty} y f_{Y|Y\le 6}(y)\,dy = \int_0^6 \frac{y}{6}\,dy = 3. (4)

(6) From the conditional PDF f_{Y|Y>8}(y), we can calculate the conditional expectation

E[Y|Y > 8] = \int_{-\infty}^{\infty} y f_{Y|Y>8}(y)\,dy = \int_8^{10} \frac{y}{2}\,dy = 9. (5)

Quiz 3.9

A natural way to produce random variables with PDF f_{T|T>2}(t) is to generate samples of T with PDF f_T(t) and then to discard those samples which fail to satisfy the condition T > 2. Here is a MATLAB function that uses this method:

function t=t2rv(m)
i=0;lambda=1/3;
t=zeros(m,1);
while (i<m),
x=exponentialrv(lambda,1);
if (x>2)
t(i+1)=x;
i=i+1;
end
end

A second method exploits the fact that if T is an exponential (\lambda) random variable, then T' = T + 2 has PDF f_{T'}(t) = f_{T|T>2}(t). In this case the command

t=2.0+exponentialrv(1/3,m)

generates the vector t.
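The two methods can be compared empirically. The sketch below assumes the matcode function exponentialrv(lambda,m) is on the path; by the memoryless property, both sample means should be near E[T|T > 2] = 2 + 1/\lambda = 5:

m = 10000;
t1 = t2rv(m);                      % rejection method above
t2 = 2.0 + exponentialrv(1/3,m);   % shifted-exponential method
disp([mean(t1) mean(t2)]);         % both approximately 5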


Quiz Solutions – Chapter 4

Quiz 4.1

Each value of the joint CDF can be found by considering the corresponding probability.

(1) F_{X,Y}(-\infty, 2) = P[X \le -\infty, Y \le 2] \le P[X \le -\infty] = 0 since X cannot take on the value -\infty.

(2) F_{X,Y}(\infty, \infty) = P[X \le \infty, Y \le \infty] = 1. This result is given in Theorem 4.1.

(3) F_{X,Y}(\infty, y) = P[X \le \infty, Y \le y] = P[Y \le y] = F_Y(y).

(4) F_{X,Y}(\infty, -\infty) = P[X \le \infty, Y \le -\infty] = 0 since Y cannot take on the value -\infty.

Quiz 4.2

From the joint PMF of Q and G given in the table, we can calculate the requested probabilities by summing the PMF over those values of Q and G that correspond to the event.

(1) The probability that Q = 0 is

P[Q = 0] = P_{Q,G}(0,0) + P_{Q,G}(0,1) + P_{Q,G}(0,2) + P_{Q,G}(0,3) (1)
= 0.06 + 0.18 + 0.24 + 0.12 = 0.6 (2)

(2) The probability that Q = G is

P[Q = G] = P_{Q,G}(0,0) + P_{Q,G}(1,1) = 0.18 (3)

(3) The probability that G > 1 is

P[G > 1] = \sum_{g=2}^{3} \sum_{q=0}^{1} P_{Q,G}(q,g) (4)
= 0.24 + 0.16 + 0.12 + 0.08 = 0.6 (5)

(4) The probability that G > Q is

P[G > Q] = \sum_{q=0}^{1} \sum_{g=q+1}^{3} P_{Q,G}(q,g) (6)
= 0.18 + 0.24 + 0.12 + 0.16 + 0.08 = 0.78 (7)


Quiz 4.3

By Theorem 4.3, the marginal PMF of H is

P_H(h) = \sum_{b=0,2,4} P_{H,B}(h, b) (1)

For each value of h, this corresponds to calculating the row sum across the table of the joint PMF. Similarly, the marginal PMF of B is

P_B(b) = \sum_{h=-1}^{1} P_{H,B}(h, b) (2)

For each value of b, this corresponds to the column sum down the table of the joint PMF. The easiest way to calculate these marginal PMFs is to simply sum each row and column:

P_{H,B}(h,b)   b = 0   b = 2   b = 4   P_H(h)
h = -1         0       0.4     0.2     0.6
h = 0          0.1     0       0.1     0.2
h = 1          0.1     0.1     0       0.2
P_B(b)         0.2     0.5     0.3            (3)
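In MATLAB, these row and column sums are one-liners. The sketch below uses only base functions, with the joint PMF entered directly from the table above:

PHB = [0 0.4 0.2; 0.1 0 0.1; 0.1 0.1 0];  % rows h=-1,0,1; columns b=0,2,4
PH = sum(PHB,2)   % marginal PMF of H: [0.6; 0.2; 0.2]
PB = sum(PHB,1)   % marginal PMF of B: [0.2 0.5 0.3]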

Quiz 4.4

To find the constant c, we apply \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{X,Y}(x,y)\,dx\,dy = 1. Specifically,

\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{X,Y}(x,y)\,dx\,dy = \int_0^2 \int_0^1 cxy\,dx\,dy (1)
= c \int_0^2 y \left[ x^2/2 \big|_0^1 \right] dy (2)
= (c/2) \int_0^2 y\,dy = (c/4)y^2\big|_0^2 = c (3)

Thus c = 1. To calculate P[A], we write

P[A] = \iint_A f_{X,Y}(x,y)\,dx\,dy (4)

To integrate over A, we convert to polar coordinates using the substitutions x = r\cos\theta, y = r\sin\theta and dx\,dy = r\,dr\,d\theta. (The integration limits below show that A is the quarter disk of radius 1 in the corner of the 1 x 2 rectangle.) This yields

P[A] = \int_0^{\pi/2} \int_0^1 r^2 \sin\theta \cos\theta \, r\,dr\,d\theta (5)
= \left[\int_0^1 r^3\,dr\right]\left[\int_0^{\pi/2} \sin\theta\cos\theta\,d\theta\right] (6)
= \left[ r^4/4 \big|_0^1 \right] \left[ \frac{\sin^2\theta}{2}\Big|_0^{\pi/2} \right] = 1/8 (7)


Quiz 4.5

By Theorem 4.8, the marginal PDF of X is

f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy (1)

For x < 0 or x > 1, f_X(x) = 0. For 0 \le x \le 1,

f_X(x) = \frac{6}{5}\int_0^1 (x + y^2)\,dy = \frac{6}{5}\left[ xy + y^3/3 \right]_{y=0}^{y=1} = \frac{6}{5}(x + 1/3) = \frac{6x + 2}{5} (2)

The complete expression for the PDF of X is

f_X(x) = \begin{cases} (6x + 2)/5 & 0 \le x \le 1 \\ 0 & \text{otherwise} \end{cases} (3)

By the same method we obtain the marginal PDF for Y. For 0 \le y \le 1,

f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx (4)
= \frac{6}{5}\int_0^1 (x + y^2)\,dx = \frac{6}{5}\left[ x^2/2 + xy^2 \right]_{x=0}^{x=1} = \frac{6}{5}(1/2 + y^2) = \frac{3 + 6y^2}{5} (5)

Since f_Y(y) = 0 for y < 0 or y > 1, the complete expression for the PDF of Y is

f_Y(y) = \begin{cases} (3 + 6y^2)/5 & 0 \le y \le 1 \\ 0 & \text{otherwise} \end{cases} (6)

Quiz 4.6

(A) The time required for the transfer is T = L/B. For each pair of values of L and B, we can calculate the time T needed for the transfer. We can write these down on the table for the joint PMF of L and B as follows:

P_{L,B}(l,b)      b = 14,400     b = 21,600     b = 28,800
l = 518,400       0.20 (T=36)    0.10 (T=24)    0.05 (T=18)
l = 2,592,000     0.05 (T=180)   0.10 (T=120)   0.20 (T=90)
l = 7,776,000     0.00 (T=540)   0.10 (T=360)   0.20 (T=270)

From the table, writing down the PMF of T is straightforward.

P_T(t) = \begin{cases} 0.05 & t = 18 \\ 0.1 & t = 24 \\ 0.2 & t = 36, 90 \\ 0.1 & t = 120 \\ 0.05 & t = 180 \\ 0.2 & t = 270 \\ 0.1 & t = 360 \\ 0 & \text{otherwise} \end{cases} (1)
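The same tabulation can be automated. The sketch below uses only base MATLAB (unique and accumarray) rather than the matcode finitepmf function, which would work equally well:

l = [518400; 2592000; 7776000];
b = [14400 21600 28800];
PLB = [0.20 0.10 0.05; 0.05 0.10 0.20; 0.00 0.10 0.20];
T = l*(1./b);                    % 3x3 matrix of transfer times t = l/b
[t,~,idx] = unique(T(:));        % distinct transfer times
pt = accumarray(idx,PLB(:));     % add probabilities of equal times
disp([t pt]);                    % the PMF P_T(t) of equation (1)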


(B) First, we observe that since 0 \le X \le 1 and 0 \le Y \le 1, W = XY satisfies 0 \le W \le 1. Thus F_W(0) = 0 and F_W(1) = 1. For 0 < w < 1, we calculate the CDF F_W(w) = P[W \le w]. Integrating over the region W \le w is fairly complex. The calculus is simpler if we integrate over the region XY > w, the set above the hyperbola XY = w in the unit square. Specifically,

F_W(w) = 1 - P[XY > w] (2)
= 1 - \int_w^1 \int_{w/x}^1 dy\,dx (3)
= 1 - \int_w^1 (1 - w/x)\,dx (4)
= 1 - \left[ x - w\ln x \right]_{x=w}^{x=1} (5)
= 1 - (1 - w + w\ln w) = w - w\ln w (6)

The complete expression for the CDF is

F_W(w) = \begin{cases} 0 & w < 0 \\ w - w\ln w & 0 \le w \le 1 \\ 1 & w > 1 \end{cases} (7)

By taking the derivative of the CDF, we find the PDF is

f_W(w) = \frac{dF_W(w)}{dw} = \begin{cases} 0 & w < 0 \\ -\ln w & 0 \le w \le 1 \\ 0 & w > 1 \end{cases} (8)

Quiz 4.7

(A) It is helpful to first make a table that includes the marginal PMFs.

P_{L,T}(l,t)   t = 40   t = 60   P_L(l)
l = 1          0.15     0.1      0.25
l = 2          0.3      0.2      0.5
l = 3          0.15     0.1      0.25
P_T(t)         0.6      0.4

(1) The expected value of L is

E[L] = 1(0.25) + 2(0.5) + 3(0.25) = 2. (1)

Since the second moment of L is

E[L^2] = 1^2(0.25) + 2^2(0.5) + 3^2(0.25) = 4.5, (2)

the variance of L is

Var[L] = E[L^2] - (E[L])^2 = 0.5. (3)

(2) The expected value of T is

E[T] = 40(0.6) + 60(0.4) = 48. (4)

The second moment of T is

E[T^2] = 40^2(0.6) + 60^2(0.4) = 2400. (5)

Thus

Var[T] = E[T^2] - (E[T])^2 = 2400 - 48^2 = 96. (6)

(3) The correlation is

E[LT] = \sum_{t=40,60} \sum_{l=1}^{3} l t P_{L,T}(l,t) (7)
= 1(40)(0.15) + 2(40)(0.3) + 3(40)(0.15) (8)
+ 1(60)(0.1) + 2(60)(0.2) + 3(60)(0.1) (9)
= 96 (10)

(4) From Theorem 4.16(a), the covariance of L and T is

Cov[L, T] = E[LT] - E[L]E[T] = 96 - 2(48) = 0 (11)

(5) Since Cov[L, T] = 0, the correlation coefficient is \rho_{L,T} = 0.
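The expectations in part (A) are also easy to check numerically from the joint PMF table; the following sketch uses base MATLAB only:

PLT = [0.15 0.1; 0.3 0.2; 0.15 0.1];   % rows l=1,2,3; columns t=40,60
[L,T] = ndgrid(1:3,[40 60]);           % sample values on the same grid
EL  = sum(sum(L.*PLT));                % E[L]  = 2
ET  = sum(sum(T.*PLT));                % E[T]  = 48
ELT = sum(sum(L.*T.*PLT));             % E[LT] = 96
covLT = ELT - EL*ET                    % Cov[L,T] = 0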

(B) As in the discrete case, the calculations become easier if we first calculate the marginal PDFs f_X(x) and f_Y(y). For 0 \le x \le 1,

f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy = \int_0^2 xy\,dy = \frac{1}{2}xy^2\Big|_{y=0}^{y=2} = 2x (12)

Similarly, for 0 \le y \le 2,

f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx = \int_0^1 xy\,dx = \frac{1}{2}x^2 y\Big|_{x=0}^{x=1} = \frac{y}{2} (13)

The complete expressions for the marginal PDFs are

f_X(x) = \begin{cases} 2x & 0 \le x \le 1 \\ 0 & \text{otherwise} \end{cases} \qquad f_Y(y) = \begin{cases} y/2 & 0 \le y \le 2 \\ 0 & \text{otherwise} \end{cases} (14)

From the marginal PDFs, it is straightforward to calculate the various expectations.

(1) The first and second moments of X are

E[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx = \int_0^1 2x^2\,dx = \frac{2}{3} (15)

E[X^2] = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx = \int_0^1 2x^3\,dx = \frac{1}{2} (16)

The variance of X is Var[X] = E[X^2] - (E[X])^2 = 1/18.

(2) The first and second moments of Y are

E[Y] = \int_{-\infty}^{\infty} y f_Y(y)\,dy = \int_0^2 \frac{1}{2}y^2\,dy = \frac{4}{3} (17)

E[Y^2] = \int_{-\infty}^{\infty} y^2 f_Y(y)\,dy = \int_0^2 \frac{1}{2}y^3\,dy = 2 (18)

The variance of Y is Var[Y] = E[Y^2] - (E[Y])^2 = 2 - 16/9 = 2/9.

(3) The correlation of X and Y is

E[XY] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy f_{X,Y}(x,y)\,dx\,dy (19)
= \int_0^1 \int_0^2 x^2 y^2\,dy\,dx = \left[ \frac{x^3}{3}\Big|_0^1 \right]\left[ \frac{y^3}{3}\Big|_0^2 \right] = \frac{8}{9} (20)

(4) The covariance of X and Y is

Cov[X, Y] = E[XY] - E[X]E[Y] = \frac{8}{9} - \left(\frac{2}{3}\right)\left(\frac{4}{3}\right) = 0. (21)

(5) Since Cov[X, Y] = 0, the correlation coefficient is \rho_{X,Y} = 0.

Quiz 4.8

(A) Since the event V > 80 occurs only for the pairs (L, T) = (2, 60), (L, T) = (3, 40) and (L, T) = (3, 60),

P[A] = P[V > 80] = P_{L,T}(2,60) + P_{L,T}(3,40) + P_{L,T}(3,60) = 0.45 (1)

By Definition 4.9,

P_{L,T|A}(l,t) = \begin{cases} P_{L,T}(l,t)/P[A] & lt > 80 \\ 0 & \text{otherwise} \end{cases} (2)

We can represent this conditional PMF in the following table:

P_{L,T|A}(l,t)   t = 40   t = 60
l = 1            0        0
l = 2            0        4/9
l = 3            1/3      2/9

The conditional expectation of V can be found from the conditional PMF.

E[V|A] = \sum_l \sum_t lt P_{L,T|A}(l,t) (3)
= (2 \cdot 60)\frac{4}{9} + (3 \cdot 40)\frac{1}{3} + (3 \cdot 60)\frac{2}{9} = 133\tfrac{1}{3} (4)

For the conditional variance Var[V|A], we first find the conditional second moment

E[V^2|A] = \sum_l \sum_t (lt)^2 P_{L,T|A}(l,t) (5)
= (2 \cdot 60)^2\frac{4}{9} + (3 \cdot 40)^2\frac{1}{3} + (3 \cdot 60)^2\frac{2}{9} = 18,400 (6)

It follows that

Var[V|A] = E[V^2|A] - (E[V|A])^2 = 622\tfrac{2}{9} (7)

(B) For continuous random variables X and Y, we first calculate the probability of the conditioning event.

P[B] = \iint_B f_{X,Y}(x,y)\,dx\,dy = \int_{40}^{60}\int_{80/y}^{3} \frac{xy}{4000}\,dx\,dy (8)
= \int_{40}^{60} \frac{y}{4000}\left[ \frac{x^2}{2}\Big|_{80/y}^{3} \right] dy (9)
= \int_{40}^{60} \frac{y}{4000}\left( \frac{9}{2} - \frac{3200}{y^2} \right) dy (10)
= \frac{9}{8} - \frac{4}{5}\ln\frac{3}{2} \approx 0.801 (11)

The conditional PDF of X and Y is

f_{X,Y|B}(x,y) = \begin{cases} f_{X,Y}(x,y)/P[B] & (x,y) \in B \\ 0 & \text{otherwise} \end{cases} (12)
= \begin{cases} Kxy & 40 \le y \le 60,\ 80/y \le x \le 3 \\ 0 & \text{otherwise} \end{cases} (13)

where K = (4000 P[B])^{-1}. The conditional expectation of W given event B is

E[W|B] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy f_{X,Y|B}(x,y)\,dx\,dy (14)
= \int_{40}^{60}\int_{80/y}^{3} K x^2 y^2\,dx\,dy (15)
= (K/3)\int_{40}^{60} y^2 x^3\Big|_{x=80/y}^{x=3}\,dy (16)
= (K/3)\int_{40}^{60} \left( 27y^2 - 80^3/y \right) dy (17)
= (K/3)\left[ 9y^3 - 80^3 \ln y \right]_{40}^{60} \approx 120.78 (18)

The conditional second moment of W given B is

E[W^2|B] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (xy)^2 f_{X,Y|B}(x,y)\,dx\,dy (19)
= \int_{40}^{60}\int_{80/y}^{3} K x^3 y^3\,dx\,dy (20)
= (K/4)\int_{40}^{60} y^3 x^4\Big|_{x=80/y}^{x=3}\,dy (21)
= (K/4)\int_{40}^{60} \left( 81y^3 - 80^4/y \right) dy (22)
= (K/4)\left[ (81/4)y^4 - 80^4 \ln y \right]_{40}^{60} \approx 16,116.10 (23)

It follows that the conditional variance of W given B is

Var[W|B] = E[W^2|B] - (E[W|B])^2 \approx 1528.30 (24)

Quiz 4.9

(A) (1) The joint PMF of A and B can be found from the marginal and conditional PMFs via P_{A,B}(a,b) = P_{B|A}(b|a) P_A(a). Incorporating the information from the given conditional PMFs can be confusing, however. Consequently, we can note that A has range S_A = {0, 2} and B has range S_B = {0, 1}. A table of the joint PMF will include all four possible combinations of A and B. The general form of the table is

P_{A,B}(a,b)   b = 0                b = 1
a = 0          P_{B|A}(0|0)P_A(0)   P_{B|A}(1|0)P_A(0)
a = 2          P_{B|A}(0|2)P_A(2)   P_{B|A}(1|2)P_A(2)

Substituting values from P_{B|A}(b|a) and P_A(a), we have

P_{A,B}(a,b)   b = 0        b = 1
a = 0          (0.8)(0.4)   (0.2)(0.4)
a = 2          (0.5)(0.6)   (0.5)(0.6)

or

P_{A,B}(a,b)   b = 0   b = 1
a = 0          0.32    0.08
a = 2          0.3     0.3

(2) Given the conditional PMF P_{B|A}(b|2), it is easy to calculate the conditional expectation

E[B|A = 2] = \sum_{b=0}^{1} b P_{B|A}(b|2) = (0)(0.5) + (1)(0.5) = 0.5 (1)

(3) From the joint PMF P_{A,B}(a,b), we can calculate the conditional PMF

P_{A|B}(a|0) = \frac{P_{A,B}(a,0)}{P_B(0)} = \begin{cases} 0.32/0.62 & a = 0 \\ 0.3/0.62 & a = 2 \\ 0 & \text{otherwise} \end{cases} = \begin{cases} 16/31 & a = 0 \\ 15/31 & a = 2 \\ 0 & \text{otherwise} \end{cases} (2, 3)

(4) We can calculate the conditional variance Var[A|B = 0] using the conditional PMF P_{A|B}(a|0). First we calculate the conditional expected value

E[A|B = 0] = \sum_a a P_{A|B}(a|0) = 0(16/31) + 2(15/31) = 30/31 (4)

The conditional second moment is

E[A^2|B = 0] = \sum_a a^2 P_{A|B}(a|0) = 0^2(16/31) + 2^2(15/31) = 60/31 (5)

The conditional variance is then

Var[A|B = 0] = E[A^2|B = 0] - (E[A|B = 0])^2 = \frac{960}{961} (6)

(B) (1) The joint PDF of X and Y is

f_{X,Y}(x,y) = f_{Y|X}(y|x) f_X(x) = \begin{cases} 6y & 0 \le y \le x,\ 0 \le x \le 1 \\ 0 & \text{otherwise} \end{cases} (7)

(2) From the given conditional PDF f_{Y|X}(y|x),

f_{Y|X}(y|1/2) = \begin{cases} 8y & 0 \le y \le 1/2 \\ 0 & \text{otherwise} \end{cases} (8)

(3) The conditional PDF of X given Y = 1/2 is f_{X|Y}(x|1/2) = f_{X,Y}(x, 1/2)/f_Y(1/2). To find f_Y(1/2), we integrate the joint PDF.

f_Y(1/2) = \int_{-\infty}^{\infty} f_{X,Y}(x, 1/2)\,dx = \int_{1/2}^{1} 6(1/2)\,dx = 3/2 (9)

Thus, for 1/2 \le x \le 1,

f_{X|Y}(x|1/2) = \frac{f_{X,Y}(x, 1/2)}{f_Y(1/2)} = \frac{6(1/2)}{3/2} = 2 (10)

(4) From the previous part, we see that given Y = 1/2, the conditional PDF of X is uniform (1/2, 1). Thus, by the definition of the uniform (a, b) PDF,

Var[X|Y = 1/2] = \frac{(1 - 1/2)^2}{12} = \frac{1}{48} (11)

Quiz 4.10

(A) (1) For random variables X and Y from Example 4.1, we observe that P_Y(1) = 0.09 and P_X(0) = 0.01. However,

P_{X,Y}(0, 1) = 0 \ne P_X(0) P_Y(1) (1)

Since we have found a pair x, y such that P_{X,Y}(x,y) \ne P_X(x)P_Y(y), we can conclude that X and Y are dependent. Note that whenever P_{X,Y}(x,y) = 0, independence requires that either P_X(x) = 0 or P_Y(y) = 0.

(2) For random variables Q and G from Quiz 4.2, it is not obvious whether they are independent. Unlike X and Y in part (a), there are no obvious pairs q, g that fail the independence requirement. In this case, we calculate the marginal PMFs from the table of the joint PMF P_{Q,G}(q,g) in Quiz 4.2.

P_{Q,G}(q,g)   g = 0   g = 1   g = 2   g = 3   P_Q(q)
q = 0          0.06    0.18    0.24    0.12    0.60
q = 1          0.04    0.12    0.16    0.08    0.40
P_G(g)         0.10    0.30    0.40    0.20

Careful study of the table will verify that P_{Q,G}(q,g) = P_Q(q)P_G(g) for every pair q, g. Hence Q and G are independent.

(B) (1) Since X_1 and X_2 are independent,

f_{X_1,X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2) (2)
= \begin{cases} (1 - x_1/2)(1 - x_2/2) & 0 \le x_1 \le 2,\ 0 \le x_2 \le 2 \\ 0 & \text{otherwise} \end{cases} (3)

(2) Let F_X(x) denote the CDF of both X_1 and X_2. The CDF of Z = max(X_1, X_2) is found by observing that Z \le z iff X_1 \le z and X_2 \le z. That is,

P[Z \le z] = P[X_1 \le z, X_2 \le z] (4)
= P[X_1 \le z] P[X_2 \le z] = [F_X(z)]^2 (5)

To complete the problem, we need to find the CDF of each X_i. From the PDF f_X(x), the CDF is

F_X(x) = \int_{-\infty}^{x} f_X(y)\,dy = \begin{cases} 0 & x < 0 \\ x - x^2/4 & 0 \le x \le 2 \\ 1 & x > 2 \end{cases} (6)

Thus for 0 \le z \le 2,

F_Z(z) = (z - z^2/4)^2 (7)

The complete expression for the CDF of Z is

F_Z(z) = \begin{cases} 0 & z < 0 \\ (z - z^2/4)^2 & 0 \le z \le 2 \\ 1 & z > 2 \end{cases} (8)
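Independence of Q and G in part (A)(2) can also be checked mechanically by comparing the joint PMF with the outer product of its marginals; the sketch below uses base MATLAB:

PQG = [0.06 0.18 0.24 0.12; 0.04 0.12 0.16 0.08];
PQ = sum(PQG,2);                  % marginal PMF of Q
PG = sum(PQG,1);                  % marginal PMF of G
disp(max(max(abs(PQG - PQ*PG)))); % 0 iff P_{Q,G}(q,g) = P_Q(q)P_G(g) everywhere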

Quiz 4.11

This problem just requires identifying the various terms in Definition 4.17 and Theorem 4.29. Specifically, from the problem statement, we know that \rho = 1/2,

\mu_1 = \mu_X = 0, \qquad \mu_2 = \mu_Y = 0, (1)

and that

\sigma_1 = \sigma_X = 1, \qquad \sigma_2 = \sigma_Y = 1. (2)

(1) Applying these facts to Definition 4.17, we have

f_{X,Y}(x,y) = \frac{1}{\sqrt{3\pi^2}} e^{-2(x^2 - xy + y^2)/3}. (3)

(2) By Theorem 4.30, the conditional expected value and standard deviation of X given Y = y are

E[X|Y = y] = y/2, \qquad \tilde{\sigma}_X = \sqrt{\sigma_1^2(1 - \rho^2)} = \sqrt{3/4}. (4)

When Y = y = 2, we see that E[X|Y = 2] = 1 and Var[X|Y = 2] = 3/4. The conditional PDF of X given Y = 2 is simply the Gaussian PDF

f_{X|Y}(x|2) = \frac{1}{\sqrt{3\pi/2}} e^{-2(x-1)^2/3}. (5)


Quiz 4.12

One straightforward method is to follow the approach of Example 4.28. Instead, we use an alternate approach. First we observe that X has the discrete uniform (1, 4) PMF. Also, given X = x, Y has a discrete uniform (1, x) PMF. That is,

P_X(x) = \begin{cases} 1/4 & x = 1, 2, 3, 4, \\ 0 & \text{otherwise,} \end{cases} \qquad P_{Y|X}(y|x) = \begin{cases} 1/x & y = 1, \ldots, x \\ 0 & \text{otherwise} \end{cases} (1)

Given X = x, and an independent uniform (0, 1) random variable U, we can generate a sample value of Y with a discrete uniform (1, x) PMF via Y = \lceil xU \rceil. This observation prompts the following program:

function xy=dtrianglerv(m)
sx=[1;2;3;4];
px=0.25*ones(4,1);
x=finiterv(sx,px,m);
y=ceil(x.*rand(m,1));
xy=[x';y'];
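As a usage sketch, the empirical joint relative frequencies from dtrianglerv can be compared against P_{X,Y}(x,y) = P_{Y|X}(y|x)P_X(x) = 1/(4x) for y \le x. The check below assumes the matcode function finiterv is on the path and otherwise uses only base MATLAB:

xy = dtrianglerv(10000);                 % 2 x 10000 matrix of (x,y) samples
freq = accumarray(xy', 1, [4 4])/10000;  % empirical joint PMF
disp(freq);                              % entry (x,y) approx 1/(4x) for y <= x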


Quiz Solutions – Chapter 5

Quiz 5.1

We find P[C] by integrating the joint PDF over the region of interest. Specifically,

P[C] = \int_0^{1/2} dy_2 \int_0^{y_2} dy_1 \int_0^{1/2} dy_4 \int_0^{y_4} 4\,dy_3 (1)
= 4\left[\int_0^{1/2} y_2\,dy_2\right]\left[\int_0^{1/2} y_4\,dy_4\right] = 1/4. (2)

Quiz 5.2

By definition of A, Y_1 = X_1, Y_2 = X_2 - X_1 and Y_3 = X_3 - X_2. Since 0 < X_1 < X_2 < X_3, each Y_i must be a strictly positive integer. Thus, for y_1, y_2, y_3 \in \{1, 2, \ldots\},

P_Y(y) = P[Y_1 = y_1, Y_2 = y_2, Y_3 = y_3] (1)
= P[X_1 = y_1, X_2 - X_1 = y_2, X_3 - X_2 = y_3] (2)
= P[X_1 = y_1, X_2 = y_2 + y_1, X_3 = y_3 + y_2 + y_1] (3)
= (1 - p)^3 p^{y_1 + y_2 + y_3} (4)

By defining the vector a = [1\ 1\ 1]', the complete expression for the joint PMF of Y is

P_Y(y) = \begin{cases} (1 - p)^3 p^{a'y} & y_1, y_2, y_3 \in \{1, 2, \ldots\} \\ 0 & \text{otherwise} \end{cases} (5)

Quiz 5.3

First we note that each marginal PDF is nonzero only if any subset of the x_i obeys the ordering constraints 0 \le x_1 \le x_2 \le x_3 \le 1. Within these constraints, we have

f_{X_1,X_2}(x_1, x_2) = \int_{-\infty}^{\infty} f_X(x)\,dx_3 = \int_{x_2}^{1} 6\,dx_3 = 6(1 - x_2), (1)
f_{X_2,X_3}(x_2, x_3) = \int_{-\infty}^{\infty} f_X(x)\,dx_1 = \int_0^{x_2} 6\,dx_1 = 6x_2, (2)
f_{X_1,X_3}(x_1, x_3) = \int_{-\infty}^{\infty} f_X(x)\,dx_2 = \int_{x_1}^{x_3} 6\,dx_2 = 6(x_3 - x_1). (3)

In particular, we must keep in mind that f_{X_1,X_2}(x_1, x_2) = 0 unless 0 \le x_1 \le x_2 \le 1, f_{X_2,X_3}(x_2, x_3) = 0 unless 0 \le x_2 \le x_3 \le 1, and that f_{X_1,X_3}(x_1, x_3) = 0 unless 0 \le x_1 \le x_3 \le 1. The complete expressions are

f_{X_1,X_2}(x_1, x_2) = \begin{cases} 6(1 - x_2) & 0 \le x_1 \le x_2 \le 1 \\ 0 & \text{otherwise} \end{cases} (4)

f_{X_2,X_3}(x_2, x_3) = \begin{cases} 6x_2 & 0 \le x_2 \le x_3 \le 1 \\ 0 & \text{otherwise} \end{cases} (5)

f_{X_1,X_3}(x_1, x_3) = \begin{cases} 6(x_3 - x_1) & 0 \le x_1 \le x_3 \le 1 \\ 0 & \text{otherwise} \end{cases} (6)

Now we can find the marginal PDFs. When 0 \le x_i \le 1 for each x_i,

f_{X_1}(x_1) = \int_{-\infty}^{\infty} f_{X_1,X_2}(x_1, x_2)\,dx_2 = \int_{x_1}^{1} 6(1 - x_2)\,dx_2 = 3(1 - x_1)^2 (7)
f_{X_2}(x_2) = \int_{-\infty}^{\infty} f_{X_2,X_3}(x_2, x_3)\,dx_3 = \int_{x_2}^{1} 6x_2\,dx_3 = 6x_2(1 - x_2) (8)
f_{X_3}(x_3) = \int_{-\infty}^{\infty} f_{X_2,X_3}(x_2, x_3)\,dx_2 = \int_0^{x_3} 6x_2\,dx_2 = 3x_3^2 (9)

The complete expressions are

f_{X_1}(x_1) = \begin{cases} 3(1 - x_1)^2 & 0 \le x_1 \le 1 \\ 0 & \text{otherwise} \end{cases} (10)

f_{X_2}(x_2) = \begin{cases} 6x_2(1 - x_2) & 0 \le x_2 \le 1 \\ 0 & \text{otherwise} \end{cases} (11)

f_{X_3}(x_3) = \begin{cases} 3x_3^2 & 0 \le x_3 \le 1 \\ 0 & \text{otherwise} \end{cases} (12)

Quiz 5.4

In the PDF f_Y(y), the components have dependencies as a result of the ordering constraints Y_1 \le Y_2 and Y_3 \le Y_4. We can separate these constraints by creating the vectors

V = \begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix}, \qquad W = \begin{bmatrix} Y_3 \\ Y_4 \end{bmatrix}. (1)

The joint PDF of V and W is

f_{V,W}(v, w) = \begin{cases} 4 & 0 \le v_1 \le v_2 \le 1,\ 0 \le w_1 \le w_2 \le 1 \\ 0 & \text{otherwise} \end{cases} (2)

We must verify that V and W are independent. For 0 \le v_1 \le v_2 \le 1,

f_V(v) = \iint f_{V,W}(v, w)\,dw_1\,dw_2 (3)
= \int_0^1 \left[ \int_{w_1}^{1} 4\,dw_2 \right] dw_1 (4)
= \int_0^1 4(1 - w_1)\,dw_1 = 2 (5)

Similarly, for 0 \le w_1 \le w_2 \le 1,

f_W(w) = \iint f_{V,W}(v, w)\,dv_1\,dv_2 (6)
= \int_0^1 \left[ \int_{v_1}^{1} 4\,dv_2 \right] dv_1 = 2 (7)

It follows that V and W have PDFs

f_V(v) = \begin{cases} 2 & 0 \le v_1 \le v_2 \le 1 \\ 0 & \text{otherwise} \end{cases} \qquad f_W(w) = \begin{cases} 2 & 0 \le w_1 \le w_2 \le 1 \\ 0 & \text{otherwise} \end{cases} (8)

It is easy to verify that f_{V,W}(v, w) = f_V(v) f_W(w), confirming that V and W are independent vectors.

Quiz 5.5

(A) Referring to Theorem 1.19, each test is a subexperiment with three possible outcomes: L, A and R. In five trials, the vector X = [X_1\ X_2\ X_3]' indicating the number of outcomes of each subexperiment has the multinomial PMF

P_X(x) = \begin{cases} \binom{5}{x_1, x_2, x_3} (0.3)^{x_1}(0.6)^{x_2}(0.1)^{x_3} & x_1 + x_2 + x_3 = 5;\ x_1, x_2, x_3 \in \{0, 1, \ldots, 5\} \\ 0 & \text{otherwise} \end{cases} (1)

We can find the marginal PMF for each X_i from the joint PMF P_X(x); however it is simpler to just start from first principles and observe that X_1 is the number of occurrences of L in five independent tests. If we view each test as a trial with success probability P[L] = 0.3, we see that X_1 is a binomial (n, p) = (5, 0.3) random variable. Similarly, X_2 is a binomial (5, 0.6) random variable and X_3 is a binomial (5, 0.1) random variable. That is, for p_1 = 0.3, p_2 = 0.6 and p_3 = 0.1,

P_{X_i}(x) = \begin{cases} \binom{5}{x} p_i^x (1 - p_i)^{5-x} & x = 0, 1, \ldots, 5 \\ 0 & \text{otherwise} \end{cases} (2)

From the marginal PMFs, we see that X_1, X_2 and X_3 are not independent. Hence, we must use Theorem 5.6 to find the PMF of W. In particular, since X_1 + X_2 + X_3 = 5 and since each X_i is non-negative, P_W(0) = P_W(1) = 0. Furthermore,

P_W(2) = P_X(1, 2, 2) + P_X(2, 1, 2) + P_X(2, 2, 1) (3)
= \frac{5!\left[ 0.3(0.6)^2(0.1)^2 + 0.3^2(0.6)(0.1)^2 + 0.3^2(0.6)^2(0.1) \right]}{2!\,2!\,1!} (4)
= 0.1458 (5)

In addition, for w = 3, w = 4, and w = 5, the event W = w occurs if and only if one of the mutually exclusive events X_1 = w, X_2 = w, or X_3 = w occurs. Thus,

P_W(3) = P_{X_1}(3) + P_{X_2}(3) + P_{X_3}(3) = 0.486 (6)
P_W(4) = P_{X_1}(4) + P_{X_2}(4) + P_{X_3}(4) = 0.288 (7)
P_W(5) = P_{X_1}(5) + P_{X_2}(5) + P_{X_3}(5) = 0.0802 (8)

(B) Since each Y_i = 2X_i + 4, we can apply Theorem 5.10 to write

f_Y(y) = \frac{1}{2^3} f_X\!\left( \frac{y_1 - 4}{2}, \frac{y_2 - 4}{2}, \frac{y_3 - 4}{2} \right) (9)
= \begin{cases} (1/8)e^{-(y_3 - 4)/2} & 4 \le y_1 \le y_2 \le y_3 \\ 0 & \text{otherwise} \end{cases} (10)

Note that for other matrices A, the constraints on y resulting from the constraints 0 \le X_1 \le X_2 \le X_3 can be much more complicated.

Quiz 5.6

We start by finding the components E[X_i] = \int_{-\infty}^{\infty} x f_{X_i}(x)\,dx of \mu_X. To do so, we use the marginal PDFs f_{X_i}(x) found in Quiz 5.3:

E[X_1] = \int_0^1 3x(1 - x)^2\,dx = 1/4, (1)
E[X_2] = \int_0^1 6x^2(1 - x)\,dx = 1/2, (2)
E[X_3] = \int_0^1 3x^3\,dx = 3/4. (3)

To find the correlation matrix R_X, we need to find E[X_i X_j] for all i and j. We start with the second moments:

E[X_1^2] = \int_0^1 3x^2(1 - x)^2\,dx = 1/10. (4)
E[X_2^2] = \int_0^1 6x^3(1 - x)\,dx = 3/10. (5)
E[X_3^2] = \int_0^1 3x^4\,dx = 3/5. (6)

Using the pairwise marginal PDFs from Quiz 5.3, the cross terms are

E[X_1 X_2] = \iint x_1 x_2 f_{X_1,X_2}(x_1, x_2)\,dx_1\,dx_2 (7)
= \int_0^1 \left[ \int_{x_1}^{1} 6x_1 x_2(1 - x_2)\,dx_2 \right] dx_1 (8)
= \int_0^1 \left[ x_1 - 3x_1^3 + 2x_1^4 \right] dx_1 = 3/20. (9)

E[X_2 X_3] = \int_0^1 \int_{x_2}^{1} 6x_2^2 x_3\,dx_3\,dx_2 (10)
= \int_0^1 \left[ 3x_2^2 - 3x_2^4 \right] dx_2 = 2/5 (11)

E[X_1 X_3] = \int_0^1 \int_{x_1}^{1} 6x_1 x_3(x_3 - x_1)\,dx_3\,dx_1 (12)
= \int_0^1 \left[ (2x_1 x_3^3 - 3x_1^2 x_3^2)\Big|_{x_3 = x_1}^{x_3 = 1} \right] dx_1 (13)
= \int_0^1 \left[ 2x_1 - 3x_1^2 + x_1^4 \right] dx_1 = 1/5. (14)

Summarizing the results, X has correlation matrix

R_X = \begin{bmatrix} 1/10 & 3/20 & 1/5 \\ 3/20 & 3/10 & 2/5 \\ 1/5 & 2/5 & 3/5 \end{bmatrix}. (15)

Vector X has covariance matrix

C_X = R_X - E[X]E[X]' (16)
= \begin{bmatrix} 1/10 & 3/20 & 1/5 \\ 3/20 & 3/10 & 2/5 \\ 1/5 & 2/5 & 3/5 \end{bmatrix} - \begin{bmatrix} 1/4 \\ 1/2 \\ 3/4 \end{bmatrix} \begin{bmatrix} 1/4 & 1/2 & 3/4 \end{bmatrix} (17)
= \begin{bmatrix} 1/10 & 3/20 & 1/5 \\ 3/20 & 3/10 & 2/5 \\ 1/5 & 2/5 & 3/5 \end{bmatrix} - \begin{bmatrix} 1/16 & 1/8 & 3/16 \\ 1/8 & 1/4 & 3/8 \\ 3/16 & 3/8 & 9/16 \end{bmatrix} = \frac{1}{80}\begin{bmatrix} 3 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 3 \end{bmatrix}. (18)

This problem shows that even for fairly simple joint PDFs, computing the covariance matrix by calculus can be a time consuming task.
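Since f_X(x) = 6 on 0 \le x_1 \le x_2 \le x_3 \le 1 is also the joint PDF of the ordered values of three independent uniform (0, 1) random variables, C_X can be spot-checked by simulation. This is a base-MATLAB sketch under that observation:

n = 100000;
X = sort(rand(3,n),1);   % each column: an ordered triple with PDF 6 on the region
disp(80*cov(X'));        % should be close to [3 2 1; 2 4 2; 1 2 3]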

Quiz 5.7

We observe that X = AZ + b where

A = \begin{bmatrix} 2 & 1 \\ 1 & -1 \end{bmatrix}, \qquad b = \begin{bmatrix} 2 \\ 0 \end{bmatrix}. (1)

It follows from Theorem 5.18 that \mu_X = b and that

C_X = AA' = \begin{bmatrix} 2 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 1 & -1 \end{bmatrix}' = \begin{bmatrix} 5 & 1 \\ 1 & 2 \end{bmatrix}. (2)
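A two-line simulation confirms this; the sketch assumes Z is a pair of independent Gaussian (0, 1) components, as in Theorem 5.18:

A = [2 1; 1 -1]; b = [2; 0];
Z = randn(2,100000);          % iid Gaussian (0,1) components
X = A*Z + b*ones(1,100000);   % samples of X = AZ + b
disp(mean(X,2));              % approx mu_X = b = [2; 0]
disp(cov(X'));                % approx C_X = A*A' = [5 1; 1 2]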

Quiz 5.8

First, we observe that Y = AT where A = [1/31\ 1/31\ \cdots\ 1/31]. Since T is a Gaussian random vector, Theorem 5.16 tells us that Y is a 1 dimensional Gaussian vector, i.e., just a Gaussian random variable. The expected value of Y is \mu_Y = \mu_T = 80. The covariance matrix of Y is 1 x 1 and is just equal to Var[Y]. Thus, by Theorem 5.16, Var[Y] = A C_T A'.

function p=julytemps(T);
[D1 D2]=ndgrid((1:31),(1:31));
CT=36./(1+abs(D1-D2));
A=ones(31,1)/31.0;
CY=(A')*CT*A;
p=phi((T-80)/sqrt(CY));

In julytemps.m, the first two lines generate the 31 x 31 covariance matrix CT, or C_T. Next we calculate Var[Y]. The final step is to use the \Phi(\cdot) function to calculate P[Y < T].

Here is the output of julytemps.m:

>> julytemps([70 75 80 85 90 95])
ans =
0.0000 0.0221 0.5000 0.9779 1.0000 1.0000

Note that P[T \le 70] is not actually zero and that P[T \le 90] is not actually 1.0000. It's just that MATLAB's short format output, invoked with the command format short, rounds off those probabilities. Here is the long format output:

>> format long
>> julytemps([70 75 80 85 90 95])
ans =
Columns 1 through 4
0.00002844263128 0.02207383067604 0.50000000000000 0.97792616932396
Columns 5 through 6
0.99997155736872 0.99999999922010

The ndgrid function is a useful way to calculate many covariance matrices. However, in this problem, C_T has a special structure; the i, j th element is

C_T(i, j) = c_{|i-j|} = \frac{36}{1 + |i - j|}. (1)

If we write out the elements of the covariance matrix, we see that

C_T = \begin{bmatrix} c_0 & c_1 & \cdots & c_{30} \\ c_1 & c_0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & c_1 \\ c_{30} & \cdots & c_1 & c_0 \end{bmatrix}. (2)

This covariance matrix is known as a symmetric Toeplitz matrix. We will see in Chapters 9 and 11 that Toeplitz covariance matrices are quite common. In fact, MATLAB has a toeplitz function for generating them. The function julytemps2 uses toeplitz to generate the covariance matrix C_T.

function p=julytemps2(T);
c=36./(1+abs(0:30));
CT=toeplitz(c);
A=ones(31,1)/31.0;
CY=(A')*CT*A;
p=phi((T-80)/sqrt(CY));


Quiz Solutions – Chapter 6

Quiz 6.1

Let K_1, \ldots, K_n denote a sequence of iid random variables each with PMF

P_K(k) = \begin{cases} 1/4 & k = 1, \ldots, 4 \\ 0 & \text{otherwise} \end{cases} (1)

We can write W_n in the form W_n = K_1 + \cdots + K_n. First, we note that the first two moments of K_i are

E[K_i] = (1 + 2 + 3 + 4)/4 = 2.5 (2)
E[K_i^2] = (1^2 + 2^2 + 3^2 + 4^2)/4 = 7.5 (3)

Thus the variance of K_i is

Var[K_i] = E[K_i^2] - (E[K_i])^2 = 7.5 - (2.5)^2 = 1.25 (4)

Since E[K_i] = 2.5, the expected value of W_n is

E[W_n] = E[K_1] + \cdots + E[K_n] = nE[K_i] = 2.5n (5)

Since the rolls are independent, the random variables K_1, \ldots, K_n are independent. Hence, by Theorem 6.3, the variance of the sum equals the sum of the variances. That is,

Var[W_n] = Var[K_1] + \cdots + Var[K_n] = 1.25n (6)

Quiz 6.2

Random variables X and Y have PDFs

f_X(x) = \begin{cases} 3e^{-3x} & x \ge 0 \\ 0 & \text{otherwise} \end{cases} \qquad f_Y(y) = \begin{cases} 2e^{-2y} & y \ge 0 \\ 0 & \text{otherwise} \end{cases} (1)

Since X and Y are nonnegative, W = X + Y is nonnegative. By Theorem 6.5, the PDF of W = X + Y is

f_W(w) = \int_{-\infty}^{\infty} f_X(w - y) f_Y(y)\,dy = 6\int_0^w e^{-3(w-y)} e^{-2y}\,dy (2)

Fortunately, this integral is easy to evaluate. For w > 0,

f_W(w) = 6e^{-3w} e^{y}\big|_0^w = 6\left( e^{-2w} - e^{-3w} \right) (3)

Since f_W(w) = 0 for w < 0, a complete expression for the PDF of W is

f_W(w) = \begin{cases} 6e^{-2w}(1 - e^{-w}) & w \ge 0, \\ 0 & \text{otherwise.} \end{cases} (4)


Quiz 6.3

The MGF of K is

\phi_K(s) = E[e^{sK}] = \sum_{k=0}^{4} (0.2)e^{sk} = 0.2\left( 1 + e^s + e^{2s} + e^{3s} + e^{4s} \right) (1)

We find the moments by taking derivatives. The first derivative of \phi_K(s) is

\frac{d\phi_K(s)}{ds} = 0.2(e^s + 2e^{2s} + 3e^{3s} + 4e^{4s}) (2)

Evaluating the derivative at s = 0 yields

E[K] = \frac{d\phi_K(s)}{ds}\Big|_{s=0} = 0.2(1 + 2 + 3 + 4) = 2 (3)

To find higher-order moments, we continue to take derivatives:

E[K^2] = \frac{d^2\phi_K(s)}{ds^2}\Big|_{s=0} = 0.2(e^s + 4e^{2s} + 9e^{3s} + 16e^{4s})\Big|_{s=0} = 6 (4)

E[K^3] = \frac{d^3\phi_K(s)}{ds^3}\Big|_{s=0} = 0.2(e^s + 8e^{2s} + 27e^{3s} + 64e^{4s})\Big|_{s=0} = 20 (5)

E[K^4] = \frac{d^4\phi_K(s)}{ds^4}\Big|_{s=0} = 0.2(e^s + 16e^{2s} + 81e^{3s} + 256e^{4s})\Big|_{s=0} = 70.8 (6)
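Because K is a simple discrete random variable, the derivative calculations are easy to confirm directly from the PMF; this is a base-MATLAB sketch:

k = 0:4; pk = 0.2*ones(1,5);   % PMF of K
for n = 1:4
    disp(sum((k.^n).*pk));     % E[K^n]: 2, 6, 20, 70.8
end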

Quiz 6.4

(A) Each K_i has MGF

\phi_K(s) = E[e^{sK_i}] = \frac{e^s + e^{2s} + \cdots + e^{ns}}{n} = \frac{e^s(1 - e^{ns})}{n(1 - e^s)} (1)

Since the sequence of K_i is independent, Theorem 6.8 says the MGF of J is

\phi_J(s) = (\phi_K(s))^m = \frac{e^{ms}(1 - e^{ns})^m}{n^m (1 - e^s)^m} (2)

(B) Since the set of \alpha^j X_j are independent Gaussian random variables, Theorem 6.10 says that W is a Gaussian random variable. Thus to find the PDF of W, we need only find the expected value and variance. Since the expectation of the sum equals the sum of the expectations:

E[W] = \alpha E[X_1] + \alpha^2 E[X_2] + \cdots + \alpha^n E[X_n] = 0 (3)

Since the \alpha^j X_j are independent, the variance of the sum equals the sum of the variances:

Var[W] = \alpha^2 Var[X_1] + \alpha^4 Var[X_2] + \cdots + \alpha^{2n} Var[X_n] (4)
= \alpha^2 + 2(\alpha^2)^2 + 3(\alpha^2)^3 + \cdots + n(\alpha^2)^n (5)

Defining q = \alpha^2, we can use Math Fact B.6 to write

Var[W] = \frac{\alpha^2 - \alpha^{2n+2}[1 + n(1 - \alpha^2)]}{(1 - \alpha^2)^2} (6)

With E[W] = 0 and \sigma_W^2 = Var[W], we can write the PDF of W as

f_W(w) = \frac{1}{\sqrt{2\pi\sigma_W^2}} e^{-w^2/2\sigma_W^2} (7)

Quiz 6.5

(1) From Table 6.1, each X_i has MGF \phi_X(s) and random variable N has MGF \phi_N(s) where

\phi_X(s) = \frac{1}{1 - s}, \qquad \phi_N(s) = \frac{(1/5)e^s}{1 - (4/5)e^s}. (1)

From Theorem 6.12, R has MGF

\phi_R(s) = \phi_N(\ln \phi_X(s)) = \frac{(1/5)\phi_X(s)}{1 - (4/5)\phi_X(s)} (2)

Substituting the expression for \phi_X(s) yields

\phi_R(s) = \frac{1/5}{1/5 - s}. (3)

(2) From Table 6.1, we see that R has the MGF of an exponential (1/5) random variable. The corresponding PDF is

f_R(r) = \begin{cases} (1/5)e^{-r/5} & r \ge 0 \\ 0 & \text{otherwise} \end{cases} (4)

This quiz is an example of the general result that a geometric sum of exponential random variables is an exponential random variable.


Quiz 6.6

(1) The expected access time is

E[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx = \int_0^{12} \frac{x}{12}\,dx = 6 msec (1)

(2) The second moment of the access time is

E[X^2] = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx = \int_0^{12} \frac{x^2}{12}\,dx = 48 (2)

The variance of the access time is Var[X] = E[X^2] - (E[X])^2 = 48 - 36 = 12.

(3) Using X_i to denote the access time of block i, we can write

A = X_1 + X_2 + \cdots + X_{12} (3)

Since the expectation of the sum equals the sum of the expectations,

E[A] = E[X_1] + \cdots + E[X_{12}] = 12E[X] = 72 msec (4)

(4) Since the X_i are independent,

Var[A] = Var[X_1] + \cdots + Var[X_{12}] = 12 Var[X] = 144 (5)

Hence, the standard deviation of A is \sigma_A = 12.

(5) To use the central limit theorem, we write

P[A > 75] = 1 - P[A \le 75] (6)
= 1 - P\left[ \frac{A - E[A]}{\sigma_A} \le \frac{75 - E[A]}{\sigma_A} \right] (7)
\approx 1 - \Phi\left( \frac{75 - 72}{12} \right) (8)
= 1 - 0.5987 = 0.4013 (9)

Note that we used Table 3.1 to look up \Phi(0.25).

(6) Once again, we use the central limit theorem and Table 3.1 to estimate

P[A < 48] = P\left[ \frac{A - E[A]}{\sigma_A} < \frac{48 - E[A]}{\sigma_A} \right] (10)
\approx \Phi\left( \frac{48 - 72}{12} \right) (11)
= 1 - \Phi(2) = 1 - 0.9773 = 0.0227 (12)
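The CLT approximations in parts (5) and (6) can be compared against a direct simulation of A as the sum of 12 independent uniform (0, 12) access times; this sketch uses base MATLAB:

m = 100000;
A = sum(12*rand(12,m),1);   % m samples of A = X_1 + ... + X_12
disp(mean(A > 75));         % compare with the CLT estimate 0.4013
disp(mean(A < 48));         % compare with the CLT estimate 0.0227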


Quiz 6.7

Random variable K_n has a binomial distribution for n trials and success probability P[V] = 3/4.

(1) The expected number of voice calls out of 48 calls is E[K_{48}] = 48P[V] = 36.

(2) The variance of K_{48} is

Var[K_{48}] = 48P[V](1 - P[V]) = 48(3/4)(1/4) = 9 (1)

Thus K_{48} has standard deviation \sigma_{K_{48}} = 3.

(3) Using the ordinary central limit theorem and Table 3.1 yields

P[30 \le K_{48} \le 42] \approx \Phi\left( \frac{42 - 36}{3} \right) - \Phi\left( \frac{30 - 36}{3} \right) = \Phi(2) - \Phi(-2) (2)

Recalling that \Phi(-x) = 1 - \Phi(x), we have

P[30 \le K_{48} \le 42] \approx 2\Phi(2) - 1 = 0.9545 (3)

(4) Since K_{48} is a discrete random variable, we can use the De Moivre-Laplace approximation to estimate

P[30 \le K_{48} \le 42] \approx \Phi\left( \frac{42 + 0.5 - 36}{3} \right) - \Phi\left( \frac{30 - 0.5 - 36}{3} \right) (4)
= 2\Phi(2.16666) - 1 = 0.9687 (5)

Quiz 6.8

The train interarrival times X_1, X_2, X_3 are iid exponential (\lambda) random variables. The arrival time of the third train is

W = X_1 + X_2 + X_3. (1)

In Theorem 6.11, we found that the sum of three iid exponential (\lambda) random variables is an Erlang (n = 3, \lambda) random variable. From Appendix A, we find that W has expected value and variance

E[W] = 3/\lambda = 6, \qquad Var[W] = 3/\lambda^2 = 12 (2)

(1) By the Central Limit Theorem,

P[W > 20] = P\left[ \frac{W - 6}{\sqrt{12}} > \frac{20 - 6}{\sqrt{12}} \right] \approx Q(7/\sqrt{3}) = 2.66 \times 10^{-5} (3)


(2) To use the Chernoff bound, we note that the MGF of W is

\phi_W(s) = \left( \frac{\lambda}{\lambda - s} \right)^3 = \frac{1}{(1 - 2s)^3} (4)

The Chernoff bound states that

P[W > 20] \le \min_{s \ge 0} e^{-20s}\phi_W(s) = \min_{s \ge 0} \frac{e^{-20s}}{(1 - 2s)^3} (5)

To minimize h(s) = e^{-20s}/(1 - 2s)^3, we set the derivative of h(s) to zero:

\frac{dh(s)}{ds} = \frac{-20(1 - 2s)^3 e^{-20s} + 6e^{-20s}(1 - 2s)^2}{(1 - 2s)^6} = 0 (6)

This implies 20(1 - 2s) = 6 or s = 7/20. Applying s = 7/20 to the Chernoff bound yields

P[W > 20] \le \frac{e^{-20s}}{(1 - 2s)^3}\Big|_{s = 7/20} = (10/3)^3 e^{-7} = 0.0338 (7)

(3) Theorem 3.11 says that for any w > 0, the CDF of the Erlang (\lambda, 3) random variable W satisfies

F_W(w) = 1 - \sum_{k=0}^{2} \frac{(\lambda w)^k e^{-\lambda w}}{k!} (8)

Equivalently, for \lambda = 1/2 and w = 20,

P[W > 20] = 1 - F_W(20) (9)
= e^{-10}\left( 1 + \frac{10}{1!} + \frac{10^2}{2!} \right) = 61e^{-10} = 0.0028 (10)

Although the Chernoff bound is relatively weak in that it overestimates the probability by roughly a factor of 12, it is a valid bound. By contrast, the Central Limit Theorem approximation grossly underestimates the true probability.
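All three numbers line up with a one-line exact computation and a quick simulation; the sketch below uses base MATLAB, generating exponential (1/2) samples by inverse transform:

lam = 1/2; w = 20;
exact = exp(-lam*w)*sum((lam*w).^(0:2)./factorial(0:2))  % 61*exp(-10), approx 0.0028
bound = (10/3)^3*exp(-7)                                 % Chernoff bound, 0.0338
sim = mean(sum(-2*log(rand(3,100000)),1) > 20)           % -2 ln U is exponential (1/2)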

Quiz 6.9

One solution to this problem is to follow the approach of Example 6.19:

%unifbinom100.m
sx=0:100;sy=0:100;
px=binomialpmf(100,0.5,sx); py=duniformpmf(0,100,sy);
[SX,SY]=ndgrid(sx,sy); [PX,PY]=ndgrid(px,py);
SW=SX+SY; PW=PX.*PY;
sw=unique(SW); pw=finitepmf(SW,PW,sw);
pmfplot(sw,pw,'\itw','\itP_W(w)');

A graph of the PMF P_W(w) appears in Figure 2. With some thought, it should be apparent that the finitepmf function is implementing the convolution of the two PMFs.


Figure 2: From Quiz 6.9, the PMF P_W(w) of the independent sum of a binomial (100, 0.5) random variable and a discrete uniform (0, 100) random variable. (The plot shows P_W(w) for 0 \le w \le 200, with probabilities up to about 0.01.)


Quiz Solutions – Chapter 7

Quiz 7.1

An exponential random variable with expected value 1 also has variance 1. By Theorem 7.1, M_n(X) has variance Var[M_n(X)] = 1/n. Hence, we need n = 100 samples.

Quiz 7.2

The arrival time of the third elevator is W = X_1 + X_2 + X_3. Since each X_i is uniform (0, 30),

E[X_i] = 15, \qquad Var[X_i] = \frac{(30 - 0)^2}{12} = 75. (1)

Thus E[W] = 3E[X_i] = 45, and Var[W] = 3 Var[X_i] = 225.

(1) By the Markov inequality,

P[W > 75] \le \frac{E[W]}{75} = \frac{45}{75} = \frac{3}{5} (2)

(2) By the Chebyshev inequality,

P[W > 75] = P[W - E[W] > 30] (3)
\le P[|W - E[W]| > 30] \le \frac{Var[W]}{30^2} = \frac{225}{900} = \frac{1}{4} (4)

Quiz 7.3

Define the random variable W = (X - \mu_X)^2. Observe that V_{100}(X) = M_{100}(W). By Theorem 7.6, the mean square error is

E\left[ (M_{100}(W) - \mu_W)^2 \right] = \frac{Var[W]}{100} (1)

Observe that \mu_X = 0 so that W = X^2. Thus,

\mu_W = E[X^2] = \int_{-1}^{1} x^2 f_X(x)\,dx = 1/3 (2)
E[W^2] = E[X^4] = \int_{-1}^{1} x^4 f_X(x)\,dx = 1/5 (3)

Therefore Var[W] = E[W^2] - \mu_W^2 = 1/5 - (1/3)^2 = 4/45 and the mean square error is 4/4500 = 0.000889.


Quiz 7.4

Assuming the number n of samples is large, we can use a Gaussian approximation for M_n(X). Since E[X] = p and Var[X] = p(1 - p), we apply Theorem 7.13 which says that the interval estimate

M_n(X) - c \le p \le M_n(X) + c (1)

has confidence coefficient 1 - \alpha where

\alpha = 2 - 2\Phi\left( \frac{c\sqrt{n}}{p(1 - p)} \right). (2)

We must ensure for every value of p that 1 - \alpha \ge 0.9 or \alpha \le 0.1. Equivalently, we must have

\Phi\left( \frac{c\sqrt{n}}{p(1 - p)} \right) \ge 0.95 (3)

for every value of p. Since \Phi(x) is an increasing function of x, we must satisfy c\sqrt{n} \ge 1.65 p(1 - p). Since p(1 - p) \le 1/4 for all p, we require that

c \ge \frac{1.65}{4\sqrt{n}} = \frac{0.41}{\sqrt{n}}. (4)

The 0.9 confidence interval estimate of p is

M_n(X) - \frac{0.41}{\sqrt{n}} \le p \le M_n(X) + \frac{0.41}{\sqrt{n}}. (5)

For the 0.99 confidence interval, we have \alpha \le 0.01, implying \Phi(c\sqrt{n}/(p(1 - p))) \ge 0.995. This implies c\sqrt{n} \ge 2.58 p(1 - p). Since p(1 - p) \le 1/4 for all p, we require that c \ge (0.25)(2.58)/\sqrt{n}. In this case, the 0.99 confidence interval estimate is

M_n(X) - \frac{0.645}{\sqrt{n}} \le p \le M_n(X) + \frac{0.645}{\sqrt{n}}. (6)

Note that if M_{100}(X) = 0.4, then the 0.99 confidence interval estimate is

0.3355 \le p \le 0.4645. (7)

The interval is wide because the 0.99 confidence is high.

Quiz 7.5

Following the approach of bernoullitraces.m, we generate m = 1000 sample paths, each sample path having n = 100 Bernoulli traces. At time k, OK(k) counts the fraction of sample paths that have sample mean within one standard error of p. The program bernoullisample.m graphs the number of traces within one standard error as a function of the time, i.e., the number of trials in each trace.

function OK=bernoullisample(n,m,p);
x=reshape(bernoullirv(p,m*n),n,m);
nn=(1:n)'*ones(1,m);
MN=cumsum(x)./nn;
stderr=sqrt(p*(1-p))./sqrt((1:n)');
stderrmat=stderr*ones(1,m);
OK=sum(abs(MN-p)<stderrmat,2)/m;
plot(1:n,OK,'-s');

The following graph was generated by bernoullisample(100,5000,0.5). (The plot shows OK(k) fluctuating in a sawtooth between roughly 0.4 and 1 before settling near 0.68.)

As we would expect, as m gets large, the fraction of traces within one standard error approaches 2\Phi(1) - 1 \approx 0.68. The unusual sawtooth pattern, though perhaps unexpected, is examined in Problem 7.5.2.


Quiz Solutions – Chapter 8

Quiz 8.1

From the problem statement, each X_i has PDF and CDF

f_{X_i}(x) = \begin{cases} e^{-x} & x \ge 0 \\ 0 & \text{otherwise} \end{cases} \qquad F_{X_i}(x) = \begin{cases} 0 & x < 0 \\ 1 - e^{-x} & x \ge 0 \end{cases} (1)

Hence, the CDF of the maximum of X_1, \ldots, X_{15} obeys

F_X(x) = P[X \le x] = P[X_1 \le x, X_2 \le x, \cdots, X_{15} \le x] = \left[ P[X_i \le x] \right]^{15}. (2)

This implies that for x \ge 0,

F_X(x) = \left[ F_{X_i}(x) \right]^{15} = \left( 1 - e^{-x} \right)^{15} (3)

To design a significance test, we must choose a rejection region for X. A reasonable choice is to reject the hypothesis if X is too small. That is, let R = {X \le r}. For a significance level of \alpha = 0.01, we obtain

\alpha = P[X \le r] = (1 - e^{-r})^{15} = 0.01 (4)

It is straightforward to show that

r = -\ln\left( 1 - (0.01)^{1/15} \right) = 1.33 (5)

Hence, if we observe X < 1.33, then we reject the hypothesis.
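The threshold in equation (5) is a one-line computation; as a sketch in base MATLAB:

r = -log(1 - 0.01^(1/15))   % rejection threshold, approx 1.3303
alpha = (1 - exp(-r))^15    % recovers the significance level 0.01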

Quiz 8.2

From the problem statement, the conditional PMFs of K are

P_{K|H_0}(k) = \begin{cases} \dfrac{10^{4k} e^{-10^4}}{k!} & k = 0, 1, \ldots \\ 0 & \text{otherwise} \end{cases} (1)

P_{K|H_1}(k) = \begin{cases} \dfrac{10^{6k} e^{-10^6}}{k!} & k = 0, 1, \ldots \\ 0 & \text{otherwise} \end{cases} (2)

Since the two hypotheses are equally likely, the MAP and ML tests are the same. From Theorem 8.6, the ML hypothesis rule is

k \in A_0 \text{ if } P_{K|H_0}(k) \ge P_{K|H_1}(k); \quad k \in A_1 \text{ otherwise.} (3)

This rule simplifies to

k \in A_0 \text{ if } k \le k^* = \frac{10^6 - 10^4}{\ln 100} = 214{,}975.7; \quad k \in A_1 \text{ otherwise.} (4)

Thus if we observe at least 214,976 photons, then we accept hypothesis H_1.


Quiz 8.3

For the QPSK system, a symbol error occurs when s_i is transmitted but (X_1, X_2) \in A_j for some j \ne i. For a QPSK system, it is easier to calculate the probability of a correct decision. Given H_0, the conditional probability of a correct decision is

P[C|H_0] = P[X_1 > 0, X_2 > 0|H_0] = P\left[ \sqrt{E/2} + N_1 > 0, \sqrt{E/2} + N_2 > 0 \right] (1)

Because of the symmetry of the signals, P[C|H_0] = P[C|H_i] for all i. This implies the probability of a correct decision is P[C] = P[C|H_0]. Since N_1 and N_2 are iid Gaussian (0, \sigma) random variables, we have

P[C] = P[C|H_0] = P\left[ \sqrt{E/2} + N_1 > 0 \right] P\left[ \sqrt{E/2} + N_2 > 0 \right] (2)
= \left( P\left[ N_1 > -\sqrt{E/2} \right] \right)^2 (3)
= \left( 1 - \Phi\left( \frac{-\sqrt{E/2}}{\sigma} \right) \right)^2 (4)

Since \Phi(-x) = 1 - \Phi(x), we have P[C] = \Phi^2(\sqrt{E/2\sigma^2}). Equivalently, the probability of error is

P_{ERR} = 1 - P[C] = 1 - \Phi^2\left( \sqrt{\frac{E}{2\sigma^2}} \right) (5)

Quiz 8.4

To generate the ROC, the existing program sqdistor already calculates this miss probability P_MISS = P_01 and the false alarm probability P_FA = P_10. The modified program, sqdistroc.m, is essentially the same as sqdistor except the output is a matrix FM whose columns are the false alarm and miss probabilities. Next, the program sqdistrocplot.m calls sqdistroc three times to generate a plot that compares the receiver performance for the three requested values of d. Here is the modified code:

function FM=sqdistroc(v,d,m,T)
%square law distortion recvr
%P(error) for m bits tested
%transmit v volts or -v volts,
%add N volts, N is Gauss(0,1)
%add d(v+N)^2 distortion
%receive 1 if x>T, otherwise 0
%FM = [P(FA) P(MISS)]
x=(v+randn(m,1));
[XX,TT]=ndgrid(x,T(:));
P01=sum((XX+d*(XX.^2)< TT),1)/m;
x= -v+randn(m,1);
[XX,TT]=ndgrid(x,T(:));
P10=sum((XX+d*(XX.^2)>TT),1)/m;
FM=[P10(:) P01(:)];

function FM=sqdistrocplot(v,m,T);
FM1=sqdistroc(v,0.1,m,T);
FM2=sqdistroc(v,0.2,m,T);
FM5=sqdistroc(v,0.3,m,T);
FM=[FM1 FM2 FM5];
loglog(FM1(:,1),FM1(:,2),'-k', ...
FM2(:,1),FM2(:,2),'--k', ...
FM5(:,1),FM5(:,2),':k');
legend('\it d=0.1','\it d=0.2',...
'\it d=0.3',3)
ylabel('P_{MISS}');
xlabel('P_{FA}');


To see the effect of d, the commands

T=-3:0.1:3; sqdistrocplot(3,100000,T);

generated the plot shown in Figure 3.

Figure 3: The receiver operating curve for the communications system of Quiz 8.4 with squared distortion. (Log-log plot of P_MISS versus P_FA, each axis from 10^{-5} to 10^0, with curves for d = 0.1, 0.2, 0.3.)


Quiz Solutions – Chapter 9

Quiz 9.1

(1) First, we calculate the marginal PDF for 0 \le y \le 1:

f_Y(y) = \int_0^y 2(y + x)\,dx = \left[ 2xy + x^2 \right]_{x=0}^{x=y} = 3y^2 (1)

This implies the conditional PDF of X given Y is

f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \begin{cases} \dfrac{2}{3y} + \dfrac{2x}{3y^2} & 0 \le x \le y \\ 0 & \text{otherwise} \end{cases} (2)

(2) The minimum mean square error estimate of X given Y = y is

\hat{x}_M(y) = E[X|Y = y] = \int_0^y \left( \frac{2x}{3y} + \frac{2x^2}{3y^2} \right) dx = 5y/9 (3)

Thus the MMSE estimator of X given Y is \hat{X}_M(Y) = 5Y/9.

(3) To obtain the conditional PDF f_{Y|X}(y|x), we need the marginal PDF f_X(x). For 0 \le x \le 1,

f_X(x) = \int_x^1 2(y + x)\,dy = \left[ y^2 + 2xy \right]_{y=x}^{y=1} = 1 + 2x - 3x^2 (4)

For 0 \le x \le 1, the conditional PDF of Y given X is

f_{Y|X}(y|x) = \begin{cases} \dfrac{2(y + x)}{1 + 2x - 3x^2} & x \le y \le 1 \\ 0 & \text{otherwise} \end{cases} (5)

(4) The MMSE estimate of Y given X = x is

\hat{y}_M(x) = E[Y|X = x] = \int_x^1 \frac{2y^2 + 2xy}{1 + 2x - 3x^2}\,dy (6)
= \frac{2y^3/3 + xy^2}{1 + 2x - 3x^2}\Big|_{y=x}^{y=1} (7)
= \frac{2 + 3x - 5x^3}{3 + 6x - 9x^2} (8)


Quiz 9.2

(1) Since the expectation of the sum equals the sum of the expectations,

E[R] = E[T] + E[X] = 0 (1)

(2) Since T and X are independent, the variance of the sum R = T + X is

Var[R] = Var[T] + Var[X] = 9 + 3 = 12 (2)

(3) Since T and R have expected values E[R] = E[T] = 0,

Cov[T, R] = E[TR] = E[T(T + X)] = E[T^2] + E[TX] (3)

Since T and X are independent and have zero expected value, E[TX] = E[T]E[X] = 0 and E[T^2] = Var[T]. Thus Cov[T, R] = Var[T] = 9.

(4) From Definition 4.8, the correlation coefficient of T and R is

\rho_{T,R} = \frac{Cov[T, R]}{\sqrt{Var[R] Var[T]}} = \frac{\sigma_T}{\sigma_R} = \frac{\sqrt{3}}{2} (4)

(5) From Theorem 9.4, the optimum linear estimate of T given R is

\hat{T}_L(R) = \rho_{T,R}\frac{\sigma_T}{\sigma_R}(R - E[R]) + E[T] (5)

Since E[R] = E[T] = 0 and \rho_{T,R} = \sigma_T/\sigma_R,

\hat{T}_L(R) = \frac{\sigma_T^2}{\sigma_R^2} R = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_X^2} R = \frac{3}{4} R (6)

Hence a^* = 3/4 and b^* = 0.

(6) By Theorem 9.4, the mean square error of the linear estimate is

e_L^* = Var[T](1 - \rho_{T,R}^2) = 9(1 - 3/4) = 9/4 (7)

Quiz 9.3

When R = r, X = Y − 40 − 40 log₁₀ r is Gaussian with expected value −40 − 40 log₁₀ r and variance 64. The conditional PDF of X given R is
$$f_{X|R}(x|r) = \frac{1}{\sqrt{128\pi}}\,e^{-(x+40+40\log_{10}r)^2/128} \qquad (1)$$
From the conditional PDF $f_{X|R}(x|r)$, we can use Definition 9.2 to write the ML estimate of R given X = x as
$$\hat{r}_{\mathrm{ML}}(x) = \arg\max_{r\ge 0} f_{X|R}(x|r) \qquad (2)$$
We observe that $f_{X|R}(x|r)$ is maximized when the exponent $(x+40+40\log_{10}r)^2$ is minimized. This minimum occurs when the exponent is zero, yielding
$$\log_{10} r = -1 - x/40 \qquad (3)$$
or
$$\hat{r}_{\mathrm{ML}}(x) = (0.1)10^{-x/40} \text{ m} \qquad (4)$$
If the result doesn't look correct, note that a typical figure for the signal strength might be x = −120 dB. This corresponds to a distance estimate of $\hat{r}_{\mathrm{ML}}(-120) = 100$ m.

For the MAP estimate, we observe that the joint PDF of X and R is
$$f_{X,R}(x,r) = f_{X|R}(x|r)\,f_R(r) = \frac{1}{10^6\sqrt{32\pi}}\,r\,e^{-(x+40+40\log_{10}r)^2/128} \qquad (5)$$
From Theorem 9.6, the MAP estimate of R given X = x is the value of r that maximizes $f_{X,R}(x,r)$. That is,
$$\hat{r}_{\mathrm{MAP}}(x) = \arg\max_{0\le r\le 1000} f_{X,R}(x,r) \qquad (6)$$
Note that we have included the constraint r ≤ 1000 in the maximization to highlight the fact that under our probability model, R ≤ 1000 m. Setting the derivative of $f_{X,R}(x,r)$ with respect to r to zero yields
$$e^{-(x+40+40\log_{10}r)^2/128}\left[1 - \frac{80\log_{10}e}{128}\left(x+40+40\log_{10}r\right)\right] = 0 \qquad (7)$$
Solving for r yields
$$r = 10^{\frac{1}{25\log_{10}e}-1}\,10^{-x/40} = (0.1236)10^{-x/40} \qquad (8)$$
This is the MAP estimate of R given X = x as long as r ≤ 1000 m. When x ≤ −156.3 dB, the above estimate will exceed 1000 m, which is not possible in our probability model. Hence, the complete description of the MAP estimate is
$$\hat{r}_{\mathrm{MAP}}(x) = \begin{cases} 1000 & x < -156.3 \\ (0.1236)10^{-x/40} & x \ge -156.3 \end{cases} \qquad (9)$$
For example, if x = −120 dB, then $\hat{r}_{\mathrm{MAP}}(-120) = 123.6$ m. When the measured signal strength is not too low, the MAP estimate is 23.6% larger than the ML estimate. This reflects the fact that large values of R are a priori more probable than small values. However, for very low signal strengths, the MAP estimate takes into account that the distance can never exceed 1000 m.
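
The maximization in Equation (6) is also easy to check numerically. A minimal sketch (using the joint PDF above, up to constants that do not affect the argmax) for x = −120 dB:

x=-120;
r=1:0.01:1000;                          %grid of distances in meters
g=r.*exp(-(x+40+40*log10(r)).^2/128);   %f_XR(x,r) up to a constant factor
[gmax,i]=max(g);
r(i)    %near 123.6, matching (0.1236)10^(-x/40)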

Quiz 9.4

(1) From Theorem 9.4, the LMSE estimate of $X_2$ given $Y_2$ is $\hat{X}_2(Y_2) = a^*Y_2 + b^*$ where
$$a^* = \frac{\operatorname{Cov}[X_2,Y_2]}{\operatorname{Var}[Y_2]}, \qquad b^* = \mu_{X_2} - a^*\mu_{Y_2}. \qquad (1)$$
Because E[X] = E[Y] = 0,
$$\operatorname{Cov}[X_2,Y_2] = E[X_2Y_2] = E[X_2(X_2+W_2)] = E[X_2^2] = 1 \qquad (2)$$
$$\operatorname{Var}[Y_2] = \operatorname{Var}[X_2] + \operatorname{Var}[W_2] = E[X_2^2] + E[W_2^2] = 1.1 \qquad (3)$$
It follows that $a^* = 1/1.1$. Because $\mu_{X_2} = \mu_{Y_2} = 0$, it follows that $b^* = 0$. Finally, to compute the expected square error, we calculate the correlation coefficient
$$\rho_{X_2,Y_2} = \frac{\operatorname{Cov}[X_2,Y_2]}{\sigma_{X_2}\sigma_{Y_2}} = \frac{1}{\sqrt{1.1}} \qquad (4)$$
The expected square error is
$$e_L^* = \operatorname{Var}[X_2](1-\rho_{X_2,Y_2}^2) = 1 - \frac{1}{1.1} = \frac{1}{11} = 0.0909 \qquad (5)$$

(2) Since Y = X + W and E[X] = E[W] = 0, it follows that E[Y] = 0. Thus we can apply Theorem 9.7. Note that X and W have correlation matrices
$$R_X = \begin{bmatrix} 1 & -0.9 \\ -0.9 & 1 \end{bmatrix}, \qquad R_W = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}. \qquad (6)$$
In terms of Theorem 9.7, n = 2 and we wish to estimate $X_2$ given the observation vector $Y = [Y_1\ Y_2]'$. To apply Theorem 9.7, we need to find $R_Y$ and $R_{YX_2}$.
$$R_Y = E[YY'] = E[(X+W)(X'+W')] \qquad (7)$$
$$= E[XX' + XW' + WX' + WW']. \qquad (8)$$
Because X and W are independent, $E[XW'] = E[X]E[W'] = 0$. Similarly, $E[WX'] = 0$. This implies
$$R_Y = E[XX'] + E[WW'] = R_X + R_W = \begin{bmatrix} 1.1 & -0.9 \\ -0.9 & 1.1 \end{bmatrix}. \qquad (9)$$
In addition, we need to find
$$R_{YX_2} = E[YX_2] = \begin{bmatrix} E[Y_1X_2] \\ E[Y_2X_2] \end{bmatrix} = \begin{bmatrix} E[(X_1+W_1)X_2] \\ E[(X_2+W_2)X_2] \end{bmatrix}. \qquad (10)$$
Since X and W are independent vectors, $E[W_1X_2] = E[W_1]E[X_2] = 0$ and $E[W_2X_2] = 0$. Thus
$$R_{YX_2} = \begin{bmatrix} E[X_1X_2] \\ E[X_2^2] \end{bmatrix} = \begin{bmatrix} -0.9 \\ 1 \end{bmatrix}. \qquad (11)$$
By Theorem 9.7,
$$\hat{a} = R_Y^{-1}R_{YX_2} = \begin{bmatrix} -0.225 \\ 0.725 \end{bmatrix} \qquad (12)$$
Therefore, the optimum linear estimator of $X_2$ given $Y_1$ and $Y_2$ is
$$\hat{X}_L = \hat{a}'Y = -0.225Y_1 + 0.725Y_2. \qquad (13)$$
The mean square error is
$$\operatorname{Var}[X_2] - \hat{a}'R_{YX_2} = \operatorname{Var}[X_2] - a_1 r_{Y_1,X_2} - a_2 r_{Y_2,X_2} = 0.0725. \qquad (14)$$
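
The part (2) answer is a two-line MATLAB calculation; this sketch just repeats the matrix arithmetic:

RY=[1.1 -0.9; -0.9 1.1];
RYX2=[-0.9; 1];
ahat=inv(RY)*RYX2    %[-0.225; 0.725]
e=1-(ahat')*RYX2     %mean square error 0.0725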

Quiz 9.5

Since X and W have zero expected value, Y also has zero expected value. Thus, by Theorem 9.7, $\hat{X}_L(Y) = \hat{a}'Y$ where $\hat{a} = R_Y^{-1}R_{YX}$. Since X and W are independent, $E[WX] = 0$ and $E[XW'] = 0'$. This implies
$$R_{YX} = E[YX] = E[(\mathbf{1}X + W)X] = \mathbf{1}E[X^2] = \mathbf{1}. \qquad (1)$$
By the same reasoning, the correlation matrix of Y is
$$R_Y = E[YY'] = E[(\mathbf{1}X+W)(\mathbf{1}'X+W')] \qquad (2)$$
$$= \mathbf{1}\mathbf{1}'E[X^2] + \mathbf{1}E[XW'] + E[WX]\mathbf{1}' + E[WW'] \qquad (3)$$
$$= \mathbf{1}\mathbf{1}' + R_W \qquad (4)$$
Note that $\mathbf{1}\mathbf{1}'$ is a 20 × 20 matrix with every entry equal to 1. Thus,
$$\hat{a} = R_Y^{-1}R_{YX} = \left(\mathbf{1}\mathbf{1}'+R_W\right)^{-1}\mathbf{1} \qquad (5)$$
and the optimal linear estimator is
$$\hat{X}_L(Y) = \mathbf{1}'\left(\mathbf{1}\mathbf{1}'+R_W\right)^{-1}Y \qquad (6)$$
The mean square error is
$$e_L^* = \operatorname{Var}[X] - \hat{a}'R_{YX} = 1 - \mathbf{1}'\left(\mathbf{1}\mathbf{1}'+R_W\right)^{-1}\mathbf{1} \qquad (7)$$
Now we note that $R_W$ has i, jth entry $R_W(i,j) = c^{|i-j|-1}$. The question we must address is what value c minimizes $e_L^*$. This problem is atypical in that one does not usually get to choose the correlation structure of the noise. However, we will see that the answer is somewhat instructive.

We note that the answer is not obviously apparent from Equation (7). In particular, we observe that $\operatorname{Var}[W_i] = R_W(i,i) = 1/c$. Thus, when c is small, the noises $W_i$ have high variance and we would expect our estimator to be poor. On the other hand, if c is large, $W_i$ and $W_j$ are highly correlated and the separate measurements of X are very dependent. This would suggest that large values of c will also result in poor MSE. If this argument is not clear, consider the extreme case in which every $W_i$ and $W_j$ have correlation coefficient $\rho_{ij} = 1$. In this case, our 20 measurements will be all the same and one measurement is as good as 20 measurements.

To find the optimal value of c, we write a MATLAB function mquiz9(c) to calculate the MSE for a given c and a second function mquiz9minc(c) that finds and plots the MSE for a range of values of c.

function [mse,af]=mquiz9(c);
v1=ones(20,1);
RW=toeplitz(c.^((0:19)-1));
RY=(v1*(v1')) + RW;
af=(inv(RY))*v1;
mse=1-((v1')*af);

function cmin=mquiz9minc(c);
msec=zeros(size(c));
for k=1:length(c),
   [msec(k),af]=mquiz9(c(k));
end
plot(c,msec);
xlabel('c');ylabel('e_L^*');
[msemin,optk]=min(msec);
cmin=c(optk);

Note in mquiz9 that v1 corresponds to the vector 1 of all ones. The following commands find the minimum c and also produce the following graph:

>> c=0.01:0.01:0.99;
>> mquiz9minc(c)
ans =
    0.4500

[Plot: e_L^* versus c for 0 < c < 1.]

As we see in the graph, both small values and large values of c result in large MSE.

Quiz Solutions – Chapter 10

Quiz 10.1

There are many correct answers to this question. A correct answer speciﬁes enough

random variables to specify the sample path exactly. One choice for an alternate set of

random variables that would specify m(t, s) is

• m(0, s), the number of ongoing calls at the start of the experiment

• N, the number of new calls that arrive during the experiment

• $X_1, \ldots, X_N$, the interarrival times of the N new arrivals

• H, the number of calls that hang up during the experiment

• $D_1, \ldots, D_H$, the call completion times of the H calls that hang up

Quiz 10.2

(1) We obtain a continuous time, continuous valued process when we record the temper-

ature as a continuous waveform over time.

(2) If at every moment in time, we round the temperature to the nearest degree, then we

obtain a continuous time, discrete valued process.

(3) If we sample the process in part (1) every T seconds, then we obtain a discrete time, continuous valued process.

(4) Rounding the samples in part (3) to the nearest integer degree yields a discrete time, discrete valued process.

Quiz 10.3

(1) Each resistor has resistance R in ohms with uniform PDF
$$f_R(r) = \begin{cases} 0.01 & 950 \le r \le 1050 \\ 0 & \text{otherwise} \end{cases} \qquad (1)$$
The probability that a test produces a 1% resistor is
$$p = P[990 \le R \le 1010] = \int_{990}^{1010} (0.01)\,dr = 0.2 \qquad (2)$$

(2) In t seconds, exactly t resistors are tested. Each resistor is a 1% resistor with probability p, independent of any other resistor. Consequently, the number of 1% resistors found has the binomial PMF
$$P_{N(t)}(n) = \begin{cases} \binom{t}{n}p^n(1-p)^{t-n} & n = 0, 1, \ldots, t \\ 0 & \text{otherwise} \end{cases} \qquad (3)$$

(3) First we will find the PMF of $T_1$. This problem is easy if we view each resistor test as an independent trial. A success occurs on a trial with probability p if we find a 1% resistor. The first 1% resistor is found at time $T_1 = t$ if we observe failures on trials 1, ..., t − 1 followed by a success on trial t. Hence, just as in Example 2.11, $T_1$ has the geometric PMF
$$P_{T_1}(t) = \begin{cases} (1-p)^{t-1}p & t = 1, 2, \ldots \\ 0 & \text{otherwise} \end{cases} \qquad (4)$$
Since p = 0.2, the probability the first 1% resistor is found in exactly five seconds is $P_{T_1}(5) = (0.8)^4(0.2) = 0.08192$.

(4) From Theorem 2.5, a geometric random variable with success probability p has expected value 1/p. In this problem, $E[T_1] = 1/p = 5$.

(5) Note that once we find the first 1% resistor, the number of additional trials needed to find the second 1% resistor once again has a geometric PMF with expected value 1/p since each independent trial is a success with probability p. That is, $T_2 = T_1 + T'$ where $T'$ is independent and identically distributed to $T_1$. Thus
$$E[T_2|T_1 = 10] = E[T_1|T_1 = 10] + E[T'|T_1 = 10] \qquad (5)$$
$$= 10 + E[T'] = 10 + 5 = 15 \qquad (6)$$
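
Parts (3) and (4) can be double-checked with the geometricpmf and geometricrv functions described in the function reference at the end of this document:

p=0.2;
geometricpmf(p,5)        %P_T1(5)=0.08192
t1=geometricrv(p,10000); %10,000 samples of T1
mean(t1)                 %near E[T1]=1/p=5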

Quiz 10.4

Since each $X_i$ is a N(0, 1) random variable, each $X_i$ has PDF
$$f_{X_i}(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2} \qquad (1)$$
By Theorem 10.1, the joint PDF of $X = [X_1 \cdots X_n]'$ is
$$f_X(x) = f_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = \prod_{i=1}^{n} f_X(x_i) = \frac{1}{(2\pi)^{n/2}}e^{-(x_1^2+\cdots+x_n^2)/2} \qquad (2)$$

Quiz 10.5

The first and second hours are nonoverlapping intervals. Since one hour equals 3600 sec and the Poisson process has a rate of 10 packets/sec, the expected number of packets in each hour is $E[M_i] = \alpha = 36{,}000$. This implies $M_1$ and $M_2$ are independent Poisson random variables each with PMF
$$P_{M_i}(m) = \begin{cases} \dfrac{\alpha^m e^{-\alpha}}{m!} & m = 0, 1, 2, \ldots \\ 0 & \text{otherwise} \end{cases} \qquad (1)$$
Since $M_1$ and $M_2$ are independent, the joint PMF of $M_1$ and $M_2$ is
$$P_{M_1,M_2}(m_1,m_2) = P_{M_1}(m_1)P_{M_2}(m_2) = \begin{cases} \dfrac{\alpha^{m_1+m_2}e^{-2\alpha}}{m_1!\,m_2!} & m_1 = 0, 1, \ldots;\ m_2 = 0, 1, \ldots \\ 0 & \text{otherwise.} \end{cases} \qquad (2)$$

Quiz 10.6

To answer whether $N'(t)$ is a Poisson process, we look at the interarrival times. Let $X_1, X_2, \ldots$ denote the interarrival times of the N(t) process. Since we count only even-numbered arrivals for $N'(t)$, the time until the first arrival of $N'(t)$ is $Y_1 = X_1 + X_2$. Since $X_1$ and $X_2$ are independent exponential (λ) random variables, $Y_1$ is an Erlang (n = 2, λ) random variable; see Theorem 6.11. Since $Y_i$, the ith interarrival time of the $N'(t)$ process, has the same PDF as $Y_1$, we can conclude that the interarrival times of $N'(t)$ are not exponential random variables. Thus $N'(t)$ is not a Poisson process.

Quiz 10.7

First, we note that for t > s,
$$X(t) - X(s) = \frac{W(t) - W(s)}{\sqrt{\alpha}} \qquad (1)$$
Since W(t) − W(s) is a Gaussian random variable, Theorem 3.13 states that X(t) − X(s) is Gaussian with expected value
$$E[X(t) - X(s)] = \frac{E[W(t) - W(s)]}{\sqrt{\alpha}} = 0 \qquad (2)$$
and variance
$$E\left[(X(t) - X(s))^2\right] = \frac{E\left[(W(t) - W(s))^2\right]}{\alpha} = \frac{\alpha(t-s)}{\alpha} = t - s \qquad (3)$$
Consider $s' \le s < t$. Since $s \ge s'$, W(t) − W(s) is independent of $W(s')$. This implies $[W(t) - W(s)]/\sqrt{\alpha}$ is independent of $W(s')/\sqrt{\alpha}$ for all $s \ge s'$. That is, X(t) − X(s) is independent of $X(s')$ for all $s \ge s'$. Thus X(t) is a Brownian motion process with variance Var[X(t)] = t.

Quiz 10.8

First we find the expected value
$$\mu_Y(t) = \mu_X(t) + \mu_N(t) = \mu_X(t). \qquad (1)$$
To find the autocorrelation, we observe that since X(t) and N(t) are independent and since N(t) has zero expected value, $E[X(t)N(t')] = E[X(t)]E[N(t')] = 0$. Since $R_Y(t,\tau) = E[Y(t)Y(t+\tau)]$, we have
$$R_Y(t,\tau) = E[(X(t)+N(t))(X(t+\tau)+N(t+\tau))] \qquad (2)$$
$$= E[X(t)X(t+\tau)] + E[X(t)N(t+\tau)] + E[X(t+\tau)N(t)] + E[N(t)N(t+\tau)] \qquad (3)$$
$$= R_X(t,\tau) + R_N(t,\tau). \qquad (4)$$

Quiz 10.9

From Definition 10.14, $X_1, X_2, \ldots$ is a stationary random sequence if for all sets of time instants $n_1, \ldots, n_m$ and time offset k,
$$f_{X_{n_1},\ldots,X_{n_m}}(x_1,\ldots,x_m) = f_{X_{n_1+k},\ldots,X_{n_m+k}}(x_1,\ldots,x_m) \qquad (1)$$
Since the random sequence is iid,
$$f_{X_{n_1},\ldots,X_{n_m}}(x_1,\ldots,x_m) = f_X(x_1)f_X(x_2)\cdots f_X(x_m) \qquad (2)$$
Similarly, for time instants $n_1+k, \ldots, n_m+k$,
$$f_{X_{n_1+k},\ldots,X_{n_m+k}}(x_1,\ldots,x_m) = f_X(x_1)f_X(x_2)\cdots f_X(x_m) \qquad (3)$$
We can conclude that the iid random sequence is stationary.

Quiz 10.10

We must check whether each function R(τ) meets the conditions of Theorem 10.12:
$$R(0) \ge 0 \qquad R(\tau) = R(-\tau) \qquad |R(\tau)| \le R(0) \qquad (1)$$
(1) $R_1(\tau) = e^{-|\tau|}$ meets all three conditions and thus is valid.

(2) $R_2(\tau) = e^{-\tau^2}$ also is valid.

(3) $R_3(\tau) = e^{-\tau}\cos\tau$ is not valid because
$$R_3(-2\pi) = e^{2\pi}\cos 2\pi = e^{2\pi} > 1 = R_3(0) \qquad (2)$$
(4) $R_4(\tau) = e^{-\tau^2}\sin\tau$ also cannot be an autocorrelation function because
$$R_4(\pi/2) = e^{-\pi^2/4}\sin(\pi/2) = e^{-\pi^2/4} > 0 = R_4(0) \qquad (3)$$
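
A minimal numerical check of the condition |R(τ)| ≤ R(0) on a grid of τ confirms parts (3) and (4):

tau=-10:0.01:10;
R3=exp(-tau).*cos(tau);
R4=exp(-tau.^2).*sin(tau);
max(abs(R3))   %much larger than R3(0)=1, so R3 is not valid
max(abs(R4))   %positive, while R4(0)=0, so R4 is not valid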

Quiz 10.11

(1) The autocorrelation of Y(t) is
$$R_Y(t,\tau) = E[Y(t)Y(t+\tau)] \qquad (1)$$
$$= E[X(-t)X(-t-\tau)] \qquad (2)$$
$$= R_X(-t - (-t-\tau)) = R_X(\tau) \qquad (3)$$
Since $E[Y(t)] = E[X(-t)] = \mu_X$, we can conclude that Y(t) is a wide sense stationary process. In fact, we see that by viewing a process backwards in time, we see the same second order statistics.

(2) Since X(t) and Y(t) are both wide sense stationary processes, we can check whether they are jointly wide sense stationary by seeing if $R_{XY}(t,\tau)$ is just a function of τ. In this case,
$$R_{XY}(t,\tau) = E[X(t)Y(t+\tau)] \qquad (4)$$
$$= E[X(t)X(-t-\tau)] \qquad (5)$$
$$= R_X(t - (-t-\tau)) = R_X(2t+\tau) \qquad (6)$$
Since $R_{XY}(t,\tau)$ depends on both t and τ, we conclude that X(t) and Y(t) are not jointly wide sense stationary. To see why this is, suppose $R_X(\tau) = e^{-|\tau|}$ so that samples of X(t) far apart in time have almost no correlation. In this case, as t gets larger, Y(t) = X(−t) and X(t) become less and less correlated.

Quiz 10.12

From the problem statement,
$$E[X(t)] = E[X(t+1)] = 0 \qquad (1)$$
$$E[X(t)X(t+1)] = 1/2 \qquad (2)$$
$$\operatorname{Var}[X(t)] = \operatorname{Var}[X(t+1)] = 1 \qquad (3)$$
The Gaussian random vector $X = [X(t)\ X(t+1)]'$ has covariance matrix and corresponding inverse
$$C_X = \begin{bmatrix} 1 & 1/2 \\ 1/2 & 1 \end{bmatrix} \qquad C_X^{-1} = \frac{4}{3}\begin{bmatrix} 1 & -1/2 \\ -1/2 & 1 \end{bmatrix} \qquad (4)$$
Since
$$x'C_X^{-1}x = \begin{bmatrix} x_0 & x_1 \end{bmatrix}\frac{4}{3}\begin{bmatrix} 1 & -1/2 \\ -1/2 & 1 \end{bmatrix}\begin{bmatrix} x_0 \\ x_1 \end{bmatrix} = \frac{4}{3}\left(x_0^2 - x_0x_1 + x_1^2\right) \qquad (5)$$
the joint PDF of X(t) and X(t+1) is the Gaussian vector PDF
$$f_{X(t),X(t+1)}(x_0,x_1) = \frac{1}{(2\pi)^{n/2}[\det(C_X)]^{1/2}}\exp\left(-\frac{1}{2}x'C_X^{-1}x\right) \qquad (6)$$
$$= \frac{1}{\sqrt{3\pi^2}}\,e^{-\frac{2}{3}\left(x_0^2 - x_0x_1 + x_1^2\right)} \qquad (7)$$
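
This PDF can also be evaluated with the gaussvectorpdf function listed in the function reference at the end of this document. For example, at $x_0 = x_1 = 0$ the PDF should equal $1/\sqrt{3\pi^2} \approx 0.1838$:

CX=[1 0.5; 0.5 1];
gaussvectorpdf([0;0],CX,[0;0])  %near 0.1838
1/sqrt(3*pi^2)                  %closed form for comparison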

[Plot: M(t) versus t for 0 ≤ t ≤ 100 minutes.]

Figure 4: Sample path of 100 minutes of the blocking switch of Quiz 10.13.

Quiz 10.13

The simple structure of the switch simulation of Example 10.28 admits a deceptively simple solution in terms of the vector of arrivals A and the vector of departures D. With the introduction of call blocking, we cannot generate these vectors all at once. In particular, when an arrival occurs at time t, we need to know that M(t), the number of ongoing calls, satisfies M(t) < c = 120. Otherwise, when M(t) = c, we must block the call. Call blocking can be implemented by setting the service time of the call to zero so that the call departs as soon as it arrives.

The blocking switch is an example of a discrete event system. The system evolves via a sequence of discrete events, namely arrivals and departures, at discrete time instances. A simulation of the system moves from one time instant to the next by maintaining a chronological schedule of future events (arrivals and departures) to be executed. The program simply executes the event at the head of the schedule. The logic of such a simulation is

1. Start at time t = 0 with an empty system. Schedule the first arrival to occur at $S_1$, an exponential (λ) random variable.

2. Examine the head-of-schedule event.

• When the head-of-schedule event is the kth arrival, occurring at time t, check the state M(t).

– If M(t) < c, admit the arrival, increase the system state n by 1, and schedule a departure to occur at time $t + S_k$, where $S_k$ is an exponential (µ) random variable.

– If M(t) = c, block the arrival and do not schedule a departure event.

• If the head-of-schedule event is a departure, reduce the system state n by 1.

3. Delete the head-of-schedule event and go to step 2.

After the head-of-schedule event is completed and any new events (departures in this system) are scheduled, we know the system state cannot change until the next scheduled event. Thus we know that M(t) will stay the same until then. In our simulation, we use the vector t as the set of time instances at which we inspect the system state. Thus for all times t(i) between the current head-of-schedule event and the next, we set m(i) to the current switch state.

The complete program is shown in Figure 5. In most programming languages, it is common to implement the event schedule as a linked list where each item in the list has a data structure indicating an event timestamp and the type of the event. In MATLAB, a simple (but not elegant) way to do this is to maintain two vectors: time is a list of timestamps of scheduled events and event is the list of event types. In this case, event(i)=1 if the ith scheduled event is an arrival, or event(i)=-1 if the ith scheduled event is a departure.

When the program is passed a vector t, the output [m a b] is such that m(i) is the

number of ongoing calls at time t(i) while a and b are the number of admits and blocks.

The following instructions

t=0:0.1:5000;

[m,a,b]=simblockswitch(10,0.1,120,t);

plot(t,m);

generated a simulation lasting 5,000 minutes. A sample path of the first 100 minutes of that simulation is shown in Figure 4. The 5,000 minute full simulation produced a=49658 admitted calls and b=239 blocked calls. We can estimate the probability a call is blocked as
$$\hat{P}_b = \frac{b}{a+b} = 0.0048. \qquad (1)$$
In Chapter 12, we will learn that the exact blocking probability is given by Equation (12.93), a result known as the "Erlang-B formula." From the Erlang-B formula, we can calculate that the exact blocking probability is $P_b = 0.0057$. One reason our simulation underestimates the blocking probability is that in a 5,000 minute simulation, roughly the first 100 minutes are needed to load up the switch since the switch is idle when the simulation starts at time t = 0. However, this says that roughly the first two percent of the simulation time was unusual. Thus this would account for only part of the disparity. The rest of the gap between 0.0048 and 0.0057 is that a simulation that includes only 239 blocks is not all that likely to give a very accurate result for the blocking probability.

Note that in Chapter 12, we will learn that the blocking switch is an example of an M/M/c/c queue, a kind of Markov chain. Chapter 12 develops techniques for analyzing and simulating systems described by Markov chains that are much simpler than the discrete event simulation technique shown here. Nevertheless, for very complicated systems, discrete event simulation is a widely used and often very efficient simulation method.

function [M,admits,blocks]=simblockswitch(lam,mu,c,t);
blocks=0; %total # blocks
admits=0; %total # admits
M=zeros(size(t));
n=0; % # in system
time=[ exponentialrv(lam,1) ];
event=[ 1 ]; %first event is an arrival
timenow=0;
tmax=max(t);
while (timenow<tmax)
   M((timenow<=t)&(t<time(1)))=n;
   timenow=time(1);
   eventnow=event(1);
   event(1)=[ ]; time(1)=[ ]; % clear current event
   if (eventnow==1) % arrival
      arrival=timenow+exponentialrv(lam,1); % next arrival
      b4arrival=time<arrival;
      event=[event(b4arrival) 1 event(~b4arrival)];
      time=[time(b4arrival) arrival time(~b4arrival)];
      if n<c %call admitted
         admits=admits+1;
         n=n+1;
         depart=timenow+exponentialrv(mu,1);
         b4depart=time<depart;
         event=[event(b4depart) -1 event(~b4depart)];
         time=[time(b4depart) depart time(~b4depart)];
      else
         blocks=blocks+1; %one more block, immed departure
         disp(sprintf('Time %10.3d Admits %10d Blocks %10d',...
            timenow,admits,blocks));
      end
   elseif (eventnow==-1) %departure
      n=n-1;
   end
end

Figure 5: Discrete event simulation of the blocking switch of Quiz 10.13.

Quiz Solutions – Chapter 11

Quiz 11.1

By Theorem 11.2,
$$\mu_Y = \mu_X\int_{-\infty}^{\infty}h(t)\,dt = 2\int_0^{\infty}e^{-t}\,dt = 2 \qquad (1)$$
Since $R_X(\tau) = \delta(\tau)$, the autocorrelation function of the output is
$$R_Y(\tau) = \int_{-\infty}^{\infty}h(u)\int_{-\infty}^{\infty}h(v)\delta(\tau+u-v)\,dv\,du = \int_{-\infty}^{\infty}h(u)h(\tau+u)\,du \qquad (2)$$
For τ > 0, we have
$$R_Y(\tau) = \int_0^{\infty}e^{-u}e^{-\tau-u}\,du = e^{-\tau}\int_0^{\infty}e^{-2u}\,du = \frac{1}{2}e^{-\tau} \qquad (3)$$
For τ < 0, we can deduce that $R_Y(\tau) = \frac{1}{2}e^{-|\tau|}$ by symmetry. Just to be safe though, we can double check. For τ < 0,
$$R_Y(\tau) = \int_{-\tau}^{\infty}h(u)h(\tau+u)\,du = \int_{-\tau}^{\infty}e^{-u}e^{-\tau-u}\,du = \frac{1}{2}e^{\tau} \qquad (4)$$
Hence,
$$R_Y(\tau) = \frac{1}{2}e^{-|\tau|} \qquad (5)$$
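
The integral in Equation (3) is simple enough to approximate numerically; a minimal sketch for the test point τ = 1:

du=0.001; u=0:du:20;   %h(t)=exp(-t) for t>=0, truncated at t=20
tau=1;
RY=sum(exp(-u).*exp(-(tau+u)))*du   %near 0.5*exp(-1)=0.1839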

Quiz 11.2

The expected value of the output is
$$\mu_Y = \mu_X\sum_{n=-\infty}^{\infty}h_n = 0.5(1 + (-1)) = 0 \qquad (1)$$
The autocorrelation of the output is
$$R_Y[n] = \sum_{i=0}^{1}\sum_{j=0}^{1}h_ih_jR_X[n+i-j] \qquad (2)$$
$$= 2R_X[n] - R_X[n-1] - R_X[n+1] = \begin{cases} 1 & n = 0 \\ 0 & \text{otherwise} \end{cases} \qquad (3)$$
Since $\mu_Y = 0$, the variance of $Y_n$ is $\operatorname{Var}[Y_n] = E[Y_n^2] = R_Y[0] = 1$.

[Plots: $S_X(f)$ and $R_X(\tau)$ for (a) W = 10 and (b) W = 1000.]

Figure 6: The autocorrelation $R_X(\tau)$ and power spectral density $S_X(f)$ for process X(t) in Quiz 11.5.

Quiz 11.3

By Theorem 11.8, $Y = [Y_{33}\ Y_{34}\ Y_{35}]'$ is a Gaussian random vector since $X_n$ is a Gaussian random process. Moreover, by Theorem 11.5, each $Y_n$ has expected value $E[Y_n] = \mu_X\sum_{n=-\infty}^{\infty}h_n = 0$. Thus E[Y] = 0. To find the PDF of the Gaussian vector Y, we need to find the covariance matrix $C_Y$, which equals the correlation matrix $R_Y$ since Y has zero expected value. One way to find $R_Y$ is to observe that $R_Y$ has the Toeplitz structure of Theorem 11.6 and to use Theorem 11.5 to find the autocorrelation function
$$R_Y[n] = \sum_{i=-\infty}^{\infty}\sum_{j=-\infty}^{\infty}h_ih_jR_X[n+i-j]. \qquad (1)$$
Despite the fact that $R_X[k]$ is an impulse, using Equation (1) is surprisingly tedious because we still need to sum over all i and j such that n + i − j = 0.

In this problem, it is simpler to observe that Y = HX where
$$X = \begin{bmatrix} X_{30} & X_{31} & X_{32} & X_{33} & X_{34} & X_{35} \end{bmatrix}' \qquad (2)$$
and
$$H = \frac{1}{4}\begin{bmatrix} 1 & 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 & 1 & 1 \end{bmatrix}. \qquad (3)$$
In this case, following Theorem 11.7, or by directly applying Theorem 5.13 with $\mu_X = 0$ and A = H, we obtain $R_Y = HR_XH'$. Since $R_X[n] = \delta_n$, $R_X = I$, the identity matrix. Thus
$$C_Y = R_Y = HH' = \frac{1}{16}\begin{bmatrix} 4 & 3 & 2 \\ 3 & 4 & 3 \\ 2 & 3 & 4 \end{bmatrix}. \qquad (4)$$
It follows (very quickly if you use MATLAB for 3 × 3 matrix inversion) that
$$C_Y^{-1} = 16\begin{bmatrix} 7/12 & -1/2 & 1/12 \\ -1/2 & 1 & -1/2 \\ 1/12 & -1/2 & 7/12 \end{bmatrix}. \qquad (5)$$
Thus, the PDF of Y is
$$f_Y(y) = \frac{1}{(2\pi)^{3/2}[\det(C_Y)]^{1/2}}\exp\left(-\frac{1}{2}y'C_Y^{-1}y\right). \qquad (6)$$
A disagreeable amount of algebra will show $\det(C_Y) = 3/1024$ and that the PDF can be "simplified" to
$$f_Y(y) = \frac{16}{\sqrt{6\pi^3}}\exp\left(-8\left[\frac{7}{12}y_{33}^2 + y_{34}^2 + \frac{7}{12}y_{35}^2 - y_{33}y_{34} + \frac{1}{6}y_{33}y_{35} - y_{34}y_{35}\right]\right). \qquad (7)$$
Equation (7) shows that one of the nicest features of the multivariate Gaussian distribution is that $y'C_Y^{-1}y$ is a very concise representation of the cross-terms in the exponent of $f_Y(y)$.

Quiz 11.4

This quiz is solved using Theorem 11.9 for the case of k = 1 and M = 2. In this case, $X_n = [X_{n-1}\ X_n]'$ and
$$R_{X_n} = \begin{bmatrix} R_X[0] & R_X[1] \\ R_X[1] & R_X[0] \end{bmatrix} = \begin{bmatrix} 1.1 & 0.9 \\ 0.9 & 1.1 \end{bmatrix} \qquad (1)$$
and
$$R_{X_nX_{n+1}} = E\left[\begin{bmatrix} X_{n-1} \\ X_n \end{bmatrix}X_{n+1}\right] = \begin{bmatrix} R_X[2] \\ R_X[1] \end{bmatrix} = \begin{bmatrix} 0.81 \\ 0.9 \end{bmatrix}. \qquad (2)$$
The MMSE linear first order filter for predicting $X_{n+1}$ at time n is the filter h such that
$$\overleftarrow{h} = R_{X_n}^{-1}R_{X_nX_{n+1}} = \begin{bmatrix} 1.1 & 0.9 \\ 0.9 & 1.1 \end{bmatrix}^{-1}\begin{bmatrix} 0.81 \\ 0.9 \end{bmatrix} = \frac{1}{400}\begin{bmatrix} 81 \\ 261 \end{bmatrix}. \qquad (3)$$
It follows that the filter is $h = [261/400\ \ 81/400]$ and the MMSE linear predictor is
$$\hat{X}_{n+1} = \frac{81}{400}X_{n-1} + \frac{261}{400}X_n. \qquad (4)$$
To find the mean square error, one approach is to follow the method of Example 11.13 and to directly calculate
$$e_L^* = E\left[(X_{n+1} - \hat{X}_{n+1})^2\right]. \qquad (5)$$
This method is workable for this simple problem but becomes increasingly tedious for higher order filters. Instead, we can derive the mean square error for an arbitrary prediction filter h. Since $\hat{X}_{n+1} = \overleftarrow{h}{}'X_n$,
$$e_L^* = E\left[\left(X_{n+1} - \overleftarrow{h}{}'X_n\right)^2\right] \qquad (6)$$
$$= E\left[\left(X_{n+1} - \overleftarrow{h}{}'X_n\right)\left(X_{n+1} - X_n'\overleftarrow{h}\right)\right] \qquad (7)$$
After a bit of algebra, we obtain
$$e_L^* = R_X[0] - 2\overleftarrow{h}{}'R_{X_nX_{n+1}} + \overleftarrow{h}{}'R_{X_n}\overleftarrow{h} \qquad (8)$$
With the substitution $\overleftarrow{h} = R_{X_n}^{-1}R_{X_nX_{n+1}}$, we obtain
$$e_L^* = R_X[0] - R_{X_nX_{n+1}}'R_{X_n}^{-1}R_{X_nX_{n+1}} \qquad (9)$$
$$= R_X[0] - \overleftarrow{h}{}'R_{X_nX_{n+1}} \qquad (10)$$
Note that this is essentially the same result as Theorem 9.7 with $Y = X_n$, $X = X_{n+1}$ and $\hat{a} = \overleftarrow{h}$. It is noteworthy that the result is derived in a much simpler way in the proof of Theorem 9.7 by using the orthogonality property of the LMSE estimator.

In any case, the mean square error is
$$e_L^* = R_X[0] - \overleftarrow{h}{}'R_{X_nX_{n+1}} = 1.1 - \frac{1}{400}\begin{bmatrix} 81 & 261 \end{bmatrix}\begin{bmatrix} 0.81 \\ 0.9 \end{bmatrix} = \frac{506}{1451} = 0.3487. \qquad (11)$$
Recalling that the blind estimate would yield a mean square error of Var[X] = 1.1, we see that observing $X_{n-1}$ and $X_n$ improves the accuracy of our prediction of $X_{n+1}$.
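
The filter and mean square error are quickly verified in MATLAB:

RXn=[1.1 0.9; 0.9 1.1];
RXnXn1=[0.81; 0.9];      %[R_X[2]; R_X[1]]
h=inv(RXn)*RXnXn1        %[81/400; 261/400]
e=1.1-(h')*RXnXn1        %mean square error 0.3487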

Quiz 11.5

(1) By Theorem 11.13(b), the average power of X(t) is
$$E\left[X^2(t)\right] = \int_{-\infty}^{\infty}S_X(f)\,df = \int_{-W}^{W}\frac{5}{W}\,df = 10 \text{ Watts} \qquad (1)$$
(2) The autocorrelation function is the inverse Fourier transform of $S_X(f)$. Consulting Table 11.1, we note that
$$S_X(f) = 10\,\frac{1}{2W}\operatorname{rect}\!\left(\frac{f}{2W}\right) \qquad (2)$$
It follows that the inverse transform of $S_X(f)$ is
$$R_X(\tau) = 10\operatorname{sinc}(2W\tau) = 10\,\frac{\sin(2\pi W\tau)}{2\pi W\tau} \qquad (3)$$
(3) For W = 10 Hz and W = 1 kHz, graphs of $S_X(f)$ and $R_X(\tau)$ appear in Figure 6.

Quiz 11.6

In a sampled system, the discrete time impulse δ[n] has a flat discrete Fourier transform. That is, if $R_X[n] = 10\delta[n]$, then
$$S_X(\phi) = \sum_{n=-\infty}^{\infty}10\delta[n]e^{-j2\pi\phi n} = 10 \qquad (1)$$
Thus, $R_X[n] = 10\delta[n]$. (This quiz is really lame!)

Quiz 11.7

Since $Y(t) = X(t - t_0)$,
$$R_{XY}(t,\tau) = E[X(t)Y(t+\tau)] = E[X(t)X(t+\tau-t_0)] = R_X(\tau - t_0) \qquad (1)$$
We see that $R_{XY}(t,\tau) = R_{XY}(\tau) = R_X(\tau - t_0)$. From Table 11.1, we recall the property that $g(\tau - \tau_0)$ has Fourier transform $G(f)e^{-j2\pi f\tau_0}$. Thus the Fourier transform of $R_{XY}(\tau) = R_X(\tau - t_0) = g(\tau - t_0)$ is
$$S_{XY}(f) = S_X(f)e^{-j2\pi ft_0}. \qquad (2)$$

Quiz 11.8

We solve this quiz using Theorem 11.17. First we need some preliminary facts. Let $a_0 = 5{,}000$ so that
$$R_X(\tau) = \frac{1}{a_0}\,a_0e^{-a_0|\tau|}. \qquad (1)$$
Consulting the Fourier transforms in Table 11.1, we see that
$$S_X(f) = \frac{1}{a_0}\,\frac{2a_0^2}{a_0^2+(2\pi f)^2} = \frac{2a_0}{a_0^2+(2\pi f)^2} \qquad (2)$$
The RC filter has impulse response $h(t) = a_1e^{-a_1t}u(t)$, where u(t) is the unit step function and $a_1 = 1/RC$ where $RC = 10^{-4}$ is the filter time constant. From Table 11.1,
$$H(f) = \frac{a_1}{a_1 + j2\pi f} \qquad (3)$$
(1) By Theorem 11.17,
$$S_{XY}(f) = H(f)S_X(f) = \frac{2a_0a_1}{[a_1+j2\pi f]\left[a_0^2+(2\pi f)^2\right]}. \qquad (4)$$
(2) Again by Theorem 11.17,
$$S_Y(f) = H^*(f)S_{XY}(f) = |H(f)|^2S_X(f). \qquad (5)$$
Note that
$$|H(f)|^2 = H(f)H^*(f) = \frac{a_1}{a_1+j2\pi f}\,\frac{a_1}{a_1-j2\pi f} = \frac{a_1^2}{a_1^2+(2\pi f)^2} \qquad (6)$$
Thus,
$$S_Y(f) = |H(f)|^2S_X(f) = \frac{2a_0a_1^2}{\left[a_1^2+(2\pi f)^2\right]\left[a_0^2+(2\pi f)^2\right]} \qquad (7)$$
(3) To find the average power at the filter output, we can either use basic calculus and calculate $\int_{-\infty}^{\infty}S_Y(f)\,df$ directly or we can find $R_Y(\tau)$ as an inverse transform of $S_Y(f)$. Using partial fractions and the Fourier transform table, the latter method is actually less algebra. In particular, some algebra will show that
$$S_Y(f) = \frac{K_0}{a_0^2+(2\pi f)^2} + \frac{K_1}{a_1^2+(2\pi f)^2} \qquad (8)$$
where
$$K_0 = \frac{2a_0a_1^2}{a_1^2 - a_0^2}, \qquad K_1 = \frac{-2a_0a_1^2}{a_1^2 - a_0^2}. \qquad (9)$$
Thus,
$$S_Y(f) = \frac{K_0}{2a_0^2}\,\frac{2a_0^2}{a_0^2+(2\pi f)^2} + \frac{K_1}{2a_1^2}\,\frac{2a_1^2}{a_1^2+(2\pi f)^2}. \qquad (10)$$
Consulting Table 11.1, we see that
$$R_Y(\tau) = \frac{K_0}{2a_0^2}\,a_0e^{-a_0|\tau|} + \frac{K_1}{2a_1^2}\,a_1e^{-a_1|\tau|} \qquad (11)$$
Substituting the values of $K_0$ and $K_1$, we obtain
$$R_Y(\tau) = \frac{a_1^2e^{-a_0|\tau|} - a_0a_1e^{-a_1|\tau|}}{a_1^2 - a_0^2}. \qquad (12)$$
The average power of the Y(t) process is
$$R_Y(0) = \frac{a_1}{a_1+a_0} = \frac{2}{3}. \qquad (13)$$
Note that the input signal has average power $R_X(0) = 1$. Since the RC filter has a 3dB bandwidth of 10,000 rad/sec and the signal X(t) has most of its signal energy below 5,000 rad/sec, the output signal has almost as much power as the input.
72

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195

Quiz 11.9

This quiz implements an example of Equations (11.146) and (11.147) for a system in which we filter Y(t) = X(t) + N(t) to produce an optimal linear estimate of X(t). The solution to this quiz is just to find the filter $\hat{H}(f)$ using Equation (11.146) and to calculate the mean square error $e_L^*$ using Equation (11.147).

Comment: Since the text omitted the derivations of Equations (11.146) and (11.147), we note that Example 10.24 showed that
$$R_Y(\tau) = R_X(\tau) + R_N(\tau), \qquad R_{YX}(\tau) = R_X(\tau). \qquad (1)$$
Taking Fourier transforms, it follows that
$$S_Y(f) = S_X(f) + S_N(f), \qquad S_{YX}(f) = S_X(f). \qquad (2)$$
Now we can go on to the quiz, at peace with the derivations.

(1) Since $\mu_N = 0$, $R_N(0) = \operatorname{Var}[N] = 1$. This implies
$$R_N(0) = \int_{-\infty}^{\infty}S_N(f)\,df = \int_{-B}^{B}N_0\,df = 2N_0B \qquad (3)$$
Thus $N_0 = 1/(2B)$. Because the noise process N(t) has constant power $R_N(0) = 1$, decreasing the single-sided bandwidth B increases the power spectral density of the noise over frequencies |f| < B.

(2) Since $R_X(\tau) = \operatorname{sinc}(2W\tau)$, where W = 5,000 Hz, we see from Table 11.1 that
$$S_X(f) = \frac{1}{10^4}\operatorname{rect}\!\left(\frac{f}{10^4}\right). \qquad (4)$$
The noise power spectral density can be written as
$$S_N(f) = N_0\operatorname{rect}\!\left(\frac{f}{2B}\right) = \frac{1}{2B}\operatorname{rect}\!\left(\frac{f}{2B}\right). \qquad (5)$$
From Equation (11.146), the optimal filter is
$$\hat{H}(f) = \frac{S_X(f)}{S_X(f)+S_N(f)} = \frac{\frac{1}{10^4}\operatorname{rect}\!\left(\frac{f}{10^4}\right)}{\frac{1}{10^4}\operatorname{rect}\!\left(\frac{f}{10^4}\right) + \frac{1}{2B}\operatorname{rect}\!\left(\frac{f}{2B}\right)}. \qquad (6)$$
(3) We produce the output $\hat{X}(t)$ by passing the noisy signal Y(t) through the filter $\hat{H}(f)$. From Equation (11.147), the mean square error of the estimate is
$$e_L^* = \int_{-\infty}^{\infty}\frac{S_X(f)S_N(f)}{S_X(f)+S_N(f)}\,df \qquad (7)$$
$$= \int_{-\infty}^{\infty}\frac{\frac{1}{10^4}\operatorname{rect}\!\left(\frac{f}{10^4}\right)\frac{1}{2B}\operatorname{rect}\!\left(\frac{f}{2B}\right)}{\frac{1}{10^4}\operatorname{rect}\!\left(\frac{f}{10^4}\right)+\frac{1}{2B}\operatorname{rect}\!\left(\frac{f}{2B}\right)}\,df. \qquad (8)$$
To evaluate the MSE $e_L^*$, we need to know whether B ≤ W. Since the problem asks us to find the largest possible B, let's suppose B ≤ W. We can go back and consider the case B > W later. When B ≤ W, the MSE is
$$e_L^* = \int_{-B}^{B}\frac{\frac{1}{10^4}\frac{1}{2B}}{\frac{1}{10^4}+\frac{1}{2B}}\,df = \frac{\frac{1}{10^4}}{\frac{1}{10^4}+\frac{1}{2B}} = \frac{1}{1+\frac{5{,}000}{B}} \qquad (9)$$
To obtain MSE $e_L^* \le 0.05$ requires $B \le 5{,}000/19 = 263.16$ Hz.

Although this completes the solution to the quiz, what is happening may not be obvious. The noise power is always Var[N] = 1 Watt, for all values of B. As B is decreased, the PSD $S_N(f)$ becomes increasingly tall, but only over a bandwidth B that is decreasing. Thus as B decreases, the filter $\hat{H}(f)$ makes an increasingly deep and narrow notch at frequencies |f| ≤ B. Two examples of the filter $\hat{H}(f)$ are shown in Figure 7. As B shrinks, the filter suppresses less of the signal of X(t). The result is that the MSE goes down.

Finally, we note that we can choose B very large and also achieve MSE $e_L^* = 0.05$. In particular, when B > W = 5000, $S_N(f) = 1/2B$ over frequencies |f| < W. In this case, the Wiener filter $\hat{H}(f)$ is an ideal (flat) lowpass filter
$$\hat{H}(f) = \begin{cases} \dfrac{\frac{1}{10^4}}{\frac{1}{10^4}+\frac{1}{2B}} & |f| < 5{,}000, \\ 0 & \text{otherwise.} \end{cases} \qquad (10)$$
Thus increasing B spreads the constant 1 watt of power of N(t) over more bandwidth. The Wiener filter removes the noise that is outside the band of the desired signal. The mean square error is
$$e_L^* = \int_{-5000}^{5000}\frac{\frac{1}{10^4}\frac{1}{2B}}{\frac{1}{10^4}+\frac{1}{2B}}\,df = \frac{\frac{1}{2B}}{\frac{1}{10^4}+\frac{1}{2B}} = \frac{1}{\frac{B}{5000}+1} \qquad (11)$$
In this case, $B \ge 9.5\times10^4$ guarantees $e_L^* \le 0.05$.
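
A short sketch that evaluates the MSE of Equations (9) and (11) over a range of B shows both operating regions on one plot:

W=5000; B=10:10:10^5;
mse=(B<=W)./(1+W./B) + (B>W)./(B/W+1);  %Equations (9) and (11)
semilogx(B,mse);
xlabel('B'); ylabel('e_L^*');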

Quiz 11.10

It is fairly straightforward to find $S_X(\phi)$ and $S_Y(\phi)$. The only thing to keep in mind is to use fftc to transform the autocorrelation $R_X[n]$ into the power spectral density $S_X(\phi)$. The following MATLAB program generates and plots the functions shown in Figure 8.

[Plots: $\hat{H}(f)$ versus f for B = 500 and B = 2500.]

Figure 7: Wiener filter for Quiz 11.9.

%mquiz11.m
N=32;
rx=[2 4 2]; SX=fftc(rx,N); %autocorrelation and PSD
stem(0:N-1,abs(SX));
xlabel('n');ylabel('S_X(n/N)');
h2=0.5*[1 1]; H2=fft(h2,N); %impulse/filter response: M=2
SY2=SX.*((abs(H2)).^2);
figure; stem(0:N-1,abs(SY2)); %PSD of Y for M=2
xlabel('n');ylabel('S_{Y_2}(n/N)');
h10=0.1*ones(1,10); H10=fft(h10,N); %impulse/filter response: M=10
SY10=SX.*((abs(H10)).^2);
figure; stem(0:N-1,abs(SY10));
xlabel('n');ylabel('S_{Y_{10}}(n/N)');

Relative to M = 2, when M = 10, the filter H(φ) filters out almost all of the high frequency components of X(t). In the context of Example 11.26, the low pass moving average filter for M = 10 removes the high frequency components and results in a filter output that varies very slowly.

As an aside, note that the vectors SX, SY2 and SY10 in mquiz11 should all be real-valued vectors. However, the finite numerical precision of MATLAB results in tiny imaginary parts. Although these imaginary parts have no computational significance, they tend to confuse the stem function. Hence, we generate stem plots of the magnitude of each power spectral density.

[Stem plots: $S_X(n/N)$, $S_{Y_2}(n/N)$, and $S_{Y_{10}}(n/N)$ for n = 0, ..., 35.]

Figure 8: For Quiz 11.10, graphs of $S_X(n/N)$, $S_{Y_2}(n/N)$ for M = 2, and $S_{Y_{10}}(n/N)$ for M = 10 using an N = 32 point DFT.

Quiz Solutions – Chapter 12

Quiz 12.1

The system has two states depending on whether the previous packet was received in error. From the problem statement, we are given the conditional probabilities
$$P[X_{n+1}=0|X_n=0] = 0.99 \qquad P[X_{n+1}=1|X_n=1] = 0.9 \qquad (1)$$
Since each $X_n$ must be either 0 or 1, we can conclude that
$$P[X_{n+1}=1|X_n=0] = 0.01 \qquad P[X_{n+1}=0|X_n=1] = 0.1 \qquad (2)$$
These conditional probabilities correspond to the two-state Markov chain (state 0 returns to itself with probability 0.99 and moves to state 1 with probability 0.01; state 1 returns to itself with probability 0.9 and moves to state 0 with probability 0.1) and the transition matrix
$$P = \begin{bmatrix} 0.99 & 0.01 \\ 0.10 & 0.90 \end{bmatrix} \qquad (3)$$

Quiz 12.2

From the problem statement, the Markov chain has states 0, 1, 2 and transition matrix
$$P = \begin{bmatrix} 0.4 & 0.6 & 0 \\ 0.2 & 0.6 & 0.2 \\ 0 & 0.6 & 0.4 \end{bmatrix} \qquad (1)$$
The eigenvalues of P are
$$\lambda_1 = 0 \qquad \lambda_2 = 0.4 \qquad \lambda_3 = 1 \qquad (2)$$
We can diagonalize P into
$$P = S^{-1}DS = \begin{bmatrix} -0.6 & 0.5 & 1 \\ 0.4 & 0 & 1 \\ -0.6 & -0.5 & 1 \end{bmatrix}\begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix}\begin{bmatrix} -0.5 & 1 & -0.5 \\ 1 & 0 & -1 \\ 0.2 & 0.6 & 0.2 \end{bmatrix} \qquad (3)$$
where $s_i$, the ith row of S, is the left eigenvector of P satisfying $s_iP = \lambda_is_i$. Algebra will verify that the n-step transition matrix is
$$P^n = S^{-1}D^nS = \begin{bmatrix} 0.2 & 0.6 & 0.2 \\ 0.2 & 0.6 & 0.2 \\ 0.2 & 0.6 & 0.2 \end{bmatrix} + (0.4)^n\begin{bmatrix} 0.5 & 0 & -0.5 \\ 0 & 0 & 0 \\ -0.5 & 0 & 0.5 \end{bmatrix} \qquad (4)$$
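
Equation (4) is easy to verify numerically; this sketch compares $P^n$ with the right side of Equation (4) for a sample value of n:

P=[0.4 0.6 0; 0.2 0.6 0.2; 0 0.6 0.4];
Plim=[0.2 0.6 0.2; 0.2 0.6 0.2; 0.2 0.6 0.2];
M=[0.5 0 -0.5; 0 0 0; -0.5 0 0.5];
n=8;
norm(P^n - (Plim+(0.4)^n*M))   %zero up to rounding error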

Quiz 12.3

The Markov chain describing the factory status has states 0, 1, 2: state 0 returns to itself with probability 0.9 and moves to state 1 with probability 0.1, state 1 moves to state 2 with probability 1, and state 2 returns to state 0 with probability 1. The corresponding state transition matrix is
$$P = \begin{bmatrix} 0.9 & 0.1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix} \qquad (1)$$
With $\pi = [\pi_0\ \pi_1\ \pi_2]'$, the system of equations $\pi' = \pi'P$ yields $\pi_1 = 0.1\pi_0$ and $\pi_2 = \pi_1$. This implies
$$\pi_0 + \pi_1 + \pi_2 = \pi_0(1 + 0.1 + 0.1) = 1 \qquad (2)$$
It follows that the limiting state probabilities are
$$\pi_0 = 5/6, \qquad \pi_1 = 1/12, \qquad \pi_2 = 1/12. \qquad (3)$$

Quiz 12.4

The communicating classes are
$$C_1 = \{0, 1\} \qquad C_2 = \{2, 3\} \qquad C_3 = \{4, 5, 6\} \qquad (1)$$
The states in $C_1$ and $C_3$ are aperiodic. The states in $C_2$ have period 2. Once the system enters a state in $C_1$, the class $C_1$ is never left. Thus the states in $C_1$ are recurrent. That is, $C_1$ is a recurrent class. Similarly, the states in $C_3$ are recurrent. On the other hand, the states in $C_2$ are transient. Once the system exits $C_2$, the states in $C_2$ are never reentered.

Quiz 12.5

At any time t, the state n can take on the values 0, 1, 2, .... The state transition probabilities are
$$P_{n-1,n} = P[K > n|K > n-1] = \frac{P[K > n]}{P[K > n-1]} \qquad (1)$$
$$P_{n-1,0} = P[K = n|K > n-1] = \frac{P[K = n]}{P[K > n-1]} \qquad (2)$$
The Markov chain is a chain on the states 0, 1, 2, ... in which state n − 1 either advances to state n or returns to state 0 with the probabilities above.

The stationary probabilities satisfy
$$\pi_0 = \pi_0P[K = 1] + \pi_1, \qquad (3)$$
$$\pi_1 = \pi_0P[K = 2] + \pi_2, \qquad (4)$$
$$\vdots$$
$$\pi_{k-1} = \pi_0P[K = k] + \pi_k, \qquad k = 1, 2, \ldots \qquad (5)$$
From Equation (3), we obtain
$$\pi_1 = \pi_0(1 - P[K = 1]) = \pi_0P[K > 1] \qquad (6)$$
Similarly, Equation (4) implies
$$\pi_2 = \pi_1 - \pi_0P[K = 2] = \pi_0(P[K > 1] - P[K = 2]) = \pi_0P[K > 2] \qquad (7)$$
This suggests that $\pi_k = \pi_0P[K > k]$. We verify this pattern by showing that $\pi_k = \pi_0P[K > k]$ satisfies Equation (5):
$$\pi_0P[K > k-1] = \pi_0P[K = k] + \pi_0P[K > k]. \qquad (8)$$
When we apply $\sum_{k=0}^{\infty}\pi_k = 1$, we obtain $\pi_0\sum_{k=0}^{\infty}P[K > k] = 1$. From Problem 2.5.11, we recall that $\sum_{k=0}^{\infty}P[K > k] = E[K]$. This implies
$$\pi_n = \frac{P[K > n]}{E[K]} \qquad (9)$$
This Markov chain models repeated random countdowns. The system state is the time until the counter expires. When the counter expires, the system is in state 0, and we randomly reset the counter to a new value K = k and then we count down k units of time. Since we spend one unit of time in each state, including state 0, we have k − 1 units of time left after the state 0 counter reset. If we have a random variable W such that the PMF of W satisfies $P_W(n) = \pi_n$, then W has a discrete PMF representing the remaining time of the counter at a time in the distant future.
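
To see this concretely, the following minimal simulation assumes, purely for illustration, that K is discrete uniform (1, 4), so that $\pi_n = P[K > n]/E[K] = [0.4\ 0.3\ 0.2\ 0.1]$ for n = 0, 1, 2, 3. It uses the renewal structure of the chain: each countdown of length K = k visits states 0, 1, ..., k − 1 once.

visits=zeros(1,4);
for trial=1:25000
   k=duniformrv(1,4,1);        %draw a countdown length K
   visits(1:k)=visits(1:k)+1;  %one visit each to states 0,...,k-1
end
visits/sum(visits)             %near [0.4 0.3 0.2 0.1]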

Quiz 12.6

(1) By inspection, the number of transitions needed to return to state 0 is always a multiple of 2. Thus the period of state 0 is d = 2.

(2) To find the stationary probabilities, we solve the system of equations $\pi = \pi P$ and $\sum_{i=0}^{3}\pi_i = 1$:
$$\pi_0 = (3/4)\pi_1 + (1/4)\pi_3 \qquad (1)$$
$$\pi_1 = (1/4)\pi_0 + (1/4)\pi_2 \qquad (2)$$
$$\pi_2 = (1/4)\pi_1 + (3/4)\pi_3 \qquad (3)$$
$$1 = \pi_0 + \pi_1 + \pi_2 + \pi_3 \qquad (4)$$
Solving the second and third equations for $\pi_2$ and $\pi_3$ yields
$$\pi_2 = 4\pi_1 - \pi_0 \qquad \pi_3 = (4/3)\pi_2 - (1/3)\pi_1 = 5\pi_1 - (4/3)\pi_0 \qquad (5)$$
Substituting $\pi_3$ back into the first equation yields
$$\pi_0 = (3/4)\pi_1 + (1/4)\pi_3 = (3/4)\pi_1 + (5/4)\pi_1 - (1/3)\pi_0 \qquad (6)$$
This implies $\pi_1 = (2/3)\pi_0$. It follows from the first and second equations that $\pi_2 = (5/3)\pi_0$ and $\pi_3 = 2\pi_0$. Lastly, we choose $\pi_0$ so the state probabilities sum to 1:
$$1 = \pi_0 + \pi_1 + \pi_2 + \pi_3 = \pi_0\left(1 + \frac{2}{3} + \frac{5}{3} + 2\right) = \frac{16}{3}\pi_0 \qquad (7)$$
It follows that the state probabilities are
$$\pi_0 = \frac{3}{16} \qquad \pi_1 = \frac{2}{16} \qquad \pi_2 = \frac{5}{16} \qquad \pi_3 = \frac{6}{16} \qquad (8)$$
(3) Since the system starts in state 0 at time 0, we can use Theorem 12.14 to find the limiting probability that the system is in state 0 at time nd:
$$\lim_{n\to\infty}P_{00}(nd) = d\pi_0 = \frac{3}{8} \qquad (9)$$

Quiz 12.7

The Markov chain has the same structure as that in Example 12.22. The only difference is the modified transition rates: state 0 moves to state 1 with probability 1 and, for n ≥ 1, state n moves to state n + 1 with probability $\left(\frac{n}{n+1}\right)^\alpha$ and back to state 0 with probability $1 - \left(\frac{n}{n+1}\right)^\alpha$.

The event $T_{00} > n$ occurs if the system reaches state n before returning to state 0, which occurs with probability
$$P[T_{00} > n] = 1\times\left(\frac{1}{2}\right)^\alpha\times\left(\frac{2}{3}\right)^\alpha\times\cdots\times\left(\frac{n-1}{n}\right)^\alpha = \left(\frac{1}{n}\right)^\alpha. \qquad (1)$$
Thus the CDF of $T_{00}$ satisfies $F_{T_{00}}(n) = 1 - P[T_{00} > n] = 1 - 1/n^\alpha$. To determine whether state 0 is recurrent, we observe that for all α > 0
$$P[V_{00}] = \lim_{n\to\infty}F_{T_{00}}(n) = \lim_{n\to\infty}1 - \frac{1}{n^\alpha} = 1. \qquad (2)$$
Thus state 0 is recurrent for all α > 0. Since the chain has only one communicating class, all states are recurrent. (We also note that if α = 0, then all states are transient.)

To determine whether the chain is null recurrent or positive recurrent, we need to calculate $E[T_{00}]$. In Example 12.24, we did this by deriving the PMF $P_{T_{00}}(n)$. In this problem, it will be simpler to use the result of Problem 2.5.11 which says that $\sum_{k=0}^{\infty}P[K > k] = E[K]$ for any non-negative integer-valued random variable K. Applying this result, the expected time to return to state 0 is
$$E[T_{00}] = \sum_{n=0}^{\infty}P[T_{00} > n] = 1 + \sum_{n=1}^{\infty}\frac{1}{n^\alpha}. \qquad (3)$$
For 0 < α ≤ 1, $1/n^\alpha \ge 1/n$ and it follows that
$$E[T_{00}] \ge 1 + \sum_{n=1}^{\infty}\frac{1}{n} = \infty. \qquad (4)$$
We conclude that the Markov chain is null recurrent for 0 < α ≤ 1. On the other hand, for α > 1,
$$E[T_{00}] = 2 + \sum_{n=2}^{\infty}\frac{1}{n^\alpha}. \qquad (5)$$
Note that for all n ≥ 2
$$\frac{1}{n^\alpha} \le \int_{n-1}^{n}\frac{dx}{x^\alpha} \qquad (6)$$
This implies
$$E[T_{00}] \le 2 + \sum_{n=2}^{\infty}\int_{n-1}^{n}\frac{dx}{x^\alpha} \qquad (7)$$
$$= 2 + \int_{1}^{\infty}\frac{dx}{x^\alpha} \qquad (8)$$
$$= 2 + \left.\frac{x^{-\alpha+1}}{-\alpha+1}\right|_{1}^{\infty} = 2 + \frac{1}{\alpha-1} < \infty \qquad (9)$$
Thus for all α > 1, the Markov chain is positive recurrent.

Quiz 12.8

The number of customers in the "friendly" store is given by a birth-death Markov chain on the states i = 0, 1, 2, ...: from state i the chain moves up to state i + 1 with probability p, from state i ≥ 1 down to state i − 1 with probability (1 − p)q, and otherwise remains in state i. In this chain, we note that (1 − p)q is the probability that no new customer arrives, an existing customer gets one unit of service and then departs the store.

By applying Theorem 12.13 with the state space partitioned between $S = \{0, 1, \ldots, i\}$ and $S' = \{i+1, i+2, \ldots\}$, we see that for any state i ≥ 0,
$$\pi_ip = \pi_{i+1}(1-p)q. \qquad (1)$$
This implies
$$\pi_{i+1} = \frac{p}{(1-p)q}\pi_i. \qquad (2)$$
Since Equation (2) holds for i = 0, 1, ..., we have that $\pi_i = \pi_0\alpha^i$ where
$$\alpha = \frac{p}{(1-p)q}. \qquad (3)$$
Requiring the state probabilities to sum to 1, we have that for α < 1,
$$\sum_{i=0}^{\infty}\pi_i = \pi_0\sum_{i=0}^{\infty}\alpha^i = \frac{\pi_0}{1-\alpha} = 1. \qquad (4)$$
Thus for α < 1, the limiting state probabilities are
$$\pi_i = (1-\alpha)\alpha^i, \qquad i = 0, 1, 2, \ldots \qquad (5)$$
In addition, for α ≥ 1 or, equivalently, p ≥ q/(1 + q), the limiting state probabilities do not exist.

Quiz 12.9

The continuous time Markov chain describing the processor has states 0, 1, 2, 3, 4. Tasks arrive at rate 2 per msec (transitions n → n + 1 for n = 0, 1, 2, 3), tasks complete at rate 3 per msec (transitions n → n − 1 for n = 1, 2, 3, 4), and from every state n ≥ 1 the processor reboots to state 0 at rate 0.1 per msec. Note that $q_{10} = 3.1$ since the task completes at rate 3 per msec and the processor reboots at rate 0.1 per msec and the rate to state 0 is the sum of those two rates. From the Markov chain, we obtain the following useful equations for the stationary distribution.
$$5.1p_1 = 2p_0 + 3p_2 \qquad 5.1p_2 = 2p_1 + 3p_3$$
$$5.1p_3 = 2p_2 + 3p_4 \qquad 3.1p_4 = 2p_3$$
We can solve these equations by working backward and solving for $p_4$ in terms of $p_3$, $p_3$ in terms of $p_2$ and so on, yielding
$$p_4 = \frac{20}{31}p_3 \qquad p_3 = \frac{620}{981}p_2 \qquad p_2 = \frac{19620}{31431}p_1 \qquad p_1 = \frac{628{,}620}{1{,}014{,}381}p_0 \qquad (1)$$
Applying $p_0 + p_1 + p_2 + p_3 + p_4 = 1$ yields $p_0 = 1{,}014{,}381/2{,}443{,}401$ and the stationary probabilities are
$$p_0 = 0.4151 \quad p_1 = 0.2573 \quad p_2 = 0.1606 \quad p_3 = 0.1015 \quad p_4 = 0.0655 \qquad (2)$$
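
Alternatively, the stationary probabilities solve pQ = 0 for the generator matrix Q; a minimal sketch:

Q=[-2 2 0 0 0; 3.1 -5.1 2 0 0; 0.1 3 -5.1 2 0; ...
   0.1 0 3 -5.1 2; 0.1 0 0 3 -3.1];
p=null(Q');    %left null vector of Q
(p/sum(p))'    %near [0.4151 0.2573 0.1606 0.1015 0.0655]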

Quiz 12.10

The M/M/c/∞ queue has a birth-death Markov chain in which the arrival rate is λ in every state, while the departure rate is nµ in states n = 1, ..., c and cµ in states n > c. From the Markov chain, the stationary probabilities must satisfy
$$p_n = \begin{cases} (\rho/n)p_{n-1} & n = 1, 2, \ldots, c \\ (\rho/c)p_{n-1} & n = c+1, c+2, \ldots \end{cases} \qquad (1)$$
It is straightforward to show that this implies
$$p_n = \begin{cases} p_0\,\rho^n/n! & n = 1, 2, \ldots, c \\ p_0\,(\rho/c)^{n-c}\rho^c/c! & n = c+1, c+2, \ldots \end{cases} \qquad (2)$$
The requirement that $\sum_{n=0}^{\infty}p_n = 1$ yields
$$p_0 = \left[\sum_{n=0}^{c}\frac{\rho^n}{n!} + \frac{\rho^c}{c!}\,\frac{\rho/c}{1-\rho/c}\right]^{-1} \qquad (3)$$
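
Equation (3) is easily evaluated in MATLAB; the following sketch (with the hypothetical name mmcp0) assumes ρ/c < 1 so that the geometric sum converges:

function p0=mmcp0(rho,c)
%p0 of the M/M/c/infinity queue, Equation (3); requires rho/c<1
n=0:c;
p0=1/(sum((rho.^n)./factorial(n)) ...
   +(rho^c/factorial(c))*(rho/c)/(1-rho/c));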


bignomialpmf y=bignomialpmf(n,p,x)

function pmf=bignomialpmf(n,p,x)
%binomial(n,p) rv X,
%input = vector x
%output= vector pmf: pmf(i)=Prob[X=x(i)]
k=(0:n-1)';
a=log((p/(1-p))*((n-k)./(k+1)));
L0=n*log(1-p);
L=[L0; L0+cumsum(a)];
pb=exp(L); % pb=[P[X=0] ... P[X=n]]^t
x=x(:);
okx =(x>=0).*(x<=n).*(x==floor(x));
x=okx.*x;
pmf=okx.*pb(x+1);

Input: n and p are the parameters of a binomial (n, p) random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = P_X(x(i)).
Comment: This function should always produce the same output as binomialpmf(n,p,x); however, the function calculates the logarithm of the probability and this may lead to small numerical inaccuracy.

binomialcdf y=binomialcdf(n,p,x)

function cdf=binomialcdf(n,p,x)
%Usage: cdf=binomialcdf(n,p,x)
%For binomial(n,p) rv X,
%and input vector x, output is
%vector cdf: cdf(i)=P[X<=x(i)]
x=floor(x(:)); %for noninteger x(i)
allx=0:max(x); %calculate cdf from 0 to max(x)
allcdf=cumsum(binomialpmf(n,p,allx));
okx=(x>=0); %x(i) < 0 are zero-prob values
x=(okx.*x); %set zero-prob x(i)=0
cdf= okx.*allcdf(x+1); %zero for zero-prob x(i)

Input: n and p are the parameters of a binomial (n, p) random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = F_X(x(i)).

binomialpmf y=binomialpmf(n,p,x)

function pmf=binomialpmf(n,p,x)
%binomial(n,p) rv X,
%input = vector x
%output= vector pmf: pmf(i)=Prob[X=x(i)]
if p<0.5
   pp=p;
else
   pp=1-p;
end
i=0:n-1;
ip= ((n-i)./(i+1))*(pp/(1-pp));
pb=((1-pp)^n)*cumprod([1 ip]);
if pp < p
   pb=fliplr(pb);
end
pb=pb(:); % pb=[P[X=0] ... P[X=n]]^t
x=x(:);
okx =(x>=0).*(x<=n).*(x==floor(x));
x=okx.*x;
pmf=okx.*pb(x+1);

Input: n and p are the parameters of a binomial (n, p) random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = P_X(x(i)).

binomialrv x=binomialrv(n,p,m)

function x=binomialrv(n,p,m)
% m binomial(n,p) samples
r=rand(m,1);
cdf=binomialcdf(n,p,0:n);
x=count(cdf,r);

Input: n and p are the parameters of a binomial random variable X, m is a positive integer
Output: x is a vector of m independent samples of random variable X

bivariategausspdf f=bivariategausspdf(muX,muY,sigmaX,sigmaY,rho,x,y)

function f=bivariategausspdf(muX,muY,sigmaX,sigmaY,rho,x,y)
%Usage: f=bivariategausspdf(muX,muY,sigmaX,sigmaY,rho,x,y)
%Evaluate the bivariate Gaussian
%(muX,muY,sigmaX,sigmaY,rho) PDF
nx=(x-muX)/sigmaX;
ny=(y-muY)/sigmaY;
f=exp(-((nx.^2)-(2*rho*nx.*ny)+(ny.^2))/(2*(1-rho^2)));
f=f/(2*pi*sigmaX*sigmaY*sqrt(1-rho^2));

Input: Scalar parameters muX, muY, sigmaX, sigmaY, rho of the bivariate Gaussian PDF, scalars x and y.
Output: f, the value of the bivariate Gaussian PDF at x, y.

duniformcdf y=duniformcdf(k,l,x)

function cdf=duniformcdf(k,l,x)
%Usage: cdf=duniformcdf(k,l,x)
% For discrete uniform (k,l) rv X
% and input vector x, output is
% vector cdf: cdf(i)=Prob[X<=x(i)]
x=floor(x(:)); %for noninteger x_i
allx=k:max(x);
%allcdf = cdf values from k to max(x)
allcdf=cumsum(duniformpmf(k,l,allx));
okx=(x>=k); %x_i < k are zero prob values
x=((1-okx)*k)+(okx.*x); %set zero prob x_i=k
cdf= okx.*allcdf(x-k+1); %zeroes out zero prob x_i

Input: k and l are the parameters of a discrete uniform (k, l) random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = F_X(x(i)).

duniformpmf y=duniformpmf(k,l,x)

function pmf=duniformpmf(k,l,x)
%discrete uniform(k,l) rv X,
%input = vector x
%output= vector pmf: pmf(i)=Prob[X=x(i)]
pmf= (x>=k).*(x<=l).*(x==floor(x));
pmf=pmf(:)/(l-k+1);

Input: k and l are the parameters of a discrete uniform (k, l) random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = P_X(x(i)).

duniformrv x=duniformrv(k,l,m)

function x=duniformrv(k,l,m)
%returns m samples of a discrete
%uniform (k,l) random variable
r=rand(m,1);
cdf=duniformcdf(k,l,k:l);
x=k+count(cdf,r);

Input: k and l are the parameters of a discrete uniform (k, l) random variable X, m is a positive integer
Output: x is a vector of m independent samples of random variable X

erlangb pb=erlangb(rho,c)

function pb=erlangb(rho,c)
%Usage: pb=erlangb(rho,c)
%returns the Erlang-B blocking
%probability for an M/M/c/c
%queue with load rho
pn=exp(-rho)*poissonpmf(rho,0:c);
pb=pn(c+1)/sum(pn);

Input: Offered load rho (ρ = λ/µ), and the number of servers c of an M/M/c/c queue.
Output: pb, the blocking probability of the queue

erlangcdf y=erlangcdf(n,lambda,x)

function F=erlangcdf(n,lambda,x)
F=1.0-poissoncdf(lambda*x,n-1);

Input: n and lambda are the parameters of an Erlang random variable X, vector x
Output: Vector y such that y_i = F_X(x_i).

erlangpdf y=erlangpdf(n,lambda,x)

function f=erlangpdf(n,lambda,x)
f=((lambda^n)/factorial(n-1)) ...
   *(x.^(n-1)).*exp(-lambda*x);

Input: n and lambda are the parameters of an Erlang random variable X, vector x
Output: Vector y such that y_i = f_X(x_i) = λⁿx_i^(n−1)e^(−λx_i)/(n − 1)!.

erlangrv x=erlangrv(n,lambda,m)

function x=erlangrv(n,lambda,m)
y=exponentialrv(lambda,m*n);
x=sum(reshape(y,m,n),2);

Input: n and lambda are the parameters of an Erlang random variable X, integer m
Output: Length m vector x such that each x_i is a sample of X

exponentialcdf y=exponentialcdf(lambda,x)

function F=exponentialcdf(lambda,x)
F=1.0-exp(-lambda*x);

Input: lambda is the parameter of an exponential random variable X, vector x
Output: Vector y such that y_i = F_X(x_i) = 1 − e^(−λx_i).

exponentialpdf y=exponentialpdf(lambda,x)

function f=exponentialpdf(lambda,x)
f=lambda*exp(-lambda*x);
f=f.*(x>=0);

Input: lambda is the parameter of an exponential random variable X, vector x
Output: Vector y such that y_i = f_X(x_i) = λe^(−λx_i).

exponentialrv x=exponentialrv(lambda,m)

function x=exponentialrv(lambda,m)
x=-(1/lambda)*log(1-rand(m,1));

Input: lambda is the parameter of an exponential random variable X, integer m
Output: Length m vector x such that each x_i is a sample of X

finitecdf y=finitecdf(sx,px,x)

function cdf=finitecdf(s,p,x)
% finite random variable X:
% vector s of sample space
% elements {sx(1),sx(2),...}
% vector p of probabilities
% p(i)=P[X=s(i)]
% Output is the vector
% cdf: cdf(i)=P[X<=x(i)]
cdf=[];
for i=1:length(x)
   pxi= sum(p(find(s<=x(i))));
   cdf=[cdf; pxi];
end

Input: sx is the range of a finite random variable X, px is the corresponding probability assignment, x is a vector of possible sample values
Output: y is a vector with y(i) = F_X(x(i)).

finitecoeff rho=finitecoeff(SX,SY,PXY)

function rho=finitecoeff(SX,SY,PXY)
%Usage: rho=finitecoeff(SX,SY,PXY)
%Calculate the correlation coefficient rho of
%finite random variables X and Y
ex=finiteexp(SX,PXY); vx=finitevar(SX,PXY);
ey=finiteexp(SY,PXY); vy=finitevar(SY,PXY);
R=finiteexp(SX.*SY,PXY);
rho=(R-ex*ey)/sqrt(vx*vy);

Input: Grids SX, SY and probability grid PXY describing the finite random variables X and Y.
Output: rho, the correlation coefficient of X and Y

finitecov covxy=finitecov(SX,SY,PXY)

function covxy=finitecov(SX,SY,PXY)
%Usage: cxy=finitecov(SX,SY,PXY)
%returns the covariance of
%finite random variables X and Y
%given by grids SX, SY, and PXY
ex=finiteexp(SX,PXY);
ey=finiteexp(SY,PXY);
R=finiteexp(SX.*SY,PXY);
covxy=R-ex*ey;

Input: Grids SX, SY and probability grid PXY describing the finite random variables X and Y.
Output: covxy, the covariance of X and Y.

finiteexp ex=finiteexp(sx,px)

function ex=finiteexp(sx,px)
%Usage: ex=finiteexp(sx,px)
%returns the expected value E[X]
%of finite random variable X described
%by samples sx and probabilities px
ex=sum((sx(:)).*(px(:)));

Input: Probability vector px, vector of samples sx describing random variable X.
Output: ex, the expected value E[X].

finitepmf y=finitepmf(sx,px,x)

function pmf=finitepmf(sx,px,x)
% finite random variable X:
% vector sx of sample space
% elements {sx(1),sx(2),...}
% vector px of probabilities
% px(i)=P[X=sx(i)]
% Output is the vector
% pmf: pmf(i)=P[X=x(i)]
pmf=zeros(size(x(:)));
for i=1:length(x)
   pmf(i)= sum(px(find(sx==x(i))));
end

Input: sx is the range of a finite random variable X, px is the corresponding probability assignment, x is a vector of possible sample values
Output: y is a vector with y(i) = P[X = x(i)].

finiterv x=finiterv(sx,px,m)

function x=finiterv(s,p,m)
% returns m samples
% of finite (s,p) rv
%s=s(:);p=p(:);
r=rand(m,1);
cdf=cumsum(p);
x=s(1+count(cdf,r));

Input: sx is the range of a finite random variable X, px is the corresponding probability assignment, m is a positive integer
Output: x is a vector of m sample values of X.

finitevar v=finitevar(sx,px)

function v=finitevar(sx,px)
%Usage: v=finitevar(sx,px)
% returns the variance Var[X]
% of finite random variables X described by
% samples sx and probabilities px
ex2=finiteexp(sx.^2,px);
ex=finiteexp(sx,px);
v=ex2-(ex^2);

Input: Probability vector px and vector of samples sx describing random variable X.
Output: v, the variance Var[X].

gausscdf y=gausscdf(mu,sigma,x)

function f=gausscdf(mu,sigma,x)
f=phi((x-mu)/sigma);

Input: mu and sigma are the parameters of a Gaussian random variable X, vector x
Output: Vector y such that y_i = F_X(x_i) = Φ((x_i − µ)/σ).

gausspdf y=gausspdf(mu,sigma,x)

function f=gausspdf(mu,sigma,x)
f=exp(-(x-mu).^2/(2*sigma^2))/ ...
   sqrt(2*pi*sigma^2);

Input: mu and sigma are the parameters of a Gaussian random variable X, vector x
Output: Vector y such that y_i = f_X(x_i).

gaussrv x=gaussrv(mu,sigma,m)

function x=gaussrv(mu,sigma,m)
x=mu +(sigma*randn(m,1));

Input: mu and sigma are the parameters of a Gaussian random variable X, integer m
Output: Length m vector x such that each x_i is a sample of X

gaussvector x=gaussvector(mu,C,m)

function x=gaussvector(mu,C,m)
%output: m Gaussian vectors,
%each with mean mu
%and covariance matrix C
if (min(size(C))==1)
   C=toeplitz(C);
end
n=size(C,2);
if (length(mu)==1)
   mu=mu*ones(n,1);
end
[U,D,V]=svd(C);
x=V*(D^(0.5))*randn(n,m) ...
   +(mu(:)*ones(1,m));

Input: For a Gaussian (µ_X, C_X) random vector X, gaussvector can be called in two ways:
• C is the n × n covariance matrix, mu is either a length n vector or a length 1 scalar, m is an integer.
• C is the length n vector equal to the first row of a symmetric Toeplitz covariance matrix C_X, mu is either a length n vector or a length 1 scalar, m is an integer.
If mu is a length n vector, then mu is the expected value vector; otherwise, each element of X is assumed to have mean mu.
Output: n × m matrix x such that each column x(:,i) is a sample vector of X

gaussvectorpdf f=gaussvectorpdf(mu,C,x)

function f=gaussvectorpdf(mu,C,x)
n=length(x);
z=x(:)-mu(:);
f=exp(-0.5*z'*inv(C)*z)/ ...
   sqrt((2*pi)^n*det(C));

Input: For a Gaussian (µ_X, C_X) random vector X, mu is a length n vector, C is the n × n covariance matrix, x is a length n vector.
Output: f is the Gaussian vector PDF f_X(x) evaluated at x.

geometriccdf y=geometriccdf(p,x)

function cdf=geometriccdf(p,x)
% for geometric(p) rv X,
%For input vector x, output is
%vector cdf such that cdf_i=Prob(X<=x_i)
x=(x(:)>=1).*floor(x(:));
cdf=1-((1-p).^x);

Input: p is the parameter of a geometric random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = F_X(x(i)).

geometricpmf y=geometricpmf(p,x)

function pmf=geometricpmf(p,x)
%geometric(p) rv X
%out: pmf(i)=Prob[X=x(i)]
x=x(:);
pmf= p*((1-p).^(x-1));
pmf= (x>0).*(x==floor(x)).*pmf;

Input: p is the parameter of a geometric random variable X, x is a vector of possible sample values
Output: y is a vector with y(i) = P_X(x(i)).

geometricrv x=geometricrv(p,m)

function x=geometricrv(p,m)
%Usage: x=geometricrv(p,m)
% returns m samples of a geometric (p) rv
r=rand(m,1);
x=ceil(log(1-r)/log(1-p));

Input: p is the parameter of a geometric random variable X, m is a positive integer
Output: x is a vector of m independent samples of random variable X

icdfrv x=icdfrv(@icdf,m)

function x=icdfrv(icdfhandle,m)
%Usage: x=icdfrv(@icdf,m)
%returns m samples of rv X
%with inverse CDF icdf.m
u=rand(m,1);
x=feval(icdfhandle,u);

Input: @icdf is a "handle" (a kind of pointer) to a MATLAB function icdf.m that is MATLAB's representation of an inverse CDF F_X^(-1)(x) of a random variable X, integer m
Output: Length m vector x such that each x_i is a sample of X

%set zero-prob x(i)=k. %set bad x(i)=k to stop bad indexing x=(okx.com Phone:5017621195 pascalcdf y=pascalcdf(k. output is a %vector pmf: pmf(i)=Prob[X=x(i)] x=x(:). cdf= okx.allx)). function cdf=pascalcdf(k. x is a vector of possible sample values Output: y is a vector with y(i) = FX (x(i)). pascalpmf y=pascalpmf(k.p.Name:joey iwatsuru Email:joeyiwat@yahoo.*(x>=k).(1-p)*(i. function pmf=pascalpmf(k.x) %For a pascal (k. %just so indexing is not fouled up x=(okx. % for noninteger x(i) allx=k:max(x). the output %is a vector cdf such that % cdf(i)=Prob[X<=x(i)] x=floor(x(:)). us (United States) Zip Code:71901 . %allcdf holds all needed cdf values allcdf=cumsum(pascalpmf(k.*pb(x-k+1). p) random variable X .p) rv X %and input vector x.x) Input: k and p are the parameters of a Pascal (k./(i+1-k))]. % other values are OK okx=(x>=k).p) rv X.p. AR.x) %For Pascal (k. i=(k:n-1)’.*x) + k*(1-okx). and %input vector x.p. okx=(x==floor(x)).*allcdf(x-k+1). hot springs. n=max(x).x) Input: k and p are the parameters of a Pascal (k.*x) +((1-okx)*k).p.x) %Usage: cdf=pascalcdf(k. % pmf(i)=0 unless x(i) >= k pmf=okx. x is a vector of possible sample values Output: y is a vector with y(i) = PX (x(i)). p) random variable X . %x_i < k have zero-prob. %pb=all n-k+1 pascal probs pb=(pˆk)*cumprod(ip). ip= [1 .p.p. 12 Address:104 pine meadows loop.

us (United States) Zip Code:71901 . cdf=pascalcdf(k. m is a positive integer Output: x is a vector of m independent samples of random variable X function x=pascalrv(k. %cdf=0 for x(i)<0 13 Address:104 pine meadows loop.p) rv r=rand(m. phi y=phi(x) Input: Vector x Output: Vector y such that y(i) = (x(i)).x) Input: alpha is the parameter of a Poisson (α) random variable X . xmax=ceil(2*(k/p)).p. function y=phi(x) sq2=sqrt(2). cdf=pascalcdf(k.5*erf(x/sq2).m) % return m samples of pascal(k. sx=0:max(x).r). y= 0. poissoncdf y=poissoncdf(alpha. rmax=max(r). = function cdf=poissoncdf(alpha.*x).%x(i)<0 -> cdf=0 x=(okx. xmin=k.%set negative x(i)=0 cdf= okx. end x=xmin+countless(cdf.x) %output cdf(i)=Prob[X<=x(i)] x=floor(x(:)). hot springs. %set max range sx=xmin:xmax.p. x is a vector of possible sample values Output: y is a vector with y(i) FX (x(i)).1). cdf=cumsum(poissonpmf(alpha. sx=xmin:xmax.p. %cdf from 0 to max(x) okx=(x>=0).com Phone:5017621195 pascalrv x=pascalrv(k.p.5 + 0.sx).*cdf(x+1).Name:joey iwatsuru Email:joeyiwat@yahoo.m) Input: k and p are the parameters of a Pascal random variable X . while cdf(length(cdf)) <=rmax xmax=2*xmax.sx)). AR.sx).

. xmin=0. vector x Output: Vector y such that yi = FX (xi ) function F=uniformcdf(a. function pmf=poissonpmf(alpha. %while ( sum(cdf <=rmax) ==(xmax-xmin+1) ) while cdf(length(cdf)) <=rmax xmax=2*xmax. -alpha+ (k*log(alpha))-logfacts]).x) %Poisson (alpha) rv X. AR. hot springs.x) %returns the CDF of a continuous %uniform rv evaluated at x F=x. k=(1:max(x))’.com Phone:5017621195 poissonpmf y=poissonpmf(alpha.*((x>=a) & (x<b))/(b-a). us (United States) Zip Code:71901 .0*(x>=b). 14 Address:104 pine meadows loop.b. pmf=okx.b. x=okx. %out=vector pmf: pmf(i)=P[X=x(i)] x=x(:).r).x) Input: alpha is the parameter of a Poisson (α) random variable X . cdf=poissoncdf(alpha. . F=f+1.sx).Name:joey iwatsuru Email:joeyiwat@yahoo.x) %Usage: F=uniformcdf(a. %pmf(i)=0 for zero-prob x(i) poissonrv x=poissonrv(alpha. okx=(x>=0).x) Input: a and ( b) are parameters for continuous uniform random variable X . sx=xmin:xmax. cdf=poissoncdf(alpha. logfacts =cumsum(log(k)).b. end x=xmin+countless(cdf. m is a positive integer Output: x is a vector of m independent samples of random variable X function x=poissonrv(alpha.sx).. uniformcdf y=uniformcdf(a. rmax=max(r).1). %set max range sx=xmin:xmax.*x. x is a vector of possible sample values Output: y is a vector with y(i) = PX (x(i)). xmax=ceil(2*alpha).*(x==floor(x)).m) %return m samples of poisson(alpha) rv X r=rand(m.m) Input: alpha is the parameter of a Poisson (α) random variable X . pb=exp([-alpha.*pb(x+1).

us (United States) Zip Code:71901 .m) Input: a and ( b) are parameters for continuous uniform random variable X .Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195 uniformpdf y=uniformpdf(a.x) %Usage: f=uniformpdf(a. vector x Output: Vector y such that yi = f X (xi ) function f=uniformpdf(a.m) %Returns m samples of a %uniform (a. positive integer m Output: m element vector x such that each x(i) is a sample of X .b) random varible x=a+(b-a)*rand(m. uniformrv x=uniformrv(a. 15 Address:104 pine meadows loop.x) %returns the PDF of a continuous %uniform rv evaluated at x f=((x>=a) & (x<b))/(b-a). function x=uniformrv(a. hot springs. AR.m) %Usage: x=uniformrv(a.b.x) Input: a and ( b) are parameters for continuous uniform random variable X .b.b.b.b.1).b.

hot springs.t(1:n-1)]. pv=([1 zeros(1.com Phone:5017621195 Functions for Stochastic Processes brownian w=brownian(alpha. w=cumsum(x). %max no.1). A=(eye(n)-P). 16 Address:104 pine meadows loop.t) %Q has zero diagonal rates %initial state probabilities p0 K=size(Q.2)). n=size(Q. delta=t-[0.1). cmcprob pv=cmcprob(Q. then the simulation starts in state p0 function pv = cmcprob(Q.2)). t=t(:).n-1)]*Aˆ(-1))’.. cmcstatprob pv=cmcstatprob(Q) Input: State transition matrix Q for a continuoustime ﬁnite Markov chain Output: pv is the stationary probability vector for the continuous-time Markov chain function pv = cmcstatprob(Q) %Q has zero diagonal rates R=Q-diag(sum(Q.n).n-1)]*Rˆ(-1))’. state %check for integer p0 if (length(p0)==1) p0=((0:K)==p0). AR. n=length(t). x=sqrt(alpha*delta).t) Input: t is a vector holding an ordered sequence of inspection times.Name:joey iwatsuru Email:joeyiwat@yahoo.1).1).1)-1. end R=Q-diag(sum(Q.p0. alpha is the scaling constant of a Brownian motion process such that the ith increment has variance α(ti − ti−1 ). pv=([1 zeros(1.p0.. function pv = dmcstatprob(P) n=size(P. Output: w is a vector such that w(i) is the position at time t(i) of the particle in Brownian motion. nonengative scalar t Output: Length n vector pv such that pv(t) is the state probability vector at time t of the Markov chain Comment: If p0 is a scalar integer. length n vector p0 denoting the initial state probabilities.t) Input: n × n state transition matrix Q for a continuous-time ﬁnite Markov chain.t) %Brownian motion process %sampled at t(1)<t(2)< . R(:. us (United States) Zip Code:71901 .*gaussrv(0. dmcstatprob pv=dmcstatprob(P) Input: n × n stochastic matrix P representing a discrete-time aperiodic irreducible ﬁnite Markov chain Output: pv is the stationary probability vector. function w=brownian(alpha.1)=ones(n. A(:.1)=ones(n.1. pv= (p0(:)’*expm(R*t))’.

end n=1+sum(cumsum(ST(:. Input: lambda is the arrival rate of a Poisson process. s(n)] % s(n)<= T < s(n+1) n=ceil(1.2) is the amount of time the system spends in state ST(i. Note that n. vector t %For a sample function of a %Poisson process of rate lambda. s=[s.13.. N=count(s.2*n).:).2))<T).T) %arrival times s=[s(1) . s_new=s(length(s))+ .p0. hot springs. ST=ST(1:n. while (sum(ST(:.1*lambda*T). rate ps=cmcstatprob(Q).t) Input: lambda is the arrival rate of a Poisson process. then the simulation starts in state p0.T).n).6*T/R).n)).n)).1). p00=Q(1+s.2)=T-sum(ST(1:n-1. n=ceil(0. There are decidedly better ways to create a set of arrival times. function ST=simcmc(Q. 17 Address:104 pine meadows loop..t) %input: rate lambda>0. ST=[ST.2)).1) is the sequence of system states and the second column ST(:..2))<T).T) function s=poissonarrivals(lambda. end s=s(s<=T). integer n Output: A simulation of the Markov chain system over the time interval [0. max no.:)/v(1+s).2). is random. s=cumsum(exponentialrv(lambda. cumsum(exponentialrv(lambda.Name:joey iwatsuru Email:joeyiwat@yahoo.p00. simcmc ST=simcmc(Q.2) is the amount of time spent in each state. Comment: This code is pretty stupid. of arrivals by t(i) s=poissonarrivals(lambda.1). state %calc average trans. s_new]. T ]: The output is an n × 2 matrix ST such that the ﬁrst column ST(:..p0.. t is a vector of “inspection times’. v=sum(Q. s=ST(size(ST. ST=simcmcstep(Q. s(n)]’ is a vector such that s(i) is ith arrival time. %N(i) = no.’ Output: N is a vector such that N(i) is the number of arrival by inspection time t(i). That is. AR.max(t)).. us (United States) Zip Code:71901 .T) Input: state transition matrix Q for a continuous-time ﬁnite Markov chain. %truncate last holding time ST(n. the number of state occupancy periods.p0. Comment: If p0 is a scalar integer. function N=poissonprocess(lambda. K=size(Q.. S=simcmcstep(Q. R=ps’*v. while (s(length(s))< T).com Phone:5017621195 poissonarrivals s=poissonarrivals(lambda.5. see Problem 10. .S]. Note that length n is a Poisson random variable with expected value λT . T marks the end of an observation interval [0. ST(i. T ].t). poissonprocess N=poissonprocess(lambda.1). vector p0 denoting the initial state probabilities.1)-1. Output: s=[s(1).

%S=simcmcstep(Q.Name:joey iwatsuru Email:joeyiwat@yahoo. end v=sum(Q. function S=simcmcstep(Q. S(:. rate matrix Q. % init.%init allocation %check for integer p0 if (length(p0)==1) p0=((0:K)==p0). Comment: If p0 is a scalar integer.n). This program is the basis for simcmc.n+1).com Phone:5017621195 simcmcstep S=simcmcstep(Q. x(m+1)=finiterv(sx.1). length n vector p0 denoting the initial state probabilities.n).1)-1.*exponentialrv(1. ST(i.p0.n) function x=simdmc(P.1).1)=simdmc(P.p0. Output: A simulation of the Markov chain system such that for the length n vector x. then the simulation starts in state p0 18 Address:104 pine meadows loop. S(:. state S=zeros(n+1. %state dep. integer n Output: A simulation of n steps of the continuous-time Markov chain system: The output is an n × 2 matrix ST such that the ﬁrst column ST(:. AR. x(m) is the state at time m-1 of the Markov chain.2).n) Input: State transition matrix Q for a continuoustime ﬁnite Markov chain.p0. end Input: n ×n stochastic matrix P which is the state transition matrix of a discrete-time ﬁnite Markov chain./v.1). state probabilities p0 K=size(Q.:).p0. That is.1) is the length n sequence of system states and the second column ST(:.2)=t(1+S(:.2) is the amount of time the system spends in state ST(i. vector p0 denoting the initial state probabilities.1). .p0..2) is the amount of time spent in each state. simdmc x=simdmc(P. us (United States) Zip Code:71901 . end x(1)=finiterv(sx.n) K=size(P. %x(m)= state at time m-1 for m=1:n. %max no.. then the simulation starts in state p0. hot springs. %state space x=zeros(n+1.P(x(m)+1. %initialization if (length(p0)==1) %convert integer p0 to prob vector p0=((0:K)==p0).n) % Simulate n steps of a cts % Markov Chain. P=diag(t)*Q. Comment: If p0 is a scalar integer. %highest no. rates t=1.1)-1.1)) . integer n.p0.2).p0. state sx=0:K.

function n=count(x.y) %Usage: n=countless(x.y) %n(i)= # elements of x <= y(i) [MX.1))’. hot springs. AR. us (United States) Zip Code:71901 .1))’.y).y) Input: Input: Vectors x and y Output: Vector n such that n(i) is the number of elements of x strictly less than y(i).MY]=ndgrid(x. F=exp((-1.MY]=ndgrid(x. Usage: F=dftmat(N) %F is the N by N DFT matrix n=(0:N-1)’.y) %Usage n=count(x.Name:joey iwatsuru Email:joeyiwat@yahoo. function n=countequal(x.1))’.com Phone:5017621195 Random Utilities count n=count(x. 19 Address:104 pine meadows loop. %each column of MX = x %each row of MY = y n=(sum((MX==MY). %each column of MX = x %each row of MY = y n=(sum((MX<=MY).y) Input: Vectors x and y Output: Vector n such that n(i) is the number of elements of x equal to y(i). countequal n=countequal(x.y). Output: F is the N by N discrete Fourier transform matrix function F = dftmat(N). dftmat F=dftmat(N) Input: Integer N .MY]=ndgrid(x.y) %n(j)= # elements of x = y(j) [MX.0j)*2*pi*(n*(n’))/N).y) %n(i)= # elements of x < y(i) [MX. function n=countless(x.y) %Usage: n=countequal(x.y) Input: Vectors x and y Output: Vector n such that n(i) is the number of elements of x less than or equal to y(i).y). countless n=countless(x. %each column of MX = x %each row of MY = y n=(sum((MX<MY).

S=fftc(r) Input: Vector r=[r(1) . r(2k+1)] holding the time sequence r−k . a list of random sample value pairs xy can be simulated by the commands S=[SX(:) SY(:)].com Phone:5017621195 freqxy fxy=freqxy(xy.Name:joey iwatsuru Email:joeyiwat@yahoo. .N): N point DFT of r % fftc(r): length(r) DFT of r r=varargin{1}. SX(:) SY(:)]. . .SY) %xy is an m x 2 matrix: %xy(i. yy(i.SY) Input: For random variables X and Y .’rows’). Output: fxy is a K × 3 matrix. %extend xy to include a sample %for all possible (X.J]=unique(xy.N).SX. .*exp((1. N=N/sum(N). xy=finiterv(S. else N=(2*L)-1.3)] [fxy(k.:) is the ith sample pair (X. .size(R)).1:max(J))-1.. Output: S is the DFT of r Comment: Supports the same calling conventions as fft.0j)*phase). freq. SY(:) PXY(:)].N).SY) %Usage: fxy = freqxy(xy. L=1+floor(length(r)/2). %reorder fxy rows to match %rows of [SX(:) SY(:) PXY(:)]: fxy=sortrows(fxy.SX.I. S=R. The output fxy is ordered so that the rows match the ordering of rows in the matrix [SX(:) fftc S=fftc(r. xy is an m × 2 matrix holding a list of sample values pairs.m). function fxy = freqxy(xy.2)] is a unique (X.1) fxy(k. rk centered around the origin..1) fxy(k. n=reshape(0:(N-1).3).Y) pairs: xy=[xy. %DFT for a signal r %centered at the origin %Usage: % fftc(r. 20 Address:104 pine meadows loop. AR.2) fxy(k. function S=fftc(varargin).SX. rel. Y ). Y ) pair with relative frequency fxy(k. N=hist(J. fxy=[U N(:)].2)] % = kth unique pair [x y] and % fxy(k. phase=2*pi*(n/N)*(L-1). SY and the probability grid PXY.PXY(:). . Grids SX and SY representing the sample space. if (nargin>1) N=varargin{2}(1).Y %Output fxy is a K x 3 matrix: % [fxy(k. us (United States) Zip Code:71901 .3)= corresp. In each row [fxy(k.[2 1 3]). r0 . end R=fft(r. hot springs. .:)= ith sample pair X. . Comment: Given the grids SX. [U.1) fxy(k.

y=1.PM. PM=[zeros(size(px)). y=((1. axis([xmin xmax 0 ymax]).’y axis text’) Input: Sample space vector sx and PMF vector px for ﬁnite random variable PXY. 21 Address:104 pine meadows loop. sinc y=sinc(x) Input: Vector x Output: Vector y such that yi = sinc(xi ) = sin(π xi ) π xi function y=sinc(x).5 0 otherwise function y=rect(x). sx].05*(xmax-xmin). px=(px(:))’. xx=x+(x==0).px.’Bottom’).px. function h=pmfplot(sx. XM = [sx. xmax=max(sx). if (nargin==4) xlabel(xls).xls.0*(abs(x)<0. end xmin=min(sx). ymax=1. xmin=xmin-xborder. xborder=0. xmax=xmax+xborder.1*max(px).px. h=plot(XM. rect y=rect(x) Input: Vector x Output: Vector y such that yi = rect(xi ) = 1 |xi | < 0.*y)+ (1.’-k’). sx=(sx(:))’.’VerticalAlignment’.0-(x==0)).xls.yls) %Usage: pmfplot(sx.0*(x==0)). ylabel(yls. px=px(nonzero)./(pi*xx). set(h.3). hot springs. %Usage:y=rect(x).com Phone:5017621195 pmfplot pmfplot(sx. optional text strings xls and yls Output: A plot of the PMF PX (x) in the bar style used in the text.Name:joey iwatsuru Email:joeyiwat@yahoo. us (United States) Zip Code:71901 . Comment: The code is ugly because it makes sure to produce the right limit value at xi = 0. px]. AR.yls) %sx and px are vectors.’x’.5). y=sin(pi*xx). px is the PMF %xls and yls are x and y label strings nonzero=find(px).’LineWidth’. sx=sx(nonzero).

then each stair has equal width. % If S is an N by 2 matrix. % h is a handle to a stairs plot of the state sequence % vs state transition times %in case of discrete time simulation if (size(S.xls. AR. % S(:.p0.n) or the n × 2 state/time matrix ST generated by either ST=simcmc(Q. X=cumsum([0 .com Phone:5017621195 simplot simplot(S.Y). If S is n × 2 state/time matrix ST.2) = state visit times.xlabel.2)==1) S=[S ones(size(S))].Name:joey iwatsuru Email:joeyiwat@yahoo.’VerticalAlignment’.1) .1)].1) = state sequence.p0. hot springs.ylabel) function h=simplot(S.n). end Input: The simulated state sequence vector S generated by S=simdmc(P.1). S(:.yls). Comment: If S is just a state sequence vector.p0.ylabel) % Plots the output of a simulated state sequence % If S is N by 1.2)]). %h=simplot(S. a cts time Markov chain % is assumed where % S(:. 22 Address:104 pine meadows loop. a discrete time chain is assumed % with visit times of one unit.T) or ST=simcmcstep(Q. Output: A “stairs” plot showing the sequence of simulation states over time. us (United States) Zip Code:71901 . h=stairs(X. ylabel(yls. if (nargin==3) xlabel(xls). then the width of the stair is proportional to the time spent in that state. S(size(S.’Bottom’). end Y=[S(:. % The cumulative sum % of visit times are transition instances.xlabel.

• We have made a substantial effort to check the solution to every quiz.m ﬁles associated with examples or quizzes in the text.edu. When errors are found. Goodman May 22.pdf describing the general purpose .com Phone:5017621195 Probability and Stochastic Processes A Friendly Introduction for Electrical and Computer Engineers Second Edition Quiz Solutions Roy D. This archive has programs of general purpose programs for solving probability problems as well as speciﬁc .rutgers. there is a nonzero probability (in fact. 2004 • The M ATLAB section quizzes at the end of each chapter use programs available for download as the archive matcode. please send email to ryates@winlab. us (United States) Zip Code:71901 . Nevertheless. Yates and David J. AR. If you ﬁnd errors or have suggestions or comments. hot springs.m ﬁles in matcode. corrected solutions will be posted at the website.zip.zip.Name:joey iwatsuru Email:joeyiwat@yahoo. Also available is a manual probmatlab. 1 Address:104 pine meadows loop. a probability close to unity) that errors will be found.

A4 and B4 are collectively exhaustive. vvd. ddd} (3) A2 = {vvv. ddv. we can simply check for these properties. ddd} (5) A3 = {vvv. The pair A4 and B4 are not mutually exclusive since dvd belongs to A4 and B4 . ddv. Since we have written down each pair Ai and Bi above. dvd} (4) B2 = {vdv. dvd. us (United States) Zip Code:71901 . 2 Address:104 pine meadows loop. The pair A3 and B3 are mutually exclusive but not collectively exhaustive. dvv.com Phone:5017621195 Quiz Solutions – Chapter 1 Quiz 1. dvv. The pair A1 and B1 are mutually exclusive and collectively exhaustive. dvd. dvd} (4) R ∩ M (6) T c − M (7) A4 = {vvv.Name:joey iwatsuru Email:joeyiwat@yahoo. vdd} Recall that Ai and Bi are collectively exhaustive if Ai ∪ Bi = S. vdv.1 In the Venn diagrams for parts (a)-(g) below. ddv} (8) B4 = {ddd. Also. However. dvd. ddv. Ai and Bi are mutually exclusive if Ai ∩ Bi = φ. The pair A2 and B2 are mutually exclusive and collectively exhaustive.2 (1) A1 = {vvv. ddd} (6) B3 = {vdv. vdd} (2) B1 = {dvv. M T O M T O M T O (1) R = T c (2) M ∪ O (3) M ∩ O M T O M T O M T O (4) R ∪ M Quiz 1. the shaded area represents the indicated set. hot springs. vvd. vdd. AR. vdv. vvd. vdd.

.05 and the complete table is V D L 0. . .35. .78 (7) P[a C grade or better] = P[{s70 .35 0.02 = 0. . . us (United States) Zip Code:71901 .25.35 = 0. . In particular. (1) P[{s79 }] = 0.35 and that P[DL] = 0. s100 }] = 11 × 0.05 Finding the various probabilities is now straightforward: 3 Address:104 pine meadows loop. . . and DL. . .35 ? The remaining table entry is ﬁlled in by observing that the probabilities must sum to 1. s52 . s100 }] = 21 × 0.4 We can describe this experiment by the event space consisting of the four possible events V B. We represent these events in the table: V D L 0. .35 ? B ? ? In a roundabout way. s100 }] = 41 × 0.02.02 = 0.18 (5) P[T ≥ 80] = P[{s80 . . s100 }] = 31 × 0. . hot springs. s59 }] = 9 × 0.25 B 0. the problem statement tells us how to ﬁll in the table.42 (6) P[T < 90] = P[{s51 . . .02 (3) P[A] = P[{s90 .3 There are exactly 50 equally likely outcomes: s51 through s100 . we can conclude that P[V B] = 0.02 = 0. . .7 = P [V L] + P [V B] P [L] = 0.02 = 0. This implies P[D B] = 0.02 = 0. . D B.6 − 0.62 (8) P[student passes] = P[{s60 . AR. P [V ] = 0. .35 0.Name:joey iwatsuru Email:joeyiwat@yahoo.02 (2) P[{s100 }] = 0.35 0.82 Quiz 1. This allows us to ﬁll in two more table entries: V D L 0. Each of these outcomes has probability 0. . . .22 (4) P[F] = P[{s51 . V L. s89 }] = 39 × 0.02 = 0.com Phone:5017621195 Quiz 1.6 = P [V L] + P [DL] (1) (2) Since P[V L] = 0.25 B 0. .

Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195

(1) P[DL] = 0.25 (2) P[D ∪ L] = P[V L] + P[DL] + P[D B] = 0.35 + 0.25 + 0.05 = 0.65. (3) P[V B] = 0.35 (4) P[V ∪ L] = P[V ] + P[L] − P[V L] = 0.7 + 0.6 − 0.35 = 0.95 (5) P[V ∪ D] = P[S] = 1 (6) P[L B] = P[L L c ] = 0 Quiz 1.5 (1) The probability of exactly two voice calls is P [N V = 2] = P [{vvd, vdv, dvv}] = 0.3 (2) The probability of at least one voice call is P [N V ≥ 1] = P [{vdd, dvd, ddv, vvd, vdv, dvv, vvv}] = 6(0.1) + 0.2 = 0.8 An easier way to get the same answer is to observe that P [N V ≥ 1] = 1 − P [N V < 1] = 1 − P [N V = 0] = 1 − P [{ddd}] = 0.8 (4) (2) (3) (1)

(3) The conditional probability of two voice calls followed by a data call given that there were two voice calls is 1 P [{vvd} , N V = 2] P [{vvd}] 0.1 = (5) = = P [{vvd} |N V = 2] = P [N V = 2] P [N V = 2] 0.3 3 (4) The conditional probability of two data calls followed by a voice call given there were two voice calls is P [{ddv} , N V = 2] P [{ddv} |N V = 2] = =0 (6) P [N V = 2] The joint event of the outcome ddv and exactly two voice calls has probability zero since there is only one voice call in the outcome ddv. (5) The conditional probability of exactly two voice calls given at least one voice call is P [N V = 2, N V ≥ 1] P [N V = 2] 0.3 3 = = = (7) P [N V = 2|Nv ≥ 1] = P [N V ≥ 1] P [N V ≥ 1] 0.8 8 (6) The conditional probability of at least one voice call given there were exactly two voice calls is P [N V ≥ 1, N V = 2] P [N V = 2] P [N V ≥ 1|N V = 2] = = =1 (8) P [N V = 2] P [N V = 2] Given that there were two voice calls, there must have been at least one voice call. 4

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195

Quiz 1.6 In this experiment, there are four outcomes with probabilities P[{vv}] = (0.8)2 = 0.64 P[{dv}] = (0.2)(0.8) = 0.16 P[{vd}] = (0.8)(0.2) = 0.16 P[{dd}] = (0.2)2 = 0.04

When checking the independence of any two events A and B, it’s wise to avoid intuition and simply check whether P[AB] = P[A]P[B]. Using the probabilities of the outcomes, we now can test for the independence of events. (1) First, we calculate the probability of the joint event: P [N V = 2, N V ≥ 1] = P [N V = 2] = P [{vv}] = 0.64 Next, we observe that P [N V ≥ 1] = P [{vd, dv, vv}] = 0.96 Finally, we make the comparison P [N V = 2] P [N V ≥ 1] = (0.64)(0.96) = P [N V = 2, N V ≥ 1] which shows the two events are dependent. (2) The probability of the joint event is P [N V ≥ 1, C1 = v] = P [{vd, vv}] = 0.80 From part (a), P[N V ≥ 1] = 0.96. Further, P[C1 = v] = 0.8 so that P [N V ≥ 1] P [C1 = v] = (0.96)(0.8) = 0.768 = P [N V ≥ 1, C1 = v] Hence, the events are dependent. (3) The problem statement that the calls were independent implies that the events the second call is a voice call, {C2 = v}, and the ﬁrst call is a data call, {C1 = d} are independent events. Just to be sure, we can do the calculations to check: P [C1 = d, C2 = v] = P [{dv}] = 0.16 (6) Since P[C1 = d]P[C2 = v] = (0.2)(0.8) = 0.16, we conﬁrm that the events are independent. Note that this shouldn’t be surprising since we used the information that the calls were independent in the problem statement to determine the probabilities of the outcomes. (4) The probability of the joint event is P [C2 = v, N V is even] = P [{vv}] = 0.64 Also, each event has probability P [C2 = v] = P [{dv, vv}] = 0.8, P [N V is even] = P [{dd, vv}] = 0.68 (8) Thus, P[C2 = v]P[N V is even] = (0.8)(0.68) = 0.544. Since P[C2 = v, N V is even] = 0.544, the events are dependent. 5 (7) (5) (4) (3) (2) (1)

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195

Quiz 1.7 Let Fi denote the event that that the user is found on page i. The tree for the experiment is

0.8 ¨ F1 0.8 ¨ F2 0.8 ¨ F3 ¨¨ ¨¨ ¨¨ ¨ ¨¨ ¨¨ c c c ¨¨ F3 F1 ¨ F2 ¨ 0.2 0.2 0.2

The user is found unless all three paging attempts fail. Thus the probability the user is found is c c c P [F] = 1 − P F1 F2 F3 = 1 − (0.2)3 = 0.992 (1) Quiz 1.8 (1) We can view choosing each bit in the code word as a subexperiment. Each subexperiment has two possible outcomes: 0 and 1. Thus by the fundamental principle of counting, there are 2 × 2 × 2 × 2 = 24 = 16 possible code words. (2) An experiment that can yield all possible code words with two zeroes is to choose which 2 bits (out of 4 bits) will be zero. The other two bits then must be ones. There are 4 = 6 ways to do this. Hence, there are six code words with exactly two zeroes. 2 For this problem, it is also possible to simply enumerate the six code words: 1100, 1010, 1001, 0101, 0110, 0011. (3) When the ﬁrst bit must be a zero, then the ﬁrst subexperiment of choosing the ﬁrst bit has only one outcome. For each of the next three bits, we have two choices. In this case, there are 1 × 2 × 2 × 2 = 8 ways of choosing a code word. (4) For the constant ratio code, we can specify a code word by choosing M of the bits to be ones. The other N − M bits will be zeroes. The number of ways of choosing such N a code word is M . For N = 8 and M = 3, there are 8 = 56 code words. 3 Quiz 1.9 (1) In this problem, k bits received in error is the same as k failures in 100 trials. The failure probability is = 1 − p and the success probability is 1 − = p. That is, the probability of k bits in error and 100 − k correctly received bits is P Sk,100−k = 100 k 6

k

(1 − )100−k

(1)

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

we note there are three cases: • If R(i) <= 0. • If 0. To see how this works.4).Name:joey iwatsuru Email:joeyiwat@yahoo. we use the hist function to count how many occurences of each possible value of X(i).0610 (6) Quiz 1.100).100 = (1 − )100 = (0. 0. 700(0. Lastly.01) (0. 8 P [C9 ] = (P [C])9 = p 9n .1849 P S3.4.9)). AR.*(R<=0. then X(i)=2.9819 = 0. the transistors in the chip are like devices in series.99 + P S2.99) 2 3 99 (2) (3) (4) (5) = 0. • If 0.97 = 0. X=(R<= 0.5 and 0..99) 8 = 0.. then X(i)=1.. Since transistor failures are independent of each other. That is.9)) . The probability that a chip works is P[C] = pn . hot springs.98 = 4950(0. Thus each P[Ck ] has the binomial probability 9 (P [C])8 (1 − P [C])9−8 = 9 p 8n (1 − p n ). Second.01) (0.1..11 R=rand(1. P [C8 ] = The probability a memory module works is P [M] = P [C8 ] + P [C9 ] = p 8n (9 − 8 p n ) Quiz 1. + (2*(R>0.97 = 161.100 + P S1. Let Ck denote the event that exactly k chips work.4. we ﬁrst generate a vector R of 100 random numbers.98 + P S3. chip failures are also independent. we generate vector X as a function of R to represent the 3 possible outcomes of a ﬂip.10 Since the chip works only if all n transistors work. Y=hist(X.com Phone:5017621195 For = 0.01.01)(0.99) (2) The probability a packet is decoded correctly is just P [C] = P S0. X(i)=2 if ﬂip i was tails. These three cases will have probabilities 0. us (United States) Zip Code:71901 . The module works if either 8 chips work or 9 chips work. X(i)=1 if ﬂip i was heads. then X(i)=3.1:3) (1) (2) (3) For a M ATLAB simulation. + (3*(R>0. P S0.3660 P S1.9. and X(i)=3) is ﬂip i landed on the edge. 7 Address:104 pine meadows loop.9 < R(i).99 = 100(0.4) .4 < R(i) and R(i)<=0.3700 9 97 P S2.99)100 = 0.

0 otherwise (1) (2) If p = 0. Now that we have found c. us (United States) Zip Code:71901 .1)(0.36 3.24 2. we recall that the PMF must sum to 1. 3 G 0. That is.24 2. Now we can interpret each experiment in the generic context of independent trials.2 (1) To ﬁnd c.1.9)9 = 0. . with probability p.” Each bit is in error.0387 8 (2) Address:104 pine meadows loop. the remaining parts are straightforward. that is. (1) The random variable X is the number of trials up to and including the ﬁrst success.Name:joey iwatsuru Email:joeyiwat@yahoo. X has the geometric PMF PX (x) = p(1 − p)x−1 x = 1. 2. hot springs. Similar to Example 2.5 0.16 2 PN (n) = c 1 + n=1 1 1 + 2 3 =1 (1) This implies c = 6/11. AR. . then the probability exactly 10 bits are sent is P [X = 10] = PX (10) = (0.0 0.11.3 Decoding each transmitted bit is an independent trial where we call a bit error a “success. .1 The sample space. (2) P[N = 1] = PN (1) = c = 6/11 (3) P[N ≥ 2] = PN (2) + PN (3) = c/2 + c/3 = 5/11 (4) P[N > 3] = ∞ n=4 PN (n) = 0 Quiz 2. the trial is a success. probabilities and corresponding grades for the experiment are Outcome P[·] BB BC CB CC Quiz 2.5 0.com Phone:5017621195 Quiz Solutions – Chapter 2 Quiz 2.

.99)98 = 0. .13. the probability of exactly 2 errors is P [Y = 2] = PY (2) = 100 (0. Thus Z has the Pascal PMF (see Example 2. P[X ≥ 10] = 0.com Phone:5017621195 The probability that at least 10 bits are sent is P[X ≥ 10] = ∞ PX (x).25.4 Each of these probabilities can be read off the CDF FY (y). hot springs. Y has the binomial PMF PY (y) = 100 y p (1 − p)100−y y (4) (3) If p = 0.01)2 (0. .99)100 + 100(0. 4.99)98 2 (6) (7) (8) (5) Random variable Z is the number of trials up to and including the third success. FY (y) takes the upper value FY (y0 ). its even easier to observe that X ≥ 10 if the ﬁrst 10 bits are transmitted correctly.01)(0. (3) The random variable Y is the number of successes in 100 independent trials. . (6) If p = 0.15) PZ (z) = z−1 3 p (1 − p)z−3 2 (9) Note that PZ (z) > 0 for z = 3. This x=10 sum is not too hard to calculate. 5. However.0645 2 (10) Quiz 2. P [X ≥ 10] = P [ﬁrst 10 bits are correct] = (1 − p)10 For p = 0. Just as in Example 2.1849 2 (5) (4) The probability of no more than 2 errors is P [Y ≤ 2] = PY (0) + PY (1) + PY (2) = (0.01. That is.3487. (1) P[Y < 1] = FY (1− ) = 0 9 Address:104 pine meadows loop. AR.25)3 (0. we must keep in + mind that when FY (y) has a discontinuity at y0 . us (United States) Zip Code:71901 .01)2 (0.910 = 0.1. the probability that the third error occurs on bit 12 is PZ (12) = 11 (0.9207 100 (0.99)99 + = 0.75)9 = 0. However.Name:joey iwatsuru Email:joeyiwat@yahoo.

we can draw the following tree: N =0 •T =120 0.2 (4) P[Y ≥ 2] = 1 − P[Y < 2] = 1 − FY (2− ) = 1 − 0.3 N =2 •T =90 r rr 0.7.3 c = 40 (1) ⎩ 0 otherwise (2) The expected value of C is E [C] = 25(0.8 = 0.8 − 0.3 t = 75.7 c = 25 PC (c) = 0.Name:joey iwatsuru Email:joeyiwat@yahoo. we can write down the PMF of T : ⎧ ⎨ 0.5 (1) With probability 0. with probability 0. This corresponds to the PMF ⎧ ⎨ 0.3) + 120(0.com Phone:5017621195 (2) P[Y ≤ 1] = FY (1) = 0. a call is a voice call and C = 25.6 (1) As a function of N . hot springs.5 cents Quiz 2.1 t = 120 ⎩ 0 otherwise From the PMF PT (t).7) + 40(0. AR. the expected value of T is E [T ] = 75PT (75) + 90PT (90) + 105PT (105) + 120PT (120) = (75 + 90 + 105)(0.6 = 0.3$$N =1 •T =105 $ (2) (1) $ $$ ¨¨$ rr rr0.1¨¨ ¨ ¨ ¨ 0. us (United States) Zip Code:71901 .8 = 0 Quiz 2.4 (5) P[Y = 1] = P[Y ≤ 1] − P[Y < 1] = FY (1+ ) − FY (1− ) = 0. the cost T is T = 25N + 40(3 − N ) = 120 − 15N (2) To ﬁnd the PMF of T .6 (6) P[Y = 3] = P[Y ≤ 3] − P[Y < 3] = FY (3+ ) − FY (3− ) = 0.1) = 62 (2) (3) (4) 10 Address:104 pine meadows loop. 105 PT (t) = 0.6 (3) P[Y > 2] = 1 − P[Y ≤ 2] = 1 − FY (2) = 1 − 0.3) = 29. Otherwise. we have a data call and C = 40.3. 90.3 N =3 •T =75 From the tree.

3) + 6(0. the expected number of applications is 4 E [A] = a=1 a PA (a) = 1(0.2) + 8(0.1) = 4. the expected number of memory chips is 4 (2) E [M] = a=1 g(A)PA (a) = 4(0. However.7 (1) Using Deﬁnition 2.663.4 (2) (3) The variance of N is Var[N ] = E N 2 − (E [N ])2 = 2.14. g(E[A]) = g(2) = 4.4) + 22 (0. (1) The expected value of N is 2 E [N ] = n=0 n PN (n) = 0(0. 2 g(A) = 6 A = 3 ⎩ 8 A=4 (3) By Theorem 2.3) + 3(0.5) = 2.com Phone:5017621195 Quiz 2.4)2 = 0.4) + 2(0. AR.4) + 4(0.5) = 1.8 The PMF PN (n) allows to calculate each of the desired quantities. E[M] = 4. (3) 11 Address:104 pine meadows loop.1) = 2 (1) (2) The number of memory chips is M = g(A) where ⎧ ⎨ 4 A = 1.4 (1) (2) The second moment of N is 2 E N 2 = n=0 n 2 PN (n) = 02 (0.44 = 0.44 (4) The standard deviation is σ N = √ Var[N ] = √ 0.8 (3) Since E[A] = 2. us (United States) Zip Code:71901 .4 − (1.1) + 1(0. The two quantities are different because g(A) is not of the form α A + β. hot springs.8 = g(E[A]).10.1) + 12 (0. Quiz 2.Name:joey iwatsuru Email:joeyiwat@yahoo.2) + 4(0.4) + 2(0.

19375) + n=6 n(0. . 8. 2. . 2.00625) (11) (12) = 3.00625 n = 6. .com Phone:5017621195 Quiz 2. 4. . 7.25) n = 1. From Theorem 1. the conditional PMF of N given the event T is PN |T (n) = 0.02 n = 1. 50 PN |I (n) = (1) 0 otherwise (2) Also from the problem statement.02(0. calculating conditional expectations is easy. we ﬁnd the PMF of N is PN (n) = PN |T (n) P [T ] + PN |I (n) P [I ] ⎧ ⎨ 0. us (United States) Zip Code:71901 . 7. . 10 ⎩ 0 otherwise (5) Once we have the conditional PMF. 9.25) ⎩ 0 otherwise ⎧ ⎨ 0. 50 ⎩ 0 otherwise (4) First we ﬁnd 10 (3) (4) (5) P [N ≤ 10] = n=1 PN (n) = (0. 8. 5 n = 6. 5 = 0.005)(5) = 0. 4.19375 n = 1.02(0. .155/0.2 n = 1. 10 ⎩ 0 otherwise ⎧ ⎨ 0.75) + 0. 2.80 (6) By Theorem 2. the conditional PMF of N given N ≤ 10 is PN |N ≤10 (n) = PN (n) P[N ≤10] ⎧ ⎨ 0.005/0.8 n = 6. E [N |N ≤ 10] = n 5 0 n ≤ 10 otherwise (7) (8) (9) n PN |N ≤10 (n) 10 (10) = n=1 n(0.155)(5) + (0. 3. . 2.2(0. 50 = 0(0. hot springs. 3. . . we learn that the conditional PMF of N given the event I is 0. . .155 n = 1. 2.Name:joey iwatsuru Email:joeyiwat@yahoo.005 n = 6.75) + 0. 5 = 0.9 (1) From the problem statement. 3.17.8 n = 1. 7. 2. 4. . 7. 4.15625 12 Address:104 pine meadows loop. 5 0 otherwise (2) (3) The problem statement tells us that P[T ] = 1 − P[I ] = 3/4. 5 = 0. 9. 4.10 (the law of total probability). 3. 3. AR.

The ith column M(:. m 2 . k. Examples of the function calls (a) samplemean(100) and (b) samplemean(1000) are shown in Figure 1. . end. function M=samplemean(k)./K. . we ﬁrst ﬁnd the conditional second moment E N 2 |N ≤ 10 = n 5 n 2 PN |N ≤10 (n) 10 (13) n 2 (0. . What is observed in these ﬁgures is that for small n. .i) of M holds a sequence m 1 . K=(1:k)’. m k .19375) + 330(0. m n is fairly random but as n gets 13 Address:104 pine meadows loop.75684 (16) (17) Quiz 2. .15625)2 = 2. M=zeros(k.10.k). .5). AR.71875 − (3.19375) + 2 (14) (15) = 55(0.00625) n=6 = n=1 n (0.71875 The conditional variance is Var[N |N ≤ 10] = E N 2 |N ≤ 10 − (E [N |N ≤ 10])2 = 12. .i)=cumsum(X). . plot(K. X=duniformrv(0.10 The function samplemean(k) generates and plots ﬁve m n sequences for n = 1.com Phone:5017621195 10 8 6 4 2 0 0 50 100 10 8 6 4 2 0 0 500 1000 (a) samplemean(100) (b) samplemean(1000) Figure 1: Two examples of the output of samplemean(k) (6) To ﬁnd the conditional variance. M(:. us (United States) Zip Code:71901 .Name:joey iwatsuru Email:joeyiwat@yahoo. Each time samplemean(k) is called produces a random output.M).00625) = 12. 2. hot springs. for i=1:5.

AR. This random convergence is analyzed in Chapter 7. . that we generate is random. . the sequences always converges to E[X ]. Although each sequence m 1 . us (United States) Zip Code:71901 . hot springs. 14 Address:104 pine meadows loop. m 2 . m n gets close to E[X ] = 5.com Phone:5017621195 large.Name:joey iwatsuru Email:joeyiwat@yahoo. .

1 0 0 5 x 10 15 f X (x) = (x/4)e−x/2 x ≥ 0 0 otherwise fX(x) (4) 15 Address:104 pine meadows loop. We will evaluate this integral using integration by parts: ∞ −∞ f X (x) d x = 0 ∞ cxe−x/2 d x ∞ 0 (1) ∞ 0 = −2cxe−x/2 =0 + 2ce−x/2 d x (2) = −4ce−x/2 ∞ 0 = 4c (3) Thus c = 1/4 and X has the Erlang (n = 2. us (United States) Zip Code:71901 . λ = 1/2) PDF 0. we can calculate the probabilities: (1) P[Y ≤ −1] = FY (−1) = 0 (2) P[Y ≤ 1] = FY (1) = 1/4 (3) P[2 < Y ≤ 3] = FY (3) − FY (2) = 3/4 − 2/4 = 1/4 (4) P[Y > 1. To ﬁnd c.Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195 Quiz Solutions – Chapter 3 Quiz 3. hot springs.5] = 1 − P[Y ≤ 1.2 0.5)/4 = 5/8 Quiz 3. AR.5) = 1 − (1.2 (1) First we will ﬁnd the constant c and then we will sketch the PDF.5 0 0 2 y 4 ⎧ y<0 ⎨ 0 y/4 0 ≤ y ≤ 4 FY (y) = ⎩ 1 y>4 (1) From the CDF FY (y).5] = 1 − FY (1. we use ∞ the fact that −∞ f X (x) d x = 1.1 The CDF of Y is 1 FY(y) 0.

us (United States) Zip Code:71901 . 0 otherwise. (3) 16 Address:104 pine meadows loop.3 The PDF of Y is 3 fY(y) 2 1 0 −2 0 y 2 f Y (y) = 3y 2 /2 −1 ≤ y ≤ 1. AR. (4) Similarly.e. hot springs. (2) The second moment of Y is E Y2 = ∞ −∞ y 2 f Y (y) dy = 1 −1 (3/2)y 4 dy = (3/10)y 5 1 −1 = 3/5. (9) (10) Quiz 3.Name:joey iwatsuru Email:joeyiwat@yahoo. f Y (y) = f Y (−y)). (2) Note that the above calculation wasn’t really necessary because E[Y ] = 0 whenever the PDF f Y (y) is an even function (i.com Phone:5017621195 (2) To ﬁnd the CDF FX (x). FX (x) = 0 x f X (y) dy = 0 x y −y/2 e dy 4 (5) (6) (7) x x 1 y − e−y/2 dy = − e−y/2 − 2 2 0 0 x −x/2 =1− e − e−x/2 2 The complete expression for the CDF is 1 FX(x) 0.. we ﬁrst note X is a nonnegative random variable so that FX (x) = 0 for all x < 0. P [0 ≤ X ≤ 4] = FX (4) − FX (0) = 1 − 3e−2 . (1) (1) The expected value of Y is E [Y ] = ∞ −∞ y f Y (y) dy = 1 −1 (3/2)y 3 dy = (3/8)y 4 1 −1 = 0. For x ≥ 0. P [−2 ≤ X ≤ 2] = FX (2) − FX (−2) = 1 − 3e−1 .5 0 0 5 x 10 15 FX (x) = 1− 0 x 2 + 1 e−x/2 x ≥ 0 otherwise (8) (3) From the CDF FX (x).

b) random variable. us (United States) Zip Code:71901 .2. √ b = 3 + 3 3. fY(y) 0. The PDF of X is f X (x) = (1/3)e−x/3 x ≥ 0. To ﬁnd a and b. The fact that Y has twice the standard deviation of X is reﬂected in the greater spread of f Y (y).2 fX(x) 0 −5 x ← fX(x) ← f (y) Y 0 y 5 17 Address:104 pine meadows loop. The only valid solution with a < b is √ a = 3 − 3 3. a+b =3 2 Var[X ] = (b − a)2 = 9. Quiz 3. (4) (2) We know X is a uniform (a.com Phone:5017621195 (3) The variance of Y is Var[Y ] = E Y 2 − (E [Y ])2 = 3/5. (1) √ Var[Y ] = √ 3/5. However. the peak value of the Gaussian PDF goes down. we must have λ = 1/3. We start with the sketches. (4) The standard deviation of Y is σY = Quiz 3.5 Each of the requested probabilities can be calculated using or Q(z) and Table 3. f X (x) = 0 otherwise. 0 otherwise. hot springs.1 (1) The PDFs of X and Y are shown below. (3) (4) The complete expression for the PDF of X is √ √ √ 1/(6 3) 3 − 3 3 ≤ x < 3 + 3 3. it is important to remember that as the standard deviation increases. E[X ] = 1/λ and Var[X ] = 1/λ2 .Name:joey iwatsuru Email:joeyiwat@yahoo. Since E[X ] = 3 and Var[X ] = 9. we apply Theorem 3. (5) (z) function and Table 3.4 (1) When X is an exponential (λ) random variable. 12 (2) √ b − a = ±6 3. AR.6 to write E [X ] = This implies a + b = 6.4 0.

⎩ 0 otherwise. 2).75) = 0 x 2 ⎧ x < −1. P[X > 3. (1) The following probabilities can be read directly from the CDF: (1) P[X ≤ 1] = FX (1) = 1.5 ) = Q(1.Name:joey iwatsuru Email:joeyiwat@yahoo.33 × 10−4 . (4) We ﬁnd the PDF f Y (y) by taking the derivative of FY (y).6 The CDF of X is 1 FX(x) 0. ⎨ 0 FX (x) = (x + 1)/4 −1 ≤ x < 1. The resulting PDF is 0. P[Y > 3. (3) Since Y is Gaussian (0.7 18 Address:104 pine meadows loop.6826.5] = Q( 3. ⎨ 1/4 f X (x) = (1/2)δ(x − 1) x = 1. (2) Quiz 3. P [−1 < Y ≤ 1] = FY (1) − FY (−1) 1 −1 = − σY σY (3) =2 1 − 1 = 0.5] = Q(3. 1).5 0 −2 (1. Quiz 3.com Phone:5017621195 (2) Since X is Gaussian (0. 2 (4) (1) (2) (4) Again.75) = 1 − 2 0. since X is Gaussian (0. AR. P [−1 < X ≤ 1] = FX (1) − FX (−1) = (1) − (−1) = 2 (1) − 1 = 0.5 0 −2 0 x 2 ⎧ −1 ≤ x < 1. (2) P[X < 1] = FX (1− ) = 1/2.383.5) = 2.5 fX(x) 0.0401. hot springs. 2). 1). ⎩ 1 x ≥ 1. (5) Since Y is Gaussian (0. (3) P[X = 1] = FX (1+ ) − FX (1− ) = 1 − 1/2 = 1/2. us (United States) Zip Code:71901 .

because Y ≤ 1. FX (x) = x −∞ f X (y) dy = 0 x (1 − y/2) dy = x − x 2 /4. Y is also nonnegative. FY (y) = y−y ⎩ 1 y ≥ 1. FY (y) = 1 for all y ≥ 1. FX (x) = 1 for x ≥ 2 since its always true that x ≤ 2. 19 Address:104 pine meadows loop. (3) (3) Since X is nonnegative.6 . we see that the jump in FY (y) at y = 1 is exactly equal to P[Y = 1]. Also. we obtain the PDF f Y (y). Lastly. Note that when y < 0 or y > 1. ⎨ 0 2 /4 0 ≤ y < 1. ⎨ 0 2 /4 0 ≤ x ≤ 2.com Phone:5017621195 (1) Since X is always nonnegative. FY (y) = P [Y ≤ y] = P [X ≤ y] = FX (y) . AR.5 0 −1 0 1 y 2 3 1 y 2 3 ⎧ y < 0. 1. for 0 < y < 1.5 0 −1 Y (4) 0 As expected. (4) By taking the derivative of FY (y). Also.5 f (y) 1 0.Name:joey iwatsuru Email:joeyiwat@yahoo. FX (x) = x−x ⎩ 1 x > 2. (1) The complete CDF of X is 1 F (x) 0. Finally. (5) 0. the PDF is zero. the complete expression for the CDF of Y is 1 F (y) 0.5 0 −1 X 0 1 x 2 3 ⎧ x < 0. (2) (2) The probability that Y = 1 is P [Y = 1] = P [X ≥ 1] = 1 − FX (1) = 1 − 3/4 = 1/4.25 f Y (y) = 1 − y/2 + (1/4)δ(y − 1) 0 ≤ y ≤ 1 0 otherwise Y (6) Quiz 3. us (United States) Zip Code:71901 .8 (1) P[Y ≤ 6] = 6 −∞ f Y (y) dy = 6 0 (1/10) dy = 0. hot springs. Thus FY (y) = 0 for y < 0. FX (x) = 0 for x < 0. for 0 ≤ x ≤ 2. Using the CDF FX (x).

= otherwise.com Phone:5017621195 (2) From Deﬁnition 3.15. the conditional PDF of Y given Y > 8 is f Y |Y >8 (y) = f Y (y) P[Y >8] 0 y > 8. 10 (2) (4) From Deﬁnition 3. the conditional PDF of Y given Y ≤ 6 is f Y |Y ≤6 (y) = (3) The probability Y > 8 is P [Y > 8] = 8 10 f Y (y) P[Y ≤6] 0 y ≤ 6.15. AR. we can calculate the conditional expectation E [Y |Y > 8] = ∞ −∞ y f Y |Y >8 (y) dy = 10 8 y dy = 9. x=exponentialrv(lambda. we can calculate the conditional expectation E [Y |Y ≤ 6] = ∞ −∞ y f Y |Y ≤6 (y) dy = 6 0 y dy = 3. i=i+1.lambda=1/3. 0 otherwise. = otherwise. 6 (4) (6) From the conditional PDF f Y |Y >8 (y).0+exponentialrv(1/3. (3) (5) From the conditional PDF f Y |Y ≤6 (y). while (i<m).9 A natural way to produce random variables with PDF f T |T >2 (t) is to generate samples of T with PDF f T (t) and then to discard those samples which fail to satisfy the condition T > 2. 2 (5) Quiz 3.Name:joey iwatsuru Email:joeyiwat@yahoo. t=zeros(m. Here is a M ATLAB function that uses this method: function t=t2rv(m) i=0. us (United States) Zip Code:71901 . 1/2 8 < y ≤ 10. (1) 1 dy = 0. In this case the command t=2. end end A second method exploits the fact that if T is an exponential (λ) random variable.m) generates the vector t. then T = T + 2 has PDF f T (t) = f T |T >2 (t). hot springs. 1/6 0 ≤ y ≤ 6.1). 20 Address:104 pine meadows loop.1).2 . 0 otherwise. if (x>2) t(i+1)=x.

AR.12 = 0.12 + 0.16 + 0. we can calculate the requested probabilities by summing the PMF over those values of Q and G that correspond to the event. g) (6) (7) = 0. g) (4) (5) = 0. Y ≤ −∞] = 0 since Y cannot take on the value −∞.com Phone:5017621195 Quiz Solutions – Chapter 4 Quiz 4. ∞) = P[X ≤ ∞.24 + 0.1 Each value of the joint CDF can be found by considering the corresponding probability. Quiz 4. (3) FX. 0) + PQ.78 21 Address:104 pine meadows loop.6 (2) The probability that Q = G is P [Q = G] = PQ.18 + 0. Y ≤ y] = P[Y ≤ y] = FY (y). 1) + PQ.1. Y ≤ 2] ≤ P[X ≤ −∞] = 0 since X cannot take on the value −∞.Y (∞.G (0. (2) FX.08 = 0.18 (3) The probability that G > 1 is 3 1 (1) (2) (3) P [G > 1] = g=2 q=0 PQ.Name:joey iwatsuru Email:joeyiwat@yahoo.Y (∞.16 + 0.06 + 0. −∞) = P[X ≤ ∞.08 = 0.24 + 0. y) = P[X ≤ ∞.G (q.6 (4) The probability that G > Q is 1 3 P [G > Q] = q=0 g=q+1 PQ.Y (−∞.G (0.12 + 0. This result is given in Theorem 4.24 + 0. 2) + PQ. 0) + PQ. Y ≤ ∞] = 1.G (0. 2) = P[X ≤ −∞.G (0. us (United States) Zip Code:71901 . 1) = 0.G (q. (1) The probability that Q = 0 is P [Q = 0] = PQ. (4) FX.Y (∞.2 From the joint PMF of Q and G given in the table. 3) = 0.G (1. hot springs.18 + 0.G (0. (1) FX.

Name:joey iwatsuru Email:joeyiwat@yahoo.1 0 0.3 Quiz 4.Y (x.Y (x. hot springs. y = r sin θ and d x d y = r dr dθ . y) d x d y = 1. b) b = 0 b = 2 b = 4 PH (h) h = −1 0 0. yielding 2 1 Y P [A] = 0 π/2 0 1 0 1 r 2 sin θ cos θ r dr dθ π/2 0 2 π/2 (5) (6) A 1 X = = r 3 dr ⎛ 1 0 sin θ cos θ dθ ⎞ ⎠ = 1/8 r 4 /4 ⎝ sin θ 2 (7) 0 22 Address:104 pine meadows loop.4 To ﬁnd the constant c. Similarly. we convert to polar coordinates using the substitutions x = r cos θ . y) d x d y (4) To integrate over A.1 0. we write P [A] = A y dy = (c/4)y 2 f X.B (h.6 0. this corresponds to the column sum down the table of the joint PMF. 2 0 0 2 1 f X.B (h.Y (x. the marginal PMF of B is 1 PB (b) = h=−1 PH.1 0 0. AR. To calculate P[A]. this corresponds to calculating the row sum across the table of the joint PMF.3 By Theorem 4. The easiest way to calculate these marginal PMFs is to simply sum each row and column: PH. the marginal PMF of H is PH (h) = b=0. b) (1) For each value of h.4 0.2 0. we apply ∞ ∞ −∞ −∞ ∞ ∞ −∞ −∞ (3) f X. us (United States) Zip Code:71901 .5 0. b) (2) For each value of b.4 PH.2 PB (b) 0. y) d x d y = =c cx y d x dy y 0 2 0 (1) dy 2 0 x 2 /2 1 0 (2) =c (3) = (c/2) Thus c = 1.3.1 0.com Phone:5017621195 Quiz 4. Speciﬁcally.2 0.2.2 h=0 h=1 0.B (h.

f Y (y) = = ∞ −∞ 6 1 f X.20 (T =90) 0. 600 0.05 t = 180 ⎪ ⎪ ⎪ 0.05 (T =18) 0.5 By Theorem 4. b) l = 518.10 (T =24) 0. 000 b = 14.10 (T =120) 0.05 t = 18 ⎪ ⎪ ⎪ 0. 400 l = 2. For 0 ≤ y ≤ 1. hot springs. 400 0.Y (x. 000 l = 7.10 (T =360) b = 28.1 t = 360 ⎪ ⎪ ⎩ 0 otherwise 23 (1) Address:104 pine meadows loop. 800 0. 776. the marginal PDF of X is f X (x) = ∞ −∞ f X.8. us (United States) Zip Code:71901 .00 (T =540) b = 21.Y (x.2 t = 270 ⎪ ⎪ ⎪ ⎪ 0. For each pair of values of L and B. 90 ⎪ ⎪ ⎨ 0.1 t = 120 PT (t) = ⎪ 0. y) dy (1) For x < 0 or x > 1. writing down the PMF of T is straightforward.6 (A) The time required for the transfer is T = L/B.05 (T =180) 0.Name:joey iwatsuru Email:joeyiwat@yahoo. 592. the complete expression for the PDF of Y is f Y (y) = Quiz 4. f X (x) = 0.1 t = 24 ⎪ ⎪ ⎪ ⎪ 0. AR.com Phone:5017621195 Quiz 4. For 0 ≤ x ≤ 1.20 (T =270) (3 + 6y 2 )/5 0 ≤ y ≤ 1 0 otherwise (6) From the table. f X (x) = 6 5 1 0 (x + y 2 ) dy = 6 x y + y 3 /3 5 y=1 y=0 6x + 2 6 = (x + 1/3) = 5 5 (2) The complete expression for the PDf of X is f X (x) = (6x + 2)/5 0 ≤ x ≤ 1 0 otherwise (3) By the same method we obtain the marginal PDF for Y . We can write these down on the table for the joint PMF of L and B as follows: PL . ⎧ ⎪ 0.B (l. we can calculate the time T needed for the transfer.2 t = 36.20 (T =36) 0. y) dy (x + y 2 ) d x = 6 2 x /2 + x y 2 5 x=1 x=0 (4) 6 3 + 6y 2 = (1/2 + y 2 ) = 5 5 (5) 5 0 Since f Y (y) = 0 for y < 0 or y > 1.

Thus f W (0) = 0 and f W (1) = 1. The calculus is simpler if we integrate over the region X Y > w. W = X Y satisﬁes 0 ≤ W ≤ 1. t) l=1 l=2 l=3 PT (t) (1) The expected value of L is E [L] = 1(0.T (l. PL .5. us (United States) Zip Code:71901 .1 0. As shown below.7 (A) It is helpful to ﬁrst make a table that includes the marginal PMFs.25) = 2.25) + 22 (0. AR. Since the second moment of L is E L 2 = 12 (0.25 (7) (8) (1) (2) (3) Address:104 pine meadows loop.5) + 3(0.15 0.3 0.6 t = 60 0.com Phone:5017621195 (B) First. the variance of L is Var [L] = E L 2 − (E [L])2 = 0. 24 t = 40 0.25 0. For 0 < w < 1.5) + 32 (0.25) = 4. we calculate the CDF FW (w) = P[W ≤ w].5.5 0.Name:joey iwatsuru Email:joeyiwat@yahoo. hot springs. Speciﬁcally. we ﬁnd the PDF is ⎧ 0 w<0 d FW (w) ⎨ f W (w) = = − ln w 0 ≤ w ≤ 1 ⎩ dw 0 w>1 Quiz 4.1 0.15 0. Y 1 w w 1 XY > w FW (w) = 1 − P [X Y > w] =1− =1− 1 1 w w/x 1 w (2) (3) (4) (5) (6) dy dx XY = w X (1 − w/x) d x = 1 − x − w ln x|x=1 x=w = 1 − (1 − w + w ln w) = w − w ln w The complete expression for the CDF is ⎧ w<0 ⎨ 0 FW (w) = w − w ln w 0 ≤ w ≤ 1 ⎩ 1 w>1 By taking the derivative of the CDF. integrating over the region W ≤ w is fairly complex. we observe that since 0 ≤ X ≤ 1 and 0 ≤ Y ≤ 1.2 0.4 PL (l) 0.25) + 2(0.

com Phone:5017621195 (2) The expected value of T is E [T ] = 40(0.16(a).4) = 48. it is straightforward to calculate the various expectations.1) = 96 (4) From Theorem 4.3) + 3(40)(0.6) + 602 (0. T ] = E [L T ] − E [L] E [T ] = 96 − 2(48) = 0 (5) Since Cov[L .15) + 1(60)(0. the calculations become easier if we ﬁrst calculate the marginal PDFs f X (x) and f Y (y). f X (x) = ∞ −∞ f X. For 0 ≤ x ≤ 1. the correlation coefﬁcient is ρ L . y) d x = 0 2 xy dx = 1 2 x y 2 x=1 = x=0 y 2 (13) The complete expressions for the marginal PDFs are f X (x) = 2x 0 ≤ x ≤ 1 0 otherwise f Y (y) = y/2 0 ≤ y ≤ 2 0 otherwise (14) From the marginal PDFs. (3) The correlation is 3 (4) (5) (6) E [L T ] = t=40. f Y (y) = ∞ −∞ f X. (11) (B) As in the discrete case. Thus Var[T ] = E T 2 − (E [T ])2 = 2400 − 482 = 96. The second moment of T is E T 2 = 402 (0. y) dy = 0 2 1 x y dy = x y 2 2 y=2 = 2x y=0 (12) Similarly.Y (x. the covariance of L and T is Cov [L .2) + 3(60)(0. for 0 ≤ y ≤ 2.Name:joey iwatsuru Email:joeyiwat@yahoo.4) = 2400.1) + 2(60)(0. AR. T ] = 0.60 l=1 lt PL T (lt) (7) (8) (9) (10) = 1(40)(0.15) + 2(40)(0.6) + 60(0.T = 0.Y (x. us (United States) Zip Code:71901 . hot springs. 25 Address:104 pine meadows loop.

dy = 3 y3 3 = 0 8 9 (21) (4) The covariance of X and Y is Cov [X.T |A (l.T (3.com Phone:5017621195 (1) The ﬁrst and second moments of X are E [X ] = E X2 = ∞ −∞ ∞ −∞ x f X (x) d x = 0 1 2x 2 d x = 1 2 3 1 2 (15) (16) (17) x 2 f X (x) d x = 0 2x 3 d x = The variance of X is Var[X ] = E[X 2 ] − (E[X ])2 = 1/18. 60) + PL . T ) = (3. (L .t) P[A] (1) 0 lt > 80 otherwise (2) Address:104 pine meadows loop. 40) and (L .T (3.45 By Deﬁnition 4. T ) = (3. 60). (3) The correlation of X and Y is E [X Y ] = = ∞ ∞ −∞ −∞ 1 2 2 2 0 0 x y f X.T (l.9. P [A] = P [V > 80] = PL . 60) = 0. us (United States) Zip Code:71901 . PL . (22) (5) Since Cov[X.Y = 0. T ) = (2.Name:joey iwatsuru Email:joeyiwat@yahoo. 40) + PL . AR.Y (x.T (2. t) = 26 PL . Quiz 4. hot springs. 60). the correlation coefﬁcient is ρ X. dy 1 0 (20) 2 x3 x y d x. y) d x. Y ] = E [X Y ] − E [X ] E [Y ] = 2 8 − 9 3 4 3 = 0. (2) The ﬁrst and second moments of Y are E [Y ] = E Y2 4 1 2 y dy = 3 −∞ 0 2 ∞ 2 1 = y 2 f Y (y) dy = y 3 dy = 2 −∞ 0 2 y f Y (y) dy = ∞ 2 (18) (19) The variance of Y is Var[Y ] = E[Y 2 ] − (E[Y ])2 = 2 − 16/9 = 2/9. Y ] = 0.8 (A) Since the event V > 80 occurs only for the pairs (L .

y) ∈ B 0 otherwise K x y 40 ≤ y ≤ 60.T |A (l.Name:joey iwatsuru Email:joeyiwat@yahoo. y) = = f X.Y (x. P [B] = B f X. we ﬁrst calculate the probability of the conditioning event.Y |B (x.T |A (l. y) d x d y = = = 60 40 60 40 60 3 80/y xy dx dy 4000 x2 2 3 (8) dy (9) (10) (11) y 4000 80/y 9 3200 y − 2 y 40 4000 2 9 4 3 = − ln ≈ 0. we ﬁrst ﬁnd the conditional second moment E V 2 |A = l t (lt)2 PL .801 8 5 2 dy The conditional PDF of X and Y is f X. t) t = 40 t = 60 l=1 0 0 l=2 0 4/9 1/3 2/9 l=3 The conditional expectation of V can be found from the conditional PMF. AR.com Phone:5017621195 We can represent this conditional PMF in the following table: PL .T |A (l.Y (x. E [V |A] = l t lt PL . t) (5) (6) 4 1 2 = (2 · 60)2 + (3 · 40)2 + (3 · 60)2 = 18. 80/y ≤ x ≤ 3 0 otherwise 27 (12) (13) Address:104 pine meadows loop. t) (3) (4) 1 2 1 4 = (2 · 60) + (3 · 40) + (3 · 60) = 133 9 3 9 3 For the conditional variance Var[V |A]. us (United States) Zip Code:71901 . y) /P [B] (x. 400 9 3 9 It follows that Var [V |A] = E V 2 |A − (E [V |A])2 = 622 2 9 (7) (B) For continuous random variables X and Y . hot springs.

us (United States) Zip Code:71901 . hot springs.30 Quiz 4.78 −∞ −∞ 60 3 40 (x y)2 f X. however. y) d x d y K x 3 y3 d x d y y3 x 4 x=3 x=80/y (19) (20) = (K /4) 80/y 60 40 60 40 dy (21) (22) ≈ 16. y) d x d y K x 2 y2 d x d y y2 x 3 x=3 x=80/y (14) (15) = (K /3) = (K /3) 80/y 60 40 60 40 dy (16) (17) (18) 27y 2 − 803 /y dy 60 40 = (K /3) 9y 3 − 803 ln y The conditional second moment of K given B is E W 2 |B = = ∞ ∞ ≈ 120. b) b=0 b=1 a=0 PB|A (0|0)PA (0) PB|A (1|0)PA (0) PB|A (0|2)PA (2) PB|A (1|2)PA (2) a=2 28 Address:104 pine meadows loop. b) = PB|A (b|a)PA (a). A table of the joint PMF will include all four possible combinations of A and B.Name:joey iwatsuru Email:joeyiwat@yahoo. Consequently.B (a. Incorporating the information from the given conditional PMFs can be confusing.9 (24) (A) (1) The joint PMF of A and B can be found from the marginal and conditional PMFs via PA. 1}.B (a. we can note that A has range S A = {0.Y |B (x. The general form of the table is PA. The conditional expectation of W given event B is E [W |B] = = ∞ ∞ −∞ −∞ 60 3 40 x y f X. 2} and B has range S B = {0. AR.com Phone:5017621195 where K = (4000P[B])−1 .Y |B (x. 116.10 (23) = (K /4) 81y 3 − 804 /y dy 60 40 = (K /4) (81/4)y 4 − 804 ln y It follows that the conditional variance of W given B is Var [W |B] = E W 2 |B − (E [W |B])2 ≈ 1528.

2)(0.6) (0. AR. b).62 a = 2 (2) ⎩ PB (0) 0 otherwise ⎧ ⎨ 16/31 a = 0 = 15/31 a = 2 (3) ⎩ 0 otherwise (4) We can calculate the conditional variance Var[A|B = 0] using the conditional PMF PA|B (a|0).3 a=2 (2) Given the conditional PMF PB|A (b|2). hot springs.B (a.32 0. y) = f Y |X (y|x) f X (x) = (2) From the given conditional PDF f Y |X (y|x). 0 ≤ x ≤ 1 0 otherwise (7) 960 961 (6) Address:104 pine meadows loop. we can calculate the the conditional PMF ⎧ 0.4) (0.5) + (1)(0.08 0.5) = 0. b) a=0 (0.B (a. us (United States) Zip Code:71901 .B (a. it is easy to calculate the conditional expectation 1 E [B|A = 2] = b=0 b PB|A (b|2) = (0)(0. we have b=0 b=1 PA. f Y |X (y|1/2) = 29 8y 0 ≤ y ≤ 1/2 0 otherwise (8) 6y 0 ≤ y ≤ x.32/0. First we calculate the conditional expected value E [A|B = 0] = a a PA|B (a|0) = 0(16/31) + 2(15/31) = 30/31 (4) The conditional second moment is E A2 |B = 0 = a a 2 PA|B (a|0) = 02 (16/31) + 22 (15/31) = 60/31 (5) The conditional variance is then Var[A|B = 0] = E A2 |B = 0 − (E [A|B = 0])2 = (B) (1) The joint PDF of X and Y is f X.4) (0.3 0.B (a.com Phone:5017621195 Substituting values from PB|A (b|a) and PA (a).6) a=2 or PA.8)(0. 0) ⎨ PA|B (a|0) = = 0.5)(0.5 (1) (3) From the joint PMF PA.62 a = 0 PA.Name:joey iwatsuru Email:joeyiwat@yahoo.3/0.Y (x. b) b = 0 b = 1 a=0 0.5)(0.

In this case. f Y (1/2) = Thus. AR. 0 ≤ x2 ≤ 2 0 otherwise 30 (2) (3) Address:104 pine meadows loop. g that fail the independence requirement. f X 1 .10 0.Name:joey iwatsuru Email:joeyiwat@yahoo. (B) (1) Since X 1 and X 2 are independent. (2) For random variables Q and G from Quiz 4.2.60 0.Y (x.Y (x.16 0. independence requires that either PX (x) = 0 or PY (y) = 0. y) = PX (x)PY (y). Thus. x2 ) = f X 1 (x1 ) f X 2 (x2 ) = (1 − x1 /2)(1 − x2 /2) 0 ≤ x1 ≤ 2. However. there are no obvious pairs q. g) = PQ (q)PG (g) for every pair q.40 0.12 0.18 0. we calculate the marginal PMFs from the table of the joint PMF PQ.com Phone:5017621195 (3) The conditional PDF of Y given X = 1/2 is f X |Y (x|1/2) = f X.09 and PX (0) = 0. 1/2) 6(1/2) =2 = f Y (1/2) 3/2 (10) ∞ −∞ f X. f X |Y (x|1/2) = f X. 1/2)/ f Y (1/2). 1).04 0.40 q=1 PG (g) 0. PQ.Y (0. g) g = 0 g = 1 g = 2 g = 3 PQ (q) q=0 0.06 0. b) PDF. Var [X |Y = 1/2] = Quiz 4. 1) = 0 = PX (0) PY (1) (1) (1 − 1/2)2 1 = 12 48 (11) Since we have found a pair x. Hence Q and G are independent.20 Careful study of the table will verify that PQ.30 0.Y (x.01.24 0.1.G (q. the conditional PDF of X is uniform (1/2. y such that PX. we see that given Y = 1/2. we integrate the joint PDF. To ﬁnd f Y (1/2). for 1/2 ≤ x ≤ 1.10 (A) (1) For random variables X and Y from Example 4. we can conclude that X and Y are dependent.1/2 ( ) d x = 1 1/2 6(1/2) d x = 3/2 (9) (4) From the pervious part. PX. Note that whenever PX. g) in Quiz 4.12 0. by the deﬁnition of the uniform (a. Unlike X and Y in part (a). it is not obvious whether they are independent.X 2 (x1 . us (United States) Zip Code:71901 .2. g. y) = 0. hot springs.08 0.Y (x.G (q. we observe that PY (1) = 0.G (q.

com Phone:5017621195 (2) Let FX (x) denote the CDF of both X 1 and X 2 . The CDF of Z = max(X 1 . we see that E[X |Y = 2] = 1 and Var[X |Y = 2] = 3/4.30. we have 1 2 2 e−2(x −x y+y )/3 .Name:joey iwatsuru Email:joeyiwat@yahoo. (1) (2) By Theorem 4. the conditional expected value and standard deviation of X given Y = y are 2 E [X |Y = y] = y/2 σ X = σ1 (1 − ρ 2 ) = 3/4. FZ (z) = (z − z 2 /4)2 (7) The complete expression for the CDF of Z is ⎧ z<0 ⎨ 0 2 /4)2 0 ≤ z ≤ 2 FZ (z) = (z − z ⎩ 1 z>1 (8) Quiz 4. ˜ (4) When Y = y = 2.29. we know that ρ = 1/2. µ1 = µ X = 0. (2) (1) Applying these facts to Deﬁnition 4. hot springs.17 and Theorem 4. and that σ1 = σ X = 1. f X. P [Z ≤ z] = P [X 1 ≤ z. f X |Y (x|2) = √ 3π/2 (5) 31 Address:104 pine meadows loop. us (United States) Zip Code:71901 . we need to ﬁnd the CDF of each X i . Speciﬁcally.17. the CDF is ⎧ x <0 ⎨ 0 x 2 /4 0 ≤ x ≤ 2 FX (x) = f X (y) dy = (6) x−x ⎩ −∞ 1 x >2 Thus for 0 ≤ z ≤ 2. X 2 ≤ z] = P [X 1 ≤ z] P [X 2 ≤ z] = [FX (z)]2 (4) (5) To complete the problem. X 2 ) is found by observing that Z ≤ z iff X 1 ≤ z and X 2 ≤ z.Y (x.11 This problem just requires identifying the various terms in Deﬁnition 4. From the PDF f X (x). σ2 = σY = 1. The conditional PDF of X given Y = 2 is simply the Gaussian PDF 1 2 e−2(x−1) /3 . That is. y) = √ 3π 2 (3) µ2 = µY = 0. from the problem statement. AR.

This observation prompts the following program: function xy=dtrianglerv(m) sx=[1. y=ceil(x. 2.y’].28. x 0 otherwise (1) Given X = x. and an independent uniform (0. . PX (x) = 1/4 x = 1. 4) PMF. That is.m). . Also. hot springs. AR. PY |X (y|x) = 1/x y = 1.px. given X = x.Name:joey iwatsuru Email:joeyiwat@yahoo. 1) random variable U . . 32 Address:104 pine meadows loop. Instead. 0 otherwise. px=0. 3. xy=[x’. us (United States) Zip Code:71901 .4].12 One straightforward method is to follow the approach of Example 4. 4. . x) PMF via Y = xU .*rand(m.com Phone:5017621195 Quiz 4.1).2. x) PMF.25*ones(4. we use an alternate approach. x=finiterv(sx.3.1)). we can generate a sample value of Y with a discrete uniform (1. Y has a discrete uniform (1. First we observe that X has the discrete uniform (1.

Y2 = y2 . Within these constraints. X 2 − X 1 = y2 . X 3 = y3 + y2 + y1 ] = (1 − p)3 p y1 +y2 +y3 (1) (2) (3) (4) By deﬁning the vector a = 1 1 1 . 6 d x2 = 6(x3 − x1 ).X 2 (x1 . we must keep in mind that f X 1 .}. us (United States) Zip Code:71901 . 2. x2 ) = f X 2 . and that f X 1 .X 3 (x2 . Thus. y3 ∈ {1.Name:joey iwatsuru Email:joeyiwat@yahoo.3 First we note that each marginal PDF is nonzero only if any subset of the xi obeys the ordering contraints 0 ≤ x 1 ≤ x2 ≤ x3 ≤ 1.X 3 (x1 . P [C] = 0 1/2 y2 1/2 y4 dy2 0 1/2 dy1 0 dy4 0 1/2 4dy3 = 1/4.1 We ﬁnd P[C] by integrating the joint PDF over the region of interest. . each Yi must be a strictly positive integer. x3 ) = 0 unless 0 ≤ x 1 ≤ 33 Address:104 pine meadows loop. (1) (2) (3) x2 x2 0 x3 x1 In particular. y3 ∈ {1.} 0 otherwise (5) Quiz 5. Y2 = X 2 − X 1 and Y3 = X 3 − X 2 . . . . Speciﬁcally. Y3 = y3 ] = P [X 1 = y1 . for y1 . AR. 6 d x1 = 6x2 . . 2.2 By deﬁnition of A.com Phone:5017621195 Quiz Solutions – Chapter 5 Quiz 5. PY (y) = P [Y1 = y1 . x3 ) = f X 1 . x3 ) = ∞ −∞ ∞ −∞ ∞ −∞ f X (x) d x3 = f X (x) d x1 = f X (x) d x2 = 1 6 d x3 = 6(1 − x2 ). y2 . x3 ) = 0 unless 0 ≤ x2 ≤ x3 ≤ 1. the complete expression for the joint PMF of Y is PY (y) = (1 − p) p a y y1 .X 3 (x2 . y2 . hot springs. X 2 = y2 + y1 . Y1 = X 1 .X 3 (x1 . x2 ) = 0 unless 0 ≤ x 1 ≤ x2 ≤ 1. . we have f X 1 . Since 0 < X 1 < X 2 < X 3 . f X 2 . (1) (2) =4 0 y2 dy2 0 y4 dy4 Quiz 5. X 3 − X 2 = y3 ] = P [X 1 = y1 .X 2 (x1 .

X 3 (x1 .X 3 (x2 . The complete expressions are f X 1 . When 0 ≤ xi ≤ 1 for each xi . x2 ) d x2 = f X 2 .X 2 (x1 . x2 ) = f X 2 . w) = 4 0 ≤ v1 ≤ v2 ≤ 1. the components have dependencies as a result of the ordering constraints Y1 ≤ Y2 and Y3 ≤ Y4 . x3 ) d x3 = f X 2 . f X 1 (x1 ) = f X 2 (x2 ) = f X 3 (x3 ) = ∞ −∞ ∞ −∞ ∞ −∞ f X 1 . AR. Y2 W= Y3 . x3 ) = f X 1 . Y4 (1) 34 Address:104 pine meadows loop.W (v.X 3 (x2 .com Phone:5017621195 x3 ≤ 1.4 In the PDF f Y (y). 0 ≤ w1 ≤ w2 ≤ 1 0 otherwise (2) Y1 . us (United States) Zip Code:71901 .X 2 (x1 .X 3 (x2 .Name:joey iwatsuru Email:joeyiwat@yahoo. hot springs. x3 ) = 6(1 − x2 ) 0 ≤ x1 ≤ x2 ≤ 1 0 otherwise 6x2 0 ≤ x2 ≤ x3 ≤ 1 0 otherwise 6(x3 − x1 ) 0 ≤ x1 ≤ x3 ≤ 1 0 otherwise (4) (5) (6) Now we can ﬁnd the marginal PDFs. We can separate these constraints by creating the vectors V= The joint PDF of V and W is f V. x3 ) d x2 = 1 x1 1 6(1 − x2 ) d x2 = 3(1 − x1 )2 6x2 d x3 = 6x2 (1 − x2 ) 2 6x2 d x2 = 3x3 (7) (8) (9) x2 x3 0 The complete expressions are f X 1 (x1 ) = f X 2 (x2 ) = f X 3 (x3 ) = 3(1 − x1 )2 0 ≤ x1 ≤ 1 0 otherwise 6x2 (1 − x2 ) 0 ≤ x2 ≤ 1 0 otherwise 2 3x3 0 ≤ x3 ≤ 1 0 otherwise (10) (11) (12) Quiz 5.

p2 = 0.com Phone:5017621195 We must verify that V and W are independent.1) random variable. PX i (x) = pix (1 − pi )5−x x = 0. we see that X 1 is a binomial (n. . for 0 ≤ w1 ≤ w2 ≤ 1. If we view each test as a trial with success probability P[L] = 0. In ﬁve trials. f W (w) = = 4(1 − w1 ) dw1 = 2 f V.W (v. .5 (A) Referring to Theorem 1. w) dv1 dv2 1 0 1 v1 (6) (7) 4 dv2 dv1 = 2 It follows that V and W have PDFs f V (v) = 2 0 ≤ v1 ≤ v2 ≤ 1 . 0.19.3. . That is. w) = f V (v) f W (w). w) dw1 dw2 1 w1 1 0 (3) (4) (5) 4 dw2 dw1 = Similarly. . for p1 = 0. .W (v. . us (United States) Zip Code:71901 . conﬁrming that V and W are independent vectors.6) random variable and X 3 is a binomial (5. A and R. x2 . the vector X = X 1 X 2 X 3 indicating the number of outcomes of each subexperiment has the multinomial PMF ⎧ 5 ⎨ x1 . 0. 1. 0. each test is a subexperiment with three possible outcomes: L.3.3)x1 (0.6 and p3 = 0.1.x3 (0. 5} ⎩ 0 otherwise We can ﬁnd the marginal PMF for each X i from the joint PMF PX (x). 0 otherwise f W (w) = 2 0 ≤ w 1 ≤ w2 ≤ 1 0 otherwise (8) It is easy to verify that f V. x3 ∈ {0.6)x2 (0. PX (x) = (1) x1 .W (v.Name:joey iwatsuru Email:joeyiwat@yahoo. . AR. p) = (5. Quiz 5. X 2 is a binomial (5. hot springs. For 0 ≤ v1 ≤ v2 ≤ 1.3) random variable. f V (v) = = 0 1 f V.1)x3 x1 + x2 + x3 = 5. 5 0 otherwise 35 5 x (2) Address:104 pine meadows loop. Similarly. 1.x2 . . however it is simpler to just start from ﬁrst principles and observe that X 1 is the number of occurrences of L in ﬁve independent tests.

X 2 = w. or X 3 = w occurs.0802 (B) Since each Yi = 2X i + 4. w = 4.1)2 + 0.288 PW (5) = PX 1 (5) + PX 2 (5) + PX 3 (5) = 0.6)2 (0.6 We start by ﬁnding the components E[X i ] = the marginal PDFs f X i (x) found in Quiz 5. we can apply Theorem 5. for w = 3. PW (3) = PX 1 (3) + PX 2 (3) + PX 3 (3) = 0. X 2 and X 3 are not independent. . Thus. we need to ﬁnd E[X i X j ] for all i and j.6 to ﬁnd the PMF of W . we see that X 1 . Hence.6)(0. the constraints on y resulting from the constraints 0 ≤ X 1 ≤ X 2 ≤ X 3 can be much more complicated. 6x 2 (1 − x) d x = 1/2. we use 3x(1 − x)2 d x = 1/4. 2) + PX (2. and w = 5.Name:joey iwatsuru Email:joeyiwat@yahoo. PW (2) = PX (1. we must use Theorem 5.32 (0.1)2 + 0. AR.3(0. since X 1 + X 2 + X 3 = 5 and since each X i is non-negative. We start with 36 Address:104 pine meadows loop. In particular. 3x 3 d x = 3/4. (1) (2) (3) E [X 2 ] = 0 1 E [X 3 ] = 0 1 To ﬁnd the correlation matrix R X .com Phone:5017621195 From the marginal PMFs. To do so.486 PW (4) = PX 1 (4) + PX 2 (4) + PX 3 (4) = 0. f 3 X 2 2 2 2 (1/8)e−(y3 −4)/2 4 ≤ y1 ≤ y2 ≤ y3 = 0 otherwise (9) (10) (6) (7) (8) Note that for other matrices A.10 to write f Y (y) = y1 − 4 y2 − 4 y3 − 4 1 .1458 = (3) (4) (5) In addition. 1) 5![0. Quiz 5. the event W = w occurs if and only if one of the mutually exclusive events X 1 = w.3: E [X 1 ] = 0 1 ∞ −∞ x f X i (x) d x of µ X .32 (0. us (United States) Zip Code:71901 . 2. PW (0) = PW (1) = 0. hot springs. 2) + PX (2.1)] 2!2!1! = 0. 2.6)2 (0. 1. Furthermore.

6x 3 (1 − x) d x = 3/10. 1 x2 1 0 2 6x2 x3 d x3 d x2 x1 x2 f X 1 . d x1 d x2 d x1 (7) (8) (9) (10) (11) (12) (13) (14) 6x1 x2 (1 − x2 ) d x2 E [X 2 X 3 ] = 0 1 = E [X 1 X 3 ] = 0 2 4 [3x2 − 3x2 ] d x2 = 2/5 1 x1 1 6x1 x3 (x3 − x1 ) d x3 d x1 . 3x 4 d x = 3/5. x2 ) .3. the cross terms are E [X 1 X 2 ] = = = 0 ∞ ∞ −∞ −∞ 1 1 0 1 x1 3 4 [x1 − 3x1 + 2x1 ] d x1 = 3/20.X 2 (x1 . 1/4 3/8 ⎦ = 80 1 2 3 3/8 9/16 (17) (18) Address:104 pine meadows loop.Name:joey iwatsuru Email:joeyiwat@yahoo. 1/5 2/5 3/5 Vector X has covariance matrix C X = R X − E [X] E [X] ⎡ ⎤ ⎡ ⎤ 1/10 3/20 1/5 1/4 ⎣3/20 3/10 2/5⎦ − ⎣1/2⎦ = 1/5 2/5 3/5 3/4 ⎡ ⎤ ⎡ 1/10 3/20 1/5 1/16 ⎣3/20 3/10 2/5⎦ − ⎣ 1/8 = 1/5 2/5 3/5 3/16 37 (15) (16) 1/4 1/2 3/4 ⎤ ⎡ ⎤ 3 2 1 1/8 3/16 1 ⎣ 2 4 2⎦ .com Phone:5017621195 the second moments: E 2 X1 = 0 1 3x 2 (1 − x)2 d x = 1/10. hot springs. x3 =1 x3 =x1 = 0 1 3 2 2 (2x1 x3 − 3x1 x3 ) d x1 = 0 1 2 4 [2x1 − 3x1 + x1 ] d x1 = 1/5. us (United States) Zip Code:71901 . AR. Summarizing the results. X has correlation matrix ⎡ ⎤ 1/10 3/20 1/5 R X = ⎣3/20 3/10 2/5⎦ . (4) (5) (6) 2 E X2 = 2 E X3 = 1 0 1 0 Using marginal PDFs from Quiz 5.

computing the covariance matrix by calculus can be a time consuming task.7 We observe that X = AZ + b where A= 2 1 .Name:joey iwatsuru Email:joeyiwat@yahoo. just a Gaussian random variable.18 that µ X = b and that C X = AA = 2 1 1 −1 2 1 5 1 = . function p=julytemps(T). by Theorem 5..0221 0.50000000000000 0. The expected value of Y is µY = µT = 80.1)/31.e. Theorem 5.0.16.m: >> julytemps([70 75 80 85 90 95]) ans = 0.16 tells us that Y is a 1 dimensional Gaussian vector. rounds off those probabilities.97792616932396 38 Address:104 pine meadows loop. or CT . Here is the long format output: >> format long >> julytemps([70 75 80 85 90 95]) ans = Columns 1 through 4 0. Thus. invoked with the command format short. AR. The covariance matrix of Y is 1 × 1 and is just equal to Var[Y ]. Its just that the M ATLAB’s short format output. hot springs.00002844263128 0. p=phi((T-80)/sqrt(CY)).(1:31)).5000 0. the ﬁrst two lines generate the 31 × 31 covariance matrix CT. Var[Y ] = ACT A . Next we calculate Var[Y ]. Since T is a Gaussian random vector. [D1 D2]=ndgrid((1:31).8 First.com Phone:5017621195 This problem shows that even for fairly simple joint PDFs. The ﬁnal step is to use the (·) function to calculate P[Y < T ].m. we observe that Y = AT where A = 1/31 1/31 · · · 1/31 .99999999922010 0. 1 −1 1 2 (2) Quiz 5. 1 −1 b= 2 . In julytemps.02207383067604 Columns 5 through 6 0.99997155736872 0.0000 1. CY=(A’)*CT*A. A=ones(31.0000. Here is the output of julytemps. i. CT=36./(1+abs(D1-D2)). us (United States) Zip Code:71901 . 0 (1) It follows from Theorem 5.9779 1.0000 Note that P[T ≤ 70] is not actually zero and that P[T ≤ 90] is not actually 1. Quiz 5.0000 0.

⎢ c1 c0 CT = ⎢ . . the i. . jth element is CT (i. ⎥ ⎢ . A=ones(31. we see that ⎡ ⎤ c0 c1 · · · c30 . ⎥. . . 39 Address:104 pine meadows loop. us (United States) Zip Code:71901 ..com Phone:5017621195 The ndgrid function is a useful to way calculate many covariance matrices. We will see in Chapters 9 and 11 that Toeplitz covariance matrices are quite common../(1+abs(0:30)).1)/31. ⎣ . However.Name:joey iwatsuru Email:joeyiwat@yahoo. c1 ⎦ .. 1 + |i − j| (1) If we write out the elements of the covariance matrix. c30 · · · c1 c0 (2) This covariance matrix is known as a symmetric Toeplitz matrix.0. CT=toeplitz(c). The function julytemps2 use the toeplitz to generate the correlation matrix CT . AR. In fact. j) = c|i− j| = 36 . . CY=(A’)*CT*A.. ⎥ . p=phi((T-80)/sqrt(CY)). hot springs. function p=julytemps2(T). c=36. C X has a special structure. in this problem. M ATLAB has a toeplitz function for generating them.

by Theorem 6. .5 E K i2 = (12 + 22 + 32 + 42 )/4 = 7. .25n Quiz 6. .3.Name:joey iwatsuru Email:joeyiwat@yahoo. us (United States) Zip Code:71901 . hot springs.5. . 4 0 otherwise (1) We can write Wn in the form of Wn = K 1 + · · · + K n . W = X + Y is nonnegative. the PDF of W = X + Y is f W (w) = ∞ −∞ f X (w − y) f Y (y) dy = 6 0 w e−3(w−y) e−2y dy (2) Fortunately. That is. (4) 40 Address:104 pine meadows loop.25 Since E[K i ] = 2. By Theorem 6.5)2 = 1.1 Let K 1 . . otherwise. Hence. Var[Wn ] = Var[K 1 ] + · · · + Var[K n ] = 1. a conmplete expression for the PDF of W is f W (w) = 6e−2w 1 − e−w 0 w ≥ 0. we note that the ﬁrst two moments of K i are E [K i ] = (1 + 2 + 3 + 4)/4 = 2. this integral is easy to evaluate.5n (5) Since the rolls are independent. .5 Thus the variance of K i is Var[K i ] = E K i2 − (E [K i ])2 = 7. . the random variables K 1 . K n are independent.5 − (2. the variance of the sum equals the sum of the variances. f W (w) = e−3w e y w 0 = 6 e−2w − e−3w (3) Since f W (w) = 0 for w < 0. For w > 0. First. K n denote a sequence of iid random variables each with PMF PK (k) = 1/4 k = 1. . .com Phone:5017621195 Quiz Solutions – Chapter 6 Quiz 6. AR. . the expected value of Wn is E [Wn ] = E [K 1 ] + · · · + E [K n ] = n E [K i ] = 2.2 Random variables X and Y have PDFs f X (x) = 3e−3x x ≥ 0 0 otherwise f Y (y) = 2e−2y y ≥ 0 0 otherwise (1) (6) (4) (2) (3) Since X and Y are nonnegative.5. . .

2(1 + 2 + 3 + 4) = 2 s=0 (2) (3) To ﬁnd higher-order moments.2(es + 8e2s + 27e3s + 64e4s ) s=0 s=0 = 0.10 says that W is a Gaussian random variable. us (United States) Zip Code:71901 .com Phone:5017621195 Quiz 6. Theorem 6. Thus to ﬁnd the PDF of W .2(es + 2e2s + 3e3s + 4e4s ) ds Evaluating the derivative at s = 0 yields E [K ] = d φ K (s) ds = 0.8 says the MGF of J is φ J (s) = (φ K (s))m = (2) (B) Since the set of α j X j are independent Gaussian random variables.4 (A) Each K i has MGF φ K (s) = E es K i = es (1 − ens ) es + e2s + · · · + ens = n n(1 − es ) ems (1 − ens )m n m (1 − es )m (1) Since the sequence of K i is independent. we continue to take derivatives: E K2 = E K3 E K4 d 2 φ K (s) ds 2 d 3 φ K (s) = ds 3 d 4 φ K (s) = ds 4 = 0.2(es + 4e2s + 9e3s + 16e4s ) s=0 s=0 =6 = 20 = 70.Name:joey iwatsuru Email:joeyiwat@yahoo.2 1 + es + e2s + e3s + e4s (1) We ﬁnd the moments by taking derivatives. Since the expectation of the sum equals the sum of the expectations: E [W ] = α E [X 1 ] + α 2 E [X 2 ] + · · · + α n E [X n ] = 0 41 (3) Address:104 pine meadows loop.2)esk = 0. hot springs.2(es + 16e2s + 81e3s + 256e4s ) s=0 s=0 Quiz 6. we need only ﬁnd the expected value and variance.3 The MGF of K is 4 φ K (s) = E es K == k=0 (0. Theorem 6.8 (4) (5) (6) (7) = 0. The ﬁrst derivative of φ K (s) is d φ K (s) = 0. AR.

The corresponding PDF is f R (r ) = (1/5)e−r/5 r ≥ 0 0 otherwise (4) This quiz is an example of the general result that a geometric sum of exponential random variables is an exponential random variable. the variance of the sum equals the sum of the variances: Var[W ] = α 2 Var[X 1 ] + α 4 Var[X 2 ] + · · · + α 2n Var[X n ] = α 2 + 2(α 2 )2 + 3(α 2 )3 + · · · + n(α 2 )n Deﬁning q = α 2 .6 to write Var[W ] = α 2 − α 2n+2 [1 + n(1 − α 2 )] (1 − α 2 )2 (6) (4) (5) 2 With E[W ] = 0 and σW = Var[W ]. we can use Math Fact B. we can write the PDF of W as f W (w) = 1 2 2π σW e−w 2 /2σ 2 W (7) Quiz 6. 1−s φ N (s) = 1 s 5e . 42 Address:104 pine meadows loop. hot springs. (3) (2) From Table 6.1. we see that R has the MGF of an exponential (1/5) random variable.com Phone:5017621195 Since the α j X j are independent.1. 1 − 4 es 5 (1) From Theorem 6. R has MGF φ R (s) = φ N (ln φ X (s)) = Substituting the expression for φ X (s) yields φ R (s) = 1 5 1 5 1 5 φ X (s) 1 − 4 φ X (s) 5 (2) −s . us (United States) Zip Code:71901 .5 (1) From Table 6.12.Name:joey iwatsuru Email:joeyiwat@yahoo. AR. each X i has MGF φ X (s) and random variable N has MGF φ N (s) where φ X (s) = 1 .

hot springs. (3) Using X i to denote the access time of block i.4013 Note that we used Table 3. the standard deviation of A is σ A = 12 (5) To use the central limit theorem.9773 = 0.25).1 to estimate P [A < 48] = P 48 − E [A] A − E [A] < σA σA 48 − 72 ≈ 12 = 1 − (2) = 1 − 0. E [A] = E [X 1 ] + · · · + E [X 12 ] = 12E [X ] = 72 msec (4) Since the X i are independent.6 (1) The expected access time is E [X ] = ∞ −∞ x f X (x) d x = 0 12 x d x = 6 msec 12 (1) (2) The second moment of the access time is E X2 = ∞ −∞ x 2 f X (x) d x = 0 12 x2 d x = 48 12 (2) The variance of the access time is Var[X ] = E[X 2 ] − (E[X ])2 = 48 − 36 = 12. us (United States) Zip Code:71901 . we use the central limit theorem and Table 3. we write P [A > 75] = 1 − P [A ≤ 75] 75 − E [A] A − E [A] ≤ =1− P σA σA 75 − 72 ≈1− 12 = 1 − 0.0227 (10) (11) (12) 43 Address:104 pine meadows loop.Name:joey iwatsuru Email:joeyiwat@yahoo.1 to look up (0.5987 = 0. we can write A = X 1 + X 2 + · · · + X 12 Since the expectation of the sum equals the sum of the expectations. AR.com Phone:5017621195 Quiz 6. (6) (7) (8) (9) (5) (4) (3) (6) Once again. Var[A] = Var[X 1 ] + · · · + Var[X 12 ] = 12 Var[X ] = 144 Hence.

The arrival time of the third train is W = X 1 + X 2 + X 3.7 Random variable K n has a binomial distribution for n trials and success probability P[V ] = 3/4. X 3 are iid exponential (λ) random variables. (1) The expected number of voice calls out of 48 calls is E[K 48 ] = 48P[V ] = 36. λ) random variable.8 The train interarrival times X 1 . P [W > 20] = P √ W −6 20 − 6 > √ ≈ Q(7/ 3) = 2.66 × 10−5 √ 12 12 (3) 44 Address:104 pine meadows loop.9687 (4) (5) Quiz 6.Name:joey iwatsuru Email:joeyiwat@yahoo.11.com Phone:5017621195 Quiz 6.16666) − 1 = 0. us (United States) Zip Code:71901 . we can use the De Moivre-Laplace approximation to estimate P [30 ≤ K 48 ≤ 42] ≈ 42 + 0. (2) The variance of K 48 is Var[K 48 ] = 48P [V ] (1 − P [V ]) = 48(3/4)(1/4) = 9 Thus K 48 has standard deviation σ K 48 = 3. hot springs. (1) In Theorem 6. X 2 . we found that the sum of three iid exponential (λ) random variables is an Erlang (n = 3. we ﬁnd that W has expected value and variance E [W ] = 3/λ = 6 Var[W ] = 3/λ2 = 12 (2) (1) By the Central Limit Theorem. From Appendix A.5 − 36 − 3 3 = 2 (2.9545 (4) Since K 48 is a discrete random variable.5 − 36 30 − 0. (3) Using the ordinary central limit theorem and Table 3.1 yields P [30 ≤ K 48 ≤ 42] ≈ Recalling that (−x) = 1 − 42 − 36 − 3 (x). AR. we have (3) 30 − 36 3 = (2) − (−2) (2) (1) P [30 ≤ K 48 ≤ 42] ≈ 2 (2) − 1 = 0.

the CDF of the Erlang (λ.’\itw’.0338 s=7/20 (7) (3) Theorem 3. SW=SX+SY. for λ = 1/2 and w = 20.sx).PY]=ndgrid(px. the Central Limit Theorem approximation grossly underestimates the true probability.9 One solution to this problem is to follow the approach of Example 6.com Phone:5017621195 (2) To use the Chernoff bound.sy=0:100. Quiz 6. px=binomialpmf(100.py).m sx=0:100. it should be apparent that the finitepmf function is implementing the convolution of the two PMFs.5. [PX. P [W > 20] = 1 − FW (20) = e−10 1 + 10 102 + 1! 2! = 61e−10 = 0.Name:joey iwatsuru Email:joeyiwat@yahoo. we note that the MGF of W is φW (s) = The Chernoff bound states that P [W > 20] ≤ min e−20s φ X (s) = min s≥0 s≥0 λ λ−s 3 = 1 (1 − 2s)3 e−20s (1 − 2s)3 (4) (5) To minimize h(s) = e−20s /(1 − 2s)3 . [SX. PW=PX. it is a valid bound.pw.*PY. Applying s = 7/20 into the Chernoff bound yields P [W > 20] ≤ e−20s (1 − 2s)3 = (10/3)3 e−7 = 0. By contrast. pw=finitepmf(SW.100.11 says that for any w > 0.’\itP_W(w)’). pmfplot(sw. AR. 45 Address:104 pine meadows loop.0.sy). A graph of the PMF PW (w) appears in Figure 2 With some thought.19: %unifbinom100. sw=unique(SW). us (United States) Zip Code:71901 .PW. 3) random variable W satisﬁes 2 (λw)k e−λw FW (w) = 1 − (8) k! k=0 Equivalently. hot springs. py=duniformpmf(0.SY]=ndgrid(sx.sw).sy). we set the derivative of h(s) to zero: −20(1 − 2s)3 e−20s + 6e−20s (1 − 2s)2 d h(s) = =0 ds (1 − 2s)6 (6) This implies 20(1 − 2s) = 6 or s = 7/20.0028 (9) (10) Although the Chernoff bound is relatively weak in that it overestimates the probability by roughly a factor of 12.

Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195

0.01 0.008 PW(w) 0.006 0.004 0.002 0 0 20 40 60 80 100 w 120 140 160 180 200

Figure 2: From Quiz 6.9, the PMF PW (w) of the independent sum of a binomial (100, 0.5) random variable and a discrete uniform (0, 100) random variable.

46

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195

**Quiz Solutions – Chapter 7
**

Quiz 7.1 An exponential random variable with expected value 1 also has variance 1. By Theorem 7.1, Mn (X ) has variance Var[Mn (X )] = 1/n. Hence, we need n = 100 samples. Quiz 7.2 The arrival time of the third elevator is W = X 1 + X 2 + X 3 . Since each X i is uniform (0, 30), (30 − 0)2 Var [X i ] = = 75. (1) E [X i ] = 15, 12 Thus E[W ] = 3E[X i ] = 45, and Var[W ] = 3 Var[X i ] = 225. (1) By the Markov inequality, P [W > 75] ≤ (2) By the Chebyshev inequality, P [W > 75] = P [W − E [W ] > 30] ≤ P [|W − E [W ]| > 30] ≤ 225 Var [W ] 1 = = 2 900 4 30 (3) (4) E [W ] 45 3 = = 75 75 5 (2)

Quiz 7.3 Deﬁne the random variable W = (X − µ X )2 . Observe that V100 (X ) = M100 (W ). By Theorem 7.6, the mean square error is E (M100 (W ) − µW )2 = Observe that µ X = 0 so that W = X 2 . Thus, µW = E X

2

Var[W ] 100

(1)

=

1 −1 1 −1

x 2 f X (x) d x = 1/3 x 4 f X (x) d x = 1/5

(2) (3)

E W2 = E X4 =

Therefore Var[W ] = E[W 2 ] − µ2 = 1/5 − (1/3)2 = 4/45 and the mean square error is W 4/4500 = 0.000889.

47

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195

Quiz 7.4 Assuming the number n of samples is large, we can use a Gaussian approximation for Mn (X ). SinceE[X ] = p and Var[X ] = p(1 − p), we apply Theorem 7.13 which says that the interval estimate Mn (X ) − c ≤ p ≤ Mn (X ) + c (1) has conﬁdence coefﬁcient 1 − α where α =2−2 √ c n . p(1 − p)

(2)

We must ensure for every value of p that 1 − α ≥ 0.9 or α ≤ 0.1. Equivalently, we must have √ c n ≥ 0.95 (3) p(1 − p) √ for every value of p. Since (x) is an increasing function of x, we must satisfy c n ≥ 1.65 p(1 − p). Since p(1 − p) ≤ 1/4 for all p, we require that 1.65 0.41 c≥ √ = √ . 4 n n The 0.9 conﬁdence interval estimate of p is 0.41 0.41 Mn (X ) − √ ≤ p ≤ Mn (X ) + √ . n n (5) (4)

√ For the 0.99 conﬁdence interval, we have α ≤ 0.01, implying (c n/( p(1− p))) ≥ 0.995. √ This implies c n√ 2.58 p(1 − p). Since p(1 − p) ≤ 1/4 for all p, we require that ≥ c ≥ (0.25)(2.58)/ n. In this case, the 0.99 conﬁdence interval estimate is 0.645 0.645 Mn (X ) − √ ≤ p ≤ Mn (X ) + √ . n n Note that if M100 (X ) = 0.4, then the 0.99 conﬁdence interval estimate is 0.3355 ≤ p ≤ 0.4645. The interval is wide because the 0.99 conﬁdence is high. Quiz 7.5 Following the approach of bernoullitraces.m, we generate m = 1000 sample paths, each sample path having n = 100 Bernoulli traces. at time k, OK(k) counts the fraction of sample paths that have sample mean within one standard error of p. The program bernoullisample.m generates graphs the number of traces within one standard error as a function of the time, i.e. the number of trials in each trace. 48 (7) (6)

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

m./nn.m). The unusual sawtooth pattern.4 0 10 20 30 40 50 60 70 80 90 100 As we would expect. stderrmat=stderr*ones(1. plot(1:n. AR.Name:joey iwatsuru Email:joeyiwat@yahoo.5): 1 0. 49 Address:104 pine meadows loop.OK.7 0. MN=cumsum(x).68. is examined in Problem 7.5000.m).5 0. stderr=sqrt(p*(1-p)).2.8 0. as m gets large.m).com Phone:5017621195 function OK=bernoullisample(n. though perhaps unexpected.m*n). OK=sum(abs(MN-p)<stderrmat. x=reshape(bernoullirv(p. The following graph was generated by bernoullisample(100.2)/m.0.’-s’). us (United States) Zip Code:71901 . the fraction of traces within one standard error approaches 2 (1) − 1 ≈ 0.9 0. nn=(1:n)’*ones(1. hot springs.p).n.6 0./sqrt((1:n)’).5.

976 photons. then we reject the hypothesis.2 From the problem statement. This rule simpliﬁes to 106 − 104 k ∈ A0 if k ≤ k = = 214.1 From the problem statement. . 975. FX (x) = FX i (x) 15 (2) = 1 − e−x 15 (3) To design a signiﬁcance test. .01 It is straightforward to show that r = − ln 1 − (0. 50 Address:104 pine meadows loop.01. . hot springs.7. (4) Thus if we observe at least 214. . let R = {X ≤ r }.33. AR.Name:joey iwatsuru Email:joeyiwat@yahoo. the conditional PMFs of K are PK |H0 (k) = PK |H1 (k) = 104k e−10 k! 4 (4) (5) 0 106k e−10 k! 6 k = 0. For a signiﬁcance level of α = 0. us (United States) Zip Code:71901 . Quiz 8. X 2 ≤ x. . otherwise k = 0. ln 100 ∗ k ∈ A1 otherwise. . 1.com Phone:5017621195 Quiz Solutions – Chapter 8 Quiz 8.01)1/15 = 1. the MAP and ML tests are the same. 1. if we observe X < 1. then we accept hypothesis H1 . That is. . . each X i has PDF and CDF f X i (x) = e−x x ≥ 0 0 otherwise FX i (x) = 0 x <0 1 − e−x x ≥ 0 (1) Hence. X 15 obeys FX (x) = P [X ≤ x] = P [X 1 ≤ x. This implies that for x ≥ 0. the ML hypothesis rule is k ∈ A0 if PK |H0 (k) ≥ PK |H1 (k) . we must choose a rejection region for X . otherwise (1) (2) 0 Since the two hypotheses are equally likely. . the CDF of the maximum of X 1 . · · · . From Theorem 8.6.33 Hence. . we obtain α = P [X ≤ r ] = (1 − e−r )15 = 0. X 15 ≤ x] = [P [X i ≤ x]]15 . A reasonable choice is to reject the hypothesis if X is too small. (3) k ∈ A1 otherwise.

Since N1 and N2 are iid Gaussian (0. FM2(:. X 2 ) ∈ A j for some j = i. E/2 + N2 > 0 (1) Because of the symmetry of the signals. the existing program sqdistor already calculates this miss probability PMISS = P01 and the false alarm probability PFA = P10 .ˆ2)< TT). us (United States) Zip Code:71901 . σ ) random variables. otherwise 0 %FM = [P(FA) P(MISS)] x=(v+randn(m.2). FM2=sqdistroc(v. P10=sum((XX+d*(XX.’\it d=0. we have √ √ P [C] = P [C|H0 ] = P E/2 + N1 > 0 P E/2 + N2 > 0 (2) √ 2 (3) = P N1 > − E/2 √ 2 − E/2 (4) = 1− σ Since (−x) = 1 − of error is (x).3.. N is Gauss(0. X 2 > 0|H0 ] = P E/2 + N1 > 0. FM5=sqdistroc(v.1’.1) %add d(v+N)ˆ2 distortion %receive 1 if x>T.1).m. 51 Address:104 pine meadows loop.TT]=ndgrid(x.d.1). sqdistroc. function FM=sqdistrocplot(v.FM1(:. .. Here is the modiﬁed code: function FM=sqdistroc(v.m.2). we have P[C] = 2( E/2σ 2 ).3) ylabel(’P_{MISS}’).T(:)).1)). the probability 2 PERR = 1 − P [C] = 1 − E 2σ 2 (5) Quiz 8.FM2(:. P01=sum((XX+d*(XX.1).2). AR.2’.2.T) %square law distortion recvr %P(error) for m bits tested %transmit v volts or -v volts. %add N volts.0.’--k’. [XX. Given H0 .0. This implies the probability of a correct decision is P[C] = P[C|H0 ]. The modiﬁed program.4 To generate the ROC. For a QPSK system. x= -v+randn(m. Equivalently.TT]=ndgrid(x.m calls sqdistroc three times to generate a plot that compares the receiver performance for the three requested values of d. legend(’\it d=0. [XX.’-k’.T(:)). the conditional probability of a correct decision is √ √ P [C|H0 ] = P [X 1 > 0.’:k’).3 For the QPSK system.. FM=[FM1 FM2 FM5].com Phone:5017621195 Quiz 8.T). FM5(:.m.T). it is easier to calculate the probability of a correct decision. . P[C|H0 ] = P[C|Hi ] for all i.Name:joey iwatsuru Email:joeyiwat@yahoo..3’.FM5(:.1. hot springs.m. a symbol error occurs when si is transmitted but (X 1 ..m is essentially the same as sqdistor except the output is a matrix FM whose columns are the false alarm and miss probabilities.m.1). FM=[P10(:) P01(:)].ˆ2)>TT)..T). Next.1)/m.0. the program sqdistrocplot. FM1=sqdistroc(v.1)/m. ’\it d=0. loglog(FM1(:.T).. xlabel(’P_{FA}’).

1 d=0.3 −5 10 10 −4 10 −3 10 PFA −2 10 −1 10 0 T=-3:0.com Phone:5017621195 To see the effect of d. us (United States) Zip Code:71901 . Figure 3: The receiver operating curve for the communications system of Quiz 8. 52 Address:104 pine meadows loop.1:3.T).4 with squared distortion. AR. sqdistrocplot(3. generated the plot shown in Figure 3. the commands T=-3:0.Name:joey iwatsuru Email:joeyiwat@yahoo.1:3. 10 0 10 −1 10 PMISS 10 10 −2 −3 −4 10 −5 d=0. sqdistrocplot(3.100000.100000.T). hot springs.2 d=0.

Y (x. the conditional PDF of Y given X is f Y |X (y|x) = 2(y+x) 1+2x−3x 2 0 x ≤y≤1 otherwise (6) (4) The MMSE estimate of Y given X = x is y M (x) = E [Y |X = x] = ˆ x 1 2y 2 + 2x y dy 1 + 2x − 3x 2 y=1 y=x (7) (8) (9) 2y 3 /3 + x y 2 = 1 + 2x − 3x 2 = 2 + 3x − 5x 3 3 + 6x − 9x 2 53 Address:104 pine meadows loop. (3) To obtain the conditional PDF f Y |X (y|x).1 (1) First.Name:joey iwatsuru Email:joeyiwat@yahoo. we calculate the marginal PDF for 0 ≤ y ≤ 1: f Y (y) = 0 y 2(y + x) d x = 2x y + x 2 x=y x=0 = 3y 2 (1) This implies the conditional PDF of X given Y is f X |Y (x|y) = f X. AR. For 0 ≤ x ≤ 1. f X (x) = x 1 2(y + x) dy = y 2 + 2x y y=1 y=x = 1 + 2x − 3x 2 (4) (5) For 0 ≤ x ≤ 1. hot springs. us (United States) Zip Code:71901 . we need the marginal PDF f X (x). y) = f Y (y) 2 3y + 2x 3y 2 0 0≤x ≤y otherwise (2) (2) The minimum mean square error estimate of X given Y = y is x M (y) = E [X |Y = y] = ˆ 0 y 2x 2 2x + 2 3y 3y d x = 5y/9 (3) ˆ Thus the MMSE estimator of X given Y is X M (Y ) = 5Y /9.com Phone:5017621195 Quiz Solutions – Chapter 9 Quiz 9.

E[T X ] = E[T ]E[X ] = 0 and E[T 2 ] = Var[T ]. Thus Cov[T. the conditional PDF of X = Y −40−40 log10 r is Gaussian with expected value −40 − 40 log10 r and variance 64. R] = E [T R] = E [T (T + X )] = E T 2 + E [T X ] (3) (2) (1) Since T and X are independent and have zero expected value. the optimum linear estimate of T given R is σT ˆ TL (R) = ρT.R (R − E [R]) + E [T ] σR Since E[R] = E[T ] = 0 and ρT. E [R] = E [T ] + E [X ] = 0 (2) Since T and X are independent. The conditional PDF of X given R is 1 2 f X |R (x|r ) = √ e−(x+40+40 log10 r ) /128 128π 54 (1) Address:104 pine meadows loop. hot springs.4. R] = = 3/2 σR Var[R] Var[T ] (4) (5) From Theorem 9.3 When R = r .R = √ √ σT Cov [T. R] = Var[T ] = 9. the variance of the sum R = T + X is Var[R] = Var[T ] + Var[X ] = 9 + 3 = 12 (3) Since T and R have expected values E[R] = E[T ] = 0. AR. Cov [T. (6) By Theorem 9. (4) From Deﬁnition 4. the mean square error of the linear estimate is 2 e∗ = Var[T ](1 − ρT.Name:joey iwatsuru Email:joeyiwat@yahoo. ˆ TL (R) = Hence a ∗ = 3/4 and b∗ = 0. the correlation coefﬁcient of T and R is ρT.2 (1) Since the expectation of the sum equals the sum of the expectations. us (United States) Zip Code:71901 .R ) = 9(1 − 3/4) = 9/4 L 2 σT (5) σR R= 2 2 σT 2 2 σT + σ X R= 3 R 4 (6) (7) Quiz 9.4.com Phone:5017621195 Quiz 9.R = σT /σ R .8.

us (United States) Zip Code:71901 . AR. which is not possible in our probability model. r ) = f X |R (x|r ) f R (r ) = 106 32π 1 √ r e−(x+40+40 log10 r ) 2 /128 (5) From Theorem 9. However.3 dB.6 m. the MAP estimate takes into account that the distance can never exceed 1000 m. This corresponds to a distance estimate of rML (−120) = 100 m. ˆ For the MAP estimate. When the measured signal ˆ strength is not too low.R (x.R (x. the above estimate will exceed 1000 m. r ) with respect to r to zero yields e−(x+40+40 log10 r ) Solving for r yields r = 10 1 25 log10 e −1 2 /128 1− 80 log10 e (x + 40 + 40 log10 r ) = 0 128 (7) 10−x/40 = (0.2 to write the ML estimate of R given X = x as rML (x) = arg max f X |R (x|r ) ˆ r ≥0 (2) We observe that f X |R (x|r ) is maximized when the exponent (x + 40 + 40 log10 r )2 is minimized. if x = −120dB.com Phone:5017621195 From the conditional PDF f X |R (x|r ). 55 Address:104 pine meadows loop. the complete description of the MAP estimate is rMAP (x) = ˆ 1000 x < −156. r ). R ≤ 1000 m. (6) rMAP (x) = arg max f X. then rMAP (−120) = 123. for very low signal strengths.6. This reﬂects the fact that large values of R are a priori more probable than small values.3 (0.R (x. yielding log10 r = −1 − x/40 or rML (x) = (0. we observe that the joint PDF of X and R is f X. hot springs. the MAP estimate is 23.3 −x/40 x ≥ −156. This minimum occurs when the exponent is zero.1236)10 (9) For example. the MAP estimate of R given X = x is the value of r that maximizes f X.1)10−x/40 m ˆ (3) (4) If the result doesn’t look correct. r ) ˆ 0≤r ≤1000 Note that we have included the constraint r ≤ 1000 in the maximization to highlight the fact that under our probability model.Name:joey iwatsuru Email:joeyiwat@yahoo. When x ≤ −156. we can use Deﬁnition 9. Setting the derivative of f X.1236)10−x/40 (8) This is the MAP estimate of R given X = x as long as r ≤ 1000 m. note that a typical ﬁgure for the signal strength might be x = −120 dB. Hence.R (x.6% larger than the ML estimate. That is.

Because µ X 2 = µY2 = 0.1. RY = E YY = E (X + W)(X + W ) = E XX + XW + WX + WW .7.1 (9) Address:104 pine meadows loop. we need to ﬁnd RY and RYX 2 . (7) (8) Because X and W are independent. 0 0.9 1 RW = 0. E [Y2 X 2 ] E [(X 2 + W2 )X 2 ] 56 (10) 1. hot springs. Y2 ] 1 =√ σ X 2 σY2 1. to compute the expected square error.4.9 1. us (United States) Zip Code:71901 . −0. E[XW ] = E[X]E[W ] = 0.7.1 −0. it follows that b∗ = 0. n = 2 and we wish to estimate X 2 given the observation vector Y = Y1 Y2 . we calculate the correlation coefﬁcient ρ X 2 .9 .Y2 ) = 1 − L Cov [X 2 .4 ˆ (1) From Theorem 9.com Phone:5017621195 Quiz 9. we need to ﬁnd RYX 2 = E [YX 2 ] = E [Y1 X 2 ] E [(X 1 + W1 )X 2 ] = .1 (2) (3) It follows that a ∗ = 1/1.0909 1.7.Name:joey iwatsuru Email:joeyiwat@yahoo. Similarly. the LMSE estimate of X 2 given Y2 is X 2 (Y2 ) = a ∗ Y2 + b∗ where a∗ = Cov [X 2 .9 . AR. 2 Cov [X 2 . To apply Theorem 9. (1) Because E[X] = E[Y] = 0. Y2 ] . Thus we can apply Theorem 9. Y2 ] = E [X 2 Y2 ] = E [X 2 (X 2 + W2 )] = E X 2 = 1 2 2 Var[Y2 ] = Var[X 2 ] + Var[W2 ] = E X 2 + E W2 = 1.Y2 = The expected square error is 2 e∗ = Var[X 2 ](1 − ρ X 2 . Note that X and W have correlation matrices RX = 1 −0. Finally. −0.1 0 .1 (4) 1 1 = = 0. it follows that E[Y] = 0. This implies RY = E XX + E WW = RX + RW = In addition.1 11 (5) (2) Since Y = X + W and E[X] = E[W] = 0. E[WX ] = 0. Var[Y2 ] b ∗ = µ X 2 − a ∗ µ Y2 .1 (6) In terms of Theorem 9.

Since X and W are independent.725Y2 . X L (Y) = a Y where a = R−1 RYX .5 Since X and W have zero expected value. the optimum linear estimator of X 2 given Y1 and Y2 is ˆ ˆ X L = a Y = −0. (11) 2 1 E X2 By Theorem 9. jth entry RW (i. (14) (13) Quiz 9. ˆ a = R−1 RYX = 11 + RW Y and the optimal linear estimator is ˆ X L (Y) = 1 11 + RW The mean square error is ˆ e∗ = Var[X ] − a RYX = 1 − 1 11 + RW L −1 −1 −1 (1) (2) (3) (4) 1 (5) Y (6) 1 (7) Now we note that RW has i. By the same reasoning. the correlation matrix of Y is RY = E YY = E (1X + W)(1 X + W ) = 11 E X 2 + 1E X W + E [WX ] 1 + E WW = 11 + RW Note that 11 is a 20 × 20 matrix with every entry equal to 1.X 2 − a2rY2 . AR. E[W1 X 2 ] = E[W1 ]E[X 2 ] = 0 and E[W2 X 2 ] = 0. j) = c|i− j|−1 . hot springs.225 0.X 2 = 0. This problem is atypical in that one does not usually get L 57 Address:104 pine meadows loop.9 RYX 2 = = . Thus E[X 1 X 2 ] −0.725 (12) Therefore. us (United States) Zip Code:71901 .com Phone:5017621195 Since X and W are independent vectors.Name:joey iwatsuru Email:joeyiwat@yahoo.7.225Y1 + 0. The question we must address is what value c minimizes e∗ . by ˆ ˆ ˆ Theorem 9.7. Y E[WX ] = 0 and E[X W ] = 0 .0725. The mean square error is ˆ Var [X 2 ] − a RYX 2 = Var [X ] − a1rY1 . This implies RYX = E [YX ] = E [(1X + W)X ] = 1E X 2 = 1. Thus. Y also has zero expected value. Thus. ˆ a = R−1 RYX 2 = Y −0.

Note in mquiz9 that v1 corresponds to the vector 1 of all ones. On the other hand. However. hot springs. RW=toeplitz(c. This would suggest that large values of c will also result in poor MSE. [msemin. AR. when c is small. function cmin=mquiz9minc(c). v1=ones(20. Thus. In this case.8 e* L 0.Name:joey iwatsuru Email:joeyiwat@yahoo.99. We note that the answer is not obviously apparent from Equation (7). for k=1:length(c). our 20 measurements will be all the same and one measurement is as good as 20 measurements. both small values and large values of c result in large MSE. In particular.af]=mquiz9(c(k)).6 0. [msec(k). i) = 1/c.com Phone:5017621195 to choose the correlation structure of the noise.4 0.01:0.2 0 0. if c is large Wi and W j are highly correlated and the separate measurements of X are very dependent. function [mse.5 c 1 As we see in the graph. we write a M ATLAB function mquiz9(c) to calculate the MSE for a given c and second function that ﬁnds plots the MSE for a range of values of c. consider the extreme case in which every Wi and W j have correlation coefﬁcient ρi j = 1. msec=zeros(size(c)). end plot(c.1).ylabel(’e_Lˆ*’). the noises Wi have high variance and we would expect our estimator to be poor. xlabel(’c’). RY=(v1*(v1’)) +RW. 58 Address:104 pine meadows loop.af]=mquiz9(c).ˆ((0:19)-1)).01:0. we will see that the answer is somewhat instructive. cmin=c(optk). we observe that Var[Wi ] = RW (i.4500 1 0. >> mquiz9minc(c) ans = 0. If this argument is not clear. To ﬁnd the optimal value of c. The following commands ﬁnds the minimum c and also produces the following graph: >> c=0. af=(inv(RY))*v1.msec). mse=1-((v1’)*af).optk]=min(msec). us (United States) Zip Code:71901 .

(3) If we sample the process in part (a) every T seconds. . continuous valued process. then we obtain a continuous time. (4) Rounding the samples in part (c) to the nearest integer degree yields a discrete time. the call completion times of the H calls that hang up Quiz 10. . . we round the temperature to the nearest degree.1 There are many correct answers to this question.2 (1) We obtain a continuous time.3 (1) Each resistor has resistance R in ohms with uniform PDF f R (r ) = 0. A correct answer speciﬁes enough random variables to specify the sample path exactly. s) is • m(0. X N .Name:joey iwatsuru Email:joeyiwat@yahoo. s).01 950 ≤ r ≤ 1050 0 otherwise (1) The probability that a test produces a 1% resistor is p = P [990 ≤ R ≤ 1010] = 1010 990 (0. . then we obtain a discrete time. discrete valued process. . D H . One choice for an alternate set of random variables that would specify m(t. the number of new calls that arrive during the experiment • X 1 . . hot springs. the number of calls that hang up during the experiment • D1 . . (2) If at every moment in time. Quiz 10. the interarrival times of the N new arrivals • H . discrete valued process. . continuous valued process when we record the temperature as a continuous waveform over time.01) dr = 0. us (United States) Zip Code:71901 .com Phone:5017621195 Quiz Solutions – Chapter 10 Quiz 10. AR. the number of ongoing calls at the start of the experiment • N .2 (2) 59 Address:104 pine meadows loop.

. . Consequently. (4) From Theorem 2. xn ) = i=1 f X (xi ) = 1 2 2 e−(x1 +···+xn )/2 n/2 (2π ) (2) 60 Address:104 pine meadows loop. 9 otherwise (4) Since p = 0. the number of additional trials needed to ﬁnd the second 1% resistor once again has a geometric PMF with expected value 1/ p since each independent trial is a success with probability p.2) = 0. independent of any other resistor. A success occurs on a trial with probability p if we ﬁnd a 1% resistor. .8)4 (0. . . . the probability the ﬁrst 1% resistor is found in exactly ﬁve seconds is PT1 (5) = (0.11.. just as in Example 2. That is. . Each resistor is a 1% resistor with probability p.1. . .com Phone:5017621195 (2) In t seconds. 1) random variable.4 Since each X i is a N (0. . The ﬁrst 1% resistor is found at time T1 = t if we observe failures on trials 1. the joint PDF of X = X 1 · · · X n is k (1) f X (x) = f X (1).Name:joey iwatsuru Email:joeyiwat@yahoo. . E[T1 ] = 1/ p = 5. a geometric random variable with success probability p has expected value 1/ p. the number of 1% resistors found has the binomial PMF PN (t) (n) = p n (1 − p)t−n n = 0. each X i has PDF 1 2 f X (i) (x) = √ e−x /2 2π By Theorem 10. exactly t resistors are tested. Thus E [T2 |T1 = 10] = E [T1 |T1 = 10] + E T |T1 = 10 = 10 + E T = 10 + 5 = 15 (5) (6) Quiz 10. 2. t 0 otherwise t n (3) (3) First we will ﬁnd the PMF of T1 . This problem is easy if we view each resistor test as an independent trial. hot springs..X (n) (x1 .2. ..08192.. us (United States) Zip Code:71901 .5. t − 1 followed by a success on trial t. . . In this problem. (5) Note that once we ﬁnd the ﬁrst 1% resistor. T1 has the geometric PMF PT1 (t) = (1 − p)t−1 p t = 1. T2 = T1 + T where T is independent and identically distributed to T1 . Hence. 1. AR. .

61 Address:104 pine meadows loop. has the same PDF as Y1 (t). Since one hour equals 3600 sec and the Poisson process has a rate of 10 packets/sec. Since X 1 and X 2 are independent exponential (λ) random variables. This implies < √ [W (t) − W (s)]/ α is independent of W (s )/ α for all s ≥ s . 1. Theorem 3. PM1 . . λ) random variable. . see Theorem 6. . That is. X (t) − X (s) = W (t) − W (s) √ α (1) Since W (t) − W (s) is a Gaussian random variable.11. . otherwise (1) Since M1 and M2 are independent. . the expected number of packets in each hour is E[Mi ] = α = 36. m 2 ) = PM1 (m 1 ) PM2 (m 2 ) = ⎪ ⎪ ⎩ 0 otherwise. us (United States) Zip Code:71901 . . Since s ≥ s . we note that for t > s.5 The ﬁrst and second hours are nonoverlapping intervals. Y1 is an Erlang (n = 2. AR. Since we count only evennumbered arrival for N (t). Since Yi (t). Thus N (t) is not a Poisson process.6 To answer whether N (t) is a Poisson process. . 1. Thus X (t) is a Brownian motion process with variance Var[X (t)] = t. X 2 .13 states that W (t) − W (s) is Gaussian with expected value E [X (t) − X (s)] = and variance E (W (t) − W (s))2 = E (W (t) − W (s))2 α(t − s) = α α (3) E [W (t) − W (s)] =0 √ α (2) Consider s ≤ s √ t. Quiz 10. . . . X (t) − X (s) is independent of X (s ) for all s ≥ s . . denote the interarrival times of the N (t) process. 1.M2 (m 1 .com Phone:5017621195 Quiz 10. . W (t) − W (s) is independent of W (s ). the time until the ﬁrst arrival of the N (t) is Y1 = X 1 + X 2 .Name:joey iwatsuru Email:joeyiwat@yahoo. the ith interarrival time of the N (t) process. This implies M1 and M2 are independent Poisson random variables each with PMF PMi (m) = α m e−α m! 0 m = 0. . we can conclude that the interarrival times of N (t) are not exponential random variables. the joint PMF of M1 and M2 is ⎧ α m 1 +m 2 e−2α m 1 = 0. ⎪ m 1 !m 2 ! ⎪ ⎨ m 2 = 0. 2. we look at the interarrival times. Let X 1 . (2) Quiz 10. 000.7 First. hot springs. .

com Phone:5017621195 Quiz 10. . .X nm +k (x1 .10 We must check whether each function R(τ ) meets the conditions of Theorem 10. . τ ) + R N (t. Quiz 10. .. n m and time offset k.8 First we ﬁnd the expected value µY (t) = µ X (t) + µ N (t) = µ X (t). (2) (3) (4) Quiz 10. 2 (3) R3 (τ ) = e−τ cos τ is not valid because R3 (−2π ) = e2π cos 2π = e2π > 1 = R3 (0) (4) R4 (τ ) = e−τ sin τ also cannot be an autocorrelation function because 2 (2) R4 (π/2) = e−π/2 sin π/2 = e−π/2 > 0 = R4 (0) (3) 62 Address:104 pine meadows loop. . n m + k.Name:joey iwatsuru Email:joeyiwat@yahoo. E[X (t)N (t )] = E[X (t)]E[N (t )] = 0.X nm +k (x1 .12: R(τ ) ≥ 0 R(τ ) = R(−τ ) |R(τ )| ≤ R(0) (1) (3) (2) (1) (1) R1 (τ ) = e−|τ | meets all three conditions and thus is valid.. .. . we observe that since X (t) and N (t) are independent and since N (t) has zero expected value. f X n1 . . Since RY (t. xm ) = f X (x1 ) f X (x2 ) · · · f X (xm ) We can conclude that the iid random sequence is stationary. (1) To ﬁnd the autocorrelation. hot springs. . . ..X nm (x1 . xm ) Since the random sequence is iid.. X 1 .. .. us (United States) Zip Code:71901 . τ ) = E[Y (t)Y (t + τ )]. . τ ) = E [(X (t) + N (t)) (X (t + τ ) + N (t + τ ))] = E [X (t)X (t + τ )] + E [X (t)N (t + τ )] + E [X (t + τ )N (t)] + E [N (t)N (t + τ )] = R X (t.. . . . X 2 . xm ) = f X (x1 ) f X (x2 ) · · · f X (xm ) Similarly. τ ). is a stationary random sequence if for all sets of time instants n 1 . .. (2) R2 (τ ) = e−τ also is valid. AR. .. . f X n1 +k . . . .X nm (x1 . ... ..14. . . f X n1 . ..9 From Deﬁnition 10.. we have RY (t.. xm ) = f X n1 +k . for time instants n 1 + k.

AR. Quiz 10. we can conclude that Y (t) is a wide sense stationary process. us (United States) Zip Code:71901 . we see the same second order statistics.X (t+1) (x0 . we see that by viewing a process backwards in time. τ ) depends on both t and τ .11 (1) The autocorrelation of Y (t) is RY (t. R X Y (t. τ ) = E [X (t)Y (t + τ )] = E [X (t)X (−t − τ )] = R X (t − (−t − τ )) = R X (2t + τ ) (4) (5) (6) Since R X Y (t. suppose R X (τ ) = e−|τ | so that samples of X (t) far apart in time have almost no correlation. τ ) = E [Y (t)Y (t + τ )] = E [X (−t)X (−t − τ )] = R X (−t − (−t − τ )) = R X (τ ) (1) (2) (3) Since E[Y (t)] = E[X (−t)] = µ X . hot springs. To see why this is. In this case. we can check whether they are jointly wide sense stationary by seeing if R X Y (t.12 From the problem statement.Name:joey iwatsuru Email:joeyiwat@yahoo. Y (t) = X (−t) and X (t) become less and less correlated. we conclude that X (t) and Y (t) are not jointly wide sense stationary. x1 ) = 1 (2π )n/2 [det (CX )]1/2 1 3π 2 e− 3 2 2 2 x0 −x0 x1 +x1 (5) 1 exp − x C−1 x X 2 (6) (7) =√ 63 Address:104 pine meadows loop. τ ) is just a function of τ . (2) Since X (t) and Y (t) are both wide sense stationary processes. E [X (t)] = E [X (t + 1)] = 0 E [X (t)X (t + 1)] = 1/2 Var[X (t)] = Var[X (t + 1)] = 1 The Gaussian random vector X = X (t) X (t + 1) sponding inverse CX = Since 1 1/2 1/2 1 C−1 = X (1) (2) (3) has covariance matrix and corre- 4 1 −1/2 1 3 −1/2 (4) 4 4 2 1 −1/2 x0 2 x − x0 x+ x1 = 1 x1 3 −1/2 3 0 the joint PDF of X (t) and X (t + 1) is the Gaussian vector PDF x C−1 x = x0 x1 X f X (t). as t gets larger. In fact. In this case.com Phone:5017621195 Quiz 10.

After the head-of-schedule event is completed and any new events (departures in this system) are scheduled. admit the arrival. satisﬁes M(t) < c = 120. hot springs. A simulation of the system moves from one time instant to the next by maintaining a chronological schedule of future events (arrivals and departures) to be executed. we must block the call.13. when M(t) = c. The system evolves via a sequence of discrete events. The program simply executes the event at the head of the schedule. Call blocking can be implemented by setting the service time of the call to zero so that the call departs as soon as it arrives.com Phone:5017621195 120 100 80 M(t) 60 40 20 0 0 10 20 30 40 50 t 60 70 80 90 100 Figure 4: Sample path of 100 minutes of the blocking switch of Quiz 10. • When the head-of-schedule event is the kth arrival is at time t. namely arrivals and departures. – If M(t) < c. we cannot generate these vectors all at once. Schedule the ﬁrst arrival to occur at S1 . Start at time t = 0 with an empty system. where Sk is an exponential (λ) random variable. 2.28 admits a deceptively simple solution in terms of the vector of arrivals A and the vector of departures D. increase the system state n by 1. at discrete time instances. block the arrival. we know the system state cannot change until the next scheduled event. In particular. 3. • If the head of schedule event is a departure. reduce the system state n by 1. The blocking switch is an example of a discrete event system.13 The simple structure of the switch simulation of Example 10. Otherwise. us (United States) Zip Code:71901 . 64 Address:104 pine meadows loop. we need to know that M(t). check the state M(t). Quiz 10.Name:joey iwatsuru Email:joeyiwat@yahoo. Examine the head-of-schedule event. – If M(t) = c. Delete the head-of-schedule event and go to step 2. With the introduction of call blocking. do not schedule a departure event. The logic of such a simulation is 1. and schedule a departure to occur at time t + Sn . AR. when an arrival occurs at time t. the number of ongoing calls. an exponential (λ) random variable.

We can estimate the probability a call is blocked as b ˆ = 0.m). or event(i)=-1 if the ith scheduled event is a departure. When the program is passed a vector t. for very complicated systems. roughly the ﬁrst 100 minutes are needed to load up the switch since the switch is idle when the simulation starts at time t = 0.0048. In our simulation.1.t). The rest of the gap between 0. it is common to implement the event schedule as a linked list where each item in the list has a data structure indicating an event timestamp and the type of the event. generated a simulation lasting 5. hot springs. AR. (1) Pb = a+b In Chapter 12. this says that roughly the ﬁrst two percent of the simulation time was unusual. 65 Address:104 pine meadows loop. a simple (but not elegant) way to do this is to have maintain two vectors: time is a list of timestamps of scheduled events and event is a the list of event types. we can calculate that the exact blocking probability is Pb = 0.120.93). Nevertheless. we will learn that the blocking switch is an example of an M/M/c/c queue. plot(t. the output [m a b] is such that m(i) is the number of ongoing calls at time t(i) while a and b are the number of admits and blocks. Thus for all times t(i) between the current head-of-schedule event and the next.0057. Thus this would account for only part of the disparity. event(i)=1 if the ith scheduled event is an arrival. In M ATLAB. [m. a result known as the “Erlang-B formula.Name:joey iwatsuru Email:joeyiwat@yahoo.0. In most programming languages. Chapter 12 develops techniques for analyzing and simulating systems described by Markov chains that are much simpler than the discrete event simulation technique shown here.0048 and 0.a. we set m(i) to the current switch state.000 minute full simulation produced a=49658 admitted calls and b=239 blocked calls.1:5000. The 5.000 minutes.” From the Erlang-B formula. we use the vector t as the set of time instances at which we inspect the system state.000 minute simulation. us (United States) Zip Code:71901 . A sample path of the ﬁrst 100 minutes of that simulation is shown in Figure 4. a kind of Markov chain. However. In this case.com Phone:5017621195 Thus we know that M(t) will stay the same until then. The complete program is shown in Figure 5. One reason our simulation underestimates the blocking probability is that in a 5.b]=simblockswitch(10. Note that in Chapter 12.0057 is that a simulation that includes only 239 blocks is not all that likely to give a very accurate result for the blocking probability. the discrete event simulation is widely-used and often very efﬁcient simulation method. The following instructions t=0:0. we will learn that the exact blocking probability is given by Equation (12.

1) ].mu.. end end Figure 5: Discrete event simulation of the blocking switch of Quiz 10. n=n+1. tmax=max(t).1). blocks=0. time=[time(b4depart) depart time(˜b4depart)].1). hot springs.13.admits. event=[event(b4arrival) 1 event(˜b4arrival)].blocks]=simblockswitch(lam. b4depart=time<depart. time(1)= [ ]. n=0. % clear current event if (eventnow==1) % arrival arrival=timenow+exponentialrv(lam.. if n<c %call admitted admits=admits+1. 66 Address:104 pine meadows loop. event(1)=[ ]. %first event is an arrival timenow=0. %total # blocks admits=0. depart=timenow+exponentialrv(mu. event=[event(b4depart) -1 event(˜b4depart)]. event=[ 1 ]. while (timenow<tmax) M((timenow<=t)&(t<time(1)))=n. else blocks=blocks+1. % next arrival b4arrival=time<arrival. time=[time(b4arrival) arrival time(˜b4arrival)]..com Phone:5017621195 function [M. immed departure disp(sprintf(’Time %10. timenow=time(1).admits.Name:joey iwatsuru Email:joeyiwat@yahoo.t). us (United States) Zip Code:71901 . end elseif (eventnow==-1) %departure n=n-1.blocks)). % # in system time=[ exponentialrv(lam. AR. %one more block. %total # admits M=zeros(size(t)). eventnow=event(1). timenow.c.3d Admits %10d Blocks %10d’.

µY = µ X ∞ −∞ h(t)dt = 2 0 ∞ e−t dt = 2 (1) Since R X (τ ) = δ(τ ).2 The expected value of the output is ∞ ∞ −τ h(u)h(τ + u) du = ∞ −τ 1 e−u e−τ −u du = eτ 2 (4) (5) µY = µ X n=−∞ h n = 0. the autocorrelation function of the output is RY (τ ) = ∞ −∞ ∞ h(u) −∞ h(v)δ(τ + u − v) dv du = ∞ −∞ h(u)h(τ + u) du (2) For τ > 0. hot springs. RY (τ ) = Hence.Name:joey iwatsuru Email:joeyiwat@yahoo. we 2 can double check.1 By Theorem 11. AR. we can deduce that RY (τ ) = 1 e−|τ | by symmetry. 67 Address:104 pine meadows loop. Just to be safe though. 1 RY (τ ) = e−|τ | 2 Quiz 11. The variance of Yn is Var[Yn ] = E[Yn ] = RY [0] = 1.2.com Phone:5017621195 Quiz Solutions – Chapter 11 Quiz 11.5(1 + −1) = 0 (1) The autocorrelation of the output is 1 1 RY [n] = i=0 j=0 h i h j R X [n + i − j] 1 n=0 0 otherwise (2) (3) = 2R X [n] − R X [n − 1] − R X [n + 1] = 2 Since µY = 0. us (United States) Zip Code:71901 . For τ < 0. we have RY (τ ) = 0 ∞ e−u e−τ −u du = e−τ 0 ∞ 1 e−2u du = e−τ 2 (3) For τ < 0.

Thus E[Y] = 0. 4 0 0 1 1 1 1 (2) (3) In this case.8.1 0. Since R X [n] = δn .7.5.2 −0. In this problem. Y = Y33 Y34 Y35 is a Gaussian random vector since X n is a Gaussian random process. Moreover. by Theorem 11.2 0 −15 −10 −5 0 f 5 10 15 SX(f) 6 4 2 0 −1500−1000 −500 10 R (τ) 5 0 −5 −2 −1 0 τ 1 x 10 2 −3 0 f 500 1000 1500 10 RX(τ) 5 0 −5 −0.Name:joey iwatsuru Email:joeyiwat@yahoo.5 to ﬁnd the autocorrelation function ∞ ∞ RY [n] = i=−∞ j=−∞ h i h j R X [n + i − j]. each Yn has expected value E[Yn ] = µ X ∞ n=−∞ h n = 0. RX = I. AR. following Theorem 11.13 with µX = 0 and A = H. we obtain RY = HRX H . hot springs. 68 Address:104 pine meadows loop.6 and to use Theorem 11. it is simpler to observe that Y = HX where X = X 30 X 31 X 32 X 33 X 34 X 35 and ⎡ ⎤ 1 1 1 1 0 0 1 H = ⎣0 1 1 1 1 0⎦ . One way to ﬁnd the RY is to observe that RY has the Toeplitz structure of Theorem 11. Fo ﬁnd the PDF of the Gaussian vector Y.3 By Theorem 11.2 X (a) W = 10 (b) W = 1000 Figure 6: The autocorrelation R X (τ ) and power spectral density S X ( f ) for process X (t) in Quiz 11. using Equation (1) is surprisingly tedious because we still need to sum over all i and j such that n + i − j = 0. (1) Despite the fact that R X [k] is an impulse. which equals the correlation matrix RY since Y has zero expected value.com Phone:5017621195 x 10 8 0. us (United States) Zip Code:71901 . the identity matrix. or by directly applying Theorem 5.6 SX(f) 0. Quiz 11.5.4 0. we need to ﬁnd the covariance matrix CY .1 0 τ 0.

81 81 = . (7) Equation (7) shows that one of the nicest features of the multivariate Gaussian distribution is that y C−1 y is a very concise representation of the cross-terms in the exponent of f Y (y). X n+1 = 400 400 (4) to ﬁnd the mean square error. one approach is to follow the method of Example 11.9 1. In this case.Name:joey iwatsuru Email:joeyiwat@yahoo.81 X n−1 R X [2] = .13 and to directly calculate ˆ (5) e∗ = E (X n+1 − X n+1 )2 . hot springs. 0. C−1 = 16 ⎣−1/2 Y 1/12 −1/2 7/12 Thus. L 69 Address:104 pine meadows loop. X n+1 = Xn 0. the PDF of Y is f Y (y) = 1 (2π )3/2 [det (CY )]1/2 1 exp − y C−1 y .9 1.9 400 261 (3) It follows that the ﬁlter is h = 261/400 81/400 and the MMSE linear predictor is 81 261 ˆ X n−1 + Xn.4 This quiz is solved using Theorem 11. Y 2 (5) (6) A disagreeable amount of algebra will show det(CY ) = 3/1024 and that the PDF can be “simpliﬁed” to 16 7 2 7 2 1 2 y33 + y34 + y35 − y33 y34 + y33 y35 − y34 y35 exp −8 f Y (y) = √ 3 12 12 6 6π . CY = RY = HH = 16 2 3 4 (4) It follows (very quickly if you use M ATLAB for 3 × 3 matrix inversion) that ⎡ ⎤ 7/12 −1/2 1/12 1 −1/2⎦ . AR. us (United States) Zip Code:71901 .1 0.9 R X [1] −1 (1) (2) The MMSE linear ﬁrst order ﬁlter for predicting X n+1 at time n is the ﬁlter h such that ← − 1.9 R X [0] R X [1] = 0.9 for the case of k = 1 and M = 2.1 0.1 R X [1] R X [0] 0.9 h = R−1 RXn X n+1 = Xn 0. Y Quiz 11.1 1 0.com Phone:5017621195 Thus ⎡ ⎤ 4 3 2 1 ⎣ 3 4 3⎦ . Xn = X n−1 X n and RXn = and RXn X n+1 = E 1.

This method is workable for this simple problem but becomes increasingly tedious for higher order filters. Instead, we can derive the mean square error for an arbitrary prediction filter ←h. Since X̂_{n+1} = ←h′ X_n,

    e*_L = E[(X_{n+1} − ←h′ X_n)^2]    (6)
         = E[(X_{n+1} − ←h′ X_n)(X_{n+1} − ←h′ X_n)]    (7)
         = E[(X_{n+1} − ←h′ X_n)(X_{n+1} − X_n′ ←h)]    (8)

After a bit of algebra, we obtain

    e*_L = R_X[0] − 2 ←h′ R_{Xn X_{n+1}} + ←h′ R_{Xn} ←h.    (9)

With the substitution ←h = R_{Xn}^{−1} R_{Xn X_{n+1}},    (10)

we obtain

    e*_L = R_X[0] − R′_{Xn X_{n+1}} R_{Xn}^{−1} R_{Xn X_{n+1}}    (11)
         = R_X[0] − ←h′ R_{Xn X_{n+1}}.    (12)

Note that this is essentially the same result as Theorem 9.7 with Y = X_n, X = X_{n+1} and â = ←h. It is noteworthy that the result is derived in a much simpler way in the proof of Theorem 9.7 by using the orthogonality property of the LMSE estimator. In any case, the mean square error is

    e*_L = R_X[0] − ←h′ R_{Xn X_{n+1}} = 1.1 − [81/400 261/400] [0.81; 0.9] = 0.3487.    (13)

Recalling that the blind estimate would yield a mean square error of Var[X] = 1.1, we see that observing X_{n−1} and X_n improves the accuracy of our prediction of X_{n+1}.

Quiz 11.5
(1) The average power of X(t) is

    E[X^2(t)] = ∫_{−∞}^{∞} S_X(f) df = ∫_{−W}^{W} (5/W) df = 10 Watts.    (1)

(2) The autocorrelation function is the inverse Fourier transform of S_X(f). Consulting Table 11.1, we note that

    S_X(f) = 10 (1/(2W)) rect(f/(2W)).    (2)

It follows that the inverse transform of S_X(f) is

    R_X(τ) = 10 sinc(2Wτ) = 10 sin(2πWτ)/(2πWτ).    (3)

(3) For W = 10 Hz and W = 1 kHz, graphs of S_X(f) and R_X(τ) appear in Figure 6.
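Both the 3 × 3 matrix inversion of Quiz 11.3 and the order 2 predictor of Quiz 11.4 are easy to reproduce numerically. The fragment below is a sketch (it is not part of matcode.zip) that repeats those computations.

%quiz113check.m
%Sketch (not in matcode.zip): reproduces the matrix computations
%of Quiz 11.3 and the predictor and MSE of Quiz 11.4.
H=[1 1 1 1 0 0; 0 1 1 1 1 0; 0 0 1 1 1 1]/4;
CY=H*H'                  %Quiz 11.3: equals [4 3 2; 3 4 3; 2 3 4]/16
inv(CY)                  %the inverse in Equation (5)
det(CY)                  %equals 3/1024
RXn=[1.1 0.9; 0.9 1.1];  %Quiz 11.4: correlation matrix of [X(n-1) X(n)]'
r=[0.81; 0.9];           %cross correlation with X(n+1)
h=RXn\r                  %coefficients of X(n-1) and X(n): [81/400; 261/400]
eL=1.1-h'*r              %mean square error, 0.3487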

Quiz 11.6
In a sampled system, the discrete time impulse δ[n] has a flat discrete Fourier transform. That is, if R_X[n] = 10 δ[n], then

    S_X(φ) = Σ_{n=−∞}^{∞} 10 δ[n] e^{−j2πφn} = 10.    (1)

Thus S_X(φ) = 10 for all φ. (This quiz is really lame!)

Quiz 11.7
Since Y(t) = X(t − t_0),

    R_XY(t, τ) = E[X(t)Y(t + τ)] = E[X(t)X(t + τ − t_0)] = R_X(τ − t_0).    (1)

We see that R_XY(t, τ) = R_XY(τ) = R_X(τ − t_0). From Table 11.1, we recall the property that g(τ − τ_0) has Fourier transform G(f) e^{−j2πfτ_0}. Thus the Fourier transform of R_XY(τ) = R_X(τ − t_0) is

    S_XY(f) = S_X(f) e^{−j2πf t_0}.    (2)

Quiz 11.8
We solve this quiz using Theorem 11.17. First we need some preliminary facts. Let a_0 = 5,000, so that

    R_X(τ) = e^{−a_0|τ|} = (1/a_0) a_0 e^{−a_0|τ|}.    (1)

Consulting with the Fourier transforms in Table 11.1, we see that

    S_X(f) = (1/a_0) (2a_0^2/(a_0^2 + (2πf)^2)) = 2a_0/(a_0^2 + (2πf)^2).    (2)

The RC filter has impulse response h(t) = a_1 e^{−a_1 t} u(t), where u(t) is the unit step function and a_1 = 1/RC, with RC = 10^{−4} the filter time constant. From Table 11.1,

    H(f) = a_1/(a_1 + j2πf).    (3)

(1) By Theorem 11.17,

    S_XY(f) = H(f) S_X(f) = 2a_0 a_1/((a_1 + j2πf)(a_0^2 + (2πf)^2)).    (4)

(2) Again by Theorem 11.17,

    S_Y(f) = H*(f) S_XY(f) = |H(f)|^2 S_X(f).    (5)

Note that

    |H(f)|^2 = H(f) H*(f) = (a_1/(a_1 + j2πf)) (a_1/(a_1 − j2πf)) = a_1^2/(a_1^2 + (2πf)^2).    (6)

Thus,

    S_Y(f) = |H(f)|^2 S_X(f) = 2a_0 a_1^2/((a_1^2 + (2πf)^2)(a_0^2 + (2πf)^2)).    (7)

(3) To find the average power at the filter output, we can either use basic calculus and calculate ∫_{−∞}^{∞} S_Y(f) df directly or we can find R_Y(τ) as an inverse transform of S_Y(f). Using partial fractions and the Fourier transform table, the latter method is actually less algebra. In particular, some algebra will show that

    S_Y(f) = K_0/(a_0^2 + (2πf)^2) + K_1/(a_1^2 + (2πf)^2)    (8)

where

    K_0 = 2a_0 a_1^2/(a_1^2 − a_0^2),    K_1 = −2a_0 a_1^2/(a_1^2 − a_0^2).    (9), (10)

Consulting with Table 11.1, we see that

    R_Y(τ) = (K_0/(2a_0^2)) a_0 e^{−a_0|τ|} + (K_1/(2a_1^2)) a_1 e^{−a_1|τ|}.    (11)

Substituting the values of K_0 and K_1, we obtain

    R_Y(τ) = (a_1^2 e^{−a_0|τ|} − a_0 a_1 e^{−a_1|τ|})/(a_1^2 − a_0^2).    (12)

The average power of the Y(t) process is

    R_Y(0) = (a_1^2 − a_0 a_1)/(a_1^2 − a_0^2) = a_1/(a_1 + a_0) = 2/3.    (13)

Note that the input signal has average power R_X(0) = 1. Since the RC filter has a 3dB bandwidth of 10,000 rad/sec and the signal X(t) has most of its signal energy below 5,000 rad/sec, the output signal has almost as much power as the input.
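As a numerical cross-check of Equation (13), we can integrate S_Y(f) over a wide band of frequencies. The fragment below is a sketch (not part of matcode.zip); the range |f| ≤ 10^5 Hz truncates the infinite integral.

%quiz118check.m
%Sketch (not in matcode.zip): output power in Quiz 11.8
%by numerical integration of S_Y(f).
a0=5000; a1=1e4;
f=-1e5:1e5;   %1 Hz spacing
SY=2*a0*a1^2./((a1^2+(2*pi*f).^2).*(a0^2+(2*pi*f).^2));
disp(trapz(f,SY))   %should be close to a1/(a1+a0)=2/3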

Quiz 11.9
This quiz implements an example of Equations (11.146) and (11.147) for a system in which we filter Y(t) = X(t) + N(t) to produce an optimal linear estimate of X(t). The solution to this quiz is just to find the filter Ĥ(f) using Equation (11.146) and to calculate the mean square error e*_L using Equation (11.147).

Comment: Since the text omitted the derivations of Equations (11.146) and (11.147), we note that Example 10.24 showed that

    R_Y(τ) = R_X(τ) + R_N(τ),    R_YX(τ) = R_X(τ).    (1)

Taking Fourier transforms, it follows that

    S_Y(f) = S_X(f) + S_N(f),    S_YX(f) = S_X(f).    (2)

Now we can go on to the quiz, at peace with the derivations.

(1) Since µ_N = 0, R_N(0) = Var[N] = 1. This implies

    R_N(0) = ∫_{−∞}^{∞} S_N(f) df = ∫_{−B}^{B} N_0 df = 2 N_0 B.    (3)

Thus N_0 = 1/(2B).

(2) Since R_X(τ) = sinc(2Wτ), where W = 5,000 Hz, we see from Table 11.1 that

    S_X(f) = (1/10^4) rect(f/10^4).    (4)

The noise power spectral density can be written as

    S_N(f) = N_0 rect(f/2B) = (1/2B) rect(f/2B).    (5)

Because the noise process N(t) has constant power R_N(0) = 1, decreasing the single-sided bandwidth B increases the power spectral density of the noise over frequencies |f| < B. From Equation (11.146), the optimal filter is

    Ĥ(f) = S_X(f)/(S_X(f) + S_N(f)) = (1/10^4) rect(f/10^4) / [(1/10^4) rect(f/10^4) + (1/2B) rect(f/2B)].    (6)

(3) We produce the output X̂(t) by passing the noisy signal Y(t) through the filter Ĥ(f). From Equation (11.147), the mean square error of the estimate is

    e*_L = ∫_{−∞}^{∞} S_X(f) S_N(f)/(S_X(f) + S_N(f)) df    (7)
         = ∫_{−∞}^{∞} [(1/10^4) rect(f/10^4) (1/2B) rect(f/2B)] / [(1/10^4) rect(f/10^4) + (1/2B) rect(f/2B)] df.    (8)

To evaluate the MSE e*_L, we need to know whether B ≤ W. Since the problem asks us to find the largest possible B, let's suppose B ≤ W. We can go back and consider the case B > W later. When B ≤ W, S_N(f) = 1/(2B) over frequencies |f| < B, and the MSE is

    e*_L = ∫_{−B}^{B} [(1/10^4)(1/2B)] / [(1/10^4) + (1/2B)] df = (1/10^4)/[(1/10^4) + (1/2B)] = 1/(1 + 5000/B).    (9)

To obtain MSE e*_L ≤ 0.05 requires B ≤ 5000/19 = 263.16 Hz.

Although this completes the solution to the quiz, what is happening may not be obvious. The noise power is always Var[N] = 1 Watt, for all values of B. However, as B is decreased, the PSD S_N(f) becomes increasingly tall, but only over the shrinking bandwidth B. Correspondingly, as B shrinks, the filter Ĥ(f) makes an increasingly deep and narrow notch at frequencies |f| ≤ B. Thus as B decreases, the filter suppresses less of the signal of X(t). The result is that the MSE goes down.

Finally, we note that we can choose B very large and also achieve MSE e*_L = 0.05. In particular, when B > W = 5000, the Wiener filter Ĥ(f) is an ideal (flat) lowpass filter

    Ĥ(f) = (1/10^4)/[(1/10^4) + (1/2B)] for |f| < 5,000, and 0 otherwise.    (10)

The mean square error is

    e*_L = ∫_{−5000}^{5000} [(1/10^4)(1/2B)] / [(1/10^4) + (1/2B)] df = (1/2B)/[(1/10^4) + (1/2B)] = 1/(B/5000 + 1).    (11)

In this case, B ≥ 9.5 × 10^4 guarantees e*_L ≤ 0.05. In this regime, the Wiener filter removes the noise that is outside the band of the desired signal; increasing B spreads the constant 1 watt of power of N(t) over more bandwidth, where the filter removes it. Two examples of the filter Ĥ(f) are shown in Figure 7.

[Figure 7: The Wiener filter Ĥ(f) for Quiz 11.9, with B = 500 and B = 2500.]
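To visualize the two regimes, the fragment below (a sketch, not part of matcode.zip) evaluates and plots Equations (9) and (11) as a function of B.

%quiz119check.m
%Sketch (not in matcode.zip): MSE of the Quiz 11.9 Wiener filter
%as a function of the noise bandwidth B.
W=5000;
B1=1:W;   eL1=1./(1+W./B1);  %Equation (9), B <= W
B2=W:2e5; eL2=1./(B2/W+1);   %Equation (11), B > W
plot([B1 B2],[eL1 eL2]);
xlabel('B (Hz)'); ylabel('e_L^*');
%e_L^* <= 0.05 for B <= 5000/19 = 263.16 or B >= 9.5e4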

Quiz 11.10
It is fairly straightforward to find S_X(φ) and S_Y(φ). The only thing to keep in mind is to use fftc to transform the autocorrelation R_X[n] into the power spectral density S_X(φ). The following MATLAB program generates and plots the functions shown in Figure 8.

%mquiz11.m
N=32;
rx=[2 4 2];                   %autocorrelation
SX=fftc(rx,N);                %power spectral density of X
stem(0:N-1,abs(SX));
xlabel('n'); ylabel('S_X(n/N)');
h2=0.5*[1 1];                 %impulse/filter response: M=2
H2=fft(h2,N);
SY2=SX.*((abs(H2)).^2);       %PSD of Y for M=2
figure;
stem(0:N-1,abs(SY2));
xlabel('n'); ylabel('S_{Y_2}(n/N)');
h10=0.1*ones(1,10);           %impulse/filter response: M=10
H10=fft(h10,N);
SY10=SX.*((abs(H10)).^2);     %PSD of Y for M=10
figure;
stem(0:N-1,abs(SY10));
xlabel('n'); ylabel('S_{Y_{10}}(n/N)');

As an aside, the vectors SX, SY2 and SY10 in mquiz11 should all be real-valued vectors. However, the finite numerical precision of MATLAB results in tiny imaginary parts. Although these imaginary parts have no computational significance, they tend to confuse the stem function. Hence, we generate stem plots of the magnitude of each power spectral density.

In the context of Example 11.26, when M = 10, the filter H(φ) filters out almost all of the high frequency components of X(t). Relative to M = 2, the low pass moving average filter for M = 10 removes the high frequency components and results in a filter output that varies very slowly.

[Figure 8: For Quiz 11.10, graphs of S_X(n/N), S_{Y_2}(n/N) for M = 2, and S_{Y_10}(n/N) for M = 10, using an N = 32 point DFT.]

Quiz Solutions – Chapter 12

Quiz 12.1
The system has two states depending on whether the previous packet was received in error. From the problem statement, we are given the conditional probabilities

    P[X_{n+1} = 0 | X_n = 0] = 0.99,    P[X_{n+1} = 1 | X_n = 1] = 0.9.    (1)

Since each X_n must be either 0 or 1, we can conclude that

    P[X_{n+1} = 1 | X_n = 0] = 0.01,    P[X_{n+1} = 0 | X_n = 1] = 0.1.    (2)

These conditional probabilities correspond to the two state Markov chain (state 0 loops on itself with probability 0.99 and crosses to state 1 with probability 0.01; state 1 loops with probability 0.9 and crosses to state 0 with probability 0.1) and the transition matrix

    P = [0.99 0.01; 0.10 0.90].    (3)

Quiz 12.2
From the problem statement, we can write down the Markov chain and its transition matrix P. The eigenvalues of P are

    λ_1 = 0,    λ_2 = 0.4,    λ_3 = 1.    (1)

We can diagonalize P into

    P = S^{−1} D S,    D = diag[λ_1, λ_2, λ_3],    (2)

where s_i, the ith row of S, is the left eigenvector of P satisfying s_i P = λ_i s_i, and the columns of S^{−1} are the corresponding right eigenvectors. (3) Algebra will verify that the n-step transition matrix is P^n = S^{−1} D^n S. Since the λ_1 = 0 term contributes nothing for n ≥ 1 and the λ_3 = 1 term contributes the limiting matrix whose identical rows are the limiting state probabilities, the n-step transition matrix takes the form

    P^n = Π + (0.4)^n M,    n ≥ 1,    (4)

where Π is that limiting matrix and M is a fixed matrix whose rows each sum to zero.
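For the two state chain of Quiz 12.1, the limiting behavior is easy to see by raising P to a large power. The lines below are a sketch (not part of matcode.zip); each row of P^n approaches the stationary probabilities [10/11 1/11] ≈ [0.9091 0.0909].

%quiz121check.m
%Sketch (not in matcode.zip): limiting state probabilities
%of the Quiz 12.1 error chain.
P=[0.99 0.01; 0.10 0.90];
disp(P^100)   %both rows approach [10/11 1/11]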

Quiz 12.3
The Markov chain describing the factory status and the corresponding state transition matrix are

    [Chain: state 0 loops on itself with probability 0.9 and moves to state 1 with probability 0.1; state 1 moves to state 2 with probability 1; state 2 moves to state 0 with probability 1.]

    P = [0.9 0.1 0; 0 0 1; 1 0 0].    (1)

With π = [π_0 π_1 π_2], the system of equations π = πP yields π_1 = 0.1π_0 and π_2 = π_1. This implies

    π_0 + π_1 + π_2 = π_0 (1 + 0.1 + 0.1) = 1.    (2)

It follows that the limiting state probabilities are

    π_0 = 5/6,    π_1 = 1/12,    π_2 = 1/12.    (3)

Quiz 12.4
The communicating classes are

    C_1 = {0, 1},    C_2 = {2, 3},    C_3 = {4, 5, 6}.    (1)

The states in C_1 and C_3 are aperiodic, while the states in C_2 have period 2. (2) Once the system enters a state in C_1, the class C_1 is never left. Thus the states in C_1 are recurrent; that is, C_1 is a recurrent class. Similarly, the states in C_3 are recurrent. On the other hand, the states in C_2 are transient: once the system exits C_2, the states in C_2 are never reentered. (3)
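The limiting probabilities of Quiz 12.3 also drop out of a linear solve. The fragment below is a sketch (not part of matcode.zip) that stacks the balance equations π(P − I) = 0 with the normalization Σ_i π_i = 1.

%quiz123check.m
%Sketch (not in matcode.zip): stationary probabilities of the
%factory chain by solving pi*P=pi together with sum(pi)=1.
P=[0.9 0.1 0; 0 0 1; 1 0 0];
A=[P'-eye(3); ones(1,3)];   %(P'-I)*pi'=0 stacked with sum(pi)=1
b=[zeros(3,1); 1];
pivec=(A\b)'                %should equal [5/6 1/12 1/12]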

Quiz 12.5
(1) At any time t, the state n can take on the values 0, 1, 2, .... The state transition probabilities are

    P_{n−1,0} = P[K = n | K > n − 1] = P[K = n]/P[K > n − 1],    (1)
    P_{n−1,n} = P[K > n | K > n − 1] = P[K > n]/P[K > n − 1].    (2)

The Markov chain resembles

    [Chain on states 0, 1, 2, 3, 4, ...: from state n − 1 the system either returns to state 0, along arcs determined by P[K = 1], P[K = 2], P[K = 3], ..., or advances to state n.]    (3)

(2) To find the stationary probabilities, we first observe that they satisfy

    π_0 = π_0 P[K = 1] + π_1,    (4)
    π_1 = π_0 P[K = 2] + π_2,    (5)

and, in general,

    π_{k−1} = π_0 P[K = k] + π_k,    k = 1, 2, ....    (6)

From Equation (4), we obtain

    π_1 = π_0 (1 − P[K = 1]) = π_0 P[K > 1].    (7)

Similarly, Equation (5) implies

    π_2 = π_1 − π_0 P[K = 2] = π_0 (P[K > 1] − P[K = 2]) = π_0 P[K > 2].    (8)

This suggests that π_k = π_0 P[K > k]. We verify this pattern by showing that π_k = π_0 P[K > k] satisfies Equation (6):

    π_0 P[K > k − 1] = π_0 P[K = k] + π_0 P[K > k].    (9)

When we apply Σ_{k=0}^{∞} π_k = 1, we obtain π_0 Σ_{k=0}^{∞} P[K > k] = 1. Recalling that Σ_{k=0}^{∞} P[K > k] = E[K], this implies

    π_n = P[K > n]/E[K].    (10)

This Markov chain models repeated random countdowns. The system state is the time until the counter expires. When the counter expires, the system is in state 0, and we randomly reset the counter to a new value K = k and then we count down k units of time. Since we spend one unit of time in each state, including state 0, we have k − 1 units of time left after the state 0 counter reset. If we have a random variable W such that the PMF of W satisfies P_W(n) = π_n, then W has a discrete PMF representing the remaining time of the counter at a time in the distant future.
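To see Equation (10) in action, here is a sketch (not part of matcode.zip) using a hypothetical countdown K that is uniform on {1, 2, 3, 4}; the values of K and its PMF below are purely illustrative. The resulting four state chain has stationary vector P[K > n]/E[K] = [0.4 0.3 0.2 0.1].

%quiz125check.m
%Sketch (not in matcode.zip): stationary check of pi_n=P[K>n]/E[K]
%for a hypothetical K uniform on 1..4.
pk=[0.25 0.25 0.25 0.25];          %PMF of K
PKgt=[1 1-cumsum(pk(1:3))];        %P[K>n] for n=0,1,2,3
EK=sum(PKgt);                      %E[K]=2.5
P=zeros(4,4);                      %row n+1 describes state n
for n=1:4
    P(n,1)=pk(n)/PKgt(n);          %reset: P[K=n|K>n-1]
    if n<4, P(n,n+1)=1-P(n,1); end %otherwise keep counting down
end
pivec=PKgt/EK;
disp([pivec; pivec*P])             %identical rows: pivec is stationary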

Quiz 12.6
(1) By inspection, the number of transitions needed to return to state 0 is always a multiple of 2. Thus the period of state 0 is d = 2.

(2) To find the stationary probabilities, we solve the system of equations π = πP and Σ_{i=0}^{3} π_i = 1:

    π_0 = (3/4)π_1 + (1/4)π_3    (1)
    π_1 = (1/4)π_0 + (1/4)π_2    (2)
    π_2 = (1/4)π_1 + (3/4)π_3    (3)
    1 = π_0 + π_1 + π_2 + π_3    (4)

Solving the second and third equations for π_2 and π_3 yields

    π_2 = 4π_1 − π_0,    π_3 = (4/3)π_2 − (1/3)π_1 = 5π_1 − (4/3)π_0.    (5)

Substituting π_3 back into the first equation yields

    π_0 = (3/4)π_1 + (1/4)π_3 = (3/4)π_1 + (5/4)π_1 − (1/3)π_0.    (6)

This implies π_1 = (2/3)π_0. It then follows from the first and second equations that π_2 = (5/3)π_0 and π_3 = 2π_0. Finally, we choose π_0 so the state probabilities sum to 1:

    1 = π_0 + π_1 + π_2 + π_3 = π_0 (1 + 2/3 + 5/3 + 2) = π_0 (16/3).    (7)

It follows that the state probabilities are

    π_0 = 3/16,    π_1 = 2/16,    π_2 = 5/16,    π_3 = 6/16.    (8)

(3) Since the system starts in state 0 at time 0, we can use Theorem 12.14 to find the limiting probability that the system is in state 0 at time nd:

    lim_{n→∞} P_{00}(nd) = dπ_0 = 3/8.    (9)

Quiz 12.7
The Markov chain has the same structure as that in Example 12.22. The only difference is the modified transition rates:

    [Chain on states 0, 1, 2, 3, 4, ...: state 0 advances to state 1 with probability 1; state n advances to state n + 1 with probability (n/(n + 1))^α, shown as the labels (1/2)^α, (2/3)^α, (3/4)^α, (4/5)^α, ..., and otherwise returns to state 0.]

The event T_{00} > n occurs if the system reaches state n before returning to state 0, which occurs with probability

    P[T_{00} > n] = 1 × (1/2)^α × (2/3)^α × ⋯ × ((n − 1)/n)^α = (1/n)^α.    (1)

Thus the CDF of T_{00} satisfies F_{T_{00}}(n) = 1 − P[T_{00} > n] = 1 − 1/n^α. To determine whether state 0 is recurrent, we observe that for all α > 0,

    P[V_{00}] = lim_{n→∞} F_{T_{00}}(n) = lim_{n→∞} (1 − 1/n^α) = 1.    (2)

Thus state 0 is recurrent for all α > 0. Since the chain has only one communicating class, all states are recurrent. (We also note that if α = 0, then all states are transient.)

To determine whether the chain is null recurrent or positive recurrent, we need to calculate E[T_{00}]. In Example 12.24, we did this by deriving the PMF P_{T_{00}}(n). In this problem, it will be simpler to use the result of Problem 2.5.11, which says that Σ_{k=0}^{∞} P[K > k] = E[K] for any non-negative integer-valued random variable K. Applying this result, the expected time to return to state 0 is

    E[T_{00}] = Σ_{n=0}^{∞} P[T_{00} > n] = 1 + Σ_{n=1}^{∞} 1/n^α.    (3)

For 0 < α ≤ 1, we have 1/n^α ≥ 1/n and it follows that

    E[T_{00}] ≥ 1 + Σ_{n=1}^{∞} 1/n = ∞.    (4)

We conclude that the Markov chain is null recurrent for 0 < α ≤ 1. On the other hand, for α > 1,

    E[T_{00}] = 2 + Σ_{n=2}^{∞} 1/n^α.    (5)

Note that for all n ≥ 2,

    1/n^α ≤ ∫_{n−1}^{n} dx/x^α.    (6)

This implies

    E[T_{00}] ≤ 2 + Σ_{n=2}^{∞} ∫_{n−1}^{n} dx/x^α    (7)
             = 2 + ∫_{1}^{∞} dx/x^α    (8)
             = 2 + [x^{−α+1}/(−α + 1)]_{1}^{∞} = 2 + 1/(α − 1) < ∞.    (9)

Thus for all α > 1, the Markov chain is positive recurrent.
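A quick numerical experiment shows the dividing line at α = 1. The fragment below is a sketch (not part of matcode.zip); the truncation point 10^6 is arbitrary. For α ≤ 1 the truncated sum keeps growing with the truncation point, while for α > 1 it settles below the bound 2 + 1/(α − 1).

%quiz127check.m
%Sketch (not in matcode.zip): truncated sums of
%E[T00]=1+sum(1/n^alpha) for several alpha.
n=1:1e6;
for alpha=[0.5 1 1.5 2]
    ET=1+sum(n.^(-alpha));
    fprintf('alpha=%3.1f truncated E[T00]=%g\n',alpha,ET);
end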

Quiz 12.8
The number of customers in the "friendly" store is given by the Markov chain

    [Birth-death chain on states 0, 1, ..., i, i + 1, ...: each up transition has probability p, each down transition has probability (1 − p)q, and otherwise the state is unchanged, with the self loops labeled (1 − p)(1 − q).]

In the above chain, we note that (1 − p)q is the probability that no new customer arrives while an existing customer gets one unit of service and then departs the store. By applying Theorem 12.13 with the state space partitioned between S = {0, 1, ..., i} and S′ = {i + 1, i + 2, ...}, we see that for any state i ≥ 0,

    π_i p = π_{i+1} (1 − p)q.    (1)

This implies

    π_{i+1} = (p/((1 − p)q)) π_i.    (2)

Since Equation (2) holds for i = 0, 1, ..., we have that π_i = π_0 α^i where

    α = p/((1 − p)q).    (3)

Requiring the state probabilities to sum to 1, we have that for α < 1,

    Σ_{i=0}^{∞} π_i = π_0 Σ_{i=0}^{∞} α^i = π_0/(1 − α) = 1.    (4)

Thus for α < 1, the limiting state probabilities are

    π_i = (1 − α) α^i,    i = 0, 1, 2, ....    (5)

In addition, for α ≥ 1 or, equivalently, p ≥ q/(1 + q), the limiting state probabilities do not exist.
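A short spot check of Equation (5) for one set of hypothetical parameters appears below; the values p = 0.2 and q = 0.4 are illustrative only, chosen so that α = 5/8 < 1. This is a sketch and is not part of matcode.zip.

%quiz128check.m
%Sketch (not in matcode.zip): verify that pi_i=(1-alpha)*alpha^i
%satisfies the cut equations pi_i*p = pi_{i+1}*(1-p)*q.
p=0.2; q=0.4;
alpha=p/((1-p)*q);          %5/8 for these hypothetical values
i=0:5;
pivec=(1-alpha)*alpha.^i;
disp([pivec(1:5)*p; pivec(2:6)*(1-p)*q])  %identical rows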

Quiz 12.9
The continuous time Markov chain describing the processor is

    [Chain on states 0, 1, 2, 3, 4: new tasks arrive at rate 2 per msec, moving the system up one state; a task completes at rate 3 per msec, moving the system down one state; from any state i ≥ 1, a reboot at rate 0.1 per msec returns the system to state 0.]

Note that q_{10} = 3.1 since the task completes at rate 3 per msec, the processor reboots at rate 0.1 per msec, and the rate to state 0 is the sum of those two rates. From the Markov chain, we obtain the following useful equations for the stationary distribution:

    5.1 p_1 = 2 p_0 + 3 p_2
    5.1 p_2 = 2 p_1 + 3 p_3
    5.1 p_3 = 2 p_2 + 3 p_4
    3.1 p_4 = 2 p_3

We can solve these equations by working backward, solving for p_4 in terms of p_3, p_3 in terms of p_2, and so on, yielding

    p_4 = (20/31) p_3,    p_3 = (620/981) p_2,    p_2 = (19620/31431) p_1,    p_1 = (628,620/1,014,381) p_0.    (1)

Applying p_0 + p_1 + p_2 + p_3 + p_4 = 1 yields p_0 = 1,014,381/2,443,401 and the stationary probabilities are

    p_0 = 0.4151,    p_1 = 0.2573,    p_2 = 0.1606,    p_3 = 0.1015,    p_4 = 0.0655.    (2)

Quiz 12.10
The M/M/c/∞ queue has Markov chain

    [Birth-death chain on states 0, 1, ..., c, c + 1, ...: arrivals at rate λ move the system up one state; the departure rate down one state is nµ from state n for n = 1, ..., c and cµ from state n for n > c.]

From the Markov chain, the stationary probabilities must satisfy, with ρ = λ/µ,

    p_n = (ρ/n) p_{n−1},    n = 1, 2, ..., c,
    p_n = (ρ/c) p_{n−1},    n = c + 1, c + 2, ....    (1)

It is straightforward to show that this implies

    p_n = p_0 ρ^n/n!,    n = 1, 2, ..., c,
    p_n = p_0 (ρ/c)^{n−c} ρ^c/c!,    n = c + 1, c + 2, ....    (2)

The requirement that Σ_{n=0}^{∞} p_n = 1 yields

    p_0 = [Σ_{n=0}^{c} ρ^n/n! + (ρ^c/c!) (ρ/c)/(1 − ρ/c)]^{−1}.    (3)
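As a closing illustration, the final formulas translate directly into a small MATLAB function. The function below is a sketch and is not one of the matcode.zip functions; it returns p_0, ..., p_c for a given ρ = λ/µ and c, and is valid only for ρ/c < 1. For example, p=mmcprob(2,3) returns the stationary probabilities of 0 through 3 busy servers when ρ = 2 and c = 3.

function p=mmcprob(rho,c)
%Usage: p=mmcprob(rho,c)
%Sketch (not in matcode.zip): for an M/M/c/infinity queue with
%rho=lambda/mu < c, returns p(n+1)=P[N=n] for n=0,...,c.
n=0:c;
terms=(rho.^n)./factorial(n);
tail=(rho^c/factorial(c))*(rho/c)/(1-rho/c); %sum over n>c, divided by p0
p0=1/(sum(terms)+tail);
p=p0*terms;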

- Markov Chains to Model Genetic Algorithms
- Yates - Probability and Stochastic Processes (2nd Edition)
- Probability and Stochastic Processes
- Planing a Microwave Radio Link
- [솔루션] Probability and Stochastic Processes 2nd Roy D. Yates and David J. Goodman 2판 확률과 통계 솔루션 433 4000
- Markov Chain
- 10.1.1.243.8916
- Sol Manual
- Propagation Study
- Signal Detection and Estimation - Solution Manual
- Instrumentos de Medição Eletrica
- lectureAll_ece5325_6325_f11
- 1
- Markov Chain
- Probability and Stochastic Processes 2nd Roy D Yates and David J Goodman
- ALVProgrammingGuide
- Creating Queries Access 2010 Tutorial
- Digital Communications 5th Ed SolutionsChap3
- MomaReleaseEn03v10
- Netezza Stored Procedures Guide
- Hidden Markov Models
- lab4
- HCL
- Httpwww Wi Pb Edu Plplikinaukazeszytyz9oniszczuk-Full
- EV06-HDMS
- Probability and Statistics UIUC Luthuli
- Parametros Frame Graber
- Fockers Arma Scripting - Chapter 1
- Pro
- 1

- Personalized Gesture Recognition with Hidden Markove Model and Dynamic Time Warping
- UT Dallas Syllabus for cs4375.501.07f taught by Yu Chung Ng (ycn041000)
- UT Dallas Syllabus for cs6375.501.10f taught by Yu Chung Ng (ycn041000)
- UT Dallas Syllabus for cs4375.501 06f taught by Yu Chung Ng (ycn041000)
- tmpDB70.tmp
- tmpFEC7.tmp
- Tmp 7832
- tmpCE5.tmp
- Supply Chain and Value Chain Management for Sugar Factory
- tmpD7C3
- tmp51A9.tmp
- Convergence of EU regions. Measures and evolution (Eng)/ La convergencia de las regiones de la UE (Ing)/ EBko eskualdeen konbergentzia (Ing)
- rev_frbrich2013q1.pdf
- tmp9649.tmp
- Speech Recognition Using Hidden Markov Model Algorithm
- A Study and Comparative analysis of Conditional Random Fields for Intrusion detection
- UT Dallas Syllabus for stat7345.501.10f taught by Robert Serfling (serfling)
- UT Dallas Syllabus for stat7345.501.09s taught by Robert Serfling (serfling)
- frbclv_wp1992-01.pdf
- Speech Recognition using HMM & GMM Models
- tmp79F5.tmp
- A Survey of Online Credit Card Fraud Detection using Data Mining Techniques
- UT Dallas Syllabus for cs4375.501.09f taught by Yu Chung Ng (ycn041000)
- UT Dallas Syllabus for cs4375.001.11f taught by Yu Chung Ng (ycn041000)
- The Impact of Exchange Rate on FDI and the Interdependence of FDI over Time
- As IEC 61165-2008 Application of Markov Techniques
- tmp3746.tmp
- UT Dallas Syllabus for cs6375.002 06f taught by Yu Chung Ng (ycn041000)
- tmpF98F
- tmp108.tmp

Sign up to vote on this title

UsefulNot usefulCerrar diálogo## ¿Está seguro?

This action might not be possible to undo. Are you sure you want to continue?

Cerrar diálogo## This title now requires a credit

Use one of your book credits to continue reading from where you left off, or restart the preview.

Loading