

MATLAB Function Reference

Probability and Stochastic Processes:
A Friendly Introduction for Electrical and Computer Engineers

SECOND EDITION

Roy D. Yates and David J. Goodman

May 22, 2004

This document is a supplemental reference for MATLAB functions described in the text Probability and Stochastic Processes: A Friendly Introduction for Electrical and Computer Engineers. This document should be accompanied by matcode.zip, an archive of the corresponding MATLAB .m files. Here are some points to keep in mind in using these functions.

• The actual programs can be found in the archive matcode.zip or in a directory matcode. To use the functions, you will need to use the MATLAB command addpath to add this directory to the path that MATLAB searches for executable .m files.
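For example, assuming matcode has been unzipped into a directory whose path is c:\matcode (the path here is purely illustrative):

addpath('c:\matcode');  %replace with the actual matcode location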

• The matcode archive has both general purpose programs for solving probability problems and specific .m files associated with examples or quizzes in the text. This manual describes only the general purpose .m files in matcode.zip. Other programs in the archive are described in the main text or in the Quiz Solution Manual.

• The MATLAB functions described here are intended as a supplement to the text. The code is not fully commented. Many comments and explanations relating to the code appear in the text, the Quiz Solution Manual (available on the web), or in the Problem Solution Manual (available on the web for instructors).

• The code is instructional. The focus is on MATLAB programming techniques to solve probability problems and to simulate experiments. The code is definitely not bulletproof; for example, input range checking is generally neglected.

• This is a work in progress. At the moment (May 2004), the homework solution manual has a number of unsolved homework problems. As these solutions require the development of additional MATLAB functions, these functions will be added to this reference manual.

• There is a nonzero probability (in fact, a probability close to unity) that errors will be found. If you find errors or have suggestions or comments, please send email to ryates@winlab.rutgers.edu. When errors are found, revisions to both this document and the collection of MATLAB functions will be posted.


Functions for Random Variables

bernoullipmf    y=bernoullipmf(p,x)

function pv=bernoullipmf(p,x)
%For Bernoulli (p) rv X
%input = vector x
%output = vector pv
%such that pv(i)=Prob(X=x(i))
pv=(1-p)*(x==0) + p*(x==1);
pv=pv(:);

Input: p is the success probability of a Bernoulli random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = P_X(x(i)).

bernoullicdf    y=bernoullicdf(p,x)

function cdf=bernoullicdf(p,x)
%Usage: cdf=bernoullicdf(p,x)
%For Bernoulli (p) rv X,
%given input vector x, output is
%vector cdf such that cdf(i)=Prob[X<=x(i)]
x=floor(x(:));
allx=0:1;
allcdf=cumsum(bernoullipmf(p,allx));
okx=(x>=0);   %x_i < 0 are bad values
x=(okx.*x);   %set bad x_i=0
x=min(x,1);   %x_i > 1 have the same cdf as x_i=1
cdf=okx.*allcdf(x+1); %zeroes out bad x_i

Input: p is the success probability of a Bernoulli random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).

bernoullirv    x=bernoullirv(p,m)

function x=bernoullirv(p,m)
%return m samples of bernoulli (p) rv
r=rand(m,1);
x=(r>=(1-p));

Input: p is the success probability of a Bernoulli random variable X; m is a positive integer.
Output: x is a vector of m independent sample values of X.
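As a quick illustration (our example, not from the text), the relative frequency of ones in a large sample should be close to p:

x=bernoullirv(0.3,10000);
sum(x)/10000   %relative frequency of ones, close to p=0.3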


bignomialpmf    y=bignomialpmf(n,p,x)

function pmf=bignomialpmf(n,p,x)
%binomial(n,p) rv X,
%input = vector x
%output= vector pmf: pmf(i)=Prob[X=x(i)]
k=(0:n-1)';
a=log((p/(1-p))*((n-k)./(k+1)));
L0=n*log(1-p);
L=[L0; L0+cumsum(a)];
pb=exp(L);
% pb=[P[X=0] ... P[X=n]]^t
x=x(:);
okx =(x>=0).*(x<=n).*(x==floor(x));
x=okx.*x;
pmf=okx.*pb(x+1);

Input: n and p are the parameters of a binomial (n, p) random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = P_X(x(i)).
Comment: This function should always produce the same output as binomialpmf(n,p,x); however, the function calculates the logarithm of the probability and this may lead to small numerical inaccuracy.

binomialcdf    y=binomialcdf(n,p,x)

function cdf=binomialcdf(n,p,x)
%Usage: cdf=binomialcdf(n,p,x)
%For binomial(n,p) rv X,
%and input vector x, output is
%vector cdf: cdf(i)=P[X<=x(i)]
x=floor(x(:)); %for noninteger x(i)
allx=0:max(x);
%calculate cdf from 0 to max(x)
allcdf=cumsum(binomialpmf(n,p,allx));
okx=(x>=0); %x(i) < 0 are zero-prob values
x=(okx.*x); %set zero-prob x(i)=0
cdf= okx.*allcdf(x+1); %zero for zero-prob x(i)

Input: n and p are the parameters of a binomial (n, p) random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).


binomialpmf    y=binomialpmf(n,p,x)

function pmf=binomialpmf(n,p,x)
%binomial(n,p) rv X,
%input = vector x
%output= vector pmf: pmf(i)=Prob[X=x(i)]
if p<0.5
   pp=p;
else
   pp=1-p;
end
i=0:n-1;
ip= ((n-i)./(i+1))*(pp/(1-pp));
pb=((1-pp)^n)*cumprod([1 ip]);
if pp < p
   pb=fliplr(pb);
end
pb=pb(:); % pb=[P[X=0] ... P[X=n]]^t
x=x(:);
okx =(x>=0).*(x<=n).*(x==floor(x));
x=okx.*x;
pmf=okx.*pb(x+1);

Input: n and p are the parameters of a binomial (n, p) random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = P_X(x(i)).

binomialrv    x=binomialrv(n,p,m)

function x=binomialrv(n,p,m)
% m binomial(n,p) samples
r=rand(m,1);
cdf=binomialcdf(n,p,0:n);
x=count(cdf,r);

Input: n and p are the parameters of a binomial random variable X; m is a positive integer.
Output: x is a vector of m independent samples of random variable X.

bivariategausspdf    f=bivariategausspdf(muX,muY,sigmaX,sigmaY,rho,x,y)

function f=bivariategausspdf(muX,muY,sigmaX,sigmaY,rho,x,y)
%Usage: f=bivariategausspdf(muX,muY,sigmaX,sigmaY,rho,x,y)
%Evaluate the bivariate Gaussian (muX,muY,sigmaX,sigmaY,rho) PDF
nx=(x-muX)/sigmaX;
ny=(y-muY)/sigmaY;
f=exp(-((nx.^2) +(ny.^2) - (2*rho*nx.*ny))/(2*(1-rho^2)));
f=f/(2*pi*sigmaX*sigmaY*sqrt(1-rho^2));

Input: Scalar parameters muX,muY,sigmaX,sigmaY,rho of the bivariate Gaussian PDF; scalars x and y.
Output: f, the value of the bivariate Gaussian PDF at x,y.


duniformcdf    y=duniformcdf(k,l,x)

function cdf=duniformcdf(k,l,x)
%Usage: cdf=duniformcdf(k,l,x)
% For discrete uniform (k,l) rv X
% and input vector x, output is
% vector cdf: cdf(i)=Prob[X<=x(i)]
x=floor(x(:)); %for noninteger x_i
allx=k:max(x);
%allcdf = cdf values from k to max(x)
allcdf=cumsum(duniformpmf(k,l,allx));
%x_i < k are zero prob values
okx=(x>=k);
%set zero prob x(i)=k
x=((1-okx)*k)+(okx.*x);
%cdf(i)=0 for zero prob x(i)
cdf= okx.*allcdf(x-k+1);

Input: k and l are the parameters of a discrete uniform (k, l) random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).

duniformpmf    y=duniformpmf(k,l,x)

function pmf=duniformpmf(k,l,x)
%discrete uniform(k,l) rv X,
%input = vector x
%output= vector pmf: pmf(i)=Prob[X=x(i)]
pmf= (x>=k).*(x<=l).*(x==floor(x));
pmf=pmf(:)/(l-k+1);

Input: k and l are the parameters of a discrete uniform (k, l) random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = P_X(x(i)).

duniformrv    x=duniformrv(k,l,m)

function x=duniformrv(k,l,m)
%returns m samples of a discrete
%uniform (k,l) random variable
r=rand(m,1);
cdf=duniformcdf(k,l,k:l);
x=k+count(cdf,r);

Input: k and l are the parameters of a discrete uniform (k, l) random variable X; m is a positive integer.
Output: x is a vector of m independent samples of random variable X.


erlangb    pb=erlangb(rho,c)

function pb=erlangb(rho,c);
%Usage: pb=erlangb(rho,c)
%returns the Erlang-B blocking
%probability for an M/M/c/c
%queue with load rho
pn=exp(-rho)*poissonpmf(rho,0:c);
pb=pn(c+1)/sum(pn);

Input: Offered load rho (ρ = λ/µ) and the number of servers c of an M/M/c/c queue.
Output: pb, the blocking probability of the queue.

erlangcdf    y=erlangcdf(n,lambda,x)

function F=erlangcdf(n,lambda,x)
F=1.0-poissoncdf(lambda*x,n-1);

Input: n and lambda are the parameters of an Erlang random variable X; vector x.
Output: Vector y such that y_i = F_X(x_i).

erlangpdf    y=erlangpdf(n,lambda,x)

function f=erlangpdf(n,lambda,x)
f=((lambda^n)/factorial(n-1))...
    *(x.^(n-1)).*exp(-lambda*x);

Input: n and lambda are the parameters of an Erlang random variable X; vector x.
Output: Vector y such that y_i = f_X(x_i) = λ^n x_i^{n−1} e^{−λx_i}/(n − 1)!.

erlangrv    x=erlangrv(n,lambda,m)

function x=erlangrv(n,lambda,m)
y=exponentialrv(lambda,m*n);
x=sum(reshape(y,m,n),2);

Input: n and lambda are the parameters of an Erlang random variable X; integer m.
Output: Length m vector x such that each x_i is a sample of X.

exponentialcdf    y=exponentialcdf(lambda,x)

function F=exponentialcdf(lambda,x)
F=1.0-exp(-lambda*x);

Input: lambda is the parameter of an exponential random variable X; vector x.
Output: Vector y such that y_i = F_X(x_i) = 1 − e^{−λx_i}.


exponentialpdf    y=exponentialpdf(lambda,x)

function f=exponentialpdf(lambda,x)
f=lambda*exp(-lambda*x);
f=f.*(x>=0);

Input: lambda is the parameter of an exponential random variable X; vector x.
Output: Vector y such that y_i = f_X(x_i) = λe^{−λx_i} for x_i ≥ 0 and y_i = 0 otherwise.

exponentialrv    x=exponentialrv(lambda,m)

function x=exponentialrv(lambda,m)
x=-(1/lambda)*log(1-rand(m,1));

Input: lambda is the parameter of an exponential random variable X; integer m.
Output: Length m vector x such that each x_i is a sample of X.
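The one-line implementation is an instance of the inverse transform method: if U is uniform (0, 1), then −ln(1 − U)/λ has the exponential (λ) CDF. A quick sanity check (our addition) compares the sample mean with E[X] = 1/λ:

x=exponentialrv(2,10000);
mean(x)   %should be close to 1/lambda = 0.5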

finitecdf    y=finitecdf(sx,p,x)

function cdf=finitecdf(s,p,x)
% finite random variable X:
% vector s of sample space
% elements {s(1),s(2), ...}
% vector p of probabilities
% p(i)=P[X=s(i)]
% Output is the vector
% cdf: cdf(i)=P[X<=x(i)]
cdf=[];
for i=1:length(x)
   pxi= sum(p(find(s<=x(i))));
   cdf=[cdf; pxi];
end

Input: sx is the range of a finite random variable X; p is the corresponding probability assignment; x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).

finitecoeff    rho=finitecoeff(SX,SY,PXY)

function rho=finitecoeff(SX,SY,PXY);
%Usage: rho=finitecoeff(SX,SY,PXY)
%Calculate the correlation coefficient rho of
%finite random variables X and Y
ex=finiteexp(SX,PXY); vx=finitevar(SX,PXY);
ey=finiteexp(SY,PXY); vy=finitevar(SY,PXY);
R=finiteexp(SX.*SY,PXY);
rho=(R-ex*ey)/sqrt(vx*vy);

Input: Grids SX, SY and probability grid PXY describing the finite random variables X and Y.
Output: rho, the correlation coefficient of X and Y.


finitecov    covxy=finitecov(SX,SY,PXY)

function covxy=finitecov(SX,SY,PXY);
%Usage: cxy=finitecov(SX,SY,PXY)
%returns the covariance of
%finite random variables X and Y
%given by grids SX, SY, and PXY
ex=finiteexp(SX,PXY);
ey=finiteexp(SY,PXY);
R=finiteexp(SX.*SY,PXY);
covxy=R-ex*ey;

Input: Grids SX, SY and probability grid PXY describing the finite random variables X and Y.
Output: covxy, the covariance of X and Y.

finiteexp    ex=finiteexp(sx,px)

function ex=finiteexp(sx,px);
%Usage: ex=finiteexp(sx,px)
%returns the expected value E[X]
%of finite random variable X described
%by samples sx and probabilities px
ex=sum((sx(:)).*(px(:)));

Input: Probability vector px and vector of samples sx describing random variable X.
Output: ex, the expected value E[X].

finitepmf    y=finitepmf(sx,px,x)

function pmf=finitepmf(sx,px,x)
% finite random variable X:
% vector sx of sample space
% elements {sx(1),sx(2), ...}
% vector px of probabilities
% px(i)=P[X=sx(i)]
% Output is the vector
% pmf: pmf(i)=P[X=x(i)]
pmf=zeros(size(x(:)));
for i=1:length(x)
   pmf(i)= sum(px(find(sx==x(i))));
end

Input: sx is the range of a finite random variable X; px is the corresponding probability assignment; x is a vector of possible sample values.
Output: y is a vector with y(i) = P[X = x(i)].

finiterv    x=finiterv(sx,p,m)

function x=finiterv(s,p,m)
% returns m samples
% of finite (s,p) rv
%s=s(:);p=p(:);
r=rand(m,1);
cdf=cumsum(p);
x=s(1+count(cdf,r));

Input: sx is the range of a finite random variable X; p is the corresponding probability assignment; m is a positive integer.
Output: x is a vector of m independent sample values of random variable X.
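A small usage sketch (our example): draw samples and compare relative frequencies against the PMF using countequal from the Random Utilities section:

sx=[1 2 3]; px=[0.5 0.3 0.2];
x=finiterv(sx,px,10000);
countequal(x,sx)/10000   %relative frequencies, close to px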


finitevar    v=finitevar(sx,px)

function v=finitevar(sx,px);
%Usage: v=finitevar(sx,px)
% returns the variance Var[X]
% of finite random variable X described by
% samples sx and probabilities px
ex2=finiteexp(sx.^2,px);
ex=finiteexp(sx,px);
v=ex2-(ex^2);

Input: Probability vector px and vector of samples sx describing random variable X.
Output: v, the variance Var[X].

gausscdf    y=gausscdf(mu,sigma,x)

function f=gausscdf(mu,sigma,x)
f=phi((x-mu)/sigma);

Input: mu and sigma are the parameters of a Gaussian random variable X; vector x.
Output: Vector y such that y_i = F_X(x_i) = Φ((x_i − µ)/σ).

gausspdf    y=gausspdf(mu,sigma,x)

function f=gausspdf(mu,sigma,x)
f=exp(-(x-mu).^2/(2*sigma^2))/...
    sqrt(2*pi*sigma^2);

Input: mu and sigma are the parameters of a Gaussian random variable X; vector x.
Output: Vector y such that y_i = f_X(x_i).

gaussrv    x=gaussrv(mu,sigma,m)

function x=gaussrv(mu,sigma,m)
x=mu +(sigma*randn(m,1));

Input: mu and sigma are the parameters of a Gaussian random variable X; integer m.
Output: Length m vector x such that each x_i is a sample of X.


gaussvector    x=gaussvector(mu,C,m)

function x=gaussvector(mu,C,m)
%output: m Gaussian vectors,
%each with mean mu
%and covariance matrix C
if (min(size(C))==1)
   C=toeplitz(C);
end
n=size(C,2);
if (length(mu)==1)
   mu=mu*ones(n,1);
end
[U,D,V]=svd(C);
x=V*(D^(0.5))*randn(n,m)...
    +(mu(:)*ones(1,m));

Input: For a Gaussian (µ_X, C_X) random vector X, gaussvector can be called in two ways:
• C is the n × n covariance matrix, mu is either a length n vector or a length 1 scalar, and m is an integer.
• C is the length n vector equal to the first row of a symmetric Toeplitz covariance matrix C_X, mu is either a length n vector or a length 1 scalar, and m is an integer.
If mu is a length n vector, then mu is the expected value vector; otherwise, each element of X is assumed to have mean mu.
Output: n × m matrix x such that each column x(:,i) is a sample vector of X.
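For example (an illustration we added), the call below generates 100 zero-mean Gaussian vectors of length 4 whose symmetric Toeplitz covariance matrix has first row [1 0.5 0.25 0.125]:

x=gaussvector(0,[1 0.5 0.25 0.125],100);
size(x)   %returns [4 100]; each column is one sample vector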

gaussvectorpdf    f=gaussvectorpdf(mu,C,x)

function f=gaussvectorpdf(mu,C,x)
n=length(x);
z=x(:)-mu(:);
f=exp(-0.5*z'*inv(C)*z)/...
    sqrt((2*pi)^n*det(C));

Input: For a Gaussian (µ_X, C_X) random vector X, mu is a length n vector, C is the n × n covariance matrix, and x is a length n vector.
Output: f is the Gaussian vector PDF f_X(x) evaluated at x.

geometriccdf    y=geometriccdf(p,x)

function cdf=geometriccdf(p,x)
%for geometric(p) rv X,
%input vector x, output is vector
%cdf such that cdf_i=Prob(X<=x_i)
x=(x(:)>=1).*floor(x(:));
cdf=1-((1-p).^x);

Input: p is the parameter of a geometric random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).


geometricpmf    y=geometricpmf(p,x)

function pmf=geometricpmf(p,x)
%geometric(p) rv X
%out: pmf(i)=Prob[X=x(i)]
x=x(:);
pmf= p*((1-p).^(x-1));
pmf= (x>0).*(x==floor(x)).*pmf;

Input: p is the parameter of a geometric random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = P_X(x(i)).

geometricrv    x=geometricrv(p,m)

function x=geometricrv(p,m)
%Usage: x=geometricrv(p,m)
% returns m samples of a geometric (p) rv
r=rand(m,1);
x=ceil(log(1-r)/log(1-p));

Input: p is the parameter of a geometric random variable X; m is a positive integer.
Output: x is a vector of m independent samples of random variable X.

icdfrv    x=icdfrv(@icdf,m)

function x=icdfrv(icdfhandle,m)
%Usage: x=icdfrv(@icdf,m)
%returns m samples of rv X
%with inverse CDF icdf.m
u=rand(m,1);
x=feval(icdfhandle,u);

Input: @icdf is a "handle" (a kind of pointer) to a MATLAB function icdf.m that is MATLAB's representation of an inverse CDF F_X^{−1}(x) of a random variable X; integer m.
Output: Length m vector x such that each x_i is a sample of X.
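As an example (ours, not from the text), suppose the file myicdf.m holds the inverse CDF F_X^{−1}(u) = −3 ln(1 − u) of an exponential (λ = 1/3) random variable:

%contents of the hypothetical file myicdf.m:
%   function x=myicdf(u)
%   x=-3*log(1-u);
x=icdfrv(@myicdf,1000);   %1000 exponential (1/3) samples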


pascalcdf    y=pascalcdf(k,p,x)

function cdf=pascalcdf(k,p,x)
%Usage: cdf=pascalcdf(k,p,x)
%For a pascal (k,p) rv X
%and input vector x, the output
%is a vector cdf such that
% cdf(i)=Prob[X<=x(i)]
x=floor(x(:)); % for noninteger x(i)
allx=k:max(x);
%allcdf holds all needed cdf values
allcdf=cumsum(pascalpmf(k,p,allx));
%x_i < k have zero-prob,
% other values are OK
okx=(x>=k);
%set zero-prob x(i)=k,
%just so indexing is not fouled up
x=(okx.*x) +((1-okx)*k);
cdf= okx.*allcdf(x-k+1);

Input: k and p are the parameters of a Pascal (k, p) random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).

pascalpmf    y=pascalpmf(k,p,x)

function pmf=pascalpmf(k,p,x)
%For Pascal (k,p) rv X, and
%input vector x, output is a
%vector pmf: pmf(i)=Prob[X=x(i)]
x=x(:);
n=max(x);
i=(k:n-1)';
ip= [1 ;(1-p)*(i./(i+1-k))];
%pb=all n-k+1 pascal probs
pb=(p^k)*cumprod(ip);
okx=(x==floor(x)).*(x>=k);
%set bad x(i)=k to stop bad indexing
x=(okx.*x) + k*(1-okx);
% pmf(i)=0 unless x(i) >= k
pmf=okx.*pb(x-k+1);

Input: k and p are the parameters of a Pascal (k, p) random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = P_X(x(i)).


pascalrv    x=pascalrv(k,p,m)

function x=pascalrv(k,p,m)
% return m samples of pascal(k,p) rv
r=rand(m,1);
rmax=max(r);
xmin=k;
xmax=ceil(2*(k/p)); %set max range
sx=xmin:xmax;
cdf=pascalcdf(k,p,sx);
while cdf(length(cdf)) <=rmax
   xmax=2*xmax;
   sx=xmin:xmax;
   cdf=pascalcdf(k,p,sx);
end
x=xmin+countless(cdf,r);

Input: k and p are the parameters of a Pascal random variable X; m is a positive integer.
Output: x is a vector of m independent samples of random variable X.

phi    y=phi(x)

function y=phi(x)
sq2=sqrt(2);
y= 0.5 + 0.5*erf(x/sq2);

Input: Vector x.
Output: Vector y such that y(i) = Φ(x(i)), the standard Gaussian CDF.

poissoncdf    y=poissoncdf(alpha,x)

function cdf=poissoncdf(alpha,x)
%output cdf(i)=Prob[X<=x(i)]
x=floor(x(:));
sx=0:max(x);
cdf=cumsum(poissonpmf(alpha,sx));
%cdf from 0 to max(x)
okx=(x>=0); %x(i)<0 -> cdf=0
x=(okx.*x); %set negative x(i)=0
cdf= okx.*cdf(x+1);
%cdf=0 for x(i)<0

Input: alpha is the parameter of a Poisson (α) random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).


poissonpmf    y=poissonpmf(alpha,x)

function pmf=poissonpmf(alpha,x)
%Poisson (alpha) rv X,
%out=vector pmf: pmf(i)=P[X=x(i)]
x=x(:);
k=(1:max(x))';
logfacts =cumsum(log(k));
pb=exp([-alpha; ...
    -alpha+ (k*log(alpha))-logfacts]);
okx=(x>=0).*(x==floor(x));
x=okx.*x;
pmf=okx.*pb(x+1);
%pmf(i)=0 for zero-prob x(i)

Input: alpha is the parameter of a Poisson (α) random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = P_X(x(i)).

poissonrv    x=poissonrv(alpha,m)

function x=poissonrv(alpha,m)
%return m samples of poisson(alpha) rv X
r=rand(m,1);
rmax=max(r);
xmin=0;
xmax=ceil(2*alpha); %set max range
sx=xmin:xmax;
cdf=poissoncdf(alpha,sx);
while cdf(length(cdf)) <=rmax
   xmax=2*xmax;
   sx=xmin:xmax;
   cdf=poissoncdf(alpha,sx);
end
x=xmin+countless(cdf,r);

Input: alpha is the parameter of a Poisson (α) random variable X; m is a positive integer.
Output: x is a vector of m independent samples of random variable X.

uniformcdf    y=uniformcdf(a,b,x)

function F=uniformcdf(a,b,x)
%Usage: F=uniformcdf(a,b,x)
%returns the CDF of a continuous
%uniform rv evaluated at x
F=(x-a).*((x>=a) & (x<b))/(b-a);
F=F+1.0*(x>=b);

Input: a and b are parameters for continuous uniform random variable X; vector x.
Output: Vector y such that y_i = F_X(x_i).


uniformpdf    y=uniformpdf(a,b,x)

function f=uniformpdf(a,b,x)
%Usage: f=uniformpdf(a,b,x)
%returns the PDF of a continuous
%uniform rv evaluated at x
f=((x>=a) & (x<b))/(b-a);

Input: a and b are parameters for continuous uniform random variable X; vector x.
Output: Vector y such that y_i = f_X(x_i).

uniformrv    x=uniformrv(a,b,m)

function x=uniformrv(a,b,m)
%Usage: x=uniformrv(a,b,m)
%Returns m samples of a
%uniform (a,b) random variable
x=a+(b-a)*rand(m,1);

Input: a and b are parameters for continuous uniform random variable X; positive integer m.
Output: m element vector x such that each x(i) is a sample of X.


Functions for Stochastic Processes

brownian    w=brownian(alpha,t)

function w=brownian(alpha,t)
%Brownian motion process
%sampled at t(1)<t(2)< ...
t=t(:);
n=length(t);
delta=t-[0;t(1:n-1)];
x=sqrt(alpha*delta).*gaussrv(0,1,n);
w=cumsum(x);

Input: t is a vector holding an ordered sequence of inspection times; alpha is the scaling constant of a Brownian motion process such that the i th increment has variance α(t_i − t_{i−1}).
Output: w is a vector such that w(i) is the position at time t(i) of the particle in Brownian motion.

cmcprob    pv=cmcprob(Q,p0,t)

function pv = cmcprob(Q,p0,t)
%Q has zero diagonal rates
%initial state probabilities p0
K=size(Q,1)-1; %max no. state
%check for integer p0
if (length(p0)==1)
   p0=((0:K)==p0);
end
R=Q-diag(sum(Q,2));
pv= (p0(:)'*expm(R*t))';

Input: n × n state transition rate matrix Q for a continuous-time finite Markov chain; length n vector p0 denoting the initial state probabilities; nonnegative scalar t.
Output: Length n vector pv, the state probability vector at time t of the Markov chain.
Comment: If p0 is a scalar integer, then the simulation starts in state p0.

cmcstatprob    pv=cmcstatprob(Q)

function pv = cmcstatprob(Q)
%Q has zero diagonal rates
R=Q-diag(sum(Q,2));
n=size(Q,1);
R(:,1)=ones(n,1);
pv=([1 zeros(1,n-1)]*R^(-1))';

Input: State transition rate matrix Q for a continuous-time finite Markov chain.
Output: pv is the stationary probability vector for the continuous-time Markov chain.

dmcstatprob    pv=dmcstatprob(P)

function pv = dmcstatprob(P)
n=size(P,1);
A=(eye(n)-P);
A(:,1)=ones(n,1);
pv=([1 zeros(1,n-1)]*A^(-1))';

Input: n × n stochastic matrix P representing a discrete-time aperiodic irreducible finite Markov chain.
Output: pv is the stationary probability vector.


poissonarrivals    s=poissonarrivals(lambda,T)

function s=poissonarrivals(lambda,T)
%arrival times s=[s(1) ... s(n)]
% s(n)<= T < s(n+1)
n=ceil(1.1*lambda*T);
s=cumsum(exponentialrv(lambda,n));
while (s(length(s))< T),
   s_new=s(length(s))+ ...
       cumsum(exponentialrv(lambda,n));
   s=[s; s_new];
end
s=s(s<=T);

Input: lambda is the arrival rate of a Poisson process; T marks the end of an observation interval [0, T].
Output: s=[s(1), ..., s(n)]' is a vector such that s(i) is the i th arrival time. Note that the length n is a Poisson random variable with expected value λT.
Comment: This code is pretty stupid. There are decidedly better ways to create a set of arrival times; see Problem 10.13.5.

poissonprocess    N=poissonprocess(lambda,t)

function N=poissonprocess(lambda,t)
%input: rate lambda>0, vector t
%For a sample function of a
%Poisson process of rate lambda,
%N(i) = no. of arrivals by t(i)
s=poissonarrivals(lambda,max(t));
N=count(s,t);

Input: lambda is the arrival rate of a Poisson process; t is a vector of "inspection times."
Output: N is a vector such that N(i) is the number of arrivals by inspection time t(i).
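A short sketch (our example) plotting one sample path of a rate λ = 2 Poisson process over [0, 10]:

t=0:0.1:10;
N=poissonprocess(2,t);
stairs(t,N);   %N(t) increases by one at each arrival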

simcmc    ST=simcmc(Q,p0,T)

function ST=simcmc(Q,p0,T);
K=size(Q,1)-1; %max no. state
%calc average trans. rate
ps=cmcstatprob(Q);
v=sum(Q,2); R=ps'*v;
n=ceil(0.6*T/R);
ST=simcmcstep(Q,p0,2*n);
while (sum(ST(:,2))<T),
   s=ST(size(ST,1),1);
   p00=Q(1+s,:)/v(1+s);
   S=simcmcstep(Q,p00,n);
   ST=[ST;S];
end
n=1+sum(cumsum(ST(:,2))<T);
ST=ST(1:n,:);
%truncate last holding time
ST(n,2)=T-sum(ST(1:n-1,2));

Input: State transition rate matrix Q for a continuous-time finite Markov chain; vector p0 denoting the initial state probabilities; time duration T.
Output: A simulation of the Markov chain system over the time interval [0, T]: The output is an n × 2 matrix ST such that the first column ST(:,1) is the sequence of system states and the second column ST(:,2) is the amount of time spent in each state. That is, ST(i,2) is the amount of time the system spends in state ST(i,1).
Comment: If p0 is a scalar integer, then the simulation starts in state p0. Note that n, the number of state occupancy periods, is random.
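For instance (our illustrative numbers), a two-state on/off chain with transition rates q01 = 1 and q10 = 2 can be simulated for 100 time units and displayed with simplot:

Q=[0 1; 2 0];        %zero diagonal rate matrix
ST=simcmc(Q,0,100);  %start in state 0, simulate over [0,100]
simplot(ST);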


simcmcstep    S=simcmcstep(Q,p0,n)

function S=simcmcstep(Q,p0,n);
%S=simcmcstep(Q,p0,n)
% Simulate n steps of a cts
% Markov Chain, rate matrix Q,
% init. state probabilities p0
K=size(Q,1)-1; %max no. state
S=zeros(n+1,2); %init allocation
%check for integer p0
if (length(p0)==1)
   p0=((0:K)==p0);
end
v=sum(Q,2); %state dep. rates
t=1./v;
P=diag(t)*Q;
S(:,1)=simdmc(P,p0,n);
S(:,2)=t(1+S(:,1)) ...
    .*exponentialrv(1,n+1);

Input: State transition rate matrix Q for a continuous-time finite Markov chain; vector p0 denoting the initial state probabilities; integer n.
Output: A simulation of n steps of the continuous-time Markov chain system: The output is an (n+1) × 2 matrix S such that the first column S(:,1) is the sequence of system states and the second column S(:,2) is the amount of time spent in each state. That is, S(i,2) is the amount of time the system spends in state S(i,1).
Comment: If p0 is a scalar integer, then the simulation starts in state p0. This program is the basis for simcmc.

simdmc    x=simdmc(P,p0,n)

function x=simdmc(P,p0,n)
K=size(P,1)-1; %highest no. state
sx=0:K; %state space
x=zeros(n+1,1); %initialization
if (length(p0)==1) %convert integer p0 to prob vector
   p0=((0:K)==p0);
end
x(1)=finiterv(sx,p0,1); %x(m)= state at time m-1
for m=1:n,
   x(m+1)=finiterv(sx,P(x(m)+1,:),1);
end

Input: Stochastic matrix P which is the state transition matrix of a discrete-time finite Markov chain; vector p0 denoting the initial state probabilities; integer n, the number of steps.
Output: A simulation of the Markov chain system such that for the length n+1 vector x, x(m) is the state at time m-1 of the Markov chain.
Comment: If p0 is a scalar integer, then the simulation starts in state p0.
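As a usage sketch (ours), simulate 1000 steps of a two-state discrete-time chain that flips state with probability 0.1:

P=[0.9 0.1; 0.1 0.9];
x=simdmc(P,0,1000);   %start in state 0
simplot(x);           %equal-width stairs plot of the states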


Random Utilities

count    n=count(x,y)

function n=count(x,y)
%Usage n=count(x,y)
%n(i)= # elements of x <= y(i)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX<=MY),1))';

Input: Vectors x and y.
Output: Vector n such that n(i) is the number of elements of x less than or equal to y(i).

countequal    n=countequal(x,y)

function n=countequal(x,y)
%Usage: n=countequal(x,y)
%n(j)= # elements of x = y(j)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX==MY),1))';

Input: Vectors x and y.
Output: Vector n such that n(i) is the number of elements of x equal to y(i).

countless    n=countless(x,y)

function n=countless(x,y)
%Usage: n=countless(x,y)
%n(i)= # elements of x < y(i)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX<MY),1))';

Input: Vectors x and y.
Output: Vector n such that n(i) is the number of elements of x strictly less than y(i).
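The difference between count and countless matters only at points of equality; for example (our illustration):

x=[1 2 2 3];
count(x,2)      %returns 3, the number of elements <= 2
countless(x,2)  %returns 1, the number of elements < 2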

dftmat    F=dftmat(N)

function F = dftmat(N);
%Usage: F=dftmat(N)
%F is the N by N DFT matrix
n=(0:N-1)';
F=exp((-1.0j)*2*pi*(n*(n'))/N);

Input: Integer N.
Output: F is the N by N discrete Fourier transform matrix.


freqxy    fxy=freqxy(xy,SX,SY)

function fxy = freqxy(xy,SX,SY)
%Usage: fxy = freqxy(xy,SX,SY)
%xy is an m x 2 matrix:
%xy(i,:)= ith sample pair X,Y
%Output fxy is a K x 3 matrix:
% [fxy(k,1) fxy(k,2)]
% = kth unique pair [x y] and
% fxy(k,3)= corresp. rel. freq.
%extend xy to include a sample
%for all possible (X,Y) pairs:
xy=[xy; SX(:) SY(:)];
[U,I,J]=unique(xy,'rows');
N=hist(J,1:max(J))-1;
N=N/sum(N);
fxy=[U N(:)];
%reorder fxy rows to match
%rows of [SX(:) SY(:) PXY(:)]:
fxy=sortrows(fxy,[2 1 3]);

Input: For random variables X and Y, xy is an m × 2 matrix holding a list of sample value pairs; xy(i,:) is the i th sample pair (X, Y). Grids SX and SY represent the sample space.
Output: fxy is a K × 3 matrix. In each row [fxy(k,1) fxy(k,2) fxy(k,3)], [fxy(k,1) fxy(k,2)] is a unique (X, Y) pair with relative frequency fxy(k,3).
Comment: Given the grids SX, SY and the probability grid PXY, a list of random sample value pairs xy can be simulated by the commands

S=[SX(:) SY(:)];
xy=finiterv(S,PXY(:),m);

The output fxy is ordered so that the rows match the ordering of rows in the matrix [SX(:) SY(:) PXY(:)].

fftc    S=fftc(r,N); S=fftc(r)

function S=fftc(varargin);
%DFT for a signal r
%centered at the origin
%Usage:
% fftc(r,N): N point DFT of r
% fftc(r): length(r) DFT of r
r=varargin{1};
L=1+floor(length(r)/2);
if (nargin>1)
   N=varargin{2}(1);
else
   N=(2*L)-1;
end
R=fft(r,N);
n=reshape(0:(N-1),size(R));
phase=2*pi*(n/N)*(L-1);
S=R.*exp((1.0j)*phase);

Input: Vector r=[r(1) ... r(2k+1)] holding the time sequence r_{−k}, ..., r_0, ..., r_k centered around the origin.
Output: S is the DFT of r.
Comment: Supports the same calling conventions as fft.


pmfplot    pmfplot(sx,px,'x','y axis text')

function h=pmfplot(sx,px,xls,yls)
%Usage: pmfplot(sx,px,xls,yls)
%sx and px are vectors, px is the PMF
%xls and yls are x and y label strings
nonzero=find(px);
sx=sx(nonzero); px=px(nonzero);
sx=(sx(:))'; px=(px(:))';
XM = [sx; sx];
PM=[zeros(size(px)); px];
h=plot(XM,PM,'-k');
set(h,'LineWidth',3);
if (nargin==4)
   xlabel(xls);
   ylabel(yls,'VerticalAlignment','Bottom');
end
xmin=min(sx); xmax=max(sx);
xborder=0.05*(xmax-xmin);
xmax=xmax+xborder;
xmin=xmin-xborder;
ymax=1.1*max(px);
axis([xmin xmax 0 ymax]);

Input: Sample space vector sx and PMF vector px for finite random variable X; optional text strings xls and yls.
Output: A plot of the PMF P_X(x) in the bar style used in the text.
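For example (ours), the binomial (5, 0.5) PMF can be plotted by:

sx=0:5;
px=binomialpmf(5,0.5,sx);
pmfplot(sx,px,'x','P_X(x)');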

rect    y=rect(x)

function y=rect(x);
%Usage:y=rect(x);
y=1.0*(abs(x)<0.5);

Input: Vector x.
Output: Vector y such that y_i = rect(x_i) = 1 if |x_i| < 0.5, and y_i = 0 otherwise.

sinc    y=sinc(x)

function y=sinc(x);
xx=x+(x==0);
y=sin(pi*xx)./(pi*xx);
y=((1.0-(x==0)).*y)+ (1.0*(x==0));

Input: Vector x.
Output: Vector y such that y_i = sinc(x_i) = sin(πx_i)/(πx_i).
Comment: The code is ugly because it makes sure to produce the right limit value at x_i = 0.


simplot    simplot(S,xlabel,ylabel)

function h=simplot(S,xls,yls);
%h=simplot(S,xlabel,ylabel)
% Plots the output of a simulated state sequence
% If S is N by 1, a discrete time chain is assumed
% with visit times of one unit.
% If S is an N by 2 matrix, a cts time Markov chain
% is assumed where
% S(:,1) = state sequence.
% S(:,2) = state visit times.
% The cumulative sum of visit times
% gives the transition instants.
% h is a handle to a stairs plot of the state sequence
% vs state transition times
%in case of discrete time simulation
if (size(S,2)==1)
   S=[S ones(size(S))];
end
Y=[S(:,1) ; S(size(S,1),1)];
X=cumsum([0 ; S(:,2)]);
h=stairs(X,Y);
if (nargin==3)
   xlabel(xls);
   ylabel(yls,'VerticalAlignment','Bottom');
end

Input: The simulated state sequence vector S generated by S=simdmc(P,p0,n), or the n × 2 state/time matrix ST generated by either ST=simcmc(Q,p0,T) or ST=simcmcstep(Q,p0,n).
Output: A "stairs" plot showing the sequence of simulation states over time.
Comment: If S is just a state sequence vector, then each stair has equal width. If S is an n × 2 state/time matrix ST, then the width of each stair is proportional to the time spent in that state.


Probability and Stochastic Processes
A Friendly Introduction for Electrical and Computer Engineers
Second Edition

Quiz Solutions

Roy D. Yates and David J. Goodman
May 22, 2004

• The MATLAB section quizzes at the end of each chapter use programs available for download as the archive matcode.zip. This archive has general purpose programs for solving probability problems as well as specific .m files associated with examples or quizzes in the text. Also available is a manual probmatlab.pdf describing the general purpose .m files in matcode.zip.

• We have made a substantial effort to check the solution to every quiz. Nevertheless, there is a nonzero probability (in fact, a probability close to unity) that errors will be found. If you find errors or have suggestions or comments, please send email to ryates@winlab.rutgers.edu. When errors are found, corrected solutions will be posted at the website.


Quiz Solutions – Chapter 1

Quiz 1.1
In the Venn diagrams for parts (1)-(6) below, the shaded area represents the indicated set. [Venn diagrams over the sets M, O, T omitted; only the part labels are recoverable.]

(1) R = T^c    (2) M ∪ O    (3) M ∩ O
(4) R ∪ M    (5) R ∩ M    (6) T^c − M

Quiz 1.2
(1) A_1 = {vvv, vvd, vdv, vdd}
(2) B_1 = {dvv, dvd, ddv, ddd}
(3) A_2 = {vvv, vvd, dvv, dvd}
(4) B_2 = {vdv, vdd, ddv, ddd}
(5) A_3 = {vvv, ddd}
(6) B_3 = {vdv, dvd}
(7) A_4 = {vvv, vvd, vdv, dvv, vdd, dvd, ddv}
(8) B_4 = {ddd, ddv, dvd, vdd}

Recall that A_i and B_i are collectively exhaustive if A_i ∪ B_i = S. Also, A_i and B_i are mutually exclusive if A_i ∩ B_i = ∅. Since we have written down each pair A_i and B_i above, we can simply check for these properties.

The pair A_1 and B_1 are mutually exclusive and collectively exhaustive. The pair A_2 and B_2 are mutually exclusive and collectively exhaustive. The pair A_3 and B_3 are mutually exclusive but not collectively exhaustive. The pair A_4 and B_4 are not mutually exclusive since dvd belongs to A_4 and B_4. However, A_4 and B_4 are collectively exhaustive.


Quiz 1.3
There are exactly 50 equally likely outcomes: s_51 through s_100. Each of these outcomes has probability 0.02.

(1) P[{s_79}] = 0.02
(2) P[{s_100}] = 0.02
(3) P[A] = P[{s_90, ..., s_100}] = 11 × 0.02 = 0.22
(4) P[F] = P[{s_51, ..., s_59}] = 9 × 0.02 = 0.18
(5) P[T ≥ 80] = P[{s_80, ..., s_100}] = 21 × 0.02 = 0.42
(6) P[T < 90] = P[{s_51, s_52, ..., s_89}] = 39 × 0.02 = 0.78
(7) P[a C grade or better] = P[{s_70, ..., s_100}] = 31 × 0.02 = 0.62
(8) P[student passes] = P[{s_60, ..., s_100}] = 41 × 0.02 = 0.82

Quiz 1.4
We can describe this experiment by the event space consisting of the four possible events VB, VL, DB, and DL. We represent these events in the table:

        V      D
  L    0.35    ?
  B     ?      ?

In a roundabout way, the problem statement tells us how to fill in the table. In particular,

P[V] = 0.7 = P[VL] + P[VB]    (1)
P[L] = 0.6 = P[VL] + P[DL]    (2)

Since P[VL] = 0.35, we can conclude that P[VB] = 0.35 and that P[DL] = 0.6 − 0.35 = 0.25. This allows us to fill in two more table entries:

        V      D
  L    0.35   0.25
  B    0.35    ?

The remaining table entry is filled in by observing that the probabilities must sum to 1. This implies P[DB] = 0.05 and the complete table is

        V      D
  L    0.35   0.25
  B    0.35   0.05

Finding the various probabilities is now straightforward:

(1) P[DL] = 0.25
(2) P[D ∪ L] = P[VL] + P[DL] + P[DB] = 0.35 + 0.25 + 0.05 = 0.65
(3) P[VB] = 0.35
(4) P[V ∪ L] = P[V] + P[L] − P[VL] = 0.7 + 0.6 − 0.35 = 0.95
(5) P[V ∪ D] = P[S] = 1
(6) P[LB] = P[LL^c] = 0

Quiz 1.5
(1) The probability of exactly two voice calls is

P[N_V = 2] = P[{vvd, vdv, dvv}] = 0.3    (1)

(2) The probability of at least one voice call is

P[N_V ≥ 1] = P[{vdd, dvd, ddv, vvd, vdv, dvv, vvv}]    (2)
           = 6(0.1) + 0.2 = 0.8    (3)

An easier way to get the same answer is to observe that

P[N_V ≥ 1] = 1 − P[N_V < 1] = 1 − P[N_V = 0] = 1 − P[{ddd}] = 0.8    (4)

(3) The conditional probability of two voice calls followed by a data call given that there were two voice calls is

P[{vvd} | N_V = 2] = P[{vvd}, N_V = 2]/P[N_V = 2] = P[{vvd}]/P[N_V = 2] = 0.1/0.3 = 1/3    (5)

(4) The conditional probability of two data calls followed by a voice call given there were two voice calls is

P[{ddv} | N_V = 2] = P[{ddv}, N_V = 2]/P[N_V = 2] = 0    (6)

The joint event of the outcome ddv and exactly two voice calls has probability zero since there is only one voice call in the outcome ddv.

(5) The conditional probability of exactly two voice calls given at least one voice call is

P[N_V = 2 | N_V ≥ 1] = P[N_V = 2, N_V ≥ 1]/P[N_V ≥ 1] = P[N_V = 2]/P[N_V ≥ 1] = 0.3/0.8 = 3/8    (7)

(6) The conditional probability of at least one voice call given there were exactly two voice calls is

P[N_V ≥ 1 | N_V = 2] = P[N_V ≥ 1, N_V = 2]/P[N_V = 2] = P[N_V = 2]/P[N_V = 2] = 1    (8)

Given that there were two voice calls, there must have been at least one voice call.


Quiz 1.6
In this experiment, there are four outcomes with probabilities

P[{vv}] = (0.8)^2 = 0.64    P[{vd}] = (0.8)(0.2) = 0.16
P[{dv}] = (0.2)(0.8) = 0.16    P[{dd}] = (0.2)^2 = 0.04

When checking the independence of any two events A and B, it's wise to avoid intuition and simply check whether P[AB] = P[A]P[B]. Using the probabilities of the outcomes, we now can test for the independence of events.

(1) First, we calculate the probability of the joint event:

P[N_V = 2, N_V ≥ 1] = P[N_V = 2] = P[{vv}] = 0.64    (1)

Next, we observe that

P[N_V ≥ 1] = P[{vd, dv, vv}] = 0.96    (2)

Finally, we make the comparison

P[N_V = 2] P[N_V ≥ 1] = (0.64)(0.96) ≠ P[N_V = 2, N_V ≥ 1]    (3)

which shows the two events are dependent.

(2) The probability of the joint event is

P[N_V ≥ 1, C_1 = v] = P[{vd, vv}] = 0.80    (4)

From part (1), P[N_V ≥ 1] = 0.96. Further, P[C_1 = v] = 0.8, so that

P[N_V ≥ 1] P[C_1 = v] = (0.96)(0.8) = 0.768 ≠ P[N_V ≥ 1, C_1 = v]    (5)

Hence, the events are dependent.

(3) The problem statement that the calls were independent implies that the events "the second call is a voice call," {C_2 = v}, and "the first call is a data call," {C_1 = d}, are independent events. Just to be sure, we can do the calculations to check:

P[C_1 = d, C_2 = v] = P[{dv}] = 0.16    (6)

Since P[C_1 = d]P[C_2 = v] = (0.2)(0.8) = 0.16, we confirm that the events are independent. Note that this shouldn't be surprising since we used the information that the calls were independent in the problem statement to determine the probabilities of the outcomes.

(4) The probability of the joint event is

P[C_2 = v, N_V is even] = P[{vv}] = 0.64    (7)

Also, each event has probability

P[C_2 = v] = P[{dv, vv}] = 0.8,    P[N_V is even] = P[{dd, vv}] = 0.68    (8)

Thus P[C_2 = v]P[N_V is even] = (0.8)(0.68) = 0.544. Since P[C_2 = v, N_V is even] ≠ 0.544, the events are dependent.


Quiz 1.7
Let F_i denote the event that the user is found on page i. The experiment is described by a three-stage tree: at stage i, the branch F_i has probability 0.8 and the branch F_i^c has probability 0.2, for i = 1, 2, 3. [Tree diagram omitted.]

The user is found unless all three paging attempts fail. Thus the probability the user is found is

P[F] = 1 − P[F_1^c F_2^c F_3^c] = 1 − (0.2)^3 = 0.992    (1)

Quiz 1.8
(1) We can view choosing each bit in the code word as a subexperiment. Each subexperiment has two possible outcomes: 0 and 1. Thus by the fundamental principle of counting, there are 2 × 2 × 2 × 2 = 2^4 = 16 possible code words.

(2) An experiment that can yield all possible code words with two zeroes is to choose which 2 bits (out of 4 bits) will be zero. The other two bits then must be ones. There are \binom{4}{2} = 6 ways to do this. Hence, there are six code words with exactly two zeroes. For this problem, it is also possible to simply enumerate the six code words:

1100, 1010, 1001, 0101, 0110, 0011.

(3) When the first bit must be a zero, then the first subexperiment of choosing the first bit has only one outcome. For each of the next three bits, we have two choices. In this case, there are 1 × 2 × 2 × 2 = 8 ways of choosing a code word.

(4) For the constant ratio code, we can specify a code word by choosing M of the bits to be ones. The other N − M bits will be zeroes. The number of ways of choosing such a code word is \binom{N}{M}. For N = 8 and M = 3, there are \binom{8}{3} = 56 code words. (A quick MATLAB check of these counts appears below.)
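These counts are easy to verify in MATLAB (our check, not part of the original solution):

nchoosek(4,2)   %returns 6, matching part (2)
nchoosek(8,3)   %returns 56, matching part (4)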

Quiz 1.9
(1) In this problem, k bits received in error is the same as k failures in 100 trials. The failure probability is ε = 1 − p and the success probability is 1 − ε = p. That is, the probability of k bits in error and 100 − k correctly received bits is

P[S_{k,100−k}] = \binom{100}{k} ε^k (1 − ε)^{100−k}    (1)

For ε = 0.01,

P[S_{0,100}] = (1 − ε)^{100} = (0.99)^{100} = 0.3660    (2)
P[S_{1,99}] = 100(0.01)(0.99)^{99} = 0.3700    (3)
P[S_{2,98}] = 4950(0.01)^2 (0.99)^{98} = 0.1849    (4)
P[S_{3,97}] = 161,700(0.01)^3 (0.99)^{97} = 0.0610    (5)

(2) The probability a packet is decoded correctly is just

P[C] = P[S_{0,100}] + P[S_{1,99}] + P[S_{2,98}] + P[S_{3,97}] = 0.9819    (6)
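The four probabilities and their sum can be spot-checked with binomialpmf from matcode (our addition):

p=binomialpmf(100,0.01,0:3);  %P[S_0,100] ... P[S_3,97]
p'        %compare with (2)-(5) above
sum(p)    %compare with P[C] in (6)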

Quiz 1.10
Since the chip works only if all n transistors work, the transistors in the chip are like devices in series. The probability that a chip works is P[C] = p^n.

The module works if either 8 chips work or 9 chips work. Let C_k denote the event that exactly k chips work. Since transistor failures are independent of each other, chip failures are also independent. Thus each P[C_k] has the binomial probability

P[C_8] = \binom{9}{8} (P[C])^8 (1 − P[C])^{9−8} = 9p^{8n}(1 − p^n),    (1)
P[C_9] = (P[C])^9 = p^{9n}.    (2)

The probability a memory module works is

P[M] = P[C_8] + P[C_9] = p^{8n}(9 − 8p^n)    (3)
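For concreteness (our addition), P[M] is easy to evaluate for sample values, say p = 0.999 and n = 100:

p=0.999; n=100;
PM=p^(8*n)*(9-8*p^n)   %probability the module works, about 0.79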

Quiz 1.11
R=rand(1,100);
X=(R<=0.4) ...
   + (2*(R>0.4).*(R<=0.9)) ...
   + (3*(R>0.9));
Y=hist(X,1:3)

For a MATLAB simulation, we first generate a vector R of 100 random numbers. Second, we generate vector X as a function of R to represent the 3 possible outcomes of a flip. That is, X(i)=1 if flip i was heads, X(i)=2 if flip i was tails, and X(i)=3 if flip i landed on the edge.

To see how this works, we note there are three cases:
• If R(i) <= 0.4, then X(i)=1.
• If 0.4 < R(i) and R(i) <= 0.9, then X(i)=2.
• If 0.9 < R(i), then X(i)=3.

These three cases will have probabilities 0.4, 0.5 and 0.1. Lastly, we use the hist function to count how many occurrences there are of each possible value of X(i).


Quiz Solutions – Chapter 2

Quiz 2.1
The sample space, probabilities and corresponding grades for the experiment are

  Outcome   P[·]   G
  BB        0.36   3.0
  BC        0.24   2.5
  CB        0.24   2.5
  CC        0.16   2.0

Quiz 2.2
(1) To find c, we recall that the PMF must sum to 1. That is,

Σ_{n=1}^{3} P_N(n) = c(1 + 1/2 + 1/3) = 1    (1)

This implies c = 6/11. Now that we have found c, the remaining parts are straightforward.

(2) P[N = 1] = P_N(1) = c = 6/11
(3) P[N ≥ 2] = P_N(2) + P_N(3) = c/2 + c/3 = 5/11
(4) P[N > 3] = Σ_{n=4}^{∞} P_N(n) = 0

Quiz 2.3
Decoding each transmitted bit is an independent trial where we call a bit error a "success." Each bit is in error, that is, the trial is a success, with probability p. Now we can interpret each experiment in the generic context of independent trials.

(1) The random variable X is the number of trials up to and including the first success. Similar to Example 2.11, X has the geometric PMF

P_X(x) = p(1 − p)^{x−1} for x = 1, 2, ...; 0 otherwise    (1)

(2) If p = 0.1, then the probability exactly 10 bits are sent is

P[X = 10] = P_X(10) = (0.1)(0.9)^9 = 0.0387    (2)

The probability that at least 10 bits are sent is P[X ≥ 10] = Σ_{x=10}^{∞} P_X(x). This sum is not too hard to calculate. However, it's even easier to observe that X ≥ 10 if the first 9 bits are transmitted correctly. That is,

P[X ≥ 10] = P[first 9 bits are correct] = (1 − p)^9    (3)

For p = 0.1, P[X ≥ 10] = 0.9^9 = 0.3874.

(3) The random variable Y is the number of successes in 100 independent trials. Just as in Example 2.13, Y has the binomial PMF

P_Y(y) = \binom{100}{y} p^y (1 − p)^{100−y}    (4)

If p = 0.01, the probability of exactly 2 errors is

P[Y = 2] = P_Y(2) = \binom{100}{2} (0.01)^2 (0.99)^{98} = 0.1849    (5)

(4) The probability of no more than 2 errors is

P[Y ≤ 2] = P_Y(0) + P_Y(1) + P_Y(2)    (6)
         = (0.99)^{100} + 100(0.01)(0.99)^{99} + \binom{100}{2} (0.01)^2 (0.99)^{98}    (7)
         = 0.9207    (8)

(5) Random variable Z is the number of trials up to and including the third success. Thus Z has the Pascal PMF (see Example 2.15)

P_Z(z) = \binom{z−1}{2} p^3 (1 − p)^{z−3}    (9)

Note that P_Z(z) > 0 for z = 3, 4, 5, ....

(6) If p = 0.25, the probability that the third error occurs on bit 12 is

P_Z(12) = \binom{11}{2} (0.25)^3 (0.75)^9 = 0.0645    (10)
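These PMF values can be spot-checked with the matcode functions (our addition):

geometricpmf(0.1,10)      %compare with (2): 0.0387
binomialpmf(100,0.01,2)   %compare with (5): 0.1849
pascalpmf(3,0.25,12)      %compare with (10): 0.0645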

Quiz 2.4
Each of these probabilities can be read off the CDF F_Y(y). However, we must keep in mind that when F_Y(y) has a discontinuity at y_0, F_Y(y) takes the upper value F_Y(y_0^+).

(1) P[Y < 1] = F_Y(1^−) = 0
(2) P[Y ≤ 1] = F_Y(1) = 0.6
(3) P[Y > 2] = 1 − P[Y ≤ 2] = 1 − F_Y(2) = 1 − 0.8 = 0.2
(4) P[Y ≥ 2] = 1 − P[Y < 2] = 1 − F_Y(2^−) = 1 − 0.6 = 0.4
(5) P[Y = 1] = P[Y ≤ 1] − P[Y < 1] = F_Y(1^+) − F_Y(1^−) = 0.6
(6) P[Y = 3] = P[Y ≤ 3] − P[Y < 3] = F_Y(3^+) − F_Y(3^−) = 0.8 − 0.8 = 0

Quiz 2.5
(1) With probability 0.7, a call is a voice call and C = 25. Otherwise, with probability 0.3, we have a data call and C = 40. This corresponds to the PMF

P_C(c) =
   0.7   c = 25
   0.3   c = 40
   0     otherwise    (1)

(2) The expected value of C is

E[C] = 25(0.7) + 40(0.3) = 29.5 cents    (2)

Quiz 2.6
(1) As a function of N, the cost T is

T = 25N + 40(3 − N) = 120 − 15N    (1)

(2) To find the PMF of T, we can draw the following tree: the branches N=0 (probability 0.1), N=1 (0.3), N=2 (0.3) and N=3 (0.3) lead to the values T=120, T=105, T=90 and T=75, respectively. [Tree diagram omitted.]

From the tree, we can write down the PMF of T:

P_T(t) =
   0.3   t = 75, 90, 105
   0.1   t = 120
   0     otherwise    (2)

From the PMF P_T(t), the expected value of T is

E[T] = 75 P_T(75) + 90 P_T(90) + 105 P_T(105) + 120 P_T(120)    (3)
     = (75 + 90 + 105)(0.3) + 120(0.1) = 93    (4)


Quiz 2.7
(1) Using Definition 2.14, the expected number of applications is

E[A] = Σ_{a=1}^{4} a P_A(a) = 1(0.4) + 2(0.3) + 3(0.2) + 4(0.1) = 2    (1)

(2) The number of memory chips is M = g(A) where

g(A) =
   4   A = 1, 2
   6   A = 3
   8   A = 4    (2)

(3) By Theorem 2.10, the expected number of memory chips is

E[M] = Σ_{a=1}^{4} g(a) P_A(a) = 4(0.4) + 4(0.3) + 6(0.2) + 8(0.1) = 4.8    (3)

Since E[A] = 2, g(E[A]) = g(2) = 4. However, E[M] = 4.8 ≠ g(E[A]). The two quantities are different because g(A) is not of the form αA + β.

Quiz 2.8
The PMF P_N(n) allows us to calculate each of the desired quantities.

(1) The expected value of N is

E[N] = Σ_{n=0}^{2} n P_N(n) = 0(0.1) + 1(0.4) + 2(0.5) = 1.4    (1)

(2) The second moment of N is

E[N^2] = Σ_{n=0}^{2} n^2 P_N(n) = 0^2(0.1) + 1^2(0.4) + 2^2(0.5) = 2.4    (2)

(3) The variance of N is

Var[N] = E[N^2] − (E[N])^2 = 2.4 − (1.4)^2 = 0.44    (3)

(4) The standard deviation is σ_N = √Var[N] = √0.44 = 0.663.
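The same numbers drop out of the finite-rv utilities in matcode (our check):

sx=0:2; px=[0.1 0.4 0.5];
finiteexp(sx,px)   %E[N] = 1.4
finitevar(sx,px)   %Var[N] = 0.44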


Quiz 2.9
(1) From the problem statement, we learn that the conditional PMF of N given the event I is

P_{N|I}(n) = 0.02 for n = 1, 2, ..., 50; 0 otherwise    (1)

(2) Also from the problem statement, the conditional PMF of N given the event T is

P_{N|T}(n) = 0.2 for n = 1, 2, 3, 4, 5; 0 otherwise    (2)

(3) The problem statement tells us that P[T] = 1 − P[I] = 3/4. From Theorem 1.10 (the law of total probability), we find the PMF of N is

P_N(n) = P_{N|T}(n) P[T] + P_{N|I}(n) P[I]    (3)
       =
         0.2(0.75) + 0.02(0.25)   n = 1, 2, 3, 4, 5
         0(0.75) + 0.02(0.25)     n = 6, 7, ..., 50
         0                        otherwise    (4)
       =
         0.155   n = 1, 2, 3, 4, 5
         0.005   n = 6, 7, ..., 50
         0       otherwise    (5)

(4) First we find

P[N ≤ 10] = Σ_{n=1}^{10} P_N(n) = (0.155)(5) + (0.005)(5) = 0.80    (6)

By Theorem 2.17, the conditional PMF of N given N ≤ 10 is

P_{N|N≤10}(n) = P_N(n)/P[N ≤ 10] for n ≤ 10; 0 otherwise    (7)
             =
               0.155/0.8   n = 1, 2, 3, 4, 5
               0.005/0.8   n = 6, 7, 8, 9, 10
               0           otherwise    (8)
             =
               0.19375   n = 1, 2, 3, 4, 5
               0.00625   n = 6, 7, 8, 9, 10
               0         otherwise    (9)

(5) Once we have the conditional PMF, calculating conditional expectations is easy.

E[N|N ≤ 10] = Σ_n n P_{N|N≤10}(n)    (10)
            = Σ_{n=1}^{5} n(0.19375) + Σ_{n=6}^{10} n(0.00625)    (11)
            = 3.15625    (12)


[Figure 1: Two examples of the output of samplemean(k): (a) samplemean(100), (b) samplemean(1000). The plots themselves are omitted; each shows five sample-mean sequences settling toward 5 as n grows.]

(6) To find the conditional variance, we first find the conditional second moment

E[N^2 | N ≤ 10] = Σ_n n^2 P_{N|N≤10}(n)    (13)
                = Σ_{n=1}^{5} n^2 (0.19375) + Σ_{n=6}^{10} n^2 (0.00625)    (14)
                = 55(0.19375) + 330(0.00625) = 12.71875    (15)

The conditional variance is

Var[N | N ≤ 10] = E[N^2 | N ≤ 10] − (E[N | N ≤ 10])^2    (16)
                = 12.71875 − (3.15625)^2 = 2.75684    (17)

Quiz 2.10
The function samplemean(k) generates and plots five m_n sequences for n = 1, 2, ..., k. The i th column M(:,i) of M holds a sequence m_1, m_2, ..., m_k.

function M=samplemean(k);
K=(1:k)';
M=zeros(k,5);
for i=1:5,
   X=duniformrv(0,10,k);
   M(:,i)=cumsum(X)./K;
end;
plot(K,M);

Examples of the function calls (a) samplemean(100) and (b) samplemean(1000) are shown in Figure 1. Each call to samplemean(k) produces a random output. What is observed in these figures is that for small n, m_n is fairly random but as n gets large, m_n gets close to E[X] = 5. Although each sequence m_1, m_2, ... that we generate is random, the sequences always converge to E[X]. This random convergence is analyzed in Chapter 7.


Quiz Solutions – Chapter 3

Quiz 3.1

The CDF of Y is

F_Y(y) = { 0, y < 0;  y/4, 0 ≤ y ≤ 4;  1, y > 4 }   (1)

From the CDF F_Y(y), we can calculate the probabilities:

(1) P[Y ≤ −1] = F_Y(−1) = 0

(2) P[Y ≤ 1] = F_Y(1) = 1/4

(3) P[2 < Y ≤ 3] = F_Y(3) − F_Y(2) = 3/4 − 2/4 = 1/4

(4) P[Y > 1.5] = 1 − P[Y ≤ 1.5] = 1 − F_Y(1.5) = 1 − (1.5)/4 = 5/8

Quiz 3.2

(1) First we will find the constant c and then we will sketch the PDF. To find c, we use the fact that ∫_{−∞}^{∞} f_X(x) dx = 1. We will evaluate this integral using integration by parts:

∫_{−∞}^{∞} f_X(x) dx = ∫_0^∞ cx e^{−x/2} dx   (1)
                     = −2cx e^{−x/2} |_0^∞ + ∫_0^∞ 2c e^{−x/2} dx   (2)

where the evaluated term is zero, so that

                     = −4c e^{−x/2} |_0^∞ = 4c   (3)

Thus c = 1/4 and X has the Erlang (n = 2, λ = 1/2) PDF

f_X(x) = { (x/4)e^{−x/2}, x ≥ 0;  0, otherwise }   (4)


(2) To find the CDF F_X(x), we first note X is a nonnegative random variable so that F_X(x) = 0 for all x < 0. For x ≥ 0,

F_X(x) = ∫_0^x f_X(y) dy = ∫_0^x (y/4) e^{−y/2} dy   (5)
       = −(y/2) e^{−y/2} |_0^x − ∫_0^x −(1/2) e^{−y/2} dy   (6)
       = 1 − (x/2) e^{−x/2} − e^{−x/2}   (7)

The complete expression for the CDF is

F_X(x) = { 1 − (x/2 + 1) e^{−x/2}, x ≥ 0;  0, otherwise }   (8)

(3) From the CDF F_X(x),

P[0 ≤ X ≤ 4] = F_X(4) − F_X(0) = 1 − 3e^{−2}.   (9)

(4) Similarly,

P[−2 ≤ X ≤ 2] = F_X(2) − F_X(−2) = 1 − 2e^{−1}.   (10)
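A numerical check of parts (3) and (4) is straightforward; the snippet below is illustrative only (not part of matcode.zip) and implements the CDF of equation (8):

FX=@(x) (x>=0).*(1-(x/2+1).*exp(-x/2));   % CDF from equation (8)
p3=FX(4)-FX(0)                            % P[0 <= X <= 4] = 1 - 3*exp(-2) = 0.5940
p4=FX(2)-FX(-2)                           % P[-2 <= X <= 2] = 1 - 2*exp(-1) = 0.2642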

Quiz 3.3

The PDF of Y is

f_Y(y) = { 3y^2/2, −1 ≤ y ≤ 1;  0, otherwise }   (1)

(1) The expected value of Y is

E[Y] = ∫_{−∞}^{∞} y f_Y(y) dy = ∫_{−1}^{1} (3/2)y^3 dy = (3/8)y^4 |_{−1}^{1} = 0.   (2)

Note that the above calculation wasn't really necessary because E[Y] = 0 whenever the PDF f_Y(y) is an even function (i.e., f_Y(y) = f_Y(−y)).

(2) The second moment of Y is

E[Y^2] = ∫_{−∞}^{∞} y^2 f_Y(y) dy = ∫_{−1}^{1} (3/2)y^4 dy = (3/10)y^5 |_{−1}^{1} = 3/5.   (3)


(3) The variance of Y is

Var[Y] = E[Y^2] − (E[Y])^2 = 3/5.   (4)

(4) The standard deviation of Y is σ_Y = √Var[Y] = √(3/5).

Quiz 3.4

(1) When X is an exponential (λ) random variable, E[X] = 1/λ and Var[X] = 1/λ^2. Since E[X] = 3 and Var[X] = 9, we must have λ = 1/3. The PDF of X is

f_X(x) = { (1/3)e^{−x/3}, x ≥ 0;  0, otherwise }   (1)

(2) We know X is a uniform (a, b) random variable. To find a and b, we apply Theorem 3.6 to write

E[X] = (a + b)/2 = 3,   Var[X] = (b − a)^2/12 = 9.   (2)

This implies

a + b = 6,   b − a = ±6√3.   (3)

The only valid solution with a < b is

a = 3 − 3√3,   b = 3 + 3√3.   (4)

The complete expression for the PDF of X is

f_X(x) = { 1/(6√3), 3 − 3√3 ≤ x < 3 + 3√3;  0, otherwise }   (5)

Quiz 3.5

Each of the requested probabilities can be calculated using the Φ(z) function and Table 3.1 or Q(z) and Table 3.2. We start with the sketches.

(1) The PDFs of X and Y are shown below. The fact that Y has twice the standard deviation of X is reflected in the greater spread of f_Y(y). However, it is important to remember that as the standard deviation increases, the peak value of the Gaussian PDF goes down.

[Sketch: f_X(x) and the wider, lower-peaked f_Y(y) plotted on the same axes]


(2) Since X is Gaussian (0, 1),

P[−1 < X ≤ 1] = F_X(1) − F_X(−1)   (1)
             = Φ(1) − Φ(−1) = 2Φ(1) − 1 = 0.6826.   (2)

(3) Since Y is Gaussian (0, 2),

P[−1 < Y ≤ 1] = F_Y(1) − F_Y(−1)   (3)
             = Φ(1/σ_Y) − Φ(−1/σ_Y) = 2Φ(1/2) − 1 = 0.383.   (4)

(4) Again, since X is Gaussian (0, 1), P[X > 3.5] = Q(3.5) = 2.33 × 10^{−4}.

(5) Since Y is Gaussian (0, 2), P[Y > 3.5] = Q(3.5/2) = Q(1.75) = 1 − Φ(1.75) = 0.0401.
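These probabilities are easy to reproduce numerically. The sketch below defines Φ via the built-in erf function (matcode.zip supplies an equivalent phi.m) and evaluates parts (2) through (5):

Phi=@(z) 0.5*(1+erf(z/sqrt(2)));   % standard normal CDF
p2=2*Phi(1)-1                      % P[-1 < X <= 1] = 0.6826
p3=2*Phi(1/2)-1                    % P[-1 < Y <= 1] = 0.383
p4=1-Phi(3.5)                      % P[X > 3.5] = Q(3.5) = 2.33e-4
p5=1-Phi(1.75)                     % P[Y > 3.5] = Q(1.75) = 0.0401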

Quiz 3.6

The CDF of X is

F_X(x) = { 0, x < −1;  (x + 1)/4, −1 ≤ x < 1;  1, x ≥ 1 }   (1)

The following probabilities can be read directly from the CDF:

(1) P[X ≤ 1] = F_X(1) = 1.

(2) P[X < 1] = F_X(1^−) = 1/2.

(3) P[X = 1] = F_X(1^+) − F_X(1^−) = 1 − 1/2 = 1/2.

(4) We find the PDF f_X(x) by taking the derivative of F_X(x). The resulting PDF is

f_X(x) = { 1/4, −1 ≤ x < 1;  (1/2)δ(x − 1), x = 1;  0, otherwise }   (2)

Quiz 3.7


(1) Since X is always nonnegative, F_X(x) = 0 for x < 0. Also, F_X(x) = 1 for x ≥ 2 since it is always true that X ≤ 2. Lastly, for 0 ≤ x ≤ 2,

F_X(x) = ∫_{−∞}^{x} f_X(y) dy = ∫_0^x (1 − y/2) dy = x − x^2/4.   (1)

The complete CDF of X is

F_X(x) = { 0, x < 0;  x − x^2/4, 0 ≤ x ≤ 2;  1, x > 2 }   (2)

(2) The probability that Y = 1 is

P[Y = 1] = P[X ≥ 1] = 1 − F_X(1) = 1 − 3/4 = 1/4.   (3)

(3) Since X is nonnegative, Y is also nonnegative. Thus F_Y(y) = 0 for y < 0. Also, because Y ≤ 1, F_Y(y) = 1 for all y ≥ 1. Finally, for 0 < y < 1,

F_Y(y) = P[Y ≤ y] = P[X ≤ y] = F_X(y).   (4)

Using the CDF F_X(x), the complete expression for the CDF of Y is

F_Y(y) = { 0, y < 0;  y − y^2/4, 0 ≤ y < 1;  1, y ≥ 1 }   (5)

As expected, we see that the jump in F_Y(y) at y = 1 is exactly equal to P[Y = 1].

(4) By taking the derivative of F_Y(y), we obtain the PDF f_Y(y). Note that when y < 0 or y > 1, the PDF is zero.

f_Y(y) = { 1 − y/2 + (1/4)δ(y − 1), 0 ≤ y ≤ 1;  0, otherwise }   (6)

Quiz 3.8

(1) P[Y ≤ 6] = ∫_{−∞}^{6} f_Y(y) dy = ∫_0^6 (1/10) dy = 0.6.


(2) From Definition 3.15, the conditional PDF of Y given Y ≤ 6 is

f_{Y|Y≤6}(y) = { f_Y(y)/P[Y ≤ 6], y ≤ 6;  0, otherwise } = { 1/6, 0 ≤ y ≤ 6;  0, otherwise }   (1)

(3) The probability Y > 8 is

P[Y > 8] = ∫_8^{10} (1/10) dy = 0.2.   (2)

(4) From Definition 3.15, the conditional PDF of Y given Y > 8 is

f_{Y|Y>8}(y) = { f_Y(y)/P[Y > 8], y > 8;  0, otherwise } = { 1/2, 8 < y ≤ 10;  0, otherwise }   (3)

(5) From the conditional PDF f_{Y|Y≤6}(y), we can calculate the conditional expectation

E[Y|Y ≤ 6] = ∫_{−∞}^{∞} y f_{Y|Y≤6}(y) dy = ∫_0^6 (y/6) dy = 3.   (4)

(6) From the conditional PDF f_{Y|Y>8}(y), we can calculate the conditional expectation

E[Y|Y > 8] = ∫_{−∞}^{∞} y f_{Y|Y>8}(y) dy = ∫_8^{10} (y/2) dy = 9.   (5)

Quiz 3.9

A natural way to produce random variables with PDF f_{T|T>2}(t) is to generate samples of T with PDF f_T(t) and then to discard those samples which fail to satisfy the condition T > 2. Here is a MATLAB function that uses this method:

function t=t2rv(m)
i=0;lambda=1/3;
t=zeros(m,1);
while (i<m),
    x=exponentialrv(lambda,1);   % candidate sample of T
    if (x>2)                     % keep it only if T > 2
        t(i+1)=x;
        i=i+1;
    end
end

A second method exploits the fact that if T is an exponential (λ) random variable, then T′ = T + 2 has PDF f_{T′}(t) = f_{T|T>2}(t). In this case the command

t=2.0+exponentialrv(1/3,m)

generates the vector t.
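Both generators can be sanity-checked against the theory: since T′ = T + 2 with T exponential (1/3), the conditional mean is E[T|T > 2] = 2 + 3 = 5. A quick comparison, assuming the matcode.zip functions are on the path:

m=10000;
t1=t2rv(m);                      % rejection method
t2=2.0+exponentialrv(1/3,m);     % shift method
[mean(t1) mean(t2)]              % both sample means should be near 5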


Quiz Solutions – Chapter 4

Quiz 4.1

Each value of the joint CDF can be found by considering the corresponding probability.

(1) F_{X,Y}(−∞, 2) = P[X ≤ −∞, Y ≤ 2] ≤ P[X ≤ −∞] = 0 since X cannot take on the value −∞.

(2) F_{X,Y}(∞, ∞) = P[X ≤ ∞, Y ≤ ∞] = 1. This result is given in Theorem 4.1.

(3) F_{X,Y}(∞, y) = P[X ≤ ∞, Y ≤ y] = P[Y ≤ y] = F_Y(y).

(4) F_{X,Y}(∞, −∞) = P[X ≤ ∞, Y ≤ −∞] = 0 since Y cannot take on the value −∞.

Quiz 4.2

From the joint PMF of Q and G given in the table, we can calculate the requested probabilities by summing the PMF over those values of Q and G that correspond to the event.

(1) The probability that Q = 0 is

P[Q = 0] = P_{Q,G}(0, 0) + P_{Q,G}(0, 1) + P_{Q,G}(0, 2) + P_{Q,G}(0, 3)   (1)
         = 0.06 + 0.18 + 0.24 + 0.12 = 0.6   (2)

(2) The probability that Q = G is

P[Q = G] = P_{Q,G}(0, 0) + P_{Q,G}(1, 1) = 0.18   (3)

(3) The probability that G > 1 is

P[G > 1] = Σ_{g=2}^{3} Σ_{q=0}^{1} P_{Q,G}(q, g)   (4)
         = 0.24 + 0.16 + 0.12 + 0.08 = 0.6   (5)

(4) The probability that G > Q is

P[G > Q] = Σ_{q=0}^{1} Σ_{g=q+1}^{3} P_{Q,G}(q, g)   (6)
         = 0.18 + 0.24 + 0.12 + 0.16 + 0.08 = 0.78   (7)
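The same sums can be read off the joint PMF table with logical indexing in MATLAB. The short sketch below is illustrative (not from matcode.zip) and reproduces all four probabilities:

PQG=[0.06 0.18 0.24 0.12; 0.04 0.12 0.16 0.08];  % rows q=0,1; cols g=0,...,3
[Q,G]=ndgrid(0:1,0:3);           % grids of q and g values
pQ0=sum(PQG(Q==0))               % P[Q=0] = 0.6
pQG=sum(PQG(Q==G))               % P[Q=G] = 0.18
pG1=sum(PQG(G>1))                % P[G>1] = 0.6
pGQ=sum(PQG(G>Q))                % P[G>Q] = 0.78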


Quiz 4.3

By Theorem 4.3, the marginal PMF of H is

P_H(h) = Σ_{b=0,2,4} P_{H,B}(h, b)   (1)

For each value of h, this corresponds to calculating the row sum across the table of the joint PMF. Similarly, the marginal PMF of B is

P_B(b) = Σ_{h=−1}^{1} P_{H,B}(h, b)   (2)

For each value of b, this corresponds to the column sum down the table of the joint PMF. The easiest way to calculate these marginal PMFs is to simply sum each row and column:

P_{H,B}(h, b)   b = 0   b = 2   b = 4   P_H(h)
h = −1          0       0.4     0.2     0.6
h = 0           0.1     0       0.1     0.2
h = 1           0.1     0.1     0       0.2
P_B(b)          0.2     0.5     0.3            (3)

Quiz 4.4

To find the constant c, we apply ∫_{−∞}^{∞} ∫_{−∞}^{∞} f_{X,Y}(x, y) dx dy = 1. Specifically,

∫_{−∞}^{∞} ∫_{−∞}^{∞} f_{X,Y}(x, y) dx dy = ∫_0^2 ∫_0^1 cxy dx dy   (1)
    = c ∫_0^2 y [x^2/2 |_0^1] dy   (2)
    = (c/2) ∫_0^2 y dy = (c/4) y^2 |_0^2 = c   (3)

Thus c = 1. To calculate P[A], we write

P[A] = ∫∫_A f_{X,Y}(x, y) dx dy   (4)

To integrate over A, the quarter disk of radius 1 in the first quadrant, we convert to polar coordinates using the substitutions x = r cos θ, y = r sin θ and dx dy = r dr dθ, yielding

P[A] = ∫_0^{π/2} ∫_0^1 r^2 sin θ cos θ r dr dθ   (5)
     = [∫_0^1 r^3 dr][∫_0^{π/2} sin θ cos θ dθ]   (6)
     = [r^4/4 |_0^1][(sin^2 θ)/2 |_0^{π/2}] = 1/8   (7)


Quiz 4.5

By Theorem 4.8, the marginal PDF of X is

f_X(x) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dy   (1)

For x < 0 or x > 1, f_X(x) = 0. For 0 ≤ x ≤ 1,

f_X(x) = (6/5) ∫_0^1 (x + y^2) dy = (6/5)[xy + y^3/3]_{y=0}^{y=1} = (6/5)(x + 1/3) = (6x + 2)/5   (2)

The complete expression for the PDF of X is

f_X(x) = { (6x + 2)/5, 0 ≤ x ≤ 1;  0, otherwise }   (3)

By the same method we obtain the marginal PDF for Y. For 0 ≤ y ≤ 1,

f_Y(y) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dx   (4)
       = (6/5) ∫_0^1 (x + y^2) dx = (6/5)[x^2/2 + xy^2]_{x=0}^{x=1} = (6/5)(1/2 + y^2) = (3 + 6y^2)/5   (5)

Since f_Y(y) = 0 for y < 0 or y > 1, the complete expression for the PDF of Y is

f_Y(y) = { (3 + 6y^2)/5, 0 ≤ y ≤ 1;  0, otherwise }   (6)

Quiz 4.6

(A) The time required for the transfer is T = L/B. For each pair of values of L and B, we can calculate the time T needed for the transfer. We can write these down on the table for the joint PMF of L and B as follows:

P_{L,B}(l, b)      b = 14,400     b = 21,600     b = 28,800
l = 518,400        0.20 (T=36)    0.10 (T=24)    0.05 (T=18)
l = 2,592,000      0.05 (T=180)   0.10 (T=120)   0.20 (T=90)
l = 7,776,000      0.00 (T=540)   0.10 (T=360)   0.20 (T=270)

From the table, writing down the PMF of T is straightforward.

P_T(t) = { 0.05, t = 18;  0.1, t = 24;  0.2, t = 36, 90;  0.1, t = 120;  0.05, t = 180;  0.2, t = 270;  0.1, t = 360;  0, otherwise }   (1)
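This bookkeeping is a natural job for MATLAB. The sketch below is illustrative; the finitepmf function in matcode.zip automates the same grouping. It sums the joint PMF over each set {T = t}:

PLB=[0.20 0.10 0.05; 0.05 0.10 0.20; 0.00 0.10 0.20];
[L,B]=ndgrid([518400 2592000 7776000],[14400 21600 28800]);
T=L./B;                        % transfer time for each (l,b) pair
t=unique(T(:))';               % possible values of T
PT=zeros(size(t));
for k=1:length(t),
    PT(k)=sum(PLB(T==t(k)));   % add joint PMF over the event T = t(k)
end
[t; PT]                        % PMF of T as a two-row table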


(B) First, we observe that since 0 ≤ X ≤ 1 and 0 ≤ Y ≤ 1, W = XY satisfies 0 ≤ W ≤ 1. Thus F_W(0) = 0 and F_W(1) = 1. For 0 < w < 1, we calculate the CDF F_W(w) = P[W ≤ w]. Integrating over the region W ≤ w is fairly complex; the calculus is simpler if we integrate over the region XY > w. Specifically,

[Sketch: the unit square, with the curve XY = w separating the corner region XY > w]

F_W(w) = 1 − P[XY > w]   (2)
       = 1 − ∫_w^1 ∫_{w/x}^1 dy dx   (3)
       = 1 − ∫_w^1 (1 − w/x) dx   (4)
       = 1 − [x − w ln x]_{x=w}^{x=1}   (5)
       = 1 − (1 − w + w ln w) = w − w ln w   (6)

The complete expression for the CDF is

F_W(w) = { 0, w < 0;  w − w ln w, 0 ≤ w ≤ 1;  1, w > 1 }   (7)

By taking the derivative of the CDF, we find the PDF is

f_W(w) = dF_W(w)/dw = { −ln w, 0 ≤ w ≤ 1;  0, otherwise }   (8)

Quiz 4.7

(A) It is helpful to first make a table that includes the marginal PMFs.

P_{L,T}(l, t)   t = 40   t = 60   P_L(l)
l = 1           0.15     0.1      0.25
l = 2           0.3      0.2      0.5
l = 3           0.15     0.1      0.25
P_T(t)          0.6      0.4

(1) The expected value of L is

E[L] = 1(0.25) + 2(0.5) + 3(0.25) = 2.   (1)

Since the second moment of L is

E[L^2] = 1^2(0.25) + 2^2(0.5) + 3^2(0.25) = 4.5,   (2)

the variance of L is

Var[L] = E[L^2] − (E[L])^2 = 0.5.   (3)


(2) The expected value of T is

E[T] = 40(0.6) + 60(0.4) = 48.   (4)

The second moment of T is

E[T^2] = 40^2(0.6) + 60^2(0.4) = 2400.   (5)

Thus

Var[T] = E[T^2] − (E[T])^2 = 2400 − 48^2 = 96.   (6)

(3) The correlation is

E[LT] = Σ_{t=40,60} Σ_{l=1}^{3} l t P_{L,T}(l, t)   (7)
      = 1(40)(0.15) + 2(40)(0.3) + 3(40)(0.15)   (8)
      + 1(60)(0.1) + 2(60)(0.2) + 3(60)(0.1)   (9)
      = 96   (10)

(4) From Theorem 4.16(a), the covariance of L and T is

Cov[L, T] = E[LT] − E[L]E[T] = 96 − 2(48) = 0   (11)

(5) Since Cov[L, T] = 0, the correlation coefficient is ρ_{L,T} = 0.

(B) As in the discrete case, the calculations become easier if we first calculate the marginal PDFs f_X(x) and f_Y(y). For 0 ≤ x ≤ 1,

f_X(x) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dy = ∫_0^2 xy dy = (1/2)xy^2 |_{y=0}^{y=2} = 2x   (12)

Similarly, for 0 ≤ y ≤ 2,

f_Y(y) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dx = ∫_0^1 xy dx = (1/2)x^2 y |_{x=0}^{x=1} = y/2   (13)

The complete expressions for the marginal PDFs are

f_X(x) = { 2x, 0 ≤ x ≤ 1;  0, otherwise }   f_Y(y) = { y/2, 0 ≤ y ≤ 2;  0, otherwise }   (14)

From the marginal PDFs, it is straightforward to calculate the various expectations.


(1) The first and second moments of X are

E[X] = ∫_{−∞}^{∞} x f_X(x) dx = ∫_0^1 2x^2 dx = 2/3   (15)
E[X^2] = ∫_{−∞}^{∞} x^2 f_X(x) dx = ∫_0^1 2x^3 dx = 1/2   (16)

The variance of X is Var[X] = E[X^2] − (E[X])^2 = 1/18.

(2) The first and second moments of Y are

E[Y] = ∫_{−∞}^{∞} y f_Y(y) dy = ∫_0^2 (1/2)y^2 dy = 4/3   (17)
E[Y^2] = ∫_{−∞}^{∞} y^2 f_Y(y) dy = ∫_0^2 (1/2)y^3 dy = 2   (18)

The variance of Y is Var[Y] = E[Y^2] − (E[Y])^2 = 2 − 16/9 = 2/9.

(3) The correlation of X and Y is

E[XY] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy f_{X,Y}(x, y) dx dy   (19)
      = ∫_0^1 ∫_0^2 x^2 y^2 dx dy = [x^3/3 |_0^1][y^3/3 |_0^2] = 8/9   (20)

(4) The covariance of X and Y is

Cov[X, Y] = E[XY] − E[X]E[Y] = 8/9 − (2/3)(4/3) = 0.   (21)

(5) Since Cov[X, Y] = 0, the correlation coefficient is ρ_{X,Y} = 0.

Quiz 4.8

(A) Since the event V > 80 occurs only for the pairs (L, T) = (2, 60), (L, T) = (3, 40) and (L, T) = (3, 60),

P[A] = P[V > 80] = P_{L,T}(2, 60) + P_{L,T}(3, 40) + P_{L,T}(3, 60) = 0.45   (1)

By Definition 4.9,

P_{L,T|A}(l, t) = { P_{L,T}(l, t)/P[A], lt > 80;  0, otherwise }   (2)


We can represent this conditional PMF in the following table:

P_{L,T|A}(l, t)   t = 40   t = 60
l = 1             0        0
l = 2             0        4/9
l = 3             1/3      2/9

The conditional expectation of V can be found from the conditional PMF.

E[V|A] = Σ_l Σ_t lt P_{L,T|A}(l, t)   (3)
       = (2 · 60)(4/9) + (3 · 40)(1/3) + (3 · 60)(2/9) = 133 1/3   (4)

For the conditional variance Var[V|A], we first find the conditional second moment

E[V^2|A] = Σ_l Σ_t (lt)^2 P_{L,T|A}(l, t)   (5)
         = (2 · 60)^2(4/9) + (3 · 40)^2(1/3) + (3 · 60)^2(2/9) = 18,400   (6)

It follows that

Var[V|A] = E[V^2|A] − (E[V|A])^2 = 622 2/9   (7)

(B) For continuous random variables X and Y, we first calculate the probability of the conditioning event.

P[B] = ∫∫_B f_{X,Y}(x, y) dx dy = ∫_{40}^{60} ∫_{80/y}^{3} (xy/4000) dx dy   (8)
     = ∫_{40}^{60} (y/4000) [x^2/2 |_{80/y}^{3}] dy   (9)
     = ∫_{40}^{60} (y/4000) (9/2 − 3200/y^2) dy   (10)
     = 9/8 − (4/5) ln(3/2) ≈ 0.801   (11)

The conditional PDF of X and Y is

f_{X,Y|B}(x, y) = { f_{X,Y}(x, y)/P[B], (x, y) ∈ B;  0, otherwise }   (12)
               = { Kxy, 40 ≤ y ≤ 60, 80/y ≤ x ≤ 3;  0, otherwise }   (13)


where K = (4000 P[B])^{−1}. The conditional expectation of W given event B is

E[W|B] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy f_{X,Y|B}(x, y) dx dy   (14)
       = ∫_{40}^{60} ∫_{80/y}^{3} K x^2 y^2 dx dy   (15)
       = (K/3) ∫_{40}^{60} y^2 [x^3]_{x=80/y}^{x=3} dy   (16)
       = (K/3) ∫_{40}^{60} (27y^2 − 80^3/y) dy   (17)
       = (K/3) [9y^3 − 80^3 ln y]_{40}^{60} ≈ 120.78   (18)

The conditional second moment of W given B is

E[W^2|B] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (xy)^2 f_{X,Y|B}(x, y) dx dy   (19)
         = ∫_{40}^{60} ∫_{80/y}^{3} K x^3 y^3 dx dy   (20)
         = (K/4) ∫_{40}^{60} y^3 [x^4]_{x=80/y}^{x=3} dy   (21)
         = (K/4) ∫_{40}^{60} (81y^3 − 80^4/y) dy   (22)
         = (K/4) [(81/4)y^4 − 80^4 ln y]_{40}^{60} ≈ 16,116.10   (23)

It follows that the conditional variance of W given B is

Var[W|B] = E[W^2|B] − (E[W|B])^2 ≈ 1528.30   (24)

Quiz 4.9

(A) (1) The joint PMF of A and B can be found from the marginal and conditional PMFs via P_{A,B}(a, b) = P_{B|A}(b|a)P_A(a). Incorporating the information from the given conditional PMFs can be confusing, however. Consequently, we can note that A has range S_A = {0, 2} and B has range S_B = {0, 1}. A table of the joint PMF will include all four possible combinations of A and B. The general form of the table is

P_{A,B}(a, b)   b = 0                b = 1
a = 0           P_{B|A}(0|0)P_A(0)   P_{B|A}(1|0)P_A(0)
a = 2           P_{B|A}(0|2)P_A(2)   P_{B|A}(1|2)P_A(2)


Substituting values from P_{B|A}(b|a) and P_A(a), we have

P_{A,B}(a, b)   b = 0        b = 1
a = 0           (0.8)(0.4)   (0.2)(0.4)
a = 2           (0.5)(0.6)   (0.5)(0.6)

or

P_{A,B}(a, b)   b = 0   b = 1
a = 0           0.32    0.08
a = 2           0.3     0.3

(2) Given the conditional PMF P_{B|A}(b|2), it is easy to calculate the conditional expectation

E[B|A = 2] = Σ_{b=0}^{1} b P_{B|A}(b|2) = (0)(0.5) + (1)(0.5) = 0.5   (1)

(3) From the joint PMF P_{A,B}(a, b), we can calculate the conditional PMF

P_{A|B}(a|0) = P_{A,B}(a, 0)/P_B(0) = { 0.32/0.62, a = 0;  0.3/0.62, a = 2;  0, otherwise }   (2)
            = { 16/31, a = 0;  15/31, a = 2;  0, otherwise }   (3)

(4) We can calculate the conditional variance Var[A|B = 0] using the conditional PMF P_{A|B}(a|0). First we calculate the conditional expected value

E[A|B = 0] = Σ_a a P_{A|B}(a|0) = 0(16/31) + 2(15/31) = 30/31   (4)

The conditional second moment is

E[A^2|B = 0] = Σ_a a^2 P_{A|B}(a|0) = 0^2(16/31) + 2^2(15/31) = 60/31   (5)

The conditional variance is then

Var[A|B = 0] = E[A^2|B = 0] − (E[A|B = 0])^2 = 960/961   (6)

(B) (1) The joint PDF of X and Y is

f_{X,Y}(x, y) = f_{Y|X}(y|x) f_X(x) = { 6y, 0 ≤ y ≤ x, 0 ≤ x ≤ 1;  0, otherwise }   (7)

(2) From the given conditional PDF f_{Y|X}(y|x),

f_{Y|X}(y|1/2) = { 8y, 0 ≤ y ≤ 1/2;  0, otherwise }   (8)


(3) The conditional PDF of X given Y = 1/2 is f_{X|Y}(x|1/2) = f_{X,Y}(x, 1/2)/f_Y(1/2). To find f_Y(1/2), we integrate the joint PDF.

f_Y(1/2) = ∫_{−∞}^{∞} f_{X,Y}(x, 1/2) dx = ∫_{1/2}^{1} 6(1/2) dx = 3/2   (9)

Thus, for 1/2 ≤ x ≤ 1,

f_{X|Y}(x|1/2) = f_{X,Y}(x, 1/2)/f_Y(1/2) = 6(1/2)/(3/2) = 2   (10)

(4) From the previous part, we see that given Y = 1/2, the conditional PDF of X is uniform (1/2, 1). Thus, by the definition of the uniform (a, b) PDF,

Var[X|Y = 1/2] = (1 − 1/2)^2/12 = 1/48   (11)

Quiz 4.10

(A) (1) For random variables X and Y from Example 4.1, we observe that P_Y(1) = 0.09 and P_X(0) = 0.01. However,

P_{X,Y}(0, 1) = 0 ≠ P_X(0) P_Y(1)   (1)

Since we have found a pair x, y such that P_{X,Y}(x, y) ≠ P_X(x)P_Y(y), we can conclude that X and Y are dependent. Note that whenever P_{X,Y}(x, y) = 0, independence requires that either P_X(x) = 0 or P_Y(y) = 0.

(2) For random variables Q and G from Quiz 4.2, it is not obvious whether they are independent. Unlike X and Y in part (a), there are no obvious pairs q, g that fail the independence requirement. In this case, we calculate the marginal PMFs from the table of the joint PMF P_{Q,G}(q, g) in Quiz 4.2.

P_{Q,G}(q, g)   g = 0   g = 1   g = 2   g = 3   P_Q(q)
q = 0           0.06    0.18    0.24    0.12    0.60
q = 1           0.04    0.12    0.16    0.08    0.40
P_G(g)          0.10    0.30    0.40    0.20

Careful study of the table will verify that P_{Q,G}(q, g) = P_Q(q)P_G(g) for every pair q, g. Hence Q and G are independent.

(B) (1) Since X_1 and X_2 are independent,

f_{X_1,X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2)   (2)
    = { (1 − x_1/2)(1 − x_2/2), 0 ≤ x_1 ≤ 2, 0 ≤ x_2 ≤ 2;  0, otherwise }   (3)


(2) Let F_X(x) denote the CDF of both X_1 and X_2. The CDF of Z = max(X_1, X_2) is found by observing that Z ≤ z iff X_1 ≤ z and X_2 ≤ z. That is,

P[Z ≤ z] = P[X_1 ≤ z, X_2 ≤ z]   (4)
         = P[X_1 ≤ z] P[X_2 ≤ z] = [F_X(z)]^2   (5)

To complete the problem, we need to find the CDF of each X_i. From the PDF f_X(x), the CDF is

F_X(x) = ∫_{−∞}^{x} f_X(y) dy = { 0, x < 0;  x − x^2/4, 0 ≤ x ≤ 2;  1, x > 2 }   (6)

Thus for 0 ≤ z ≤ 2,

F_Z(z) = (z − z^2/4)^2   (7)

The complete expression for the CDF of Z is

F_Z(z) = { 0, z < 0;  (z − z^2/4)^2, 0 ≤ z ≤ 2;  1, z > 2 }   (8)

Quiz 4.11

This problem just requires identifying the various terms in Definition 4.17 and Theorem 4.29. Specifically, from the problem statement, we know that ρ = 1/2,

µ_1 = µ_X = 0,   µ_2 = µ_Y = 0,   (1)

and that

σ_1 = σ_X = 1,   σ_2 = σ_Y = 1.   (2)

(1) Applying these facts to Definition 4.17, we have

f_{X,Y}(x, y) = (1/√(3π^2)) e^{−2(x^2 − xy + y^2)/3}.   (3)

(2) By Theorem 4.30, the conditional expected value and standard deviation of X given Y = y are

E[X|Y = y] = y/2,   σ̃_X = √(σ_1^2(1 − ρ^2)) = √(3/4).   (4)

When Y = y = 2, we see that E[X|Y = 2] = 1 and Var[X|Y = 2] = 3/4. The conditional PDF of X given Y = 2 is simply the Gaussian PDF

f_{X|Y}(x|2) = (1/√(3π/2)) e^{−2(x−1)^2/3}.   (5)


Quiz 4.12

One straightforward method is to follow the approach of Example 4.28. Instead, we use an alternate approach. First we observe that X has the discrete uniform (1, 4) PMF. Also, given X = x, Y has a discrete uniform (1, x) PMF. That is,

P_X(x) = { 1/4, x = 1, 2, 3, 4;  0, otherwise }   P_{Y|X}(y|x) = { 1/x, y = 1, . . . , x;  0, otherwise }   (1)

Given X = x, and an independent uniform (0, 1) random variable U, we can generate a sample value of Y with a discrete uniform (1, x) PMF via Y = ⌈xU⌉. This observation prompts the following program:

function xy=dtrianglerv(m)
sx=[1;2;3;4];
px=0.25*ones(4,1);
x=finiterv(sx,px,m);       % m samples of the discrete uniform (1,4) X
y=ceil(x.*rand(m,1));      % Y = ceil(xU) is discrete uniform (1,x)
xy=[x';y'];


Quiz Solutions – Chapter 5

Quiz 5.1

We find P[C] by integrating the joint PDF over the region of interest. Specifically,

P[C] = ∫_0^{1/2} dy_2 ∫_0^{y_2} dy_1 ∫_0^{1/2} dy_4 ∫_0^{y_4} 4 dy_3   (1)
     = 4 [∫_0^{1/2} y_2 dy_2][∫_0^{1/2} y_4 dy_4] = 4(1/8)(1/8) = 1/16.   (2)

Quiz 5.2

By definition of A, Y_1 = X_1, Y_2 = X_2 − X_1 and Y_3 = X_3 − X_2. Since 0 < X_1 < X_2 < X_3, each Y_i must be a strictly positive integer. Thus, for y_1, y_2, y_3 ∈ {1, 2, . . .},

P_Y(y) = P[Y_1 = y_1, Y_2 = y_2, Y_3 = y_3]   (1)
       = P[X_1 = y_1, X_2 − X_1 = y_2, X_3 − X_2 = y_3]   (2)
       = P[X_1 = y_1, X_2 = y_2 + y_1, X_3 = y_3 + y_2 + y_1]   (3)
       = (1 − p)^3 p^{y_1 + y_2 + y_3}   (4)

By defining the vector a = [1 1 1]′, the complete expression for the joint PMF of Y is

P_Y(y) = { (1 − p)^3 p^{a′y}, y_1, y_2, y_3 ∈ {1, 2, . . .};  0, otherwise }   (5)

Quiz 5.3

First we note that each marginal PDF is nonzero only if any subset of the x_i obeys the ordering constraints 0 ≤ x_1 ≤ x_2 ≤ x_3 ≤ 1. Within these constraints, we have

f_{X_1,X_2}(x_1, x_2) = ∫_{−∞}^{∞} f_X(x) dx_3 = ∫_{x_2}^{1} 6 dx_3 = 6(1 − x_2),   (1)
f_{X_2,X_3}(x_2, x_3) = ∫_{−∞}^{∞} f_X(x) dx_1 = ∫_0^{x_2} 6 dx_1 = 6x_2,   (2)
f_{X_1,X_3}(x_1, x_3) = ∫_{−∞}^{∞} f_X(x) dx_2 = ∫_{x_1}^{x_3} 6 dx_2 = 6(x_3 − x_1).   (3)

In particular, we must keep in mind that f_{X_1,X_2}(x_1, x_2) = 0 unless 0 ≤ x_1 ≤ x_2 ≤ 1, f_{X_2,X_3}(x_2, x_3) = 0 unless 0 ≤ x_2 ≤ x_3 ≤ 1, and that f_{X_1,X_3}(x_1, x_3) = 0 unless 0 ≤ x_1 ≤


x_3 ≤ 1. The complete expressions are

f_{X_1,X_2}(x_1, x_2) = { 6(1 − x_2), 0 ≤ x_1 ≤ x_2 ≤ 1;  0, otherwise }   (4)
f_{X_2,X_3}(x_2, x_3) = { 6x_2, 0 ≤ x_2 ≤ x_3 ≤ 1;  0, otherwise }   (5)
f_{X_1,X_3}(x_1, x_3) = { 6(x_3 − x_1), 0 ≤ x_1 ≤ x_3 ≤ 1;  0, otherwise }   (6)

Now we can find the marginal PDFs. When 0 ≤ x_i ≤ 1 for each x_i,

f_{X_1}(x_1) = ∫_{−∞}^{∞} f_{X_1,X_2}(x_1, x_2) dx_2 = ∫_{x_1}^{1} 6(1 − x_2) dx_2 = 3(1 − x_1)^2   (7)
f_{X_2}(x_2) = ∫_{−∞}^{∞} f_{X_2,X_3}(x_2, x_3) dx_3 = ∫_{x_2}^{1} 6x_2 dx_3 = 6x_2(1 − x_2)   (8)
f_{X_3}(x_3) = ∫_{−∞}^{∞} f_{X_2,X_3}(x_2, x_3) dx_2 = ∫_0^{x_3} 6x_2 dx_2 = 3x_3^2   (9)

The complete expressions are

f_{X_1}(x_1) = { 3(1 − x_1)^2, 0 ≤ x_1 ≤ 1;  0, otherwise }   (10)
f_{X_2}(x_2) = { 6x_2(1 − x_2), 0 ≤ x_2 ≤ 1;  0, otherwise }   (11)
f_{X_3}(x_3) = { 3x_3^2, 0 ≤ x_3 ≤ 1;  0, otherwise }   (12)

Quiz 5.4

In the PDF f_Y(y), the components have dependencies as a result of the ordering constraints Y_1 ≤ Y_2 and Y_3 ≤ Y_4. We can separate these constraints by creating the vectors

V = [Y_1 Y_2]′,   W = [Y_3 Y_4]′.   (1)

The joint PDF of V and W is

f_{V,W}(v, w) = { 4, 0 ≤ v_1 ≤ v_2 ≤ 1, 0 ≤ w_1 ≤ w_2 ≤ 1;  0, otherwise }   (2)


We must verify that V and W are independent. For 0 ≤ v_1 ≤ v_2 ≤ 1,

f_V(v) = ∫∫ f_{V,W}(v, w) dw_1 dw_2   (3)
       = ∫_0^1 [∫_{w_1}^{1} 4 dw_2] dw_1   (4)
       = ∫_0^1 4(1 − w_1) dw_1 = 2   (5)

Similarly, for 0 ≤ w_1 ≤ w_2 ≤ 1,

f_W(w) = ∫∫ f_{V,W}(v, w) dv_1 dv_2   (6)
       = ∫_0^1 [∫_{v_1}^{1} 4 dv_2] dv_1 = 2   (7)

It follows that V and W have PDFs

f_V(v) = { 2, 0 ≤ v_1 ≤ v_2 ≤ 1;  0, otherwise },   f_W(w) = { 2, 0 ≤ w_1 ≤ w_2 ≤ 1;  0, otherwise }   (8)

It is easy to verify that f_{V,W}(v, w) = f_V(v) f_W(w), confirming that V and W are independent vectors.

Quiz 5.5

(A) Referring to Theorem 1.19, each test is a subexperiment with three possible outcomes: L, A and R. In five trials, the vector X = [X_1 X_2 X_3]′ indicating the number of outcomes of each subexperiment has the multinomial PMF

P_X(x) = { (5 choose x_1, x_2, x_3)(0.3)^{x_1}(0.6)^{x_2}(0.1)^{x_3}, x_1 + x_2 + x_3 = 5 and x_1, x_2, x_3 ∈ {0, 1, . . . , 5};  0, otherwise }   (1)

We can find the marginal PMF for each X_i from the joint PMF P_X(x); however it is simpler to just start from first principles and observe that X_1 is the number of occurrences of L in five independent tests. If we view each test as a trial with success probability P[L] = 0.3, we see that X_1 is a binomial (n, p) = (5, 0.3) random variable. Similarly, X_2 is a binomial (5, 0.6) random variable and X_3 is a binomial (5, 0.1) random variable. That is, for p_1 = 0.3, p_2 = 0.6 and p_3 = 0.1,

P_{X_i}(x) = { (5 choose x) p_i^x (1 − p_i)^{5−x}, x = 0, 1, . . . , 5;  0, otherwise }   (2)


From the marginal PMFs, we see that X_1, X_2 and X_3 are not independent. Hence, we must use Theorem 5.6 to find the PMF of W. In particular, since X_1 + X_2 + X_3 = 5 and since each X_i is non-negative, P_W(0) = P_W(1) = 0. Furthermore,

P_W(2) = P_X(1, 2, 2) + P_X(2, 1, 2) + P_X(2, 2, 1)   (3)
       = (5!/(2!2!1!))[0.3(0.6)^2(0.1)^2 + 0.3^2(0.6)(0.1)^2 + 0.3^2(0.6)^2(0.1)]   (4)
       = 0.1458   (5)

In addition, for w = 3, w = 4, and w = 5, the event W = w occurs if and only if one of the mutually exclusive events X_1 = w, X_2 = w, or X_3 = w occurs. Thus,

P_W(3) = P_{X_1}(3) + P_{X_2}(3) + P_{X_3}(3) = 0.486   (6)
P_W(4) = P_{X_1}(4) + P_{X_2}(4) + P_{X_3}(4) = 0.288   (7)
P_W(5) = P_{X_1}(5) + P_{X_2}(5) + P_{X_3}(5) = 0.0802   (8)

(B) Since each Y_i = 2X_i + 4, we can apply Theorem 5.10 to write

f_Y(y) = (1/2^3) f_X((y_1 − 4)/2, (y_2 − 4)/2, (y_3 − 4)/2)   (9)
       = { (1/8)e^{−(y_3 − 4)/2}, 4 ≤ y_1 ≤ y_2 ≤ y_3;  0, otherwise }   (10)

Note that for other matrices A, the constraints on y resulting from the constraints 0 ≤ X_1 ≤ X_2 ≤ X_3 can be much more complicated.

Quiz 5.6

We start by finding the components E[X_i] = ∫_{−∞}^{∞} x f_{X_i}(x) dx of µ_X. To do so, we use the marginal PDFs f_{X_i}(x) found in Quiz 5.3:

E[X_1] = ∫_0^1 3x(1 − x)^2 dx = 1/4,   (1)
E[X_2] = ∫_0^1 6x^2(1 − x) dx = 1/2,   (2)
E[X_3] = ∫_0^1 3x^3 dx = 3/4.   (3)

To find the correlation matrix R_X, we need to find E[X_i X_j] for all i and j. We start with


the second moments:

E[X_1^2] = ∫_0^1 3x^2(1 − x)^2 dx = 1/10.   (4)
E[X_2^2] = ∫_0^1 6x^3(1 − x) dx = 3/10.   (5)
E[X_3^2] = ∫_0^1 3x^4 dx = 3/5.   (6)

Using marginal PDFs from Quiz 5.3, the cross terms are

E[X_1 X_2] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x_1 x_2 f_{X_1,X_2}(x_1, x_2) dx_1 dx_2   (7)
           = ∫_0^1 [∫_{x_1}^{1} 6x_1 x_2(1 − x_2) dx_2] dx_1   (8)
           = ∫_0^1 [x_1 − 3x_1^3 + 2x_1^4] dx_1 = 3/20.   (9)
E[X_2 X_3] = ∫_0^1 ∫_{x_2}^{1} 6x_2^2 x_3 dx_3 dx_2   (10)
           = ∫_0^1 [3x_2^2 − 3x_2^4] dx_2 = 2/5   (11)
E[X_1 X_3] = ∫_0^1 ∫_{x_1}^{1} 6x_1 x_3(x_3 − x_1) dx_3 dx_1   (12)
           = ∫_0^1 [(2x_1 x_3^3 − 3x_1^2 x_3^2)|_{x_3=x_1}^{x_3=1}] dx_1   (13)
           = ∫_0^1 [2x_1 − 3x_1^2 + x_1^4] dx_1 = 1/5.   (14)

Summarizing the results, X has correlation matrix

R_X = [ 1/10  3/20  1/5
        3/20  3/10  2/5
        1/5   2/5   3/5 ].   (15)

Vector X has covariance matrix

C_X = R_X − E[X] E[X]′   (16)
    = R_X − [1/4; 1/2; 3/4][1/4 1/2 3/4]   (17)
    = [ 1/10  3/20  1/5        [ 1/16  1/8   3/16
        3/20  3/10  2/5    −     1/8   1/4   3/8
        1/5   2/5   3/5 ]        3/16  3/8   9/16 ]
    = (1/80) [ 3  2  1
               2  4  2
               1  2  3 ].   (18)


This problem shows that even for fairly simple joint PDFs, computing the covariance matrix by calculus can be a time-consuming task.

Quiz 5.7

We observe that X = AZ + b where

A = [ 2 1; 1 −1 ],   b = [ 2; 0 ].   (1)

It follows from Theorem 5.18 that µ_X = b and that

C_X = AA′ = [ 2 1; 1 −1 ][ 2 1; 1 −1 ]′ = [ 5 1; 1 2 ].   (2)

Quiz 5.8

First, we observe that Y = AT where A = [1/31 1/31 · · · 1/31]. Since T is a Gaussian random vector, Theorem 5.16 tells us that Y is a 1-dimensional Gaussian vector, i.e., just a Gaussian random variable. The expected value of Y is µ_Y = µ_T = 80. The covariance matrix of Y is 1 × 1 and is just equal to Var[Y]. Thus, by Theorem 5.16, Var[Y] = A C_T A′.

function p=julytemps(T);
[D1 D2]=ndgrid((1:31),(1:31));
CT=36./(1+abs(D1-D2));
A=ones(31,1)/31.0;
CY=(A')*CT*A;
p=phi((T-80)/sqrt(CY));

In julytemps.m, the first two lines generate the 31 × 31 covariance matrix CT, or C_T. Next we calculate Var[Y]. The final step is to use the Φ(·) function to calculate P[Y < T].

Here is the output of julytemps.m:

>> julytemps([70 75 80 85 90 95])

ans =

0.0000 0.0221 0.5000 0.9779 1.0000 1.0000

Note that P[T ≤ 70] is not actually zero and that P[T ≤ 90] is not actually 1.0000. It's just that MATLAB's short format output, invoked with the command format short, rounds off those probabilities. Here is the long format output:

>> format long

>> julytemps([70 75 80 85 90 95])

ans =

Columns 1 through 4

0.00002844263128 0.02207383067604 0.50000000000000 0.97792616932396

Columns 5 through 6

0.99997155736872 0.99999999922010


The ndgrid function is a useful way to calculate many covariance matrices. However, in this problem, C_T has a special structure; the i, jth element is

C_T(i, j) = c_{|i−j|} = 36/(1 + |i − j|).   (1)

If we write out the elements of the covariance matrix, we see that

C_T = [ c_0   c_1   · · ·  c_30
        c_1   c_0   ⋱      ⋮
        ⋮     ⋱     ⋱      c_1
        c_30  · · ·  c_1   c_0 ].   (2)

This covariance matrix is known as a symmetric Toeplitz matrix. We will see in Chapters 9 and 11 that Toeplitz covariance matrices are quite common. In fact, MATLAB has a toeplitz function for generating them. The function julytemps2 uses toeplitz to generate the covariance matrix C_T.

function p=julytemps2(T);
c=36./(1+abs(0:30));
CT=toeplitz(c);
A=ones(31,1)/31.0;
CY=(A')*CT*A;
p=phi((T-80)/sqrt(CY));


Quiz Solutions – Chapter 6

Quiz 6.1

Let K_1, . . . , K_n denote a sequence of iid random variables each with PMF

P_K(k) = { 1/4, k = 1, . . . , 4;  0, otherwise }   (1)

We can write W_n in the form of W_n = K_1 + · · · + K_n. First, we note that the first two moments of K_i are

E[K_i] = (1 + 2 + 3 + 4)/4 = 2.5   (2)
E[K_i^2] = (1^2 + 2^2 + 3^2 + 4^2)/4 = 7.5   (3)

Thus the variance of K_i is

Var[K_i] = E[K_i^2] − (E[K_i])^2 = 7.5 − (2.5)^2 = 1.25   (4)

Since E[K_i] = 2.5, the expected value of W_n is

E[W_n] = E[K_1] + · · · + E[K_n] = nE[K_i] = 2.5n   (5)

Since the rolls are independent, the random variables K_1, . . . , K_n are independent. Hence, by Theorem 6.3, the variance of the sum equals the sum of the variances. That is,

Var[W_n] = Var[K_1] + · · · + Var[K_n] = 1.25n   (6)

Quiz 6.2

Random variables X and Y have PDFs

f_X(x) = { 3e^{−3x}, x ≥ 0;  0, otherwise }   f_Y(y) = { 2e^{−2y}, y ≥ 0;  0, otherwise }   (1)

Since X and Y are nonnegative, W = X + Y is nonnegative. By Theorem 6.5, the PDF of W = X + Y is

f_W(w) = ∫_{−∞}^{∞} f_X(w − y) f_Y(y) dy = 6 ∫_0^w e^{−3(w−y)} e^{−2y} dy   (2)

Fortunately, this integral is easy to evaluate. For w > 0,

f_W(w) = 6e^{−3w} e^y |_0^w = 6(e^{−2w} − e^{−3w})   (3)

Since f_W(w) = 0 for w < 0, a complete expression for the PDF of W is

f_W(w) = { 6e^{−2w}(1 − e^{−w}), w ≥ 0;  0, otherwise }   (4)


Quiz 6.3

The MGF of K is

φ_K(s) = E[e^{sK}] = Σ_{k=0}^{4} (0.2)e^{sk} = 0.2(1 + e^s + e^{2s} + e^{3s} + e^{4s})   (1)

We find the moments by taking derivatives. The first derivative of φ_K(s) is

dφ_K(s)/ds = 0.2(e^s + 2e^{2s} + 3e^{3s} + 4e^{4s})   (2)

Evaluating the derivative at s = 0 yields

E[K] = dφ_K(s)/ds |_{s=0} = 0.2(1 + 2 + 3 + 4) = 2   (3)

To find higher-order moments, we continue to take derivatives:

E[K^2] = d^2φ_K(s)/ds^2 |_{s=0} = 0.2(e^s + 4e^{2s} + 9e^{3s} + 16e^{4s}) |_{s=0} = 6   (4)
E[K^3] = d^3φ_K(s)/ds^3 |_{s=0} = 0.2(e^s + 8e^{2s} + 27e^{3s} + 64e^{4s}) |_{s=0} = 20   (5)
E[K^4] = d^4φ_K(s)/ds^4 |_{s=0} = 0.2(e^s + 16e^{2s} + 81e^{3s} + 256e^{4s}) |_{s=0} = 70.8   (6)
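Since K is finite, the moments can also be computed directly from the PMF, which provides a check on the derivative calculations (illustrative snippet, not from matcode.zip):

k=0:4; PK=0.2*ones(1,5);   % PMF of K
EK=sum(k.*PK)              % E[K] = 2
EK2=sum(k.^2.*PK)          % E[K^2] = 6
EK3=sum(k.^3.*PK)          % E[K^3] = 20
EK4=sum(k.^4.*PK)          % E[K^4] = 70.8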

Quiz 6.4

(A) Each K_i has MGF

φ_K(s) = E[e^{sK_i}] = (e^s + e^{2s} + · · · + e^{ns})/n = e^s(1 − e^{ns})/(n(1 − e^s))   (1)

Since the sequence of K_i is independent, Theorem 6.8 says the MGF of J is

φ_J(s) = (φ_K(s))^m = e^{ms}(1 − e^{ns})^m/(n^m(1 − e^s)^m)   (2)

(B) Since the set of α^j X_j are independent Gaussian random variables, Theorem 6.10 says that W is a Gaussian random variable. Thus to find the PDF of W, we need only find the expected value and variance. Since the expectation of the sum equals the sum of the expectations:

E[W] = αE[X_1] + α^2 E[X_2] + · · · + α^n E[X_n] = 0   (3)


Since the α^j X_j are independent, the variance of the sum equals the sum of the variances:

Var[W] = α^2 Var[X_1] + α^4 Var[X_2] + · · · + α^{2n} Var[X_n]   (4)
       = α^2 + 2(α^2)^2 + 3(α^2)^3 + · · · + n(α^2)^n   (5)

Defining q = α^2, we can use Math Fact B.6 to write

Var[W] = (α^2 − α^{2n+2}[1 + n(1 − α^2)])/(1 − α^2)^2   (6)

With E[W] = 0 and σ_W^2 = Var[W], we can write the PDF of W as

f_W(w) = (1/√(2πσ_W^2)) e^{−w^2/(2σ_W^2)}   (7)

Quiz 6.5

(1) From Table 6.1, each X_i has MGF φ_X(s) and random variable N has MGF φ_N(s) where

φ_X(s) = 1/(1 − s),   φ_N(s) = (1/5)e^s/(1 − (4/5)e^s).   (1)

From Theorem 6.12, R has MGF

φ_R(s) = φ_N(ln φ_X(s)) = (1/5)φ_X(s)/(1 − (4/5)φ_X(s))   (2)

Substituting the expression for φ_X(s) yields

φ_R(s) = (1/5)/(1/5 − s).   (3)

(2) From Table 6.1, we see that R has the MGF of an exponential (1/5) random variable. The corresponding PDF is

f_R(r) = { (1/5)e^{−r/5}, r ≥ 0;  0, otherwise }   (4)

This quiz is an example of the general result that a geometric sum of exponential random variables is an exponential random variable.


Quiz 6.6

(1) The expected access time is

E[X] = ∫_{−∞}^{∞} x f_X(x) dx = ∫_0^{12} (x/12) dx = 6 msec   (1)

(2) The second moment of the access time is

E[X^2] = ∫_{−∞}^{∞} x^2 f_X(x) dx = ∫_0^{12} (x^2/12) dx = 48   (2)

The variance of the access time is Var[X] = E[X^2] − (E[X])^2 = 48 − 36 = 12.

(3) Using X_i to denote the access time of block i, we can write

A = X_1 + X_2 + · · · + X_{12}   (3)

Since the expectation of the sum equals the sum of the expectations,

E[A] = E[X_1] + · · · + E[X_{12}] = 12E[X] = 72 msec   (4)

(4) Since the X_i are independent,

Var[A] = Var[X_1] + · · · + Var[X_{12}] = 12 Var[X] = 144   (5)

Hence, the standard deviation of A is σ_A = 12.

(5) To use the central limit theorem, we write

P[A > 75] = 1 − P[A ≤ 75]   (6)
          = 1 − P[(A − E[A])/σ_A ≤ (75 − E[A])/σ_A]   (7)
          ≈ 1 − Φ((75 − 72)/12)   (8)
          = 1 − 0.5987 = 0.4013   (9)

Note that we used Table 3.1 to look up Φ(0.25).

(6) Once again, we use the central limit theorem and Table 3.1 to estimate

P[A < 48] = P[(A − E[A])/σ_A < (48 − E[A])/σ_A]   (10)
          ≈ Φ((48 − 72)/12)   (11)
          = 1 − Φ(2) = 1 − 0.9773 = 0.0227   (12)
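Parts (5) and (6) amount to two evaluations of the standard normal CDF; the sketch below reproduces them, with Φ defined via erf (phi.m in matcode.zip is equivalent):

Phi=@(z) 0.5*(1+erf(z/sqrt(2)));
EA=72; sigA=12;                  % E[A] and sigma_A from parts (3) and (4)
p5=1-Phi((75-EA)/sigA)           % P[A > 75], approximately 0.4013
p6=Phi((48-EA)/sigA)             % P[A < 48], approximately 0.0227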


Quiz 6.7

Random variable K_n has a binomial distribution for n trials and success probability P[V] = 3/4.

(1) The expected number of voice calls out of 48 calls is E[K_48] = 48P[V] = 36.

(2) The variance of K_48 is

Var[K_48] = 48P[V](1 − P[V]) = 48(3/4)(1/4) = 9   (1)

Thus K_48 has standard deviation σ_{K_48} = 3.

(3) Using the ordinary central limit theorem and Table 3.1 yields

P[30 ≤ K_48 ≤ 42] ≈ Φ((42 − 36)/3) − Φ((30 − 36)/3) = Φ(2) − Φ(−2)   (2)

Recalling that Φ(−x) = 1 − Φ(x), we have

P[30 ≤ K_48 ≤ 42] ≈ 2Φ(2) − 1 = 0.9545   (3)

(4) Since K_48 is a discrete random variable, we can use the De Moivre-Laplace approximation to estimate

P[30 ≤ K_48 ≤ 42] ≈ Φ((42 + 0.5 − 36)/3) − Φ((30 − 0.5 − 36)/3)   (4)
                  = 2Φ(2.16666) − 1 = 0.9687   (5)
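It is instructive to compare both approximations with the exact binomial probability. The sketch below uses binomialpmf from matcode.zip (as in Quiz 6.9) for the exact sum; the exact value should lie close to the De Moivre-Laplace estimate:

Phi=@(z) 0.5*(1+erf(z/sqrt(2)));
pk=binomialpmf(48,0.75,30:42);   % exact binomial PMF values on 30,...,42
pexact=sum(pk)                   % exact P[30 <= K48 <= 42]
pclt=2*Phi(2)-1                  % ordinary CLT estimate: 0.9545
pdml=Phi(6.5/3)-Phi(-6.5/3)      % De Moivre-Laplace estimate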

Quiz 6.8

The train interarrival times X_1, X_2, X_3 are iid exponential (λ) random variables. The arrival time of the third train is

W = X_1 + X_2 + X_3.   (1)

In Theorem 6.11, we found that the sum of three iid exponential (λ) random variables is an Erlang (n = 3, λ) random variable. From Appendix A, we find that W has expected value and variance

E[W] = 3/λ = 6,   Var[W] = 3/λ^2 = 12   (2)

(1) By the Central Limit Theorem,

P[W > 20] = P[(W − 6)/√12 > (20 − 6)/√12] ≈ Q(7/√3) = 2.66 × 10^{−5}   (3)


(2) To use the Chernoff bound, we note that the MGF of W is

φ_W(s) = (λ/(λ − s))^3 = 1/(1 − 2s)^3   (4)

The Chernoff bound states that

P[W > 20] ≤ min_{s≥0} e^{−20s} φ_W(s) = min_{s≥0} e^{−20s}/(1 − 2s)^3   (5)

To minimize h(s) = e^{−20s}/(1 − 2s)^3, we set the derivative of h(s) to zero:

dh(s)/ds = [−20(1 − 2s)^3 e^{−20s} + 6e^{−20s}(1 − 2s)^2]/(1 − 2s)^6 = 0   (6)

This implies 20(1 − 2s) = 6 or s = 7/20. Applying s = 7/20 into the Chernoff bound yields

P[W > 20] ≤ e^{−20s}/(1 − 2s)^3 |_{s=7/20} = (10/3)^3 e^{−7} = 0.0338   (7)

(3) Theorem 3.11 says that for any w > 0, the CDF of the Erlang (λ, 3) random variable W satisfies

F_W(w) = 1 − Σ_{k=0}^{2} (λw)^k e^{−λw}/k!   (8)

Equivalently, for λ = 1/2 and w = 20,

P[W > 20] = 1 − F_W(20)   (9)
          = e^{−10}(1 + 10/1! + 10^2/2!) = 61e^{−10} = 0.0028   (10)

Although the Chernoff bound is relatively weak in that it overestimates the probability by roughly a factor of 12, it is a valid bound. By contrast, the Central Limit Theorem approximation grossly underestimates the true probability.
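The three answers are easy to tabulate side by side; the sketch below (illustrative only) evaluates the CLT estimate, the optimized Chernoff bound, and the exact Erlang probability:

Phi=@(z) 0.5*(1+erf(z/sqrt(2)));
pclt=1-Phi((20-6)/sqrt(12))      % CLT estimate: 2.66e-5
s=7/20;                          % optimizing s from the Chernoff bound
pcher=exp(-20*s)/(1-2*s)^3       % Chernoff bound: 0.0338
lw=10;                           % lambda*w with lambda = 1/2, w = 20
pexact=exp(-lw)*(1+lw+lw^2/2)    % exact: 61*exp(-10) = 0.0028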

Quiz 6.9

One solution to this problem is to follow the approach of Example 6.19:

%unifbinom100.m
sx=0:100;sy=0:100;
px=binomialpmf(100,0.5,sx); py=duniformpmf(0,100,sy);
[SX,SY]=ndgrid(sx,sy); [PX,PY]=ndgrid(px,py);
SW=SX+SY; PW=PX.*PY;
sw=unique(SW); pw=finitepmf(SW,PW,sw);
pmfplot(sw,pw,'\itw','\itP_W(w)');

A graph of the PMF P_W(w) appears in Figure 2. With some thought, it should be apparent that the finitepmf function is implementing the convolution of the two PMFs.


Figure 2: From Quiz 6.9, the PMF P_W(w) of the independent sum of a binomial (100, 0.5) random variable and a discrete uniform (0, 100) random variable.


Quiz Solutions – Chapter 7

Quiz 7.1

An exponential random variable with expected value 1 also has variance 1. By Theorem 7.1, M_n(X) has variance Var[M_n(X)] = 1/n. Hence, we need n = 100 samples.

Quiz 7.2

The arrival time of the third elevator is W = X_1 + X_2 + X_3. Since each X_i is uniform (0, 30),

E[X_i] = 15,   Var[X_i] = (30 − 0)^2/12 = 75.   (1)

Thus E[W] = 3E[X_i] = 45, and Var[W] = 3 Var[X_i] = 225.

(1) By the Markov inequality,

P[W > 75] ≤ E[W]/75 = 45/75 = 3/5   (2)

(2) By the Chebyshev inequality,

P[W > 75] = P[W − E[W] > 30]   (3)
          ≤ P[|W − E[W]| > 30] ≤ Var[W]/30^2 = 225/900 = 1/4   (4)
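Both bounds are loose here, as a quick simulation confirms. This sketch (illustrative, base MATLAB only) estimates P[W > 75] empirically:

m=100000;
W=sum(30*rand(3,m));    % m samples of W = X1+X2+X3, each Xi uniform (0,30)
psim=sum(W>75)/m        % empirical P[W > 75], roughly 0.02
markov=45/75            % Markov bound: 3/5
chebyshev=225/900       % Chebyshev bound: 1/4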

Quiz 7.3

Define the random variable W = (X − µ_X)^2. Observe that V_100(X) = M_100(W). By Theorem 7.6, the mean square error is

E[(M_100(W) − µ_W)^2] = Var[W]/100   (1)

Observe that µ_X = 0 so that W = X^2. Thus,

µ_W = E[X^2] = ∫_{−1}^{1} x^2 f_X(x) dx = 1/3   (2)
E[W^2] = E[X^4] = ∫_{−1}^{1} x^4 f_X(x) dx = 1/5   (3)

Therefore Var[W] = E[W^2] − µ_W^2 = 1/5 − (1/3)^2 = 4/45 and the mean square error is 4/4500 = 0.000889.


Quiz 7.4

Assuming the number n of samples is large, we can use a Gaussian approximation for M_n(X). Since E[X] = p and Var[X] = p(1 − p), we apply Theorem 7.13 which says that the interval estimate

M_n(X) − c ≤ p ≤ M_n(X) + c   (1)

has confidence coefficient 1 − α where

α = 2 − 2Φ(c√n/(p(1 − p))).   (2)

We must ensure for every value of p that 1 − α ≥ 0.9 or α ≤ 0.1. Equivalently, we must have

Φ(c√n/(p(1 − p))) ≥ 0.95   (3)

for every value of p. Since Φ(x) is an increasing function of x, we must satisfy c√n ≥ 1.65p(1 − p). Since p(1 − p) ≤ 1/4 for all p, we require that

c ≥ 1.65/(4√n) = 0.41/√n.   (4)

The 0.9 confidence interval estimate of p is

M_n(X) − 0.41/√n ≤ p ≤ M_n(X) + 0.41/√n.   (5)

For the 0.99 confidence interval, we have α ≤ 0.01, implying Φ(c√n/(p(1 − p))) ≥ 0.995. This implies c√n ≥ 2.58p(1 − p). Since p(1 − p) ≤ 1/4 for all p, we require that c ≥ (0.25)(2.58)/√n. In this case, the 0.99 confidence interval estimate is

M_n(X) − 0.645/√n ≤ p ≤ M_n(X) + 0.645/√n.   (6)

Note that if M_100(X) = 0.4, then the 0.99 confidence interval estimate is

0.3355 ≤ p ≤ 0.4645.   (7)

The interval is wide because the 0.99 confidence is high.
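In practice, the interval is one line of arithmetic once M_n(X) is observed. In this sketch, n and Mn are the hypothetical values of the n = 100, M_100(X) = 0.4 example above:

n=100; Mn=0.4;                            % hypothetical sample size and sample mean
ci90=[Mn-0.41/sqrt(n) Mn+0.41/sqrt(n)]    % 0.9 confidence interval
ci99=[Mn-0.645/sqrt(n) Mn+0.645/sqrt(n)]  % 0.99 confidence interval: [0.3355, 0.4645]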

Quiz 7.5

Following the approach of bernoullitraces.m, we generate m = 1000 sample paths, each sample path having n = 100 Bernoulli traces. At time k, OK(k) counts the fraction of sample paths that have sample mean within one standard error of p. The program bernoullisample.m generates and graphs the number of traces within one standard error as a function of the time, i.e., the number of trials in each trace.


function OK=bernoullisample(n,m,p);
x=reshape(bernoullirv(p,m*n),n,m);   % m sample paths of n Bernoulli trials
nn=(1:n)'*ones(1,m);
MN=cumsum(x)./nn;                    % running sample means of each path
stderr=sqrt(p*(1-p))./sqrt((1:n)');
stderrmat=stderr*ones(1,m);
OK=sum(abs(MN-p)<stderrmat,2)/m;     % fraction of paths within one std error
plot(1:n,OK,'-s');

The following graph was generated by bernoullisample(100,5000,0.5):

[Graph: fraction of sample paths within one standard error versus the number of trials]

As we would expect, as m gets large, the fraction of traces within one standard error approaches 2Φ(1) − 1 ≈ 0.68. The unusual sawtooth pattern, though perhaps unexpected, is examined in Problem 7.5.2.


Quiz Solutions – Chapter 8

Quiz 8.1

From the problem statement, each X_i has PDF and CDF

f_{X_i}(x) = { e^{−x}, x ≥ 0;  0, otherwise }   F_{X_i}(x) = { 0, x < 0;  1 − e^{−x}, x ≥ 0 }   (1)

Hence, the CDF of the maximum of X_1, . . . , X_15 obeys

F_X(x) = P[X ≤ x] = P[X_1 ≤ x, X_2 ≤ x, · · · , X_15 ≤ x] = [P[X_i ≤ x]]^{15}.   (2)

This implies that for x ≥ 0,

F_X(x) = [F_{X_i}(x)]^{15} = (1 − e^{−x})^{15}   (3)

To design a significance test, we must choose a rejection region for X. A reasonable choice is to reject the hypothesis if X is too small. That is, let R = {X ≤ r}. For a significance level of α = 0.01, we obtain

α = P[X ≤ r] = (1 − e^{−r})^{15} = 0.01   (4)

It is straightforward to show that

r = −ln(1 − (0.01)^{1/15}) = 1.33   (5)

Hence, if we observe X < 1.33, then we reject the hypothesis.
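The threshold in equation (5) is a one-line computation; the sketch below (illustrative only) also confirms that the significance level is met:

alpha=0.01;
r=-log(1-alpha^(1/15))   % rejection threshold: 1.33
check=(1-exp(-r))^15     % recovers alpha = 0.01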

Quiz 8.2

From the problem statement, the conditional PMFs of K are

P_{K|H_0}(k) = { 10^{4k} e^{−10^4}/k!, k = 0, 1, . . .;  0, otherwise }   (1)
P_{K|H_1}(k) = { 10^{6k} e^{−10^6}/k!, k = 0, 1, . . .;  0, otherwise }   (2)

Since the two hypotheses are equally likely, the MAP and ML tests are the same. From Theorem 8.6, the ML hypothesis rule is

k ∈ A_0 if P_{K|H_0}(k) ≥ P_{K|H_1}(k);   k ∈ A_1 otherwise.   (3)

This rule simplifies to

k ∈ A_0 if k ≤ k* = (10^6 − 10^4)/ln 100 = 214,975.7;   k ∈ A_1 otherwise.   (4)

Thus if we observe at least 214,976 photons, then we accept hypothesis H_1.
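The threshold k* in equation (4) is a one-line check (illustrative snippet only):

kstar=(1e6-1e4)/log(100)   % decision threshold, about 214975.7
% accept H1 whenever the observed photon count k exceeds kstar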


Quiz 8.3

For the QPSK system, a symbol error occurs when s_i is transmitted but (X_1, X_2) ∈ A_j for some j ≠ i. For a QPSK system, it is easier to calculate the probability of a correct decision. Given H_0, the conditional probability of a correct decision is

P[C|H_0] = P[X_1 > 0, X_2 > 0|H_0] = P[√(E/2) + N_1 > 0, √(E/2) + N_2 > 0]   (1)

Because of the symmetry of the signals, P[C|H_0] = P[C|H_i] for all i. This implies the probability of a correct decision is P[C] = P[C|H_0]. Since N_1 and N_2 are iid Gaussian (0, σ) random variables, we have

P[C] = P[C|H_0] = P[√(E/2) + N_1 > 0] P[√(E/2) + N_2 > 0]   (2)
     = [P[N_1 > −√(E/2)]]^2   (3)
     = [1 − Φ(−√(E/2)/σ)]^2   (4)

Since Φ(−x) = 1 − Φ(x), we have P[C] = Φ^2(√(E/(2σ^2))). Equivalently, the probability of error is

P_ERR = 1 − P[C] = 1 − Φ^2(√(E/(2σ^2)))   (5)

Quiz 8.4

To generate the ROC, the existing program sqdistor already calculates this miss probability P_MISS = P_01 and the false alarm probability P_FA = P_10. The modified program, sqdistroc.m, is essentially the same as sqdistor except the output is a matrix FM whose columns are the false alarm and miss probabilities. Next, the program sqdistrocplot.m calls sqdistroc three times to generate a plot that compares the receiver performance for the three requested values of d. Here is the modified code:

function FM=sqdistroc(v,d,m,T)
%square law distortion recvr
%P(error) for m bits tested
%transmit v volts or -v volts,
%add N volts, N is Gauss(0,1)
%add d(v+N)^2 distortion
%receive 1 if x>T, otherwise 0
%FM = [P(FA) P(MISS)]
x=(v+randn(m,1));
[XX,TT]=ndgrid(x,T(:));
P01=sum((XX+d*(XX.^2)< TT),1)/m;
x= -v+randn(m,1);
[XX,TT]=ndgrid(x,T(:));
P10=sum((XX+d*(XX.^2)>TT),1)/m;
FM=[P10(:) P01(:)];

function FM=sqdistrocplot(v,m,T);
FM1=sqdistroc(v,0.1,m,T);
FM2=sqdistroc(v,0.2,m,T);
FM5=sqdistroc(v,0.3,m,T);
FM=[FM1 FM2 FM5];
loglog(FM1(:,1),FM1(:,2),'-k', ...
FM2(:,1),FM2(:,2),'--k', ...
FM5(:,1),FM5(:,2),':k');
legend('\it d=0.1','\it d=0.2',...
'\it d=0.3',3)
ylabel('P_{MISS}');
xlabel('P_{FA}');


To see the effect of d, the commands

T=-3:0.1:3; sqdistrocplot(3,100000,T);

generated the plot shown in Figure 3.

Figure 3: The receiver operating curve for the communications system of Quiz 8.4 with squared distortion.


Quiz Solutions – Chapter 9

Quiz 9.1

(1) First, we calculate the marginal PDF for 0 ≤ y ≤ 1:

f_Y(y) = ∫_0^y 2(y + x) dx = [2xy + x^2]_{x=0}^{x=y} = 3y^2   (1)

This implies the conditional PDF of X given Y is

f_{X|Y}(x|y) = f_{X,Y}(x, y)/f_Y(y) = { 2/(3y) + 2x/(3y^2), 0 ≤ x ≤ y;  0, otherwise }   (2)

(2) The minimum mean square error estimate of X given Y = y is

x̂_M(y) = E[X|Y = y] = ∫_0^y (2x/(3y) + 2x^2/(3y^2)) dx = 5y/9   (3)

Thus the MMSE estimator of X given Y is X̂_M(Y) = 5Y/9.

(3) To obtain the conditional PDF f_{Y|X}(y|x), we need the marginal PDF f_X(x). For 0 ≤ x ≤ 1,

f_X(x) = ∫_x^1 2(y + x) dy = [y^2 + 2xy]_{y=x}^{y=1} = 1 + 2x − 3x^2   (4)

For 0 ≤ x ≤ 1, the conditional PDF of Y given X is

f_{Y|X}(y|x) = { 2(y + x)/(1 + 2x − 3x^2), x ≤ y ≤ 1;  0, otherwise }   (5)

(4) The MMSE estimate of Y given X = x is

ŷ_M(x) = E[Y|X = x] = ∫_x^1 (2y^2 + 2xy)/(1 + 2x − 3x^2) dy   (6)
       = [(2y^3/3 + xy^2)/(1 + 2x − 3x^2)]_{y=x}^{y=1}   (7)
       = (2 + 3x − 5x^3)/(3 + 6x − 9x^2)   (8)


Quiz 9.2

(1) Since the expectation of the sum equals the sum of the expectations,

E[R] = E[T] + E[X] = 0   (1)

(2) Since T and X are independent, the variance of the sum R = T + X is

Var[R] = Var[T] + Var[X] = 9 + 3 = 12   (2)

(3) Since T and R have expected values E[R] = E[T] = 0,

Cov[T, R] = E[T R] = E[T(T + X)] = E[T^2] + E[T X]   (3)

Since T and X are independent and have zero expected value, E[T X] = E[T]E[X] = 0 and E[T^2] = Var[T]. Thus Cov[T, R] = Var[T] = 9.

(4) From Definition 4.8, the correlation coefficient of T and R is

ρ_{T,R} = Cov[T, R]/√(Var[R] Var[T]) = σ_T/σ_R = √3/2   (4)

(5) From Theorem 9.4, the optimum linear estimate of T given R is

T̂_L(R) = ρ_{T,R}(σ_T/σ_R)(R − E[R]) + E[T]   (5)

Since E[R] = E[T] = 0 and ρ_{T,R} = σ_T/σ_R,

T̂_L(R) = (σ_T^2/σ_R^2)R = (σ_T^2/(σ_T^2 + σ_X^2))R = (3/4)R   (6)

Hence a* = 3/4 and b* = 0.

(6) By Theorem 9.4, the mean square error of the linear estimate is

e*_L = Var[T](1 − ρ_{T,R}^2) = 9(1 − 3/4) = 9/4   (7)

Quiz 9.3

When R = r, the conditional PDF of X = Y − 40 − 40 log_10 r is Gaussian with expected value −40 − 40 log_10 r and variance 64. The conditional PDF of X given R is

f_{X|R}(x|r) = (1/√(128π)) e^{−(x + 40 + 40 log_10 r)^2/128}   (1)


From the conditional PDF f

X|R

(x|r), we can use Deﬁnition 9.2 to write the ML estimate

of R given X = x as

ˆ r

ML

(x) = arg max

r≥0

f

X|R

(x|r) (2)

We observe that f

X|R

(x|r) is maximized when the exponent (x + 40 + 40 log

10

r)

2

is

minimized. This minimum occurs when the exponent is zero, yielding

log

10

r = −1 − x/40 (3)

or

ˆ r

ML

(x) = (0.1)10

−x/40

m (4)

If the result doesn’t look correct, note that a typical ﬁgure for the signal strength might be

x = −120 dB. This corresponds to a distance estimate of ˆ r

ML

(−120) = 100 m.

For the MAP estimate, we observe that the joint PDF of X and R is

f

X,R

(x, r) = f

X|R

(x|r) f

R

(r) =

1

10

6

√

32π

re

−(x+40+40 log

10

r)

2

/128

(5)

From Theorem 9.6, the MAP estimate of R given X = x is the value of r that maximizes

f

X,R

(x, r). That is,

ˆ r

MAP

(x) = arg max

0≤r≤1000

f

X,R

(x, r) (6)

Note that we have included the constraint r ≤ 1000 in the maximization to highlight the

fact that under our probability model, R ≤ 1000 m. Setting the derivative of f

X,R

(x, r)

with respect to r to zero yields

e

−(x+40+40 log

10

r)

2

/128

_

1 −

80 log

10

e

128

(x +40 +40 log

10

r)

_

= 0 (7)

Solving for r yields

r = 10

_

1

25 log

10

e

−1

_

10

−x/40

= (0.1236)10

−x/40

(8)

This is the MAP estimate of R given X = x as long as r ≤ 1000 m. When x ≤ −156.3 dB,

the above estimate will exceed 1000 m, which is not possible in our probability model.

Hence, the complete description of the MAP estimate is

ˆ r

MAP

(x) =

_

1000 x < −156.3

(0.1236)10

−x/40

x ≥ −156.3

(9)

For example, if x = −120dB, then ˆ r

MAP

(−120) = 123.6 m. When the measured signal

strength is not too low, the MAP estimate is 23.6% larger than the ML estimate. This re-

ﬂects the fact that large values of R are a priori more probable than small values. However,

for very low signal strengths, the MAP estimate takes into account that the distance can

never exceed 1000 m.
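A hypothetical illustration (not from the text) comparing the two estimators at a sample signal strength:

x=-120;                       % measured signal strength in dB
r_ml=0.1*10^(-x/40)           % ML estimate: 100 m
r_map=min(0.1236*10^(-x/40),1000)  % MAP estimate, capped at 1000 m: 123.6 m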


Quiz 9.4

(1) From Theorem 9.4, the LMSE estimate of X_2 given Y_2 is X̂_2(Y_2) = a* Y_2 + b* where

a* = Cov[X_2,Y_2]/Var[Y_2],   b* = µ_{X_2} - a* µ_{Y_2}.    (1)

Because E[X] = E[Y] = 0,

Cov[X_2,Y_2] = E[X_2 Y_2] = E[X_2(X_2 + W_2)] = E[X_2^2] = 1    (2)
Var[Y_2] = Var[X_2] + Var[W_2] = E[X_2^2] + E[W_2^2] = 1.1    (3)

It follows that a* = 1/1.1. Because µ_{X_2} = µ_{Y_2} = 0, it follows that b* = 0. Finally, to compute the expected square error, we calculate the correlation coefficient

ρ_{X_2,Y_2} = Cov[X_2,Y_2]/(σ_{X_2} σ_{Y_2}) = 1/√1.1    (4)

The expected square error is

e*_L = Var[X_2](1 - ρ_{X_2,Y_2}^2) = 1 - 1/1.1 = 1/11 = 0.0909    (5)

(2) Since Y = X + W and E[X] = E[W] = 0, it follows that E[Y] = 0. Thus we can apply Theorem 9.7. Note that X and W have correlation matrices

R_X = [1 -0.9; -0.9 1],   R_W = [0.1 0; 0 0.1].    (6)

In terms of Theorem 9.7, n = 2 and we wish to estimate X_2 given the observation vector Y = [Y_1 Y_2]'. To apply Theorem 9.7, we need to find R_Y and R_{YX_2}.

R_Y = E[YY'] = E[(X + W)(X' + W')]    (7)
    = E[XX' + XW' + WX' + WW'].    (8)

Because X and W are independent, E[XW'] = E[X]E[W'] = 0. Similarly, E[WX'] = 0. This implies

R_Y = E[XX'] + E[WW'] = R_X + R_W = [1.1 -0.9; -0.9 1.1].    (9)

In addition, we need to find

R_{YX_2} = E[YX_2] = [E[Y_1 X_2]; E[Y_2 X_2]] = [E[(X_1 + W_1)X_2]; E[(X_2 + W_2)X_2]].    (10)

Since X and W are independent vectors, E[W_1 X_2] = E[W_1]E[X_2] = 0 and E[W_2 X_2] = 0. Thus

R_{YX_2} = [E[X_1 X_2]; E[X_2^2]] = [-0.9; 1].    (11)

By Theorem 9.7,

â = R_Y^{-1} R_{YX_2} = [-0.225; 0.725]    (12)

Therefore, the optimum linear estimator of X_2 given Y_1 and Y_2 is

X̂_L = â'Y = -0.225 Y_1 + 0.725 Y_2.    (13)

The mean square error is

Var[X_2] - â' R_{YX_2} = Var[X] - a_1 r_{Y_1,X_2} - a_2 r_{Y_2,X_2} = 0.0725.    (14)
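The matrix arithmetic of part (2) is easy to verify in MATLAB; this check is a sketch using the matrices from the solution:

RX=[1 -0.9; -0.9 1]; RW=0.1*eye(2);
RY=RX+RW;             % correlation matrix of Y
RYX2=[-0.9; 1];       % cross-correlation E[Y X_2]
ahat=RY\RYX2          % optimal coefficients [-0.225; 0.725]
mse=1-ahat'*RYX2      % mean square error 0.0725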

Quiz 9.5

Since X and W have zero expected value, Y also has zero expected value. Thus, by Theorem 9.7, X̂_L(Y) = â'Y where â = R_Y^{-1} R_{YX}. Since X and W are independent, E[WX] = 0 and E[XW'] = 0'. This implies

R_{YX} = E[YX] = E[(1X + W)X] = 1 E[X^2] = 1.    (1)

By the same reasoning, the correlation matrix of Y is

R_Y = E[YY'] = E[(1X + W)(1'X + W')]    (2)
    = 11' E[X^2] + 1 E[XW'] + E[WX] 1' + E[WW']    (3)
    = 11' + R_W    (4)

Note that 11' is a 20 x 20 matrix with every entry equal to 1. Thus,

â = R_Y^{-1} R_{YX} = (11' + R_W)^{-1} 1    (5)

and the optimal linear estimator is

X̂_L(Y) = 1'(11' + R_W)^{-1} Y    (6)

The mean square error is

e*_L = Var[X] - â' R_{YX} = 1 - 1'(11' + R_W)^{-1} 1    (7)

Now we note that R_W has i, j-th entry R_W(i,j) = c^{|i-j|-1}. The question we must address is what value c minimizes e*_L. This problem is atypical in that one does not usually get to choose the correlation structure of the noise. However, we will see that the answer is somewhat instructive.

We note that the answer is not obviously apparent from Equation (7). In particular, we observe that Var[W_i] = R_W(i,i) = 1/c. Thus, when c is small, the noises W_i have high variance and we would expect our estimator to be poor. On the other hand, if c is large, W_i and W_j are highly correlated and the separate measurements of X are very dependent. This would suggest that large values of c will also result in poor MSE. If this argument is not clear, consider the extreme case in which every W_i and W_j have correlation coefficient ρ_ij = 1. In this case, our 20 measurements will be all the same and one measurement is as good as 20 measurements.

To find the optimal value of c, we write a MATLAB function mquiz9(c) to calculate the MSE for a given c and a second function that finds and plots the MSE for a range of values of c.

function [mse,af]=mquiz9(c);
v1=ones(20,1);
RW=toeplitz(c.^((0:19)-1));
RY=(v1*(v1')) +RW;
af=(inv(RY))*v1;
mse=1-((v1')*af);

function cmin=mquiz9minc(c);
msec=zeros(size(c));
for k=1:length(c),
   [msec(k),af]=mquiz9(c(k));
end
plot(c,msec);
xlabel('c');ylabel('e_L^*');
[msemin,optk]=min(msec);
cmin=c(optk);

Note in mquiz9 that v1 corresponds to the vector 1 of all ones. The following commands find the minimum c and also produce the following graph:

>> c=0.01:0.01:0.99;
>> mquiz9minc(c)
ans =
    0.4500

[Plot: e_L^* versus c for 0 < c < 1, minimized near c = 0.45.]

As we see in the graph, both small values and large values of c result in large MSE.


Quiz Solutions – Chapter 10

Quiz 10.1

There are many correct answers to this question. A correct answer specifies enough random variables to specify the sample path exactly. One choice for an alternate set of random variables that would specify m(t,s) is:

• m(0,s), the number of ongoing calls at the start of the experiment
• N, the number of new calls that arrive during the experiment
• X_1, ..., X_N, the interarrival times of the N new arrivals
• H, the number of calls that hang up during the experiment
• D_1, ..., D_H, the call completion times of the H calls that hang up

Quiz 10.2

(1) We obtain a continuous time, continuous valued process when we record the temperature as a continuous waveform over time.

(2) If at every moment in time, we round the temperature to the nearest degree, then we obtain a continuous time, discrete valued process.

(3) If we sample the process in part (a) every T seconds, then we obtain a discrete time, continuous valued process.

(4) Rounding the samples in part (c) to the nearest integer degree yields a discrete time, discrete valued process.

Quiz 10.3

(1) Each resistor has resistance R in ohms with uniform PDF

f_R(r) = { 0.01, 950 <= r <= 1050; 0, otherwise }    (1)

The probability that a test produces a 1% resistor is

p = P[990 <= R <= 1010] = ∫_{990}^{1010} 0.01 dr = 0.2    (2)

(2) In t seconds, exactly t resistors are tested. Each resistor is a 1% resistor with probability p, independent of any other resistor. Consequently, the number of 1% resistors found has the binomial PMF

P_{N(t)}(n) = { (t choose n) p^n (1-p)^{t-n}, n = 0, 1, ..., t; 0, otherwise }    (3)

(3) First we will find the PMF of T_1. This problem is easy if we view each resistor test as an independent trial. A success occurs on a trial with probability p if we find a 1% resistor. The first 1% resistor is found at time T_1 = t if we observe failures on trials 1, ..., t-1 followed by a success on trial t. Hence, just as in Example 2.11, T_1 has the geometric PMF

P_{T_1}(t) = { (1-p)^{t-1} p, t = 1, 2, ...; 0, otherwise }    (4)

Since p = 0.2, the probability the first 1% resistor is found in exactly five seconds is P_{T_1}(5) = (0.8)^4 (0.2) = 0.08192.

(4) From Theorem 2.5, a geometric random variable with success probability p has expected value 1/p. In this problem, E[T_1] = 1/p = 5.

(5) Note that once we find the first 1% resistor, the number of additional trials needed to find the second 1% resistor once again has a geometric PMF with expected value 1/p since each independent trial is a success with probability p. That is, T_2 = T_1 + T' where T' is independent and identically distributed to T_1. Thus

E[T_2 | T_1 = 10] = E[T_1 | T_1 = 10] + E[T' | T_1 = 10]    (5)
                  = 10 + E[T'] = 10 + 5 = 15    (6)
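A hypothetical numerical check of parts (3) and (4); the inverse-CDF sampling line is an assumption of this sketch, not code from the matcode archive:

p=0.2;
pt5=(1-p)^4*p                             % P[T_1=5] = 0.08192
t1=ceil(log(1-rand(100000,1))/log(1-p));  % geometric (p) samples by inverse CDF
mean(t1)                                  % close to E[T_1] = 1/p = 5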

Quiz 10.4

Since each X_i is a N(0,1) random variable, each X_i has PDF

f_{X_i}(x) = (1/√(2π)) e^{-x^2/2}    (1)

By Theorem 10.1, the joint PDF of X = [X_1 ··· X_n]' is

f_X(x) = f_{X_1,...,X_n}(x_1, ..., x_n) = Π_{i=1}^{n} f_X(x_i) = (1/(2π)^{n/2}) e^{-(x_1^2 + ··· + x_n^2)/2}    (2)


Quiz 10.5

The first and second hours are nonoverlapping intervals. Since one hour equals 3600 sec and the Poisson process has a rate of 10 packets/sec, the expected number of packets in each hour is E[M_i] = α = 36,000. This implies M_1 and M_2 are independent Poisson random variables each with PMF

P_{M_i}(m) = { α^m e^{-α}/m!, m = 0, 1, 2, ...; 0, otherwise }    (1)

Since M_1 and M_2 are independent, the joint PMF of M_1 and M_2 is

P_{M_1,M_2}(m_1, m_2) = P_{M_1}(m_1) P_{M_2}(m_2) = { α^{m_1+m_2} e^{-2α}/(m_1! m_2!), m_1 = 0, 1, ... and m_2 = 0, 1, ...; 0, otherwise }    (2)
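Assuming the matcode archive (which provides poissonpmf) is on the MATLAB path, the joint PMF can be evaluated by multiplying the marginals; the sample values below are hypothetical:

alpha=36000;
m1=35900; m2=36100;                          % example sample values
p=poissonpmf(alpha,m1)*poissonpmf(alpha,m2)  % joint PMF by independence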

Quiz 10.6

To answer whether N'(t) is a Poisson process, we look at the interarrival times. Let X_1, X_2, ... denote the interarrival times of the N(t) process. Since we count only even-numbered arrivals for N'(t), the time until the first arrival of N'(t) is Y_1 = X_1 + X_2. Since X_1 and X_2 are independent exponential (λ) random variables, Y_1 is an Erlang (n = 2, λ) random variable; see Theorem 6.11. Since Y_i, the i-th interarrival time of the N'(t) process, has the same PDF as Y_1, we can conclude that the interarrival times of N'(t) are not exponential random variables. Thus N'(t) is not a Poisson process.

Quiz 10.7

First, we note that for t > s,

X(t) - X(s) = (W(t) - W(s))/√α    (1)

Since W(t) - W(s) is a Gaussian random variable, Theorem 3.13 states that X(t) - X(s) is Gaussian with expected value

E[X(t) - X(s)] = E[W(t) - W(s)]/√α = 0    (2)

and variance

E[(X(t) - X(s))^2] = E[(W(t) - W(s))^2]/α = α(t-s)/α = t - s    (3)

Consider s' <= s < t. Since s >= s', W(t) - W(s) is independent of W(s'). This implies [W(t) - W(s)]/√α is independent of W(s')/√α for all s >= s'. That is, X(t) - X(s) is independent of X(s') for all s >= s'. Thus X(t) is a Brownian motion process with variance Var[X(t)] = t.


Quiz 10.8

First we find the expected value

µ_Y(t) = µ_X(t) + µ_N(t) = µ_X(t).    (1)

To find the autocorrelation, we observe that since X(t) and N(t) are independent and since N(t) has zero expected value, E[X(t)N(t')] = E[X(t)]E[N(t')] = 0. Since R_Y(t,τ) = E[Y(t)Y(t+τ)], we have

R_Y(t,τ) = E[(X(t) + N(t))(X(t+τ) + N(t+τ))]    (2)
         = E[X(t)X(t+τ)] + E[X(t)N(t+τ)] + E[X(t+τ)N(t)] + E[N(t)N(t+τ)]    (3)
         = R_X(t,τ) + R_N(t,τ).    (4)

Quiz 10.9

From Definition 10.14, X_1, X_2, ... is a stationary random sequence if for all sets of time instants n_1, ..., n_m and time offset k,

f_{X_{n_1},...,X_{n_m}}(x_1, ..., x_m) = f_{X_{n_1+k},...,X_{n_m+k}}(x_1, ..., x_m)    (1)

Since the random sequence is iid,

f_{X_{n_1},...,X_{n_m}}(x_1, ..., x_m) = f_X(x_1) f_X(x_2) ··· f_X(x_m)    (2)

Similarly, for time instants n_1 + k, ..., n_m + k,

f_{X_{n_1+k},...,X_{n_m+k}}(x_1, ..., x_m) = f_X(x_1) f_X(x_2) ··· f_X(x_m)    (3)

We can conclude that the iid random sequence is stationary.

Quiz 10.10

We must check whether each function R(τ) meets the conditions of Theorem 10.12:

R(0) >= 0,   R(τ) = R(-τ),   |R(τ)| <= R(0)    (1)

(1) R_1(τ) = e^{-|τ|} meets all three conditions and thus is valid.

(2) R_2(τ) = e^{-τ^2} also is valid.

(3) R_3(τ) = e^{-τ} cos τ is not valid because

R_3(-2π) = e^{2π} cos 2π = e^{2π} > 1 = R_3(0)    (2)

(4) R_4(τ) = e^{-τ^2} sin τ also cannot be an autocorrelation function because

R_4(π/2) = e^{-π/2} sin(π/2) = e^{-π/2} > 0 = R_4(0)    (3)
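As a quick sanity check, the failed condition for R_3(τ) can be evaluated directly in MATLAB; this fragment is a sketch, not part of the matcode archive:

tau=-2*pi;
R3tau=exp(-tau)*cos(tau);   % R_3(-2*pi) = e^(2*pi)
R30=exp(0)*cos(0);          % R_3(0) = 1
[R3tau R30]                 % |R_3(-2*pi)| > R_3(0), so R_3 is not valid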


Quiz 10.11

(1) The autocorrelation of Y(t) is

R_Y(t,τ) = E[Y(t)Y(t+τ)]    (1)
         = E[X(-t)X(-t-τ)]    (2)
         = R_X(-t - (-t-τ)) = R_X(τ)    (3)

Since E[Y(t)] = E[X(-t)] = µ_X, we can conclude that Y(t) is a wide sense stationary process. In fact, we see that by viewing a process backwards in time, we see the same second order statistics.

(2) Since X(t) and Y(t) are both wide sense stationary processes, we can check whether they are jointly wide sense stationary by seeing if R_XY(t,τ) is just a function of τ. In this case,

R_XY(t,τ) = E[X(t)Y(t+τ)]    (4)
          = E[X(t)X(-t-τ)]    (5)
          = R_X(t - (-t-τ)) = R_X(2t+τ)    (6)

Since R_XY(t,τ) depends on both t and τ, we conclude that X(t) and Y(t) are not jointly wide sense stationary. To see why this is, suppose R_X(τ) = e^{-|τ|} so that samples of X(t) far apart in time have almost no correlation. In this case, as t gets larger, Y(t) = X(-t) and X(t) become less and less correlated.

Quiz 10.12

From the problem statement,

E[X(t)] = E[X(t+1)] = 0    (1)
E[X(t)X(t+1)] = 1/2    (2)
Var[X(t)] = Var[X(t+1)] = 1    (3)

The Gaussian random vector X = [X(t) X(t+1)]' has covariance matrix and corresponding inverse

C_X = [1 1/2; 1/2 1],   C_X^{-1} = (4/3)[1 -1/2; -1/2 1]    (4)

Since

x' C_X^{-1} x = [x_0 x_1] (4/3) [1 -1/2; -1/2 1] [x_0; x_1] = (4/3)(x_0^2 - x_0 x_1 + x_1^2)    (5)

the joint PDF of X(t) and X(t+1) is the Gaussian vector PDF

f_{X(t),X(t+1)}(x_0, x_1) = (1/((2π)^{n/2} [det(C_X)]^{1/2})) exp(-(1/2) x' C_X^{-1} x)    (6)
                          = (1/√(3π^2)) e^{-(2/3)(x_0^2 - x_0 x_1 + x_1^2)}    (7)
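The closed form in Equation (7) can be checked against the general Gaussian vector PDF of Equation (6); this is an illustrative sketch with an arbitrarily chosen sample point:

CX=[1 0.5; 0.5 1];
x=[0.2; -0.4];                                     % arbitrary test point (x_0,x_1)
f1=exp(-0.5*x'*(CX\x))/(2*pi*sqrt(det(CX)))        % Equation (6) with n=2
f2=(1/sqrt(3*pi^2))*exp(-(2/3)*(x(1)^2-x(1)*x(2)+x(2)^2))  % Equation (7)
% f1 and f2 should agree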



Figure 4: Sample path of 100 minutes of the blocking switch of Quiz 10.13.

Quiz 10.13

The simple structure of the switch simulation of Example 10.28 admits a deceptively simple solution in terms of the vector of arrivals A and the vector of departures D. With the introduction of call blocking, we cannot generate these vectors all at once. In particular, when an arrival occurs at time t, we need to know that M(t), the number of ongoing calls, satisfies M(t) < c = 120. Otherwise, when M(t) = c, we must block the call. Call blocking can be implemented by setting the service time of the call to zero so that the call departs as soon as it arrives.

The blocking switch is an example of a discrete event system. The system evolves via a sequence of discrete events, namely arrivals and departures, at discrete time instances. A simulation of the system moves from one time instant to the next by maintaining a chronological schedule of future events (arrivals and departures) to be executed. The program simply executes the event at the head of the schedule. The logic of such a simulation is:

1. Start at time t = 0 with an empty system. Schedule the first arrival to occur at S_1, an exponential (λ) random variable.

2. Examine the head-of-schedule event.

• When the head-of-schedule event is the k-th arrival, at time t, check the state M(t).
  – If M(t) < c, admit the arrival, increase the system state n by 1, and schedule a departure to occur at time t + S_n, where S_n is an exponential (µ) random variable. (The departure rate µ matches the call exponentialrv(mu,1) in the program of Figure 5.)
  – If M(t) = c, block the arrival and do not schedule a departure event.

• If the head-of-schedule event is a departure, reduce the system state n by 1.

3. Delete the head-of-schedule event and go to step 2.

After the head-of-schedule event is completed and any new events (departures in this system) are scheduled, we know the system state cannot change until the next scheduled event.


Thus we know that M(t) will stay the same until then. In our simulation, we use the vector t as the set of time instances at which we inspect the system state. Thus for all times t(i) between the current head-of-schedule event and the next, we set m(i) to the current switch state.

The complete program is shown in Figure 5. In most programming languages, it is common to implement the event schedule as a linked list where each item in the list has a data structure indicating an event timestamp and the type of the event. In MATLAB, a simple (but not elegant) way to do this is to maintain two vectors: time is a list of timestamps of scheduled events and event is the list of event types. In this case, event(i)=1 if the i-th scheduled event is an arrival, or event(i)=-1 if the i-th scheduled event is a departure.

When the program is passed a vector t, the output [m a b] is such that m(i) is the number of ongoing calls at time t(i) while a and b are the number of admits and blocks. The following instructions

t=0:0.1:5000;
[m,a,b]=simblockswitch(10,0.1,120,t);
plot(t,m);

generated a simulation lasting 5,000 minutes. A sample path of the first 100 minutes of that simulation is shown in Figure 4. The 5,000 minute full simulation produced a=49658 admitted calls and b=239 blocked calls. We can estimate the probability a call is blocked as

P̂_b = b/(a+b) = 0.0048.    (1)

In Chapter 12, we will learn that the exact blocking probability is given by Equation (12.93), a result known as the "Erlang-B formula." From the Erlang-B formula, we can calculate that the exact blocking probability is P_b = 0.0057. One reason our simulation underestimates the blocking probability is that in a 5,000 minute simulation, roughly the first 100 minutes are needed to load up the switch since the switch is idle when the simulation starts at time t = 0. However, this says that roughly the first two percent of the simulation time was unusual. Thus this would account for only part of the disparity. The rest of the gap between 0.0048 and 0.0057 is that a simulation that includes only 239 blocks is not all that likely to give a very accurate result for the blocking probability.

Note that in Chapter 12, we will learn that the blocking switch is an example of an M/M/c/c queue, a kind of Markov chain. Chapter 12 develops techniques for analyzing and simulating systems described by Markov chains that are much simpler than the discrete event simulation technique shown here. Nevertheless, for very complicated systems, discrete event simulation is a widely used and often very efficient simulation method.
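For comparison, the exact blocking probability quoted above can be computed with the erlangb function from the matcode archive (listed in the function reference later in this manual), assuming matcode is on the path; the offered load is ρ = λ/µ = 10/0.1 = 100 Erlangs with c = 120 servers:

pb=erlangb(100,120)     % approximately 0.0057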


function [M,admits,blocks]=simblockswitch(lam,mu,c,t);
blocks=0;  %total # blocks
admits=0;  %total # admits
M=zeros(size(t));
n=0;       % # in system
time=[ exponentialrv(lam,1) ];
event=[ 1 ];  %first event is an arrival
timenow=0;
tmax=max(t);
while (timenow<tmax)
  M((timenow<=t)&(t<time(1)))=n;
  timenow=time(1);
  eventnow=event(1);
  event(1)=[ ]; time(1)=[ ];  % clear current event
  if (eventnow==1)  % arrival
    arrival=timenow+exponentialrv(lam,1);  % next arrival
    b4arrival=time<arrival;
    event=[event(b4arrival) 1 event(~b4arrival)];
    time=[time(b4arrival) arrival time(~b4arrival)];
    if n<c  %call admitted
      admits=admits+1;
      n=n+1;
      depart=timenow+exponentialrv(mu,1);
      b4depart=time<depart;
      event=[event(b4depart) -1 event(~b4depart)];
      time=[time(b4depart) depart time(~b4depart)];
    else
      blocks=blocks+1;  %one more block, immed departure
      disp(sprintf('Time %10.3d Admits %10d Blocks %10d',...
        timenow,admits,blocks));
    end
  elseif (eventnow==-1)  %departure
    n=n-1;
  end
end

Figure 5: Discrete event simulation of the blocking switch of Quiz 10.13.


Quiz Solutions – Chapter 11

Quiz 11.1

By Theorem 11.2,

µ_Y = µ_X ∫_{-∞}^{∞} h(t) dt = 2 ∫_0^∞ e^{-t} dt = 2    (1)

Since R_X(τ) = δ(τ), the autocorrelation function of the output is

R_Y(τ) = ∫_{-∞}^{∞} h(u) ∫_{-∞}^{∞} h(v) δ(τ + u - v) dv du = ∫_{-∞}^{∞} h(u) h(τ + u) du    (2)

For τ > 0, we have

R_Y(τ) = ∫_0^∞ e^{-u} e^{-τ-u} du = e^{-τ} ∫_0^∞ e^{-2u} du = (1/2) e^{-τ}    (3)

For τ < 0, we can deduce that R_Y(τ) = (1/2) e^{-|τ|} by symmetry. Just to be safe though, we can double check. For τ < 0,

R_Y(τ) = ∫_{-τ}^{∞} h(u) h(τ + u) du = ∫_{-τ}^{∞} e^{-u} e^{-τ-u} du = (1/2) e^{τ}    (4)

Hence,

R_Y(τ) = (1/2) e^{-|τ|}    (5)

Quiz 11.2

The expected value of the output is

µ_Y = µ_X Σ_{n=-∞}^{∞} h_n = 0.5(1 + (-1)) = 0    (1)

The autocorrelation of the output is

R_Y[n] = Σ_{i=0}^{1} Σ_{j=0}^{1} h_i h_j R_X[n + i - j]    (2)
       = 2R_X[n] - R_X[n-1] - R_X[n+1] = { 1, n = 0; 0, otherwise }    (3)

Since µ_Y = 0, the variance of Y_n is Var[Y_n] = E[Y_n^2] = R_Y[0] = 1.


Figure 6: The autocorrelation R_X(τ) and power spectral density S_X(f) for the process X(t) in Quiz 11.5: (a) W = 10, (b) W = 1000.

Quiz 11.3

By Theorem 11.8, Y = [Y_33 Y_34 Y_35]' is a Gaussian random vector since X_n is a Gaussian random process. Moreover, by Theorem 11.5, each Y_n has expected value E[Y_n] = µ_X Σ_{n=-∞}^{∞} h_n = 0. Thus E[Y] = 0. To find the PDF of the Gaussian vector Y, we need to find the covariance matrix C_Y, which equals the correlation matrix R_Y since Y has zero expected value. One way to find R_Y is to observe that R_Y has the Toeplitz structure of Theorem 11.6 and to use Theorem 11.5 to find the autocorrelation function

R_Y[n] = Σ_{i=-∞}^{∞} Σ_{j=-∞}^{∞} h_i h_j R_X[n + i - j].    (1)

Despite the fact that R_X[k] is an impulse, using Equation (1) is surprisingly tedious because we still need to sum over all i and j such that n + i - j = 0.

In this problem, it is simpler to observe that Y = HX where

X = [X_30 X_31 X_32 X_33 X_34 X_35]'    (2)

and

H = (1/4) [1 1 1 1 0 0; 0 1 1 1 1 0; 0 0 1 1 1 1].    (3)

In this case, following Theorem 11.7, or by directly applying Theorem 5.13 with µ_X = 0 and A = H, we obtain R_Y = H R_X H'. Since R_X[n] = δ_n, R_X = I, the identity matrix. Thus

C_Y = R_Y = HH' = (1/16) [4 3 2; 3 4 3; 2 3 4].    (4)

It follows (very quickly if you use MATLAB for 3 x 3 matrix inversion) that

C_Y^{-1} = 16 [7/12 -1/2 1/12; -1/2 1 -1/2; 1/12 -1/2 7/12].    (5)

Thus, the PDF of Y is

f_Y(y) = (1/((2π)^{3/2} [det(C_Y)]^{1/2})) exp(-(1/2) y' C_Y^{-1} y).    (6)

A disagreeable amount of algebra will show det(C_Y) = 3/1024 and that the PDF can be "simplified" to

f_Y(y) = (16/√(6π^3)) exp(-8[(7/12)y_33^2 + y_34^2 + (7/12)y_35^2 - y_33 y_34 + (1/6)y_33 y_35 - y_34 y_35]).    (7)

Equation (7) shows that one of the nicest features of the multivariate Gaussian distribution is that y' C_Y^{-1} y is a very concise representation of the cross-terms in the exponent of f_Y(y).
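The matrix computations in Equations (4) and (5) take only a few MATLAB commands; this check is a sketch, not part of the matcode archive:

H=(1/4)*[1 1 1 1 0 0; 0 1 1 1 1 0; 0 0 1 1 1 1];
CY=H*H'      % (1/16)*[4 3 2; 3 4 3; 2 3 4]
inv(CY)      % 16*[7/12 -1/2 1/12; -1/2 1 -1/2; 1/12 -1/2 7/12]
det(CY)      % 3/1024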

Quiz 11.4

This quiz is solved using Theorem 11.9 for the case of k = 1 and M = 2. In this case, X_n = [X_{n-1} X_n]' and

R_{X_n} = [R_X[0] R_X[1]; R_X[1] R_X[0]] = [1.1 0.9; 0.9 1.1]    (1)

and

R_{X_n X_{n+1}} = E[[X_{n-1}; X_n] X_{n+1}] = [R_X[2]; R_X[1]] = [0.81; 0.9].    (2)

The MMSE linear first order filter for predicting X_{n+1} at time n is the filter h such that (here ←h denotes the time-reversed version of h)

←h = R_{X_n}^{-1} R_{X_n X_{n+1}} = [1.1 0.9; 0.9 1.1]^{-1} [0.81; 0.9] = (1/400)[81; 261].    (3)

It follows that the filter is h = [261/400 81/400]' and the MMSE linear predictor is

X̂_{n+1} = (81/400) X_{n-1} + (261/400) X_n.    (4)

To find the mean square error, one approach is to follow the method of Example 11.13 and to directly calculate

e*_L = E[(X_{n+1} - X̂_{n+1})^2].    (5)

This method is workable for this simple problem but becomes increasingly tedious for higher order filters. Instead, we can derive the mean square error for an arbitrary prediction filter h. Since X̂_{n+1} = ←h' X_n,

e*_L = E[(X_{n+1} - ←h' X_n)^2]    (6)
     = E[(X_{n+1} - ←h' X_n)(X_{n+1} - X_n' ←h)]    (7)

After a bit of algebra, we obtain

e*_L = R_X[0] - 2 ←h' R_{X_n X_{n+1}} + ←h' R_{X_n} ←h    (8)

With the substitution ←h = R_{X_n}^{-1} R_{X_n X_{n+1}}, we obtain

e*_L = R_X[0] - R_{X_n X_{n+1}}' R_{X_n}^{-1} R_{X_n X_{n+1}}    (9)
     = R_X[0] - ←h' R_{X_n X_{n+1}}    (10)

Note that this is essentially the same result as Theorem 9.7 with Y = X_n, X = X_{n+1} and â = ←h. It is noteworthy that the result is derived in a much simpler way in the proof of Theorem 9.7 by using the orthogonality property of the LMSE estimator.

In any case, the mean square error is

e*_L = R_X[0] - ←h' R_{X_n X_{n+1}} = 1.1 - (1/400)[81 261][0.81; 0.9] = 506/1451 = 0.3487.    (11)

Recalling that the blind estimate would yield a mean square error of Var[X] = 1.1, we see that observing X_{n-1} and X_n improves the accuracy of our prediction of X_{n+1}.
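The filter coefficients and the mean square error can be confirmed numerically; this check is a sketch using the matrices from the solution:

RXn=[1.1 0.9; 0.9 1.1];
RXnX=[0.81; 0.9];
h=RXn\RXnX          % [81/400; 261/400] = [0.2025; 0.6525]
mse=1.1-h'*RXnX     % 0.3487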

Quiz 11.5

(1) By Theorem 11.13(b), the average power of X(t) is

E[X^2(t)] = ∫_{-∞}^{∞} S_X(f) df = ∫_{-W}^{W} (5/W) df = 10 Watts    (1)

(2) The autocorrelation function is the inverse Fourier transform of S_X(f). Consulting Table 11.1, we note that

S_X(f) = 10 (1/(2W)) rect(f/(2W))    (2)

It follows that the inverse transform of S_X(f) is

R_X(τ) = 10 sinc(2Wτ) = 10 sin(2πWτ)/(2πWτ)    (3)

(3) For W = 10 Hz and W = 1 kHz, graphs of S_X(f) and R_X(τ) appear in Figure 6.


Quiz 11.6

In a sampled system, the discrete time impulse δ[n] has a flat discrete Fourier transform. That is, if R_X[n] = 10δ[n], then

S_X(φ) = Σ_{n=-∞}^{∞} 10δ[n] e^{-j2πφn} = 10    (1)

Thus, R_X[n] = 10δ[n]. (This quiz is really lame!)

Quiz 11.7

Since Y(t) = X(t - t_0),

R_XY(t,τ) = E[X(t)Y(t+τ)] = E[X(t)X(t + τ - t_0)] = R_X(τ - t_0)    (1)

We see that R_XY(t,τ) = R_XY(τ) = R_X(τ - t_0). From Table 11.1, we recall the property that g(τ - τ_0) has Fourier transform G(f) e^{-j2πf τ_0}. Thus the Fourier transform of R_XY(τ) = R_X(τ - t_0) = g(τ - t_0) is

S_XY(f) = S_X(f) e^{-j2πf t_0}.    (2)

Quiz 11.8

We solve this quiz using Theorem 11.17. First we need some preliminary facts. Let a_0 = 5,000 so that

R_X(τ) = (1/a_0) a_0 e^{-a_0 |τ|}.    (1)

Consulting with the Fourier transforms in Table 11.1, we see that

S_X(f) = (1/a_0) 2a_0^2/(a_0^2 + (2πf)^2) = 2a_0/(a_0^2 + (2πf)^2)    (2)

The RC filter has impulse response h(t) = a_1 e^{-a_1 t} u(t), where u(t) is the unit step function and a_1 = 1/RC where RC = 10^{-4} is the filter time constant. From Table 11.1,

H(f) = a_1/(a_1 + j2πf)    (3)

(1) By Theorem 11.17,

S_XY(f) = H(f) S_X(f) = 2a_0 a_1/([a_1 + j2πf][a_0^2 + (2πf)^2]).    (4)

(2) Again by Theorem 11.17,

S_Y(f) = H*(f) S_XY(f) = |H(f)|^2 S_X(f).    (5)

Note that

|H(f)|^2 = H(f)H*(f) = [a_1/(a_1 + j2πf)][a_1/(a_1 - j2πf)] = a_1^2/(a_1^2 + (2πf)^2)    (6)

Thus,

S_Y(f) = |H(f)|^2 S_X(f) = 2a_0 a_1^2/([a_1^2 + (2πf)^2][a_0^2 + (2πf)^2])    (7)

(3) To find the average power at the filter output, we can either use basic calculus and calculate ∫_{-∞}^{∞} S_Y(f) df directly or we can find R_Y(τ) as an inverse transform of S_Y(f). Using partial fractions and the Fourier transform table, the latter method is actually less algebra. In particular, some algebra will show that

S_Y(f) = K_0/(a_0^2 + (2πf)^2) + K_1/(a_1^2 + (2πf)^2)    (8)

where

K_0 = 2a_0 a_1^2/(a_1^2 - a_0^2),   K_1 = -2a_0 a_1^2/(a_1^2 - a_0^2).    (9)

Thus,

S_Y(f) = (K_0/(2a_0^2)) 2a_0^2/(a_0^2 + (2πf)^2) + (K_1/(2a_1^2)) 2a_1^2/(a_1^2 + (2πf)^2).    (10)

Consulting with Table 11.1, we see that

R_Y(τ) = (K_0/(2a_0^2)) a_0 e^{-a_0 |τ|} + (K_1/(2a_1^2)) a_1 e^{-a_1 |τ|}    (11)

Substituting the values of K_0 and K_1, we obtain

R_Y(τ) = (a_1^2 e^{-a_0 |τ|} - a_0 a_1 e^{-a_1 |τ|})/(a_1^2 - a_0^2).    (12)

The average power of the Y(t) process is

R_Y(0) = a_1/(a_1 + a_0) = 2/3.    (13)

Note that the input signal has average power R_X(0) = 1. Since the RC filter has a 3dB bandwidth of 10,000 rad/sec and the signal X(t) has most of its signal energy below 5,000 rad/sec, the output signal has almost as much power as the input.
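Equation (13) can also be checked by numerically integrating S_Y(f); this fragment is a sketch, not part of the matcode archive:

a0=5000; a1=1e4;
f=-1e6:100:1e6;
SY=2*a0*a1^2./((a1^2+(2*pi*f).^2).*(a0^2+(2*pi*f).^2));
trapz(f,SY)     % close to a1/(a1+a0) = 2/3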


Quiz 11.9

This quiz implements an example of Equations (11.146) and (11.147) for a system in which we filter Y(t) = X(t) + N(t) to produce an optimal linear estimate of X(t). The solution to this quiz is just to find the filter Ĥ(f) using Equation (11.146) and to calculate the mean square error e*_L using Equation (11.147).

Comment: Since the text omitted the derivations of Equations (11.146) and (11.147), we note that Example 10.24 showed that

R_Y(τ) = R_X(τ) + R_N(τ),   R_YX(τ) = R_X(τ).    (1)

Taking Fourier transforms, it follows that

S_Y(f) = S_X(f) + S_N(f),   S_YX(f) = S_X(f).    (2)

Now we can go on to the quiz, at peace with the derivations.

(1) Since µ_N = 0, R_N(0) = Var[N] = 1. This implies

R_N(0) = ∫_{-∞}^{∞} S_N(f) df = ∫_{-B}^{B} N_0 df = 2N_0 B    (3)

Thus N_0 = 1/(2B). Because the noise process N(t) has constant power R_N(0) = 1, decreasing the single-sided bandwidth B increases the power spectral density of the noise over frequencies |f| < B.

(2) Since R_X(τ) = sinc(2Wτ), where W = 5,000 Hz, we see from Table 11.1 that

S_X(f) = (1/10^4) rect(f/10^4).    (4)

The noise power spectral density can be written as

S_N(f) = N_0 rect(f/(2B)) = (1/(2B)) rect(f/(2B)).    (5)

From Equation (11.146), the optimal filter is

Ĥ(f) = S_X(f)/(S_X(f) + S_N(f)) = [(1/10^4) rect(f/10^4)] / [(1/10^4) rect(f/10^4) + (1/(2B)) rect(f/(2B))].    (6)

(3) We produce the output X̂(t) by passing the noisy signal Y(t) through the filter Ĥ(f). From Equation (11.147), the mean square error of the estimate is

e*_L = ∫_{-∞}^{∞} S_X(f) S_N(f)/(S_X(f) + S_N(f)) df    (7)
     = ∫_{-∞}^{∞} [(1/10^4) rect(f/10^4) (1/(2B)) rect(f/(2B))] / [(1/10^4) rect(f/10^4) + (1/(2B)) rect(f/(2B))] df.    (8)

To evaluate the MSE e*_L, we need to know whether B <= W. Since the problem asks us to find the largest possible B, let's suppose B <= W. We can go back and consider the case B > W later. When B <= W, the MSE is

e*_L = ∫_{-B}^{B} [(1/10^4)(1/(2B))] / [(1/10^4) + (1/(2B))] df = (1/10^4)/((1/10^4) + (1/(2B))) = 1/(1 + 5,000/B)    (9)

To obtain MSE e*_L <= 0.05 requires B <= 5,000/19 = 263.16 Hz.

Although this completes the solution to the quiz, what is happening may not be obvious. The noise power is always Var[N] = 1 Watt, for all values of B. As B is decreased, the PSD S_N(f) becomes increasingly tall, but only over a bandwidth B that is decreasing. Thus as B decreases, the filter Ĥ(f) makes an increasingly deep and narrow notch at frequencies |f| <= B. Two examples of the filter Ĥ(f) are shown in Figure 7. As B shrinks, the filter suppresses less of the signal of X(t). The result is that the MSE goes down.

Finally, we note that we can choose B very large and also achieve MSE e*_L = 0.05. In particular, when B > W = 5000, S_N(f) = 1/(2B) over frequencies |f| < W. In this case, the Wiener filter Ĥ(f) is an ideal (flat) lowpass filter

Ĥ(f) = { (1/10^4)/((1/10^4) + 1/(2B)), |f| < 5,000; 0, otherwise }    (10)

Thus increasing B spreads the constant 1 watt of power of N(t) over more bandwidth. The Wiener filter removes the noise that is outside the band of the desired signal. The mean square error is

e*_L = ∫_{-5000}^{5000} [(1/10^4)(1/(2B))] / [(1/10^4) + (1/(2B))] df = (1/(2B))/((1/10^4) + (1/(2B))) = 1/(B/5000 + 1)    (11)

In this case, B >= 9.5 x 10^4 guarantees e*_L <= 0.05.
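A minimal sketch (not from the text) that traces Equation (9) numerically for the B <= W case:

B=50:1:500;
mse=1./(1+5000./B);
plot(B,mse); xlabel('B (Hz)'); ylabel('e_L^*');
max(B(mse<=0.05))     % largest such B, about 263 Hz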

Quiz 11.10

It is fairly straightforward to find S_X(φ) and S_Y(φ). The only thing to keep in mind is to use fftc to transform the autocorrelation R_X[n] into the power spectral density S_X(φ). The following MATLAB program generates and plots the functions shown in Figure 8.


Figure 7: The Wiener filter Ĥ(f) for Quiz 11.9, shown for B = 500 and B = 2500.

%mquiz11.m
N=32;
rx=[2 4 2]; SX=fftc(rx,N);  %autocorrelation and PSD
stem(0:N-1,abs(SX));
xlabel('n');ylabel('S_X(n/N)');
h2=0.5*[1 1]; H2=fft(h2,N);  %impulse/filter response: M=2
SY2=SX.*((abs(H2)).^2);
figure; stem(0:N-1,abs(SY2));  %PSD of Y for M=2
xlabel('n');ylabel('S_{Y_2}(n/N)');
h10=0.1*ones(1,10); H10=fft(h10,N);  %impulse/filter response: M=10
SY10=SX.*((abs(H10)).^2);
figure; stem(0:N-1,abs(SY10));
xlabel('n');ylabel('S_{Y_{10}}(n/N)');

Relative to M = 2, when M = 10, the filter H(φ) filters out almost all of the high frequency components of X(t). In the context of Example 11.26, the low pass moving average filter for M = 10 removes the high frequency components and results in a filter output that varies very slowly.

As an aside, note that the vectors SX, SY2 and SY10 in mquiz11 should all be real-valued vectors. However, the finite numerical precision of MATLAB results in tiny imaginary parts. Although these imaginary parts have no computational significance, they tend to confuse the stem function. Hence, we generate stem plots of the magnitude of each power spectral density.


Figure 8: For Quiz 11.10, graphs of S_X(n/N), S_{Y_2}(n/N) for M = 2, and S_{Y_10}(n/N) for M = 10, using an N = 32 point DFT.


Quiz Solutions – Chapter 12

Quiz 12.1

The system has two states depending on whether the previous packet was received in error. From the problem statement, we are given the conditional probabilities

P[X_{n+1} = 0 | X_n = 0] = 0.99,   P[X_{n+1} = 1 | X_n = 1] = 0.9    (1)

Since each X_n must be either 0 or 1, we can conclude that

P[X_{n+1} = 1 | X_n = 0] = 0.01,   P[X_{n+1} = 0 | X_n = 1] = 0.1    (2)

These conditional probabilities correspond to the following Markov chain and transition matrix:

[Chain diagram: states 0 and 1 with self-loop probabilities 0.99 and 0.9 and cross transitions 0.01 and 0.1.]

P = [0.99 0.01; 0.10 0.90]    (3)

Quiz 12.2

From the problem statement, the Markov chain and the transition matrix are

[Chain diagram: states 0, 1, 2 with self-loops 0.4, 0.6, 0.4; transitions 0.6 from 0 to 1 and from 2 to 1; transitions 0.2 from 1 to 0 and from 1 to 2.]

P = [0.4 0.6 0; 0.2 0.6 0.2; 0 0.6 0.4]    (1)

The eigenvalues of P are

λ_1 = 0,   λ_2 = 0.4,   λ_3 = 1    (2)

We can diagonalize P into

P = S^{-1} D S = [-0.6 0.5 1; 0.4 0 1; -0.6 -0.5 1] [λ_1 0 0; 0 λ_2 0; 0 0 λ_3] [-0.5 1 -0.5; 1 0 -1; 0.2 0.6 0.2]    (3)

where s_i, the i-th row of S, is the left eigenvector of P satisfying s_i P = λ_i s_i. Algebra will verify that the n-step transition matrix is

P^n = S^{-1} D^n S = [0.2 0.6 0.2; 0.2 0.6 0.2; 0.2 0.6 0.2] + (0.4)^n [0.5 0 -0.5; 0 0 0; -0.5 0 0.5]    (4)
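A short numerical confirmation of Equation (4); this check is a sketch, not code from the text:

P=[0.4 0.6 0; 0.2 0.6 0.2; 0 0.6 0.4];
n=5;
Pclosed=repmat([0.2 0.6 0.2],3,1)+(0.4)^n*[0.5 0 -0.5; 0 0 0; -0.5 0 0.5];
max(max(abs(P^n-Pclosed)))     % essentially zero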

Quiz 12.3

The Markov chain describing the factory status and the corresponding state transition matrix are

[Chain diagram: state 0 has self-loop 0.9 and moves to state 1 with probability 0.1; state 1 moves to state 2 with probability 1; state 2 moves to state 0 with probability 1.]

P = [0.9 0.1 0; 0 0 1; 1 0 0]    (1)

With π = [π_0 π_1 π_2], the system of equations π = πP yields π_1 = 0.1 π_0 and π_2 = π_1. This implies

π_0 + π_1 + π_2 = π_0 (1 + 0.1 + 0.1) = 1    (2)

It follows that the limiting state probabilities are

π_0 = 5/6,   π_1 = 1/12,   π_2 = 1/12.    (3)
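The same probabilities can be found numerically by stacking the balance equations with the normalization constraint; this is a sketch, not code from the text:

P=[0.9 0.1 0; 0 0 1; 1 0 0];
A=[P'-eye(3); ones(1,3)];    % pi*P = pi together with sum(pi) = 1
pivec=A\[zeros(3,1); 1]      % [5/6; 1/12; 1/12]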

Quiz 12.4

The communicating classes are

C_1 = {0, 1},   C_2 = {2, 3},   C_3 = {4, 5, 6}    (1)

The states in C_1 and C_3 are aperiodic. The states in C_2 have period 2. Once the system enters a state in C_1, the class C_1 is never left. Thus the states in C_1 are recurrent. That is, C_1 is a recurrent class. Similarly, the states in C_3 are recurrent. On the other hand, the states in C_2 are transient. Once the system exits C_2, the states in C_2 are never reentered.

Quiz 12.5

At any time t, the state n can take on the values 0, 1, 2, .... The state transition probabilities are

P_{n-1,n} = P[K > n | K > n-1] = P[K > n]/P[K > n-1]    (1)
P_{n-1,0} = P[K = n | K > n-1] = P[K = n]/P[K > n-1]    (2)

[Chain diagram: states 0, 1, 2, 3, 4, ...; from each state n-1 the chain advances to state n or returns to state 0, with return probabilities proportional to P[K = 1], P[K = 2], P[K = 3], ....]

The stationary probabilities satisfy

π_0 = π_0 P[K = 1] + π_1,    (3)
π_1 = π_0 P[K = 2] + π_2,    (4)
...
π_{k-1} = π_0 P[K = k] + π_k,   k = 1, 2, ...    (5)

From Equation (3), we obtain

π_1 = π_0 (1 - P[K = 1]) = π_0 P[K > 1]    (6)

Similarly, Equation (4) implies

π_2 = π_1 - π_0 P[K = 2] = π_0 (P[K > 1] - P[K = 2]) = π_0 P[K > 2]    (7)

This suggests that π_k = π_0 P[K > k]. We verify this pattern by showing that π_k = π_0 P[K > k] satisfies Equation (5):

π_0 P[K > k-1] = π_0 P[K = k] + π_0 P[K > k].    (8)

When we apply Σ_{k=0}^{∞} π_k = 1, we obtain π_0 Σ_{k=0}^{∞} P[K > k] = 1. From Problem 2.5.11, we recall that Σ_{k=0}^{∞} P[K > k] = E[K]. This implies

π_n = P[K > n]/E[K]    (9)

This Markov chain models repeated random countdowns. The system state is the time until the counter expires. When the counter expires, the system is in state 0, and we randomly reset the counter to a new value K = k and then we count down k units of time. Since we spend one unit of time in each state, including state 0, we have k - 1 units of time left after the state 0 counter reset. If we have a random variable W such that the PMF of W satisfies P_W(n) = π_n, then W has a discrete PMF representing the remaining time of the counter at a time in the distant future.

Quiz 12.6

(1) By inspection, the number of transitions needed to return to state 0 is always a multiple of 2. Thus the period of state 0 is d = 2.

(2) To find the stationary probabilities, we solve the system of equations π = πP and Σ_{i=0}^{3} π_i = 1:

π_0 = (3/4)π_1 + (1/4)π_3    (1)
π_1 = (1/4)π_0 + (1/4)π_2    (2)
π_2 = (1/4)π_1 + (3/4)π_3    (3)
1 = π_0 + π_1 + π_2 + π_3    (4)

Solving the second and third equations for π_2 and π_3 yields

π_2 = 4π_1 - π_0,   π_3 = (4/3)π_2 - (1/3)π_1 = 5π_1 - (4/3)π_0    (5)

Substituting π_3 back into the first equation yields

π_0 = (3/4)π_1 + (1/4)π_3 = (3/4)π_1 + (5/4)π_1 - (1/3)π_0    (6)

This implies π_1 = (2/3)π_0. It follows from the first and second equations that π_2 = (5/3)π_0 and π_3 = 2π_0. Lastly, we choose π_0 so the state probabilities sum to 1:

1 = π_0 + π_1 + π_2 + π_3 = π_0 (1 + 2/3 + 5/3 + 2) = (16/3)π_0    (7)

It follows that the state probabilities are

π_0 = 3/16,   π_1 = 2/16,   π_2 = 5/16,   π_3 = 6/16    (8)

(3) Since the system starts in state 0 at time 0, we can use Theorem 12.14 to find the limiting probability that the system is in state 0 at time nd:

lim_{n→∞} P_00(nd) = d π_0 = 3/8    (9)

Quiz 12.7

The Markov chain has the same structure as that in Example 12.22. The only difference is the modified transition rates:

[Chain diagram: from state 0 the chain moves to state 1 with probability 1; from state n >= 1 it moves to state n+1 with probability (n/(n+1))^α and returns to state 0 with probability 1 - (n/(n+1))^α.]

The event T_00 > n occurs if the system reaches state n before returning to state 0, which occurs with probability

P[T_00 > n] = 1 x (1/2)^α x (2/3)^α x ··· x ((n-1)/n)^α = (1/n)^α.    (1)

Thus the CDF of T_00 satisfies F_{T_00}(n) = 1 - P[T_00 > n] = 1 - 1/n^α. To determine whether state 0 is recurrent, we observe that for all α > 0

P[V_00] = lim_{n→∞} F_{T_00}(n) = lim_{n→∞} 1 - 1/n^α = 1.    (2)

Thus state 0 is recurrent for all α > 0. Since the chain has only one communicating class, all states are recurrent. (We also note that if α = 0, then all states are transient.)

To determine whether the chain is null recurrent or positive recurrent, we need to calculate E[T_00]. In Example 12.24, we did this by deriving the PMF P_{T_00}(n). In this problem, it will be simpler to use the result of Problem 2.5.11 which says that Σ_{k=0}^{∞} P[K > k] = E[K] for any non-negative integer-valued random variable K. Applying this result, the expected time to return to state 0 is

E[T_00] = Σ_{n=0}^{∞} P[T_00 > n] = 1 + Σ_{n=1}^{∞} 1/n^α.    (3)

For 0 < α <= 1, 1/n^α >= 1/n and it follows that

E[T_00] >= 1 + Σ_{n=1}^{∞} 1/n = ∞.    (4)

We conclude that the Markov chain is null recurrent for 0 < α <= 1. On the other hand, for α > 1,

E[T_00] = 2 + Σ_{n=2}^{∞} 1/n^α.    (5)

Note that for all n >= 2

1/n^α <= ∫_{n-1}^{n} dx/x^α    (6)

This implies

E[T_00] <= 2 + Σ_{n=2}^{∞} ∫_{n-1}^{n} dx/x^α = 2 + ∫_1^∞ dx/x^α = 2 + [x^{-α+1}/(-α+1)]_1^∞ = 2 + 1/(α-1) < ∞    (7)

Thus for all α > 1, the Markov chain is positive recurrent.

Quiz 12.8

The number of customers in the "friendly" store is given by the Markov chain:

[Chain diagram: birth-death chain on states 0, 1, ..., i, i+1, ...; each up transition has probability p, each down transition has probability (1-p)q, and each state has a self-loop with the remaining probability.]

In the above chain, we note that (1-p)q is the probability that no new customer arrives, an existing customer gets one unit of service and then departs the store.

By applying Theorem 12.13 with state space partitioned between S = {0, 1, ..., i} and S' = {i+1, i+2, ...}, we see that for any state i >= 0,

π_i p = π_{i+1} (1-p)q.    (1)

This implies

π_{i+1} = [p/((1-p)q)] π_i.    (2)

Since Equation (2) holds for i = 0, 1, ..., we have that π_i = π_0 α^i where

α = p/((1-p)q).    (3)

Requiring the state probabilities to sum to 1, we have that for α < 1,

Σ_{i=0}^{∞} π_i = π_0 Σ_{i=0}^{∞} α^i = π_0/(1-α) = 1.    (4)

Thus for α < 1, the limiting state probabilities are

π_i = (1-α)α^i,   i = 0, 1, 2, ...    (5)

In addition, for α >= 1 or, equivalently, p >= q/(1+q), the limiting state probabilities do not exist.

Quiz 12.9

The continuous time Markov chain describing the processor is:

[Chain diagram: states 0 through 4; new tasks arrive at rate 2 per msec (n to n+1); tasks complete at rate 3 per msec (n to n-1); from states 2, 3, 4 the processor also reboots to state 0 at rate 0.1; the total rate from state 1 to state 0 is 3.1.]

Note that q_10 = 3.1 since the task completes at rate 3 per msec, the processor reboots at rate 0.1 per msec, and the rate to state 0 is the sum of those two rates. From the Markov chain, we obtain the following useful equations for the stationary distribution.

5.1 p_1 = 2p_0 + 3p_2
5.1 p_2 = 2p_1 + 3p_3
5.1 p_3 = 2p_2 + 3p_4
3.1 p_4 = 2p_3

We can solve these equations by working backward and solving for p_4 in terms of p_3, p_3 in terms of p_2 and so on, yielding

p_4 = (20/31) p_3,   p_3 = (620/981) p_2,   p_2 = (19620/31431) p_1,   p_1 = (628,620/1,014,381) p_0    (1)

Applying p_0 + p_1 + p_2 + p_3 + p_4 = 1 yields p_0 = 1,014,381/2,443,401 and the stationary probabilities are

p_0 = 0.4151,   p_1 = 0.2573,   p_2 = 0.1606,   p_3 = 0.1015,   p_4 = 0.0655    (2)
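A numerical cross-check via the generator matrix Q, whose rows hold the transition rates out of each state; the matrix below is assembled from the rates given above and this fragment is a sketch, not code from the text:

Q=[-2   2    0    0    0;
   3.1 -5.1  2    0    0;
   0.1  3   -5.1  2    0;
   0.1  0    3   -5.1  2;
   0.1  0    0    3   -3.1];
A=[Q'; ones(1,5)];           % p*Q = 0 together with sum(p) = 1
p=A\[zeros(5,1); 1]          % [0.4151; 0.2573; 0.1606; 0.1015; 0.0655]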

Quiz 12.10

The M/M/c/∞ queue has Markov chain:

[Chain diagram: states 0, 1, ..., c, c+1, ...; arrival rate λ in every state; departure rate nµ from state n for n <= c and cµ for n > c.]

From the Markov chain, the stationary probabilities must satisfy

p_n = { (ρ/n) p_{n-1}, n = 1, 2, ..., c; (ρ/c) p_{n-1}, n = c+1, c+2, ... }    (1)

It is straightforward to show that this implies

p_n = { p_0 ρ^n/n!, n = 1, 2, ..., c; p_0 (ρ/c)^{n-c} ρ^c/c!, n = c+1, c+2, ... }    (2)

The requirement that Σ_{n=0}^{∞} p_n = 1 yields

p_0 = [ Σ_{n=0}^{c} ρ^n/n! + (ρ^c/c!) (ρ/c)/(1 - ρ/c) ]^{-1}    (3)
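Equation (3) translates directly into MATLAB; the parameter values below are hypothetical, chosen only to satisfy ρ < c:

lambda=1; mu=0.25; c=5;
rho=lambda/mu;                 % offered load (must have rho < c)
fact=[1 cumprod(1:c)];         % factorials 0! through c!
s=sum(rho.^(0:c)./fact);
p0=1/(s+(rho^c/fact(c+1))*(rho/c)/(1-rho/c))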


Functions for Random Variables

bernoullipmf

y=bernoullipmf(p,x)
Input: p is the success probability of a Bernoulli random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = P_X(x(i)).

function pv=bernoullipmf(p,x)
%For Bernoulli (p) rv X
%input = vector x
%output = vector pv
%such that pv(i)=Prob(X=x(i))
pv=(1-p)*(x==0) + p*(x==1);
pv=pv(:);

bernoullicdf

y=bernoullicdf(p,x)
Input: p is the success probability of a Bernoulli random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).

function cdf=bernoullicdf(p,x)
%Usage: cdf=bernoullicdf(p,x)
%For Bernoulli (p) rv X and input
%vector x, output is vector cdf
%such that cdf(i)=Prob[X<=x(i)]
x=floor(x(:));
allx=0:1;
allcdf=cumsum(bernoullipmf(p,allx));
okx=(x>=0);           %x_i < 0 are zero-prob values
x=okx.*min(x,1);      %set bad x_i=0, clip x_i > 1 to 1
cdf=okx.*allcdf(x+1); %zeroes out zero-prob x_i

bernoullirv

x=bernoullirv(p,m)
Input: p is the success probability of a Bernoulli random variable X; m is a positive integer.
Output: x is a vector of m independent sample values of X.

function x=bernoullirv(p,m)
%return m samples of bernoulli (p) rv
r=rand(m,1);
x=(r>=(1-p));


bignomialpmf

y=bignomialpmf(n,p,x)
Input: n and p are the parameters of a binomial (n, p) random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = P_X(x(i)).
Comment: This function should always produce the same output as binomialpmf(n,p,x); however, the function calculates the logarithm of the probability and this may lead to small numerical inaccuracy.

function pmf=bignomialpmf(n,p,x)
%binomial(n,p) rv X,
%input = vector x
%output= vector pmf: pmf(i)=Prob[X=x(i)]
k=(0:n-1)';
a=log((p/(1-p))*((n-k)./(k+1)));
L0=n*log(1-p);
L=[L0; L0+cumsum(a)];
pb=exp(L); % pb=[P[X=0] ... P[X=n]]'
x=x(:);
okx=(x>=0).*(x<=n).*(x==floor(x));
x=okx.*x;
pmf=okx.*pb(x+1);

binomialcdf

y=binomialcdf(n,p,x)
Input: n and p are the parameters of a binomial (n, p) random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).

function cdf=binomialcdf(n,p,x)
%Usage: cdf=binomialcdf(n,p,x)
%For binomial(n,p) rv X,
%and input vector x, output is
%vector cdf: cdf(i)=P[X<=x(i)]
x=floor(x(:));  %for noninteger x(i)
allx=0:max(x);  %calculate cdf from 0 to max(x)
allcdf=cumsum(binomialpmf(n,p,allx));
okx=(x>=0);     %x(i) < 0 are zero-prob values
x=(okx.*x);     %set zero-prob x(i)=0
cdf=okx.*allcdf(x+1); %zero for zero-prob x(i)


cdf=binomialcdf(n.ˆ2) .y) %Usage: f=bivariategausspdf(muX.sigmaX.Name:joey iwatsuru Email:joeyiwat@yahoo.y) %Evaluate the bivariate Gaussian (muX.1). hot springs.com Phone:5017621195 binomialpmf y=binomialpmf(n.ˆ2) +(ny.r). P[X=n]]ˆt x=x(:). 4 Address:104 pine meadows loop.sigmaX.p.p.sigmaY.*ny))/(2*(1-rhoˆ2))). f=f/(2*pi*sigmax*sigmay*sqrt(1-rhoˆ2)). okx =(x>=0).p) rv X.sigmaY.x) %binomial(n.m) % m binomial(n. us (United States) Zip Code:71901 .*pb(x+1).x) Input: n and p are the parameters of a binomial (n. AR. Input: Scalar parameters muX.sigmaY. m is a positive integer Output: x is a vector of m independent samples of random variable X function x=binomialrv(n./(i+1))*(pp/(1-pp)). binomialrv x=binomialrv(n. end pb=pb(:).5 pp=p.muY. x=count(cdf. end i=0:n-1. ny=(y-muY)/sigmaY.(2*rho*nx. if pp < p pb=fliplr(pb).rho. function pmf=binomialpmf(n.p) samples r=rand(m. %input = vector x %output= vector pmf: pmf(i)=Prob[X=x(i)] if p<0.y.muY.*(x<=n).sigmaX.*(x==floor(x)).rho of the bivariate Gaussian PDF.p. scalars x and y.. % pb=[P[X=0] . x is a vector of possible sample values Output: y is a vector with y(i) = PX (x(i)).muY. else pp=1-p. x=okx.p.x.sigmaY.0:n).sigmaX.muY.rho) PDF nx=(x-muX)/sigmaX. bivariategausspdf function f=bivariategausspdf(muX. pmf=okx. f=exp(-((nx.m) Input: n and p are the parameters of a binomial random variable X .p.rho. ip= ((n-i).*x.x. Output: f the value of the bivariate Gaussian PDF at x. pb=((1-pp)ˆn)*cumprod([1 ip]).. p) random variable X .

%set zero prob x(i)=k x=((1-okx)*k)+(okx.x) % For discrete uniform (k. m is a positive integer Output: x is a vector of m independent samples of random variable X function x=duniformrv(k. %allcdf = cdf values from 0 to max(x) allcdf=cumsum(duniformpmf(k.l.l) rv X.*x). function pmf=duniformpmf(k.x) Input: k and l are the parameters of a discrete uniform (k.l.r). x=k+count(cdf.*(x==floor(x)). 5 Address:104 pine meadows loop. l) random variable X . x is a vector of possible sample values Output: y is a vector with y(i) = PX (x(i)).x) %Usage: cdf=duniformcdf(k. %x(i)=0 for zero prob x(i) cdf= okx. hot springs. l) random variable X . output is % vector cdf: cdf(i)=Prob[X<=x(i)] x=floor(x(:)).m) Input: k and l are the parameters of a discrete uniform (k.l. cdf=duniformcdf(k. pmf=pmf(:)/(l-k+1).l.l. %for noninteger x_i allx=k:max(x).allx)).com Phone:5017621195 duniformcdf y=duniformcdf(k.k:l). duniformrv x=duniformrv(k.1).l.Name:joey iwatsuru Email:joeyiwat@yahoo.x) Input: k and l are the parameters of a discrete uniform (k.l. x is a vector of possible sample values Output: y is a vector with y(i) = FX (x(i)).l. %x_i < k are zero prob values okx=(x>=k).l) random variable r=rand(m.l) rv X % and input vector x. us (United States) Zip Code:71901 .*(x<=l).l. %input = vector x %output= vector pmf: pmf(i)=Prob[X=x(i)] pmf= (x>=k). function cdf=duniformcdf(k.m) %returns m samples of a discrete %uniform (k.*allcdf(x-k+1). duniformpmf y=duniformpmf(k. l) random variable X . AR.x) %discrete uniform(k.

n-1). erlangrv x=erlangrv(n.*exp(-lambda*x).com Phone:5017621195 erlangb pb=erlangb(rho.c) %returns the Erlang-B blocking %probability for sn M/M/c/c %queue with load rho pn=exp(-rho)*poissonpmf(rho.lambda. function F=exponentialcdf(lambda.c). function f=erlangpdf(n. the blocking probability of the queue function pb=erlangb(rho. *(x.x) Input: n and lambda are the parameters of an Erlang random variable X . erlangcdf y=erlangcdf(n.x) Input: n and lambda are the parameters of an Erlang random variable X .x) Input: lambda is the parameter of an exponential random variable X . vector x Output: Vector y such that yi = FX (xi ) = 1 − e−λxi .lambda. erlangpdf y=erlangpdf(n.n).lambda. exponentialcdf y=exponentialcdf(lambda. Output: pb. hot springs. vector x Output: Vector y such that yi = FX (xi ). integer m Output: Length m vector x such that each xi is a sample of X function x=erlangrv(n. x=sum(reshape(y.x) f=((lambdaˆn)/factorial(n)).lambda.x) F=1. vector x Output: Vector y such that yi = f X (xi ) = λn xin−1 e−λxi /(n − 1)!. and the number of servers c of an M/M/c/c queue.c) Input: Offered load rho (ρ = λ/µ). %Usage: pb=erlangb(rho.2).lambda.0:c).m. AR. us (United States) Zip Code:71901 ..m*n).ˆ(n-1)).x) F=1.0-exp(-lambda*x).m) Input: n and lambda are the parameters of an Erlang random variable X . function F=erlangcdf(n.m) y=exponentialrv(lambda.lambda. pb=pn(c+1)/sum(pn). 6 Address:104 pine meadows loop.Name:joey iwatsuru Email:joeyiwat@yahoo.0-poissoncdf(lambda*x..

exponentialpdf  y=exponentialpdf(lambda,x)
Input: lambda is the parameter of an exponential random variable X; vector x.
Output: Vector y such that y_i = f_X(x_i) = λe^(−λx_i).

function f=exponentialpdf(lambda,x)
f=lambda*exp(-lambda*x);
f=f.*(x>=0);

exponentialrv  x=exponentialrv(lambda,m)
Input: lambda is the parameter of an exponential random variable X; integer m.
Output: Length m vector x such that each x_i is a sample of X.

function x=exponentialrv(lambda,m)
x=-(1/lambda)*log(1-rand(m,1));

finitecdf  y=finitecdf(sx,px,x)
Input: sx is the range of a finite random variable X, px is the corresponding probability assignment; x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).

function cdf=finitecdf(s,p,x)
% finite random variable X:
% vector sx of sample space
% elements {sx(1),sx(2),...}
% vector px of probabilities
% px(i)=P[X=sx(i)]
% Output is the vector
% cdf: cdf(i)=P[X<=x(i)]
cdf=[];
for i=1:length(x)
   pxi= sum(p(find(s<=x(i))));
   cdf=[cdf; pxi];
end

finitecoeff  rho=finitecoeff(SX,SY,PXY)
Input: Grids SX, SY and probability grid PXY describing the finite random variables X and Y.
Output: rho, the correlation coefficient of X and Y.

function rho=finitecoeff(SX,SY,PXY)
%Usage: rho=finitecoeff(SX,SY,PXY)
%Calculate the correlation coefficient rho of
%finite random variables X and Y
ex=finiteexp(SX,PXY); vx=finitevar(SX,PXY);
ey=finiteexp(SY,PXY); vy=finitevar(SY,PXY);
R=finiteexp(SX.*SY,PXY);
rho=(R-ex*ey)/sqrt(vx*vy);

finitecov  covxy=finitecov(SX,SY,PXY)
Input: Grids SX, SY and probability grid PXY describing the finite random variables X and Y.
Output: covxy, the covariance of X and Y.

function covxy=finitecov(SX,SY,PXY)
%Usage: cxy=finitecov(SX,SY,PXY)
%returns the covariance of
%finite random variables X and Y
%given by grids SX, SY, and PXY
ex=finiteexp(SX,PXY);
ey=finiteexp(SY,PXY);
R=finiteexp(SX.*SY,PXY);
covxy=R-ex*ey;

finiteexp  ex=finiteexp(sx,px)
Input: Probability vector px, vector of samples sx describing random variable X.
Output: ex, the expected value E[X].

function ex=finiteexp(sx,px)
%Usage: ex=finiteexp(sx,px)
%returns the expected value E[X]
%of finite random variable X described
%by samples sx and probabilities px
ex=sum((sx(:)).*(px(:)));

finitepmf  y=finitepmf(sx,px,x)
Input: sx is the range of a finite random variable X, px is the corresponding probability assignment; x is a vector of possible sample values.
Output: y is a vector with y(i) = P[X = x(i)].

function pmf=finitepmf(sx,px,x)
% finite random variable X:
% vector sx of sample space
% elements {sx(1),sx(2),...}
% vector px of probabilities
% px(i)=P[X=sx(i)]
% Output is the vector
% pmf: pmf(i)=P[X=x(i)]
pmf=zeros(size(x(:)));
for i=1:length(x)
   pmf(i)= sum(px(find(sx==x(i))));
end

finiterv  x=finiterv(sx,px,m)
Input: sx is the range of a finite random variable X, px is the corresponding probability assignment; m is a positive integer.
Output: x is a vector of m sample values of random variable X.

function x=finiterv(s,p,m)
% returns m samples
% of finite (s,p) rv
%s=s(:);p=p(:);
r=rand(m,1);
cdf=cumsum(p);
x=s(1+count(cdf,r));
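A minimal sketch of the finite-rv helpers (ours; the sample space and probabilities are illustrative):

% A finite rv with P[X=1]=0.5, P[X=2]=0.3, P[X=4]=0.2
sx=[1;2;4]; px=[0.5;0.3;0.2];
ex=finiteexp(sx,px)     % E[X] = 1.9
x=finiterv(sx,px,1000); % 1000 independent samples
mean(x)                 % sample mean, close to 1.9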

finitevar  v=finitevar(sx,px)
Input: Probability vector px and vector of samples sx describing random variable X.
Output: v, the variance Var[X].

function v=finitevar(sx,px)
%Usage: v=finitevar(sx,px)
% returns the variance Var[X]
% of finite random variables X described by
% samples sx and probabilities px
ex2=finiteexp(sx.^2,px);
ex=finiteexp(sx,px);
v=ex2-(ex^2);

gausscdf  y=gausscdf(mu,sigma,x)
Input: mu and sigma are the parameters of a Gaussian random variable X; vector x.
Output: Vector y such that y_i = F_X(x_i) = Φ((x_i − µ)/σ).

function f=gausscdf(mu,sigma,x)
f=phi((x-mu)/sigma);

gausspdf  y=gausspdf(mu,sigma,x)
Input: mu and sigma are the parameters of a Gaussian random variable X; vector x.
Output: Vector y such that y_i = f_X(x_i).

function f=gausspdf(mu,sigma,x)
f=exp(-(x-mu).^2/(2*sigma^2))/...
   sqrt(2*pi*sigma^2);

gaussrv  x=gaussrv(mu,sigma,m)
Input: mu and sigma are the parameters of a Gaussian random variable X; integer m.
Output: Length m vector x such that each x_i is a sample of X.

function x=gaussrv(mu,sigma,m)
x=mu +(sigma*randn(m,1));

gaussvector  x=gaussvector(mu,C,m)
Input: For a Gaussian (µ_X, C_X) random vector X, gaussvector can be called in two ways:
• C is the n × n covariance matrix, mu is either a length n vector or a length 1 scalar, m is an integer.
• C is the length n vector equal to the first row of a symmetric Toeplitz covariance matrix C_X, mu is either a length n vector or a length 1 scalar, m is an integer.
If mu is a length n vector, then mu is the expected value vector; otherwise, each element of X is assumed to have mean mu.
Output: n × m matrix x such that each column x(:,i) is a sample vector of X.

function x=gaussvector(mu,C,m)
%output: m Gaussian vectors,
%each with mean mu
%and covariance matrix C
if (min(size(C))==1)
   C=toeplitz(C);
end
n=size(C,2);
if (length(mu)==1)
   mu=mu*ones(n,1);
end
[U,D,V]=svd(C);
x=V*(D^(0.5))*randn(n,m)...
   +(mu(:)*ones(1,m));

gaussvectorpdf  f=gaussvectorpdf(mu,C,x)
Input: For a Gaussian (µ_X, C_X) random vector X, mu is a length n vector, C is the n × n covariance matrix, x is a length n vector.
Output: f is the Gaussian vector PDF f_X(x) evaluated at x.

function f=gaussvectorpdf(mu,C,x)
n=length(x);
z=x(:)-mu(:);
f=exp(-z'*inv(C)*z)/...
   sqrt((2*pi)^n*det(C));

geometriccdf  y=geometriccdf(p,x)
Input: p is the parameter of a geometric random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).

function cdf=geometriccdf(p,x)
% for geometric(p) rv X,
%For input vector x, output is vector
%cdf such that cdf_i=Prob(X<=x_i)
x=(x(:)>=1).*floor(x(:));
cdf=1-((1-p).^x);
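A brief sketch of the Toeplitz calling convention (ours; the correlation values are illustrative):

% Sample a zero-mean Gaussian vector whose covariance is the
% symmetric Toeplitz matrix with first row [1 0.8 0.64 0.512]
c1=0.8.^(0:3);            % first row of the covariance matrix
x=gaussvector(0,c1,500);  % 4 x 500 matrix of sample vectors
cov(x')                   % empirical covariance, close to toeplitz(c1)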

geometricpmf  y=geometricpmf(p,x)
Input: p is the parameter of a geometric random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = P_X(x(i)).

function pmf=geometricpmf(p,x)
%geometric(p) rv X
%out: pmf(i)=Prob[X=x(i)]
x=x(:);
pmf= p*((1-p).^(x-1));
pmf= (x>0).*(x==floor(x)).*pmf;

geometricrv  x=geometricrv(p,m)
Input: p is the parameter of a geometric random variable X; m is a positive integer.
Output: x is a vector of m independent samples of random variable X.

function x=geometricrv(p,m)
%Usage: x=geometricrv(p,m)
% returns m samples of a geometric (p) rv
r=rand(m,1);
x=ceil(log(1-r)/log(1-p));

icdfrv  x=icdfrv(@icdf,m)
Input: @icdf is a "handle" (a kind of pointer) to a MATLAB function icdf.m that is MATLAB's representation of an inverse CDF F_X^(-1)(x) of a random variable X; integer m.
Output: Length m vector x such that each x_i is a sample of X.

function x=icdfrv(icdfhandle,m)
%Usage: x=icdfrv(@icdf,m)
%returns m samples of rv X
%with inverse CDF icdf.m
u=rand(m,1);
x=feval(icdfhandle,u);
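A hedged sketch of icdfrv in use (ours; the helper name myicdf is hypothetical, not part of matcode.zip): to sample an exponential (λ = 1) random variable via its inverse CDF F_X^(-1)(u) = −ln(1 − u), place the inverse CDF in a file myicdf.m:

function x=myicdf(u)
% inverse CDF of an exponential (1) rv (illustrative helper)
x=-log(1-u);

Then x=icdfrv(@myicdf,1000); returns 1000 exponential samples.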

pascalcdf  y=pascalcdf(k,p,x)
Input: k and p are the parameters of a Pascal (k, p) random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).

function cdf=pascalcdf(k,p,x)
%Usage: cdf=pascalcdf(k,p,x)
%For a pascal (k,p) rv X
%and input vector x, the output
%is a vector cdf such that
% cdf(i)=Prob[X<=x(i)]
x=floor(x(:)); % for noninteger x(i)
allx=k:max(x);
%allcdf holds all needed cdf values
allcdf=cumsum(pascalpmf(k,p,allx));
%x_i < k have zero-prob,
% other values are OK
okx=(x>=k);
%set zero-prob x(i)=k,
%just so indexing is not fouled up
x=(okx.*x) +((1-okx)*k);
cdf= okx.*allcdf(x-k+1);

pascalpmf  y=pascalpmf(k,p,x)
Input: k and p are the parameters of a Pascal (k, p) random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = P_X(x(i)).

function pmf=pascalpmf(k,p,x)
%For Pascal (k,p) rv X, and
%input vector x, output is a
%vector pmf: pmf(i)=Prob[X=x(i)]
x=x(:);
n=max(x);
i=(k:n-1)';
ip= [1 ;(1-p)*(i./(i+1-k))];
%pb=all n-k+1 pascal probs
pb=(p^k)*cumprod(ip);
okx=(x==floor(x)).*(x>=k);
%set bad x(i)=k to stop bad indexing
x=(okx.*x) + k*(1-okx);
pmf=okx.*pb(x-k+1);

pascalrv  x=pascalrv(k,p,m)
Input: k and p are the parameters of a Pascal random variable X; m is a positive integer.
Output: x is a vector of m independent samples of random variable X.

function x=pascalrv(k,p,m)
% return m samples of pascal(k,p) rv
r=rand(m,1);
rmax=max(r);
xmin=k;
xmax=ceil(2*(k/p)); %set max range
sx=xmin:xmax;
cdf=pascalcdf(k,p,sx);
while cdf(length(cdf)) <=rmax
   xmax=2*xmax;
   sx=xmin:xmax;
   cdf=pascalcdf(k,p,sx);
end
x=xmin+countless(cdf,r);

phi  y=phi(x)
Input: Vector x.
Output: Vector y such that y(i) = Φ(x(i)).

function y=phi(x)
sq2=sqrt(2);
y= 0.5 + 0.5*erf(x/sq2);

poissoncdf  y=poissoncdf(alpha,x)
Input: alpha is the parameter of a Poisson (α) random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = F_X(x(i)).

function cdf=poissoncdf(alpha,x)
%output cdf(i)=Prob[X<=x(i)]
x=floor(x(:));
sx=0:max(x);
%cdf from 0 to max(x)
cdf=cumsum(poissonpmf(alpha,sx));
okx=(x>=0); %x(i)<0 -> cdf=0
x=(okx.*x); %set negative x(i)=0
cdf= okx.*cdf(x+1);
%cdf=0 for x(i)<0

poissonpmf  y=poissonpmf(alpha,x)
Input: alpha is the parameter of a Poisson (α) random variable X; x is a vector of possible sample values.
Output: y is a vector with y(i) = P_X(x(i)).

function pmf=poissonpmf(alpha,x)
%Poisson (alpha) rv X,
%out=vector pmf: pmf(i)=P[X=x(i)]
x=x(:);
k=(1:max(x))';
logfacts =cumsum(log(k));
pb=exp([-alpha; ...
   -alpha+ (k*log(alpha))-logfacts]);
okx=(x>=0).*(x==floor(x));
x=okx.*x;
pmf=okx.*pb(x+1);
%pmf(i)=0 for zero-prob x(i)

poissonrv  x=poissonrv(alpha,m)
Input: alpha is the parameter of a Poisson (α) random variable X; m is a positive integer.
Output: x is a vector of m independent samples of random variable X.

function x=poissonrv(alpha,m)
%return m samples of poisson(alpha) rv X
r=rand(m,1);
rmax=max(r);
xmin=0;
xmax=ceil(2*alpha); %set max range
sx=xmin:xmax;
cdf=poissoncdf(alpha,sx);
%while ( sum(cdf <=rmax) ==(xmax-xmin+1) )
while cdf(length(cdf)) <=rmax
   xmax=2*xmax;
   sx=xmin:xmax;
   cdf=poissoncdf(alpha,sx);
end
x=xmin+countless(cdf,r);

uniformcdf  y=uniformcdf(a,b,x)
Input: a and b are parameters for continuous uniform random variable X; vector x.
Output: Vector y such that y_i = F_X(x_i).

function F=uniformcdf(a,b,x)
%Usage: F=uniformcdf(a,b,x)
%returns the CDF of a continuous
%uniform rv evaluated at x
F=(x-a).*((x>=a) & (x<b))/(b-a);
F=F+1.0*(x>=b);
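A quick sanity check (ours; the rate is illustrative): the sample mean of Poisson samples should approach α:

x=poissonrv(5,10000);  % 10000 Poisson (alpha=5) samples
mean(x)                % should be close to 5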

uniformpdf  y=uniformpdf(a,b,x)
Input: a and b are parameters for continuous uniform random variable X; vector x.
Output: Vector y such that y_i = f_X(x_i).

function f=uniformpdf(a,b,x)
%Usage: f=uniformpdf(a,b,x)
%returns the PDF of a continuous
%uniform rv evaluated at x
f=((x>=a) & (x<b))/(b-a);

uniformrv  x=uniformrv(a,b,m)
Input: a and b are parameters for continuous uniform random variable X; positive integer m.
Output: m element vector x such that each x(i) is a sample of X.

function x=uniformrv(a,b,m)
%Usage: x=uniformrv(a,b,m)
%Returns m samples of a
%uniform (a,b) random variable
x=a+(b-a)*rand(m,1);

.n-1)]*Rˆ(-1))’.com Phone:5017621195 Functions for Stochastic Processes brownian w=brownian(alpha.*gaussrv(0.t) %Brownian motion process %sampled at t(1)<t(2)< .p0.1).1)=ones(n. n=size(Q.1)-1. us (United States) Zip Code:71901 .1). cmcstatprob pv=cmcstatprob(Q) Input: State transition matrix Q for a continuoustime ﬁnite Markov chain Output: pv is the stationary probability vector for the continuous-time Markov chain function pv = cmcstatprob(Q) %Q has zero diagonal rates R=Q-diag(sum(Q.t) Input: n × n state transition matrix Q for a continuous-time ﬁnite Markov chain. %max no.t(1:n-1)]. dmcstatprob pv=dmcstatprob(P) Input: n × n stochastic matrix P representing a discrete-time aperiodic irreducible ﬁnite Markov chain Output: pv is the stationary probability vector. hot springs.t) %Q has zero diagonal rates %initial state probabilities p0 K=size(Q. pv=([1 zeros(1. R(:. end R=Q-diag(sum(Q. AR. state %check for integer p0 if (length(p0)==1) p0=((0:K)==p0). A=(eye(n)-P). function w=brownian(alpha. function pv = dmcstatprob(P) n=size(P. delta=t-[0. Output: w is a vector such that w(i) is the position at time t(i) of the particle in Brownian motion.2)).1. A(:. length n vector p0 denoting the initial state probabilities.t) Input: t is a vector holding an ordered sequence of inspection times..n). x=sqrt(alpha*delta).1). w=cumsum(x). 16 Address:104 pine meadows loop. then the simulation starts in state p0 function pv = cmcprob(Q.p0.Name:joey iwatsuru Email:joeyiwat@yahoo. pv= (p0(:)’*expm(R*t))’. alpha is the scaling constant of a Brownian motion process such that the ith increment has variance α(ti − ti−1 ).1)=ones(n.2)). n=length(t).1). t=t(:).n-1)]*Aˆ(-1))’. cmcprob pv=cmcprob(Q. pv=([1 zeros(1. nonengative scalar t Output: Length n vector pv such that pv(t) is the state probability vector at time t of the Markov chain Comment: If p0 is a scalar integer.

poissonarrivals  s=poissonarrivals(lambda,T)
Input: lambda is the arrival rate of a Poisson process; T marks the end of an observation interval [0, T].
Output: s=[s(1), ..., s(n)]' is a vector such that s(i) is the ith arrival time. Note that n, the number of arrivals, is random and is a Poisson random variable with expected value λT.
Comment: This code is pretty stupid. There are decidedly better ways to create a set of arrival times; see Problem 10.13.

function s=poissonarrivals(lambda,T)
%arrival times s=[s(1) ... s(n)]
% s(n)<= T < s(n+1)
n=ceil(1.1*lambda*T);
s=cumsum(exponentialrv(lambda,n));
while (s(length(s))< T),
   s_new=s(length(s))+ ...
      cumsum(exponentialrv(lambda,n));
   s=[s; s_new];
end
s=s(s<=T);

poissonprocess  N=poissonprocess(lambda,t)
Input: lambda is the arrival rate of a Poisson process; t is a vector of "inspection times."
Output: N is a vector such that N(i) is the number of arrivals by inspection time t(i).

function N=poissonprocess(lambda,t)
%input: rate lambda>0, vector t
%For a sample function of a
%Poisson process of rate lambda,
%N(i) = no. of arrivals by t(i)
s=poissonarrivals(lambda,max(t));
N=count(s,t);

simcmc  ST=simcmc(Q,p0,T)
Input: State transition matrix Q for a continuous-time finite Markov chain; vector p0 denoting the initial state probabilities; T marks the end of the observation interval [0, T].
Output: A simulation of the Markov chain system over the time interval [0, T]: The output is an n × 2 matrix ST such that the first column ST(:,1) is the sequence of system states and the second column ST(:,2) is the amount of time spent in each state. That is, ST(i,2) is the amount of time the system spends in state ST(i,1). Note that n, the number of state occupancy periods, is random.
Comment: If p0 is a scalar integer, then the simulation starts in state p0.

function ST=simcmc(Q,p0,T)
K=size(Q,1)-1; %max no. of states
%calc average trans. rate
ps=cmcstatprob(Q);
v=sum(Q,2);
R=ps'*v;
n=ceil(0.6*T/R);
ST=simcmcstep(Q,p0,2*n);
while (sum(ST(:,2))<T),
   s=ST(size(ST,1),1);
   p00=Q(1+s,:)/v(1+s);
   S=simcmcstep(Q,p00,n);
   ST=[ST;S];
end
n=1+sum(cumsum(ST(:,2))<T);
ST=ST(1:n,:);
%truncate last holding time
ST(n,2)=T-sum(ST(1:n-1,2));
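A short usage sketch (ours; rate and inspection times are illustrative):

% Number of arrivals of a rate-3 Poisson process
% by each of the inspection times 1,2,...,5
N=poissonprocess(3,1:5)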

simcmcstep  S=simcmcstep(Q,p0,n)
Input: State transition matrix Q for a continuous-time finite Markov chain; vector p0 denoting the initial state probabilities; integer n.
Output: A simulation of n steps of the continuous-time Markov chain system: The output is an n × 2 matrix ST such that the first column ST(:,1) is the length n sequence of system states and the second column ST(:,2) is the amount of time spent in each state. That is, ST(i,2) is the amount of time the system spends in state ST(i,1).
Comment: If p0 is a scalar integer, then the simulation starts in state p0. This program is the basis for simcmc.

function S=simcmcstep(Q,p0,n)
%S=simcmcstep(Q,p0,n)
% Simulate n steps of a cts
% Markov Chain, rate matrix Q,
% init. state probabilities p0
K=size(Q,1)-1; %max no. state
S=zeros(n+1,2); %init allocation
%check for integer p0
if (length(p0)==1)
   p0=((0:K)==p0);
end
v=sum(Q,2); %state dep. rates
t=1./v;
P=diag(t)*Q;
S(:,1)=simdmc(P,p0,n);
S(:,2)=t(1+S(:,1)).*exponentialrv(1,n+1);

simdmc  x=simdmc(P,p0,n)
Input: n × n stochastic matrix P which is the state transition matrix of a discrete-time finite Markov chain; length n vector p0 denoting the initial state probabilities; integer n.
Output: A simulation of the Markov chain system such that for the length n vector x, x(m) is the state at time m-1 of the Markov chain.
Comment: If p0 is a scalar integer, then the simulation starts in state p0.

function x=simdmc(P,p0,n)
K=size(P,1)-1; %highest no. state
sx=0:K; %state space
x=zeros(n+1,1); %initialization
%convert integer p0 to prob vector
if (length(p0)==1)
   p0=((0:K)==p0);
end
%x(m)= state at time m-1
x(1)=finiterv(sx,p0,1);
for m=1:n,
   x(m+1)=finiterv(sx,P(x(m)+1,:),1);
end
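A minimal sketch (ours; the transition matrix is illustrative):

% 100 steps of a two-state chain that flips state with
% probability 0.1, starting in state 0
P=[0.9 0.1; 0.1 0.9];
x=simdmc(P,0,100);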

Random Utilities

count  n=count(x,y)
Input: Vectors x and y.
Output: Vector n such that n(i) is the number of elements of x less than or equal to y(i).

function n=count(x,y)
%Usage n=count(x,y)
%n(i)= # elements of x <= y(i)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX<=MY),1))';

countequal  n=countequal(x,y)
Input: Vectors x and y.
Output: Vector n such that n(i) is the number of elements of x equal to y(i).

function n=countequal(x,y)
%Usage: n=countequal(x,y)
%n(j)= # elements of x = y(j)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX==MY),1))';

countless  n=countless(x,y)
Input: Vectors x and y.
Output: Vector n such that n(i) is the number of elements of x strictly less than y(i).

function n=countless(x,y)
%Usage: n=countless(x,y)
%n(i)= # elements of x < y(i)
[MX,MY]=ndgrid(x,y);
%each column of MX = x
%each row of MY = y
n=(sum((MX<MY),1))';

dftmat  F=dftmat(N)
Input: Integer N.
Output: F is the N by N discrete Fourier transform matrix.

function F = dftmat(N)
%Usage: F=dftmat(N)
%F is the N by N DFT matrix
n=(0:N-1)';
F=exp((-1.0j)*2*pi*(n*(n'))/N);
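A concrete illustration (ours):

x=[1 3 5 7];
n=count(x,[2 6])   % returns [1; 3]: one element <= 2, three <= 6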

fftc  S=fftc(r,N); S=fftc(r)
Input: Vector r=[r(1) ... r(2k+1)] holding the time sequence r_(−k), ..., r_0, ..., r_k centered around the origin.
Output: S is the DFT of r.
Comment: Supports the same calling conventions as fft.

function S=fftc(varargin)
%DFT for a signal r
%centered at the origin
%Usage:
% fftc(r,N): N point DFT of r
% fftc(r): length(r) DFT of r
r=varargin{1};
L=1+floor(length(r)/2);
if (nargin>1)
   N=varargin{2}(1);
else
   N=(2*L)-1;
end
R=fft(r,N);
n=reshape(0:(N-1),size(R));
phase=2*pi*(n/N)*(L-1);
S=R.*exp((1.0j)*phase);

freqxy  fxy=freqxy(xy,SX,SY)
Input: For random variables X and Y, xy is an m × 2 matrix holding a list of sample value pairs; xy(i,:) is the ith sample pair (X, Y); grids SX and SY representing the sample space.
Output: fxy is a K × 3 matrix. In each row [fxy(k,1) fxy(k,2) fxy(k,3)], the pair [fxy(k,1) fxy(k,2)] is a unique (X, Y) pair with relative frequency fxy(k,3).
Comment: Given the grids SX, SY and the probability grid PXY, a list of random sample value pairs xy can be simulated by the commands
   S=[SX(:) SY(:)];
   xy=finiterv(S,PXY(:),m);
The output fxy is ordered so that the rows match the ordering of rows in the matrix [SX(:) SY(:) PXY(:)].

function fxy = freqxy(xy,SX,SY)
%Usage: fxy = freqxy(xy,SX,SY)
%xy is an m x 2 matrix:
%xy(i,:)= ith sample pair X,Y
%Output fxy is a K x 3 matrix:
% [fxy(k,1) fxy(k,2)]
% = kth unique pair [x y] and
% fxy(k,3)= corresp. rel. freq.
%extend xy to include a sample
%for all possible (X,Y) pairs:
xy=[xy; SX(:) SY(:)];
[U,I,J]=unique(xy,'rows');
N=hist(J,1:max(J))-1;
N=N/sum(N);
fxy=[U N(:)];
%reorder fxy rows to match
%rows of [SX(:) SY(:) PXY(:)]:
fxy=sortrows(fxy,[2 1 3]);

pmfplot  pmfplot(sx,px,'x axis text','y axis text')
Input: Sample space vector sx and PMF vector px for finite random variable X; optional text strings xls and yls.
Output: A plot of the PMF P_X(x) in the bar style used in the text.

function h=pmfplot(sx,px,xls,yls)
%Usage: pmfplot(sx,px,xls,yls)
%sx and px are vectors, px is the PMF
%xls and yls are x and y label strings
nonzero=find(px);
sx=sx(nonzero); px=px(nonzero);
sx=(sx(:))'; px=(px(:))';
XM = [sx; sx];
PM=[zeros(size(px)); px];
h=plot(XM,PM,'-k');
set(h,'LineWidth',3);
if (nargin==4)
   xlabel(xls);
   ylabel(yls,'VerticalAlignment','Bottom');
end
xmin=min(sx); xmax=max(sx);
xborder=0.05*(xmax-xmin);
xmin=xmin-xborder;
xmax=xmax+xborder;
ymax=1.1*max(px);
axis([xmin xmax 0 ymax]);

rect  y=rect(x)
Input: Vector x.
Output: Vector y such that y_i = rect(x_i) = 1 if |x_i| < 0.5, 0 otherwise.

function y=rect(x)
%Usage:y=rect(x)
y=1.0*(abs(x)<0.5);

sinc  y=sinc(x)
Input: Vector x.
Output: Vector y such that y_i = sinc(x_i) = sin(πx_i)/(πx_i).
Comment: The code is ugly because it makes sure to produce the right limit value at x_i = 0.

function y=sinc(x)
%Usage:y=sinc(x)
xx=x+(x==0);
y=sin(pi*xx)./(pi*xx);
y=((1.0-(x==0)).*y)+ (1.0*(x==0));
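A short usage sketch (ours; the distribution is illustrative):

% Plot the PMF of a binomial (5,0.3) random variable
sx=0:5;
px=binomialpmf(5,0.3,sx);
pmfplot(sx,px,'x','P_X(x)');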

simplot  simplot(S,xlabel,ylabel)
Input: The simulated state sequence vector S generated by S=simdmc(P,p0,n) or the n × 2 state/time matrix ST generated by either ST=simcmc(Q,p0,T) or ST=simcmcstep(Q,p0,n).
Output: A "stairs" plot showing the sequence of simulation states over time.
Comment: If S is just a state sequence vector, then each stair has equal width. If S is an n × 2 state/time matrix ST, then the width of the stair is proportional to the time spent in that state.

function h=simplot(S,xls,yls)
%h=simplot(S,xlabel,ylabel)
% Plots the output of a simulated state sequence
% If S is N by 1, a discrete time chain is assumed
% with visit times of one unit.
% If S is an N by 2 matrix, a cts time Markov chain
% is assumed where
% S(:,1) = state sequence,
% S(:,2) = state visit times.
% The cumulative sum
% of visit times are transition instances.
% h is a handle to a stairs plot of the state sequence
% vs state transition times
%in case of discrete time simulation
if (size(S,2)==1)
   S=[S ones(size(S))];
end
Y=[S(:,1); S(size(S,1),1)];
X=cumsum([0; S(:,2)]);
h=stairs(X,Y);
if (nargin==3)
   xlabel(xls);
   ylabel(yls,'VerticalAlignment','Bottom');
end
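A minimal end-to-end sketch (ours; the chain is illustrative):

% Simulate and plot 50 steps of a two-state discrete-time chain
P=[0.9 0.1; 0.1 0.9];
x=simdmc(P,0,50);
simplot(x,'time m','state');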

Probability and Stochastic Processes
A Friendly Introduction for Electrical and Computer Engineers

Second Edition

Quiz Solutions

Roy D. Yates and David J. Goodman
May 22, 2004

• The MATLAB section quizzes at the end of each chapter use programs available for download as the archive matcode.zip. This archive has general purpose programs for solving probability problems as well as specific .m files associated with examples or quizzes in the text. Also available is a manual probmatlab.pdf describing the general purpose .m files in matcode.zip.

• We have made a substantial effort to check the solution to every quiz. Nevertheless, there is a nonzero probability (in fact, a probability close to unity) that errors will be found. If you find errors or have suggestions or comments, please send email to ryates@winlab.rutgers.edu. When errors are found, corrected solutions will be posted at the website.

Quiz Solutions – Chapter 1

Quiz 1.1
In the Venn diagrams for parts (a)-(g) below, the shaded area represents the indicated set.

[Venn diagrams over the events M, T, O showing the shaded regions for (1) R = T^c, (2) M ∪ O, (3) M ∩ O, (4) R ∪ M, (5) R ∩ M, and (6) T^c − M.]

Quiz 1.2
(1) A1 = {vvv, vvd, vdv, vdd}
(2) B1 = {dvv, dvd, ddv, ddd}
(3) A2 = {vvv, vvd, dvv, dvd}
(4) B2 = {vdv, vdd, ddv, ddd}
(5) A3 = {vvv, ddd}
(6) B3 = {vdv, dvd}
(7) A4 = {vvv, vvd, vdv, vdd, dvv, dvd, ddv}
(8) B4 = {ddd, ddv, dvd, vdd}
Recall that Ai and Bi are collectively exhaustive if Ai ∪ Bi = S. Also, Ai and Bi are mutually exclusive if Ai ∩ Bi = φ. Since we have written down each pair Ai and Bi above, we can simply check for these properties. The pair A1 and B1 are mutually exclusive and collectively exhaustive. The pair A2 and B2 are mutually exclusive and collectively exhaustive. The pair A3 and B3 are mutually exclusive but not collectively exhaustive. The pair A4 and B4 are not mutually exclusive since dvd belongs to A4 and B4. However, A4 and B4 are collectively exhaustive.

Quiz 1.3
There are exactly 50 equally likely outcomes: s51 through s100. Each of these outcomes has probability 0.02.
(1) P[{s79}] = 0.02
(2) P[{s100}] = 0.02
(3) P[A] = P[{s90, ..., s100}] = 11 × 0.02 = 0.22
(4) P[F] = P[{s51, ..., s59}] = 9 × 0.02 = 0.18
(5) P[T ≥ 80] = P[{s80, ..., s100}] = 21 × 0.02 = 0.42
(6) P[T < 90] = P[{s51, ..., s89}] = 39 × 0.02 = 0.78
(7) P[a C grade or better] = P[{s70, ..., s100}] = 31 × 0.02 = 0.62
(8) P[student passes] = P[{s60, ..., s100}] = 41 × 0.02 = 0.82

Quiz 1.4
We can describe this experiment by the event space consisting of the four possible events VB, VL, DB, and DL. We represent these events in the table:

        L     B
  V     ?     ?
  D     ?     ?

In a roundabout way, the problem statement tells us how to fill in the table. In particular,
  P[V] = 0.7 = P[VL] + P[VB]
  P[L] = 0.6 = P[VL] + P[DL]
Since P[VL] = 0.35, we can conclude that P[VB] = 0.7 − 0.35 = 0.35 and that P[DL] = 0.6 − 0.35 = 0.25. This allows us to fill in two more table entries:

        L      B
  V    0.35   0.35
  D    0.25    ?

The remaining table entry is filled in by observing that the probabilities must sum to 1. This implies P[DB] = 0.05 and the complete table is

        L      B
  V    0.35   0.35
  D    0.25   0.05

Finding the various probabilities is now straightforward:


(1) P[DL] = 0.25
(2) P[D ∪ L] = P[VL] + P[DL] + P[DB] = 0.35 + 0.25 + 0.05 = 0.65
(3) P[VB] = 0.35
(4) P[V ∪ L] = P[V] + P[L] − P[VL] = 0.7 + 0.6 − 0.35 = 0.95
(5) P[V ∪ D] = P[S] = 1
(6) P[LB] = P[LL^c] = 0

Quiz 1.5
(1) The probability of exactly two voice calls is
  P[N_V = 2] = P[{vvd, vdv, dvv}] = 0.3
(2) The probability of at least one voice call is
  P[N_V ≥ 1] = P[{vdd, dvd, ddv, vvd, vdv, dvv, vvv}] = 6(0.1) + 0.2 = 0.8
An easier way to get the same answer is to observe that
  P[N_V ≥ 1] = 1 − P[N_V < 1] = 1 − P[N_V = 0] = 1 − P[{ddd}] = 0.8

(3) The conditional probability of two voice calls followed by a data call given that there were two voice calls is
  P[{vvd} | N_V = 2] = P[{vvd}, N_V = 2] / P[N_V = 2] = P[{vvd}] / P[N_V = 2] = 0.1/0.3 = 1/3
(4) The conditional probability of two data calls followed by a voice call given there were two voice calls is
  P[{ddv} | N_V = 2] = P[{ddv}, N_V = 2] / P[N_V = 2] = 0
The joint event of the outcome ddv and exactly two voice calls has probability zero since there is only one voice call in the outcome ddv.
(5) The conditional probability of exactly two voice calls given at least one voice call is
  P[N_V = 2 | N_V ≥ 1] = P[N_V = 2, N_V ≥ 1] / P[N_V ≥ 1] = P[N_V = 2] / P[N_V ≥ 1] = 0.3/0.8 = 3/8
(6) The conditional probability of at least one voice call given there were exactly two voice calls is
  P[N_V ≥ 1 | N_V = 2] = P[N_V ≥ 1, N_V = 2] / P[N_V = 2] = P[N_V = 2] / P[N_V = 2] = 1
Given that there were two voice calls, there must have been at least one voice call.



Quiz 1.6
In this experiment, there are four outcomes with probabilities
  P[{vv}] = (0.8)^2 = 0.64    P[{vd}] = (0.8)(0.2) = 0.16
  P[{dv}] = (0.2)(0.8) = 0.16    P[{dd}] = (0.2)^2 = 0.04

When checking the independence of any two events A and B, it's wise to avoid intuition and simply check whether P[AB] = P[A]P[B]. Using the probabilities of the outcomes, we now can test for the independence of events.
(1) First, we calculate the probability of the joint event:
  P[N_V = 2, N_V ≥ 1] = P[N_V = 2] = P[{vv}] = 0.64
Next, we observe that
  P[N_V ≥ 1] = P[{vd, dv, vv}] = 0.96
Finally, we make the comparison
  P[N_V = 2] P[N_V ≥ 1] = (0.64)(0.96) ≠ P[N_V = 2, N_V ≥ 1]
which shows the two events are dependent.
(2) The probability of the joint event is
  P[N_V ≥ 1, C1 = v] = P[{vd, vv}] = 0.80
From part (a), P[N_V ≥ 1] = 0.96. Further, P[C1 = v] = 0.8 so that
  P[N_V ≥ 1] P[C1 = v] = (0.96)(0.8) = 0.768 ≠ P[N_V ≥ 1, C1 = v]
Hence, the events are dependent.
(3) The problem statement that the calls were independent implies that the events the second call is a voice call, {C2 = v}, and the first call is a data call, {C1 = d}, are independent events. Just to be sure, we can do the calculations to check:
  P[C1 = d, C2 = v] = P[{dv}] = 0.16
Since P[C1 = d]P[C2 = v] = (0.2)(0.8) = 0.16, we confirm that the events are independent. Note that this shouldn't be surprising since we used the information that the calls were independent in the problem statement to determine the probabilities of the outcomes.
(4) The probability of the joint event is
  P[C2 = v, N_V is even] = P[{vv}] = 0.64
Also, each event has probability
  P[C2 = v] = P[{dv, vv}] = 0.8,    P[N_V is even] = P[{dd, vv}] = 0.68
Thus, P[C2 = v]P[N_V is even] = (0.8)(0.68) = 0.544. Since P[C2 = v, N_V is even] = 0.64 ≠ 0.544, the events are dependent.



Quiz 1.7
Let Fi denote the event that the user is found on page i. The tree for the experiment is

[Tree diagram: at each stage the user is found on page i (event Fi) with probability 0.8, or not found (Fi^c) with probability 0.2, for pages i = 1, 2, 3.]

The user is found unless all three paging attempts fail. Thus the probability the user is found is
  P[F] = 1 − P[F1^c F2^c F3^c] = 1 − (0.2)^3 = 0.992

Quiz 1.8
(1) We can view choosing each bit in the code word as a subexperiment. Each subexperiment has two possible outcomes: 0 and 1. Thus by the fundamental principle of counting, there are 2 × 2 × 2 × 2 = 2^4 = 16 possible code words.
(2) An experiment that can yield all possible code words with two zeroes is to choose which 2 bits (out of 4 bits) will be zero. The other two bits then must be ones. There are C(4, 2) = 6 ways to do this. Hence, there are six code words with exactly two zeroes. For this problem, it is also possible to simply enumerate the six code words: 1100, 1010, 1001, 0101, 0110, 0011.
(3) When the first bit must be a zero, then the first subexperiment of choosing the first bit has only one outcome. For each of the next three bits, we have two choices. In this case, there are 1 × 2 × 2 × 2 = 8 ways of choosing a code word.
(4) For the constant ratio code, we can specify a code word by choosing M of the bits to be ones. The other N − M bits will be zeroes. The number of ways of choosing such a code word is C(N, M). For N = 8 and M = 3, there are C(8, 3) = 56 code words.

Quiz 1.9
(1) In this problem, k bits received in error is the same as k failures in 100 trials. The failure probability is ε = 1 − p and the success probability is 1 − ε = p. That is, the probability of k bits in error and 100 − k correctly received bits is
  P[S_(k,100−k)] = C(100, k) ε^k (1 − ε)^(100−k)


For ε = 0.01,
  P[S_(0,100)] = (1 − ε)^100 = (0.99)^100 = 0.3660
  P[S_(1,99)] = 100(0.01)(0.99)^99 = 0.3700
  P[S_(2,98)] = 4950(0.01)^2 (0.99)^98 = 0.1849
  P[S_(3,97)] = 161,700(0.01)^3 (0.99)^97 = 0.0610
(2) The probability a packet is decoded correctly is just
  P[C] = P[S_(0,100)] + P[S_(1,99)] + P[S_(2,98)] + P[S_(3,97)] = 0.9819

Quiz 1.10
Since the chip works only if all n transistors work, the transistors in the chip are like devices in series. The probability that a chip works is P[C] = p^n. The module works if either 8 chips work or 9 chips work. Let Ck denote the event that exactly k chips work. Since transistor failures are independent of each other, chip failures are also independent. Thus each P[Ck] has the binomial probability
  P[C8] = C(9, 8) (P[C])^8 (1 − P[C])^(9−8) = 9 p^(8n) (1 − p^n),
  P[C9] = (P[C])^9 = p^(9n).
The probability a memory module works is
  P[M] = P[C8] + P[C9] = p^(8n) (9 − 8p^n)

Quiz 1.11
For a MATLAB simulation, we first generate a vector R of 100 random numbers. Second, we generate vector X as a function of R to represent the 3 possible outcomes of a flip. X(i)=1 if flip i was heads, X(i)=2 if flip i was tails, and X(i)=3 if flip i landed on the edge. Lastly, we use the hist function to count how many occurrences there are of each possible value of X(i).

R=rand(1,100);
X=(R<= 0.4) ...
   + (2*(R>0.4).*(R<=0.9)) ...
   + (3*(R>0.9));
Y=hist(X,1:3)

To see how this works, we note there are three cases:
• If R(i) <= 0.4, then X(i)=1.
• If 0.4 < R(i) and R(i)<=0.9, then X(i)=2.
• If 0.9 < R(i), then X(i)=3.
These three cases will have probabilities 0.4, 0.5 and 0.1.
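As a rough check (our own note, not output from the text), the counts in Y should be near the expected values for 100 flips:

% Expected counts under probabilities [0.4 0.5 0.1]
100*[0.4 0.5 0.1]   % ans = [40 50 10]; Y should be close to this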

Quiz Solutions – Chapter 2

Quiz 2.1
The sample space, probabilities and corresponding grades for the experiment are

  Outcome   BB    BC    CB    CC
  P[·]      0.36  0.24  0.24  0.16
  G         3.0   2.5   2.5   2.0

Quiz 2.2
(1) To find c, we recall that the PMF must sum to 1. That is,
  sum_(n=1)^(3) P_N(n) = c (1 + 1/2 + 1/3) = 1
This implies c = 6/11. Now that we have found c, the remaining parts are straightforward.
(2) P[N = 1] = P_N(1) = c = 6/11
(3) P[N ≥ 2] = P_N(2) + P_N(3) = c/2 + c/3 = 5/11
(4) P[N > 3] = sum_(n=4)^(∞) P_N(n) = 0

Quiz 2.3
Decoding each transmitted bit is an independent trial where we call a bit error a "success." Each bit is in error, that is, the trial is a success, with probability p. Now we can interpret each experiment in the generic context of independent trials.
(1) The random variable X is the number of trials up to and including the first success. Similar to Example 2.11, X has the geometric PMF
  P_X(x) = p(1 − p)^(x−1) for x = 1, 2, ...; 0 otherwise.
(2) If p = 0.1, then the probability exactly 10 bits are sent is
  P[X = 10] = P_X(10) = (0.1)(0.9)^9 = 0.0387

The probability that at least 10 bits are sent is P[X ≥ 10] = sum_(x=10)^(∞) P_X(x). This sum is not too hard to calculate. However, it's even easier to observe that X ≥ 10 if the first 10 bits are transmitted correctly. That is,
  P[X ≥ 10] = P[first 10 bits are correct] = (1 − p)^10
For p = 0.1, P[X ≥ 10] = 0.9^10 = 0.3487.
(3) The random variable Y is the number of successes in 100 independent trials. Just as in Example 2.13, Y has the binomial PMF
  P_Y(y) = C(100, y) p^y (1 − p)^(100−y)
If p = 0.01, the probability of exactly 2 errors is
  P[Y = 2] = P_Y(2) = C(100, 2) (0.01)^2 (0.99)^98 = 0.1849
(4) The probability of no more than 2 errors is
  P[Y ≤ 2] = P_Y(0) + P_Y(1) + P_Y(2)
           = (0.99)^100 + 100(0.01)(0.99)^99 + C(100, 2)(0.01)^2(0.99)^98
           = 0.9207
(5) Random variable Z is the number of trials up to and including the third success. Thus Z has the Pascal PMF (see Example 2.15)
  P_Z(z) = C(z − 1, 2) p^3 (1 − p)^(z−3)
Note that P_Z(z) > 0 for z = 3, 4, 5, ....
(6) If p = 0.25, the probability that the third error occurs on bit 12 is
  P_Z(12) = C(11, 2) (0.25)^3 (0.75)^9 = 0.0645

Quiz 2.4
Each of the requested probabilities can be read off the CDF F_Y(y). However, we must keep in mind that when F_Y(y) has a discontinuity at y0, F_Y(y) takes the upper value F_Y(y0+).
(1) P[Y < 1] = F_Y(1−) = 0


(2) P[Y ≤ 1] = F_Y(1) = 0.6
(3) P[Y > 2] = 1 − P[Y ≤ 2] = 1 − F_Y(2) = 1 − 0.8 = 0.2
(4) P[Y ≥ 2] = 1 − P[Y < 2] = 1 − F_Y(2−) = 1 − 0.6 = 0.4
(5) P[Y = 1] = P[Y ≤ 1] − P[Y < 1] = F_Y(1+) − F_Y(1−) = 0.6
(6) P[Y = 3] = P[Y ≤ 3] − P[Y < 3] = F_Y(3+) − F_Y(3−) = 0.8 − 0.8 = 0

Quiz 2.5
(1) With probability 0.7, a call is a voice call and C = 25. Otherwise, with probability 0.3, we have a data call and C = 40. This corresponds to the PMF
  P_C(c) = 0.7 (c = 25); 0.3 (c = 40); 0 otherwise
(2) The expected value of C is
  E[C] = 25(0.7) + 40(0.3) = 29.5 cents

Quiz 2.6
(1) As a function of N, the cost T is
  T = 25N + 40(3 − N) = 120 − 15N
(2) To find the PMF of T, we can draw the following tree:

  [Tree: N = 0 with probability 0.1 gives T = 120; N = 1, 2, 3 each with probability 0.3 give T = 105, 90, 75 respectively.]

From the tree, we can write down the PMF of T:
  P_T(t) = 0.3 (t = 75, 90, 105); 0.1 (t = 120); 0 otherwise
From the PMF P_T(t), the expected value of T is
  E[T] = 75P_T(75) + 90P_T(90) + 105P_T(105) + 120P_T(120)
       = (75 + 90 + 105)(0.3) + 120(0.1) = 93

Quiz 2.7
(1) Using Definition 2.14, the expected number of applications is
  E[A] = sum_(a=1)^(4) a P_A(a) = 1(0.4) + 2(0.3) + 3(0.2) + 4(0.1) = 2
(2) The number of memory chips is M = g(A) where
  g(A) = 4 (A = 1, 2); 6 (A = 3); 8 (A = 4)
(3) By Theorem 2.10, the expected number of memory chips is
  E[M] = sum_(a=1)^(4) g(a) P_A(a) = 4(0.4) + 4(0.3) + 6(0.2) + 8(0.1) = 4.8
Since E[A] = 2, g(E[A]) = g(2) = 4. However, E[M] = 4.8 ≠ g(E[A]). The two quantities are different because g(A) is not of the form αA + β.

Quiz 2.8
The PMF P_N(n) allows us to calculate each of the desired quantities.
(1) The expected value of N is
  E[N] = sum_(n=0)^(2) n P_N(n) = 0(0.1) + 1(0.4) + 2(0.5) = 1.4
(2) The second moment of N is
  E[N^2] = sum_(n=0)^(2) n^2 P_N(n) = 0^2(0.1) + 1^2(0.4) + 2^2(0.5) = 2.4
(3) The variance of N is
  Var[N] = E[N^2] − (E[N])^2 = 2.4 − (1.4)^2 = 0.44
(4) The standard deviation is σ_N = sqrt(Var[N]) = sqrt(0.44) = 0.663

Quiz 2.9
(1) From the problem statement, we learn that the conditional PMF of N given the event I is
  P_(N|I)(n) = 0.02 (n = 1, 2, ..., 50); 0 otherwise
(2) Also from the problem statement, the conditional PMF of N given the event T is
  P_(N|T)(n) = 0.2 (n = 1, 2, ..., 5); 0 otherwise
(3) The problem statement tells us that P[T] = 1 − P[I] = 3/4. From Theorem 1.10 (the law of total probability), we find the PMF of N is
  P_N(n) = P_(N|T)(n) P[T] + P_(N|I)(n) P[I]
         = 0.2(0.75) + 0.02(0.25) = 0.155 (n = 1, ..., 5)
         = 0.02(0.25) = 0.005 (n = 6, ..., 50)
         and 0 otherwise.
(4) First we find
  P[N ≤ 10] = sum_(n=1)^(10) P_N(n) = (0.155)(5) + (0.005)(5) = 0.80
By Theorem 2.17, the conditional PMF of N given N ≤ 10 is
  P_(N|N≤10)(n) = P_N(n)/P[N ≤ 10]
               = 0.155/0.8 = 0.19375 (n = 1, ..., 5)
               = 0.005/0.8 = 0.00625 (n = 6, ..., 10)
               and 0 otherwise.
(5) Once we have the conditional PMF, calculating conditional expectations is easy.
  E[N|N ≤ 10] = sum_n n P_(N|N≤10)(n)
             = sum_(n=1)^(5) n(0.19375) + sum_(n=6)^(10) n(0.00625)
             = 3.15625

[Figure 1: Two examples of the output of samplemean(k): (a) samplemean(100), (b) samplemean(1000).]

(6) To find the conditional variance, we first find the conditional second moment
  E[N^2|N ≤ 10] = sum_n n^2 P_(N|N≤10)(n)
               = sum_(n=1)^(5) n^2(0.19375) + sum_(n=6)^(10) n^2(0.00625)
               = 55(0.19375) + 330(0.00625) = 12.71875
The conditional variance is
  Var[N|N ≤ 10] = E[N^2|N ≤ 10] − (E[N|N ≤ 10])^2
               = 12.71875 − (3.15625)^2 = 2.75684

Quiz 2.10
The function samplemean(k) generates and plots five m_n sequences for n = 1, 2, ..., k. The ith column M(:,i) of M holds a sequence m_1, m_2, ..., m_k.

function M=samplemean(k);
K=(1:k)';
M=zeros(k,5);
for i=1:5,
   X=duniformrv(0,10,k);
   M(:,i)=cumsum(X)./K;
end;
plot(K,M);

Examples of the function calls (a) samplemean(100) and (b) samplemean(1000) are shown in Figure 1. Each call to samplemean(k) produces a random output. What is observed in these figures is that for small n, m_n is fairly random but as n gets

large, m_n gets close to E[X] = 5. Although each sequence m_1, m_2, ... that we generate is random, the sequences always converge to E[X]. This random convergence is analyzed in Chapter 7.

Quiz Solutions – Chapter 3

Quiz 3.1
The CDF of Y is
  F_Y(y) = 0 (y < 0); y/4 (0 ≤ y ≤ 4); 1 (y > 4)

[Figure: plot of F_Y(y), rising linearly from 0 at y = 0 to 1 at y = 4.]

From the CDF F_Y(y), we can calculate the probabilities:
(1) P[Y ≤ −1] = F_Y(−1) = 0
(2) P[Y ≤ 1] = F_Y(1) = 1/4
(3) P[2 < Y ≤ 3] = F_Y(3) − F_Y(2) = 3/4 − 2/4 = 1/4
(4) P[Y > 1.5] = 1 − P[Y ≤ 1.5] = 1 − F_Y(1.5) = 1 − (1.5)/4 = 5/8

Quiz 3.2
(1) First we will find the constant c and then we will sketch the PDF. To find c, we use the fact that the PDF must integrate to 1. We will evaluate this integral using integration by parts:
  ∫_(−∞)^(∞) f_X(x) dx = ∫_0^∞ c x e^(−x/2) dx
    = [−2cx e^(−x/2)]_0^∞ + ∫_0^∞ 2c e^(−x/2) dx
    = [−4c e^(−x/2)]_0^∞ = 4c
Thus c = 1/4 and X has the Erlang (n = 2, λ = 1/2) PDF
  f_X(x) = (x/4)e^(−x/2) (x ≥ 0); 0 otherwise.

[Figure: plot of f_X(x) for 0 ≤ x ≤ 15.]

(2) To find the CDF F_X(x), we first note X is a nonnegative random variable so that F_X(x) = 0 for all x < 0. For x ≥ 0,
  F_X(x) = ∫_0^x f_X(y) dy = ∫_0^x (y/4) e^(−y/2) dy
         = [−(y/2) e^(−y/2)]_0^x + ∫_0^x (1/2) e^(−y/2) dy
         = 1 − (x/2) e^(−x/2) − e^(−x/2)
The complete expression for the CDF is
  F_X(x) = 1 − (x/2 + 1) e^(−x/2) (x ≥ 0); 0 otherwise.

[Figure: plot of F_X(x) for 0 ≤ x ≤ 15.]

(3) From the CDF F_X(x),
  P[0 ≤ X ≤ 4] = F_X(4) − F_X(0) = 1 − 3e^(−2).
(4) Similarly,
  P[−2 ≤ X ≤ 2] = F_X(2) − F_X(−2) = 1 − 2e^(−1).

Quiz 3.3
The PDF of Y is
  f_Y(y) = 3y^2/2 (−1 ≤ y ≤ 1); 0 otherwise.

[Figure: plot of f_Y(y) for −2 ≤ y ≤ 2.]

(1) The expected value of Y is
  E[Y] = ∫_(−∞)^(∞) y f_Y(y) dy = ∫_(−1)^(1) (3/2)y^3 dy = [(3/8)y^4]_(−1)^(1) = 0.
(2) Note that the above calculation wasn't really necessary because E[Y] = 0 whenever the PDF f_Y(y) is an even function (i.e., f_Y(y) = f_Y(−y)).
The second moment of Y is
  E[Y^2] = ∫_(−∞)^(∞) y^2 f_Y(y) dy = ∫_(−1)^(1) (3/2)y^4 dy = [(3/10)y^5]_(−1)^(1) = 3/5.

(3) The variance of Y is
  Var[Y] = E[Y^2] − (E[Y])^2 = 3/5.
(4) The standard deviation of Y is σ_Y = sqrt(Var[Y]) = sqrt(3/5).

Quiz 3.4
(1) When X is an exponential (λ) random variable, E[X] = 1/λ and Var[X] = 1/λ^2. Since E[X] = 3 and Var[X] = 9, we must have λ = 1/3. The PDF of X is
  f_X(x) = (1/3)e^(−x/3) (x ≥ 0); 0 otherwise.
(2) We know X is a uniform (a, b) random variable. To find a and b, we apply Theorem 3.6 to write
  E[X] = (a + b)/2 = 3,    Var[X] = (b − a)^2/12 = 9.
This implies
  a + b = 6,    b − a = ±6√3.
The only valid solution with a < b is
  a = 3 − 3√3,    b = 3 + 3√3.
The complete expression for the PDF of X is
  f_X(x) = 1/(6√3) (3 − 3√3 ≤ x < 3 + 3√3); 0 otherwise.

Quiz 3.5
Each of the requested probabilities can be calculated using the Φ(z) function and Table 3.1 or Q(z) and Table 3.2. We start with the sketches.
(1) The PDFs of X and Y are shown below. The fact that Y has twice the standard deviation of X is reflected in the greater spread of f_Y(y). However, it is important to remember that as the standard deviation increases, the peak value of the Gaussian PDF goes down.

[Figure: the PDFs f_X(x) and f_Y(y), with f_X narrower and taller than f_Y.]

(2) Since X is Gaussian (0, 1),
  P[−1 < X ≤ 1] = F_X(1) − F_X(−1) = Φ(1) − Φ(−1) = 2Φ(1) − 1 = 0.6826.

(3) Since Y is Gaussian (0, 2),
  P[−1 < Y ≤ 1] = F_Y(1) − F_Y(−1) = Φ(1/σ_Y) − Φ(−1/σ_Y) = 2Φ(1/2) − 1 = 0.383.
(4) Again, since X is Gaussian (0, 1), P[X > 3.5] = Q(3.5) = 2.33 × 10^(−4).
(5) Since Y is Gaussian (0, 2), P[Y > 3.5] = Q(3.5/2) = Q(1.75) = 0.0401.

Quiz 3.6
The CDF of X is

[Figure: plot of F_X(x), rising linearly from 0 at x = −1 to 1/2 at x = 1, with a jump to 1 at x = 1.]

  F_X(x) = 0 (x < −1); (x + 1)/4 (−1 ≤ x < 1); 1 (x ≥ 1).
The following probabilities can be read directly from the CDF:
(1) P[X ≤ 1] = F_X(1) = 1.
(2) P[X < 1] = F_X(1−) = 1/2.
(3) P[X = 1] = F_X(1+) − F_X(1−) = 1 − 1/2 = 1/2.
(4) We find the PDF f_X(x) by taking the derivative of F_X(x). The resulting PDF is

[Figure: plot of f_X(x), constant at 1/4 on −1 ≤ x < 1 with an impulse of weight 1/2 at x = 1.]

  f_X(x) = 1/4 (−1 ≤ x < 1); (1/2)δ(x − 1) (x = 1); 0 otherwise.

Quiz 3.7

(1) Since X is always nonnegative, F_X(x) = 0 for x < 0. Also, F_X(x) = 1 for x ≥ 2 since it's always true that X ≤ 2. Lastly, for 0 ≤ x ≤ 2,
  F_X(x) = ∫_(−∞)^(x) f_X(y) dy = ∫_0^x (1 − y/2) dy = x − x^2/4.
The complete CDF of X is

[Figure: plot of F_X(x) for −1 ≤ x ≤ 3.]

  F_X(x) = 0 (x < 0); x − x^2/4 (0 ≤ x ≤ 2); 1 (x > 2).
(2) The probability that Y = 1 is
  P[Y = 1] = P[X ≥ 1] = 1 − F_X(1) = 1 − 3/4 = 1/4.
(3) Since X is nonnegative, Y is also nonnegative. Thus F_Y(y) = 0 for y < 0. Also, because Y ≤ 1, F_Y(y) = 1 for all y ≥ 1. Finally, for 0 < y < 1,
  F_Y(y) = P[Y ≤ y] = P[X ≤ y] = F_X(y).
Using the CDF F_X(x), the complete expression for the CDF of Y is

[Figure: plot of F_Y(y) for −1 ≤ y ≤ 3.]

  F_Y(y) = 0 (y < 0); y − y^2/4 (0 ≤ y < 1); 1 (y ≥ 1).
As expected, we see that the jump in F_Y(y) at y = 1 is exactly equal to P[Y = 1].
(4) By taking the derivative of F_Y(y), we obtain the PDF f_Y(y). Note that when y < 0 or y > 1, the PDF is zero.

[Figure: plot of f_Y(y), equal to 1 − y/2 on 0 ≤ y ≤ 1 with an impulse of weight 1/4 at y = 1.]

  f_Y(y) = 1 − y/2 + (1/4)δ(y − 1) (0 ≤ y ≤ 1); 0 otherwise.

Quiz 3.8
(1) P[Y ≤ 6] = ∫_(−∞)^(6) f_Y(y) dy = ∫_0^6 (1/10) dy = 0.6.

(2) From Definition 3.15, the conditional PDF of Y given Y ≤ 6 is
  f_(Y|Y≤6)(y) = f_Y(y)/P[Y ≤ 6] (y ≤ 6); 0 otherwise
             = 1/6 (0 ≤ y ≤ 6); 0 otherwise.
(3) The probability Y > 8 is
  P[Y > 8] = ∫_8^(10) (1/10) dy = 0.2.
(4) From Definition 3.15, the conditional PDF of Y given Y > 8 is
  f_(Y|Y>8)(y) = f_Y(y)/P[Y > 8] (y > 8); 0 otherwise
             = 1/2 (8 < y ≤ 10); 0 otherwise.
(5) From the conditional PDF f_(Y|Y≤6)(y), we can calculate the conditional expectation
  E[Y|Y ≤ 6] = ∫_(−∞)^(∞) y f_(Y|Y≤6)(y) dy = ∫_0^6 (y/6) dy = 3.
(6) From the conditional PDF f_(Y|Y>8)(y), we can calculate the conditional expectation
  E[Y|Y > 8] = ∫_(−∞)^(∞) y f_(Y|Y>8)(y) dy = ∫_8^(10) (y/2) dy = 9.

Quiz 3.9
A natural way to produce random variables with PDF f_(T|T>2)(t) is to generate samples of T with PDF f_T(t) and then to discard those samples which fail to satisfy the condition T > 2. Here is a MATLAB function that uses this method:

function t=t2rv(m)
i=0;lambda=1/3;
t=zeros(m,1);
while (i<m),
   x=exponentialrv(lambda,1);
   if (x>2)
      t(i+1)=x;
      i=i+1;
   end
end

A second method exploits the fact that if T is an exponential (λ) random variable, then T' = T + 2 has PDF f_(T')(t) = f_(T|T>2)(t). In this case the command

t=2.0+exponentialrv(1/3,m)

generates the vector t.
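As a rough check (our own snippet, not from the text), both methods should produce samples with the same conditional mean E[T|T > 2] = 2 + E[T] = 5:

% Compare the sample means of the two methods
t1=t2rv(1000);                   % discard method
t2=2.0+exponentialrv(1/3,1000);  % memoryless shift method
[mean(t1) mean(t2)]              % both should be near 5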

Quiz Solutions – Chapter 4

Quiz 4.1
Each value of the joint CDF can be found by considering the corresponding probability.
(1) F_(X,Y)(−∞, 2) = P[X ≤ −∞, Y ≤ 2] ≤ P[X ≤ −∞] = 0 since X cannot take on the value −∞.
(2) F_(X,Y)(∞, ∞) = P[X ≤ ∞, Y ≤ ∞] = 1. This result is given in Theorem 4.1.
(3) F_(X,Y)(∞, y) = P[X ≤ ∞, Y ≤ y] = P[Y ≤ y] = F_Y(y).
(4) F_(X,Y)(∞, −∞) = P[X ≤ ∞, Y ≤ −∞] = 0 since Y cannot take on the value −∞.

Quiz 4.2
From the joint PMF of Q and G given in the table, we can calculate the requested probabilities by summing the PMF over those values of Q and G that correspond to the event.
(1) The probability that Q = 0 is
  P[Q = 0] = P_(Q,G)(0, 0) + P_(Q,G)(0, 1) + P_(Q,G)(0, 2) + P_(Q,G)(0, 3)
           = 0.06 + 0.18 + 0.24 + 0.12 = 0.6
(2) The probability that Q = G is
  P[Q = G] = P_(Q,G)(0, 0) + P_(Q,G)(1, 1) = 0.06 + 0.12 = 0.18
(3) The probability that G > 1 is
  P[G > 1] = sum_(g=2)^(3) sum_(q=0)^(1) P_(Q,G)(q, g) = 0.24 + 0.12 + 0.16 + 0.08 = 0.6
(4) The probability that G > Q is
  P[G > Q] = sum_(q=0)^(1) sum_(g=q+1)^(3) P_(Q,G)(q, g) = 0.18 + 0.24 + 0.12 + 0.16 + 0.08 = 0.78

Quiz 4.3
By Theorem 4.3, the marginal PMF of H is
  P_H(h) = sum_(b=0,2,4) P_(H,B)(h, b)
For each value of h, this corresponds to calculating the row sum across the table of the joint PMF. Similarly, the marginal PMF of B is
  P_B(b) = sum_(h=−1)^(1) P_(H,B)(h, b)
For each value of b, this corresponds to the column sum down the table of the joint PMF. The easiest way to calculate these marginal PMFs is to simply sum each row and column:

  P_(H,B)(h,b)   b = 0   b = 2   b = 4   P_H(h)
  h = −1          0      0.4     0.2     0.6
  h = 0           0.1    0       0.1     0.2
  h = 1           0.1    0.1     0       0.2
  P_B(b)          0.2    0.5     0.3

Quiz 4.4
To find the constant c, we apply the requirement that the joint PDF integrate to 1. Specifically,
  ∫_0^2 ∫_0^1 cxy dx dy = c ∫_0^2 y [x^2/2]_0^1 dy = (c/2) [y^2/2]_0^2 = c
Thus c = 1. To calculate P[A], we write
  P[A] = ∫∫_A f_(X,Y)(x, y) dx dy
To integrate over A, we convert to polar coordinates using the substitutions x = r cos θ, y = r sin θ and dx dy = r dr dθ, yielding

[Figure: the quarter-disk region A of radius 1 in the corner of the unit square.]

  P[A] = ∫_0^(π/2) ∫_0^1 r^2 sin θ cos θ r dr dθ
       = (∫_0^1 r^3 dr) (∫_0^(π/2) sin θ cos θ dθ)
       = [r^4/4]_0^1 [sin^2 θ / 2]_0^(π/2) = 1/8

Quiz 4.5
By Theorem 4.8, the marginal PDF of X is
  f_X(x) = ∫_(−∞)^(∞) f_(X,Y)(x, y) dy
For x < 0 or x > 1, f_X(x) = 0. For 0 ≤ x ≤ 1,
  f_X(x) = (6/5) ∫_0^1 (x + y^2) dy = (6/5) [xy + y^3/3]_(y=0)^(y=1) = (6/5)(x + 1/3) = (6x + 2)/5
The complete expression for the PDF of X is
  f_X(x) = (6x + 2)/5 (0 ≤ x ≤ 1); 0 otherwise.
By the same method we obtain the marginal PDF for Y. For 0 ≤ y ≤ 1,
  f_Y(y) = ∫_(−∞)^(∞) f_(X,Y)(x, y) dx = (6/5) ∫_0^1 (x + y^2) dx
         = (6/5) [x^2/2 + xy^2]_(x=0)^(x=1) = (6/5)(1/2 + y^2) = (3 + 6y^2)/5
Since f_Y(y) = 0 for y < 0 or y > 1, the complete expression for the PDF of Y is
  f_Y(y) = (3 + 6y^2)/5 (0 ≤ y ≤ 1); 0 otherwise.

Quiz 4.6
(A) The time required for the transfer is T = L/B. For each pair of values of L and B, we can calculate the time T needed for the transfer. We can write these down on the table for the joint PMF of L and B as follows:

  P_(L,B)(l,b)     b = 14,400     b = 21,600     b = 28,800
  l = 518,400      0.20 (T=36)    0.10 (T=24)    0.05 (T=18)
  l = 2,592,000    0.05 (T=180)   0.10 (T=120)   0.20 (T=90)
  l = 7,776,000    0.00 (T=540)   0.10 (T=360)   0.20 (T=270)

From the table, writing down the PMF of T is straightforward:
  P_T(t) = 0.05 (t = 18); 0.1 (t = 24); 0.2 (t = 36, 90); 0.1 (t = 120); 0.05 (t = 180); 0.2 (t = 270); 0.1 (t = 360); 0 otherwise.

(B) First, we observe that since 0 ≤ X ≤ 1 and 0 ≤ Y ≤ 1, W = XY satisfies 0 ≤ W ≤ 1. Thus F_W(0) = 0 and F_W(1) = 1. For 0 < w < 1, we calculate the CDF F_W(w) = P[W ≤ w]. As shown below, integrating over the region W ≤ w is fairly complex. The calculus is simpler if we integrate over the region XY > w. Specifically,

[Figure: the unit square with the region XY > w lying above the hyperbola XY = w.]

  F_W(w) = 1 − P[XY > w]
         = 1 − ∫_w^1 ∫_(w/x)^1 dy dx
         = 1 − ∫_w^1 (1 − w/x) dx
         = 1 − [x − w ln x]_(x=w)^(x=1)
         = 1 − (1 − w + w ln w) = w − w ln w
The complete expression for the CDF is
  F_W(w) = 0 (w < 0); w − w ln w (0 ≤ w ≤ 1); 1 (w > 1).
By taking the derivative of the CDF, we find the PDF is
  f_W(w) = −ln w (0 ≤ w ≤ 1); 0 otherwise.

Quiz 4.7
(A) It is helpful to first make a table that includes the marginal PMFs:

  P_(L,T)(l,t)   t = 40   t = 60   P_L(l)
  l = 1          0.15     0.1      0.25
  l = 2          0.3      0.2      0.5
  l = 3          0.15     0.1      0.25
  P_T(t)         0.6      0.4

(1) The expected value of L is
  E[L] = 1(0.25) + 2(0.5) + 3(0.25) = 2.
Since the second moment of L is
  E[L^2] = 1^2(0.25) + 2^2(0.5) + 3^2(0.25) = 4.5,
the variance of L is
  Var[L] = E[L^2] − (E[L])^2 = 4.5 − 4 = 0.5.

(2) The expected value of T is
  E[T] = 40(0.6) + 60(0.4) = 48.
The second moment of T is
  E[T^2] = 40^2(0.6) + 60^2(0.4) = 2400.
Thus Var[T] = E[T^2] − (E[T])^2 = 2400 − 48^2 = 96.
(3) The correlation is
  E[LT] = sum_(l=1)^(3) sum_(t=40,60) lt P_(L,T)(l, t)
        = 1(40)(0.15) + 2(40)(0.3) + 3(40)(0.15)
          + 1(60)(0.1) + 2(60)(0.2) + 3(60)(0.1) = 96
(4) From Theorem 4.16(a), the covariance of L and T is
  Cov[L, T] = E[LT] − E[L]E[T] = 96 − 2(48) = 0
(5) Since Cov[L, T] = 0, the correlation coefficient is ρ_(L,T) = 0.

(B) As in the discrete case, the calculations become easier if we first calculate the marginal PDFs f_X(x) and f_Y(y). For 0 ≤ x ≤ 1,
  f_X(x) = ∫_(−∞)^(∞) f_(X,Y)(x, y) dy = ∫_0^2 xy dy = [x y^2/2]_(y=0)^(y=2) = 2x
Similarly, for 0 ≤ y ≤ 2,
  f_Y(y) = ∫_(−∞)^(∞) f_(X,Y)(x, y) dx = ∫_0^1 xy dx = [x^2 y/2]_(x=0)^(x=1) = y/2
The complete expressions for the marginal PDFs are
  f_X(x) = 2x (0 ≤ x ≤ 1); 0 otherwise.
  f_Y(y) = y/2 (0 ≤ y ≤ 2); 0 otherwise.
From the marginal PDFs, it is straightforward to calculate the various expectations.

(1) The first and second moments of X are
  E[X] = ∫_(−∞)^(∞) x f_X(x) dx = ∫_0^1 2x^2 dx = 2/3
  E[X^2] = ∫_(−∞)^(∞) x^2 f_X(x) dx = ∫_0^1 2x^3 dx = 1/2
The variance of X is Var[X] = E[X^2] − (E[X])^2 = 1/18.
(2) The first and second moments of Y are
  E[Y] = ∫_(−∞)^(∞) y f_Y(y) dy = ∫_0^2 (1/2)y^2 dy = 4/3
  E[Y^2] = ∫_(−∞)^(∞) y^2 f_Y(y) dy = ∫_0^2 (1/2)y^3 dy = 2
The variance of Y is Var[Y] = E[Y^2] − (E[Y])^2 = 2 − 16/9 = 2/9.
(3) The correlation of X and Y is
  E[XY] = ∫∫ xy f_(X,Y)(x, y) dx dy = ∫_0^1 ∫_0^2 x^2 y^2 dy dx
        = [x^3/3]_0^1 [y^3/3]_0^2 = 8/9
(4) The covariance of X and Y is
  Cov[X, Y] = E[XY] − E[X]E[Y] = 8/9 − (2/3)(4/3) = 0.
(5) Since Cov[X, Y] = 0, the correlation coefficient is ρ_(X,Y) = 0.

Quiz 4.8
(A) Since the event V > 80 occurs only for the pairs (L, T) = (2, 60), (L, T) = (3, 40) and (L, T) = (3, 60),
  P[A] = P[V > 80] = P_(L,T)(2, 60) + P_(L,T)(3, 40) + P_(L,T)(3, 60) = 0.45
By Definition 4.9, the conditional PMF of L and T given A is
  P_(L,T|A)(l, t) = P_(L,T)(l, t)/P[A] (lt > 80); 0 otherwise

AR.T |A (l. 400 9 3 9 It follows that Var [V |A] = E V 2 |A − (E [V |A])2 = 622 2 9 (7) (B) For continuous random variables X and Y . us (United States) Zip Code:71901 . y) ∈ B 0 otherwise K x y 40 ≤ y ≤ 60. we ﬁrst calculate the probability of the conditioning event. y) = = f X. y) d x d y = = = 60 40 60 40 60 3 80/y xy dx dy 4000 x2 2 3 (8) dy (9) (10) (11) y 4000 80/y 9 3200 y − 2 y 40 4000 2 9 4 3 = − ln ≈ 0. t) (5) (6) 4 1 2 = (2 · 60)2 + (3 · 40)2 + (3 · 60)2 = 18. t) t = 40 t = 60 l=1 0 0 l=2 0 4/9 1/3 2/9 l=3 The conditional expectation of V can be found from the conditional PMF. t) (3) (4) 1 2 1 4 = (2 · 60) + (3 · 40) + (3 · 60) = 133 9 3 9 3 For the conditional variance Var[V |A].com Phone:5017621195 We can represent this conditional PMF in the following table: PL .Name:joey iwatsuru Email:joeyiwat@yahoo.Y |B (x. y) /P [B] (x. hot springs.Y (x. 80/y ≤ x ≤ 3 0 otherwise 27 (12) (13) Address:104 pine meadows loop.Y (x.T |A (l. E [V |A] = l t lt PL . we ﬁrst ﬁnd the conditional second moment E V 2 |A = l t (lt)2 PL .801 8 5 2 dy The conditional PDF of X and Y is f X.T |A (l. P [B] = B f X.

b) = PB|A (b|a)PA (a). b) b=0 b=1 a=0 PB|A (0|0)PA (0) PB|A (1|0)PA (0) PB|A (0|2)PA (2) PB|A (1|2)PA (2) a=2 28 Address:104 pine meadows loop.78 −∞ −∞ 60 3 40 (x y)2 f X. AR. Incorporating the information from the given conditional PMFs can be confusing.B (a. we can note that A has range S A = {0. us (United States) Zip Code:71901 .30 Quiz 4. y) d x d y K x 2 y2 d x d y y2 x 3 x=3 x=80/y (14) (15) = (K /3) = (K /3) 80/y 60 40 60 40 dy (16) (17) (18) 27y 2 − 803 /y dy 60 40 = (K /3) 9y 3 − 803 ln y The conditional second moment of K given B is E W 2 |B = = ∞ ∞ ≈ 120. hot springs.Y |B (x.9 (24) (A) (1) The joint PMF of A and B can be found from the marginal and conditional PMFs via PA.Name:joey iwatsuru Email:joeyiwat@yahoo. 116. 2} and B has range S B = {0. however. Consequently. y) d x d y K x 3 y3 d x d y y3 x 4 x=3 x=80/y (19) (20) = (K /4) 80/y 60 40 60 40 dy (21) (22) ≈ 16.10 (23) = (K /4) 81y 3 − 804 /y dy 60 40 = (K /4) (81/4)y 4 − 804 ln y It follows that the conditional variance of W given B is Var [W |B] = E W 2 |B − (E [W |B])2 ≈ 1528.Y |B (x. The general form of the table is PA. A table of the joint PMF will include all four possible combinations of A and B.B (a.com Phone:5017621195 where K = (4000P[B])−1 . 1}. The conditional expectation of W given event B is E [W |B] = = ∞ ∞ −∞ −∞ 60 3 40 x y f X.

b).3/0.3 a=2 (2) Given the conditional PMF PB|A (b|2).5 (1) (3) From the joint PMF PA.32/0.B (a.4) (0.Name:joey iwatsuru Email:joeyiwat@yahoo. b) b = 0 b = 1 a=0 0.6) (0. f Y |X (y|1/2) = 29 8y 0 ≤ y ≤ 1/2 0 otherwise (8) 6y 0 ≤ y ≤ x.08 0. AR.B (a.62 a = 2 (2) ⎩ PB (0) 0 otherwise ⎧ ⎨ 16/31 a = 0 = 15/31 a = 2 (3) ⎩ 0 otherwise (4) We can calculate the conditional variance Var[A|B = 0] using the conditional PMF PA|B (a|0). First we calculate the conditional expected value E [A|B = 0] = a a PA|B (a|0) = 0(16/31) + 2(15/31) = 30/31 (4) The conditional second moment is E A2 |B = 0 = a a 2 PA|B (a|0) = 02 (16/31) + 22 (15/31) = 60/31 (5) The conditional variance is then Var[A|B = 0] = E A2 |B = 0 − (E [A|B = 0])2 = (B) (1) The joint PDF of X and Y is f X.6) a=2 or PA. us (United States) Zip Code:71901 .8)(0. b) a=0 (0. y) = f Y |X (y|x) f X (x) = (2) From the given conditional PDF f Y |X (y|x). 0) ⎨ PA|B (a|0) = = 0.5) = 0. 0 ≤ x ≤ 1 0 otherwise (7) 960 961 (6) Address:104 pine meadows loop. we have b=0 b=1 PA.62 a = 0 PA.B (a.2)(0.4) (0.5) + (1)(0. it is easy to calculate the conditional expectation 1 E [B|A = 2] = b=0 b PB|A (b|2) = (0)(0.B (a.5)(0.3 0. hot springs. we can calculate the the conditional PMF ⎧ 0.com Phone:5017621195 Substituting values from PB|A (b|a) and PA (a).Y (x.32 0.5)(0.

Name:joey iwatsuru Email:joeyiwat@yahoo. for 1/2 ≤ x ≤ 1. y such that PX. there are no obvious pairs q.40 q=1 PG (g) 0.16 0.Y (x. PQ.G (q. To ﬁnd f Y (1/2).18 0. b) PDF.com Phone:5017621195 (3) The conditional PDF of Y given X = 1/2 is f X |Y (x|1/2) = f X.X 2 (x1 . AR.01. (B) (1) Since X 1 and X 2 are independent. In this case. y) = 0. g that fail the independence requirement.Y (x.40 0. g) g = 0 g = 1 g = 2 g = 3 PQ (q) q=0 0. Note that whenever PX. Hence Q and G are independent. us (United States) Zip Code:71901 . by the deﬁnition of the uniform (a. 1) = 0 = PX (0) PY (1) (1) (1 − 1/2)2 1 = 12 48 (11) Since we have found a pair x.Y (0.06 0. hot springs.G (q. we see that given Y = 1/2.1/2 ( ) d x = 1 1/2 6(1/2) d x = 3/2 (9) (4) From the pervious part. 1/2) 6(1/2) =2 = f Y (1/2) 3/2 (10) ∞ −∞ f X. PX. y) = PX (x)PY (y). f X 1 . the conditional PDF of X is uniform (1/2. (2) For random variables Q and G from Quiz 4. g) in Quiz 4.09 and PX (0) = 0.12 0.20 Careful study of the table will verify that PQ. independence requires that either PX (x) = 0 or PY (y) = 0. Var [X |Y = 1/2] = Quiz 4. f Y (1/2) = Thus. f X |Y (x|1/2) = f X. However. we integrate the joint PDF. it is not obvious whether they are independent.04 0.10 (A) (1) For random variables X and Y from Example 4. 1). we can conclude that X and Y are dependent. 0 ≤ x2 ≤ 2 0 otherwise 30 (2) (3) Address:104 pine meadows loop.Y (x.G (q.1.Y (x.30 0. Unlike X and Y in part (a). x2 ) = f X 1 (x1 ) f X 2 (x2 ) = (1 − x1 /2)(1 − x2 /2) 0 ≤ x1 ≤ 2. Thus.10 0.2.24 0.2.08 0. we observe that PY (1) = 0. 1/2)/ f Y (1/2). g) = PQ (q)PG (g) for every pair q.60 0. g. we calculate the marginal PMFs from the table of the joint PMF PQ.12 0.

17. we see that E[X |Y = 2] = 1 and Var[X |Y = 2] = 3/4. y) = √ 3π 2 (3) µ2 = µY = 0.11 This problem just requires identifying the various terms in Deﬁnition 4. Speciﬁcally. f X. us (United States) Zip Code:71901 . (2) (1) Applying these facts to Deﬁnition 4.29. P [Z ≤ z] = P [X 1 ≤ z. σ2 = σY = 1. The CDF of Z = max(X 1 . X 2 ) is found by observing that Z ≤ z iff X 1 ≤ z and X 2 ≤ z. AR. we need to ﬁnd the CDF of each X i . µ1 = µ X = 0. (1) (2) By Theorem 4.30. X 2 ≤ z] = P [X 1 ≤ z] P [X 2 ≤ z] = [FX (z)]2 (4) (5) To complete the problem. From the PDF f X (x). That is.17 and Theorem 4. the CDF is ⎧ x <0 ⎨ 0 x 2 /4 0 ≤ x ≤ 2 FX (x) = f X (y) dy = (6) x−x ⎩ −∞ 1 x >2 Thus for 0 ≤ z ≤ 2.Y (x. hot springs. and that σ1 = σ X = 1.Name:joey iwatsuru Email:joeyiwat@yahoo. f X |Y (x|2) = √ 3π/2 (5) 31 Address:104 pine meadows loop. ˜ (4) When Y = y = 2. The conditional PDF of X given Y = 2 is simply the Gaussian PDF 1 2 e−2(x−1) /3 . FZ (z) = (z − z 2 /4)2 (7) The complete expression for the CDF of Z is ⎧ z<0 ⎨ 0 2 /4)2 0 ≤ z ≤ 2 FZ (z) = (z − z ⎩ 1 z>1 (8) Quiz 4. from the problem statement. the conditional expected value and standard deviation of X given Y = y are 2 E [X |Y = y] = y/2 σ X = σ1 (1 − ρ 2 ) = 3/4. we know that ρ = 1/2. we have 1 2 2 e−2(x −x y+y )/3 .com Phone:5017621195 (2) Let FX (x) denote the CDF of both X 1 and X 2 .

28.Name:joey iwatsuru Email:joeyiwat@yahoo.y’]. 32 Address:104 pine meadows loop.12 One straightforward method is to follow the approach of Example 4. .px.1). we use an alternate approach. . xy=[x’. x=finiterv(sx. This observation prompts the following program: function xy=dtrianglerv(m) sx=[1. given X = x. 2. PX (x) = 1/4 x = 1.com Phone:5017621195 Quiz 4.2. we can generate a sample value of Y with a discrete uniform (1.4]. us (United States) Zip Code:71901 . 4) PMF.m). and an independent uniform (0.25*ones(4. x) PMF via Y = xU . y=ceil(x. x) PMF. 1) random variable U . . hot springs. Also.*rand(m. Instead. That is. px=0. . x 0 otherwise (1) Given X = x. Y has a discrete uniform (1. 0 otherwise. First we observe that X has the discrete uniform (1.1)). 4. PY |X (y|x) = 1/x y = 1. 3.3. AR.

X 2 (x1 . X 2 − X 1 = y2 . (1) (2) (3) x2 x2 0 x3 x1 In particular. y3 ∈ {1. x3 ) = 0 unless 0 ≤ x 1 ≤ 33 Address:104 pine meadows loop. y2 . Thus. X 2 = y2 + y1 . . Speciﬁcally.X 3 (x1 . for y1 . x3 ) = f X 1 .X 3 (x2 . y3 ∈ {1. Y1 = X 1 . x3 ) = ∞ −∞ ∞ −∞ ∞ −∞ f X (x) d x3 = f X (x) d x1 = f X (x) d x2 = 1 6 d x3 = 6(1 − x2 ). PY (y) = P [Y1 = y1 . P [C] = 0 1/2 y2 1/2 y4 dy2 0 1/2 dy1 0 dy4 0 1/2 4dy3 = 1/4. 2. AR. Y2 = X 2 − X 1 and Y3 = X 3 − X 2 . . hot springs. f X 2 . x2 ) = f X 2 .Name:joey iwatsuru Email:joeyiwat@yahoo. y2 . we have f X 1 . . Within these constraints. Y2 = y2 . the complete expression for the joint PMF of Y is PY (y) = (1 − p) p a y y1 . (1) (2) =4 0 y2 dy2 0 y4 dy4 Quiz 5.}.3 First we note that each marginal PDF is nonzero only if any subset of the xi obeys the ordering contraints 0 ≤ x 1 ≤ x2 ≤ x3 ≤ 1. X 3 = y3 + y2 + y1 ] = (1 − p)3 p y1 +y2 +y3 (1) (2) (3) (4) By deﬁning the vector a = 1 1 1 .X 3 (x2 . Y3 = y3 ] = P [X 1 = y1 . x2 ) = 0 unless 0 ≤ x 1 ≤ x2 ≤ 1. x3 ) = 0 unless 0 ≤ x2 ≤ x3 ≤ 1. 6 d x2 = 6(x3 − x1 ). each Yi must be a strictly positive integer. 2. and that f X 1 . . us (United States) Zip Code:71901 . .1 We ﬁnd P[C] by integrating the joint PDF over the region of interest. we must keep in mind that f X 1 .X 3 (x1 .com Phone:5017621195 Quiz Solutions – Chapter 5 Quiz 5.2 By deﬁnition of A. Since 0 < X 1 < X 2 < X 3 .} 0 otherwise (5) Quiz 5. X 3 − X 2 = y3 ] = P [X 1 = y1 . .X 2 (x1 . 6 d x1 = 6x2 .

The complete expressions are f X 1 .X 2 (x1 . the components have dependencies as a result of the ordering constraints Y1 ≤ Y2 and Y3 ≤ Y4 .X 2 (x1 . x3 ) = f X 1 . x3 ) d x3 = f X 2 . us (United States) Zip Code:71901 .4 In the PDF f Y (y).W (v. hot springs.X 3 (x2 . When 0 ≤ xi ≤ 1 for each xi . Y2 W= Y3 . AR.com Phone:5017621195 x3 ≤ 1. We can separate these constraints by creating the vectors V= The joint PDF of V and W is f V. 0 ≤ w1 ≤ w2 ≤ 1 0 otherwise (2) Y1 . x2 ) d x2 = f X 2 . x2 ) = f X 2 .X 3 (x2 . x3 ) d x2 = 1 x1 1 6(1 − x2 ) d x2 = 3(1 − x1 )2 6x2 d x3 = 6x2 (1 − x2 ) 2 6x2 d x2 = 3x3 (7) (8) (9) x2 x3 0 The complete expressions are f X 1 (x1 ) = f X 2 (x2 ) = f X 3 (x3 ) = 3(1 − x1 )2 0 ≤ x1 ≤ 1 0 otherwise 6x2 (1 − x2 ) 0 ≤ x2 ≤ 1 0 otherwise 2 3x3 0 ≤ x3 ≤ 1 0 otherwise (10) (11) (12) Quiz 5. x3 ) = 6(1 − x2 ) 0 ≤ x1 ≤ x2 ≤ 1 0 otherwise 6x2 0 ≤ x2 ≤ x3 ≤ 1 0 otherwise 6(x3 − x1 ) 0 ≤ x1 ≤ x3 ≤ 1 0 otherwise (4) (5) (6) Now we can ﬁnd the marginal PDFs.Name:joey iwatsuru Email:joeyiwat@yahoo.X 3 (x1 . w) = 4 0 ≤ v1 ≤ v2 ≤ 1.X 3 (x2 . f X 1 (x1 ) = f X 2 (x2 ) = f X 3 (x3 ) = ∞ −∞ ∞ −∞ ∞ −∞ f X 1 . Y4 (1) 34 Address:104 pine meadows loop.

W (v. however it is simpler to just start from ﬁrst principles and observe that X 1 is the number of occurrences of L in ﬁve independent tests.1) random variable. p2 = 0. 0.5 (A) Referring to Theorem 1. . each test is a subexperiment with three possible outcomes: L. 5} ⎩ 0 otherwise We can ﬁnd the marginal PMF for each X i from the joint PMF PX (x).6)x2 (0. f W (w) = = 4(1 − w1 ) dw1 = 2 f V. us (United States) Zip Code:71901 . For 0 ≤ v1 ≤ v2 ≤ 1. w) = f V (v) f W (w). 0. 0 otherwise f W (w) = 2 0 ≤ w 1 ≤ w2 ≤ 1 0 otherwise (8) It is easy to verify that f V. 1. PX (x) = (1) x1 .x3 (0. That is. 1. w) dw1 dw2 1 w1 1 0 (3) (4) (5) 4 dw2 dw1 = Similarly. . If we view each test as a trial with success probability P[L] = 0. w) dv1 dv2 1 0 1 v1 (6) (7) 4 dv2 dv1 = 2 It follows that V and W have PDFs f V (v) = 2 0 ≤ v1 ≤ v2 ≤ 1 . In ﬁve trials. PX i (x) = pix (1 − pi )5−x x = 0. 0. .3)x1 (0.com Phone:5017621195 We must verify that V and W are independent. . A and R. we see that X 1 is a binomial (n.3. AR.6 and p3 = 0. Quiz 5. x2 . conﬁrming that V and W are independent vectors. X 2 is a binomial (5.1)x3 x1 + x2 + x3 = 5. . for p1 = 0. p) = (5. the vector X = X 1 X 2 X 3 indicating the number of outcomes of each subexperiment has the multinomial PMF ⎧ 5 ⎨ x1 .1.x2 .3) random variable. f V (v) = = 0 1 f V. 5 0 otherwise 35 5 x (2) Address:104 pine meadows loop.6) random variable and X 3 is a binomial (5.Name:joey iwatsuru Email:joeyiwat@yahoo. .W (v. for 0 ≤ w1 ≤ w2 ≤ 1.19. Similarly. . . hot springs. x3 ∈ {0.3.W (v.

we must use Theorem 5.6)(0. for w = 3.3(0.32 (0. 2.486 PW (4) = PX 1 (4) + PX 2 (4) + PX 3 (4) = 0. we use 3x(1 − x)2 d x = 1/4. To do so.6 to ﬁnd the PMF of W .1458 = (3) (4) (5) In addition. and w = 5. 2) + PX (2. (1) (2) (3) E [X 2 ] = 0 1 E [X 3 ] = 0 1 To ﬁnd the correlation matrix R X . since X 1 + X 2 + X 3 = 5 and since each X i is non-negative.6)2 (0. the constraints on y resulting from the constraints 0 ≤ X 1 ≤ X 2 ≤ X 3 can be much more complicated. we can apply Theorem 5.3: E [X 1 ] = 0 1 ∞ −∞ x f X i (x) d x of µ X . 2) + PX (2. 1) 5![0. 2.32 (0. 3x 3 d x = 3/4. f 3 X 2 2 2 2 (1/8)e−(y3 −4)/2 4 ≤ y1 ≤ y2 ≤ y3 = 0 otherwise (9) (10) (6) (7) (8) Note that for other matrices A.10 to write f Y (y) = y1 − 4 y2 − 4 y3 − 4 1 . PW (0) = PW (1) = 0. or X 3 = w occurs. AR. we need to ﬁnd E[X i X j ] for all i and j.6 We start by ﬁnding the components E[X i ] = the marginal PDFs f X i (x) found in Quiz 5. w = 4. Hence.6)2 (0.1)] 2!2!1! = 0. X 2 = w.0802 (B) Since each Yi = 2X i + 4. Thus. Furthermore.1)2 + 0. the event W = w occurs if and only if one of the mutually exclusive events X 1 = w. Quiz 5.com Phone:5017621195 From the marginal PMFs. PW (3) = PX 1 (3) + PX 2 (3) + PX 3 (3) = 0. 6x 2 (1 − x) d x = 1/2.Name:joey iwatsuru Email:joeyiwat@yahoo. we see that X 1 . X 2 and X 3 are not independent. In particular. . PW (2) = PX (1.288 PW (5) = PX 1 (5) + PX 2 (5) + PX 3 (5) = 0. hot springs.1)2 + 0. We start with 36 Address:104 pine meadows loop. us (United States) Zip Code:71901 . 1.

6x 3 (1 − x) d x = 3/10. 1 x2 1 0 2 6x2 x3 d x3 d x2 x1 x2 f X 1 .X 2 (x1 . us (United States) Zip Code:71901 . x2 ) . (4) (5) (6) 2 E X2 = 2 E X3 = 1 0 1 0 Using marginal PDFs from Quiz 5. Summarizing the results. d x1 d x2 d x1 (7) (8) (9) (10) (11) (12) (13) (14) 6x1 x2 (1 − x2 ) d x2 E [X 2 X 3 ] = 0 1 = E [X 1 X 3 ] = 0 2 4 [3x2 − 3x2 ] d x2 = 2/5 1 x1 1 6x1 x3 (x3 − x1 ) d x3 d x1 . X has correlation matrix ⎡ ⎤ 1/10 3/20 1/5 R X = ⎣3/20 3/10 2/5⎦ .Name:joey iwatsuru Email:joeyiwat@yahoo. 1/5 2/5 3/5 Vector X has covariance matrix C X = R X − E [X] E [X] ⎡ ⎤ ⎡ ⎤ 1/10 3/20 1/5 1/4 ⎣3/20 3/10 2/5⎦ − ⎣1/2⎦ = 1/5 2/5 3/5 3/4 ⎡ ⎤ ⎡ 1/10 3/20 1/5 1/16 ⎣3/20 3/10 2/5⎦ − ⎣ 1/8 = 1/5 2/5 3/5 3/16 37 (15) (16) 1/4 1/2 3/4 ⎤ ⎡ ⎤ 3 2 1 1/8 3/16 1 ⎣ 2 4 2⎦ .com Phone:5017621195 the second moments: E 2 X1 = 0 1 3x 2 (1 − x)2 d x = 1/10.3. AR. 3x 4 d x = 3/5. x3 =1 x3 =x1 = 0 1 3 2 2 (2x1 x3 − 3x1 x3 ) d x1 = 0 1 2 4 [2x1 − 3x1 + x1 ] d x1 = 1/5. 1/4 3/8 ⎦ = 80 1 2 3 3/8 9/16 (17) (18) Address:104 pine meadows loop. hot springs. the cross terms are E [X 1 X 2 ] = = = 0 ∞ ∞ −∞ −∞ 1 1 0 1 x1 3 4 [x1 − 3x1 + 2x1 ] d x1 = 3/20.

0 (1) It follows from Theorem 5.0000. hot springs.m: >> julytemps([70 75 80 85 90 95]) ans = 0./(1+abs(D1-D2)). Var[Y ] = ACT A .99999999922010 0. function p=julytemps(T).5000 0.99997155736872 0.m.00002844263128 0.e. i.50000000000000 0.0000 0. CT=36. Theorem 5.com Phone:5017621195 This problem shows that even for fairly simple joint PDFs. we observe that Y = AT where A = 1/31 1/31 · · · 1/31 . just a Gaussian random variable. CY=(A’)*CT*A. The expected value of Y is µY = µT = 80. 1 −1 b= 2 . [D1 D2]=ndgrid((1:31). Its just that the M ATLAB’s short format output.Name:joey iwatsuru Email:joeyiwat@yahoo. Quiz 5. Thus.1)/31.0221 0.18 that µ X = b and that C X = AA = 2 1 1 −1 2 1 5 1 = . or CT . the ﬁrst two lines generate the 31 × 31 covariance matrix CT. by Theorem 5. p=phi((T-80)/sqrt(CY)).9779 1. 1 −1 1 2 (2) Quiz 5.16 tells us that Y is a 1 dimensional Gaussian vector.0000 1. The ﬁnal step is to use the (·) function to calculate P[Y < T ]. Since T is a Gaussian random vector. Here is the output of julytemps. A=ones(31.02207383067604 Columns 5 through 6 0. rounds off those probabilities. The covariance matrix of Y is 1 × 1 and is just equal to Var[Y ]. Here is the long format output: >> format long >> julytemps([70 75 80 85 90 95]) ans = Columns 1 through 4 0.(1:31)). computing the covariance matrix by calculus can be a time consuming task.0000 Note that P[T ≤ 70] is not actually zero and that P[T ≤ 90] is not actually 1.0.7 We observe that X = AZ + b where A= 2 1 . invoked with the command format short. us (United States) Zip Code:71901 .8 First. AR. In julytemps..16. Next we calculate Var[Y ].97792616932396 38 Address:104 pine meadows loop.

We will see in Chapters 9 and 11 that Toeplitz covariance matrices are quite common. The function julytemps2 use the toeplitz to generate the correlation matrix CT .com Phone:5017621195 The ndgrid function is a useful to way calculate many covariance matrices. hot springs. . p=phi((T-80)/sqrt(CY)). c1 ⎦ . . 39 Address:104 pine meadows loop. CY=(A’)*CT*A.. 1 + |i − j| (1) If we write out the elements of the covariance matrix. AR. . A=ones(31. ⎣ .0. c=36.. ⎥ ⎢ . us (United States) Zip Code:71901 .Name:joey iwatsuru Email:joeyiwat@yahoo. ⎢ c1 c0 CT = ⎢ .. C X has a special structure. function p=julytemps2(T). c30 · · · c1 c0 (2) This covariance matrix is known as a symmetric Toeplitz matrix./(1+abs(0:30)). ⎥. jth element is CT (i. In fact. . in this problem. CT=toeplitz(c). ⎥ .1)/31.. However. . M ATLAB has a toeplitz function for generating them. the i. we see that ⎡ ⎤ c0 c1 · · · c30 . j) = c|i− j| = 36 .

we note that the ﬁrst two moments of K i are E [K i ] = (1 + 2 + 3 + 4)/4 = 2. . the random variables K 1 .5. By Theorem 6.com Phone:5017621195 Quiz Solutions – Chapter 6 Quiz 6. the variance of the sum equals the sum of the variances.2 Random variables X and Y have PDFs f X (x) = 3e−3x x ≥ 0 0 otherwise f Y (y) = 2e−2y y ≥ 0 0 otherwise (1) (6) (4) (2) (3) Since X and Y are nonnegative. . .5)2 = 1.5n (5) Since the rolls are independent. Hence.25 Since E[K i ] = 2. . First.1 Let K 1 . .3.5. . f W (w) = e−3w e y w 0 = 6 e−2w − e−3w (3) Since f W (w) = 0 for w < 0.5 Thus the variance of K i is Var[K i ] = E K i2 − (E [K i ])2 = 7. the PDF of W = X + Y is f W (w) = ∞ −∞ f X (w − y) f Y (y) dy = 6 0 w e−3(w−y) e−2y dy (2) Fortunately. .Name:joey iwatsuru Email:joeyiwat@yahoo. the expected value of Wn is E [Wn ] = E [K 1 ] + · · · + E [K n ] = n E [K i ] = 2. .25n Quiz 6. W = X + Y is nonnegative.5 E K i2 = (12 + 22 + 32 + 42 )/4 = 7. . K n denote a sequence of iid random variables each with PMF PK (k) = 1/4 k = 1. hot springs. . a conmplete expression for the PDF of W is f W (w) = 6e−2w 1 − e−w 0 w ≥ 0. . this integral is easy to evaluate. Var[Wn ] = Var[K 1 ] + · · · + Var[K n ] = 1. (4) 40 Address:104 pine meadows loop. For w > 0. K n are independent.5 − (2. That is. 4 0 otherwise (1) We can write Wn in the form of Wn = K 1 + · · · + K n . us (United States) Zip Code:71901 . by Theorem 6. AR. otherwise. .

AR. us (United States) Zip Code:71901 .2(es + 8e2s + 27e3s + 64e4s ) s=0 s=0 = 0.2 1 + es + e2s + e3s + e4s (1) We ﬁnd the moments by taking derivatives.Name:joey iwatsuru Email:joeyiwat@yahoo. Thus to ﬁnd the PDF of W . we need only ﬁnd the expected value and variance. hot springs.8 (4) (5) (6) (7) = 0.com Phone:5017621195 Quiz 6. we continue to take derivatives: E K2 = E K3 E K4 d 2 φ K (s) ds 2 d 3 φ K (s) = ds 3 d 4 φ K (s) = ds 4 = 0.2(es + 4e2s + 9e3s + 16e4s ) s=0 s=0 =6 = 20 = 70.2)esk = 0.3 The MGF of K is 4 φ K (s) = E es K == k=0 (0. The ﬁrst derivative of φ K (s) is d φ K (s) = 0.4 (A) Each K i has MGF φ K (s) = E es K i = es (1 − ens ) es + e2s + · · · + ens = n n(1 − es ) ems (1 − ens )m n m (1 − es )m (1) Since the sequence of K i is independent.2(es + 16e2s + 81e3s + 256e4s ) s=0 s=0 Quiz 6.2(1 + 2 + 3 + 4) = 2 s=0 (2) (3) To ﬁnd higher-order moments. Theorem 6.10 says that W is a Gaussian random variable. Since the expectation of the sum equals the sum of the expectations: E [W ] = α E [X 1 ] + α 2 E [X 2 ] + · · · + α n E [X n ] = 0 41 (3) Address:104 pine meadows loop. Theorem 6.8 says the MGF of J is φ J (s) = (φ K (s))m = (2) (B) Since the set of α j X j are independent Gaussian random variables.2(es + 2e2s + 3e3s + 4e4s ) ds Evaluating the derivative at s = 0 yields E [K ] = d φ K (s) ds = 0.

we can use Math Fact B. we see that R has the MGF of an exponential (1/5) random variable. 1−s φ N (s) = 1 s 5e . we can write the PDF of W as f W (w) = 1 2 2π σW e−w 2 /2σ 2 W (7) Quiz 6. 42 Address:104 pine meadows loop. R has MGF φ R (s) = φ N (ln φ X (s)) = Substituting the expression for φ X (s) yields φ R (s) = 1 5 1 5 1 5 φ X (s) 1 − 4 φ X (s) 5 (2) −s . us (United States) Zip Code:71901 . hot springs.12.6 to write Var[W ] = α 2 − α 2n+2 [1 + n(1 − α 2 )] (1 − α 2 )2 (6) (4) (5) 2 With E[W ] = 0 and σW = Var[W ].Name:joey iwatsuru Email:joeyiwat@yahoo. The corresponding PDF is f R (r ) = (1/5)e−r/5 r ≥ 0 0 otherwise (4) This quiz is an example of the general result that a geometric sum of exponential random variables is an exponential random variable. the variance of the sum equals the sum of the variances: Var[W ] = α 2 Var[X 1 ] + α 4 Var[X 2 ] + · · · + α 2n Var[X n ] = α 2 + 2(α 2 )2 + 3(α 2 )3 + · · · + n(α 2 )n Deﬁning q = α 2 . 1 − 4 es 5 (1) From Theorem 6.com Phone:5017621195 Since the α j X j are independent. each X i has MGF φ X (s) and random variable N has MGF φ N (s) where φ X (s) = 1 .1.5 (1) From Table 6.1. (3) (2) From Table 6. AR.

Name:joey iwatsuru Email:joeyiwat@yahoo. Var[A] = Var[X 1 ] + · · · + Var[X 12 ] = 12 Var[X ] = 144 Hence.25). (3) Using X i to denote the access time of block i.com Phone:5017621195 Quiz 6. we can write A = X 1 + X 2 + · · · + X 12 Since the expectation of the sum equals the sum of the expectations.9773 = 0. the standard deviation of A is σ A = 12 (5) To use the central limit theorem. (6) (7) (8) (9) (5) (4) (3) (6) Once again. we write P [A > 75] = 1 − P [A ≤ 75] 75 − E [A] A − E [A] ≤ =1− P σA σA 75 − 72 ≈1− 12 = 1 − 0.6 (1) The expected access time is E [X ] = ∞ −∞ x f X (x) d x = 0 12 x d x = 6 msec 12 (1) (2) The second moment of the access time is E X2 = ∞ −∞ x 2 f X (x) d x = 0 12 x2 d x = 48 12 (2) The variance of the access time is Var[X ] = E[X 2 ] − (E[X ])2 = 48 − 36 = 12. we use the central limit theorem and Table 3. us (United States) Zip Code:71901 .5987 = 0.1 to estimate P [A < 48] = P 48 − E [A] A − E [A] < σA σA 48 − 72 ≈ 12 = 1 − (2) = 1 − 0.0227 (10) (11) (12) 43 Address:104 pine meadows loop. E [A] = E [X 1 ] + · · · + E [X 12 ] = 12E [X ] = 72 msec (4) Since the X i are independent. hot springs.4013 Note that we used Table 3. AR.1 to look up (0.

we have (3) 30 − 36 3 = (2) − (−2) (2) (1) P [30 ≤ K 48 ≤ 42] ≈ 2 (2) − 1 = 0. (1) The expected number of voice calls out of 48 calls is E[K 48 ] = 48P[V ] = 36. us (United States) Zip Code:71901 . (3) Using the ordinary central limit theorem and Table 3.5 − 36 − 3 3 = 2 (2. we ﬁnd that W has expected value and variance E [W ] = 3/λ = 6 Var[W ] = 3/λ2 = 12 (2) (1) By the Central Limit Theorem. X 2 . AR. P [W > 20] = P √ W −6 20 − 6 > √ ≈ Q(7/ 3) = 2.7 Random variable K n has a binomial distribution for n trials and success probability P[V ] = 3/4. we found that the sum of three iid exponential (λ) random variables is an Erlang (n = 3.11.8 The train interarrival times X 1 .com Phone:5017621195 Quiz 6. λ) random variable. X 3 are iid exponential (λ) random variables.9545 (4) Since K 48 is a discrete random variable.66 × 10−5 √ 12 12 (3) 44 Address:104 pine meadows loop.5 − 36 30 − 0.1 yields P [30 ≤ K 48 ≤ 42] ≈ Recalling that (−x) = 1 − 42 − 36 − 3 (x). The arrival time of the third train is W = X 1 + X 2 + X 3. we can use the De Moivre-Laplace approximation to estimate P [30 ≤ K 48 ≤ 42] ≈ 42 + 0.16666) − 1 = 0.9687 (4) (5) Quiz 6. (1) In Theorem 6. hot springs. From Appendix A.Name:joey iwatsuru Email:joeyiwat@yahoo. (2) The variance of K 48 is Var[K 48 ] = 48P [V ] (1 − P [V ]) = 48(3/4)(1/4) = 9 Thus K 48 has standard deviation σ K 48 = 3.

*PY. [SX.m sx=0:100.’\itw’.0. it should be apparent that the finitepmf function is implementing the convolution of the two PMFs. the Central Limit Theorem approximation grossly underestimates the true probability.PY]=ndgrid(px.11 says that for any w > 0.PW.sy=0:100. Applying s = 7/20 into the Chernoff bound yields P [W > 20] ≤ e−20s (1 − 2s)3 = (10/3)3 e−7 = 0.com Phone:5017621195 (2) To use the Chernoff bound. pw=finitepmf(SW.sy). for λ = 1/2 and w = 20. we note that the MGF of W is φW (s) = The Chernoff bound states that P [W > 20] ≤ min e−20s φ X (s) = min s≥0 s≥0 λ λ−s 3 = 1 (1 − 2s)3 e−20s (1 − 2s)3 (4) (5) To minimize h(s) = e−20s /(1 − 2s)3 . Quiz 6. [PX. sw=unique(SW).py). hot springs.sy).9 One solution to this problem is to follow the approach of Example 6. A graph of the PMF PW (w) appears in Figure 2 With some thought. we set the derivative of h(s) to zero: −20(1 − 2s)3 e−20s + 6e−20s (1 − 2s)2 d h(s) = =0 ds (1 − 2s)6 (6) This implies 20(1 − 2s) = 6 or s = 7/20. py=duniformpmf(0. P [W > 20] = 1 − FW (20) = e−10 1 + 10 102 + 1! 2! = 61e−10 = 0. PW=PX.SY]=ndgrid(sx. px=binomialpmf(100. us (United States) Zip Code:71901 .Name:joey iwatsuru Email:joeyiwat@yahoo. By contrast.19: %unifbinom100. 3) random variable W satisﬁes 2 (λw)k e−λw FW (w) = 1 − (8) k! k=0 Equivalently.sx). AR. 45 Address:104 pine meadows loop.sw).5.’\itP_W(w)’).100. pmfplot(sw. SW=SX+SY.0338 s=7/20 (7) (3) Theorem 3. it is a valid bound.pw.0028 (9) (10) Although the Chernoff bound is relatively weak in that it overestimates the probability by roughly a factor of 12. the CDF of the Erlang (λ.

Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195

0.01 0.008 PW(w) 0.006 0.004 0.002 0 0 20 40 60 80 100 w 120 140 160 180 200

Figure 2: From Quiz 6.9, the PMF PW (w) of the independent sum of a binomial (100, 0.5) random variable and a discrete uniform (0, 100) random variable.

46

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195

**Quiz Solutions – Chapter 7
**

Quiz 7.1 An exponential random variable with expected value 1 also has variance 1. By Theorem 7.1, Mn (X ) has variance Var[Mn (X )] = 1/n. Hence, we need n = 100 samples. Quiz 7.2 The arrival time of the third elevator is W = X 1 + X 2 + X 3 . Since each X i is uniform (0, 30), (30 − 0)2 Var [X i ] = = 75. (1) E [X i ] = 15, 12 Thus E[W ] = 3E[X i ] = 45, and Var[W ] = 3 Var[X i ] = 225. (1) By the Markov inequality, P [W > 75] ≤ (2) By the Chebyshev inequality, P [W > 75] = P [W − E [W ] > 30] ≤ P [|W − E [W ]| > 30] ≤ 225 Var [W ] 1 = = 2 900 4 30 (3) (4) E [W ] 45 3 = = 75 75 5 (2)

Quiz 7.3 Deﬁne the random variable W = (X − µ X )2 . Observe that V100 (X ) = M100 (W ). By Theorem 7.6, the mean square error is E (M100 (W ) − µW )2 = Observe that µ X = 0 so that W = X 2 . Thus, µW = E X

2

Var[W ] 100

(1)

=

1 −1 1 −1

x 2 f X (x) d x = 1/3 x 4 f X (x) d x = 1/5

(2) (3)

E W2 = E X4 =

Therefore Var[W ] = E[W 2 ] − µ2 = 1/5 − (1/3)2 = 4/45 and the mean square error is W 4/4500 = 0.000889.

47

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

Name:joey iwatsuru Email:joeyiwat@yahoo.com Phone:5017621195

Quiz 7.4 Assuming the number n of samples is large, we can use a Gaussian approximation for Mn (X ). SinceE[X ] = p and Var[X ] = p(1 − p), we apply Theorem 7.13 which says that the interval estimate Mn (X ) − c ≤ p ≤ Mn (X ) + c (1) has conﬁdence coefﬁcient 1 − α where α =2−2 √ c n . p(1 − p)

(2)

We must ensure for every value of p that 1 − α ≥ 0.9 or α ≤ 0.1. Equivalently, we must have √ c n ≥ 0.95 (3) p(1 − p) √ for every value of p. Since (x) is an increasing function of x, we must satisfy c n ≥ 1.65 p(1 − p). Since p(1 − p) ≤ 1/4 for all p, we require that 1.65 0.41 c≥ √ = √ . 4 n n The 0.9 conﬁdence interval estimate of p is 0.41 0.41 Mn (X ) − √ ≤ p ≤ Mn (X ) + √ . n n (5) (4)

√ For the 0.99 conﬁdence interval, we have α ≤ 0.01, implying (c n/( p(1− p))) ≥ 0.995. √ This implies c n√ 2.58 p(1 − p). Since p(1 − p) ≤ 1/4 for all p, we require that ≥ c ≥ (0.25)(2.58)/ n. In this case, the 0.99 conﬁdence interval estimate is 0.645 0.645 Mn (X ) − √ ≤ p ≤ Mn (X ) + √ . n n Note that if M100 (X ) = 0.4, then the 0.99 conﬁdence interval estimate is 0.3355 ≤ p ≤ 0.4645. The interval is wide because the 0.99 conﬁdence is high. Quiz 7.5 Following the approach of bernoullitraces.m, we generate m = 1000 sample paths, each sample path having n = 100 Bernoulli traces. at time k, OK(k) counts the fraction of sample paths that have sample mean within one standard error of p. The program bernoullisample.m generates graphs the number of traces within one standard error as a function of the time, i.e. the number of trials in each trace. 48 (7) (6)

Address:104 pine meadows loop, hot springs, AR, us (United States) Zip Code:71901

’-s’). The unusual sawtooth pattern.0.2)/m.m).p).m*n). plot(1:n.9 0.68.m.m).2.n.5000. The following graph was generated by bernoullisample(100.6 0. x=reshape(bernoullirv(p. stderr=sqrt(p*(1-p)).8 0. the fraction of traces within one standard error approaches 2 (1) − 1 ≈ 0.5.Name:joey iwatsuru Email:joeyiwat@yahoo. is examined in Problem 7.7 0. MN=cumsum(x).com Phone:5017621195 function OK=bernoullisample(n. as m gets large.OK. stderrmat=stderr*ones(1.5): 1 0.4 0 10 20 30 40 50 60 70 80 90 100 As we would expect./sqrt((1:n)’). us (United States) Zip Code:71901 .5 0. OK=sum(abs(MN-p)<stderrmat. hot springs. nn=(1:n)’*ones(1. 49 Address:104 pine meadows loop.m). though perhaps unexpected./nn. AR.

we obtain α = P [X ≤ r ] = (1 − e−r )15 = 0. .7. if we observe X < 1. X 15 obeys FX (x) = P [X ≤ x] = P [X 1 ≤ x. . 50 Address:104 pine meadows loop. · · · . (4) Thus if we observe at least 214. . This implies that for x ≥ 0. That is. 1. FX (x) = FX i (x) 15 (2) = 1 − e−x 15 (3) To design a signiﬁcance test. the conditional PMFs of K are PK |H0 (k) = PK |H1 (k) = 104k e−10 k! 4 (4) (5) 0 106k e−10 k! 6 k = 0. . . Quiz 8. the CDF of the maximum of X 1 . 975. the ML hypothesis rule is k ∈ A0 if PK |H0 (k) ≥ PK |H1 (k) .01 It is straightforward to show that r = − ln 1 − (0. A reasonable choice is to reject the hypothesis if X is too small. hot springs. 976 photons. let R = {X ≤ r }. 1.6. ln 100 ∗ k ∈ A1 otherwise.com Phone:5017621195 Quiz Solutions – Chapter 8 Quiz 8.33. we must choose a rejection region for X . This rule simpliﬁes to 106 − 104 k ∈ A0 if k ≤ k = = 214.2 From the problem statement.33 Hence. otherwise k = 0. . then we reject the hypothesis. the MAP and ML tests are the same. . X 15 ≤ x] = [P [X i ≤ x]]15 . For a signiﬁcance level of α = 0. AR. each X i has PDF and CDF f X i (x) = e−x x ≥ 0 0 otherwise FX i (x) = 0 x <0 1 − e−x x ≥ 0 (1) Hence. . .01)1/15 = 1.Name:joey iwatsuru Email:joeyiwat@yahoo. us (United States) Zip Code:71901 . From Theorem 8. . (3) k ∈ A1 otherwise. X 2 ≤ x. otherwise (1) (2) 0 Since the two hypotheses are equally likely. then we accept hypothesis H1 .1 From the problem statement.01.

us (United States) Zip Code:71901 .FM2(:. FM5(:.1’.TT]=ndgrid(x. [XX.2). 51 Address:104 pine meadows loop.T). P[C|H0 ] = P[C|Hi ] for all i. sqdistroc. Here is the modiﬁed code: function FM=sqdistroc(v.2’. Since N1 and N2 are iid Gaussian (0.T) %square law distortion recvr %P(error) for m bits tested %transmit v volts or -v volts.2).0.. σ ) random variables.FM1(:.m.’--k’.m calls sqdistroc three times to generate a plot that compares the receiver performance for the three requested values of d. x= -v+randn(m.1. the probability 2 PERR = 1 − P [C] = 1 − E 2σ 2 (5) Quiz 8. AR. a symbol error occurs when si is transmitted but (X 1 .4 To generate the ROC. FM2=sqdistroc(v.Name:joey iwatsuru Email:joeyiwat@yahoo.m.2.m is essentially the same as sqdistor except the output is a matrix FM whose columns are the false alarm and miss probabilities.0.0.3) ylabel(’P_{MISS}’).3’.T(:)). Equivalently. otherwise 0 %FM = [P(FA) P(MISS)] x=(v+randn(m.m. xlabel(’P_{FA}’). Next. FM=[P10(:) P01(:)].T). ’\it d=0.1). The modiﬁed program.ˆ2)>TT).. %add N volts. FM1=sqdistroc(v.’:k’).. function FM=sqdistrocplot(v. . X 2 ) ∈ A j for some j = i. Given H0 . we have P[C] = 2( E/2σ 2 ). X 2 > 0|H0 ] = P E/2 + N1 > 0.. [XX.FM5(:. For a QPSK system. E/2 + N2 > 0 (1) Because of the symmetry of the signals.2). FM5=sqdistroc(v.d. This implies the probability of a correct decision is P[C] = P[C|H0 ].T(:)). the conditional probability of a correct decision is √ √ P [C|H0 ] = P [X 1 > 0.1)/m. hot springs..1)).1).1).1). loglog(FM1(:.1) %add d(v+N)ˆ2 distortion %receive 1 if x>T. it is easier to calculate the probability of a correct decision.’-k’. FM=[FM1 FM2 FM5]. P10=sum((XX+d*(XX.m.. P01=sum((XX+d*(XX.TT]=ndgrid(x.T). the program sqdistrocplot.m.T). we have √ √ P [C] = P [C|H0 ] = P E/2 + N1 > 0 P E/2 + N2 > 0 (2) √ 2 (3) = P N1 > − E/2 √ 2 − E/2 (4) = 1− σ Since (−x) = 1 − of error is (x).’\it d=0.. FM2(:. N is Gauss(0.ˆ2)< TT).3. legend(’\it d=0. the existing program sqdistor already calculates this miss probability PMISS = P01 and the false alarm probability PFA = P10 . .1)/m.com Phone:5017621195 Quiz 8.3 For the QPSK system.

sqdistrocplot(3.2 d=0. 10 0 10 −1 10 PMISS 10 10 −2 −3 −4 10 −5 d=0. generated the plot shown in Figure 3.100000. Figure 3: The receiver operating curve for the communications system of Quiz 8.Name:joey iwatsuru Email:joeyiwat@yahoo. us (United States) Zip Code:71901 .3 −5 10 10 −4 10 −3 10 PFA −2 10 −1 10 0 T=-3:0. 52 Address:104 pine meadows loop.1 d=0. AR.1:3. the commands T=-3:0. sqdistrocplot(3.com Phone:5017621195 To see the effect of d.T). hot springs.T).4 with squared distortion.100000.1:3.

hot springs. f X (x) = x 1 2(y + x) dy = y 2 + 2x y y=1 y=x = 1 + 2x − 3x 2 (4) (5) For 0 ≤ x ≤ 1. For 0 ≤ x ≤ 1.1 (1) First.Y (x.com Phone:5017621195 Quiz Solutions – Chapter 9 Quiz 9.Name:joey iwatsuru Email:joeyiwat@yahoo. we need the marginal PDF f X (x). AR. the conditional PDF of Y given X is f Y |X (y|x) = 2(y+x) 1+2x−3x 2 0 x ≤y≤1 otherwise (6) (4) The MMSE estimate of Y given X = x is y M (x) = E [Y |X = x] = ˆ x 1 2y 2 + 2x y dy 1 + 2x − 3x 2 y=1 y=x (7) (8) (9) 2y 3 /3 + x y 2 = 1 + 2x − 3x 2 = 2 + 3x − 5x 3 3 + 6x − 9x 2 53 Address:104 pine meadows loop. (3) To obtain the conditional PDF f Y |X (y|x). y) = f Y (y) 2 3y + 2x 3y 2 0 0≤x ≤y otherwise (2) (2) The minimum mean square error estimate of X given Y = y is x M (y) = E [X |Y = y] = ˆ 0 y 2x 2 2x + 2 3y 3y d x = 5y/9 (3) ˆ Thus the MMSE estimator of X given Y is X M (Y ) = 5Y /9. us (United States) Zip Code:71901 . we calculate the marginal PDF for 0 ≤ y ≤ 1: f Y (y) = 0 y 2(y + x) d x = 2x y + x 2 x=y x=0 = 3y 2 (1) This implies the conditional PDF of X given Y is f X |Y (x|y) = f X.

R ) = 9(1 − 3/4) = 9/4 L 2 σT (5) σR R= 2 2 σT 2 2 σT + σ X R= 3 R 4 (6) (7) Quiz 9.com Phone:5017621195 Quiz 9.4. the optimum linear estimate of T given R is σT ˆ TL (R) = ρT. us (United States) Zip Code:71901 . E [R] = E [T ] + E [X ] = 0 (2) Since T and X are independent. the correlation coefﬁcient of T and R is ρT. The conditional PDF of X given R is 1 2 f X |R (x|r ) = √ e−(x+40+40 log10 r ) /128 128π 54 (1) Address:104 pine meadows loop.4. (4) From Deﬁnition 4.Name:joey iwatsuru Email:joeyiwat@yahoo. Cov [T.2 (1) Since the expectation of the sum equals the sum of the expectations. E[T X ] = E[T ]E[X ] = 0 and E[T 2 ] = Var[T ]. the conditional PDF of X = Y −40−40 log10 r is Gaussian with expected value −40 − 40 log10 r and variance 64. the variance of the sum R = T + X is Var[R] = Var[T ] + Var[X ] = 9 + 3 = 12 (3) Since T and R have expected values E[R] = E[T ] = 0. R] = E [T R] = E [T (T + X )] = E T 2 + E [T X ] (3) (2) (1) Since T and X are independent and have zero expected value.3 When R = r .R = σT /σ R .R = √ √ σT Cov [T. ˆ TL (R) = Hence a ∗ = 3/4 and b∗ = 0. Thus Cov[T.8. the mean square error of the linear estimate is 2 e∗ = Var[T ](1 − ρT. hot springs.R (R − E [R]) + E [T ] σR Since E[R] = E[T ] = 0 and ρT. AR. R] = = 3/2 σR Var[R] Var[T ] (4) (5) From Theorem 9. R] = Var[T ] = 9. (6) By Theorem 9.

yielding log10 r = −1 − x/40 or rML (x) = (0. if x = −120dB. This corresponds to a distance estimate of rML (−120) = 100 m.1236)10 (9) For example. However.1236)10−x/40 (8) This is the MAP estimate of R given X = x as long as r ≤ 1000 m.1)10−x/40 m ˆ (3) (4) If the result doesn’t look correct. us (United States) Zip Code:71901 . we observe that the joint PDF of X and R is f X. This reﬂects the fact that large values of R are a priori more probable than small values.2 to write the ML estimate of R given X = x as rML (x) = arg max f X |R (x|r ) ˆ r ≥0 (2) We observe that f X |R (x|r ) is maximized when the exponent (x + 40 + 40 log10 r )2 is minimized. (6) rMAP (x) = arg max f X.6. That is. which is not possible in our probability model. hot springs. the MAP estimate is 23. for very low signal strengths.R (x.R (x. Setting the derivative of f X.3 (0. When the measured signal ˆ strength is not too low. r ) = f X |R (x|r ) f R (r ) = 106 32π 1 √ r e−(x+40+40 log10 r ) 2 /128 (5) From Theorem 9.com Phone:5017621195 From the conditional PDF f X |R (x|r ). the MAP estimate takes into account that the distance can never exceed 1000 m. r ). 55 Address:104 pine meadows loop.R (x. Hence. r ) ˆ 0≤r ≤1000 Note that we have included the constraint r ≤ 1000 in the maximization to highlight the fact that under our probability model. the above estimate will exceed 1000 m. note that a typical ﬁgure for the signal strength might be x = −120 dB. the MAP estimate of R given X = x is the value of r that maximizes f X. ˆ For the MAP estimate.Name:joey iwatsuru Email:joeyiwat@yahoo. When x ≤ −156.3 −x/40 x ≥ −156.6 m. we can use Deﬁnition 9. then rMAP (−120) = 123. R ≤ 1000 m.3 dB.6% larger than the ML estimate.R (x. AR. This minimum occurs when the exponent is zero. the complete description of the MAP estimate is rMAP (x) = ˆ 1000 x < −156. r ) with respect to r to zero yields e−(x+40+40 log10 r ) Solving for r yields r = 10 1 25 log10 e −1 2 /128 1− 80 log10 e (x + 40 + 40 log10 r ) = 0 128 (7) 10−x/40 = (0.

−0.9 1.1 0 .1 (2) (3) It follows that a ∗ = 1/1. Y2 ] . the LMSE estimate of X 2 given Y2 is X 2 (Y2 ) = a ∗ Y2 + b∗ where a∗ = Cov [X 2 .Y2 = The expected square error is 2 e∗ = Var[X 2 ](1 − ρ X 2 .7. hot springs.1 (6) In terms of Theorem 9. (1) Because E[X] = E[Y] = 0.4. us (United States) Zip Code:71901 . To apply Theorem 9. Y2 ] 1 =√ σ X 2 σY2 1. we need to ﬁnd RYX 2 = E [YX 2 ] = E [Y1 X 2 ] E [(X 1 + W1 )X 2 ] = . 0 0. Because µ X 2 = µY2 = 0.1 (4) 1 1 = = 0.1 11 (5) (2) Since Y = X + W and E[X] = E[W] = 0. AR.9 .9 1 RW = 0. (7) (8) Because X and W are independent. E[XW ] = E[X]E[W ] = 0. −0.4 ˆ (1) From Theorem 9.0909 1.com Phone:5017621195 Quiz 9. it follows that E[Y] = 0.1 −0. Y2 ] = E [X 2 Y2 ] = E [X 2 (X 2 + W2 )] = E X 2 = 1 2 2 Var[Y2 ] = Var[X 2 ] + Var[W2 ] = E X 2 + E W2 = 1. Similarly. n = 2 and we wish to estimate X 2 given the observation vector Y = Y1 Y2 .Y2 ) = 1 − L Cov [X 2 . Finally. This implies RY = E XX + E WW = RX + RW = In addition. E[WX ] = 0. Note that X and W have correlation matrices RX = 1 −0. to compute the expected square error. we need to ﬁnd RY and RYX 2 . it follows that b∗ = 0. Var[Y2 ] b ∗ = µ X 2 − a ∗ µ Y2 .7.1. E [Y2 X 2 ] E [(X 2 + W2 )X 2 ] 56 (10) 1. RY = E YY = E (X + W)(X + W ) = E XX + XW + WX + WW . 2 Cov [X 2 .Name:joey iwatsuru Email:joeyiwat@yahoo. Thus we can apply Theorem 9.7.9 .1 (9) Address:104 pine meadows loop. we calculate the correlation coefﬁcient ρ X 2 .

(11) 2 1 E X2 By Theorem 9.7.X 2 = 0. X L (Y) = a Y where a = R−1 RYX . the optimum linear estimator of X 2 given Y1 and Y2 is ˆ ˆ X L = a Y = −0. Y also has zero expected value. Since X and W are independent. AR.0725.225 0.X 2 − a2rY2 .Name:joey iwatsuru Email:joeyiwat@yahoo. the correlation matrix of Y is RY = E YY = E (1X + W)(1 X + W ) = 11 E X 2 + 1E X W + E [WX ] 1 + E WW = 11 + RW Note that 11 is a 20 × 20 matrix with every entry equal to 1.225Y1 + 0. us (United States) Zip Code:71901 . Thus E[X 1 X 2 ] −0.com Phone:5017621195 Since X and W are independent vectors. This implies RYX = E [YX ] = E [(1X + W)X ] = 1E X 2 = 1. The question we must address is what value c minimizes e∗ .725 (12) Therefore. ˆ a = R−1 RYX = 11 + RW Y and the optimal linear estimator is ˆ X L (Y) = 1 11 + RW The mean square error is ˆ e∗ = Var[X ] − a RYX = 1 − 1 11 + RW L −1 −1 −1 (1) (2) (3) (4) 1 (5) Y (6) 1 (7) Now we note that RW has i. By the same reasoning. (14) (13) Quiz 9. This problem is atypical in that one does not usually get L 57 Address:104 pine meadows loop.725Y2 . jth entry RW (i.9 RYX 2 = = . ˆ a = R−1 RYX 2 = Y −0. j) = c|i− j|−1 . Thus.7. E[W1 X 2 ] = E[W1 ]E[X 2 ] = 0 and E[W2 X 2 ] = 0. The mean square error is ˆ Var [X 2 ] − a RYX 2 = Var [X ] − a1rY1 . by ˆ ˆ ˆ Theorem 9. Y E[WX ] = 0 and E[X W ] = 0 . hot springs. Thus.5 Since X and W have zero expected value.

01:0.msec). AR. af=(inv(RY))*v1. Note in mquiz9 that v1 corresponds to the vector 1 of all ones. consider the extreme case in which every Wi and W j have correlation coefﬁcient ρi j = 1. This would suggest that large values of c will also result in poor MSE. function cmin=mquiz9minc(c). If this argument is not clear.1). our 20 measurements will be all the same and one measurement is as good as 20 measurements.4500 1 0. for k=1:length(c).8 e* L 0. xlabel(’c’). if c is large Wi and W j are highly correlated and the separate measurements of X are very dependent. both small values and large values of c result in large MSE. 58 Address:104 pine meadows loop. msec=zeros(size(c)). v1=ones(20. >> mquiz9minc(c) ans = 0. However. The following commands ﬁnds the minimum c and also produces the following graph: >> c=0.af]=mquiz9(c).ˆ((0:19)-1)).2 0 0.Name:joey iwatsuru Email:joeyiwat@yahoo. On the other hand. function [mse.4 0.ylabel(’e_Lˆ*’).optk]=min(msec). [msemin.com Phone:5017621195 to choose the correlation structure of the noise. we observe that Var[Wi ] = RW (i. mse=1-((v1’)*af). RW=toeplitz(c.01:0. i) = 1/c. [msec(k). cmin=c(optk). hot springs.af]=mquiz9(c(k)). we write a M ATLAB function mquiz9(c) to calculate the MSE for a given c and second function that ﬁnds plots the MSE for a range of values of c.99. end plot(c. We note that the answer is not obviously apparent from Equation (7). In this case. we will see that the answer is somewhat instructive.5 c 1 As we see in the graph. RY=(v1*(v1’)) +RW. the noises Wi have high variance and we would expect our estimator to be poor. Thus. To ﬁnd the optimal value of c. us (United States) Zip Code:71901 . when c is small. In particular.6 0.

AR. .2 (1) We obtain a continuous time. (2) If at every moment in time. . Quiz 10. s). s) is • m(0.01 950 ≤ r ≤ 1050 0 otherwise (1) The probability that a test produces a 1% resistor is p = P [990 ≤ R ≤ 1010] = 1010 990 (0.2 (2) 59 Address:104 pine meadows loop. the number of calls that hang up during the experiment • D1 . (4) Rounding the samples in part (c) to the nearest integer degree yields a discrete time.01) dr = 0. the number of new calls that arrive during the experiment • X 1 . us (United States) Zip Code:71901 . .com Phone:5017621195 Quiz Solutions – Chapter 10 Quiz 10. . the call completion times of the H calls that hang up Quiz 10. continuous valued process when we record the temperature as a continuous waveform over time. D H . One choice for an alternate set of random variables that would specify m(t.1 There are many correct answers to this question. (3) If we sample the process in part (a) every T seconds. discrete valued process. X N .3 (1) Each resistor has resistance R in ohms with uniform PDF f R (r ) = 0. . discrete valued process. . . the interarrival times of the N new arrivals • H . the number of ongoing calls at the start of the experiment • N . hot springs. we round the temperature to the nearest degree. then we obtain a continuous time. then we obtain a discrete time. . A correct answer speciﬁes enough random variables to specify the sample path exactly.Name:joey iwatsuru Email:joeyiwat@yahoo. continuous valued process.

In this problem. A success occurs on a trial with probability p if we ﬁnd a 1% resistor. AR. . . . 1) random variable. That is. . T1 has the geometric PMF PT1 (t) = (1 − p)t−1 p t = 1. exactly t resistors are tested. t − 1 followed by a success on trial t. . the number of 1% resistors found has the binomial PMF PN (t) (n) = p n (1 − p)t−n n = 0. 9 otherwise (4) Since p = 0. xn ) = i=1 f X (xi ) = 1 2 2 e−(x1 +···+xn )/2 n/2 (2π ) (2) 60 Address:104 pine meadows loop. .X (n) (x1 . the probability the ﬁrst 1% resistor is found in exactly ﬁve seconds is PT1 (5) = (0.. E[T1 ] = 1/ p = 5. The ﬁrst 1% resistor is found at time T1 = t if we observe failures on trials 1. the number of additional trials needed to ﬁnd the second 1% resistor once again has a geometric PMF with expected value 1/ p since each independent trial is a success with probability p.8)4 (0. . Each resistor is a 1% resistor with probability p. . t 0 otherwise t n (3) (3) First we will ﬁnd the PMF of T1 .5. hot springs.08192. . Thus E [T2 |T1 = 10] = E [T1 |T1 = 10] + E T |T1 = 10 = 10 + E T = 10 + 5 = 15 (5) (6) Quiz 10. a geometric random variable with success probability p has expected value 1/ p. each X i has PDF 1 2 f X (i) (x) = √ e−x /2 2π By Theorem 10.com Phone:5017621195 (2) In t seconds. the joint PDF of X = X 1 · · · X n is k (1) f X (x) = f X (1). us (United States) Zip Code:71901 . . . 1..2. . ... Consequently. 2. This problem is easy if we view each resistor test as an independent trial.1. Hence. (5) Note that once we ﬁnd the ﬁrst 1% resistor. . T2 = T1 + T where T is independent and identically distributed to T1 . .4 Since each X i is a N (0. (4) From Theorem 2.Name:joey iwatsuru Email:joeyiwat@yahoo.11.2) = 0. independent of any other resistor. just as in Example 2.

11.7 First. Y1 is an Erlang (n = 2. 1. Since we count only evennumbered arrival for N (t). Since X 1 and X 2 are independent exponential (λ) random variables. X 2 . . Since Yi (t). . X (t) − X (s) = W (t) − W (s) √ α (1) Since W (t) − W (s) is a Gaussian random variable.13 states that W (t) − W (s) is Gaussian with expected value E [X (t) − X (s)] = and variance E (W (t) − W (s))2 = E (W (t) − W (s))2 α(t − s) = α α (3) E [W (t) − W (s)] =0 √ α (2) Consider s ≤ s √ t. Since s ≥ s . the ith interarrival time of the N (t) process. . PM1 . . we can conclude that the interarrival times of N (t) are not exponential random variables. otherwise (1) Since M1 and M2 are independent. Quiz 10. AR. 1. That is. the time until the ﬁrst arrival of the N (t) is Y1 = X 1 + X 2 . Theorem 3.6 To answer whether N (t) is a Poisson process. 2.M2 (m 1 . the joint PMF of M1 and M2 is ⎧ α m 1 +m 2 e−2α m 1 = 0. Thus N (t) is not a Poisson process. m 2 ) = PM1 (m 1 ) PM2 (m 2 ) = ⎪ ⎪ ⎩ 0 otherwise. λ) random variable. hot springs. Thus X (t) is a Brownian motion process with variance Var[X (t)] = t. see Theorem 6. Since one hour equals 3600 sec and the Poisson process has a rate of 10 packets/sec. . This implies M1 and M2 are independent Poisson random variables each with PMF PMi (m) = α m e−α m! 0 m = 0. This implies < √ [W (t) − W (s)]/ α is independent of W (s )/ α for all s ≥ s . X (t) − X (s) is independent of X (s ) for all s ≥ s . denote the interarrival times of the N (t) process. has the same PDF as Y1 (t). Let X 1 . (2) Quiz 10. . . .5 The ﬁrst and second hours are nonoverlapping intervals. we look at the interarrival times. .Name:joey iwatsuru Email:joeyiwat@yahoo. . .com Phone:5017621195 Quiz 10. we note that for t > s. 000. 61 Address:104 pine meadows loop. W (t) − W (s) is independent of W (s ). ⎪ m 1 !m 2 ! ⎪ ⎨ m 2 = 0. . us (United States) Zip Code:71901 . . 1. . the expected number of packets in each hour is E[Mi ] = α = 36.

... τ ) + R N (t. for time instants n 1 + k.12: R(τ ) ≥ 0 R(τ ) = R(−τ ) |R(τ )| ≤ R(0) (1) (3) (2) (1) (1) R1 (τ ) = e−|τ | meets all three conditions and thus is valid. xm ) = f X (x1 ) f X (x2 ) · · · f X (xm ) Similarly. .. f X n1 . we have RY (t. we observe that since X (t) and N (t) are independent and since N (t) has zero expected value. . . (2) (3) (4) Quiz 10.. 2 (3) R3 (τ ) = e−τ cos τ is not valid because R3 (−2π ) = e2π cos 2π = e2π > 1 = R3 (0) (4) R4 (τ ) = e−τ sin τ also cannot be an autocorrelation function because 2 (2) R4 (π/2) = e−π/2 sin π/2 = e−π/2 > 0 = R4 (0) (3) 62 Address:104 pine meadows loop.Name:joey iwatsuru Email:joeyiwat@yahoo. xm ) = f X n1 +k . xm ) = f X (x1 ) f X (x2 ) · · · f X (xm ) We can conclude that the iid random sequence is stationary. . . X 2 .. Since RY (t.9 From Deﬁnition 10... . ..X nm +k (x1 . f X n1 . . . . . . E[X (t)N (t )] = E[X (t)]E[N (t )] = 0. . (2) R2 (τ ) = e−τ also is valid.X nm +k (x1 . .. . xm ) Since the random sequence is iid. τ ). . . Quiz 10. (1) To ﬁnd the autocorrelation. X 1 ..10 We must check whether each function R(τ ) meets the conditions of Theorem 10..com Phone:5017621195 Quiz 10. ...X nm (x1 .. . hot springs. n m and time offset k. . f X n1 +k . AR. . .. .. τ ) = E [(X (t) + N (t)) (X (t + τ ) + N (t + τ ))] = E [X (t)X (t + τ )] + E [X (t)N (t + τ )] + E [X (t + τ )N (t)] + E [N (t)N (t + τ )] = R X (t.14. us (United States) Zip Code:71901 .8 First we ﬁnd the expected value µY (t) = µ X (t) + µ N (t) = µ X (t). n m + k. τ ) = E[Y (t)Y (t + τ )]. . is a stationary random sequence if for all sets of time instants n 1 . . .X nm (x1 .

we conclude that X (t) and Y (t) are not jointly wide sense stationary. Quiz 10. we can conclude that Y (t) is a wide sense stationary process. as t gets larger.X (t+1) (x0 . (2) Since X (t) and Y (t) are both wide sense stationary processes. τ ) depends on both t and τ . In this case.11 (1) The autocorrelation of Y (t) is RY (t. we can check whether they are jointly wide sense stationary by seeing if R X Y (t. To see why this is.com Phone:5017621195 Quiz 10.Name:joey iwatsuru Email:joeyiwat@yahoo. x1 ) = 1 (2π )n/2 [det (CX )]1/2 1 3π 2 e− 3 2 2 2 x0 −x0 x1 +x1 (5) 1 exp − x C−1 x X 2 (6) (7) =√ 63 Address:104 pine meadows loop. Y (t) = X (−t) and X (t) become less and less correlated. we see the same second order statistics. In fact. hot springs.12 From the problem statement. suppose R X (τ ) = e−|τ | so that samples of X (t) far apart in time have almost no correlation. τ ) is just a function of τ . τ ) = E [Y (t)Y (t + τ )] = E [X (−t)X (−t − τ )] = R X (−t − (−t − τ )) = R X (τ ) (1) (2) (3) Since E[Y (t)] = E[X (−t)] = µ X . we see that by viewing a process backwards in time. R X Y (t. In this case. us (United States) Zip Code:71901 . E [X (t)] = E [X (t + 1)] = 0 E [X (t)X (t + 1)] = 1/2 Var[X (t)] = Var[X (t + 1)] = 1 The Gaussian random vector X = X (t) X (t + 1) sponding inverse CX = Since 1 1/2 1/2 1 C−1 = X (1) (2) (3) has covariance matrix and corre- 4 1 −1/2 1 3 −1/2 (4) 4 4 2 1 −1/2 x0 2 x − x0 x+ x1 = 1 x1 3 −1/2 3 0 the joint PDF of X (t) and X (t + 1) is the Gaussian vector PDF x C−1 x = x0 x1 X f X (t). τ ) = E [X (t)Y (t + τ )] = E [X (t)X (−t − τ )] = R X (t − (−t − τ )) = R X (2t + τ ) (4) (5) (6) Since R X Y (t. AR.

Start at time t = 0 with an empty system.Name:joey iwatsuru Email:joeyiwat@yahoo. check the state M(t). hot springs. After the head-of-schedule event is completed and any new events (departures in this system) are scheduled. Examine the head-of-schedule event. – If M(t) < c. 64 Address:104 pine meadows loop. increase the system state n by 1. 3. The program simply executes the event at the head of the schedule.28 admits a deceptively simple solution in terms of the vector of arrivals A and the vector of departures D. we know the system state cannot change until the next scheduled event. we must block the call. The blocking switch is an example of a discrete event system. With the introduction of call blocking. us (United States) Zip Code:71901 . the number of ongoing calls. admit the arrival. Call blocking can be implemented by setting the service time of the call to zero so that the call departs as soon as it arrives. namely arrivals and departures.com Phone:5017621195 120 100 80 M(t) 60 40 20 0 0 10 20 30 40 50 t 60 70 80 90 100 Figure 4: Sample path of 100 minutes of the blocking switch of Quiz 10. do not schedule a departure event. when an arrival occurs at time t. we need to know that M(t). block the arrival. In particular. • If the head of schedule event is a departure. an exponential (λ) random variable. satisﬁes M(t) < c = 120. when M(t) = c. and schedule a departure to occur at time t + Sn . Otherwise.13.13 The simple structure of the switch simulation of Example 10. reduce the system state n by 1. The logic of such a simulation is 1. Quiz 10. at discrete time instances. AR. Delete the head-of-schedule event and go to step 2. – If M(t) = c. • When the head-of-schedule event is the kth arrival is at time t. we cannot generate these vectors all at once. The system evolves via a sequence of discrete events. A simulation of the system moves from one time instant to the next by maintaining a chronological schedule of future events (arrivals and departures) to be executed. where Sk is an exponential (λ) random variable. Schedule the ﬁrst arrival to occur at S1 . 2.

1. The complete program is shown in Figure 5.0057.0048.m).000 minutes. Note that in Chapter 12. Chapter 12 develops techniques for analyzing and simulating systems described by Markov chains that are much simpler than the discrete event simulation technique shown here. [m. One reason our simulation underestimates the blocking probability is that in a 5. In this case. a simple (but not elegant) way to do this is to have maintain two vectors: time is a list of timestamps of scheduled events and event is a the list of event types.0057 is that a simulation that includes only 239 blocks is not all that likely to give a very accurate result for the blocking probability. When the program is passed a vector t. The following instructions t=0:0. In most programming languages.000 minute full simulation produced a=49658 admitted calls and b=239 blocked calls. Thus this would account for only part of the disparity. we can calculate that the exact blocking probability is Pb = 0. this says that roughly the ﬁrst two percent of the simulation time was unusual.0048 and 0. the output [m a b] is such that m(i) is the number of ongoing calls at time t(i) while a and b are the number of admits and blocks. The rest of the gap between 0. We can estimate the probability a call is blocked as b ˆ = 0. AR. a result known as the “Erlang-B formula. In M ATLAB.120.93). (1) Pb = a+b In Chapter 12. us (United States) Zip Code:71901 . for very complicated systems.000 minute simulation. we will learn that the exact blocking probability is given by Equation (12.0.com Phone:5017621195 Thus we know that M(t) will stay the same until then.b]=simblockswitch(10. 65 Address:104 pine meadows loop. However. A sample path of the ﬁrst 100 minutes of that simulation is shown in Figure 4. we use the vector t as the set of time instances at which we inspect the system state. Thus for all times t(i) between the current head-of-schedule event and the next. Nevertheless. The 5. In our simulation.a.t). or event(i)=-1 if the ith scheduled event is a departure. the discrete event simulation is widely-used and often very efﬁcient simulation method. plot(t. we set m(i) to the current switch state.Name:joey iwatsuru Email:joeyiwat@yahoo. generated a simulation lasting 5. hot springs. event(i)=1 if the ith scheduled event is an arrival. we will learn that the blocking switch is an example of an M/M/c/c queue. roughly the ﬁrst 100 minutes are needed to load up the switch since the switch is idle when the simulation starts at time t = 0. it is common to implement the event schedule as a linked list where each item in the list has a data structure indicating an event timestamp and the type of the event. a kind of Markov chain.” From the Erlang-B formula.1:5000.

mu.. n=n+1.1) ].Name:joey iwatsuru Email:joeyiwat@yahoo. tmax=max(t). blocks=0. % clear current event if (eventnow==1) % arrival arrival=timenow+exponentialrv(lam..blocks]=simblockswitch(lam. %one more block.13. %first event is an arrival timenow=0. timenow=time(1). 66 Address:104 pine meadows loop. b4depart=time<depart. n=0. if n<c %call admitted admits=admits+1. event=[ 1 ]. immed departure disp(sprintf(’Time %10. eventnow=event(1). % # in system time=[ exponentialrv(lam. time=[time(b4depart) depart time(˜b4depart)]. time=[time(b4arrival) arrival time(˜b4arrival)]. while (timenow<tmax) M((timenow<=t)&(t<time(1)))=n. end elseif (eventnow==-1) %departure n=n-1.admits. time(1)= [ ].blocks)).admits. else blocks=blocks+1. hot springs. depart=timenow+exponentialrv(mu.1).t). timenow. AR. end end Figure 5: Discrete event simulation of the blocking switch of Quiz 10.. event=[event(b4depart) -1 event(˜b4depart)]. %total # blocks admits=0.c. us (United States) Zip Code:71901 . % next arrival b4arrival=time<arrival. event=[event(b4arrival) 1 event(˜b4arrival)].com Phone:5017621195 function [M. event(1)=[ ].1). %total # admits M=zeros(size(t)).3d Admits %10d Blocks %10d’.

2. we 2 can double check. 67 Address:104 pine meadows loop. 1 RY (τ ) = e−|τ | 2 Quiz 11. the autocorrelation function of the output is RY (τ ) = ∞ −∞ ∞ h(u) −∞ h(v)δ(τ + u − v) dv du = ∞ −∞ h(u)h(τ + u) du (2) For τ > 0.2 The expected value of the output is ∞ ∞ −τ h(u)h(τ + u) du = ∞ −τ 1 e−u e−τ −u du = eτ 2 (4) (5) µY = µ X n=−∞ h n = 0. The variance of Yn is Var[Yn ] = E[Yn ] = RY [0] = 1. For τ < 0. RY (τ ) = Hence. we can deduce that RY (τ ) = 1 e−|τ | by symmetry.5(1 + −1) = 0 (1) The autocorrelation of the output is 1 1 RY [n] = i=0 j=0 h i h j R X [n + i − j] 1 n=0 0 otherwise (2) (3) = 2R X [n] − R X [n − 1] − R X [n + 1] = 2 Since µY = 0. us (United States) Zip Code:71901 . Just to be safe though.com Phone:5017621195 Quiz Solutions – Chapter 11 Quiz 11.Name:joey iwatsuru Email:joeyiwat@yahoo. hot springs.1 By Theorem 11. AR. we have RY (τ ) = 0 ∞ e−u e−τ −u du = e−τ 0 ∞ 1 e−2u du = e−τ 2 (3) For τ < 0. µY = µ X ∞ −∞ h(t)dt = 2 0 ∞ e−t dt = 2 (1) Since R X (τ ) = δ(τ ).

Quiz 11.2
The expected value of the output is

   \mu_Y = \mu_X \sum_{n=-\infty}^{\infty} h_n = 0.5(1 + (-1)) = 0.    (1)

The autocorrelation of the output is

   R_Y[n] = \sum_{i=0}^{1} \sum_{j=0}^{1} h_i h_j R_X[n+i-j]    (2)
          = 2 R_X[n] - R_X[n-1] - R_X[n+1] = 1 for n = 0 and 0 otherwise.    (3)

Since \mu_Y = 0, the variance of Y_n is Var[Y_n] = E[Y_n^2] = R_Y[0] = 1.

Quiz 11.3
Y = [Y_{33} Y_{34} Y_{35}]' is a Gaussian random vector since X_n is a Gaussian random process. Moreover, each Y_n has expected value E[Y_n] = \mu_X \sum_{n=-\infty}^{\infty} h_n = 0; thus E[Y] = 0. To find the PDF of the Gaussian vector Y, we need to find the covariance matrix C_Y, which equals the correlation matrix R_Y since Y has zero expected value. One way to find R_Y is to observe that R_Y has the Toeplitz structure of Theorem 11.6 and to use Theorem 11.5 to find the autocorrelation function

   R_Y[n] = \sum_{i=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} h_i h_j R_X[n+i-j].    (1)

Despite the fact that R_X[k] is an impulse, using Equation (1) is surprisingly tedious because we still need to sum over all i and j such that n+i-j = 0. In this problem, it is simpler to observe that Y = HX where

   X = [X_{30} X_{31} X_{32} X_{33} X_{34} X_{35}]',  H = \frac{1}{4} \begin{bmatrix} 1 & 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 & 1 & 1 \end{bmatrix}.    (2), (3)

In this case, directly applying the Chapter 5 results on linear transformations of random vectors, we obtain R_Y = H R_X H'. Since R_X[n] = \delta_n, R_X = I, the identity matrix.
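The matrix computations that follow are easy to confirm numerically; a minimal sketch (ours, not from matcode):

   % Sketch: covariance of Y = H*X for Quiz 11.3, with C_X = I
   H = [1 1 1 1 0 0; 0 1 1 1 1 0; 0 0 1 1 1 1]/4;
   CY = H*H'        % should equal (1/16)*[4 3 2; 3 4 3; 2 3 4]
   CYinv = inv(CY)  % compare with the inverse displayed below
   d = det(CY)      % should equal 3/1024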

Thus

   C_Y = R_Y = HH' = \frac{1}{16} \begin{bmatrix} 4 & 3 & 2 \\ 3 & 4 & 3 \\ 2 & 3 & 4 \end{bmatrix}.    (4)

It follows (very quickly if you use MATLAB for the 3 x 3 matrix inversion) that

   C_Y^{-1} = 16 \begin{bmatrix} 7/12 & -1/2 & 1/12 \\ -1/2 & 1 & -1/2 \\ 1/12 & -1/2 & 7/12 \end{bmatrix}.    (5)

Thus, the PDF of Y is

   f_Y(y) = \frac{1}{(2\pi)^{3/2} [\det(C_Y)]^{1/2}} \exp\left(-\frac{1}{2} y' C_Y^{-1} y\right).    (6)

A disagreeable amount of algebra will show that \det(C_Y) = 3/1024 and that the PDF can be "simplified" to

   f_Y(y) = \frac{16}{\sqrt{6\pi^3}} \exp\left(-8\left[\frac{7}{12} y_{33}^2 + y_{34}^2 + \frac{7}{12} y_{35}^2 - y_{33} y_{34} + \frac{1}{6} y_{33} y_{35} - y_{34} y_{35}\right]\right).    (7)

Equation (7) shows that one of the nicest features of the multivariate Gaussian distribution is that y' C_Y^{-1} y is a very concise representation of the cross-terms in the exponent of f_Y(y).

Quiz 11.4
This quiz is solved using Theorem 11.9 for the case of k = 1 and M = 2. In this case,

   X_n = [X_{n-1} X_n]',  R_{X_n} = \begin{bmatrix} R_X[0] & R_X[1] \\ R_X[1] & R_X[0] \end{bmatrix} = \begin{bmatrix} 1.1 & 0.9 \\ 0.9 & 1.1 \end{bmatrix},    (1)

and

   R_{X_n X_{n+1}} = E\left[\begin{bmatrix} X_{n-1} \\ X_n \end{bmatrix} X_{n+1}\right] = \begin{bmatrix} R_X[2] \\ R_X[1] \end{bmatrix} = \begin{bmatrix} 0.81 \\ 0.9 \end{bmatrix}.    (2)

The MMSE linear first order filter for predicting X_{n+1} at time n is the filter h such that

   \overleftarrow{h} = R_{X_n}^{-1} R_{X_n X_{n+1}} = \begin{bmatrix} 1.1 & 0.9 \\ 0.9 & 1.1 \end{bmatrix}^{-1} \begin{bmatrix} 0.81 \\ 0.9 \end{bmatrix} = \frac{1}{400} \begin{bmatrix} 81 \\ 261 \end{bmatrix}.    (3)

It follows that the filter is h = [261/400 81/400]' and the MMSE linear predictor is

   \hat{X}_{n+1} = \frac{81}{400} X_{n-1} + \frac{261}{400} X_n.    (4)

To find the mean square error, one approach is to follow the method of Example 11.13 and to directly calculate

   e_L^* = E[(X_{n+1} - \hat{X}_{n+1})^2].    (5)

This method is workable for this simple problem but becomes increasingly tedious for higher order filters.
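Both the filter and the mean square error derived below are easy to verify numerically; a sketch of ours:

   % Sketch: first order LMSE predictor of Quiz 11.4
   RX0=1.1; RX1=0.9; RX2=0.81;
   RXn=[RX0 RX1; RX1 RX0];    % correlation matrix of [X(n-1) X(n)]'
   r=[RX2; RX1];              % cross-correlation with X(n+1)
   h=RXn\r                    % should be [81/400; 261/400]
   e=RX0 - r'*h               % mean square error, about 0.3487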

Instead, we can derive the mean square error for an arbitrary prediction filter \overleftarrow{h}. Since \hat{X}_{n+1} = \overleftarrow{h}' X_n,

   e_L^* = E[(X_{n+1} - \overleftarrow{h}' X_n)^2] = E[(X_{n+1} - \overleftarrow{h}' X_n)(X_{n+1} - X_n' \overleftarrow{h})].    (6)-(8)

After a bit of algebra, we obtain

   e_L^* = R_X[0] - 2 \overleftarrow{h}' R_{X_n X_{n+1}} + \overleftarrow{h}' R_{X_n} \overleftarrow{h}.    (9), (10)

With the substitution \overleftarrow{h} = R_{X_n}^{-1} R_{X_n X_{n+1}}, we obtain

   e_L^* = R_X[0] - R_{X_n X_{n+1}}' R_{X_n}^{-1} R_{X_n X_{n+1}} = R_X[0] - \overleftarrow{h}' R_{X_n X_{n+1}}.    (11), (12)

Note that this is essentially the same result as Theorem 9.7 with Y = X_n, X = X_{n+1} and \hat{a} = \overleftarrow{h}; it is noteworthy that the result is derived in a much simpler way in the proof of Theorem 9.7 by using the orthogonality property of the LMSE estimator. In any case, the mean square error is

   e_L^* = R_X[0] - \overleftarrow{h}' R_{X_n X_{n+1}} = 1.1 - \frac{1}{400} [81\ 261] \begin{bmatrix} 0.81 \\ 0.9 \end{bmatrix} = 0.3487.    (13)

Recalling that the blind estimate would yield a mean square error of Var[X] = 1.1, we see that observing X_{n-1} and X_n improves the accuracy of our prediction of X_{n+1}.

Quiz 11.5
(1) By Theorem 11.13(b), the average power of X(t) is

   E[X^2(t)] = \int_{-\infty}^{\infty} S_X(f) df = \int_{-W}^{W} \frac{5}{W} df = 10 Watts.    (1)

(2) The autocorrelation function is the inverse Fourier transform of S_X(f). Consulting Table 11.1, we note that

   S_X(f) = 10 \frac{1}{2W} rect\left(\frac{f}{2W}\right).    (2)

It follows that the inverse transform of S_X(f) is

   R_X(\tau) = 10 sinc(2W\tau) = 10 \frac{\sin(2\pi W \tau)}{2\pi W \tau}.    (3)

(3) For W = 10 Hz and W = 1 kHz, graphs of S_X(f) and R_X(\tau) appear in Figure 6.

[Figure 6: The autocorrelation R_X(\tau) and power spectral density S_X(f) for the process X(t) in Quiz 11.5, for (a) W = 10 and (b) W = 1000.]
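A one-line numerical confirmation of the 10 Watt average power (our sketch):

   % Sketch: integrate S_X(f) = 5/W over |f| <= W; expect 10 Watts
   W=10;
   Px=integral(@(f) (5/W)*ones(size(f)), -W, W)   % = 10 for any W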

Quiz 11.6
In a sampled system, the discrete time impulse \delta[n] has a flat discrete Fourier transform. That is, if R_X[n] = 10\delta[n], then

   S_X(\phi) = \sum_{n=-\infty}^{\infty} 10 \delta[n] e^{-j2\pi\phi n} = 10.    (1)

Conversely, if S_X(\phi) = 10, then R_X[n] = 10\delta[n].    (2)

(This quiz is really lame!)

Quiz 11.7
Since Y(t) = X(t - t_0),

   R_{XY}(t, \tau) = E[X(t)Y(t+\tau)] = E[X(t)X(t+\tau-t_0)] = R_X(\tau - t_0).    (1)

We see that R_{XY}(t, \tau) = R_{XY}(\tau) = R_X(\tau - t_0). From Table 11.1, we recall the property that g(\tau - \tau_0) has Fourier transform G(f) e^{-j2\pi f \tau_0}. Thus the Fourier transform of R_{XY}(\tau) = R_X(\tau - t_0) is

   S_{XY}(f) = S_X(f) e^{-j2\pi f t_0}.    (2)
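The flat transform in Quiz 11.6 is easy to see numerically (a sketch of ours):

   % Sketch: the 32 point DFT of R_X[n] = 10*delta[n] is flat
   rx=[10 zeros(1,31)];
   SX=fft(rx)        % all 32 DFT coefficients equal 10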

Quiz 11.8
We solve this quiz using Theorem 11.17. First we need some preliminary facts. Let a_0 = 5,000 so that

   R_X(\tau) = \frac{1}{a_0} a_0 e^{-a_0|\tau|} = e^{-a_0|\tau|}.    (1)

From Table 11.1, we see that

   S_X(f) = \frac{1}{a_0} \frac{2a_0^2}{a_0^2 + (2\pi f)^2} = \frac{2a_0}{a_0^2 + (2\pi f)^2}.    (2)

The RC filter has impulse response h(t) = a_1 e^{-a_1 t} u(t), where u(t) is the unit step function and a_1 = 1/RC, with RC = 10^{-4} the filter time constant. From Table 11.1,

   H(f) = \frac{a_1}{a_1 + j2\pi f}.    (3)

By Theorem 11.17,

   S_{XY}(f) = H(f) S_X(f) = \frac{2a_0 a_1}{[a_1 + j2\pi f][a_0^2 + (2\pi f)^2]}.    (4)

Again by Theorem 11.17,

   S_Y(f) = H^*(f) S_{XY}(f) = |H(f)|^2 S_X(f).    (5)

Note that

   |H(f)|^2 = H(f) H^*(f) = \frac{a_1}{a_1 + j2\pi f} \cdot \frac{a_1}{a_1 - j2\pi f} = \frac{a_1^2}{a_1^2 + (2\pi f)^2}.    (6)

Thus,

   S_Y(f) = |H(f)|^2 S_X(f) = \frac{2a_0 a_1^2}{[a_1^2 + (2\pi f)^2][a_0^2 + (2\pi f)^2]}.    (7)

To find the average power at the filter output, we can either use basic calculus and calculate \int_{-\infty}^{\infty} S_Y(f) df directly, or we can find R_Y(\tau) as an inverse transform of S_Y(f). Using partial fractions and the Fourier transform table, the latter method is actually less algebra. In particular, some algebra will show that

   S_Y(f) = \frac{K_0}{a_0^2 + (2\pi f)^2} + \frac{K_1}{a_1^2 + (2\pi f)^2},    (8)

where

   K_0 = \frac{2a_0 a_1^2}{a_1^2 - a_0^2},  K_1 = \frac{-2a_0 a_1^2}{a_1^2 - a_0^2}.    (9), (10)

Consulting with Table 11.1, we see that

   R_Y(\tau) = \frac{K_0}{2a_0^2} a_0 e^{-a_0|\tau|} + \frac{K_1}{2a_1^2} a_1 e^{-a_1|\tau|}.    (11)

Substituting the values of K_0 and K_1, we obtain

   R_Y(\tau) = \frac{a_1^2 e^{-a_0|\tau|} - a_0 a_1 e^{-a_1|\tau|}}{a_1^2 - a_0^2}.    (12)

The average power of the Y(t) process is

   R_Y(0) = \frac{a_1^2 - a_0 a_1}{a_1^2 - a_0^2} = \frac{a_1}{a_1 + a_0} = \frac{2}{3}.    (13)

Note that the input signal has average power R_X(0) = 1. Since the RC filter has a 3 dB bandwidth of 10,000 rad/sec and the signal X(t) has most of its signal energy below 5,000 rad/sec, the output signal has almost as much power as the input.
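Both routes to the output power are easy to check numerically; a sketch (ours) that integrates S_Y(f) directly:

   % Sketch: average output power from S_Y(f); expect a1/(a1+a0) = 2/3
   a0=5e3; a1=1e4;
   SY=@(f) 2*a0*a1^2./((a1^2+(2*pi*f).^2).*(a0^2+(2*pi*f).^2));
   RY0=integral(SY,-Inf,Inf)      % approximately 0.6667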

Quiz 11.9
This quiz implements an example of Equations (11.146) and (11.147) for a system in which we filter Y(t) = X(t) + N(t) to produce an optimal linear estimate of X(t). The solution to this quiz is just to find the filter \hat{H}(f) using Equation (11.146) and to calculate the mean square error e_L^* using Equation (11.147).

Comment: Since the text omitted the derivations of Equations (11.146) and (11.147), we first note a few facts that should leave the reader at peace with the derivations. Since \mu_N = 0, Example 10.24 showed that R_Y(\tau) = R_X(\tau) + R_N(\tau). Taking Fourier transforms, it follows that

   S_Y(f) = S_X(f) + S_N(f).    (1)

Similarly, R_{YX}(\tau) = R_X(\tau), so S_{YX}(f) = S_X(f).    (2)

Now we can go on to the quiz. Since R_X(\tau) = sinc(2W\tau) with W = 5,000 Hz, we see from Table 11.1 that

   S_X(f) = \frac{1}{10^4} rect\left(\frac{f}{10^4}\right).    (3)

The noise power spectral density can be written as

   S_N(f) = N_0 rect\left(\frac{f}{2B}\right) = \frac{1}{2B} rect\left(\frac{f}{2B}\right),    (4)

because the noise process N(t) has constant power R_N(0) = Var[N] = 1, which implies

   R_N(0) = \int_{-\infty}^{\infty} S_N(f) df = \int_{-B}^{B} N_0 df = 2N_0 B = 1,    (5)

and thus N_0 = 1/(2B). Note that decreasing the single-sided bandwidth B increases the power spectral density of the noise over the frequencies |f| < B. From Equation (11.146), the optimal filter is

   \hat{H}(f) = \frac{S_X(f)}{S_X(f) + S_N(f)} = \frac{\frac{1}{10^4} rect(f/10^4)}{\frac{1}{10^4} rect(f/10^4) + \frac{1}{2B} rect(f/2B)}.    (6)

We produce the output \hat{X}(t) by passing the noisy signal Y(t) through the filter \hat{H}(f). From Equation (11.147), the mean square error of the estimate is

   e_L^* = \int_{-\infty}^{\infty} \frac{S_X(f) S_N(f)}{S_X(f) + S_N(f)} df = \int_{-\infty}^{\infty} \frac{\frac{1}{10^4} rect(f/10^4) \frac{1}{2B} rect(f/2B)}{\frac{1}{10^4} rect(f/10^4) + \frac{1}{2B} rect(f/2B)} df.    (7)

To evaluate the MSE e_L^*, we need to know whether B <= W. Since the problem asks us to find the largest possible B, let's suppose B <= W; we can go back and consider the case B > W later. When B <= W, the MSE is

   e_L^* = \int_{-B}^{B} \frac{\frac{1}{10^4} \frac{1}{2B}}{\frac{1}{10^4} + \frac{1}{2B}} df = \frac{\frac{1}{10^4}}{\frac{1}{10^4} + \frac{1}{2B}} = \frac{1}{1 + 5{,}000/B}.    (8)

To obtain MSE e_L^* <= 0.05 requires B <= 5,000/19 = 263.16 Hz.

Although this completes the solution to the quiz, what is happening may not be obvious. The noise power is always Var[N] = 1 Watt, for all values of B; increasing B spreads that constant 1 Watt of power of N(t) over more bandwidth, while as B shrinks, the PSD S_N(f) = 1/(2B) over the frequencies |f| < B becomes increasingly tall. As B is decreased, the filter \hat{H}(f) makes an increasingly deep and narrow notch at the frequencies |f| <= B, and thus it suppresses less of the signal X(t) while still removing the noise outside the band of the desired signal. The result is that the MSE goes down.

Finally, we note that we can also choose B very large and achieve MSE e_L^* <= 0.05. In particular, when B > W = 5,000, the Wiener filter \hat{H}(f) is an ideal (flat) lowpass filter

   \hat{H}(f) = \frac{\frac{1}{10^4}}{\frac{1}{10^4} + \frac{1}{2B}} for |f| < 5,000, and 0 otherwise.    (9)

In this case, the mean square error is

   e_L^* = \int_{-5000}^{5000} \frac{\frac{1}{10^4} \frac{1}{2B}}{\frac{1}{10^4} + \frac{1}{2B}} df = \frac{\frac{1}{2B}}{\frac{1}{10^4} + \frac{1}{2B}} = \frac{1}{B/5{,}000 + 1}.    (10)

In this case, B >= 9.5 x 10^4 guarantees e_L^* <= 0.05. Two examples of the filter \hat{H}(f) are shown in Figure 7.

[Figure 7: The Wiener filter \hat{H}(f) of Quiz 11.9, for B = 500 and B = 2500.]
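The tradeoff between the two regimes is easy to visualize; the following sketch (ours) plots the B <= W branch of the MSE and marks the largest admissible B:

   % Sketch: MSE vs noise bandwidth B for B <= W (Equation (8))
   W=5000; B=logspace(1,log10(W),200);
   eL=1./(1+W./B);
   semilogx(B,eL); grid on;
   xlabel('B (Hz)'); ylabel('e_L^*');
   % largest B with e_L^* <= 0.05:  B = 5000/19 = 263.16 Hz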

Quiz 11.10
It is fairly straightforward to find S_X(\phi) and S_Y(\phi). The only thing to keep in mind is to use fftc to transform the autocorrelation R_X[n] into the power spectral density S_X(\phi). The following MATLAB program generates and plots the functions shown in Figure 8.

%mquiz11.m
N=32;
rx=[2 4 2];
SX=fftc(rx,N);              %autocorrelation and PSD
stem(0:N-1,abs(SX));
xlabel('n'); ylabel('S_X(n/N)');
h2=0.5*[1 1];               %impulse/filter response: M=2
H2=fft(h2,N);
SY2=SX.*((abs(H2)).^2);     %PSD of Y for M=2
figure;
stem(0:N-1,abs(SY2));
xlabel('n'); ylabel('S_{Y_2}(n/N)');
h10=0.1*ones(1,10);         %impulse/filter response: M=10
H10=fft(h10,N);
SY10=SX.*((abs(H10)).^2);   %PSD of Y for M=10
figure;
stem(0:N-1,abs(SY10));
xlabel('n'); ylabel('S_{Y_{10}}(n/N)');

Relative to M = 2, the low pass moving average filter for M = 10 removes the high frequency components and results in a filter output that varies very slowly. In the context of Example 11.26, when M = 10, the filter H(\phi) filters out almost all of the high frequency components of X(t).

As an aside, note that the vectors SX, SY2 and SY10 in mquiz11 should all be real-valued. However, the finite numerical precision of MATLAB results in tiny imaginary parts. Although these imaginary parts have no computational significance, they tend to confuse the stem function. Hence, we generate stem plots of the magnitude of each power spectral density.
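If you prefer, the spurious imaginary parts can be inspected and discarded explicitly; a short sketch (ours), run immediately after mquiz11:

   % Sketch: confirm the imaginary parts are negligible, then use real()
   mquiz11;
   max(abs(imag(SY2)))      % on the order of machine precision
   figure;
   stem(0:31,real(SY2));    % equivalent to plotting abs(SY2) here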

[Figure 8: For Quiz 11.10, graphs of S_X(n/N), S_{Y_2}(n/N) for M = 2, and S_{Y_{10}}(n/N) for M = 10, using an N = 32 point DFT.]

Quiz Solutions - Chapter 12

Quiz 12.1
The system has two states depending on whether the previous packet was received in error. From the problem statement, we are given the conditional probabilities

   P[X_{n+1}=0 | X_n=0] = 0.99,  P[X_{n+1}=1 | X_n=1] = 0.9.    (1)

Since each X_n must be either 0 or 1, we can conclude that

   P[X_{n+1}=1 | X_n=0] = 0.01,  P[X_{n+1}=0 | X_n=1] = 0.1.    (2)

These conditional probabilities correspond to the Markov chain with transition matrix

   P = \begin{bmatrix} 0.99 & 0.01 \\ 0.10 & 0.90 \end{bmatrix}.    (3)

Quiz 12.2
From the problem statement, the Markov chain and its transition matrix P follow directly. The eigenvalues of P are

   \lambda_1 = 0,  \lambda_2 = 0.4,  \lambda_3 = 1.    (1)

We can diagonalize P into

   P = S^{-1} D S,  D = diag(\lambda_1, \lambda_2, \lambda_3),    (2)

where s_i, the ith row of S, is the left eigenvector of P satisfying s_i P = \lambda_i s_i. Algebra will verify that the n-step transition matrix is

   P^n = S^{-1} D^n S = \begin{bmatrix} 0.6 & 0.2 & 0.2 \\ 0.6 & 0.2 & 0.2 \\ 0.6 & 0.2 & 0.2 \end{bmatrix} + (0.4)^n B,    (3)

where each row of the first matrix is the limiting state probability vector and B is the constant matrix contributed by the \lambda_2 = 0.4 eigenvectors; the eigenvalue \lambda_1 = 0 contributes nothing for n >= 1.
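Diagonalization calculations like the one in Quiz 12.2 are conveniently checked in MATLAB; a sketch (ours), using the two-state chain of Quiz 12.1:

   % Sketch: n-step transition matrix via eigendecomposition
   P=[0.99 0.01; 0.10 0.90];
   [V,D]=eig(P);         % P = V*D*inv(V)
   n=10;
   Pn=V*D^n/V            % the n-step transition matrix
   norm(Pn-P^n)          % essentially zero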

Quiz 12.3
The Markov chain describing the factory status and the corresponding state transition matrix are

   P = \begin{bmatrix} 0.9 & 0.1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}.    (1)

With \pi = [\pi_0 \pi_1 \pi_2], the system of equations \pi = \pi P yields \pi_1 = 0.1\pi_0 and \pi_2 = \pi_1. This implies

   \pi_0 + \pi_1 + \pi_2 = \pi_0 (1 + 0.1 + 0.1) = 1.    (2)

It follows that the limiting state probabilities are

   \pi_0 = 5/6,  \pi_1 = 1/12,  \pi_2 = 1/12.    (3)

Quiz 12.4
The communicating classes are

   C_1 = \{0, 1\},  C_2 = \{2, 3\},  C_3 = \{4, 5, 6\}.    (1)

The states in C_1 and C_3 are aperiodic, while the states in C_2 have period 2. Once the system enters a state in C_1, the class C_1 is never left; thus the states in C_1 are recurrent. That is, C_1 is a recurrent class. Similarly, the states in C_3 are recurrent. On the other hand, the states in C_2 are transient: once the system exits C_2, the states in C_2 are never reentered.
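For Quiz 12.3, the limiting probabilities can be checked numerically as the left eigenvector of P for eigenvalue 1 (a sketch of ours):

   % Sketch: stationary probabilities of the factory chain
   P=[0.9 0.1 0; 0 0 1; 1 0 0];
   [V,D]=eig(P.');                  % columns: left eigenvectors of P
   [~,k]=min(abs(diag(D)-1));       % locate the eigenvalue 1
   pv=real(V(:,k)).'/sum(real(V(:,k)))   % should be [5/6 1/12 1/12]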

Quiz 12.5
At any time t, the state n can take on the values 0, 1, 2, .... The state transition probabilities are

   P_{n-1,n} = P[K > n | K > n-1] = \frac{P[K > n]}{P[K > n-1]},    (1)

   P_{n-1,0} = P[K = n | K > n-1] = \frac{P[K = n]}{P[K > n-1]}.    (2), (3)

The Markov chain resembles a repeated countdown: from state 0 the chain jumps to state k-1 with probability P[K = k] (the arcs labeled P[K=1], P[K=2], ..., P[K=5] in the chain diagram), while from each state n >= 1 the chain steps down to n-1 with probability 1. The stationary probabilities satisfy

   \pi_0 = \pi_0 P[K=1] + \pi_1,    (4)
   \pi_1 = \pi_0 P[K=2] + \pi_2,    (5)
   \pi_{k-1} = \pi_0 P[K=k] + \pi_k,  k = 1, 2, ....    (6)

From Equation (4), we obtain \pi_1 = \pi_0 (1 - P[K=1]) = \pi_0 P[K>1]. Similarly, Equation (5) implies

   \pi_2 = \pi_1 - \pi_0 P[K=2] = \pi_0 (P[K>1] - P[K=2]) = \pi_0 P[K>2].    (7), (8)

This suggests that \pi_k = \pi_0 P[K>k]. We verify this pattern by showing that \pi_k = \pi_0 P[K>k] satisfies Equation (6):

   \pi_0 P[K>k-1] = \pi_0 P[K=k] + \pi_0 P[K>k].    (9)

When we apply \sum_{k=0}^{\infty} \pi_k = 1, we recall from Problem 2.5.11 that \sum_{k=0}^{\infty} P[K>k] = E[K]. This implies \pi_0 E[K] = 1, and thus

   \pi_n = \frac{P[K>n]}{E[K]}.    (10)

This Markov chain models repeated random countdowns: the system state is the time until the counter expires. When the counter expires, the system is in state 0, and we randomly reset the counter to a new value K = k and then count down k units of time. Since we spend one unit of time in each state, including state 0, we have k-1 units of time left after the state 0 counter reset. If we have a random variable W whose PMF satisfies P_W(n) = \pi_n, then W represents the remaining time of the counter at a time in the distant future.
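A quick numerical illustration of \pi_n = P[K>n]/E[K] (our sketch, with an assumed countdown PMF, K uniform on {1,...,5}):

   % Sketch: stationary probabilities of the countdown chain
   pk=0.2*ones(1,5);             % P[K=k] for k=1..5 (assumed PMF)
   PKgt=[1 1-cumsum(pk(1:4))];   % P[K>n] for n=0..4
   EK=sum(PKgt);                 % E[K] = sum of P[K>n] = 3
   pin=PKgt/EK                   % pi_0..pi_4; sums to 1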

Quiz 12.6
(1) By inspection, the number of transitions needed to return to state 0 is always a multiple of 2; thus the period of state 0 is d = 2.

(2) To find the stationary probabilities, we solve the system of equations \pi = \pi P and \sum_{i=0}^{3} \pi_i = 1:

   \pi_0 = (3/4)\pi_1 + (1/4)\pi_3    (1)
   \pi_1 = (1/4)\pi_0 + (1/4)\pi_2    (2)
   \pi_2 = (1/4)\pi_1 + (3/4)\pi_3    (3)
   1 = \pi_0 + \pi_1 + \pi_2 + \pi_3    (4)

Solving the second and third equations for \pi_2 and \pi_3 yields

   \pi_2 = 4\pi_1 - \pi_0,  \pi_3 = (4/3)\pi_2 - (1/3)\pi_1 = 5\pi_1 - (4/3)\pi_0.    (5)

Substituting \pi_3 back into the first equation yields

   \pi_0 = (3/4)\pi_1 + (1/4)\pi_3 = (3/4)\pi_1 + (5/4)\pi_1 - (1/3)\pi_0.    (6)

This implies \pi_1 = (2/3)\pi_0. It then follows that \pi_2 = (5/3)\pi_0 and \pi_3 = 2\pi_0. Lastly, we choose \pi_0 so the state probabilities sum to 1:

   1 = \pi_0 + \pi_1 + \pi_2 + \pi_3 = \pi_0 \left(1 + \frac{2}{3} + \frac{5}{3} + 2\right) = \frac{16}{3}\pi_0.    (7)

It follows that the state probabilities are

   \pi_0 = \frac{3}{16},  \pi_1 = \frac{2}{16},  \pi_2 = \frac{5}{16},  \pi_3 = \frac{6}{16}.    (8)

(3) Since the system starts in state 0 at time 0, we can use Theorem 12.14 to find the limiting probability that the system is in state 0 at time nd:

   \lim_{n\to\infty} P_{00}(nd) = d\pi_0 = 2 \cdot \frac{3}{16} = \frac{3}{8}.    (9)
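The back-substitution above can be replaced by a single linear solve; a sketch (ours), using exactly the balance equations (1)-(4):

   % Sketch: solve the balance equations as a linear system A*pv' = b
   A=[ 1  -3/4   0  -1/4;     % pi0 - (3/4)pi1 - (1/4)pi3 = 0
      -1/4  1  -1/4   0;      % pi1 - (1/4)pi0 - (1/4)pi2 = 0
       0  -1/4   1  -3/4;     % pi2 - (1/4)pi1 - (3/4)pi3 = 0
       1    1    1    1];     % normalization: sum of pi = 1
   b=[0;0;0;1];
   pv=(A\b)'                  % should be [3 2 5 6]/16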

Quiz 12.7
The Markov chain has the same structure as that in Example 12.22. The only difference is the modified transition rates: from state n >= 1, the chain advances to state n+1 with probability (n/(n+1))^\alpha, as indicated by the labels (1/2)^\alpha, (2/3)^\alpha, (3/4)^\alpha, (4/5)^\alpha, ... on the forward arcs of the chain diagram, and otherwise returns to state 0.

The event T_{00} > n occurs if the system reaches state n before returning to state 0, which occurs with probability

   P[T_{00} > n] = 1 \times \left(\frac{1}{2}\right)^\alpha \times \left(\frac{2}{3}\right)^\alpha \times \cdots \times \left(\frac{n-1}{n}\right)^\alpha = \frac{1}{n^\alpha}.    (1)

Thus the CDF of T_{00} satisfies F_{T_{00}}(n) = 1 - P[T_{00} > n] = 1 - 1/n^\alpha. To determine whether state 0 is recurrent, we observe that for all \alpha > 0,

   P[V_{00}] = \lim_{n\to\infty} F_{T_{00}}(n) = \lim_{n\to\infty} \left(1 - \frac{1}{n^\alpha}\right) = 1.    (2)

Thus state 0 is recurrent for all \alpha > 0. Since the chain has only one communicating class, all states are recurrent. (We also note that if \alpha = 0, then all states are transient.)

To determine whether the chain is null recurrent or positive recurrent, we need to calculate E[T_{00}]. In Example 12.24, we did this by deriving the PMF P_{T_{00}}(n). In this problem, it will be simpler to use the result of Problem 2.5.11, which says that \sum_{k=0}^{\infty} P[K>k] = E[K] for any non-negative integer-valued random variable K. Applying this result, the expected time to return to state 0 is

   E[T_{00}] = \sum_{n=0}^{\infty} P[T_{00} > n] = 1 + \sum_{n=1}^{\infty} \frac{1}{n^\alpha}.    (3)

For 0 < \alpha <= 1, 1/n^\alpha >= 1/n and it follows that

   E[T_{00}] >= 1 + \sum_{n=1}^{\infty} \frac{1}{n} = \infty.    (4)

We conclude that the Markov chain is null recurrent for 0 < \alpha <= 1. On the other hand, for \alpha > 1,

   E[T_{00}] = 2 + \sum_{n=2}^{\infty} \frac{1}{n^\alpha}.    (5)

Note that for all n >= 2,

   \frac{1}{n^\alpha} <= \int_{n-1}^{n} \frac{dx}{x^\alpha}.    (6)

This implies

   E[T_{00}] <= 2 + \sum_{n=2}^{\infty} \int_{n-1}^{n} \frac{dx}{x^\alpha} = 2 + \int_{1}^{\infty} \frac{dx}{x^\alpha} = 2 + \left.\frac{x^{-\alpha+1}}{-\alpha+1}\right|_{1}^{\infty} = 2 + \frac{1}{\alpha-1} < \infty.    (7)-(9)

Thus for all \alpha > 1, the Markov chain is positive recurrent.
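The divergence boundary at \alpha = 1 is easy to see from partial sums (our sketch):

   % Sketch: partial sums of E[T00] = 1 + sum_{n>=1} n^(-alpha)
   n=1:1e6;
   for alpha=[0.5 1 1.5 2]
     fprintf('alpha=%3.1f  partial sum = %10.2f\n', ...
         alpha, 1+sum(n.^(-alpha)));
   end
   % grows without bound for alpha <= 1; for alpha > 1 it converges,
   % bounded above by 2 + 1/(alpha-1)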

Quiz 12.8
The number of customers in the "friendly" store is given by a Markov chain on the states i = 0, 1, 2, ...: in each unit of time, the chain moves from state i to i+1 with probability p, from state i >= 1 down to i-1 with probability (1-p)q, and otherwise remains in state i. Here (1-p)q is the probability that no new customer arrives while an existing customer gets one unit of service and then departs the store. From the Markov chain, we see that for any state i >= 0, the stationary probabilities satisfy

   \pi_i p = \pi_{i+1} (1-p)q.    (1)

This implies

   \pi_{i+1} = \frac{p}{(1-p)q} \pi_i.    (2)

Since Equation (2) holds for i = 0, 1, ..., we have that \pi_i = \pi_0 \alpha^i where

   \alpha = \frac{p}{(1-p)q}.    (3)

Requiring the state probabilities to sum to 1, we have that for \alpha < 1,

   \sum_{i=0}^{\infty} \pi_i = \pi_0 \sum_{i=0}^{\infty} \alpha^i = \frac{\pi_0}{1-\alpha} = 1.    (4)

Thus for \alpha < 1, the limiting state probabilities are

   \pi_i = (1-\alpha)\alpha^i,  i = 0, 1, 2, ....    (5)

In addition, for \alpha >= 1 or, equivalently, p >= (1-p)q, the limiting state probabilities do not exist.
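A brief numerical illustration (our sketch, with assumed arrival and service parameters):

   % Sketch: limiting probabilities for assumed values p=0.1, q=0.3
   p=0.1; q=0.3;
   alpha=p/((1-p)*q)           % 0.3704 < 1, so the limit exists
   i=0:20;
   pii=(1-alpha)*alpha.^i;     % geometric limiting distribution
   stem(i,pii); xlabel('i'); ylabel('\pi_i');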

. 381/2.1015 p4 = 0.1606 p3 = 0. 2. hot springs. (1) It is straightforward to show that this implies pn = The requirement that ∞ n=0 p0 ρ n /n! n = 1. . .com Phone:5017621195 Applying p0 + p1 + p2 + p3 + p4 = 1 yields p0 = 1. 2. c + 2. . .10 The M/M/c/∞ queue has Markov chain λ λ λ λ λ (2) 0 µ 1 2µ cµ c cµ c+1 cµ From the Markov chain. AR. c (ρ/c) pn−1 n = c + 1.2573 p2 = 0. .Name:joey iwatsuru Email:joeyiwat@yahoo. . . pn = 1 yields c (2) p0 = n=0 ρ c ρ/c ρ /n! + c! 1 − ρ/c n −1 (3) 83 Address:104 pine meadows loop. 401 and the stationary probabilities are p0 = 0. .4151 p1 = 0. c + 2. 443. . . . . c n−c c p0 (ρ/c) ρ /c! n = c + 1.0655 Quiz 12. the stationary probabilities must satisfy pn = (ρ/n) pn−1 n = 1. us (United States) Zip Code:71901 . 014. .
