
INTRODUCTION

In most physical situations additional filtering is introduced by the medium or the system through which signals are transmitted. A digital signal transmitted over the wires and cables of the telephone plant is smeared out in time by the distributed capacitance of the lines.

Conceptually we denote this by saying that some input signal x(t) is converted to another signal y(t) after passing through the physical system. This is indicated in the figure below.

If we are attempting to recover the input x(t), we must know the characteristics of the medium. For simple systems or media, the transfer function or impulse response can be determined quite readily in a straightforward way. But in much more complicated situations, such as an entire telephone system over which signals must be conveyed, or the water through which underwater sound signals must travel, these physical considerations are not as readily applied in determining the characteristics of the medium.

In addition, the medium or the signal may be changing its characteristics with time. The method we discuss here, as an application of maximum-likelihood estimation, is therefore to assume a model of the system under study with certain parameters unknown, and to use measurements to determine these parameters.

Here we take as a representation of the linear system a nonrecursive digital filter. In digital signal processing we are therefore interested in discrete-time models, in which signal samples are measured only at prescribed instants of time rather than at all times. Our objective is then to estimate the filter (system) coefficients.

Specifically, if the input-signal samples are $x_j$, the samples at the output of the system under consideration are given by

$$y_j = \sum_{n=0}^{m} h_n x_{j-n}$$

We thus are interested in determining the $m + 1$ coefficients $h_n$, $n = 0, 1, \ldots, m$, that characterize the system. Roughly speaking, the number of coefficients needed, $m + 1$ in this case, is determined by the expected spread or dispersion introduced by the system.
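As a concrete illustration, here is a minimal Python sketch of this output computation; the coefficient values and input samples are made up for illustration only.

```python
# Sketch: output of a nonrecursive (FIR) digital filter,
#   y_j = sum_{n=0}^{m} h_n * x_{j-n}
# The m + 1 = 7 coefficients and the input pulse below are illustrative.
h = [0.05, 0.1, 0.3, 1.0, 0.3, 0.1, 0.05]     # filter (system) coefficients h_0..h_m
x = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # input samples: a solitary pulse

y = []
for j in range(len(x)):
    # sum over the m + 1 coefficients; samples before the start are taken as zero
    y.append(sum(h[n] * x[j - n] for n in range(len(h)) if j - n >= 0))

print(y)   # the single input pulse is spread over m + 1 = 7 output samples
```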

As an example, consider the solitary pulse $x_0$ introduced at time $t = 0$ in the figure below.

The output is shown spread out over seven time slots. One would then expect the model to need at least seven coefficients for appropriate characterization.

In many physical situations the system impulse response builds up to a peak value some time units after the impulse is applied and then decays back to zero. For simplicity we shall assume that the spread or dispersive effect of the medium is equally distributed about the peak value.

The output samples can then equally well be written with time now measured with reference to the peak output sample, and with $2N + 1 = m + 1$ the number of coefficients needed:

$$y_j = \sum_{n=-N}^{N} h_n x_{j-n}$$

The figure below compares the output with the input in time for that example.

Figure: comparison of input with output, and the nonrecursive filter.

One problem with measuring the desired coefficients directly is that the $y_j$ samples normally appear corrupted by noise. This noise may be introduced during signal transmission, or it may represent inaccuracies in the measurements or in the model representation itself. The actual received information is thus given by

$$y_j = \sum_{n=-N}^{N} h_n x_{j-n} + n_j$$

with the desired coefficients now to be determined from noisy measurements.

How does one now estimate the $h_n$'s from the measured samples $y_j$? One common method is to send a known signal sequence $a_j$, measure the error $e_j = y_j - a_j$ introduced by the medium and the additive noise, and use the sequence of errors to find the $h_n$'s. An example of such a known signal is a pseudorandom pulse sequence: a relatively long sequence of binary pulses, $+1$ or $-1$, one per time slot or sampling interval, that can be made to approximate truly random binary symbols.

The output signal sequence is then

$$y_j = \sum_{n=-N}^{N} h_n a_{j-n} + n_j$$

If there were no dispersion but simply an innate time delay of $N$ units along the medium, the output would be $h_0 a_j + n_j$. The remaining terms in $y_j$ represent distortion due to the medium. Since we assume $a_j$ known, we can measure the error $e_j = y_j - a_j$ and use it to estimate the medium coefficients.

The error is therefore given by

$$e_j = y_j - a_j = h_{-N} a_{j+N} + \cdots + (h_0 - 1) a_j + \cdots + h_N a_{j-N} + n_j = \sum_{n=-N}^{N} h_n' a_{j-n} + n_j$$

with the primed notation $h_n'$ indicating that $h_0' = h_0 - 1$ while $h_n' = h_n$ for $n \neq 0$. Assume that we now measure $K$ such error terms in succession, say $e_1, e_2, \ldots, e_K$, and use these measured errors to estimate the $h_n$ coefficients.
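To make the setup concrete, here is a minimal Python sketch of generating the error sequence; the channel coefficients h_true, the noise level sigma, the record length K, and the use of a randomly drawn ±1 sequence in place of a true pseudorandom generator are all illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed channel: 2N + 1 = 5 coefficients h_{-N}, ..., h_0, ..., h_N
N = 2
h_true = np.array([0.05, 0.2, 1.0, 0.2, 0.05])
sigma = 0.05          # noise standard deviation (assumed)
K = 1000              # number of error samples measured

# Known +/-1 training sequence a_j (drawn at random here, standing in for a
# pseudorandom sequence)
a = rng.choice([-1.0, 1.0], size=K + 2 * N)

# Received samples y_j = sum_n h_n a_{j-n} + n_j, for the K positions where
# the whole filter overlaps the training sequence
y = np.array([
    sum(h_true[n + N] * a[j - n] for n in range(-N, N + 1))
    + sigma * rng.standard_normal()
    for j in range(N, N + K)
])

# Measured error sequence e_j = y_j - a_j; the subtracted a_j is what turns
# the center coefficient into h_0' = h_0 - 1
e = y - a[N:N + K]
```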

Using maximum-likelihood estimation we find the $h_n$'s. Specifically, with the noise samples assumed Gaussian and independent, the ensemble of $K$ samples $e_1, e_2, \ldots, e_K$, denoted by the vector $\mathbf{e}$ and conditioned on the coefficients $h_{-N}, \ldots, h_0 - 1, \ldots, h_N$ to be estimated, is itself jointly Gaussian, with the density function $f(\mathbf{e} \mid h_{-N}', \ldots, h_N')$ given by the product of the individual density functions:

$$f(\mathbf{e} \mid h_{-N}', \ldots, h_N') = \frac{1}{(2\pi\sigma^2)^{K/2}} \exp\left[-\frac{1}{2\sigma^2} \sum_{j=1}^{K} \left(e_j - \sum_{n=-N}^{N} h_n' a_{j-n}\right)^2\right]$$

The maximum-likelihood estimates are now found by setting the derivatives of the logarithm of $f(\cdot)$ with respect to the coefficients equal to zero:

$$\frac{\partial}{\partial \hat{h}_i'} \ln f(\mathbf{e} \mid h_{-N}', \ldots, h_N') = 0 = \sum_{j=1}^{K} \left(e_j - \sum_{n=-N}^{N} \hat{h}_n' a_{j-n}\right) a_{j-i}, \qquad i = -N, \ldots, N$$

The hat notation is again used to denote the maximum-likelihood estimate. Rewriting this set of equations, we have

$$\sum_{j=1}^{K} a_{j-i} e_j = \sum_{j=1}^{K} \sum_{n=-N}^{N} \hat{h}_n' a_{j-n} a_{j-i}, \qquad i = -N, \ldots, N$$

We thus have to solve this set of $2N + 1$ equations simultaneously to find the estimates $\hat{h}_n'$. They look rather formidable but can be put in a less forbidding form by defining the coefficients

$$g_i = \sum_{j=1}^{K} a_{j-i} e_j, \qquad R_{in} = \sum_{j=1}^{K} a_{j-n} a_{j-i}$$

The set of equations is then written much more simply as

$$g_i = \sum_{n=-N}^{N} \hat{h}_n' R_{in}, \qquad i = -N, \ldots, N$$

Writing these equations in vector form we get $\mathbf{g} = \mathbf{R}\hat{\mathbf{h}}'$, and therefore $\hat{\mathbf{h}}' = \mathbf{R}^{-1}\mathbf{g}$. These vector equations are easily manipulated, but they can be costly to solve if the order of the matrices, here $(2N + 1) \times (2N + 1)$, is large.
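A minimal numpy sketch of forming $g_i$ and $R_{in}$ and solving for the estimates follows, continuing the illustrative variables a, e, and N from the earlier sketch; the function name ml_estimate and the use of np.linalg.solve instead of an explicit matrix inverse are choices made here, not taken from the text.

```python
import numpy as np

def ml_estimate(a, e, N):
    """Sketch: solve the normal equations g = R h' for the 2N + 1 estimates.

    a : known +/-1 training sequence of length K + 2N
    e : measured error sequence e_j = y_j - a_j, of length K
    N : one-sided memory of the model, so there are 2N + 1 coefficients
    """
    K = len(e)
    j = np.arange(N, N + K)          # positions of the K error samples within a
    # Column n + N of A holds the samples a_{j-n}, for n = -N, ..., N
    A = np.column_stack([a[j - n] for n in range(-N, N + 1)])
    g = A.T @ e                      # g_i  = sum_j a_{j-i} e_j
    R = A.T @ A                      # R_in = sum_j a_{j-n} a_{j-i}
    h_hat = np.linalg.solve(R, g)    # solves R h' = g without forming R^{-1}
    h_hat[N] += 1.0                  # h_0 = h_0' + 1 recovers the center coefficient
    return h_hat

# Continuing the earlier sketch, ml_estimate(a, e, N) should come out close to h_true.
```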

Iterative techniques have been developed for solving such equations on a computer. Here we simply point out another estimate for the $h_n$'s, developed from the maximum-likelihood approach just considered, that obviates the need for the simultaneous solution of $2N + 1$ equations, i.e., going to suboptimum estimation procedures.
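The text does not specify which iterative technique is meant; purely as an illustration, the sketch below applies one simple scheme, a Richardson (gradient-type) iteration, to the equations $\mathbf{g} = \mathbf{R}\hat{\mathbf{h}}'$, with $\mathbf{R}$ and $\mathbf{g}$ formed as in the sketch above.

```python
import numpy as np

def solve_iteratively(R, g, steps=200, mu=None):
    """Sketch: Richardson (gradient-type) iteration for R h' = g, avoiding inversion."""
    h = np.zeros_like(g, dtype=float)
    if mu is None:
        # Any step size 0 < mu < 2 / lambda_max(R) converges for symmetric
        # positive-definite R; the spectral norm gives a safe (assumed) choice.
        mu = 1.0 / np.linalg.norm(R, ord=2)
    for _ in range(steps):
        h += mu * (g - R @ h)        # step along the current residual
    return h

# Usage: with R and g formed as in the earlier sketch, solve_iteratively(R, g)
# approximates R^{-1} g; adding 1 to the center entry again recovers h_0.
```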
