
Active Noise Cancellation Using Pseudo-Random MLB Sequences

Paulo A. C. Lopes; Moisés S. Piedade
Universidade Técnica de Lisboa, IST/INESC, Rua Alves Redol n.9, 1000-029 Lisboa, Portugal
paclopes@eniac.inesc.pt

Abstract

The Filtered-X LMS algorithm is widely used in ANC, but it requires a model of the secondary path. One of the most popular algorithms used for on-line secondary path modeling is the additive random noise technique, in which a white noise signal is added to the anti-noise signal. This signal is then used as the input to the Least Mean Squares (LMS) algorithm so that it models the secondary path. In the field of acoustics instrumentation there has been a rising interest in pseudo-random Maximum Length Binary Sequences (MLBS) for system identification. This paper proposes to replace the white noise signal by an MLBS. This enables us to develop a recursive least squares algorithm with a computational complexity equal to half that of the LMS algorithm. Simulation results are presented; they attest to the validity of the algorithm and compare it with the additive random noise technique.

1 Introduction

Active Noise Control (ANC) systems have been successfully applied to mitigate the low frequency limitations of traditional passive noise control strategies [4][8].
The Filtered-X LMS algorithm is widely used in ANC, but it requires a model of the secondary path, and if there are significant changes during the operation of the system, on-line modeling is essential. One of the most popular algorithms used for on-line secondary path modeling is the additive random noise technique [3]. In this technique, a white noise signal s(n) is added to the anti-noise signal. This signal is then used as the input to the LMS algorithm so that it models the secondary path (figure 1). This algorithm has been improved [1] by adding an additional primary path modeling filter to reduce the disturbance produced by the reference signal, resulting in a cleaner error signal ε(n).
Figure 1. Additive Random Noise and MLBS based algorithm for on-line Secondary Path Modeling.

Figure 2. MLBS based algorithm for on-line Secondary Path Modeling.
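As a concrete illustration of the additive random noise technique of figure 1, the sketch below injects a white probe signal and lets an LMS filter identify a stand-in secondary path. All names and values are hypothetical; the disturbance and anti-noise signals of a real ANC system are omitted for clarity.

```python
import numpy as np

# Minimal sketch of the additive random noise technique (figure 1): the white
# probe s(n) passes through a stand-in secondary path, and an LMS filter driven
# by the same probe models that path. Disturbance/anti-noise terms are omitted.
rng = np.random.default_rng(0)

N = 32                                   # model length
S_true = rng.standard_normal(N) * 0.1    # stand-in "true" secondary path
S_hat = np.zeros(N)                      # adaptive secondary path model
mu = 0.01                                # LMS step size

s = rng.standard_normal(20_000)          # white probe noise s(n)
for n in range(N - 1, len(s)):
    x = s[n - N + 1:n + 1][::-1]         # most recent N probe samples
    e = S_true @ x - S_hat @ x           # modeling error at the error sensor
    S_hat += mu * e * x                  # LMS update

print(np.max(np.abs(S_hat - S_true)))    # small residual misalignment
```

In the noise-free case above the model converges to the true path; with measurement noise present, the residual misalignment would instead settle at the steady-state value discussed later in the paper.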



This research was supported by PRAXIS XXI under contract number BD / 13791 / 97

In the field of acoustics instrumentation there has been a rising interest in pseudo-random Maximum Length Binary Sequences (MLBS) for system identification [2]. These sequences are applied to acoustic transfer function estimation with advantages over white noise based algorithms. This paper proposes to replace the white noise signal by an MLBS and to develop an adaptive algorithm that uses the special properties of these sequences. MLBS are periodic pseudo-random binary sequences with a very important property: their circular auto-correlation is essentially an impulse. This enables us to develop a rectangular-window recursive least-squares algorithm [9] with a computational complexity equal to half that of the LMS algorithm. Simulation results are presented. They attest

to the validity of the algorithm and compare it with the additive random noise technique.

Maximum Length Binary Sequences

MLBS are periodic pseudo-random binary sequences, conveniently generated recursively by shift registers [7]. They have a very important property: their circular auto-correlation is an impulse, apart from a DC "error". Namely, if s(n) is an MLBS with period L = 2^N − 1 and with '1' and '0' mapped respectively to the levels +1 and −1, its renormalized circular auto-correlation function is [2]:

$$\varphi'_{ss}(n) = s(n) \circledast s(n) = \frac{1}{L+1}\sum_{k=0}^{L-1} s(k)\,s(k+n) = \delta'(n) - \frac{1}{L+1} \qquad (1)$$

where $\delta'(n) = \sum_{i=-\infty}^{+\infty}\delta(n - L\,i)$ is the period-L unit sample. This means that if s(n) is applied to a linear time-invariant system with impulse response h(n), and the cross-correlation with s(n) is then calculated, we get (for DC-free h(n)):

$$\hat{h}(n) = \big(h(n) * s(n)\big) \circledast s(n) = h(n) * \delta'(n) = h'(n) \qquad (2)$$

h'(n) is the periodic impulse response of the system, which equals the true impulse response for L greater than the length of h(n).

MLBS Based On-line Secondary Path Modeling Algorithm

Replacing the white noise in the additive random noise algorithm (see figure 2) with an MLB sequence, and calculating the renormalized circular cross-correlation of the cleaned error signal ε(n) with the sequence s(n), an estimate of the secondary path S(n) is obtained:

$$\epsilon(n) \circledast s(n) = \big(S(n) * s(n)\big) \circledast s(n) + v(n) \circledast s(n) = S'(n) + v'(n) \qquad (3)$$

v'(n) represents the error due to the components of ε(n) uncorrelated with s(n). Now, assuming the secondary path is slowly changing, its estimate at time n, Ŝ'_j(n), can be calculated using the most recent values of the renormalized circular cross-correlation function:

$$\hat{S}'_j(n) = \frac{1}{L+1}\sum_{k'=0}^{L-1} e(n-k')\,s(n-k'-j) \qquad (4)$$

So there is a simple formula for calculating the secondary-path impulse response, and it is still possible to reduce the computational complexity by using the recursion:

$$\Delta = \frac{e(n+1) - e(n+1-L)}{L+1}, \qquad \hat{S}'_j(n+1) = \hat{S}'_j(n) + \Delta\,s(n+1-j) \qquad (5)$$

When in this form, the algorithm has half the computational cost of the LMS algorithm, since it does not require the implementation of an FIR filter to compute the error signal. It can be shown [5] that this algorithm is an exact solution to the rectangular-window least-squares problem of path identification.
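The sequence generation and the cross-correlation identification can be sketched numerically. In the sketch below the shift-register taps correspond to an assumed primitive polynomial, x^5 + x^2 + 1, giving a period of L = 31; all names and the test path are illustrative. It checks the impulse-like circular auto-correlation of equation (1) and recovers a short DC-free impulse response by renormalized circular cross-correlation, as in equations (2) and (4).

```python
import numpy as np

def mlbs(nbits, taps):
    """Maximum length binary sequence, period L = 2**nbits - 1, levels +/-1.

    Fibonacci shift register; `taps` lists the stages fed back (XOR). The
    feedback polynomial is assumed primitive, which guarantees maximum period.
    """
    reg = [1] * nbits
    out = []
    for _ in range(2 ** nbits - 1):
        out.append(1.0 if reg[-1] else -1.0)
        fb = 0
        for t in taps:
            fb ^= reg[t - 1]
        reg = [fb] + reg[:-1]
    return np.array(out)

s = mlbs(5, taps=(5, 2))   # L = 31; taps from x^5 + x^2 + 1 (assumed primitive)
L = len(s)

# Property (1): circular auto-correlation is L at lag 0 and -1 at other lags.
corr = np.array([s @ np.roll(s, -n) for n in range(L)])
print(corr[:4])                          # [31. -1. -1. -1.]

# Equations (2)/(4): periodic output of a DC-free FIR path, then recovery of
# its taps by renormalized circular cross-correlation with s(n).
h = np.array([0.5, -0.3, 0.2, 0.1])
h = h - h.mean()                         # eq. (2) is exact for DC-free h(n)
y = sum(h[m] * np.roll(s, m) for m in range(len(h)))   # periodic path output
h_hat = np.array([y @ np.roll(s, j) for j in range(len(h))]) / (L + 1)
print(np.max(np.abs(h_hat - h)))         # ~0: taps recovered exactly
```

The recovery is exact (up to rounding) because the path here is shorter than L and DC-free; for a path with a DC component, equation (1) predicts a constant offset of (Σh)/(L+1) on every tap.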

Comparison of the MLBS Based Algorithm with the Additive Random Noise Algorithm

All the simulations presented in this section used a length-32 low-pass FIR filter to model the secondary path, S(n). The primary path was modeled by the convolution of the secondary path filter with another length-32 low-pass FIR filter, Wo(n). The filters used in the adaptive algorithms were also length-32 FIR filters. In this way, perfect cancellation was possible in the case of stationary paths and error-free measurements. This section is divided in two parts. One refers to open loop results obtained in a simple system-identification task. The other refers to closed-loop results, when the algorithms are tested in a full ANC system. This can influence the performance, since in closed loop some form of feedback is possible. For instance, errors in the secondary path model increase the error in

the control filter, which increases the residual error of the system, which, in turn, increases the errors in the secondary path model. To analyze the algorithms it is important to have some quantitative measure of the accuracy of the secondary path model. To do this, a normalized form of the misalignment, σ_s², will be used:

$$\sigma_s^2(n) = \frac{1}{N}\sum_{i=0}^{N-1}\left(\hat{S}_i(n) - S_i(n)\right)^2 \qquad (6)$$

Open loop results

In order to make comparisons between the algorithms, it is necessary to match their parameters, namely the MLBS length and the step size. This is done so that the open loop residual steady-state misalignments are the same for both. In [5], it is shown that the steady-state normalized misalignment of the LMS algorithm is:

$$\sigma_s^2 = \frac{\mu\,(N+1)}{2}\,J_{min} \qquad (7)$$

For the MLBS based algorithm, the steady-state misalignment can be calculated from (4) and is given by:

$$\sigma_s^2 = \frac{L}{(L+1)^2}\,J_{min} \qquad (8)$$

where J_min is the measurement noise power, N is the FIR filter length and L is the MLBS length. Figure 3 shows the learning curves of both algorithms, obtained by ensemble averaging of many computer simulations. The high value of the MLBS length (2047) was chosen simply to produce more readable charts. It can be seen that the LMS algorithm is quicker in the first iterations, but the MLBS based algorithm reaches the optimal solution before the LMS does; in some cases, in 60% less time.

Figure 3. Comparison of the learning curves of the LMS (thin line) and the MLBS based algorithm (heavy line) for different levels of measurement noise. The MLBS length was 2047.

Closed loop results

Now the simulation results of a full ANC system will be presented. In this case some form of feedback is present, which is why we call these closed loop results. These results refer to the learning curves and the tracking performance of the algorithms. For the simulations in this section, the additive signal power was 20 dB below the primary noise power.

Learning curves

The learning curves show how the algorithms adapt to sudden secondary path changes. In figure 4, the learning curves of both algorithms are shown. They were obtained by ensemble averaging of 128 computer simulations. The mean misalignment and the median of the noise reduction of the MLBS based algorithm are slightly lower than the corresponding values for the LMS algorithm. On the other hand, the mean noise is lower for the LMS algorithm. For a Gaussian, or indeed any symmetric distribution, the median equals the mean. This implies a strongly asymmetric distribution of the squared error signal, probably due to the previously mentioned feedback mechanism. This, in turn, seems to indicate that the MLBS based algorithm has a lower typical (median) squared error, but that situations where the feedback increases the error to higher values are more frequent.

Figure 4. Comparison of the learning curves of the LMS and MLBS based algorithms in closed loop form. The MLBS length was 127. "Md" means median and "Av" means average.
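The mean-versus-median reasoning can be checked numerically. The sketch below uses illustrative values, not the paper's simulation data: squaring a symmetric error signal produces a strongly right-skewed distribution whose mean sits well above its median.

```python
import numpy as np

# A symmetric error distribution has (population) mean equal to median, but
# its square is right-skewed: a few large values pull the mean above the
# median. Purely illustrative; not the paper's simulation data.
rng = np.random.default_rng(1)
e = rng.standard_normal(100_000)        # symmetric zero-mean "error" samples
e2 = e ** 2                             # squared error: strongly asymmetric

print(abs(np.mean(e) - np.median(e)))   # ~0 for the symmetric signal
print(np.mean(e2), np.median(e2))       # mean ~1.0, median ~0.45
```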
Tracking

Now it remains to compare the performance of both algorithms in tracking continuing secondary path changes in a full ANC system, with the FX-LMS algorithm. To do this, we used a Markov-1 model for the variation of the primary path Wo(n) and secondary path S(n), as in equation (9):

$$Wo_j(n+1) = (1-\alpha)\,Wo_j(n) + \alpha\,N_j(n)$$
$$S_j(n+1) = (1-\alpha)\,S_j(n) + \alpha\,V_j(n) \qquad (9)$$

N_j(n) and V_j(n) are sets of independent white noise processes. Their powers were σ_N² = (2 − α)/α, so that the variances of the tap coefficients were one.

In figure 5, the variation of the misalignment with the LMS algorithm step size (μ) and the MLBS length (L) is plotted. These curves were obtained through ensemble averaging of many computer simulations. The abscissa for the plot of the LMS algorithm was made 1.5/μ so that the minima of the two curves roughly coincide. It can be seen that in general the LMS algorithm achieves a slightly lower misalignment.

Figure 6 shows the time-domain comparison of both algorithms. In this case, we do not use medians because they give results very similar to the mean. This implies that, in this case, the error-signal power distribution should be more symmetric. The MLBS based algorithm performs slightly better, but the difference is small and is related to the way the step size and MLBS length were matched (see [5]).

Figure 5. Misalignment for the MLBS based algorithm (heavy line) and the LMS algorithm (light line) as a function of the MLBS length. The minima of the two curves were made coincident. The reference-signal power was 0 dB. The forgetting factor, α, was 10^{-6}.

Figure 6. Comparison of the tracking performance of the LMS and MLBS based algorithms in closed loop form, when the step size and MLBS length are adjusted to produce the same residual error in open loop form. The MLBS length was 127.
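A quick sketch of the Markov-1 variation model of equation (9), confirming that choosing the noise power as (2 − α)/α gives each tap coefficient unit steady-state variance (the value of α and the sample count below are illustrative, not the paper's settings):

```python
import numpy as np

# Markov-1 (first-order autoregressive) tap variation, as in equation (9):
# w(n+1) = (1 - alpha) w(n) + alpha v(n), with noise power (2 - alpha)/alpha.
rng = np.random.default_rng(2)

alpha = 0.01
sigma_v = np.sqrt((2 - alpha) / alpha)   # makes the stationary tap variance 1

noise = alpha * sigma_v * rng.standard_normal(1_000_000)
w = 0.0
hist = np.empty(noise.size)
for n in range(noise.size):
    w = (1 - alpha) * w + noise[n]       # one simulated tap coefficient
    hist[n] = w

print(np.var(hist[100_000:]))            # close to 1 after the transient
```

The stationary variance of this recursion is α²σ_v² / (1 − (1 − α)²) = α σ_v² / (2 − α), which equals one exactly when σ_v² = (2 − α)/α.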

Discussion

The use of the MLBS enables us to develop a working exact rectangular-window recursive least-squares algorithm with half the computational complexity of the LMS algorithm. The LMS algorithm, for white noise excitation, is in fact an approximation of the recursive least squares algorithm with an exponential window. Since the MLBS allow an exact implementation, if the window types were equal, the MLBS based algorithm would certainly outperform the LMS algorithm. However, the rectangular window gives a higher importance to old values, so the algorithms based on it reach the optimal residual noise value in fewer iterations, but are slower at the beginning. This behavior is shown in figure 3. In ANC, the initial convergence is critical, because of the feedback in the system. Namely, the faster initial convergence of the secondary path model leads to a faster initial convergence of the primary path model, which reduces the residual noise and leads to a faster overall convergence of the secondary path model. In the implementation described in this article, this proved to be crucial and resulted in a slightly better performance for the LMS algorithm (see figure 5). Nevertheless, both algorithms perform very similarly. Namely, the small difference in the accuracy of the model of the secondary path (misalignment) has a very small effect on the noise reduction (figure 6). However, the proposed algorithm has half the computational complexity of the LMS algorithm, which gives it an advantage. Moreover, the fact that the sequences are formed only by positive and negative ones eliminates the need for a multiplier in a hardware implementation, and the absence of a white noise generator might also be helpful. However, in some cases the memory requirements may be greater for the MLBS based algorithm.
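The multiplier-free point follows from the sequence levels being ±1: the coefficient update of equation (5) degenerates to a conditional add or subtract. A sketch with hypothetical names:

```python
# Since the MLBS takes only the values +1 and -1, the update of equation (5),
# S_hat[j] += delta * s(n+1-j), needs no multiplication: add delta when the
# sequence sample is +1 and subtract it when -1. Names are illustrative.

def update_taps(S_hat, s_recent, delta):
    """In-place tap update; s_recent[j] holds s(n+1-j) as +1/-1 values."""
    for j, sample in enumerate(s_recent):
        if sample > 0:
            S_hat[j] += delta        # s = +1: add
        else:
            S_hat[j] -= delta        # s = -1: subtract
    return S_hat

taps = update_taps([0.0, 0.0, 0.0], [1, -1, 1], 0.25)
print(taps)                          # [0.25, -0.25, 0.25]
```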

Conclusion

In short, MLBS are increasingly used in acoustic instrumentation for modeling acoustic transfer functions, with performance and computational advantages over white noise based algorithms. In ANC, the feedback present in the system and the required tracking properties mean that the resulting performance remains almost equal to that of the LMS algorithm, for a system such as the one described in this article. However, MLBS allow a reduction of the computational complexity to roughly half that of the LMS algorithm.

REFERENCES
[1] C. Bao, P. Sas, and H. Van Brussel, "Adaptive active control of noise in 3-D reverberant enclosures," Journal of Sound and Vibration, 161, 501-514, 1993.
[2] D. D. Rife and J. Vanderkooy, "Transfer-Function Measurement with Maximum-Length Sequences," J. Audio Eng. Soc., vol. 37, no. 6, June 1989.
[3] L. J. Eriksson and M. C. Allie, "Use of random noise for on-line transducer modeling in an adaptive active attenuation system," J. Acoustic Society of America, 85(2): 797-802, 1989.
[4] P. Lopes, B. Santos, M. Bento, and M. Piedade, "Active Noise Control System," RecPad98 Proceedings, IST, Lisbon, Portugal, March 1998.
[5] P. Lopes and M. Piedade, "Pseudo-random MLS sequences for on-line secondary path modeling," ACTIVE 99 Proceedings, Fort Lauderdale, Florida, USA, December 1999.
[6] P. Lopes and M. Piedade, "The Kalman filter in active noise control," ACTIVE 99 Proceedings, Fort Lauderdale, Florida, USA, December 1999.
[7] S. W. Golomb, "Shift Register Sequences," Aegean Park Press, Laguna Hills, CA, 1982.
[8] S. M. Kuo and D. R. Morgan, "Active Noise Control Systems: Algorithms and DSP Implementations," John Wiley & Sons, 1996.
[9] S. Haykin, "Adaptive Filter Theory," Prentice-Hall, 1991.
