
An Input-Output Clustering Method for Fuzzy System Identification

Di Wang, Xiao-Jun Zeng and John A. Keane

Abstract- Clustering algorithms are often used for fuzzy system identification. However, most clustering algorithms do not consider the outputs for clustering. In addition, these algorithms do not consider how to obtain the optimal number of clusters. Without the optimal number of clusters, the final set of clusters may be inappropriate. To address this, this paper presents an Input-Output Clustering (IOC) algorithm to determine both the correct number of clusters and the appropriate location for them by considering both inputs and outputs. The proposed algorithm, when used for fuzzy system identification, achieves better performance than existing clustering methods. This performance is illustrated by examples of function approximation and dynamic system identification.
I. INTRODUCTION

Fuzzy systems are used in many areas due to their representation capability and the recent development of methods for fuzzy system identification [1, 2]. Two approaches are usually used for fuzzy system identification. The first is to define fuzzy rules by grid partition; however, this grid-based method suffers from the "curse of dimensionality". The alternative is to cluster the data first, each cluster then becoming an initial fuzzy rule, which relieves the "curse of dimensionality" in most cases. Thus, clustering algorithms, including the Fuzzy C-Means (FCM) algorithm [1, 2] and the K-Nearest Neighbour (KNN) algorithm [3], are usually used to determine the initial fuzzy rules for fuzzy systems. However, these clustering algorithms do not consider the outputs for clustering. In addition, they only try to find appropriate locations for the clusters, without considering the optimal number of clusters; without the optimal number of clusters, the final clustering may be inappropriate. To address this problem, we introduce a concept of separability and, based on it, propose an Input-Output Clustering (IOC) algorithm, which determines both the optimal number of clusters and appropriate locations for them by considering both inputs and outputs. Due to these improvements, IOC achieves better performance than existing clustering methods when used for fuzzy system identification, as illustrated by example simulations.

The idea of IOC is to group data within a connective input region with similar outputs as the same cluster; to group data located in non-connective input regions with similar outputs as different clusters; and to group data with different outputs as different clusters. To summarize, the advantages of IOC are:
* IOC groups data based not only on the similarity of the inputs, but also on the similarity of the outputs.
* IOC can automatically locate non-connective input regions with similar outputs.
* IOC achieves appropriate final clusters by introducing the concept of separability.
* These advantages result in better approximation performance than existing clustering methods when used for fuzzy system identification.

This paper is organized as follows: Section II overviews clustering algorithms; the IOC approach is introduced in Section III; Section IV then presents the IOC algorithm; Section V illustrates the advantages of IOC using classical simulation examples; conclusions are given in Section VI.

This work was supported by the U.K. EPSRC under Grant EP/C513355/1. The authors are with the School of Computer Science, University of Manchester, M60 1QD, U.K. (e-mail: x.zeng@manchester.ac.uk; john.keane@manchester.ac.uk).
II. OVERVIEW OF CLUSTERING ALGORITHMS


A good clustering algorithm should determine both the optimal number of clusters and their appropriate centroids for a dataset. Popular clustering algorithms such as FCM and KNN do not address how to determine the optimal number of clusters needed. From another point of view, clustering may consider only the inputs (unsupervised learning) or both the inputs and the outputs (supervised learning) [4-9]. It is reasonable to use the outputs as guidance for clustering and approximation [4]. For example, in Fig. 1, a standard FCM algorithm that ignores the outputs groups all data into the two clusters shown in Fig. 1(a), which mix black and white samples; considering the outputs gives the result shown in Fig. 1(b). Thus, by considering both inputs and outputs, the optimal set of clusters can be identified. One simple way of considering the outputs for clustering is to combine the inputs x and outputs y into a new vector z = (x, wy) for clustering [4], where w is the weight of the output: a heuristic value denoting the contribution of the output y to clustering. An alternative is to cluster the inputs x in the output context [6-9]. However, no work has addressed how to determine the optimal number of clusters. Unfortunately, a non-optimal number of clusters yields an incorrect resultant model: too many clusters result in redundant, over-fitted modelling, whilst too few clusters result in rough and imprecise modelling. Our proposed scheme addresses this inadequacy. IOC considers the outputs as a constriction for input-vector clustering, and finds the optimal number of clusters automatically by

1-4244-1210-2/07/$25.00 © 2007 IEEE.

locating the non-connective input regions within each output constriction. The idea and the algorithm for IOC are explained in Sections III and IV, respectively.
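The augmented-vector idea above (clustering z = (x, wy)) can be sketched as follows. This is a minimal illustration, not the paper's implementation: plain k-means stands in for FCM, and the data, weight w and names are assumptions.

```python
import numpy as np

def kmeans(z, init, iters=50):
    """Plain k-means on the augmented vectors, seeded with the given indices."""
    centers = z[list(init)].astype(float)
    for _ in range(iters):
        # distance of every sample to every centre, then nearest-centre labels
        d = np.linalg.norm(z[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = z[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(0)
# two overlapping input groups whose outputs differ (assumed toy data)
xa = rng.normal(0.00, 0.05, 50); ya = np.zeros(50)
xb = rng.normal(0.15, 0.05, 50); yb = np.ones(50)
x = np.concatenate([xa, xb]); y = np.concatenate([ya, yb])

w = 5.0                           # heuristic output weight
z = np.column_stack([x, w * y])   # augmented vectors z = (x, wy)
labels = kmeans(z, init=(0, 99))  # one seed drawn from each group
```

With w = 5 the output term dominates the distance, so the two clusters recover the output groups even though the inputs overlap; with w = 0 the split would depend on the inputs alone.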

III. INPUT-OUTPUT CLUSTERING

A. Overview of Input-Output Clustering

The goal of IOC is to separate similar inputs with similar outputs from the remaining data. IOC is motivated by the following idea: data with different outputs should go to different clusters; data with similar outputs but non-connective inputs should go to different clusters; and data with connective inputs and similar outputs should go to the same cluster. To achieve this objective, IOC consists of two stages of clustering: rough clustering and refined clustering. In the rough clustering stage, the output space is partitioned into intervals, each being an output constriction, and the training data are roughly grouped based on this output partition; the resultant groups are called Clusters in the following discussion. In the refined clustering stage, data within each output constriction are further grouped based on the data distribution (connectivity character) of the inputs; the resultant refined clusters within one output constriction are called Sub-clusters in the following discussion. If all data within one output constriction are located in one connective input region, then only one sub-cluster is needed to separate these data from the remaining data. If the data within one output constriction are segmented into several non-connective input regions by data outside this output constriction, then multiple sub-clusters (each representing one connective input region) are needed to separate these data from the remaining data. The number of sub-clusters needed is equal to the corresponding number of non-connective input regions within that output constriction. By doing this, the optimal number of sub-clusters and the appropriate locations for these sub-clusters within each output constriction can be found, and hence the overall optimal number of sub-clusters.

B. Fuzziness of input region clustering

Consider the training dataset (X, Y) = \{(x^1, y^1), \ldots, (x^k, y^k), \ldots, (x^N, y^N)\}. The partition matrix for clustering is defined as U = (u_{kj})_{N \times M}, where N is the total number of training samples and u_{kj} is the membership of the kth training sample in the jth sub-cluster. In IOC, M = \sum_{r=1}^{m} m_r, where m_r is the number of sub-clusters for the rth output constriction and m is the total number of output constrictions. Under hard clustering, each training sample x^k belongs to one (and only one) sub-cluster j:

u_{kj} = \begin{cases} 1 & \text{if } x^k \in \text{sub-cluster}_j \\ 0 & \text{else} \end{cases}    (1)

For one-dimensional inputs, any sub-cluster represents a hard interval of the input space with similar outputs; in the high-dimensional case, any sub-cluster represents a hard hypercube of the input region with similar outputs. This hard clustering results in an unsmooth approximation, and it is even harder to represent irregular input regions by hard hypercubes when the proposed clustering algorithm is applied to high-dimensional problems. So the idea of uncertainty (fuzziness) is introduced into IOC: a data sample should belong to its neighbouring clusters to some degree. To do this, the elements of the partition matrix are represented by fuzzy membership functions. Here a Gaussian function is applied as the membership function A_j(x), defined as:

A_j(x) = \exp\left(-\sum_{i=1}^{n} \frac{(x_i - v_{ji})^2}{2\sigma_{ji}^2}\right)    (2)

where x = (x_1, x_2, \ldots, x_n) is the input vector, and v_j = (v_{j1}, v_{j2}, \ldots, v_{jn}) and \sigma_j = (\sigma_{j1}, \sigma_{j2}, \ldots, \sigma_{jn}) are the centroid and width of the jth sub-cluster; j is the index over the M sub-clusters in all m output constrictions. To obtain these fuzzy sub-clusters, the FCM algorithm is used in the refined clustering process within each output constriction.
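The Gaussian membership of Equation (2) is straightforward to compute; a minimal sketch (function name is an assumption):

```python
import math

def gaussian_membership(x, v, sigma):
    """Equation (2): A_j(x) = exp(-sum_i (x_i - v_ji)^2 / (2 sigma_ji^2)).

    x, v and sigma are equal-length sequences: the input vector, the
    sub-cluster centroid, and the per-dimension widths."""
    return math.exp(-sum((xi - vi) ** 2 / (2.0 * si ** 2)
                         for xi, vi, si in zip(x, v, sigma)))
```

At the centroid the membership is exactly 1, and it decays smoothly with distance, which is what replaces the hard hypercube of Equation (1).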

C. Introducing separability

A main advantage of IOC is that it automatically finds the optimal number of sub-clusters in each output constriction, and hence the overall optimal number of sub-clusters. To achieve this, a criterion of separability is introduced. To define this criterion, a Gaussian function is used to define the membership in Equation (3):

A_j^r(x) = \exp\left(-\sum_{i=1}^{n} \frac{(x_i - v_{ji}^r)^2}{2(\sigma_{ji}^r)^2}\right)    (3)

where r is the index of the output constriction; the other parameters are the same as in Equation (2). The difference between Equations (2) and (3) is that Equation (2) is the general form of the membership, whereas Equation (3) is the membership of the jth sub-cluster within the rth output constriction. After applying the FCM algorithm, the data within one output constriction are grouped into several sub-clusters. The centres of these sub-clusters are obtained directly from the FCM algorithm by Equation (4):
v_{ji}^r = \frac{\sum_{k=1}^{N_r} u_{kj}^r x_i^k}{\sum_{k=1}^{N_r} u_{kj}^r}    (4)

where v_j^r = (v_{j1}^r, v_{j2}^r, \ldots, v_{jn}^r) is the centroid of the jth sub-cluster in the rth output constriction, N_r is the total number of data located within the rth output constriction, and u_{kj}^r is the element of the partition matrix U^r = (u_{kj}^r) for the rth output constriction, computed by the standard FCM membership update in Equation (5):

u_{kj}^r = \left[\sum_{l=1}^{m_r} \left(\frac{\|x^k - v_j^r\|}{\|x^k - v_l^r\|}\right)^{2/(f-1)}\right]^{-1}    (5)

where m_r is the number of sub-clusters for the rth output constriction and f > 1 is the FCM fuzzifier. The widths \sigma_j^r in (3) are defined by the deviation of the data within the rth output constriction by Equation (6):

\sigma_{ji}^r = \left(\frac{\sum_{k=1}^{N_r} u_{kj}^r (x_i^k - v_{ji}^r)^2}{\sum_{k=1}^{N_r} u_{kj}^r}\right)^{1/2}    (6)

where \sigma_j^r = (\sigma_{j1}^r, \sigma_{j2}^r, \ldots, \sigma_{jn}^r) is the width of the jth sub-cluster in the rth output constriction. Then, the criterion of separability s^r is defined by Equation (7):

s^r = \frac{\sum_{k=1}^{N_r} \sum_{j=1}^{m_r} A_j^r(x^k)}{\sum_{c=1}^{m} \sum_{k=1}^{N_c} \sum_{j=1}^{m_r} A_j^r(x^k)}    (7)

where N_c is the number of data within the cth output constriction, so the numerator sums over the data inside constriction r and the denominator over all data in the dataset. s^r lies between 0 and 1. From Equation (7), the separability is the sum of the membership activation produced by the data within the corresponding output constriction, divided by the sum of the membership activation produced by all data in the dataset. A large value of s^r indicates better separability: the data within the rth output constriction can be well separated from the remaining data. So our scheme seeks the number of sub-clusters m_r with the largest value of separability s^r:

m_r = \arg\max_{m_r = 1, \ldots, m_{max}} (s^r)    (8)

where m_{max} is the maximal predefined number of sub-clusters. The largest value of s^r indicates the optimal number of sub-clusters m_r within the rth output constriction. We illustrate this optimization property of Equations (7) and (8) with an example: y = \sin(x), x \in [0, 2\pi]. We consider output Constriction 2 and set its number of sub-clusters to m_2 = 1, as shown in Fig. 2(a). Fig. 2(b) shows the membership function A^2(x) generated by FCM applied to the input data within output Constriction 2. As Fig. 2(b) shows, A^2(x) is activated by data within both output Constriction 2 and output Constriction 1. If y \in Constriction 3 or y \in Constriction 4, then A^2(x) \approx 0; if y \in Constriction 1, then A^2(x) = a_1; if y \in Constriction 2, then A^2(x) = a_2, with a_1 > a_2. Then, by Equation (7),

s^2 = \frac{\sum_{x \in \text{Constriction 2}} A^2(x)}{\sum_{x \in \text{Constriction 1} \cup \text{Constriction 2}} A^2(x)}

is a small value, due to the heavy activation of A^2(x) by the data within output Constriction 1. The resultant sub-cluster for output Constriction 2 cannot separate the data within it from the remaining data, because the input data within output Constriction 1 are also included in it. So m_2 = 1, with a small value of s^2, is not the optimal number of sub-clusters for output Constriction 2.

Fig. 2. Membership function for output constriction 2 for y = sin(x) with m_2 = 1: (a) output partition into Constrictions 1-4; (b) the resulting membership function.

Then we set the number of sub-clusters for output Constriction 2 to m_2 = 2, as shown in Fig. 3(a). Fig. 3(b) shows the resultant membership functions A_1^2(x) and A_2^2(x) generated by FCM. As Fig. 3(b) shows, A_1^2(x) and A_2^2(x) are activated only by data within output Constriction 2. If y \in Constriction 2, then A_1^2(x) > 0 or A_2^2(x) > 0; if y \notin Constriction 2, then A_1^2(x) \approx 0 and A_2^2(x) \approx 0. Then s^2 \approx 1 by Equation (7).
s^2 is a value close to 1, due to the heavy activation of A_1^2(x) or A_2^2(x) by the data within output Constriction 2 and the near-zero activation of A_1^2(x) and A_2^2(x) by the data outside it. So two sub-clusters for output Constriction 2 separate the data within it well from the remaining data, and m_2 = 2, with a large value of s^2 (close to 1), is the optimal number of sub-clusters for output Constriction 2.

Fig. 3. Membership functions for output constriction 2 for y = sin(x) with m_2 = 2: (a) output partition; (b) the resulting membership functions.

Hence the larger value of s^r indicates the optimal number of sub-clusters m_r; s^r is an important criterion for identifying the optimal number of sub-clusters for each output constriction in our proposed algorithm.

D. Identification of fuzzy rules for fuzzy systems

We collect all the sub-clusters in each output constriction obtained by IOC and treat them as one set of sub-clusters. Each sub-cluster is one fuzzy rule. The jth sub-cluster is projected onto every input variable to obtain the premise part of a fuzzy rule, and the consequent part of this fuzzy rule is set to the middle value of its output constriction:

If x_1 is v_{j1}, x_2 is v_{j2}, \ldots, x_n is v_{jn}, then y is y_j

y_j = \frac{y_j^{max} + y_j^{min}}{2}

where v_{ji} is the ith central element of the jth sub-cluster, and y_j^{max} and y_j^{min} are the upper and lower bounds of the jth output constriction. We apply Gaussian membership functions for the fuzzy system:

A_{ji}(x_i) = \exp\left(-\frac{(x_i - v_{ji})^2}{2\sigma_{ji}^2}\right)    (9)

where x = (x_1, x_2, \ldots, x_n), A_{ji}(x_i) represents the membership of x_i for the jth fuzzy rule, and v_{ji} and \sigma_{ji} are the centre and deviation respectively, computed by Equations (4)-(6) and later fine-tuned by a gradient descent algorithm. Accordingly,

A_j(x) = \prod_{i=1}^{n} A_{ji}(x_i) = \exp\left(-\sum_{i=1}^{n} \frac{(x_i - v_{ji})^2}{2\sigma_{ji}^2}\right)    (10)

The centre-average defuzzifier is applied as:

o(x) = \frac{\sum_{j=1}^{M} y_j A_j(x)}{\sum_{j=1}^{M} A_j(x)}    (11)

IV. INPUT-OUTPUT CLUSTERING (IOC) ALGORITHM FOR FUZZY SYSTEM IDENTIFICATION

IOC consists of three stages:
* Rough clustering based on output partition
* Refined clustering: determination of the number and location of sub-clusters within each output constriction
* Parameter refinement by the gradient descent method

In the first stage, the output space is evenly partitioned. We apply an even hard interval partition of the output space for easy and clear location of the non-connective input regions within each output constriction. Data are grouped according to this hard interval partition, so that the optimal number of sub-clusters for each output constriction can be found; after this grouping, each data sample belongs to exactly one output constriction. Consider y = f(x_1, x_2, \ldots, x_n), where y \in [\alpha, \beta]. The output is evenly partitioned into m intervals [a_0, a_1) \cup \ldots \cup [a_{m-1}, a_m], where a_0 = \alpha and a_m = \beta. The input vectors can then be grouped into m clusters: for a training sample (x^k, y^k), where k is the index of the training data, if y^k \in [a_{r-1}, a_r) with r < m (or y^k \in [a_{r-1}, a_r] with r = m), then x^k \in C_r, where C_r denotes the rth cluster.
In the second stage, the number of sub-clusters for each output constriction is determined. After rough clustering, the data within each output constriction are further grouped based on the connectivity of the inputs, using FCM for the refined clustering. Different numbers of sub-clusters are tried (from 1 to the maximal predefined number), and the optimal one is that with the largest separability s^r, as defined by Equations (7) and (8). The rough initial values of the centroids and widths are determined along with the computation of s^r, by Equations (4) and (6).

In the third stage, the parameters v_{ji}, \sigma_{ji} and y_j are fine-tuned by the gradient descent algorithm. The objective is to minimize the Square Error (SE):

E = \frac{1}{2}(o(x) - y)^2    (12)

The formulae for the parameter updates are then, with e = o(x) - y and learning rate \eta:

y_j(t+1) = y_j(t) - \eta \frac{\partial E}{\partial y_j} = y_j(t) - \eta \, e \, \frac{A_j(x)}{\sum_{l=1}^{M} A_l(x)}    (13)

v_{ji}(t+1) = v_{ji}(t) - \eta \frac{\partial E}{\partial v_{ji}} = v_{ji}(t) - \eta \, e \, (y_j - o(x)) \frac{A_j(x)}{\sum_{l=1}^{M} A_l(x)} \cdot \frac{x_i - v_{ji}}{\sigma_{ji}^2}    (14)

\sigma_{ji}(t+1) = \sigma_{ji}(t) - \eta \frac{\partial E}{\partial \sigma_{ji}} = \sigma_{ji}(t) - \eta \, e \, (y_j - o(x)) \frac{A_j(x)}{\sum_{l=1}^{M} A_l(x)} \cdot \frac{(x_i - v_{ji})^2}{\sigma_{ji}^3}    (15)

TABLE 1. ERROR IN RMSE BY IOC FOR SIMULATION 1

Number of constrictions   Number of fuzzy rules   Training RMSE   Testing RMSE
2                         2                       0.13175696      0.10455946
3                         4                       0.0879254       0.09479383
4                         6                       0.08476824      0.09566857
5                         7                       0.03359765      0.03576547

TABLE 2. ERROR IN RMSE (RMSE / NUMBER OF HIDDEN NEURONS) BY PEDRYCZ'S METHOD FOR SIMULATION 1

Training results:
Clusters per context   3 constrictions   4 constrictions   5 constrictions   6 constrictions
2                      0.180 / 6         0.150 / 8         0.147 / 10        0.144 / 12
3                      0.192 / 9         0.140 / 12        0.123 / 15        0.114 / 18
4                      0.174 / 12        0.140 / 16        0.108 / 20        0.100 / 24
5                      0.149 / 15        0.136 / 20        0.102 / 25        0.092 / 30
6                      0.141 / 18        0.102 / 24        0.097 / 30        0.061 / 36

Testing results:
Clusters per context   3 constrictions   4 constrictions   5 constrictions   6 constrictions
2                      0.189 / 6         0.153 / 8         0.162 / 10        0.164 / 12
3                      0.214 / 9         0.156 / 12        0.136 / 15        0.132 / 18
4                      0.197 / 12        0.153 / 16        0.121 / 20        0.117 / 24
5                      0.166 / 15        0.136 / 20        0.113 / 25        0.107 / 30
6                      0.143 / 18        0.110 / 24        0.107 / 30        0.068 / 36
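The third-stage updates (13)-(15) for a single-input system can be sketched as below. The toy sample, initial parameters and learning rate are assumptions for illustration:

```python
import numpy as np

def forward(x, v, s, yj):
    """Equations (9)-(11) for one input: memberships A_j and output o(x)."""
    A = np.exp(-(x - v) ** 2 / (2.0 * s ** 2))
    return (yj * A).sum() / A.sum(), A

def gd_step(x, t, v, s, yj, eta=0.1):
    """One gradient step of Equations (13)-(15) on the sample (x, t)."""
    o, A = forward(x, v, s, yj)
    e = o - t
    S = A.sum()
    g = e * (yj - o) * A / S                       # shared factor in (14), (15)
    yj_new = yj - eta * e * A / S                  # (13)
    v_new = v - eta * g * (x - v) / s ** 2         # (14)
    s_new = s - eta * g * (x - v) ** 2 / s ** 3    # (15)
    return v_new, s_new, yj_new

v = np.array([0.0, 0.5, 1.0])   # initial centres (assumed)
s = np.full(3, 0.3)             # initial widths (assumed)
yj = np.zeros(3)                # initial consequents
x0, t0 = 0.3, 0.8               # one training sample (assumed)

err0 = abs(forward(x0, v, s, yj)[0] - t0)
for _ in range(200):
    v, s, yj = gd_step(x0, t0, v, s, yj)
err = abs(forward(x0, v, s, yj)[0] - t0)
```

Repeated steps drive the model output at x0 toward the target, confirming that the updates descend the SE of Equation (12).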

V. SIMULATIONS

To illustrate the advantages of IOC, several classical simulation examples are presented in this section.

Simulation 1. We begin with a function approximation problem with a single input and a single output [8]:

y = 0.6\sin(\pi x) + 0.3\sin(3\pi x) + 0.1\sin(5\pi x)

where x \in [-1, 1]. 200 data are randomly generated on the definition domain, 100 for training and 100 for testing. The accuracy in RMSE obtained by IOC is reported in Table 1. We set the number of output partitions between 2 and 5, as shown in the first column of Table 1; the second column shows the total number of fuzzy rules automatically generated by IOC, and the last two columns report the RMSE on the training and testing data. For comparison, the RMSE and the corresponding number of hidden neurons obtained by Pedrycz's method are reported in Table 2; the value before each slash is the RMSE, whilst the value after it is the number of hidden neurons needed. Each fuzzy rule obtained by IOC is equivalent to one hidden neuron in Pedrycz's method. The number of fuzzy rules obtained by IOC is determined by the input distribution within each output constriction, whereas the number of hidden neurons in Pedrycz's method is predefined and fixed. In addition, in Pedrycz's method the number of sub-clusters in each output context is the same, so some redundant hidden neurons may be involved. Hence IOC applies fewer but effective fuzzy rules (hidden neurons) for approximation, and so IOC obtains better performance than Pedrycz's method.
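The simulation-1 setup can be reproduced as follows (the random seed and the helper names are assumptions):

```python
import numpy as np

def target(x):
    # Simulation 1: y = 0.6 sin(pi x) + 0.3 sin(3 pi x) + 0.1 sin(5 pi x)
    return (0.6 * np.sin(np.pi * x) + 0.3 * np.sin(3 * np.pi * x)
            + 0.1 * np.sin(5 * np.pi * x))

def rmse(pred, actual):
    return float(np.sqrt(np.mean((pred - actual) ** 2)))

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 200)       # 200 random samples on [-1, 1]
y = target(x)
x_train, y_train = x[:100], y[:100]   # 100 for training
x_test, y_test = x[100:], y[100:]     # 100 for testing
```

Any identified model can then be scored with `rmse(model(x_test), y_test)` against the table entries.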

Fig. 4 and Fig. 5 show the simulation results obtained by IOC with the number of output constrictions set to 5, using 7 hidden neurons. Fig. 4 compares the actual outputs with the model outputs for the training data (a) and testing data (b); Fig. 5 shows the error for the training data (a) and testing data (b). IOC simulates this example very well.

Fig. 4. Comparison between the actual outputs and the model outputs by IOC for the training (a) and testing (b) data for simulation 1.

Fig. 5. Error by IOC for the training (a) and testing (b) data for simulation 1.

Simulation 2. This is a two-dimensional non-linear function approximation problem used for comparison [10]:

y = (1 + x_1^{-2} + x_2^{-1.5})^2

where x_1 \in [1, 5] and x_2 \in [1, 5]. 50 training samples are randomly generated on the definition domain. The improvement obtained by IOC is shown in Table 3: IOC obtains better accuracy using fewer parameters than the other methods. For reasons of space, detailed discussion is omitted.
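In code, the simulation-2 target reads (the evaluation points are assumptions):

```python
def target2(x1, x2):
    # Simulation 2: y = (1 + x1^-2 + x2^-1.5)^2 on x1, x2 in [1, 5]
    return (1.0 + x1 ** -2 + x2 ** -1.5) ** 2
```

For example, target2(1, 1) evaluates to (1 + 1 + 1)^2 = 9, the maximum over the domain, and the surface decays toward the (5, 5) corner.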
TABLE 3. PERFORMANCE COMPARISON IN MSE WITH OTHER ALGORITHMS FOR SIMULATION 2

Model                      Number of neurons   Number of parameters   MSE
Sugeno and Yasukawa [11]   6                   65                     0.0790
Emami and Turksen [12]     8                   91                     0.0040
Tsekouras [13]             8                   40                     0.0042
Kim et al. [14]            3                   21                     0.0197
IOC                        8                   40                     0.007534
IOC                        11                  55                     0.002520
IOC                        13                  65                     0.002529

Simulation 3. Finally, IOC is used to identify a non-linear dynamic system,


y(k) = g(y(k-1), y(k-2)) + u(k)

where

g(y(k-1), y(k-2)) = \frac{y(k-1)\, y(k-2)\, [y(k-1) - 0.5]}{1 + y^2(k-1)\, y^2(k-2)}
The objective is to approximate the non-linear part g(y(k-1), y(k-2)). 400 samples are generated, 200 for training and 200 for testing, with y(0) = y(1) = 0. The accuracy comparison in MSE is shown in Table 4: IOC obtains better accuracy using fewer fuzzy rules than the other methods.
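The simulation-3 plant can be iterated as below. The text above does not specify the input signal u(k), so the sinusoidal excitation here is an assumption purely for illustration:

```python
import math

def g(a, b):
    # non-linear part: g(a, b) = a*b*(a - 0.5) / (1 + a^2 * b^2)
    return a * b * (a - 0.5) / (1.0 + a * a * b * b)

def simulate(n, u):
    """Iterate y(k) = g(y(k-1), y(k-2)) + u(k) with y(0) = y(1) = 0."""
    y = [0.0, 0.0]
    for k in range(2, n):
        y.append(g(y[k - 1], y[k - 2]) + u(k))
    return y

# assumed excitation, for illustration only
y = simulate(50, lambda k: math.sin(2 * math.pi * k / 25))
```

Pairs ((y(k-1), y(k-2)), g(y(k-1), y(k-2))) from such a run form the training and testing samples for identifying the non-linear part.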
TABLE 4. PERFORMANCE COMPARISON IN MSE WITH OTHER ALGORITHMS FOR SIMULATION 3


Model         Number of rules   Training MSE   Testing MSE
GG-TLS [14]   12                3.7E-4         2.9E-4
GG-LS [14]    12                3.7E-4         2.9E-4
EM-TI [14]    12                2.4E-4         4.1E-4
EM-NI [14]    12                3.4E-4         2.3E-4
Wang [15]     28                3.3E-4         6.0E-4
Wang [16]     23                3.2E-5         1.9E-3
Wang [17]     20                6.8E-4         2.4E-4
IOC           3                 1.13E-3        1.12E-3
IOC           8                 4.28E-4        4.41E-4
IOC           10                4.2E-4         4.2E-4
IOC           12                4.9E-5         5E-5


VI. CONCLUSIONS

Clustering algorithms are widely used for fuzzy system identification. However, most clustering algorithms do not consider the outputs for clustering, and do not address determination of the optimal number of clusters; these two flaws may lead to poor fuzzy system identification. To address this, this paper has proposed an Input-Output Clustering algorithm that determines both the optimal number of clusters and appropriate locations for them by considering both inputs and outputs. To do this, a concept of separability based on the output constriction is proposed, which is the key to determining the correct number of sub-clusters for each output constriction. Due to these improvements, IOC achieves better approximation performance than existing clustering methods when applied to fuzzy system identification, as the simulation examples in Section V show.

REFERENCES

[1] A. F. Gómez-Skarmeta, M. Delgado and M. A. Vila, "About the use of fuzzy clustering techniques for fuzzy model identification", Fuzzy Sets and Systems, Vol. 106, No. 2, pp. 179-188, 1999.
[2] M. Delgado, A. F. Gómez-Skarmeta and F. Martin, "A fuzzy clustering-based rapid prototyping for fuzzy rule-based modelling", IEEE Trans. Fuzzy Systems, Vol. 5, No. 2, pp. 223-233, 1997.
[3] L. X. Wang, "Training of fuzzy logic systems using nearest neighbourhood clustering", Proc. Second IEEE Int. Conf. on Fuzzy Systems, Vol. 1, pp. 13-17, San Francisco, CA, USA, 1993.
[4] J. Gonzalez, I. Rojas, H. Pomares and J. Ortega, "A new clustering technique for function approximation", IEEE Trans. Neural Networks, Vol. 13, No. 1, pp. 132-142, 2002.
[5] E. Kim, M. Park and S. Ji, "A new approach to fuzzy modelling", IEEE Trans. Fuzzy Systems, Vol. 5, No. 3, pp. 328-337, 1997.
[6] W. Pedrycz, "Conditional fuzzy c-means", Pattern Recognition Letters, Vol. 17, No. 6, pp. 625-631, 1996.
[7] W. Pedrycz, "Conditional fuzzy clustering in the design of radial basis function neural networks", IEEE Trans. Neural Networks, Vol. 9, No. 4, pp. 601-612, 1998.
[8] W. Pedrycz, "Linguistic models as a framework of user-centric system modelling", IEEE Trans. Systems, Man and Cybernetics, Part A, Vol. 36, No. 4, pp. 727-745, 2006.
[9] J. M. Leski, "Generalized weighted conditional fuzzy clustering", IEEE Trans. Fuzzy Systems, Vol. 11, No. 6, pp. 709-715, 2003.
[10] V. H. Grisales, J. J. Soriano, S. Barato and D. M. Gonzalez, "Robust agglomerative clustering algorithm for fuzzy modelling purposes", Proc. 2004 American Control Conference, pp. 1782-1787, Boston, Massachusetts, 2004.
[11] M. Sugeno and T. Yasukawa, "A fuzzy-logic-based approach to qualitative modelling", IEEE Trans. Fuzzy Systems, Vol. 1, No. 1, pp. 7-31, 1993.
[12] G. E. Tsekouras, "On the use of the weighted fuzzy C-means in fuzzy modelling", Advances in Engineering Software, Vol. 36, No. 5, pp. 287-300, 2005.
[13] E. Kim, M. Park, S. Ji and M. Park, "A new approach to fuzzy modelling", IEEE Trans. Fuzzy Systems, Vol. 5, No. 3, pp. 328-337, 1997.
[14] J. Abonyi, R. Babuska and F. Szeifert, "Modified Gath-Geva fuzzy clustering for identification of Takagi-Sugeno fuzzy models", IEEE Trans. Systems, Man and Cybernetics, Part B, Vol. 32, No. 5, pp. 612-621, 2002.
[15] L. Wang and J. Yen, "Extracting fuzzy rules for system modeling using a hybrid of genetic algorithms and Kalman filter", Fuzzy Sets and Systems, Vol. 101, No. 3, pp. 353-362, 1999.
[16] J. Yen and L. Wang, "Application of statistical information criteria for optimal fuzzy model construction", IEEE Trans. Fuzzy Systems, Vol. 6, No. 3, pp. 362-371, 1998.
[17] J. Yen and L. Wang, "Simplifying fuzzy rule-based models using orthogonal transformation methods", IEEE Trans. Systems, Man and Cybernetics, Part B, Vol. 29, No. 1, pp. 13-24, 1999.
