5.1 Introduction
Every physical component possesses an inherent strength and is subjected, in operation, to an external stress; the component fails whenever the stress acting on it exceeds its strength. Let X denote the random strength of a component and Y the random stress it experiences. The reliability, or survival probability, of the component is

R = P(Y < X) = ∫_{−∞}^{∞} ∫_{−∞}^{x} f(x, y) dy dx,   (5.1.1)

where f(x, y) is the joint probability density function of (X, Y). When X and Y are independent with marginal densities f(x) and g(y),

R = ∫_{−∞}^{∞} ∫_{−∞}^{x} f(x) g(y) dy dx.   (5.1.2)

Writing G_Y(x) = ∫_{−∞}^{x} g(y) dy for the cumulative distribution function of the stress, this becomes

R = ∫_{−∞}^{∞} G_Y(x) f(x) dx.   (5.1.3)

If X_o and Y_o denote the observed strength and stress, the ratio X_o/Y_o is called the safety factor and the difference X_o − Y_o the safety margin. In this
situation the system survives only if the safety factor is greater than 1 or, equivalently, if the safety margin is positive.

In the traditional approach to the design of a system, the safety factor or margin is made large enough to compensate for uncertainties in the values of the stress and strength variates. These uncertainties make it natural to view the stress and strength of a system as random variables. In engineering practice the calculations are often deterministic; a probabilistic analysis, however, demands that stress and strength be treated as random variables so that the survival probability of the system can be evaluated. Such an analysis is particularly useful in situations in which no fixed bound can be placed on the stress. For example, with earthquakes, floods and other natural phenomena, unusually large stresses may cause failures of systems with unusually small strengths. Similarly, when economics rather than safety is the primary criterion, survival performance is better studied by knowing how the failure probability increases as the stress and strength approach one another. As the foregoing discussion indicates, the stress and strength variates are more reasonably modelled as random variables than as purely deterministic quantities.
Let Y, X_1, X_2, ..., X_k be independent random variables, where Y is the common random stress with cdf G(y) and the strengths X_1, X_2, ..., X_k are iid with common cdf F(x). A system of k components subjected to the common random stress Y functions if at least s (1 ≤ s ≤ k) of the components simultaneously withstand the stress, and its reliability is

R_{s,k} = P(at least s of X_1, ..., X_k exceed Y)
        = Σ_{i=s}^{k} C(k, i) ∫_{−∞}^{∞} [1 − F(y)]^i [F(y)]^{k−i} dG(y),   (5.1.4)

where C(k, i) denotes the binomial coefficient.
The right-hand side of equation (5.1.3) is the survival probability of a single-component system having a random strength X and experiencing a random stress Y. Now let a system consist of k components whose strengths are given by independently and identically distributed random variables with cumulative distribution function F(·), each experiencing a random stress governed by a random variable Y with cumulative distribution function G(·). The probability given by (5.1.4) is then called the reliability of a multi-component stress-strength model (Bhattacharya and Johnson, 1974).
A salient feature of the probability in (5.1.1) is that the probability density functions of the strength and stress variates have non-empty overlapping ranges. In some cases the failure probability may depend strongly on the lower tail of the strength distribution.
When the stress or the strength is determined not by the sum or the product of many small components but by their extremes, it is the extreme values of these components that decide the stress or the strength. Extreme ordered variates have proved to be very useful in the analysis of reliability problems of this nature.
Suppose that a number of random stresses, say (Y_1, Y_2, ..., Y_m), with cumulative distribution function G(·), act on a system whose strengths are given by n random variables (X_1, X_2, ..., X_n) with cumulative distribution function F(·). If V is the maximum of (Y_1, ..., Y_m) and W is the minimum of (X_1, ..., X_n), then the system survives only if the largest stress does not exceed the smallest strength, that is, only if V < W.
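The survival probability P(V < W) in this extreme-value setting is easy to approximate by simulation. The sketch below is an added illustration, not part of the original text; it assumes, for concreteness, inverse Rayleigh stresses and strengths (the model used in the later sections) and inverse-cdf sampling x = σ/√(−log U):

```python
import math
import random

def rinv_rayleigh(sigma, rng):
    # Inverse-cdf sampling for F(x) = exp(-(sigma/x)^2)
    return sigma / math.sqrt(-math.log(rng.random()))

def p_survive(sigma_stress, sigma_strength, m, n, reps=20_000, seed=1):
    """Estimate P(V < W): V = max of m stresses, W = min of n strengths."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        v = max(rinv_rayleigh(sigma_stress, rng) for _ in range(m))
        w = min(rinv_rayleigh(sigma_strength, rng) for _ in range(n))
        hits += v < w
    return hits / reps
```

As expected, increasing the strength parameter while holding the stress parameter fixed increases the estimated survival probability.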
Let the strength X follow an inverse Rayleigh distribution with scale parameter σ1 and the stress Y follow an inverse Rayleigh distribution with scale parameter σ2, so that

f(x) = (2σ1²/x³) e^{−(σ1/x)²}, x > 0,  and  g(y) = (2σ2²/y³) e^{−(σ2/y)²}, y > 0.

Then

R = P(Y < X) = ∫_0^∞ (2σ1²/x³) e^{−(σ1/x)²} [ ∫_0^x (2σ2²/y³) e^{−(σ2/y)²} dy ] dx
             = ∫_0^∞ (2σ1²/x³) e^{−(σ1² + σ2²)/x²} dx
             = σ1² / (σ1² + σ2²).   (5.2.1)

Note that R = λ/(1 + λ), where λ = σ1²/σ2².   (5.2.2)
Therefore

∂R/∂λ = 1/(1 + λ)² > 0,

so R is an increasing function of λ.
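As a numerical check of (5.2.1) (an added sketch, not part of the original derivation), the probability P(Y < X) can be estimated by Monte Carlo using inverse-cdf sampling x = σ/√(−log U) for the inverse Rayleigh distribution:

```python
import math
import random

def rinv_rayleigh(sigma, rng):
    # Inverse-cdf sampling: F(x) = exp(-(sigma/x)^2)  =>  x = sigma / sqrt(-log U)
    return sigma / math.sqrt(-math.log(rng.random()))

def monte_carlo_R(sigma1, sigma2, reps=100_000, seed=7):
    """Empirical P(Y < X) for strength X ~ IR(sigma1), stress Y ~ IR(sigma2)."""
    rng = random.Random(seed)
    return sum(
        rinv_rayleigh(sigma2, rng) < rinv_rayleigh(sigma1, rng) for _ in range(reps)
    ) / reps

sigma1, sigma2 = 2.0, 1.0
exact = sigma1**2 / (sigma1**2 + sigma2**2)  # = 0.8 by (5.2.1)
approx = monte_carlo_R(sigma1, sigma2)
```

With 100,000 replications the empirical frequency agrees with the closed form to within about one percentage point.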
Now, to compute R, we need to estimate the parameters σ1 and σ2; these are estimated by maximum likelihood (MLE) and by the method of moments (MOM).
Based on random samples x_1, ..., x_n on X and y_1, ..., y_m on Y, the log-likelihood function is

log L = n log(2σ1²) + m log(2σ2²) − 3 Σ_{i=1}^{n} log x_i − 3 Σ_{j=1}^{m} log y_j − σ1² Σ_{i=1}^{n} 1/x_i² − σ2² Σ_{j=1}^{m} 1/y_j².   (5.2.3)
The MLEs of σ1 and σ2, say σ̂1 and σ̂2 respectively, are obtained from

σ̂1² = n / Σ_{i=1}^{n} (1/x_i²),   (5.2.4)

σ̂2² = m / Σ_{j=1}^{m} (1/y_j²).   (5.2.5)

Hence the MLE of R is

R̂1 = σ̂1² / (σ̂1² + σ̂2²).   (5.2.6)
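A small sketch of (5.2.4)-(5.2.6) on simulated data (illustrative code added here, with hypothetical parameter values, not from the thesis):

```python
import math
import random

def rinv_rayleigh(sigma, rng):
    # Inverse-cdf sampling for F(x) = exp(-(sigma/x)^2)
    return sigma / math.sqrt(-math.log(rng.random()))

def mle_sigma_sq(sample):
    """(5.2.4)/(5.2.5): the MLE of sigma^2 is n / sum(1/x_i^2)."""
    return len(sample) / sum(1.0 / v**2 for v in sample)

def r_hat(xs, ys):
    """(5.2.6): R-hat = sigma1-hat^2 / (sigma1-hat^2 + sigma2-hat^2)."""
    s1, s2 = mle_sigma_sq(xs), mle_sigma_sq(ys)
    return s1 / (s1 + s2)

rng = random.Random(42)
xs = [rinv_rayleigh(1.5, rng) for _ in range(5000)]  # strength sample, sigma1 = 1.5
ys = [rinv_rayleigh(1.0, rng) for _ in range(5000)]  # stress sample,  sigma2 = 1.0
```

With these parameter values the true R is 2.25/3.25 ≈ 0.692, and for large samples the MLE settles close to it.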
Similarly, with the moment estimators σ̃1 and σ̃2, the MOM estimate of R is

R̃2 = σ̃1² / (σ̃1² + σ̃2²).   (5.2.7)
The Fisher information matrix of (σ1, σ2) is

I(σ1, σ2) = ( I11  I12 ; I21  I22 ),   (5.2.8)

where

I11 = −E(∂² log L/∂σ1²) = 4n/σ1²,

I22 = −E(∂² log L/∂σ2²) = 4m/σ2²,

I12 = I21 = −E(∂² log L/∂σ1 ∂σ2) = 0.
As n → ∞ and m → ∞,

( √n(σ̂1 − σ1), √m(σ̂2 − σ2) ) →d N( 0, A^{−1}(σ1, σ2) ),

where A(σ1, σ2) = diag(a11, a22), with a11 = (1/n) I11 = 4/σ1² and a22 = (1/m) I22 = 4/σ2².
Let

d1(σ1, σ2) = ∂R/∂σ1 = 2σ1σ2² / (σ1² + σ2²)²,

d2(σ1, σ2) = ∂R/∂σ2 = −2σ1²σ2 / (σ1² + σ2²)².

This gives

Var(R̂1) = var(σ̂1) d1²(σ1, σ2) + var(σ̂2) d2²(σ1, σ2)
         = (σ1²/4n) d1²(σ1, σ2) + (σ2²/4m) d2²(σ1, σ2)
         = [ σ1⁴σ2⁴ / (σ1² + σ2²)⁴ ] (1/n + 1/m)
         = [R(1 − R)]² (1/n + 1/m).   (5.2.9)
Consequently, (R̂1 − R)/√Var(R̂1) →d N(0, 1). A 100(1 − α)% asymptotic confidence interval for R is (L1, U1), where

L1 = R̂1 − Z_{1−α/2} R̂1(1 − R̂1) √(1/n + 1/m),   (5.2.10)

U1 = R̂1 + Z_{1−α/2} R̂1(1 − R̂1) √(1/n + 1/m),   (5.2.11)

and Z_{1−α/2} is the (1 − α/2)th percentile of the standard normal distribution.
An exact confidence interval is also available. Since 1/X_i² and 1/Y_j² are exponentially distributed with means 1/σ1² and 1/σ2² respectively, 2σ1² Σ_{i=1}^{n} 1/x_i² and 2σ2² Σ_{j=1}^{m} 1/y_j² are chi-square variates with 2n and 2m degrees of freedom. Thus R̂1 in equation (5.2.6) can be rewritten as R̂1 = [1 + σ̂2²/σ̂1²]^{−1}. Using (5.2.4) and (5.2.5) we obtain

R̂1 = [ 1 + (σ2²/σ1²) F ]^{−1},  where  F = ( m σ1² Σ_{i=1}^{n} 1/x_i² ) / ( n σ2² Σ_{j=1}^{m} 1/y_j² )   (5.2.12)

follows an F distribution with (2n, 2m) degrees of freedom. Inverting this pivot gives the exact 100(1 − α)% confidence interval (L2, U2), where

L2 = [ 1 + (R̂1^{−1} − 1) F_{1−α/2}(2m, 2n) ]^{−1},   (5.2.13)

U2 = [ 1 + (R̂1^{−1} − 1) F_{α/2}(2m, 2n) ]^{−1},   (5.2.14)

and F_{α/2}(2m, 2n) and F_{1−α/2}(2m, 2n) are the lower and upper (α/2)th percentage points of the F distribution with (2m, 2n) degrees of freedom.
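The pivot behind (5.2.12) can be verified numerically: the statistic F should behave like an F(2n, 2m) variate, whose mean is 2m/(2m − 2) = m/(m − 1). A quick sketch (illustrative only, with hypothetical parameter values):

```python
import math
import random

def rinv_rayleigh(sigma, rng):
    # Inverse-cdf sampling for F(x) = exp(-(sigma/x)^2)
    return sigma / math.sqrt(-math.log(rng.random()))

rng = random.Random(5)
n, m, sigma1, sigma2 = 8, 12, 2.0, 1.0
vals = []
for _ in range(40_000):
    sx = sum(1.0 / rinv_rayleigh(sigma1, rng) ** 2 for _ in range(n))
    sy = sum(1.0 / rinv_rayleigh(sigma2, rng) ** 2 for _ in range(m))
    # Pivot of (5.2.12): claimed to follow F(2n, 2m)
    vals.append((m * sigma1**2 * sx) / (n * sigma2**2 * sy))
mean = sum(vals) / len(vals)  # should be close to m/(m - 1)
```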
bootstrap samples x*_1, x*_2, ..., x*_n and y*_1, y*_2, ..., y*_m respectively. Compute the bootstrap estimates of σ1 and σ2, say σ̂*_1 and σ̂*_2 respectively. Using σ̂*_1, σ̂*_2 and equation (5.2.6), compute the bootstrap estimate of R, say R̂*.
Step 3: Repeat Step 2 NBOOT times, with NBOOT = 1000.
Step 4: Let Ĝ(x) = P(R̂* ≤ x) be the empirical distribution function of R̂*. The percentile bootstrap confidence interval for R is obtained from the (α/2)th and (1 − α/2)th quantiles of Ĝ.
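A compact sketch of Steps 2-4 (added for illustration; nonparametric resampling of the observed samples is assumed here, and the data are hypothetical):

```python
import random

def r_hat(xs, ys):
    """MLE of R from (5.2.6): sigma-hat^2 = n / sum(1/x_i^2) for each sample."""
    s1 = len(xs) / sum(1.0 / x**2 for x in xs)
    s2 = len(ys) / sum(1.0 / y**2 for y in ys)
    return s1 / (s1 + s2)

def boot_percentile_ci(xs, ys, nboot=1000, alpha=0.05, seed=11):
    """Steps 2-4: resample, re-estimate R, take empirical alpha/2, 1-alpha/2 quantiles."""
    rng = random.Random(seed)
    reps = sorted(
        r_hat(rng.choices(xs, k=len(xs)), rng.choices(ys, k=len(ys)))
        for _ in range(nboot)
    )
    return reps[int(nboot * alpha / 2)], reps[int(nboot * (1 - alpha / 2)) - 1]

# Hypothetical observed samples, for illustration only:
xs = [1.4, 0.9, 2.1, 1.1, 1.7, 0.8, 1.3, 1.9, 1.0, 1.5]
ys = [0.7, 1.2, 0.6, 1.0, 0.9, 0.8, 1.1, 0.5, 0.9, 0.7]
lo, hi = boot_percentile_ci(xs, ys)
```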
For the inverse Rayleigh model, let the component strengths have cdf F(x) = e^{−(σ2/x)²} and the common stress have density g(y) = (2σ1²/y³) e^{−(σ1/y)²}. Then, from (5.1.4),

R_{s,k} = Σ_{i=s}^{k} C(k, i) ∫_0^∞ [1 − e^{−(σ2/y)²}]^i [e^{−(σ2/y)²}]^{k−i} (2σ1²/y³) e^{−(σ1/y)²} dy

        = Σ_{i=s}^{k} C(k, i) ∫_0^1 [1 − t^{1/ν}]^i [t^{1/ν}]^{k−i} dt,  where t = e^{−(σ1/y)²} and ν = σ1²/σ2²,

        = Σ_{i=s}^{k} C(k, i) ν ∫_0^1 (1 − z)^i z^{k−i+ν−1} dz,  putting z = t^{1/ν},

        = Σ_{i=s}^{k} C(k, i) ν B(k − i + ν, i + 1)

        = ν Σ_{i=s}^{k} [ k!/(k − i)! ] Π_{j=0}^{i} (k + ν − j)^{−1},  since k and i are integers,   (5.3.1)

where B(·, ·) denotes the beta function.
Here the strengths X_1, X_2, ..., X_k are independently and identically distributed random variables with cdf F(·), subjected to the common random stress Y. The system reliability, that is, the probability that the system does not fail, is the function R_{s,k} given in (5.3.1).
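Equation (5.3.1) can be evaluated directly. The sketch below (an added illustration) implements the finite-product form and confirms the closed forms used later for (s, k) = (1, 3) and (2, 4):

```python
import math

def r_sk(s, k, nu):
    """R_{s,k} from (5.3.1): nu * sum_{i=s}^{k} [k!/(k-i)!] * prod_{j=0}^{i} (k+nu-j)^(-1)."""
    total = 0.0
    for i in range(s, k + 1):
        term = math.factorial(k) / math.factorial(k - i)
        for j in range(i + 1):
            term /= k + nu - j
        total += term
    return nu * total
```

For ν = 1 this gives R_{1,3} = 0.750 and R_{2,4} = 0.600, in agreement with the values tabulated later for (σ1, σ2) = (1, 1).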
If σ1 and σ2 are not known, it is necessary to estimate them in order to estimate R_{s,k}. The parameters can be estimated by maximum likelihood and by the method of moments, thus giving rise to two estimates of R_{s,k}, obtained by substituting the parameter estimates in ν. The asymptotic variances of the parameter estimates are

var(σ̂_i) = [ −E(∂² log L/∂σ_i²) ]^{−1} = σ_i²/4n,  i = 1, 2, when m = n,   (5.3.2)

and the asymptotic variance of R̂_{s,k} is

AV(R̂_{s,k}) = var(σ̂1) (∂R_{s,k}/∂σ1)² + var(σ̂2) (∂R_{s,k}/∂σ2)².   (5.3.3)
For (s, k) = (1, 3) and (2, 4), equation (5.3.1) reduces to R_{1,3} = 3/(3 + ν) and R_{2,4} = 12/[(3 + ν)(4 + ν)], so that

∂R_{1,3}/∂σ1 = −6ν / [σ1(3 + ν)²]  and  ∂R_{1,3}/∂σ2 = 6ν / [σ2(3 + ν)²],

∂R_{2,4}/∂σ1 = −24ν(2ν + 7) / [σ1(3 + ν)²(4 + ν)²]  and  ∂R_{2,4}/∂σ2 = 24ν(2ν + 7) / [σ2(3 + ν)²(4 + ν)²],

where ν = σ1²/σ2².
Thus

AV(R̂_{1,3}) = [ 9ν² / (3 + ν)⁴ ] (1/n + 1/m),

AV(R̂_{2,4}) = [ 144ν²(2ν + 7)² / ((3 + ν)(4 + ν))⁴ ] (1/n + 1/m).
As n → ∞ and m → ∞,

( R̂_{s,k} − R_{s,k} ) / √AV(R̂_{s,k}) →d N(0, 1),

so that asymptotic confidence intervals for R_{s,k} can be constructed.
In particular, the 95% asymptotic confidence intervals for R_{1,3} and R_{2,4} are

R̂_{1,3} ∓ 1.96 [ 3ν̂ / (3 + ν̂)² ] √(1/n + 1/m),

R̂_{2,4} ∓ 1.96 [ 12ν̂(2ν̂ + 7) / ((3 + ν̂)²(4 + ν̂)²) ] √(1/n + 1/m),

where ν̂ = σ̂1²/σ̂2².
Figure 5.4.1: The empirical and fitted survival functions for Data Set I.
Figure 5.4.2: The empirical and fitted survival functions for Data Set II.
5.5 Conclusions
We compare two methods of estimating R = P(Y < X) when X and Y both follow inverse Rayleigh distributions with different scale parameters. We provide MLE and MOM procedures for estimating the unknown scale parameters and use them to estimate R. We also obtain the asymptotic distribution of the estimated R and use it to compute asymptotic confidence intervals. The simulation results indicate that the MLE performs better than the MOM in terms of average bias and average MSE for different choices of the parameters. The exact confidence intervals are preferable with respect to average length, whereas the asymptotic confidence intervals are advisable with respect to coverage probability for different choices of the parameters. We also propose bootstrap confidence intervals, whose performance is quite satisfactory.
(σ1, σ2)   (1,3)    (1,2.5)  (1,2)    (1,1.5)  (1,1)    (1.5,1)  (2,1)    (2.5,1)  (3,1)
R           0.10     0.14     0.20     0.31     0.50     0.69     0.80     0.86     0.90
(n, m)
(5,5)   MOM  0.0384  0.0414  0.0405  0.0286 -0.0066 -0.0403 -0.0500 -0.0492 -0.0449
        MLE  0.0149  0.0170  0.0175  0.0129 -0.0036 -0.0193 -0.0224 -0.0207 -0.0179
(10,10) MOM  0.0246  0.0271  0.0269  0.0187 -0.0070 -0.0315 -0.0375 -0.0357 -0.0317
        MLE  0.0061  0.0069  0.0071  0.0046 -0.0038 -0.0111 -0.0119 -0.0106 -0.0088
(15,15) MOM  0.0223  0.0252  0.0263  0.0212  0.0012 -0.0192 -0.0249 -0.0242 -0.0216
        MLE  0.0058  0.0069  0.0078  0.0070  0.0016 -0.0043 -0.0057 -0.0054 -0.0046
(20,20) MOM  0.0198  0.0225  0.0235  0.0192  0.0018 -0.0159 -0.0208 -0.0202 -0.0180
        MLE  0.0035  0.0041  0.0044  0.0034 -0.0010 -0.0050 -0.0056 -0.0050 -0.0042
(25,25) MOM  0.0127  0.0141  0.0139  0.0089 -0.0068 -0.0209 -0.0234 -0.0214 -0.0185
        MLE  0.0025  0.0029  0.0031  0.0022 -0.0013 -0.0044 -0.0047 -0.0041 -0.0034

In each cell the first row represents the average bias of R̂ using the MOM and the second row the average bias of R̂ using the MLE.
(σ1, σ2)   (1,3)    (1,2.5)  (1,2)    (1,1.5)  (1,1)    (1.5,1)  (2,1)    (2.5,1)  (3,1)
R           0.10     0.14     0.20     0.31     0.50     0.69     0.80     0.86     0.90
(n, m)
(5,5)   MOM  0.0192  0.0255  0.0338  0.0431  0.0490  0.0443  0.0354  0.0274  0.0211
        MLE  0.0053  0.0082  0.0127  0.0190  0.0236  0.0193  0.0130  0.0084  0.0055
(10,10) MOM  0.0109  0.0154  0.0218  0.0301  0.0362  0.0320  0.0240  0.0172  0.0123
        MLE  0.0020  0.0033  0.0056  0.0090  0.0118  0.0091  0.0056  0.0033  0.0020
(15,15) MOM  0.0084  0.0121  0.0176  0.0246  0.0294  0.0247  0.0178  0.0123  0.0086
        MLE  0.0014  0.0023  0.0039  0.0065  0.0085  0.0064  0.0038  0.0022  0.0013
(20,20) MOM  0.0074  0.0107  0.0155  0.0216  0.0256  0.0211  0.0149  0.0101  0.0069
        MLE  0.0010  0.0016  0.0028  0.0047  0.0063  0.0047  0.0028  0.0016  0.0009
(25,25) MOM  0.0053  0.0079  0.0118  0.0174  0.0218  0.0184  0.0129  0.0086  0.0058
        MLE  0.0007  0.0012  0.0022  0.0037  0.0050  0.0037  0.0022  0.0012  0.0007

In each cell the first row represents the average MSE of R̂ using the MOM and the second row the average MSE of R̂ using the MLE.
[Table: average length (A) of the confidence intervals for R, for (n, m) = (10,10), (15,15), (20,20), (25,25) and the (σ1, σ2) pairs above; the numerical entries were not recovered.]
(s, k) = (1, 3)
(σ1, σ2)   (1,3)    (1,2.5)  (1,2)    (1,1.5)  (1,1)    (1.5,1)  (2,1)    (2.5,1)  (3,1)
R_{s,k}     0.964    0.949    0.923    0.871    0.750    0.571    0.429    0.324    0.250
(n, m)
(5,5)   MLE -0.0063 -0.0082 -0.0108 -0.0136 -0.0120  0.0011  0.0137  0.0209  0.0236
        MOM -0.0239 -0.0296 -0.0367 -0.0437 -0.0402 -0.0127  0.0148  0.0328  0.0423
(10,10) MLE -0.0036 -0.0048 -0.0065 -0.0087 -0.0091 -0.0026  0.0044  0.0087  0.0105
        MOM -0.0128 -0.0165 -0.0215 -0.0271 -0.0262 -0.0075  0.0119  0.0244  0.0306
(15,15) MLE -0.0022 -0.0029 -0.0039 -0.0053 -0.0055 -0.0012  0.0033  0.0060  0.0070
        MOM -0.0106 -0.0133 -0.0166 -0.0198 -0.0166  0.0006  0.0166  0.0260  0.0301
(20,20) MLE -0.0012 -0.0016 -0.0022 -0.0028 -0.0022  0.0016  0.0051  0.0069  0.0074
        MOM -0.0087 -0.0113 -0.0149 -0.0191 -0.0191 -0.0060  0.0079  0.0169  0.0212
(25,25) MLE -0.0015 -0.0020 -0.0028 -0.0040 -0.0048 -0.0028  0.0002  0.0016  0.0025
        MOM -0.0073 -0.0097 -0.0128 -0.0166 -0.0164 -0.0044  0.0081  0.0158  0.0193

(s, k) = (2, 4)
R_{s,k}     0.938    0.913    0.869    0.784    0.600    0.366    0.214    0.127    0.077
(5,5)   MLE -0.0102 -0.0129 -0.0159 -0.0173 -0.0067  0.0186  0.0321  0.0335  0.0294
        MOM -0.0362 -0.0432 -0.0504 -0.0525 -0.0289  0.0241  0.0560  0.0658  0.0636
(10,10) MLE -0.0059 -0.0077 -0.0099 -0.0118 -0.0076  0.0066  0.0150  0.0163  0.0143
        MOM -0.0204 -0.0255 -0.0313 -0.0345 -0.0198  0.0182  0.0411  0.0471  0.0443
(15,15) MLE -0.0036 -0.0046 -0.0060 -0.0072 -0.0044  0.0048  0.0101  0.0107  0.0092
        MOM -0.0163 -0.0197 -0.0232 -0.0237 -0.0087  0.0231  0.0401  0.0428  0.0387
(20,20) MLE -0.0020 -0.0026 -0.0032 -0.0035 -0.0006  0.0067  0.0102  0.0100  0.0083
        MOM -0.0140 -0.0176 -0.0220 -0.0249 -0.0152  0.0122  0.0289  0.0330  0.0308
(25,25) MLE -0.0025 -0.0033 -0.0044 -0.0057 -0.0050  0.0003  0.0039  0.0048  0.0043
        MOM -0.0119 -0.0152 -0.0191 -0.0217 -0.0127  0.0122  0.0267  0.0297  0.0270

In each cell the first row represents the average bias of R̂_{s,k} using the MLE and the second row the average bias of R̂_{s,k} using the MOM.
(s, k) = (1, 3)
(σ1, σ2)   (1,3)    (1,2.5)  (1,2)    (1,1.5)  (1,1)    (1.5,1)  (2,1)    (2.5,1)  (3,1)
R_{s,k}     0.964    0.949    0.923    0.871    0.750    0.571    0.429    0.324    0.250
(n, m)
(5,5)   MLE  0.0010  0.0017  0.0033  0.0069  0.0148  0.0216  0.0219  0.0190  0.0153
        MOM  0.0072  0.0105  0.0158  0.0248  0.0389  0.0478  0.0482  0.0449  0.0400
(10,10) MLE  0.0004  0.0007  0.0014  0.0033  0.0079  0.0122  0.0122  0.0102  0.0079
        MOM  0.0024  0.0040  0.0071  0.0131  0.0246  0.0334  0.0341  0.0310  0.0267
(15,15) MLE  0.0002  0.0004  0.0008  0.0020  0.0050  0.0079  0.0078  0.0064  0.0048
        MOM  0.0028  0.0042  0.0068  0.0118  0.0208  0.0277  0.0280  0.0251  0.0212
(20,20) MLE  0.0001  0.0003  0.0006  0.0014  0.0037  0.0060  0.0061  0.0050  0.0038
        MOM  0.0014  0.0024  0.0044  0.0086  0.0171  0.0240  0.0244  0.0219  0.0185
(25,25) MLE  0.0001  0.0002  0.0004  0.0011  0.0028  0.0046  0.0046  0.0037  0.0028
        MOM  0.0011  0.0019  0.0036  0.0074  0.0152  0.0216  0.0217  0.0190  0.0156

(s, k) = (2, 4)
R_{s,k}     0.938    0.913    0.869    0.784    0.600    0.366    0.214    0.127    0.077
(5,5)   MLE  0.0026  0.0045  0.0082  0.0153  0.0265  0.0286  0.0218  0.0142  0.0087
        MOM  0.0152  0.0212  0.0304  0.0436  0.0580  0.0594  0.0521  0.0422  0.0328
(10,10) MLE  0.0010  0.0019  0.0037  0.0078  0.0151  0.0163  0.0114  0.0067  0.0037
        MOM  0.0060  0.0096  0.0158  0.0266  0.0409  0.0434  0.0360  0.0271  0.0194
(15,15) MLE  0.0006  0.0011  0.0022  0.0048  0.0097  0.0106  0.0071  0.0039  0.0020
        MOM  0.0063  0.0092  0.0142  0.0225  0.0338  0.0356  0.0289  0.0210  0.0146
(20,20) MLE  0.0004  0.0008  0.0016  0.0035  0.0073  0.0083  0.0056  0.0031  0.0016
        MOM  0.0037  0.0060  0.0103  0.0181  0.0294  0.0314  0.0252  0.0183  0.0128
(25,25) MLE  0.0003  0.0006  0.0012  0.0027  0.0056  0.0062  0.0041  0.0022  0.0011
        MOM  0.0028  0.0049  0.0087  0.0160  0.0267  0.0280  0.0216  0.0149  0.0098

In each cell the first row represents the average MSE of R̂_{s,k} using the MLE and the second row the average MSE of R̂_{s,k} using the MOM.
(s, k) = (1, 3)
(σ1, σ2)   (1,3)    (1,2.5)  (1,2)    (1,1.5)  (1,1)    (1.5,1)  (2,1)    (2.5,1)  (3,1)
(n, m)
(5,5)    0.9413  0.9410  0.9503  0.9510  0.9490  0.9413  0.9413  0.9423  0.9490
(10,10)  0.9470  0.9483  0.9413  0.9377  0.9413  0.9417  0.9430  0.9440  0.9507
(15,15)  0.9457  0.9440  0.9493  0.9493  0.9483  0.9503  0.9503  0.9500  0.9520
(20,20)  0.9483  0.9480  0.9450  0.9447  0.9470  0.9510  0.9500  0.9500  0.9537
(25,25)  0.9463  0.9457  0.9510  0.9500  0.9500  0.9497  0.9500  0.9493  0.9547

(s, k) = (2, 4)
(5,5)    0.9413  0.9417  0.9503  0.9510  0.9497  0.9413  0.9413  0.9440  0.9520
(10,10)  0.9463  0.9457  0.9400  0.9307  0.9287  0.9423  0.9437  0.9460  0.9510
(15,15)  0.9433  0.9510  0.9510  0.9477  0.9443  0.9503  0.9487  0.9500  0.9563
(20,20)  0.9483  0.9467  0.9487  0.9473  0.9447  0.9507  0.9500  0.9483  0.9570
(25,25)  0.9473  0.9520  0.9517  0.9473  0.9450  0.9497  0.9503  0.9483  0.9547

B: Coverage probability.