
MTBF = 2T / X2(a, 2r+2)

where:
  MTBF = mean time between failures
  T = operating time
  a = 1 - confidence level
  r = # failures; if r = 0, then 2r + 2 = 2, which is the degrees of freedom

MTBF at 95% confidence: MTBF95% conf = 2T / X2(0.05, 2)
MTBF at 50% confidence: MTBF50% conf = 2T / X2(0.5, 2)

From the second equation, 2T = (MTBF50% conf)(X2(0.5, 2)).

Substituting into the first:
MTBF95% conf = (MTBF50% conf)(X2(0.5, 2)) / X2(0.05, 2) = MTBF50% conf (X2(0.5, 2) / X2(0.05, 2))

From table look-up [1], X2(0.05, 2) = 5.99146 and X2(0.5, 2) = 1.38629.

Therefore X2(0.5, 2) / X2(0.05, 2) = 0.23137, and MTBF95% conf = MTBF50% conf (0.23137).

Reliability: R = exp[-t/MTBF]

[1] Walpole, Myers, Myers, and Ye, Probability & Statistics for Engineers & Scientists, 7th Ed., Prentice Hall Publishers, NJ, 2002.

Worked examples, using R = exp[-t/MTBF]:

Reliability @ 50% confidence:
  Time t = 1 hr; failure rate = 3.30E-06 failures/hr
  MTBF result = 1/rate = 303030.303 hr (same units)
  R = 0.9999967

Change 1 hr to 30 min (failure rate 5.23E-06 /hr):
  R(1 hr) = 0.999994733
  Per-minute rate = 8.71119E-08 /min; over 30 min, t x rate = 2.61336E-06
  R(30 min) = 0.999997387

Change the 50% to 95% confidence level:
  Reliability @ 95%, t = 1 hr:
  MTBF result = 70112.12121 hr
  R = 0.999985737
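The chi-square scaling and the reliability numbers above can be checked programmatically. A minimal sketch (assuming SciPy is available for the chi-square percentiles; X2(0.05, 2) denotes the upper-tail value, i.e. the 95th percentile):

```python
from math import exp
from scipy.stats import chi2

# Chi-square critical values for 2 degrees of freedom (r = 0 failures)
x2_05 = chi2.ppf(0.95, 2)    # X2(0.05, 2) = 5.99146
x2_50 = chi2.ppf(0.50, 2)    # X2(0.5, 2)  = 1.38629
ratio = x2_50 / x2_05        # about 0.23138

lam = 3.30e-6                # failure rate, failures per hour
mtbf50 = 1 / lam             # 303030.3 hr at 50% confidence
mtbf95 = mtbf50 * ratio      # about 70114 hr (the sheet's 70112 uses the rounded 0.23137)

r50 = exp(-1 / mtbf50)       # R over t = 1 hr at 50% confidence, about 0.9999967
r95 = exp(-1 / mtbf95)       # R over t = 1 hr at 95% confidence, about 0.9999857
```

The small difference from the sheet's 70112.12 comes only from rounding the chi-square ratio to five digits.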

Confidence Intervals
Confidence intervals for fraction nonconforming -- normal distribution.

Z = (x - xbar) / s

Where:
  x = any value of interest, such as a specification limit
  xbar = the process average
  s = the population standard deviation
  Z = normal distribution value for a given confidence level

Example: The upper specification limit for a process is 90. The process average is 88.5 and the standard deviation is 1.23. Both of these values were estimated from a relatively large sample size (n = 150). Using the above equation:

Z = (90 - 88.5) / 1.23 = 1.22

Using Excel: x = 90, xbar = 88.5, s = 1.23, Z = 1.22
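The sheet stops at Z; the upper-tail area that Z implies is the estimated fraction nonconforming. A sketch using only the standard library (the erfc form is equivalent to Excel's 1-NORM.S.DIST(z,TRUE)):

```python
import math

x = 90.0      # upper specification limit
xbar = 88.5   # process average
s = 1.23      # standard deviation

z = (x - xbar) / s   # 1.5 / 1.23, about 1.22

# Estimated fraction nonconforming = area above the upper spec limit
# under the standard normal curve, 1 - Phi(z)
fraction_nc = 0.5 * math.erfc(z / math.sqrt(2.0))   # about 0.111, i.e. 11.1%
```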

ESTIMATING SAMPLE-SIZE FOR MAGNITUDE-BASED INFERENCES
(Sample sizes via statistical significance appear further down on this sheet.)
Author: Will G Hopkins, AUT University, NZ. Email: will@clear.net.nz

The method is based on keeping the chances of what I call Type 1 and Type 2 clinical errors acceptably low. A Type 1 clinical error occurs when, on the basis of your study, you decide to use an effect that is actually harmful. I have set the default chances to Type 1 = 0.5% and Type 2 = 25%. You need this kind of disparity if you want a reasonably low chance of an…

Notes:
* You may change cells with numbers in blue. Sample size and other useful stats are in red and plum. Do not change the cells in red. You may change cells in plum to see the effect on…
* The smallest change, difference, correlation, relative risk or odds ratio is the smallest value of the statistic that has a substantial effect on the health, wealth,…
* The larger the true effect, the smaller the sample size for a clear outcome. To estimate the sample size for a suspected true effect, replace the…
* Ignore the error-variance factor, which is used to calculate the sample size and other stats. Ignore the grey cells on the far right.
* Sample sizes <10 are only approximate.
* Tweak to make the chances the true effect is harmful and beneficial equal to the Type 1 and Type 2 error chances.

Sample Size for Magnitude-Based Inferences

Defaults: maximum chances (%) of clinical errors Type 1 = 0.5 (most unlikely), Type 2 = 25 (unlikely); 90% conf. limits.

Change in mean in a crossover
  Smallest harmful change -0.5; smallest beneficial change 0.5; within-subject SD (typical error) as entered on the sheet
  Maximum chances (%) of clinical errors: Type 1 = 0.5 (most unlikely); Type 2 = 25 (unlikely)
  Sample size: 24
  Outcomes in a subsequent study: observed change 0.30; 90% conf. limits -0.19 to 0.79

Difference in changes in means in a fully controlled trial
  Smallest harmful change -0.5; smallest beneficial change 0.5; within-subject SD (typical error) as entered; proportion in experimental group 50%
  Maximum chances: Type 1 = 0.5; Type 2 = 25
  Sample size: 88 total (44 experimental, 44 control)
  Outcomes in a subsequent study: observed change 0.30; 90% conf. limits -0.21 to 0.80

Difference in means in a cross-sectional study
  Smallest harmful difference -0.2; smallest beneficial difference 0.2; between-subject SD as entered; proportion in Group A 50%
  Maximum chances: Type 1 = 0.5; Type 2 = 25
  Sample size: 267 total (134 Group A, 134 Group B)
  Outcomes in a subsequent study: observed difference -0.12; 90% conf. limits -0.32 to 0.08

Correlation in a cross-sectional study
  Smallest harmful correlation -0.10; smallest beneficial correlation 0.10
  Maximum chances: Type 1 = 0.5; Type 2 = 25
  Sample size: 265
  Outcomes in a subsequent study: observed correlation 0.06; 90% conf. limits -0.04 to 0.16

Relative rate, frequency, risk or hazard ratio in a prospective cohort study or intervention
  Smallest harmful relative rate 1.10; smallest beneficial relative rate 0.91; incidence of disease 50%; prevalence of exposure 50%
  Maximum chances: Type 1 = 0.5; Type 2 = 25
  Sample size (cohort or intervention): 1057 total (529 experimental, 529 control)
  Outcomes in a subsequent study: observed relative risk or frequency 0.95; 90% conf. limits 0.86 to 1.04

Odds ratio in a case-control study
  Smallest harmful odds ratio 1.10; smallest beneficial odds ratio 0.91; prevalence of exposure in controls 50%; proportion of cases in study 50%
  Maximum chances: Type 1 = 0.5; Type 2 = 25
  Sample size: 4657 total (2329 cases, 2329 controls)
  Outcomes in a subsequent study: observed odds ratio 0.95; 90% conf. limits 0.86 to 1.04

Sample Size for Statistical Significance

Defaults: maximum rates (%) of statistical errors Type I = 5 (very unlikely), Type II = 20 (unlikely); 95% conf. limits.

Change in mean in a crossover
  Smallest change 0.5; within-subject SD (typical error) as entered on the sheet
  Sample size: 65
  Outcomes in a subsequent study: observed change 0.35; 95% conf. limits 0.00 to 0.70

Difference in changes in means in a fully controlled trial
  Smallest change 0.5; within-subject SD (typical error) as entered; proportion in experimental group 50%
  Sample size: 253 total (127 experimental, 127 control)
  Outcomes in a subsequent study: observed change 0.35; 95% conf. limits 0.00 to 0.70

Differences in means in a cross-sectional study
  Smallest difference 0.2; between-subject SD as entered; proportion in Group A 50%
  Sample size: 787 total (393 Group A, 393 Group B)
  Outcomes in a subsequent study: observed difference 0.14; 95% conf. limits 0.00 to 0.28

Correlations in a cross-sectional study
  Smallest correlation 0.10
  Sample size: 783
  Outcomes in a subsequent study: observed correlation 0.07; 95% conf. limits 0.00 to 0.14

Relative rate, frequency, risk or hazard ratio in a prospective cohort study or intervention
  Smallest relative rate 1.10; incidence of disease 50%; prevalence of exposure 50%
  Sample size (cohort or intervention): 3142 total (1571 experimental, 1571 control)
  Outcomes in a subsequent study: observed relative rate 1.07; 95% conf. limits 1.00 to 1.14

Odds ratio in a case-control study
  Smallest odds ratio 1.10; prevalence of exposure in controls 50%; proportion of cases in study 50%
  Sample size: 13840 total (6920 cases, 6920 controls)
  Outcomes in a subsequent study: observed odds ratio 1.07; 95% conf. limits 1.00 to 1.14

"" 0.49

Choose a harmful value

Choose a beneficial value

Chances the true effect is: harmful beneficial

Decide to use effect clinically

Chances of these outcomes when the true effect is null Observe the following magnitudes unclear trivial any non-trivial likely non-trivial very likely nontrivial

Chances of observin true effe >Type 2 chance of benefit

-0.50

0.50

0.50
most unlikely

25
unlikely

15
unlikely

0
most unlikely

91
likely

9
unlikely

2.3
very unlikely

0.2
most unlikely

15

"" 0.50

Choose a harmful value

Choose a beneficial value

Chances the true effect is: harmful beneficial

Decide to use effect clinically

Chances of these outcomes when the true effect is null Observe the following magnitudes unclear trivial any non-trivial likely non-trivial very likely nontrivial

Chances of observin true effe >Type 2 chance of benefit

-0.50

0.50

0.50
most unlikely

25
unlikely

17
unlikely

1
very unlikely

89
likely

10
unlikely

2.2
very unlikely

0.1
most unlikely

17

"" 0.20

Choose a harmful value

Choose a beneficial value

Chances the true effect is: harmful beneficial

Decide to use effect clinically

Chances of these outcomes when the true effect is null Observe the following magnitudes unclear trivial any non-trivial likely non-trivial very likely nontrivial

Chances of observin true effe >Type 2 chance of benefit

0.20

-0.20

0.5
most unlikely

25
unlikely

17
unlikely

1
very unlikely

88
likely

10
unlikely

2.2
very unlikely

0.1
most unlikely

17

"" approx

Choose a harmful value

Choose a beneficial value

Chances the true effect is: harmful beneficial

Decide to use effect clinically

Chances of these outcomes when the true effect is null Observe the following magnitudes unclear trivial any non-trivial likely non-trivial very likely nontrivial

Chances of observin true effe >Type 2 chance of benefit

0.10

-0.10

0.10

0.5
most unlikely

25
unlikely

17
unlikely

1
very unlikely

89
likely

10
unlikely

2.1
very unlikely

0.1
most unlikely

17

"vfz" 1.10

Choose a harmful value

Choose a beneficial value

Chances the true effect is: harmful beneficial

Decide to use effect clinically

Chances of these outcomes when the true effect is null Observe the following magnitudes unclear trivial any non-trivial likely non-trivial very likely nontrivial

Chances of observin true effe >Type 2 chance of benefit

1.10

0.91

0.5
most unlikely

25
unlikely

17
unlikely

2
very unlikely

88
likely

10
unlikely

2.1
very unlikely

0.1
most unlikely

17

"vfz" 1.10

Choose a harmful value

Choose a beneficial value

Chances the true effect is: harmful beneficial

Decide to use effect clinically

Chances of these outcomes when the true effect is null Observe the following magnitudes unclear trivial any non-trivial likely non-trivial very likely nontrivial

Chances of observin true effe >Type 2 chance of benefit

1.10

0.91

0.5
most unlikely

25
unlikely

17
unlikely

2
very unlikely

88
likely

10
unlikely

2.1
very unlikely

0.1
most unlikely

17

"" 0.35

"" 0.35

"" 0.14

"" approx

0.07

"vfz" 1.07

"vfz" 1.07

<Type 1 chance of harm

Errorvariance factor

15

2.0

Iterations 29.9 23.8

24.1

24.6

24.4

24.4

<Type 1 chance of harm

Errorvariance factor

17

8.0

Iterations 119.8 86.9

87.8

87.8

<Type 1 chance of harm

Errorvariance factor

17

4.0

Iterations 272.7 267.2 267.3

<Type 1 chance of harm

Errorvariance factor

17

1.0

<Type 1 chance of harm

Errorvariance factor

17

3.6

<Type 1 chance of harm

Errorvariance factor

17

16.0

Errorvariance factor

2.0

Iterations 77.2 64.5

64.8

Errorvariance factor

8.0

Iterations 277.7 253.0 253.1

Errorvariance factor

4.0

Iterations 788.7 786.8 786.8

Errorvariance factor

1.0

Errorvariance factor

3.6

Errorvariance factor

16.0

Stanton Glantz, Primer of Biostatistics, Chapter 5: How to analyze rates and proportions.

Example: efficacy of low-dose aspirin in preventing blood clots (thrombus).

Outcome observed (cells C12:D13):
              Placebo   Aspirin
Thrombus         18         6
No thrombus       7        13

In this case, 18 of the 25 placebo patients developed a thrombus, versus only 6 of the 19 aspirin patients. If there were no association between treatment and outcome, we'd expect the count in each cell to be proportional to its row and column totals.

Outcome expected by chance (cells C21:D22):
              Placebo   Aspirin
Thrombus       13.6      10.4
No thrombus    11.4       8.6

The chi-squared test examines the difference between the observed outcome and the outcome expected by chance, and tells us the probability that the observed outcome would have occurred in the absence of any true association (between aspirin and thrombus in this case). Use the Excel workbook CHITEST function, which requires that we specify the observed data and the expected values:
  =CHITEST(C9:D10,C18:D19)
  p-value = 0.007647977

How to calculate the expected value in each cell and get the p-value using chi-square:
1. Calculate the row totals, column totals, and grand total for the observed data:
                Thrombus   No thrombus   Row total
  Placebo          18            7           25
  Aspirin           6           13           19
  Column total     24           20           44  <= grand total
2. Calculate the expected value for each cell as (row total * column total / grand total):
                Thrombus   No thrombus   Row total
  Placebo        13.6         11.4           25
  Aspirin        10.4          8.6           19
  Column total   24            20            44  <= grand total
3. Calculate the p-value for the chi-square test:
  p-value = CHITEST(C36:D37,C42:D43) = 0.007647977
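The same test can be sketched in Python (assuming SciPy), with correction=False so that, like CHITEST, no Yates continuity correction is applied:

```python
from scipy.stats import chi2_contingency

# Observed counts from the aspirin example
observed = [[18, 6],   # thrombus:    placebo, aspirin
            [7, 13]]   # no thrombus: placebo, aspirin

chi2_stat, p_value, dof, expected = chi2_contingency(observed, correction=False)
# expected is about [[13.6, 10.4], [11.4, 8.6]], dof = 1,
# and p_value is about 0.00765, matching CHITEST
```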

CALCULATING THE RELIABILITY INTRACLASS CORRELATION COEFFICIENT AND ITS CONFIDENCE LIMITS
Reference: Hopkins WG (2009). Calculating the reliability intraclass correlation coefficient and its confidence limits (Excel spreadsheet). newstats.org/xICC.xls

The ICC here is the one that gives the same value as the test-retest Pearson correlation for large samples, when there are 2 tests. It's almost always the one you want. (For small samples the Pearson is slightly biased, but the ICC is unbiased.)

If you want the ICC and its confidence limits, given a between-subject SD and a within-subject SD: you will also need the number of subjects and the degrees of freedom of the within-subject SD.
If you have the ICC and you want its confidence limits: you will also need the number of subjects and the number of tests. If there are missing values, you will also need the number of test entries (observations). (When there are no missing values, the number of entries is the number of subjects times the number of tests.)
If you have an F value from an ANOVA that you used for the reliability analysis, and you want the ICC and its confidence limits: use data from a two-way ANOVA with subjects and tests as the two effects and no interaction. Use the F statistic for the subjects term; don't use the F for tests or the overall F.

Replace the numbers in blue with your numbers, or put your numbers in the blank cells underneath. The results are in red. Don't modify black numbers in the cells with a grey background.
Enter values of statistics with one more significant digit than you would normally publish, to avoid substantial rounding errors.
If your SD are percents with any values >10%, turn them into factor SD and log-transform them as you enter them, as follows: =ln(1+your_SD_value/100). If your SD are factors, log-transform them as you enter them, as follows: =ln(your_SD_value).

The relationship between the ICC and the F statistic is due to Bartko JJ (1966). The intraclass correlation coefficient as a measure of reliability. Psychological Reports 19, 3-11. The relationship is ICC = (F-1)/(F+k-1), where k is the number of tests, or the effective number of tests when there are missing values in one or more tests for one or more subjects. I devised a value for k when there are missing values: Bartko's formula is too complex, and mine gives the same answer as Proc GLM in SAS. It is the number of tests that would give the number of degrees of freedom for the within-subject SD (the standard or typical error of measurement, or the RMSE from the ANOVA). The formula for confidence limits for the ICC is due to McGraw KO & Wong SP (1996). Forming inferences about some intraclass correlation coefficients. Psychological Methods 1, 30-46. I used the formula for their ICC(C,1), case 3. This is the same as Bartko's ICC.
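Bartko's relationship and McGraw & Wong's confidence limits can be sketched in code (assuming SciPy for the F percentiles). The numbers reproduce the first panel of the sheet, where F = 9.8 with 24 and 24 degrees of freedom gives an ICC of about 0.82 with 90% limits of roughly 0.66 to 0.90:

```python
from scipy.stats import f as f_dist

n, k = 25, 2          # subjects, tests
F = 9.8               # F statistic for the subjects term
conf = 0.90           # confidence level

icc = (F - 1) / (F + k - 1)                    # about 0.81

df1 = n - 1                                    # subjects df = 24
df2 = (n - 1) * (k - 1)                        # error df = 24
alpha = 1 - conf

# McGraw & Wong (1996), ICC(C,1): scale F down/up by the F critical values
F_lower = F / f_dist.ppf(1 - alpha / 2, df1, df2)   # about 4.9
F_upper = F * f_dist.ppf(1 - alpha / 2, df2, df1)   # about 19.4
icc_lower = (F_lower - 1) / (F_lower + k - 1)       # about 0.66
icc_upper = (F_upper - 1) / (F_upper + k - 1)       # about 0.90
```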

Between- and within-subject SD known
  N = 25 subjects, between SD = 2, within SD = 0.86, 24 deg. freedom, conf. level 90%
  ICC = 0.82; confidence limits 0.66 to 0.90 (approx. 0.12); effective no. of tests 2.00; F = 9.8 (F lower 4.9, F upper 19.5)

ICC known, when there are no missing values
  N = 25 subjects, k = 2 tests, ICC = 0.82, conf. level 90%
  ICC = 0.82; confidence limits 0.66 to 0.90 (approx. 0.12); deg. free 24; F = 9.8 (F lower 4.9, F upper 19.5)

ICC known, when there ARE missing values
  N = 18 subjects, k = 3 tests, n = 50 entries, ICC = 0.75, conf. level 90%
  ICC = 0.75; confidence limits 0.57 to 0.87 (approx. 0.15); deg. free 30; effective no. of tests 2.76; F = 9.3 (F lower 4.7, F upper 20.0)

F known, when there are no missing values
  N = 15 subjects, k = 2 tests, F for subjects = 29.2, conf. level 90%
  ICC = 0.93; confidence limits 0.84 to 0.97 (approx. 0.06); deg. free 14; F lower 11.8, F upper 72.5

F known, when there ARE missing values
  N = 15 subjects, k = 4 tests, n = 27 entries, F for subjects = 33.8, conf. level 90%
  ICC = 0.95; confidence limits 0.86 to 0.98 (approx. 0.06); deg. free 9; effective no. of tests 1.64; F lower 11.2, F upper 89.4

ANALYSIS OF RELIABILITY WITH A SPREADSHEET (beta version)

Reference: I will be publishing a paper at Sportscience mid 2009. Meantime cite A New View of Statistics.
Updated May 2009 to include standardized changes in mean and typical error, instructions about factor effects, better estimates of intraclass correlation and adjustments for small-sample bias. I also deleted estimates of total error and limits of agreement; these are still in the old spreadsheet at http://n…
Updated Sept 2009 to remove error of measurement from the formulae for standardizing. I will address this issue in the forthcoming article.
Updated Sept 2010 to replace unbiased estimates with correction factors.

This spreadsheet calculates reliability statistics for consecutive pairs of trials, and the means of these statistics when there are >2 trials. For trials spaced at equal intervals, these means are better estimates of test-retest reliability than the more usual ANOVA-based reliability statistics. The reliability statistics are calculated for raw data and after log transformation.

I generated the data shown with http://www.sportsci.org/2007/SimulateSamples.xls. See the accompanying article at http://www.sportsci.org/2007/w… I deliberately chose data with large errors that are clearly better analyzed via log transformation to give percent or factor effects and errors. Unless your data are times to exhaustion or gene transcription scores, they won't be so obviously in need of log transformation, but log transformation… Log transformation is not appropriate for data that can be zero or negative, or for data with an arbitrary reference point as zero (e.g., angles and some…).

Instructions:
* Delete and replace the numbers in blue. Stats you might want are in red. Don't change any cells with a colored background.
* Missing values must be blanks or the graphs will display incorrectly. (The X values will plot as consecutive integers.)
* Clear all the #NUM! corresponding to missing raw values from the log-transformed columns. Restore the formula (by copying an adjacent cell) if you re-enter data.
* Check the graphs (especially of the change scores) for outliers and non-uniformity of scatter. The log-transformed variables may show less non-uniformity.
* For more than 20 subjects, COPY any number of rows after the first data row and INSERT in the same place without deselecting. Do NOT copy and paste. Double-click on any of the red mean or SD cells to check that you have done this operation properly; colored boxes should enclose all rows of your data.
* Delete or Clear any unwanted or empty rows.
* For 5 or 6 trials, add "5" and "6" in the Trials row. To remove a trial, clear its data, or set the corresponding "Include which trials?" cell to 0 or blank. For overall means with data requiring log transformation, you will still have to clear any #NUM! from the log-transformed values.
* For more than 6 trials, copy and insert the entire column(s) for Trials 5 and/or 6 and their change scores for the RAW and the LOG data, and follow the instructions in the highlighted cells indicated by "Hover cursor for >6 trials". If such editing is too difficult for you, contact me.
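The core consecutive-pair statistics the sheet computes can be sketched in plain Python. The data here are hypothetical, and the sheet's small-sample bias corrections are omitted; the key relationship is that the typical error (within-subject SD) is the SD of the change scores divided by the square root of 2:

```python
import math

# Two consecutive trials for six hypothetical subjects
trial1 = [3.1, 4.5, 2.2, 5.0, 3.8, 4.1]
trial2 = [3.6, 4.9, 2.0, 5.8, 4.4, 4.3]

diffs = [b - a for a, b in zip(trial1, trial2)]
n = len(diffs)

change_in_mean = sum(diffs) / n
sd_diff = math.sqrt(sum((d - change_in_mean) ** 2 for d in diffs) / (n - 1))

# Typical error (within-subject SD) = SD of change scores / sqrt(2)
typical_error = sd_diff / math.sqrt(2)
```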

Choose conf. limits (%): 90

RAW Data (20 subjects: Alex, Ariel, Ashley, Bernie, Casey, Chris, Corey, Courtney, Devon, Drew, Dylan, Frances, Gene, Jaimie, Jean, Jesse, Jo, Jody, Jordan, Kade; hover cursor for >6 trials)

Trial 1: 0.7, 2.83, 0.71, 14.81, 0.29, 3.31, 1.76, 8.41, 3.67, 1.92, 0.53, 0.06, 2.75, 1.73, 2.17, 1.01, 1, 1.74, 23.09, 1.49 (mean 3.7, SD 5.7, N 20, include trial = 1, DF 19)
Trial 2: 4.77, 1.37, 3.29, 2.15, 0.61, 2.07, 4.41, 9.01, 5.67, 3.74, 0.47, 0.17, 2.74, 8.8, 2.36, 4, 1.39, 2.02, 18.56, 8.5 (mean 4.3, SD 4.3, N 20, include trial = 1, DF 19)
Trial 3: 7.58, 1.41, 4.49, 14.83, 0.94, 2.25, 2.94, 11.53, 10.91, 4.4, 1.68, 0.46, 7.03, 2.93, 1, 4.52, 1.11, 2.49, 26.65, 4.46 (mean 5.7, SD 6.3, N 20, include trial = 1, DF 19)
Trial 4: 2.83, 3.86, 14.28, 7.81, 6.71, 2.39, 0.54, 4.79, 7.25, 1.6, 11.29, 1.36, 1.86, 27.12, 8.85 (15 of 20 subjects; sheet shows mean 4.6, SD 5.5)
Change scores 2-1: 4.07, -1.46, 2.58, -12.66, 0.32, -1.24, 2.65, 0.6, 2, 1.82, -0.06, 0.11, -0.01, 7.07, 0.19, 2.99, 0.39, 0.28, -4.53, 7.01 (mean 0.6, SD 4.1, N 20)
Change scores 3-2: 2.81, 0.04, 1.2, 12.68, 0.33, 0.18, -1.47, 2.52, 5.24, 0.66, 1.21, 0.29, 4.29, -5.87, -1.36, 0.52, -0.28, 0.47, 8.09, -4.04 (mean 1.4, SD 4.0, N 20)

For your data, you may have to set max and min values of X and Y axes to appropriate values.
[Scatterplots appear here: each trial plotted against the previous trial, and each set of change scores plotted against the earlier trial.]

Measures of reliability via the RAW variables

Raw                                   Trials 2-1    3-2
Change in mean                             0.61    1.38
  Lower conf. limit                       -0.99   -0.18
  Upper conf. limit                        2.20    2.93
  Conf. limits as +/- value                1.60    1.56
Typical error                              2.92    2.84
  Lower conf. limit                        2.32    2.26
  Upper conf. limit                        4.00    3.90
  Conf. limits as approx. x/÷ factor       1.31    1.31
  Bias correction factor                   1.01    1.01
  Degrees of freedom                       19      19
Standardized change in mean                0.12    0.25
  Lower conf. limit                       -0.20   -0.03
  Upper conf. limit                        0.44    0.54
  Conf. limits as +/- value                0.32    0.29
Standardized typical error                 0.58    0.53
  Lower conf. limit                        0.46    0.42
  Upper conf. limit                        0.79    0.72
  Conf. limits as approx. x/÷ factor       1.31    1.31
  Bias correction factor                   1.01    1.01
Pearson correlation                        0.69    0.78
  Lower conf. limit                        0.42    0.57
  Upper conf. limit                        0.85    0.89
  Bias correction factor                   1.02    1.01
Intraclass correlation                     0.69    0.74
  Lower conf. limit                        0.42    0.52
  Upper conf. limit                        0.84    0.87
Effective no. of trials                    2.0     2.0
F                                          5.4     6.8
  Numerator DF                             19      19
  Denominator DF                           19      19
  F lower                                  2.5     3.1
  F upper                                  11.6    14.7

Change scores 4-3: -4.75, 2.45, 2.75, -3.1, 2.31, 0.71, 0.08, -2.24, 4.32, 0.6, 6.77, 0.25, -0.63, 0.47, 4.39 (mean 1.0, SD 3.0, N 15)

Total no. of subjects: 20

100*LOG-Transformed Data (clear any #NUM!; restore formula for new data)

Trial 1: -35.7, 104.0, -34.2, 269.5, -123.8, 119.7, 56.5, 212.9, 130.0, 65.2, -63.5, -281.3, 101.2, 54.8, 77.5, 1.0, 0.0, 55.4, 313.9, 39.9 (mean 53.15, SD 132.66, N 20, DF 19; back-transformed mean 1.7, SD as x/÷ factor 3.77, SD as CV 276.8%)
Trial 2: 156.2, 31.5, 119.1, 76.5, -49.4, 72.8, 148.4, 219.8, 173.5, 131.9, -75.5, -177.2, 100.8, 217.5, 85.9, 138.6, 32.9, 70.3, 292.1, 214.0 (mean 98.99, SD 110.82, N 20, DF 19; back-transformed mean 2.7, factor 3.03, CV 202.9%)
Trial 3: 202.6, 34.4, 150.2, 269.7, -6.2, 81.1, 107.8, 244.5, 239.0, 148.2, 51.9, -77.7, 195.0, 107.5, 0.0, 150.9, 10.4, 91.2, 328.3, 149.5 (mean 123.91, SD 104.41, N 20, DF 19; back-transformed mean 3.5, factor 2.84, CV 184.1%)
Trial 4: 104.0, 135.1, #NUM! for five missing subjects, then 265.9, 205.5, 190.4, 87.1, -61.6, 156.7, 198.1, 47.0, 242.4, 30.7, 62.1, 330.0, 218.0
Mean of trials: mean 92.0, SD 116.6; back-transformed mean 2.5, SD as x/÷ factor 3.21, CV 220.9%

Clear any #NUM! from the log-transformed data to stop them plotting as zeros. Restore the log-transformation formula to those cells if you enter new data.
[Scatterplots of the log-transformed data appear here: each trial plotted against the previous trial, plus the corresponding change-score plots.]

Mean of trials (log measures): intraclass correlation 0.75, 90% conf. limits 0.57 to 0.87; effective no. of trials 2.6; F = 8.5 (numerator DF 19, denominator DF 30, F lower 4.3, F upper 17.7).

Change scores of log-transformed data (x100):
2-1: 191.9021, -72.5466, 153.3378, -192.983, 74.3578, -46.94, 91.85609, 6.89136, 43.49975, 66.67604, -12.0144, 104.1454, -0.3643, 162.663, 8.393445, 137.6344, 32.93037, 14.92124, -21.8391, 174.129 (mean 45.8, SD 95.1, N 20)
3-2: 46.31669, 2.877896, 31.09651, 193.1184, 43.24209, 8.338161, -40.54651, 24.66173, 65.44907, 16.25189, 127.3816, 99.54281, 94.22288, -109.9749, -85.86616, 12.22176, -22.49437, 20.91852, 36.17804, -64.49174 (mean 24.9, SD 71.7, N 20)
4-3: -98.5236, 100.7077, 21.39076, -33.4275, 42.19944, 35.24996, 16.03427, -38.3656, 90.5999, 47.00036, 91.54054, 20.31247, -29.1706, 1.748231, 68.52687 (mean 22.4, SD 55.7, N 15)

Measures of reliability via the log-transformed variables

Factors                               Trials 2-1    3-2
Factor change in mean                      1.58    1.28
  Lower conf. limit                        1.09    0.97
  Upper conf. limit                        2.28    1.69
  Conf. limits as x/÷ factor               1.44    1.32
Factor typical error                       1.96    1.66
  Lower conf. limit                        1.71    1.50
  Upper conf. limit                        2.51    2.00
  Conf. limits as x/÷ factor               1.21    1.16
  Bias correction factor                   1.01    1.01
  Degrees of freedom                       19      19

Percents
Change in mean (%)                         58.1    28.3
  Lower conf. limit                         9.5    -2.8
  Upper conf. limit                       128.5    69.3
  Conf. limits as +/- value                59.5    36.0
Typical error (%)                          96.0    66.1
  Lower conf. limit                        70.6    49.6
  Upper conf. limit                       151.4   100.4
  Conf. limits as x/÷ factor               1.46    1.42
  Bias correction factor                   1.02    1.02

Standardized
Change in mean                             0.37    0.23
  Lower conf. limit                        0.07   -0.03
  Upper conf. limit                        0.68    0.49
  Conf. limits as +/- value                0.30    0.26
Typical error                              0.55    0.47
  Lower conf. limit                        0.44    0.37
  Upper conf. limit                        0.75    0.65
  Conf. limits as x/÷ factor               1.31    1.31
  Bias correction factor                   1.01    1.01

Correlations
Pearson correlation                        0.71    0.78
  Lower conf. limit                        0.45    0.57
  Upper conf. limit                        0.86    0.89
  Bias correction factor                   1.01    1.01
Intraclass correlation                     0.72    0.80
  Lower conf. limit                        0.48    0.60
  Upper conf. limit                        0.86    0.90

100*ln units
Change in mean                            45.83   24.92
  Lower conf. limit                        9.05   -2.81
  Upper conf. limit                       82.62   52.66
Typical error                             67.27   50.72
  Lower conf. limit                       53.41   40.27
  Upper conf. limit                       92.19   69.51
  Bias correction factor                   1.01    1.01
  DF                                       19      19
Effective no. of trials                    2.0     2.0
F                                          6.1     8.8
  Numerator DF                             19      19
  Denominator DF                           19      19
  F lower                                  2.8     4.1
  F upper                                 13.2    19.1

MIL-HDBK-213F, Table 4-1: Reliability Major Concerns

MODELS:
* Are all functional elements included in the reliability block diagram/model?
* Are all modes of operation considered in the math model?
* Do the math model results show that the design achieves the reliability requirement?

ALLOCATION:
* Are system reliability requirements allocated (subdivided) to useful levels?
* Does the allocation process consider complexity, design flexibility, and safety margins?

PREDICTION:
* Does the sum of the parts equal the value of the module or unit?
* Are environmental conditions and part quality representative of the requirements?
* Are the circuit and part temperatures defined and do they represent the design?
* Are equipment, assembly, subassembly and part reliability drivers identified?
* Are alternate (non-MIL-HDBK-217) failure rates highlighted along with the rationale for their use?
* Is the level of detail for the part failure rate models sufficient to reconstruct the result?
* Are critical components such as VHSIC, Monolithic Microwave Integrated Circuits (MMIC), Application Specific Integrated Circuits (ASIC) or Hybrids highlighted?

Comments:
* System design drawings/diagrams must be reviewed to be sure that the reliability model/diagram agrees with the hardware.
* Duty cycles, alternate paths, degraded conditions and redundant units must be defined and modeled.
* Unit failure rates and redundancy equations from the detailed part predictions are used in the system math model (see MIL-HDBK-338 Revision B, Electronic Reliability Design Handbook).
* Useful levels are defined as: equipment for subcontractors, assemblies for sub-subcontractors, circuit boards for designers.
* Conservative values are needed to prevent reallocation at every design change.
* Many predictions neglect to include all the parts, producing optimistic results (check for solder connections, connectors, circuit boards).
* Optimistic quality levels and favorable environmental conditions are often assumed, causing optimistic results.
* Temperature is the biggest driver of part failure rates; low temperature assumptions will cause optimistic results.
* Identification is needed so that corrective actions for reliability improvement can be considered.
* Use of alternate failure rates, if deemed necessary, requires submission of backup data to provide credence in the values.
* Each component type should be sampled and failure rates completely reconstructed for accuracy.
* Prediction methods for advanced technology parts should be carefully evaluated for impact on the module and system.
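The "does the sum of the parts equal the value of the module or unit" check is simple arithmetic: a unit's failure rate is the sum of its part failure rates, and its MTBF is the reciprocal. A sketch with hypothetical part failure rates:

```python
# Part-count prediction check: unit failure rate = sum of part failure rates.
# The parts and rates below are hypothetical, in failures per million hours (FPMH).
part_fpmh = {
    "microcircuit": 0.12,
    "connector": 0.05,
    "solder_joints": 0.02,
    "circuit_board": 0.01,
}

unit_fpmh = sum(part_fpmh.values())   # 0.20 FPMH
unit_lambda = unit_fpmh * 1e-6        # failures per hour
mtbf_hours = 1 / unit_lambda          # 5,000,000 hr
```

Forgetting to include solder connections, connectors, or the circuit board itself in such a sum is exactly the optimistic-prediction error the comments above warn about.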

Fault Tree Analysis (FTA)


DEFINITION: Fault Tree Analysis (FTA) is a top-down approach to failure mode analysis. An FTA identifies failures and strives to eliminate the cause of the failure.

Reliability Block Diagrams


DEFINITION: Reliability Block Diagrams (RBD's) establish system reliability on a modular/block basis rather than a component basis, using a block diagram approach.

SITUATION: While troubleshooting a failure or trying to identify possible causes of a specific failure effect, an FTA can be a very useful tool.

SITUATION: For complex systems, RBD's make the reliability of a system much easier to understand, expose weaknesses much more quickly, and make what-if analyses much easier.

OBJECTIVE: FTA is a systematic, deductive method for defining a single specific undesirable event and determining all possible failures that could cause it.

VALUE TO YOUR ORGANIZATION: Although an FTA can be very useful in the initial product design phase as an evaluation tool, it is probably more powerful as a troubleshooting tool after an event (or proposed event) has taken place.

OBJECTIVES To develop a reliability model using blocks or modules to make the model easier to understand and change.

VALUE TO YOUR ORGANIZATION: Used early in a design cycle, an RBD allocates reliability among blocks, guiding architecture and design decisions to achieve an overall system reliability requirement. Used on established designs, it sorts every component into blocks according to their function to expose the distribution of failure rates.

RELIABILITY INTEGRATION An example of Reliability Integration during Fault Tree Analysis is as follows:

RELIABILITY INTEGRATION: An example of Reliability Integration during Reliability Block Diagramming (RBD) is as follows: information from Reliability Block Diagrams feeds directly into design decisions. If the block diagram concludes that the only way to meet the design goal is to add redundancy, then this becomes input to the design early on, so that cost and schedule impacts are kept to a minimum.

METHODOLOGY: We start out with a high-level BOM or functional block diagram and, from that, we establish redundancy paths and assign an MTBF to each block. We make this assignment from vendor data, past products, benchmarking, and a variety of other means. Then we calculate the total system reliability based on each block.
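A minimal sketch of that calculation, with hypothetical block MTBFs and a constant-failure-rate (exponential) model: series blocks multiply reliabilities, and a redundant pair fails only if both blocks fail.

```python
import math

def block_reliability(mtbf_hours, mission_hours):
    """Exponential-model reliability of one block over a mission."""
    return math.exp(-mission_hours / mtbf_hours)

mission = 1000.0                            # mission length, hours
r_cpu = block_reliability(50_000, mission)  # hypothetical CPU block MTBF
r_psu = block_reliability(30_000, mission)  # hypothetical power-supply MTBF

# Two redundant power supplies in parallel: fails only if both fail
r_psu_pair = 1 - (1 - r_psu) ** 2

# System = CPU block in series with the redundant PSU pair
r_system = r_cpu * r_psu_pair
```

Comparing r_system with and without the redundant pair is the kind of what-if the RBD makes cheap to run early in the design.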

Using FTA's during HALT planning: When a FMECA identifies a critical effect, an FTA is often deployed to evaluate all possible failure modes that can also cause the same critical effect. This is especially helpful when planning a HALT, so that the appropriate stresses can be applied and so that the failure can easily be troubleshot if the critical effect is exposed during the HALT.

METHODOLOGY: When we perform an FTA, we start with an undesired event; the undesired event constitutes the top event in a fault tree diagram. FTA's and FMECA's are very similar in this regard, but the goals are much different: whereas a FMECA tries to identify all possible failure modes, an FTA works down from a single undesired event. How we decide whether to use an FTA or a FMECA:

CASE STUDIES/OPTIONS The following case studies and options provide example approaches. We shall tailor our approach to meet your specific situation.

FTA is preferred over FMECA when:
* A small number of top events can be identified
* Product functionality is highly complex
* The product is not repairable once initiated

FMECA is preferred over FTA when:
* The events cannot be limited to a small number
* Multiple successful functional profiles are possible
* Identification of "all possible" failure modes is important

1) RBD's Make Understanding Reliability of Complex Systems Easier: For a Networking company with a highly complex core switch, we developed a reliability block diagram to make the reliability of the system easier to understand.

2) Using RBD's to Help Drive Design Decisions: A Medical Device company had a system comprised of many blocks. We developed a Reliability Block Diagram to help them understand the driving forces of reliability early in a design effort, and created a functional block diagram to determine where to concentrate design efforts.

3) Using RBD's to Drive Redundancy Decisions: A Military contractor had a specific reliability goal in mind and had a choice of using redundancies, but needed to make this decision very early in the design process. We developed a Reliability Block Diagram for them and showed them where they could take advantage of redundancies to meet their goals.

CASE STUDIES/OPTIONS: The following case studies and options provide example approaches. We shall tailor our approach to meet your specific situation.

1) Using FTA's to Identify Safety-Critical Failures: A Computer manufacturer had a safety-critical failure in their product, and they wanted to identify if there were any other failure…

2) Using FTA's During Failure Analysis: For a Medical Device company, we showed them how to use FTA…
