
CH4413 SPSS for Windows: Weeks 12 and 13

Excel is a useful package for an elementary analysis of data.

However, it is not suitable for an in-depth treatment; for this, SPSS can be used. Switch on the computer and wait until the desktop appears. Click on Start, Programs, SPSS for Windows. Click on Type in data, then click OK. (You can save files using Save As in the usual way.)

SPSS has two screens. The first is the Data Editor screen, where data is entered. The second is the output screen, called the SPSS Viewer. You can cut and paste material from this screen into Word for report purposes. You can enter data manually (i.e. by typing it in). Usually, experimenters record and save data as text files or in spreadsheets for ease of analysis; SPSS will read such files.

Example 1 (independent samples comparison)

We will now enter some data into the Data Editor so that you can learn the basics of using SPSS. The data were collected during a comparison between two groups. One group was a control group given an existing drug for treating a heart problem; a second group was given a new drug. The drug was used as a diuretic: it removes fluid from the body. The volume of fluid expelled is a measure of the success of the drug: the more effective the drug, the larger the volume expelled. The following data were collected. In the first column, a 1 represents the first drug, with the corresponding volumes in column 2; this changes to a 2 when the results for the second drug are recorded.

Group  Volume
1      1.27
1      2.12
1      1.15
1      2.01
1      1.83
1      2.18
1      2.14
1      1.41
1      1.15
1      1.11
1      1.56
Ray Binns, 11/24/2011

1      1.3
1      2.69
1      1.77
2      1.89
2      2.32
2      2.27
2      1.93
2      2.1
2      1.97
2      1.89
2      3.63
2      2.26
2      2.66
2      1.29
2      1.8
2      2.31
2      2.93
2      1.86

We will now start to enter the data. First, we label the columns group and volume:
  Point to var in column 1 and double-click. Observe the Define Variable box.
  Enter the word group into Variable Name and click OK. Observe var change to group.
  Repeat with column 2, calling it volume, and click OK.
Now type in the values shown above, and save the data using Save As as usual. Once the data have been entered, you can cut and paste as you need.

We have put the data in using column 1 as an index; you might instead want to enter the data in two separate columns. You can also select subsets of data, transform data (if you want to stabilize the variance or make it more normal), carry out statistical analyses, and plot charts (useful for histograms, box plots, stem-and-leaf diagrams, and normal P-P or Q-Q plots).

Let's now carry out a preliminary look at these data, ignoring for the present the fact that there are two groups. We can use the Descriptives routine:
  Select Statistics, Summarize, Descriptives. Observe the Descriptives box.
  Highlight volume and click the arrow to move it into the Variables box.
  Click OK.


Observe the SPSS viewer

The default output gives the characteristics shown: 29 observations, minimum, maximum, etc. This is useful for checking that you have entered all the data you should have. To return to the SPSS Data Editor, go via the Window menu: click on Window in the menu bar and select SPSS Data Editor. You can experiment with the Descriptives routine by clicking on Statistics and Options to see what else the routine will give you.

For the moment, a useful routine to look at is the Explore routine. This looks at the data in the two groups separately. It also calculates various characteristics such as confidence intervals, and produces histograms (and their more detailed equivalent, the stem-and-leaf diagram) plus box plots (to look for skewness).
  Select Statistics, Summarize, Explore. Observe the Explore box with volume and group in the appropriate boxes.
  Move volume into the Dependent List and group into the Factor List.
  Click on Statistics and then Plots to see the defaults.
  Click OK and observe the output.
In this case the box plots show different variabilities, and the first set of data is more skewed. One would be cautious about the t test, as the data may not be normal.

Now let's do some tests. In spite of our suspicions about normality, let's carry out a t test on the two groups.
  Select Statistics, Compare Means, Independent-Samples T Test. Observe the Independent-Samples T Test box.
  Move volume into Test Variables and group into Grouping Variable.
  Click on Define Groups and enter 1 and 2 in the spaces.
  Click OK and observe the output.
The test assumes normality and equal variances. Levene's test tests the null hypothesis of equal population variances (which the sample variances estimate). The P value is 0.845, so we cannot reject the null hypothesis: we have no evidence to conclude that the population variances are unequal.
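These SPSS steps can be cross-checked outside the package. The sketch below is an aside in Python (assuming the numpy and scipy libraries are available; the variable names are mine, not SPSS's) reproducing the descriptives count, Levene's test and the pooled t test on the Example 1 data:

```python
import numpy as np
from scipy import stats

# Example 1 data: expelled volumes for the existing drug (group 1)
# and the new drug (group 2), as listed above.
drug1 = np.array([1.27, 2.12, 1.15, 2.01, 1.83, 2.18, 2.14, 1.41,
                  1.15, 1.11, 1.56, 1.30, 2.69, 1.77])
drug2 = np.array([1.89, 2.32, 2.27, 1.93, 2.10, 1.97, 1.89, 3.63,
                  2.26, 2.66, 1.29, 1.80, 2.31, 2.93, 1.86])

# Descriptives: 29 observations in all.
n_total = len(drug1) + len(drug2)

# Levene's test of equal population variances (SPSS centres on the
# mean, hence center='mean'; the handout reports p = 0.845).
lev_stat, lev_p = stats.levene(drug1, drug2, center='mean')

# Pooled (equal-variance) t test; the handout reports p = 0.013.
t_stat, t_p = stats.ttest_ind(drug1, drug2, equal_var=True)
```

A Levene p value well above 0.05 supports the equal-variance assumption, while a t-test p value below 0.05 mirrors the rejection at the 5% level discussed next.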


The t test tests the null hypothesis that mu1 = mu2. For the t test, the P value is 0.013. We reject the null hypothesis at the 5% level but not at the 1% level. There is some evidence that the groups have different population means, i.e. have different effects. The larger of the two sample means belongs to the second drug, so we have evidence that the second drug is more effective as a diuretic. The 95% CI for the difference (group 1 mean minus group 2 mean) is -0.91 to -0.12 (to 2 d.p.).

Example 2. Two groups of students compare the ease of using two packages, SPSS and Excel, to carry out tasks. Times were recorded: the longer the time taken, the less easy the software is to use. Group 1 consists of ten students and used SPSS. Group 2 consists of (different) 8 students and used Excel. Results are shown.

Group 1: 15 13 18 17 16 14 12 9 16 14
Group 2: 10 12 19 6 14 21 15 7

Is there any evidence of a difference? The hypotheses are
H0: mu(SPSS) = mu(Excel) versus H1: mu(SPSS) ≠ mu(Excel)
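Before entering these data into SPSS, the comparison can be sketched outside the package as a cross-check on what SPSS will report. A minimal Python sketch (assuming numpy and scipy are available; the variable names are mine):

```python
import numpy as np
from scipy import stats

spss_times = np.array([15, 13, 18, 17, 16, 14, 12, 9, 16, 14], dtype=float)
excel_times = np.array([10, 12, 19, 6, 14, 21, 15, 7], dtype=float)

# Levene's test (SPSS centres on the mean); the handout reports
# p = 0.047, i.e. evidence of unequal variances.
lev_stat, lev_p = stats.levene(spss_times, excel_times, center='mean')

# Hence the unequal-variance (Welch) t test is the relevant line of
# the SPSS output; the handout reports a 2-tail sig of p = 0.514.
t_stat, t_p = stats.ttest_ind(spss_times, excel_times, equal_var=False)
```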

The data must be entered in a special way: column 1 contains code 1 for group 1 and code 2 for group 2, and column 2 contains the times. Start SPSS in the usual way. If you are already in SPSS you will need to open a new data window:
  Select File, New, Data. If a message 'Save contents of data window Newdata' appears, click No.
  Observe a new data window called Newdata.
At the data window:
  Click on var and call it group (this is the grouping variable, with value 1 for SPSS and 2 for Excel).
  Click on a second var and call it time.
  In column 1, enter ten 1's followed by eight 2's. In column 2, enter the time values.
To carry out the test:
  Click on Statistics..Compare Means..Independent-Samples T Test.
  Observe the Independent-Samples T Test box.

  Move time into the Test Variables box and group into the Grouping Variable box.
  Click on Define Groups and observe the Define Groups box.
  Enter 1 into the Group 1 box and 2 into the second box. Click Continue.
  Click OK and observe the output (results) on screen.
Means, standard deviations, etc. are given so that you can report on the data. Locate Levene's test for equality of variances. This has a p value of 0.047. Since this is less than 0.05, we reject the hypothesis of equal variances and conclude that the two samples have come from populations with unequal variances. With this evidence of unequal variances, we move to the next table. The second line refers to unequal variances and has a 2-tail sig of p = 0.514. Since this exceeds 0.05, we cannot reject the null hypothesis of no difference between the two population means. There is insufficient evidence to conclude that the times are different.

Paired samples comparison

Example 3. A group of students was used to compare the ease of using SPSS and Excel to carry out statistical tasks. Times were recorded: the longer the time taken, the less easy the software is to use. In Example 2, two different groups (i.e. independent samples) were used. This increases the variation, since any differences between SPSS and Excel may also be due to differences between groups of people. It therefore makes sense to use the same people for each package: each person uses SPSS and then Excel (or vice versa) and differences are taken. This is called a paired samples comparison.

Person  SPSS  Excel
1       15    13
2       14    12
3       13    11
4       15    16
5       16    12
6       17    15
7       10    11
8       19    17
9       18    14
10      12    10
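These paired times can also be analysed outside SPSS as a cross-check. A sketch in Python (assuming numpy and scipy; the column names are mine):

```python
import numpy as np
from scipy import stats

spss  = np.array([15, 14, 13, 15, 16, 17, 10, 19, 18, 12], dtype=float)
excel = np.array([13, 12, 11, 16, 12, 15, 11, 17, 14, 10], dtype=float)

# Paired t test on the per-person differences (Excel minus SPSS);
# the handout reports t negative with a 2-tail sig of 0.008.
t_stat, p_val = stats.ttest_rel(excel, spss)
diffs = excel - spss  # mean difference should be -1.8
```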

As you can see, only persons 4 and 7 take longer with Excel than with SPSS; the other eight are the other way round. Let's see if the ten differences provide any evidence in general of a difference.
  Select File..New..Data (click No if asked to save).
  Observe the new window.
For a paired comparison, we enter the SPSS values in column 1 and the Excel values in column 2. Label the two columns appropriately (SPSS and EXCEL). Now:
  Click on Statistics..Compare Means..Paired-Samples T Test. Observe the Paired-Samples T Test box.
  Click on EXCEL and see it move to the Current Selections box as variable 1.
  Click on SPSS and see it move to the Current Selections box as variable 2.
  Click on the arrow and observe EXCEL-SPSS move into the Paired Variables box.
  Click OK.
The results (output) appear in a table labelled t-tests for paired samples. Look for the t value in the second table: t = -3.338 (indicating a smaller time for Excel than for SPSS) with a 2-tail sig value of 0.008. Since p = 0.008 < 0.01, we reject H0 at the 1% level. We have strong evidence of a difference. The difference is -1.8 (Excel better than SPSS) with 95% CI -3.0 to -0.6.

Non-parametric tests: independent samples comparison of two population medians

In the previous notes, the conditions for the t test were that the data should be normally distributed with common variance. If these conditions are not met, the t tests should not be carried out. A better approach is to test the equality of the population medians (the value which divides the population into two equal parts). The test which does this is called the Mann-Whitney test. In this test, ranks are substituted for values and probabilities are calculated based upon them. Ranks are ordinal scores: first, second, third, etc. So for a set of data such as 124, 65, 45, 167, we have ranks 3, 2, 1, 4 (1 means the smallest). These values are then used in the Mann-Whitney test.
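The ranking step can be seen directly in code; a tiny illustration (Python, assuming scipy is available):

```python
from scipy.stats import rankdata

# Ranks for the example data 124, 65, 45, 167 (1 = smallest).
ranks = rankdata([124, 65, 45, 167])
print(ranks)  # [3. 2. 1. 4.]
```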


Example 4. Two instruments, A and B, are compared using two groups of technicians. Group 1 uses A and group 2 uses B. Standard solutions are prepared and the results of using the instruments are shown.

A   B
16  13
14  12
13  11
16  16
15  12
17  13
16  14
12  14
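These two samples can be compared outside SPSS as a rough cross-check; a sketch in Python (assuming numpy and scipy; variable names are mine). Note that packages handle ties differently in the Mann-Whitney test, so the p value here need not match SPSS's tie-corrected figure exactly:

```python
import numpy as np
from scipy import stats

instr_a = np.array([16, 14, 13, 16, 15, 17, 16, 12], dtype=float)
instr_b = np.array([13, 12, 11, 16, 12, 13, 14, 14], dtype=float)

# Mann-Whitney U test, two-sided. With ties present, scipy uses a
# tie-corrected normal approximation (with continuity correction),
# which can differ noticeably from SPSS's reported p value.
res = stats.mannwhitneyu(instr_a, instr_b, alternative='two-sided')
```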

H0: population median A = population median B versus H1: population median A ≠ population median B

Open a new data window. Enter the data as for the independent samples t test. Label the grouping column group and the response column volts.
  Select Statistics..Nonparametric Tests..2 Independent Samples.
  Move volts to the Test Variable List and group to Grouping Variable.
  Select Define Groups and enter 1 then 2. Click Continue.
  Ensure that Mann-Whitney is checked. Click OK.
From the output, pick out the 2-tail value (corrected for ties). This is 0.0414. It is less than 0.05, i.e. we reject H0 at the 5% level. There is some evidence of a difference: group 1 gives a higher median than the second group.

Non-parametric test: paired samples comparison of the difference between population medians

For the paired samples t test, the assumption is that the differences are normally distributed. If they are not, the t test is not valid. In this case the non-parametric version is used: the Wilcoxon test.

Example 5

Person  Group 1  Group 2
1       8        4
2       7        3
3       9        8
4       6        6
5       5        4
6       7        4
7       8        6
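Example 5's paired data can be cross-checked with the signed-rank test outside SPSS; a sketch in Python (assuming numpy and scipy):

```python
import numpy as np
from scipy import stats

group1 = np.array([8, 7, 9, 6, 5, 7, 8], dtype=float)
group2 = np.array([4, 3, 8, 6, 4, 4, 6], dtype=float)

# Wilcoxon signed-rank test on the paired differences. The zero
# difference (person 4) is dropped, as in the classical procedure.
# The handout reports a 2-tailed p of 0.0277 from SPSS; other
# implementations will be close but not necessarily identical.
stat, p_val = stats.wilcoxon(group1, group2)
```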

Enter the data in two columns as for the paired t test. Label column 1 group1 and the second column group2.

The hypothesis being tested is H0: population median difference = 0.
  Select Statistics..Nonparametric Tests..2 Related Samples.
  Click on group1 and move it to the Current Selections box.
  Click on group2 and move it to the Current Selections box.
  Click on the arrow and observe group1-group2 appear in the Test Pairs list.
  Ensure the Wilcoxon box is checked. Click OK.
Observe that the 2-tailed p = 0.0277 < 0.05, i.e. we reject H0 at the 5% level. Hence there is good evidence of a difference. (Note that since 0.0277 is not less than 0.01, we cannot reject H0 at the 1% level.) Clearly the group1 values are higher than the group2 values.

One-way ANOVA

Independent samples and paired tests are used to compare two population means. To carry out a similar test on several means we require the Analysis of Variance (ANOVA) test. This test compares the variance between the sample means with the sampling variance (error variance or experimental error). It is known that the distribution of this ratio is the F distribution. Hence, when comparing four population means for example, we can carry out a test of the hypothesis

H0: mu1 = mu2 = mu3 = mu4 versus H1: at least one differs from the rest.

Example 6. The set of data shown below refers to liver weights of rats fed on three diets.

Diet A: 10 11 12 11 10 10
Diet B:  4  3  4  4  1  3
Diet C:  7  8  7  6  5  4

Carry out a test of the above null hypothesis.

SOLUTION

For a one-way ANOVA we use the Compare Means option under Statistics; for further analysis we use the post-hoc procedure under One-Way ANOVA. Data to be subjected to ANOVA need to be entered in two columns, irrespective of the number of treatments (diets in this case) or populations to be compared. The first column represents the group (1, 2 or 3) and the second the response (in this case the weight for each rat). Carry out the following:
  Select Statistics..Compare Means..One-Way ANOVA. Observe the One-Way ANOVA box.
  Move weight into the Dependent List box and group into the Factor box.
  Select Define Range to make the minimum 1 and the maximum 3 (since there are three treatments to compare). Click Continue.
  Click OK.
Observe the following output (approximately).

----- ONEWAY -----
Variable WEIGHT By Variable GROUP

Analysis of Variance
Source          D.F.  Sum of Squares  Mean Squares  F Ratio  F Prob.
Between Groups    2        178.3810       89.1905   46.0574    .0000
Within Groups    18         34.8571        1.9365
Total            20        213.2381

Interpretation. If there is no difference between the three population means, then the variance between the three diets (called the mean square in the table) should be equal to the sampling variation (called within groups or experimental error). The ratio of the between-groups mean square to the within-groups mean square is distributed as F. The value of the F ratio for this sample of data is given above as 46.0574.
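The F ratio can be reproduced directly outside SPSS; a sketch in Python (assuming scipy), using the six values per diet exactly as printed above. Note the SPSS output above reports 21 cases, i.e. seven per diet, so its exact figures differ somewhat from what this six-per-diet table gives, though the conclusion is the same:

```python
from scipy import stats

diet_a = [10, 11, 12, 11, 10, 10]
diet_b = [4, 3, 4, 4, 1, 3]
diet_c = [7, 8, 7, 6, 5, 4]

# One-way ANOVA: between-diet mean square against the within-diet
# (error) mean square, referred to the F distribution.
f_stat, p_val = stats.f_oneway(diet_a, diet_b, diet_c)
```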

The probability (called F Prob) of obtaining a value as large as (or larger than) this if the null hypothesis is true is less than 0.00004 (rounded to 0.0000), i.e. less than 4 in 100,000. Since F Prob < 0.01, we reject the null hypothesis at the 1% level and conclude there is strong evidence that at least one population diet mean is different from the others. To find which one (or more than one), we use a multiple comparison test. This will also provide a table of mean values useful for summary purposes.

Multiple comparison test

This provides evidence of any specific differences. Several tests are provided; we use the Least Significant Difference (LSD) test and Duncan's Multiple Range test.

Example 6 (continued)
  Select Statistics..Compare Means..One-Way ANOVA. Observe the One-Way ANOVA box. This will already be set up for the one-way ANOVA described above.
  Select the Post Hoc box.
  Check the Least Significant Difference box and the Duncan's Multiple Range test box.
  Click Continue, then OK.
Observe the following output.

LSD test
----- ONEWAY -----
Variable WEIGHT By Variable GROUP
Multiple Range Tests: LSD test with significance level .05
The difference between two means is significant if
  MEAN(J)-MEAN(I) >= .9840 * RANGE * SQRT(1/N(I) + 1/N(J))
with the following value(s) for RANGE: 2.97
(*) Indicates significant differences, which are shown in the lower triangle

                     G G G
                     r r r
                     p p p
  Mean      GROUP    2 3 1
  3.4286    Grp 2
  5.7143    Grp 3    *
  10.4286   Grp 1    * *

This small two-way table indicates that group mean 2 differs from group mean 3, group mean 1 differs from group mean 2, and group mean 1 differs from group mean 3. A table of means is provided at the side. The 95% CI for a difference is the difference plus or minus the least significant difference (based on RANGE, in this case 2.97). In this example, all the sample means are significantly different. If they weren't, we might want to know for which pairs there was no evidence of a difference. These are called homogeneous subsets and are listed.

Homogeneous Subsets (highest and lowest means are not significantly different)
  Subset 1: Group Grp 2, Mean 3.4286
  Subset 2: Group Grp 3, Mean 5.7143
  Subset 3: Group Grp 1, Mean 10.4286

Similar remarks apply to the second test, Duncan's Multiple Range test.

Duncan's Multiple Range test
----- ONEWAY -----
Variable WEIGHT By Variable GROUP
Multiple Range Tests: Duncan test with significance level .05

The difference between two means is significant if
  MEAN(J)-MEAN(I) >= .9840 * RANGE * SQRT(1/N(I) + 1/N(J))
with the following value(s) for RANGE:
  Step   2     3
  RANGE  2.97  3.11
(*) Indicates significant differences, which are shown in the lower triangle

                     G G G
                     r r r
                     p p p
  Mean      GROUP    2 3 1
  3.4286    Grp 2
  5.7143    Grp 3    *
  10.4286   Grp 1    * *

Homogeneous Subsets (highest and lowest means are not significantly different)
  Subset 1: Group Grp 2, Mean 3.4286
  Subset 2: Group Grp 3, Mean 5.7143
  Subset 3: Group Grp 1, Mean 10.4286

Note: the LSD test, although popular among researchers, gives too rosy a view of differences. The actual significance level being used is not 0.05 but, very approximately, 0.05 times the number of means being compared. In this case, since there are three means, this is (approximately) 0.05 x 3 = 0.15, i.e. there is a 0.15 chance of wrongly concluding there is at least one difference. The 0.05 significance level is called the pairwise significance level; the 0.15 is called the experimentwise significance level. For the LSD test, the experimentwise level is only approximate, so it is better to use another test. Duncan's Multiple Range test uses the correct experimentwise significance level.
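Duncan's test is not available in scipy, but the same idea of controlling the experimentwise level can be illustrated with the closely related Tukey HSD procedure (a different test, chosen here only because scipy provides it, in versions 1.8 and later), again on the diet data as printed:

```python
from scipy import stats

diet_a = [10, 11, 12, 11, 10, 10]
diet_b = [4, 3, 4, 4, 1, 3]
diet_c = [7, 8, 7, 6, 5, 4]

# Tukey's honestly significant difference test: all pairwise
# comparisons at a controlled experimentwise significance level.
res = stats.tukey_hsd(diet_a, diet_b, diet_c)

# res.pvalue is a 3 x 3 matrix of pairwise p values (diagonal = 1).
```

For these well-separated diets, every pairwise comparison is significant at the 5% level, matching the all-different conclusion above.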

Non-parametric version of one-way ANOVA

The assumptions made for one-way ANOVA are that the errors are independent, normally distributed with common variance, and sum to zero (random errors). If this is not so, the ANOVA test may still be carried out provided the departures from these assumptions are not too drastic. An examination of the 'residuals' (estimates of the errors) may give some idea of this. However, a non-parametric test is available which requires fewer assumptions: merely that the samples are random. This test is called the Kruskal-Wallis ANOVA test. It replaces the measurements by ranks and carries out the ANOVA test on the ranks. The null hypothesis now replaces the means by the medians.

Example 7. Use the data from Example 6. The data are entered as for one-way ANOVA, i.e. group and weight.
  Select Statistics..Nonparametric Tests..K Independent Samples.
  Observe the Tests for Several Independent Samples box.
  Move weight to the Test Variable List box and group to the Grouping Variable box.
  Click Define Range, enter 1 and 3, and click Continue.
  Ensure the Kruskal-Wallis H box is checked. Click OK.
Observe the following output.


----- Kruskal-Wallis 1-Way Anova -----
WEIGHT by GROUP

Mean Rank   Cases
   18.00        7   GROUP = 1
    5.14        7   GROUP = 2
    9.86        7   GROUP = 3
               21   Total

                                     Corrected for ties
Chi-Square  D.F.  Significance   Chi-Square  D.F.  Significance
  15.3840     2      .0005         15.5967     2      .0004

In this example, the probability value is the final value: the significance under 'corrected for ties', i.e. p = 0.0004 < 0.01. Hence we reject H0 that the three population medians are equal. No tests similar to the multiple comparison tests exist here.

Two-way ANOVA

Example 8. An experiment was carried out to determine the effect on serum glucose in mice of two factors, A and B. Each mouse was subjected to a dose of each, and the amount of glucose present in a sample was then measured. Two levels of each factor were examined. The results (measurements of glucose) are given below.

            Factor B
Factor A    B1             B2
A1          221 200 233     94 109 114
A2          330 302 283    163 157 177

Use two-way ANOVA with replication to produce an ANOVA table. Enter the data in the following way.

A     B     response
1.00  1.00  221.00
1.00  1.00  200.00
1.00  1.00  233.00
1.00  2.00   94.00
1.00  2.00  109.00
1.00  2.00  114.00
2.00  1.00  330.00
2.00  1.00  302.00
2.00  1.00  283.00
2.00  2.00  163.00
2.00  2.00  157.00
2.00  2.00  177.00

  Select Statistics..ANOVA Models..Simple Factorial.
  Move response into the Dependent box and A and B into the Factors box.
  Click on Define Range and enter 1 then 2 for A; click Continue. Repeat for B.
  Select Options. In the Methods box click Experimental; in Maximum Interactions click 2-way.
  Click Continue, then OK.
Observe the following output.

*** ANALYSIS OF VARIANCE ***
RESPONSE by A, B
EXPERIMENTAL sums of squares. Covariates entered FIRST.

Source of Variation    Sum of Squares  DF  Mean Square        F  Sig of F
Main Effects                63708.833   2    31854.417  121.158      .000
  A                         16206.750   1    16206.750   61.642      .000
  B                         47502.083   1    47502.083  180.674      .000
2-Way Interactions            546.750   1      546.750    2.080      .187
  A B                         546.750   1      546.750    2.080      .187
Explained                   64255.583   3    21418.528   81.465      .000
Residual                     2103.333   8      262.917
Total                       66358.917  11     6032.629
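The sums of squares in this table can be reproduced by hand, which makes the decomposition concrete. A sketch in Python with numpy and scipy (the layout and names are mine, not SPSS's):

```python
import numpy as np
from scipy import stats

# Glucose measurements laid out by cell: keys are (level of A,
# level of B), with three replicates per cell.
cells = {(1, 1): [221, 200, 233], (1, 2): [94, 109, 114],
         (2, 1): [330, 302, 283], (2, 2): [163, 157, 177]}
y = np.array([v for vals in cells.values() for v in vals], dtype=float)
grand = y.mean()
r = 3  # replicates per cell

# Marginal means of A and B (each level averages 6 observations).
mean_a = {a: np.mean([v for (i, j), vals in cells.items() if i == a
                      for v in vals]) for a in (1, 2)}
mean_b = {b: np.mean([v for (i, j), vals in cells.items() if j == b
                      for v in vals]) for b in (1, 2)}

ss_a = 6 * sum((m - grand) ** 2 for m in mean_a.values())
ss_b = 6 * sum((m - grand) ** 2 for m in mean_b.values())
ss_cells = r * sum((np.mean(vals) - grand) ** 2 for vals in cells.values())
ss_ab = ss_cells - ss_a - ss_b          # interaction sum of squares
ss_err = sum(((np.array(vals) - np.mean(vals)) ** 2).sum()
             for vals in cells.values())  # residual (within cells)

# F test for the interaction against the residual mean square
# (1 and 8 degrees of freedom, as in the table).
f_ab = (ss_ab / 1) / (ss_err / 8)
p_ab = stats.f.sf(f_ab, 1, 8)
```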

12 cases were processed; 0 cases (.0 pct) were missing.

Interpretation. The Sig of F is the p value. There are three null hypotheses:
  H0: a1 = a2, which is rejected at the 1% level because p = 0.000 < 0.01.
  H0: b1 = b2, which is rejected at the 1% level because p = 0.000 < 0.01.
  H0: the A x B interaction is zero, which cannot be rejected at the 5% level because p = 0.187 > 0.05.

Three-way ANOVA

For three factors, we have three single-factor effects, three two-factor interaction effects and one three-factor interaction effect to test for significance. An example follows.

Example 9. An experimental design was set up to investigate the yields of maize as affected by nitrogen level (N), sulphate level (S) and location in a field (L). Plots of land were sown with maize seeds and treated with a combination of levels of the three factors. The full design and results are shown below.

Nitrogen    Sulphate    Location
(2 levels)  (4 levels)  (3 levels)  Yield
1           1           1           259
1           1           1           645
1           1           2           614
1           1           2           470
1           1           3           355
1           1           3           570
1           2           1           609
1           2           1           837
1           2           2           601
1           2           2           707
1           2           3           627
1           2           3           470
1           3           1           608

1           3           1           590
1           3           2           369
1           3           2           499
1           3           3           523
1           3           3           540
1           4           1           408
1           4           1           321
1           4           2           311
1           4           2           457
1           4           3           459
1           4           3           483
2           1           1           403
2           1           1           308
2           1           2           351
2           1           2           469
2           1           3           425
2           1           3           262
2           2           1           272
2           2           1           421
2           2           2           585
2           2           2           455
2           2           3           427
2           2           3           305
2           3           1           361
2           3           1           586
2           3           2           416
2           3           2           357
2           3           3           590
2           3           3           490
2           4           1           527
2           4           1           321
2           4           2           259
2           4           2           263
2           4           3           304
2           4           3           295
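With the full design written out, it is worth checking the data entry before fitting anything. The sketch below (Python with numpy; an aside, not part of the SPSS procedure) rebuilds the 48 runs from the table's regular pattern, verifies the design is balanced, and, as a further check, computes the nitrogen main-effect sum of squares by hand:

```python
import numpy as np
from itertools import product

yields = np.array([
    259, 645, 614, 470, 355, 570,   # N=1, S=1
    609, 837, 601, 707, 627, 470,   # N=1, S=2
    608, 590, 369, 499, 523, 540,   # N=1, S=3
    408, 321, 311, 457, 459, 483,   # N=1, S=4
    403, 308, 351, 469, 425, 262,   # N=2, S=1
    272, 421, 585, 455, 427, 305,   # N=2, S=2
    361, 586, 416, 357, 590, 490,   # N=2, S=3
    527, 321, 259, 263, 304, 295,   # N=2, S=4
], dtype=float)

# Factor columns follow the table's pattern: within each nitrogen x
# sulphate block of six runs, locations run 1, 1, 2, 2, 3, 3.
nitrogen = np.repeat([1, 2], 24)
sulphate = np.tile(np.repeat([1, 2, 3, 4], 6), 2)
location = np.tile([1, 1, 2, 2, 3, 3], 8)

# Balanced design: every N x S x L cell should hold 2 replicates.
counts = {(n, s, l): int(((nitrogen == n) & (sulphate == s) &
                          (location == l)).sum())
          for n, s, l in product((1, 2), (1, 2, 3, 4), (1, 2, 3))}

# Nitrogen main-effect sum of squares (24 observations per level);
# this should agree with the NITRATE line of the SPSS output.
grand = yields.mean()
ss_n = sum(24 * (yields[nitrogen == lev].mean() - grand) ** 2
           for lev in (1, 2))
```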

To run this, select General Linear Model..GLM General Factorial. Move yield into Dependent Variable, and move nitrate, sulphate and location into Fixed Factors. To get a post-hoc test (e.g. LSD), click on Post Hoc and move sulphate and location across. (There is no point in moving nitrate, as there are only two levels of nitrate.) Note that the post-hoc results would be ignored for any factor found to be not significant.

Tests of Between-Subjects Effects

Dependent Variable: YIELD
Source                         Type III Sum of Squares  df  Mean Square        F  Sig.
Corrected Model                         583195.667(a)   23    25356.333    2.170  .033
Intercept                              9886305.333       1  9886305.333  845.975  .000
NITRATE                                 172800.000       1   172800.000   14.787  .001
SULPHATE                                180571.500       3    60190.500    5.151  .007
LOCATION                                  4425.292       2     2212.646     .189  .829
NITRATE * SULPHATE                       54963.500       3    18321.167    1.568  .223
NITRATE * LOCATION                        1403.375       2      701.688     .060  .942
SULPHATE * LOCATION                      98126.875       6    16354.479    1.399  .255
NITRATE * SULPHATE * LOCATION            70905.125       6    11817.521    1.011  .441
Error                                   280471.000      24    11686.292
Total                                 10749972.000      48
Corrected Total                         863666.667      47
a  R Squared = .675 (Adjusted R Squared = .364)

Look at the P values. What do you conclude?

Multiple Comparisons: SULPHATE
Dependent Variable: YIELD  (LSD)
                              Mean                              95% Confidence Interval
(I) SULPHATE  (J) SULPHATE    Difference (I-J)  Std. Error  Sig.   Lower Bound  Upper Bound
1.00          2.00             -98.7500*         44.133     .035    -189.8359      -7.6641
              3.00             -66.5000          44.133     .145    -157.5859      24.5859
              4.00              60.2500          44.133     .185     -30.8359     151.3359
2.00          1.00              98.7500*         44.133     .035       7.6641     189.8359
              3.00              32.2500          44.133     .472     -58.8359     123.3359
              4.00             159.0000*         44.133     .001      67.9141     250.0859
3.00          1.00              66.5000          44.133     .145     -24.5859     157.5859
              2.00             -32.2500          44.133     .472    -123.3359      58.8359
              4.00             126.7500*         44.133     .008      35.6641     217.8359
4.00          1.00             -60.2500          44.133     .185    -151.3359      30.8359
              2.00            -159.0000*         44.133     .001    -250.0859     -67.9141
              3.00            -126.7500*         44.133     .008    -217.8359     -35.6641
Based on observed means.
* The mean difference is significant at the .05 level.

Multiple Comparisons: LOCATION
Dependent Variable: YIELD  (LSD)
                              Mean                              95% Confidence Interval
(I) LOCATION  (J) LOCATION    Difference (I-J)  Std. Error  Sig.   Lower Bound  Upper Bound
1.00          2.00              18.3125          38.220     .636     -60.5702      97.1952
              3.00              21.9375          38.220     .571     -56.9452     100.8202
2.00          1.00             -18.3125          38.220     .636     -97.1952      60.5702
              3.00               3.6250          38.220     .925     -75.2577      82.5077
3.00          1.00             -21.9375          38.220     .571    -100.8202      56.9452
              2.00              -3.6250          38.220     .925     -82.5077      75.2577
Based on observed means.

