Aaron Vincent
3 March 2014
Statistics 516
R Output:
> model1=lm(len~supp+dose+supp:dose, data=ToothGrowth)
> summary(model1)
Call:
lm(formula = len ~ supp + dose + supp:dose, data = ToothGrowth)
Residuals:
    Min      1Q  Median      3Q     Max
-8.2264 -2.8463  0.0504  2.2893  7.9386

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   11.550      1.581   7.304 1.09e-09 ***
suppVC        -8.255      2.236  -3.691 0.000507 ***
dose           7.811      1.195   6.534 2.03e-08 ***
suppVC:dose    3.904      1.691   2.309 0.024631 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 4.083 on 56 degrees of freedom
Multiple R-squared: 0.7296,	Adjusted R-squared: 0.7151
F-statistic: 50.36 on 3 and 56 DF, p-value: 6.521e-16
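As a side check (a sketch of my own, not part of the original write-up), the interaction model above implies one dose-response line per supplement; both lines can be recovered directly from the fitted coefficients, since ToothGrowth ships with R:

```r
# refit the interaction model and recover the per-supplement slopes;
# OJ is the reference level, so VC adds the suppVC:dose interaction
b <- coef(lm(len ~ supp + dose + supp:dose, data = ToothGrowth))
oj_slope <- b[["dose"]]                       # slope for supp == "OJ"
vc_slope <- b[["dose"]] + b[["suppVC:dose"]]  # slope for supp == "VC"
stopifnot(abs(oj_slope - 7.811) < 0.01,       # matches the summary above
          abs(vc_slope - 11.716) < 0.01)
```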
Part 2:
Answers:
Since the parameter estimates are the same in both summaries, I know I have done this correctly.
R Input:
model3=lm(len~supp+log(dose)+supp:log(dose), data=ToothGrowth)
summary(model3)
model4=nls(len ~ ifelse(supp == "VC",
                        theta1 + theta2 + (theta3 + theta4)*log(dose),
                        theta1 + theta3*log(dose)),
           start = c(theta1 = 0, theta2 = 0, theta3 = 0, theta4 = 0),
           data = ToothGrowth)
summary(model4)
R Output:
> model3=lm(len~supp+log(dose)+supp:log(dose),data=ToothGrowth)
> summary(model3)
Call:
lm(formula = len ~ supp + log(dose) + supp:log(dose), data = ToothGrowth)
Residuals:
    Min      1Q  Median      3Q     Max
-7.5433 -2.4921 -0.5033  2.7117  7.8567

Coefficients:
                 Estimate Std. Error t value Pr(>|t|)
(Intercept)       20.6633     0.6791  30.425  < 2e-16 ***
suppVC            -3.7000     0.9605  -3.852 0.000303 ***
log(dose)          9.2549     1.2000   7.712  2.3e-10 ***
suppVC:log(dose)   3.8448     1.6971   2.266 0.027366 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 3.72 on 56 degrees of freedom
Multiple R-squared: 0.7755,	Adjusted R-squared: 0.7635
F-statistic: 64.5 on 3 and 56 DF, p-value: < 2.2e-16
> model4=nls(len~ ifelse(supp == "VC",theta1+theta2+(theta3+theta4)*log(dose),theta1+theta3*log(dose)),start =
c(theta1 = 0, theta2 = 0, theta3 = 0, theta4 = 0),data = ToothGrowth)
> summary(model4)
Formula: len ~ ifelse(supp == "VC", theta1 + theta2 + (theta3 + theta4) *
log(dose), theta1 + theta3 * log(dose))
Parameters:
Estimate Std. Error t value Pr(>|t|)
theta1 20.6633 0.6791 30.425 < 2e-16 ***
theta2 -3.7000 0.9605 -3.852 0.000303 ***
theta3 9.2549 1.2000 7.712 2.3e-10 ***
theta4 3.8448 1.6971 2.266 0.027366 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 3.72 on 56 degrees of freedom
Number of iterations to convergence: 1
Achieved convergence tolerance: 1.52e-09
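The claim that the two parameterizations agree can also be checked numerically; a sketch that refits both models (duplicating the calls above) and compares their estimates:

```r
# both models are linear in the parameters, so lm and nls should agree
m_lm  <- lm(len ~ supp + log(dose) + supp:log(dose), data = ToothGrowth)
m_nls <- nls(len ~ ifelse(supp == "VC",
                          theta1 + theta2 + (theta3 + theta4) * log(dose),
                          theta1 + theta3 * log(dose)),
             start = c(theta1 = 0, theta2 = 0, theta3 = 0, theta4 = 0),
             data = ToothGrowth)
# the coefficient vectors match term by term (theta1..theta4 correspond to
# (Intercept), suppVC, log(dose), suppVC:log(dose))
stopifnot(isTRUE(all.equal(unname(coef(m_lm)), unname(coef(m_nls)),
                           tolerance = 1e-4)))
```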
Part 3:
R Input:
g <- function(x, lambda) {
if (lambda == 0) {
x <- log(x)
}
else {
x <- (x^lambda - 1)/lambda
}
return(x)
}
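The function g is a Box-Cox-style power transform, and its lambda == 0 branch is the continuous limit of the other: (x^lambda - 1)/lambda approaches log(x) as lambda approaches 0. A quick numerical check of that limit (a sketch, not part of the assignment):

```r
# Box-Cox-style transform, as defined above
g <- function(x, lambda) {
  if (lambda == 0) log(x) else (x^lambda - 1)/lambda
}
# small lambda reproduces log(x); lambda = 1 is a shifted identity
stopifnot(abs(g(2, 1e-8) - log(2)) < 1e-6,
          g(2, 1) == 1)
```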
model5=lm(len~supp+g(dose,0)+supp:g(dose,0),data=ToothGrowth)
summary(model5)
model6=nls(len ~ ifelse(supp == "VC",
                        theta1 + theta2 + (theta3 + theta4)*g(dose, lambda),
                        theta1 + theta3*g(dose, lambda)),
           start = c(theta1 = 20.6633, theta2 = -3.7, theta3 = 9.2549,
                     theta4 = 3.8448, lambda = 0),
           data = ToothGrowth)
summary(model6)
R Output:
> model6=nls(len~ ifelse(supp == "VC",theta1+theta2+(theta3+theta4)*g(dose,lambda),
+   theta1+theta3*g(dose,lambda)), start = c(theta1 = 20.6633, theta2 = -3.7,
+   theta3 = 9.2549, theta4 = 3.8448, lambda = 0), data = ToothGrowth)
> summary(model6)
Formula: len ~ ifelse(supp == "VC", theta1 + theta2 + (theta3 + theta4) *
g(dose, lambda), theta1 + theta3 * g(dose, lambda))
Parameters:
       Estimate Std. Error t value Pr(>|t|)
theta1  21.2710     0.8831  24.085  < 2e-16 ***
theta2  -3.4675     0.9889  -3.506 0.000913 ***
theta3   9.2759     1.2129   7.648 3.28e-10 ***
theta4   3.5492     1.6708   2.124 0.038158 *
lambda  -0.4064     0.3821  -1.063 0.292258
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Part 4:
Answers:
R Input:
ToothGrowth$yhat=predict(model6)
newdata <- expand.grid(dose = seq(0, 2, length = 60), supp = c("VC", "OJ"))
newdata$yhat <- predict(model6, newdata)
install.packages("ggplot2")
library(ggplot2)
myplot <- ggplot(ToothGrowth, aes(x = dose, y = len, color = supp))
myplot <- myplot + geom_point()
myplot <- myplot + geom_line(aes(y = yhat), data = newdata)
myplot <- myplot + xlab("Dose") + ylab("Length")
plot(myplot)
R Output:
> ToothGrowth$yhat=predict(model6)
> newdata <- expand.grid(dose = seq(0, 2, length = 60), supp = c("VC", "OJ"))
> newdata$yhat <- predict(model6, newdata)
> library(ggplot2)
> myplot <- ggplot(ToothGrowth, aes(x = dose, y = len, color = supp))
> myplot <- myplot + geom_point()
> myplot <- myplot + geom_line(aes(y = yhat), data = newdata)
> myplot <- myplot + xlab("Dose") + ylab("Length")
> plot(myplot)
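One caveat worth noting (my own observation, not part of the assignment): the prediction grid above starts at dose = 0, where the power transform is not finite for lambda <= 0, so that grid point cannot produce a finite fitted value from model6 (whose estimated lambda is about -0.41):

```r
# same transform as defined in Part 3
g <- function(x, lambda) if (lambda == 0) log(x) else (x^lambda - 1)/lambda
# 0^negative is Inf in R and log(0) is -Inf, so g(0, lambda) diverges
# for any lambda <= 0; starting the grid at the smallest observed dose
# (0.5) would avoid this
stopifnot(!is.finite(g(0, -0.4064)),
          !is.finite(g(0, 0)))
```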
Part 5:
Answers:
According to our ANOVA, with an F value of 4.4462 and a p-value of 0.03955, we reject the null hypothesis, supporting the observation that expected tooth length does not increase at the same rate with dose for the two supplement types. Additionally, since the confidence interval for theta4 does not contain 0, we again reject the null hypothesis and reach the same conclusion.
R Input:
theta=list(theta1 = 21.2710, theta2 = -3.4675, lambda = -0.4064, theta3 = 9.2759, theta4 = 3.5492)
model7=nls(len ~ ifelse(supp == "VC",
                        theta1 + theta2 + (theta3 + theta4)*g(dose, lambda),
                        theta1 + theta3*g(dose, lambda)),
           start = theta, data = ToothGrowth)
model7.5=nls(len ~ ifelse(supp == "VC",
                          theta1 + theta2 + theta3*g(dose, lambda),
                          theta1 + theta3*g(dose, lambda)),
             start = theta[1:4], data = ToothGrowth)
anova(model7, model7.5)
confint(model7)
R Output:
> anova(model7, model7.5)
Analysis of Variance Table
Model 1: len ~ ifelse(supp == "VC", theta1 + theta2 + (theta3 + theta4) * g(dose, lambda), theta1 + theta3 * g(dose,
lambda))
Model 2: len ~ ifelse(supp == "VC", theta1 + theta2 + (theta3) * g(dose, lambda), theta1 + theta3 * g(dose, lambda))
  Res.Df Res.Sum Sq Df  Sum Sq F value  Pr(>F)
1     55     759.06
2     56     820.43 -1 -61.362  4.4462 0.03955 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> confint(model7)
Waiting for profiling to be done...
            2.5%      97.5%
theta1 19.500701 23.0886329
theta2 -5.438901 -1.4824484
lambda -1.253519  0.3556167
theta3  6.902859 11.6648535
theta4  0.174306  6.9825694
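The F value in the ANOVA table can be reproduced by hand from the residual sums of squares and degrees of freedom printed above (a sketch using the rounded values as printed):

```r
# extra-sum-of-squares F test comparing the full and reduced models
rss_full <- 759.06; df_full <- 55   # model7 (with theta4)
rss_red  <- 820.43; df_red  <- 56   # model7.5 (theta4 dropped)
Fval <- ((rss_red - rss_full) / (df_red - df_full)) / (rss_full / df_full)
stopifnot(abs(Fval - 4.4462) < 0.01)  # agrees with the table up to rounding
```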
Aerial Snow Geese Counting
Part 1:
Answers:
Because this residual vs. predicted plot is somewhat trumpet-shaped toward the higher counts, due in particular to five observations (28, 29, 73, 74, and 75) that may be transcription errors, there appears to be an issue with heteroscedasticity.
This QQ plot is consistent with the assumption that the errors are normally distributed because it is a roughly straight diagonal line. Even though the residual vs. predicted plot shows apparent heteroscedasticity, the sample size is large enough that the normality assumption is not a serious concern.
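The melt call in the code below reshapes the data from wide (one column per observer) to long (one row per photo-observer pair). A toy sketch of that reshape, using hypothetical stand-in values:

```r
library(reshape2)
# two photos, each counted by two observers (made-up numbers)
toy <- data.frame(photo = c(56, 38), obs1 = c(50, 25), obs2 = c(40, 30))
long <- melt(toy, measure.vars = c("obs1", "obs2"),
             variable.name = "observer", value.name = "count")
# 2 photos x 2 observers -> 4 long-format rows
stopifnot(nrow(long) == 4,
          all(names(long) == c("photo", "observer", "count")))
```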
R Input:
install.packages("alr3")
library(alr3) # contains the data, no data statement needed
library(reshape2) # contains the function melt used below
library(ggplot2) # for plotting
snowgeese.long <- melt(snowgeese, measure.vars = c("obs1", "obs2"),
                       variable.name = "observer", value.name = "count")
model1 <- lm(count ~ observer + photo + observer:photo, data = snowgeese.long)
snowgeese.long$yhat <- predict(model1) # predicted values
snowgeese.long$r <- rstudent(model1) # studentized residuals
p <- ggplot(snowgeese.long, aes(x = yhat, y = r, color = observer)) + geom_point()
p <- p + geom_segment(aes(x = yhat, xend = yhat, y = 0, yend = r))
p <- p + geom_hline(yintercept = 0)
p <- p + xlab("Predicted Value") + ylab("Studentized Residual")
p <- p + ggtitle("Snowgeese Data 1")
print(p)
qqnorm(snowgeese.long$r)
snowgeese.long
R Output:
> print(p)
> qqnorm(snowgeese.long$r)
> snowgeese.long
   photo observer count      yhat            r
1 56 obs1 50 42.681013 0.174803055
2 38 obs1 25 27.378559 -0.056930376
3 25 obs1 30 16.326786 0.328216727
4 48 obs1 35 35.879922 -0.021030888
5 38 obs1 25 27.378559 -0.056930376
6 22 obs1 20 13.776377 0.149409096
7 22 obs1 12 13.776377 -0.042639935
8 42 obs1 34 30.779104 0.077046657
9 34 obs1 20 23.978013 -0.095278005
10 14 obs1 10 6.975286 0.072733155
11 30 obs1 25 20.577468 0.105999981
12 9 obs1 10 2.724604 0.175181401
13 18 obs1 15 10.375832 0.111098935
14 25 obs1 20 16.326786 0.088121347
15 62 obs1 40 47.781831 -0.185759660
16 26 obs1 30 17.176922 0.307726301
17 88 obs1 75 69.885377 0.121939896
18 56 obs1 35 42.681013 -0.183452861
19 11 obs1 9 4.424877 0.110096935
20 66 obs1 55 51.182377 0.091088387
21 42 obs1 30 30.779104 -0.018636245
22 30 obs1 25 20.577468 0.105999981
23 90 obs1 40 71.585649 -0.755503506
... (rows 24-90 of the listing omitted)
Part 2:
Answers:
The use of weights in the model makes the residual vs. predicted plot more uniform and eliminates the trumpet shape found in the previous plot. However, the weights also flag more questionable observations than the previous model did (29, 37, 40, 69, 73, 74, and 85). So there is improvement in some aspects, such as uniformity, but at the cost of more questionable observations.
Adding weights to the model slightly improves the overall trend of the qq plot. However, this
improvement in the overall trend comes at the cost of an extremely abnormal curvature at the end
of the qq plot making the improvement questionable. Again, there is no reason to worry about
the assumption of normality in this case because the sample size is very large.
R Input:
mywt=1/snowgeese.long$photo
model2 <- lm(count ~ observer + photo + observer:photo, data = snowgeese.long, weights=mywt)
snowgeese.long$yhat <- predict(model2) # predicted values
snowgeese.long$r <- rstudent(model2) # studentized residuals
myplot <- ggplot(snowgeese.long, aes(x = yhat, y = r, color = observer)) + geom_point()
myplot <- myplot + geom_segment(aes(x = yhat, xend = yhat, y = 0, yend = r))
myplot <- myplot + geom_hline(yintercept = 0)
myplot <- myplot + xlab("Predicted Value") + ylab("Studentized Residual")
myplot <- myplot + ggtitle("Snowgeese Data 2")
print(myplot)
qqnorm(snowgeese.long$r)
snowgeese.long
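For reference (a sketch of my own, not part of the assignment): lm's weights argument minimizes the weighted residual sum of squares, i.e. it solves the weighted normal equations (X'WX)b = X'Wy. A toy check with hypothetical data whose error spread grows with x, weighted by 1/x just as the geese model uses 1/photo:

```r
set.seed(1)
x <- 1:10
y <- 2 * x + rnorm(10, sd = x)   # error spread grows with x (made-up data)
w <- 1 / x                        # same form as the 1/photo weights
fit <- lm(y ~ x, weights = w)
# solve the weighted normal equations directly and compare
X <- cbind(1, x)
b <- solve(t(X) %*% (w * X), t(X) %*% (w * y))
stopifnot(isTRUE(all.equal(unname(coef(fit)), c(b), tolerance = 1e-8)))
```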
R Output:
> print(myplot)
> qqnorm(snowgeese.long$r)
> snowgeese.long
   photo observer count      yhat           r logcount logphoto sqrtcount sqrtphoto
1 56 obs1 50 44.328357 0.24347357 3.912023 4.025352 7.071068 7.483315
2 38 obs1 25 29.916062 -0.25705864 3.218876 3.637586 5.000000 6.164414
3 25 obs1 30 19.507182 0.68445091 3.401197 3.218876 5.477226 5.000000
4 48 obs1 35 37.922893 -0.13560024 3.555348 3.871201 5.916080 6.928203
5 38 obs1 25 29.916062 -0.25705864 3.218876 3.637586 5.000000 6.164414
6 22 obs1 20 17.105133 0.20166696 2.995732 3.091042 4.472136 4.690416
7 22 obs1 12 17.105133 -0.35582170 2.484907 3.091042 3.464102 4.690416
8 42 obs1 34 33.118794 0.04375468 3.526361 3.737670 5.830952 6.480741
9 34 obs1 20 26.713330 -0.37196003 2.995732 3.526361 4.472136 5.830952
10 14 obs1 10 10.699668 -0.06249970 2.302585 2.639057 3.162278 3.741657
11 30 obs1 25 23.510597 0.08801467 3.218876 3.401197 5.000000 5.477226
12 9 obs1 10 6.696253 0.38307207 2.302585 2.197225 3.162278 3.000000
13 18 obs1 15 13.902400 0.08524104 2.708050 2.890372 3.872983 4.242641
... (rows 14-90 of the listing omitted)
Aaron Vincent
3 March 2014
Statistics 516
Logging the data reduces heteroscedasticity in both tails of the model, as shown in this plot. However, it increases heteroscedasticity in the center of the data, producing, as in the previous plot, seven questionable observations (29, 37, 40, 44, 45, 85, and 88).
Logging the data also produces a nearly ideal QQ plot. The only abnormalities are in the tails, but since these aren't all that bad in and of themselves, I would call this a normal distribution, unlike the original data.
R Input:
snowgeese.long$logcount=log(snowgeese.long$count)
snowgeese.long$logphoto=log(snowgeese.long$photo)
model3 <- lm(logcount ~ observer + logphoto + observer:logphoto, data = snowgeese.long)
snowgeese.long$yhat <- predict(model3) # predicted values
snowgeese.long$r <- rstudent(model3) # studentized residuals
plot3 <- ggplot(snowgeese.long, aes(x = yhat, y = r, color = observer)) + geom_point()
plot3 <- plot3 + geom_segment(aes(x = yhat, xend = yhat, y = 0, yend = r))
plot3 <- plot3 + geom_hline(yintercept = 0)
plot3 <- plot3 + xlab("Predicted Value") + ylab("Studentized Residual")
plot3 <- plot3 + ggtitle("Snowgeese Data 3a")
print(plot3)
qqnorm(snowgeese.long$r)
snowgeese.long
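A note on interpretation (my addition): fitting on the log scale implies a power-law relation between count and photo on the original scale, with multiplicative rather than additive error, since log(count) = a + b*log(photo) means count = exp(a) * photo^b. With hypothetical coefficients a and b:

```r
# back-transforming the log-log fit gives a power law on the raw scale
a <- 0.4; b <- 0.9; photo <- 50   # hypothetical values, for illustration
stopifnot(isTRUE(all.equal(exp(a + b * log(photo)), exp(a) * photo^b)))
```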
R Output:
> print(plot3)
> qqnorm(snowgeese.long$r)
> snowgeese.long
   photo observer count     yhat           r logcount logphoto sqrtcount sqrtphoto
1 56 obs1 50 3.733964 0.56070145 3.912023 4.025352 7.071068 7.483315
2 38 obs1 25 3.363716 -0.45729971 3.218876 3.637586 5.000000 6.164414
3 25 obs1 30 2.963921 1.40696803 3.401197 3.218876 5.477226 5.000000
4 48 obs1 35 3.586777 -0.09886791 3.555348 3.871201 5.916080 6.928203
5 38 obs1 25 3.363716 -0.45729971 3.218876 3.637586 5.000000 6.164414
6 22 obs1 20 2.841862 0.49195151 2.995732 3.091042 4.472136 4.690416
7 22 obs1 12 2.841862 -1.14844204 2.484907 3.091042 3.464102 4.690416
... (rows 8-90 of the listing omitted)
Taking the square root of the counts produces a very uniform residual vs. predicted plot. It does not have the trumpet shape seen in the first problem and has only six questionable observations (29, 37, 40, 73, 74, and 85), which are spread throughout the data.
In my opinion this is the best QQ plot I have produced. It shows a very normal distribution, unlike the original data.
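One reason the square root works so well here (a sketch, not part of the assignment): for Poisson-like counts, the delta method gives Var(sqrt(X)) of roughly 1/4 regardless of the mean, so sqrt(count) has approximately constant spread across photo sizes:

```r
# simulate Poisson counts at several means and check the variance of the
# square root stays near 1/4
set.seed(42)
v <- sapply(c(25, 100, 400), function(m) var(sqrt(rpois(1e5, m))))
stopifnot(all(abs(v - 0.25) < 0.03))
```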
R Input:
snowgeese.long$sqrtcount=sqrt(snowgeese.long$count)
snowgeese.long$sqrtphoto=sqrt(snowgeese.long$photo)
model4 <- lm(sqrtcount ~ observer + sqrtphoto + observer:sqrtphoto, data = snowgeese.long)
snowgeese.long$yhat <- predict(model4) # predicted values
snowgeese.long$r <- rstudent(model4) # studentized residuals
plot4 <- ggplot(snowgeese.long, aes(x = yhat, y = r, color = observer)) + geom_point()
plot4 <- plot4 + geom_segment(aes(x = yhat, xend = yhat, y = 0, yend = r))
plot4 <- plot4 + geom_hline(yintercept = 0)
plot4 <- plot4 + xlab("Predicted Value") + ylab("Studentized Residual")
plot4 <- plot4 + ggtitle("Snowgeese Data 3b")
print(plot4)
qqnorm(snowgeese.long$r)
snowgeese.long
R Output:
> print(plot4)
> qqnorm(snowgeese.long$r)
> snowgeese.long
   photo observer count     yhat            r logcount logphoto sqrtcount sqrtphoto
1 56 obs1 50 6.503183 0.367075621 3.912023 4.025352 7.071068 7.483315
2 38 obs1 25 5.323539 -0.209756966 3.218876 3.637586 5.000000 6.164414
3 25 obs1 30 4.282071 0.781562158 3.401197 3.218876 5.477226 5.000000
4 48 obs1 35 6.006683 -0.058587750 3.555348 3.871201 5.916080 6.928203
5 38 obs1 25 5.323539 -0.209756966 3.218876 3.637586 5.000000 6.164414
6 22 obs1 20 4.005174 0.304980192 2.995732 3.091042 4.472136 4.690416
7 22 obs1 12 4.005174 -0.353449475 2.484907 3.091042 3.464102 4.690416
8 42 obs1 34 5.606466 0.145361598 3.526361 3.737670 5.830952 6.480741
9 34 obs1 20 5.025286 -0.359266907 2.995732 3.526361 4.472136 5.830952
... (rows 10-90 of the listing omitted)