Producer Accuracy is the probability that a pixel is classified into class x given the ground truth class is x. User Accuracy is the probability that the ground truth class is x given a pixel is put into class x in the classification image.
Ground Truth (Pixels)
Class           Unclassified      Grass     Forest      Swamp      Total
Unclassified           43689      26949         40      18001      88679
Grass                  32835      64516       1741       3329     102421
Forest                  8202       7277       4096        654      20229
Swamp                  15227      10742          0      18702      44671
Total                  99953     109484       5877      40686     256000
Ground Truth (Percent)
Class           Unclassified      Grass     Forest      Swamp      Total
Unclassified           43.71      24.61       0.68      44.24      34.64
Grass                  32.85      58.93      29.62       8.18      40.01
Forest                  8.21       6.65      69.70       1.61       7.90
Swamp                  15.23       9.81       0.00      45.97      17.45
Total                 100.00     100.00     100.00     100.00     100.00
Class           Commission    Omission    Commission        Omission
                (Percent)     (Percent)   (Pixels)          (Pixels)
Unclassified        50.73       56.29     44990/88679       56264/99953
Grass               37.01       41.07     37905/102421      44968/109484
Forest              79.75       30.30     16133/20229       1781/5877
Swamp               58.13       54.03     25969/44671       21984/40686
Class           Prod. Acc.    User Acc.   Prod. Acc.        User Acc.
                (Percent)     (Percent)   (Pixels)          (Pixels)
Unclassified        43.71       49.27     43689/99953       43689/88679
Grass               58.93       62.99     64516/109484      64516/102421
Forest              69.70       20.25     4096/5877         4096/20229
Swamp               45.97       41.87     18702/40686       18702/44671
Overall Accuracy
The overall accuracy is calculated by summing the number of pixels classified correctly and
dividing by the total number of pixels. The ground truth image or ground truth ROIs define the
true class of the pixels. The pixels classified correctly are found along the diagonal of the
confusion matrix table, which lists the number of pixels that were classified into the correct
ground truth class. The total number of pixels is the sum of all the pixels in all the ground truth
classes.
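As a minimal Python sketch of this calculation (the matrix values are the example pixel counts from the tables above; the variable names are illustrative):

```python
# Example confusion matrix: rows = classified class, columns = ground truth
# class, in the order Unclassified, Grass, Forest, Swamp.
matrix = [
    [43689, 26949,   40, 18001],
    [32835, 64516, 1741,  3329],
    [ 8202,  7277, 4096,   654],
    [15227, 10742,    0, 18702],
]

# Correctly classified pixels lie on the diagonal of the matrix.
correct = sum(matrix[i][i] for i in range(len(matrix)))
total = sum(sum(row) for row in matrix)

overall_accuracy = correct / total
print(f"{correct}/{total} = {overall_accuracy:.4f}")  # 131003/256000 = 0.5117
```

For this example, 131,003 of the 256,000 pixels fall on the diagonal, giving an overall accuracy of about 51.2%.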
Kappa Coefficient
The kappa (κ) coefficient measures the agreement between the classification and the ground truth pixels. A kappa value of 1 represents perfect agreement, while a value of 0 represents no agreement. It is computed as:

\kappa = \frac{N \sum_{i} x_{ii} - \sum_{i} x_{i+} x_{+i}}{N^{2} - \sum_{i} x_{i+} x_{+i}}

Where:
N is the total number of pixels,
x_{ii} is the number of correctly classified pixels for class i (the diagonal of the confusion matrix),
x_{i+} is the total number of pixels classified into class i (the row total), and
x_{+i} is the total number of ground truth pixels in class i (the column total).
Applied to the example matrix above, this formula returns a kappa coefficient of 0.26.
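A short Python sketch of the kappa computation, using the example pixel counts given earlier (variable names are illustrative):

```python
# Example confusion matrix: rows = classified class, columns = ground truth
# class, in the order Unclassified, Grass, Forest, Swamp.
matrix = [
    [43689, 26949,   40, 18001],
    [32835, 64516, 1741,  3329],
    [ 8202,  7277, 4096,   654],
    [15227, 10742,    0, 18702],
]

n = sum(sum(row) for row in matrix)                   # total pixels N
diag = sum(matrix[i][i] for i in range(len(matrix)))  # sum of x_ii
row_totals = [sum(row) for row in matrix]             # x_i+ (classified totals)
col_totals = [sum(col) for col in zip(*matrix)]       # x_+i (ground truth totals)

# Chance-agreement term: sum of row total times column total per class.
chance = sum(r * c for r, c in zip(row_totals, col_totals))

kappa = (n * diag - chance) / (n * n - chance)
print(f"kappa = {kappa:.2f}")  # kappa = 0.26
```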
Commission
Errors of commission represent pixels that belong to another class but are labeled as belonging to the class of interest. The errors of commission are shown in the rows of the confusion matrix. In the confusion matrix example, a total of 102,421 pixels are labeled as the Grass class, of which 64,516 pixels are classified correctly and 37,905 pixels from other ground truth classes are classified incorrectly as Grass (37,905 is the sum of all the other entries in the Grass row of the confusion matrix). The ratio of the number of pixels classified incorrectly to the total number of pixels classified into the class gives the error of commission. For the Grass class, the error of commission is 37,905/102,421, which equals 37.0%.
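The Grass commission error above can be sketched in Python (the row values are taken from the example matrix; variable names are illustrative):

```python
# Grass row of the example confusion matrix (pixels classified as Grass),
# ground truth order: Unclassified, Grass, Forest, Swamp.
grass_row = [32835, 64516, 1741, 3329]
grass_index = 1

row_total = sum(grass_row)                  # 102421 pixels labeled Grass
wrong = row_total - grass_row[grass_index]  # 37905 pixels mislabeled as Grass

commission = wrong / row_total
print(f"{wrong}/{row_total} = {commission:.2%}")  # 37905/102421 = 37.01%
```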
Omission
Errors of omission represent pixels that belong to the ground truth class but that the classification technique has failed to assign to the proper class. The errors of omission are shown in the columns of the confusion matrix. In the confusion matrix example, the Grass class has a total of 109,484 ground truth pixels, of which 64,516 pixels are classified correctly and 44,968 Grass ground truth pixels are classified incorrectly (44,968 is the sum of all the other entries in the Grass column of the confusion matrix). The ratio of the number of pixels classified incorrectly to the total number of pixels in the ground truth class gives the error of omission. For the Grass class, the error of omission is 44,968/109,484, which equals 41.1%.
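Similarly, the Grass omission error can be sketched in Python (the column values are taken from the example matrix; variable names are illustrative):

```python
# Grass column of the example confusion matrix (ground truth Grass pixels),
# classified order: Unclassified, Grass, Forest, Swamp.
grass_col = [26949, 64516, 7277, 10742]
grass_index = 1

col_total = sum(grass_col)                    # 109484 Grass ground truth pixels
missed = col_total - grass_col[grass_index]   # 44968 assigned to other classes

omission = missed / col_total
print(f"{missed}/{col_total} = {omission:.1%}")  # 44968/109484 = 41.1%
```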
Producer Accuracy
The producer accuracy is a measure indicating the probability that the classifier has labeled an
image pixel into Class A given that the ground truth is Class A. In the confusion matrix example,
the Grass class has a total of 109,484 ground truth pixels where 64,516 pixels are classified
correctly. The producer accuracy is the ratio 64,516/109,484 or 58.9%.
User Accuracy
User accuracy is a measure indicating the probability that a pixel is Class A given that the
classifier has labeled the pixel into Class A. In the confusion matrix example, the classifier has
labeled 102,421 pixels as the Grass class and a total of 64,516 pixels are classified correctly.
The user accuracy is the ratio 64,516/102,421 or 63.0%.
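Producer and user accuracy can both be read off the example matrix: producer accuracy divides the diagonal entry by its column (ground truth) total, and user accuracy divides it by its row (classified) total. A minimal Python sketch, using the example counts (variable names are illustrative):

```python
# Example confusion matrix: rows = classified class, columns = ground truth
# class, in the order Unclassified, Grass, Forest, Swamp.
matrix = [
    [43689, 26949,   40, 18001],
    [32835, 64516, 1741,  3329],
    [ 8202,  7277, 4096,   654],
    [15227, 10742,    0, 18702],
]
classes = ["Unclassified", "Grass", "Forest", "Swamp"]

for i, name in enumerate(classes):
    correct = matrix[i][i]
    col_total = sum(row[i] for row in matrix)  # ground truth pixels of class i
    row_total = sum(matrix[i])                 # pixels classified as class i
    producer = correct / col_total
    user = correct / row_total
    # e.g. Grass: producer 58.9%, user 63.0%
    print(f"{name}: producer {producer:.1%}, user {user:.1%}")
```

These ratios reproduce the Prod. Acc. and User Acc. columns of the accuracy table above.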