    2.8. Classification evaluation methods
    Cervical cancer classification is a complex task; therefore, classification models are also usually complex. However, the more complex the classification model, the lower the chance of finding a model that fits the data well [36]. This issue was handled by dividing the problem into subproblems and tackling them one by one using the defuzzification method described. This approach is referred to as the hierarchical approach [76] and has been reported to yield better classification results [65]. The performance of a classifier was evaluated using accuracy, false positive, false negative, sensitivity, specificity and ROC area metrics. Sensitivity (true positive rate) measures the proportion of actual positives that are correctly identified as such, whereas specificity (true negative rate) measures the proportion of actual negatives that are correctly identified as such. Sensitivity and specificity are given by Equation (8):

    Sensitivity = TP / (TP + FN),    Specificity = TN / (TN + FP)    (8)

    where TP = true positives, FN = false negatives, TN = true negatives and FP = false positives.
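    As a concrete check, both metrics in Equation (8) can be computed directly from a confusion matrix. The Python sketch below (ours, for illustration) uses the single-cell counts reported later in Table 4 (TP = 555, FN = 4, TN = 154, FP = 4):

```python
def sensitivity(tp: int, fn: int) -> float:
    # True positive rate: TP / (TP + FN), per Equation (8).
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    # True negative rate: TN / (TN + FP), per Equation (8).
    return tn / (tn + fp)

# Counts from the single-cell results (Table 4): TP=555, FN=4, TN=154, FP=4.
print(f"Sensitivity = {sensitivity(555, 4):.2%}")  # 99.28%
print(f"Specificity = {specificity(154, 4):.2%}")  # 97.47%
```

    These values match the figures reported for Dataset 1 in Section 3.1.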
    3. Results
    3.1. Classification accuracy
    Informatics in Medicine Unlocked 14 (2019) 23–33

    A confusion matrix for the classification results on the test single cell dataset (Dataset 1, consisting of 717 single cells) is shown in Table 4. Of the 158 normal cells, 154 were correctly classified as normal and four were incorrectly classified as abnormal (one normal superficial, one normal intermediate and two normal columnar). Of the 559 abnormal cells, 555 were correctly classified as abnormal and four were incorrectly classified as normal (two carcinoma in situ cells, one moderate dysplastic and one mild dysplastic). The overall accuracy, sensitivity and specificity of the classifier on this dataset were 98.88%, 99.28% and 97.47% respectively. A False Negative Rate (FNR), False Positive Rate (FPR) and classification error of 0.72%, 2.53% and 1.12% respectively were obtained.

    Table 4
    Cervical cancer classification results from single cells.

                      Abnormal    Normal
    True Positive        555         –
    False Negative         4         –
    True Negative          –        154
    False Positive         –          4
    Total                559        158

    Fig. 10. ROC curve for the classifier performance on single cell images from DTU/Herlev dataset (Dataset 1).
    A Receiver Operating Characteristic (ROC) curve was plotted to analyze how well the classifier distinguishes between true positives and true negatives. This was necessary because the classifier needs not only to correctly predict a positive as a positive, but also a negative as a negative. The ROC curve was obtained by plotting sensitivity (the probability of predicting a real positive as positive) against 100 − specificity (the false positive rate, i.e. the probability of predicting a real negative as positive), as shown in Fig. 10.
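    The construction of such a curve can be sketched as follows: given a per-cell abnormality score and the true label for each cell, every decision threshold yields one (FPR, TPR) point. The Python sketch below is a generic illustration of this sweep; the scores and labels are invented for demonstration and are not taken from the paper's datasets:

```python
def roc_points(scores, labels):
    """Compute (FPR, TPR) pairs by sweeping a decision threshold over the scores.

    labels: 1 = abnormal (positive class), 0 = normal (negative class).
    """
    thresholds = sorted(set(scores), reverse=True)
    pos = sum(labels)                # number of actual positives
    neg = len(labels) - pos         # number of actual negatives
    points = [(0.0, 0.0)]           # curve starts at the origin
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

# Toy example: five cells with hypothetical abnormality scores.
scores = [0.9, 0.8, 0.7, 0.3, 0.2]
labels = [1, 1, 0, 1, 0]
points = roc_points(scores, labels)  # ends at (1.0, 1.0)
```

    Joining these points from (0, 0) to (1, 1) gives the ROC curve; the closer the curve hugs the top-left corner, the better the separation between classes.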
    A confusion matrix for the classification results on test Pap smear slides (Dataset 2 of 297 full slide images) is shown in Table 5. Of the 141 normal slides, 137 were correctly classified as normal and four were incorrectly classified as abnormal. Of the 156 abnormal slides, 153 were correctly classified as abnormal and three were incorrectly classified as normal. The overall accuracy, sensitivity and specificity of the classifier on this dataset were 97.64%, 98.08% and 97.16% respectively. A False Negative Rate (FNR), False Positive Rate (FPR) and classification error of 1.92%, 2.84% and 2.36% respectively were obtained.
    The ROC curve analysis of the performance of the classifier on full slide Pap smears is shown in Fig. 11.

    Fig. 11. ROC curve for the classifier performance on full slide Pap smears from DTU/Herlev dataset (Dataset 2).

    Table 5
    Cervical cancer classification results from full slide Pap smears.

                      Abnormal    Normal
    True Positive        153         –
    False Negative         3         –
    True Negative          –        137
    False Positive         –          4
    Total                156        141

    Furthermore, the classifier was evaluated on a dataset of 500 single cell images (250 normal cells and 250 abnormal cells) from Mbarara Regional Referral Hospital that had been prepared and classified as normal or abnormal by a cytotechnologist. A confusion matrix for the classification results on this dataset is shown in Table 6. Of the 250 normal cells, 238 were correctly classified as normal and 12 were incorrectly classified as abnormal. Of the 250 abnormal cells, 246 were correctly classified as abnormal and four were incorrectly classified as normal. The overall accuracy, sensitivity and specificity of the classifier on this dataset were 96.80%, 98.40% and 95.20% respectively. A False Negative Rate (FNR), False Positive Rate (FPR) and classification error of 1.60%, 4.80% and 3.20% respectively were obtained.

    Table 6
    Cervical cancer classification results from Pap smear single cells (Mbarara dataset).

                      Abnormal    Normal
    True Positive        246         –
    False Negative         4         –
    True Negative          –        238
    False Positive         –         12
    Total                250        250
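    All three sets of reported figures follow directly from the confusion matrices in Tables 4–6. The Python sketch below (ours; the dataset labels are added for readability) recomputes every metric from the raw counts:

```python
def metrics(tp, fn, tn, fp):
    """Derive all reported metrics from confusion-matrix counts."""
    total = tp + fn + tn + fp
    return {
        "accuracy": (tp + tn) / total,      # correctly classified / all
        "sensitivity": tp / (tp + fn),      # Equation (8), true positive rate
        "specificity": tn / (tn + fp),      # Equation (8), true negative rate
        "fnr": fn / (tp + fn),              # false negative rate
        "fpr": fp / (tn + fp),              # false positive rate
        "error": (fn + fp) / total,         # classification error
    }

# (TP, FN, TN, FP) from Tables 4, 5 and 6 respectively.
datasets = {
    "Dataset 1 (Herlev single cells)":   (555, 4, 154, 4),
    "Dataset 2 (Herlev full slides)":    (153, 3, 137, 4),
    "Dataset 3 (Mbarara single cells)":  (246, 4, 238, 12),
}
for name, counts in datasets.items():
    m = metrics(*counts)
    print(name, {k: f"{100 * v:.2f}%" for k, v in m.items()})
```

    Running this reproduces the percentages quoted above, e.g. 98.88% / 99.28% / 97.47% for Dataset 1 and 96.80% / 98.40% / 95.20% for the Mbarara cells.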