TY - JOUR
T1 - Deep learning classifier with patient’s metadata of dermoscopic images in malignant melanoma detection
AU - Anggraini Ningrum, Dina Nur
AU - Yuan, Sheng-Po
AU - Kung, Woon-Man
AU - Wu, Chieh-Chen
AU - Tzeng, I-Shiang
AU - Huang, Chu-Ya
AU - Li, Jack Yu Chuan
AU - Li, Yu-Chuan
AU - Wang, Yao-Chin
N1 - Funding Information:
The first author thanks the Directorate General of Resources for Science, Technology and Higher Education at the Ministry of Education and Culture, Republic of Indonesia, for the sponsorship of her doctoral study. The author is also grateful to Muhammad Solihuddin Muhtar, International Center for Health Information Technology, Taipei Medical University, who provided advice and assisted in the process of evaluating model performance. He was not compensated for his contribution.
Publisher Copyright:
© 2021 Ningrum et al. This work is published and licensed by Dove Medical Press Limited.
PY - 2021
Y1 - 2021
N2 - Background: The incidence of skin cancer is one of the global burdens of malignancy and increases each year, with melanoma being the deadliest form. Imaging-based automated skin cancer detection remains challenging owing to variability in skin lesions and the limited availability of standard datasets. Recent research indicates the potential of deep convolutional neural networks (CNN) in predicting outcomes from simple as well as highly complicated images. However, their implementation requires high-end computational facilities, which are not feasible in low-resource and remote areas of health care. Combining images with patient metadata holds potential, but studies of this approach are still lacking. Objective: We aimed to develop malignant melanoma detection based on dermoscopic images and patient metadata using an artificial intelligence (AI) model that works on low-resource devices. Methods: We used the open-access International Skin Imaging Collaboration (ISIC) Archive dermatology repository, which consists of 23,801 biopsy-proven dermoscopic images. We tested performance on binary classification of malignant melanoma vs nonmalignant melanoma. From 1200 sample images, we split the data into training (72%), validation (18%), and testing (10%) sets. We compared a CNN using image data only (CNN model) with a CNN for image data combined with an artificial neural network (ANN) for patient metadata (CNN+ANN model). Results: The balanced accuracy of the CNN+ANN model (92.34%) was higher than that of the CNN model (73.69%). Incorporating patient metadata through the ANN prevented the overfitting that occurred in the CNN model using dermoscopic images only. The small size (24 MB) of the model makes it possible to run on a mid-range computer without cloud computing, making it suitable for deployment on devices with limited resources. Conclusion: The CNN+ANN model can increase classification accuracy in malignant melanoma detection even with limited data and is promising for development as a screening device in remote and low-resource health care settings.
AB - Background: The incidence of skin cancer is one of the global burdens of malignancy and increases each year, with melanoma being the deadliest form. Imaging-based automated skin cancer detection remains challenging owing to variability in skin lesions and the limited availability of standard datasets. Recent research indicates the potential of deep convolutional neural networks (CNN) in predicting outcomes from simple as well as highly complicated images. However, their implementation requires high-end computational facilities, which are not feasible in low-resource and remote areas of health care. Combining images with patient metadata holds potential, but studies of this approach are still lacking. Objective: We aimed to develop malignant melanoma detection based on dermoscopic images and patient metadata using an artificial intelligence (AI) model that works on low-resource devices. Methods: We used the open-access International Skin Imaging Collaboration (ISIC) Archive dermatology repository, which consists of 23,801 biopsy-proven dermoscopic images. We tested performance on binary classification of malignant melanoma vs nonmalignant melanoma. From 1200 sample images, we split the data into training (72%), validation (18%), and testing (10%) sets. We compared a CNN using image data only (CNN model) with a CNN for image data combined with an artificial neural network (ANN) for patient metadata (CNN+ANN model). Results: The balanced accuracy of the CNN+ANN model (92.34%) was higher than that of the CNN model (73.69%). Incorporating patient metadata through the ANN prevented the overfitting that occurred in the CNN model using dermoscopic images only. The small size (24 MB) of the model makes it possible to run on a mid-range computer without cloud computing, making it suitable for deployment on devices with limited resources. Conclusion: The CNN+ANN model can increase classification accuracy in malignant melanoma detection even with limited data and is promising for development as a screening device in remote and low-resource health care settings.
KW - Artificial neural network
KW - Convolutional neural network
KW - Embedded artificial intelligence
KW - Skin cancer
UR - http://www.scopus.com/inward/record.url?scp=85105614341&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85105614341&partnerID=8YFLogxK
U2 - 10.2147/JMDH.S306284
DO - 10.2147/JMDH.S306284
M3 - Article
AN - SCOPUS:85105614341
SN - 1178-2390
VL - 14
SP - 877
EP - 885
JO - Journal of Multidisciplinary Healthcare
JF - Journal of Multidisciplinary Healthcare
ER -