TY - JOUR
T1 - Can large language models provide secondary reliable opinion on treatment options for dermatological diseases?
AU - Iqbal, Usman
AU - Lee, Leon Tsung-Ju
AU - Rahmanti, Annisa Ristya
AU - Celi, Leo Anthony
AU - Li, Yu-Chuan Jack
N1 - Publisher Copyright:
© 2024 The Author(s). Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved.
PY - 2024/6/1
Y1 - 2024/6/1
N2 - Objective: To investigate the consistency and reliability of medication recommendations provided by ChatGPT for common dermatological conditions, highlighting the potential for ChatGPT to offer second opinions in patient treatment while also delineating possible limitations. Materials and Methods: In this mixed-methods study, conducted in April 2023, we posed survey questions to ChatGPT to generate drug recommendations, which were compared against two secondary databases (Taiwan's National Health Insurance Research Database and a US medical center database) and validated by dermatologists. The methodology included preprocessing queries, executing them multiple times, and evaluating ChatGPT responses against the databases and dermatologists. The ChatGPT-generated responses were analyzed statistically in a disease-drug matrix, considering disease-medication associations (Q-value) and expert evaluation. Results: ChatGPT achieved a 98.87% dermatologist approval rate for common dermatological medication recommendations. We evaluated its drug suggestions using the Q-value, showing that agreement under human expert validation surpassed agreement under Q-value cutoff-based validation. Varying the cutoff value for disease-medication associations, a cutoff of 3 achieved 95.14% accurate prescriptions, a cutoff of 5 yielded 85.42%, and a cutoff of 10 resulted in 72.92%. While ChatGPT offered accurate drug advice, it occasionally included incorrect ATC codes, leading to issues such as incorrect drug use and type, nonexistent codes, repeated errors, and incomplete medication codes. Conclusion: ChatGPT can provide medication recommendations as a second opinion in dermatology treatment, but its reliability and comprehensiveness need refinement for greater accuracy. In the future, integrating a medical domain-specific knowledge base for training and ongoing optimization will enhance the precision of ChatGPT's results.
AB - Objective: To investigate the consistency and reliability of medication recommendations provided by ChatGPT for common dermatological conditions, highlighting the potential for ChatGPT to offer second opinions in patient treatment while also delineating possible limitations. Materials and Methods: In this mixed-methods study, conducted in April 2023, we posed survey questions to ChatGPT to generate drug recommendations, which were compared against two secondary databases (Taiwan's National Health Insurance Research Database and a US medical center database) and validated by dermatologists. The methodology included preprocessing queries, executing them multiple times, and evaluating ChatGPT responses against the databases and dermatologists. The ChatGPT-generated responses were analyzed statistically in a disease-drug matrix, considering disease-medication associations (Q-value) and expert evaluation. Results: ChatGPT achieved a 98.87% dermatologist approval rate for common dermatological medication recommendations. We evaluated its drug suggestions using the Q-value, showing that agreement under human expert validation surpassed agreement under Q-value cutoff-based validation. Varying the cutoff value for disease-medication associations, a cutoff of 3 achieved 95.14% accurate prescriptions, a cutoff of 5 yielded 85.42%, and a cutoff of 10 resulted in 72.92%. While ChatGPT offered accurate drug advice, it occasionally included incorrect ATC codes, leading to issues such as incorrect drug use and type, nonexistent codes, repeated errors, and incomplete medication codes. Conclusion: ChatGPT can provide medication recommendations as a second opinion in dermatology treatment, but its reliability and comprehensiveness need refinement for greater accuracy. In the future, integrating a medical domain-specific knowledge base for training and ongoing optimization will enhance the precision of ChatGPT's results.
KW - artificial intelligence
KW - ChatGPT
KW - decision-making support
KW - dermatology
KW - large language model
KW - medication
KW - second opinion
UR - http://www.scopus.com/inward/record.url?scp=85193951427&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85193951427&partnerID=8YFLogxK
U2 - 10.1093/jamia/ocae067
DO - 10.1093/jamia/ocae067
M3 - Article
C2 - 38578616
AN - SCOPUS:85193951427
SN - 1067-5027
VL - 31
SP - 1341
EP - 1347
JO - Journal of the American Medical Informatics Association
JF - Journal of the American Medical Informatics Association
IS - 6
ER -