Use of ChatGPT on Taiwan’s Examination for Medical Doctors

Yung Shuo Kao, Wei Kai Chuang, Jen Yang

Research output: Contribution to journal › Letter › peer-review

16 Citations (Scopus)

Abstract

This study evaluates the performance of OpenAI's GPT-3 model on internal-medicine questions drawn from Taiwan's Staged Senior Professional and Technical Examinations Regulations for Medical Doctors. The official API was used to submit the questionnaire to the ChatGPT model. The model performed reasonably well in places, achieving its highest score, 8/13, in chest medicine; overall performance was limited, however, with only chest medicine scoring above 60%. ChatGPT scored relatively high in chest medicine, gastroenterology, and general medicine. One limitation of the study is its use of non-English text, which may degrade the model's performance, as the model is trained primarily on English text.

Original language: English
Pages (from-to): 455-457
Number of pages: 3
Journal: Annals of Biomedical Engineering
Volume: 52
Issue number: 3
DOIs
Publication status: Published - Mar 2024

Keywords

  • ChatGPT
  • Deep learning
  • Medical exam

ASJC Scopus subject areas

  • Biomedical Engineering
