11 February, 2023 02:04 PM IST | Washington | ANI
ChatGPT can score at or around the roughly 60 per cent passing threshold for the United States Medical Licensing Exam (USMLE), with responses that are internally coherent and contain frequent insights, according to a study by Tiffany Kung, Victor Tseng, and colleagues at AnsibleHealth, published on February 9, 2023, in the open-access journal PLOS Digital Health.
ChatGPT is a new artificial intelligence (AI) system known as a large language model (LLM), designed to produce human-like writing by predicting upcoming word sequences. Unlike most chatbots, ChatGPT cannot search the internet; instead, it generates text from word relationships predicted internally by the model.
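That next-word prediction step can be illustrated with openly available tools. The sketch below is a minimal illustration only, assuming the Hugging Face transformers library and the small, open GPT-2 model as a stand-in (ChatGPT's own model is not publicly available); it shows how a language model scores candidate next words for a prompt, the step that, repeated token by token, produces fluent text.

# Minimal sketch of next-word prediction with a small open language model.
# GPT-2 is used purely as a stand-in; ChatGPT's own model is not public.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The most common cause of community-acquired pneumonia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The model assigns a probability to every possible next token; text generation
# simply repeats this step, appending one chosen token at a time.
next_token_probs = logits[0, -1].softmax(dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r:>15}  p={float(prob):.3f}")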
Kung and colleagues tested ChatGPT's performance on the USMLE, a highly standardized and regulated series of three exams (Steps 1, 2CK, and 3) required for medical licensure in the United States. Taken by medical students and physicians-in-training, the USMLE assesses knowledge spanning most medical disciplines, ranging from biochemistry to diagnostic reasoning to bioethics.
After screening to remove image-based questions, the authors tested the software on 350 of the 376 public questions available from the June 2022 USMLE release.
After indeterminate responses were removed, ChatGPT scored between 52.4 per cent and 75.0 per cent across the three USMLE exams. The passing threshold each year is approximately 60 per cent. ChatGPT also demonstrated 94.6 per cent concordance across all its responses and produced at least one significant insight (something that was new, non-obvious, and clinically valid) for 88.9 per cent of its responses. Notably, ChatGPT exceeded the performance of PubMedGPT, a counterpart model trained exclusively on biomedical domain literature, which scored 50.8 per cent on an older dataset of USMLE-style questions.
While the relatively small input size restricted the depth and range of analyses, the authors note their findings provide a glimpse of ChatGPT's potential to enhance medical education and, eventually, clinical practice. For example, they add, clinicians at AnsibleHealth already use ChatGPT to rewrite jargon-heavy reports for easier patient comprehension.
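As a rough illustration of that kind of use, the sketch below shows how a jargon-heavy note might be sent to a chat-style model with an instruction to rewrite it in plain language. It is a hypothetical example only: the model name, prompt, and use of the OpenAI Python client are assumptions, and the article does not describe the actual tools or workflow used at AnsibleHealth.

# Hypothetical sketch: asking a chat model to rewrite a jargon-heavy note
# in plain language. The model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

report = (
    "Pt w/ COPD exacerbation, started on nebulized bronchodilators "
    "and a 5-day course of oral corticosteroids; f/u PFTs in 6 wks."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Rewrite clinical notes in plain language a patient can understand."},
        {"role": "user", "content": report},
    ],
)

print(response.choices[0].message.content)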
"Reaching the passing score for this notoriously difficult expert exam, and doing so without any human reinforcement, marks a notable milestone in clinical AI maturation," say the authors.
Author Dr Tiffany Kung added that ChatGPT's role in this research went beyond being the study subject: "ChatGPT contributed substantially to the writing of [our] manuscript... We interacted with ChatGPT much like a colleague, asking it to synthesize, simplify, and offer counterpoints to drafts in progress...All of the co-authors valued ChatGPT's input."