Emerging technologies such as artificial intelligence (AI) are rapidly transforming the way we receive and deliver health care. The development of ChatGPT by OpenAI has received much public attention, gaining 1 million users in under a week. This innovation has opened a range of possibilities for the medical field.
ChatGPT is an AI technology that uses natural language processing (NLP) to generate responses to user input. It is based on an AI model known as GPT-3 (Generative Pre-trained Transformer 3). ChatGPT is designed to understand language and generate responses in a conversational manner, making it useful for creating chatbots and other interactive applications. It can be used to automate tasks related to: electronic health records (EHRs); diagnosis and treatment; medical education; and patient education.
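As a rough sketch of how such a conversational exchange is structured under the hood, the snippet below assembles the kind of role-tagged message list a GPT-style chat model receives. The model name and the `client.chat.completions.create` call noted in the comment are assumptions based on OpenAI's published API, not details from this article.

```python
def build_chat_request(user_question: str) -> dict:
    """Assemble the message payload a chat model would receive."""
    return {
        "model": "gpt-3.5-turbo",  # assumed model name, for illustration only
        "messages": [
            # A "system" message sets the assistant's behaviour;
            # the "user" message carries the actual question.
            {"role": "system",
             "content": "You are a helpful medical information assistant."},
            {"role": "user", "content": user_question},
        ],
    }

request = build_chat_request(
    "Which diagnoses should be ruled out when a patient "
    "presents with chest pain?"
)
# Sending it would look like: client.chat.completions.create(**request)
print(request["messages"][-1]["content"])
```

The point of the role-tagged list is that the model sees the whole conversation as context, which is what makes multi-turn, chatbot-style interaction possible.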
Natural language processing for electronic health records
One use case for ChatGPT in health care is in the field of electronic health records (EHRs), digital versions of a patient’s medical history, including diagnoses, treatments and medications. ChatGPT could be used to automatically extract information from unstructured data sources, such as free-text notes written by health-care providers. This would reduce the burden of manually abstracting notes and save considerable time for medical professionals. The hope is that, one day, ChatGPT could take all of this unstructured data and generate complete EHRs.
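To make the idea concrete, here is a toy sketch of the kind of structured extraction such a system automates: pulling a diagnosis and medication names out of a short free-text note with simple patterns. A real NLP system (or a model like ChatGPT) would cope with far messier language; the field names and patterns below are illustrative assumptions, not part of any system described in this article.

```python
import re

def extract_fields(note: str) -> dict:
    """Pull a diagnosis and medication names from a free-text clinical note."""
    # Capture the text after "Diagnosis:" up to the next period.
    diagnosis = re.search(r"[Dd]iagnosis:\s*([^.]+)", note)
    # Capture a word following "prescribed" or "started on".
    meds = re.findall(r"(?:prescribed|started on)\s+([A-Za-z]+)", note)
    return {
        "diagnosis": diagnosis.group(1).strip() if diagnosis else None,
        "medications": meds,
    }

note = ("Patient seen for follow-up. Diagnosis: Type 2 diabetes. "
        "Patient was started on metformin and prescribed lisinopril "
        "for blood pressure.")
print(extract_fields(note))
# → {'diagnosis': 'Type 2 diabetes', 'medications': ['metformin', 'lisinopril']}
```

Brittle hand-written patterns like these are exactly why language models are attractive for this task: they can recognize the same facts even when the note's wording varies.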
In a study published in the American Journal of Epidemiology, researchers developed an NLP-based system that abstracted information from 1,472 patient EHRs. The system correctly identified 92 per cent of cancer recurrences from various types of clinical documentation. The authors concluded that NLP could reduce the number of charts requiring manual review by 90 per cent.
Diagnosis and treatment
AI algorithms have already been used to improve diagnostic accuracy; ChatGPT could potentially be used to generate diagnostic reports or recommendations based on a patient’s medical history and symptoms. This could be especially useful in cases where a patient’s symptoms do not fit neatly into a specific diagnosis, as ChatGPT could help to identify patterns and connections that may not be immediately apparent to a human clinician. I asked ChatGPT, “In the Emergency Department, physicians need to rule out which diagnoses when a patient presents with chest pain” and it responded in seconds:
Medical education
ChatGPT could also be used to improve medical education by providing a personalized and interactive learning experience. This technology can act as a personal tutor for students and help them understand complex medical concepts and prepare for critical exams such as the United States Medical Licensing Examination (USMLE).
A recent paper showed that ChatGPT performed at more than 50 per cent accuracy across all three USMLE exams, and reached 60 per cent in most analyses, essentially a passing score on each exam. The model received no specialized training or reinforcement prior to the exams. ChatGPT also demonstrated a high level of concordance and insight in its explanations, which would be helpful and understandable for a student studying for the exam.
Patient communication and education
Finally, ChatGPT can help facilitate patient communication and education. For example, it could be used to generate discharge summaries, patient education materials or personalized treatment plans based on a patient’s individual needs and preferences. This could help improve patient adherence to treatment regimens and ultimately lead to better health outcomes. I asked ChatGPT for a treatment plan for my patient who was diagnosed with Type 2 diabetes:
While more research is needed to fully understand its capabilities and limitations, early results suggest that ChatGPT could be a valuable tool for improving patient care and outcomes. In addition to the use cases mentioned above, there are likely many other potential applications for ChatGPT as this technology develops.
Note: The author would like to thank OpenAI as parts of the article were written with the help of ChatGPT.
ChatGPT matters in health care because it can give patients quick, accurate responses to their medical queries, providing them with a sense of security and empowerment.
ChatGPT has already demonstrated bias learned from its training data. This can be problematic; as the saying goes, “ask three doctors, and you’ll get three differing opinions.”
The question is: which one is the ‘right’ opinion?
I hope ChatGPT delivers on its proponents’ promises, and I certainly hope it improves website ‘virtual assistants’. But can we trust it? For example, can it distinguish between the two meanings of NLP: Natural Language Processing (a computing concept) and Neuro-Linguistic Programming (a psychological concept)? Can it distinguish between colloquial and formal idiosyncratic note-taking, writing and speaking? Will it (does it) have robust quality assurance, data security and governance oversight? Will it meaningfully reduce medical errors? Speed up medical diagnosis? Reduce patient wait times? Reduce system costs? Improve medical system integration without sacrificing flexible decision-making?