Artificial Intelligence: Kinks to iron out but a boon for health care

For more than three decades, we have been told that artificial intelligence (AI) has the potential to change our lives. Now, with the release of the large language model (LLM) AI tool ChatGPT, the public has a chance to get its hands on the technology.

ChatGPT is a text-based interactive tool that lets users ask questions on topics ranging from oil changes to academic writing. It stands out from previous tools because it can remember earlier parts of a conversation and provide unique responses. These abilities have kept ChatGPT in the headlines in 2023, with users appreciating its creativity and ease of use. It has been used to write computer code and generate a text game, and has even passed the United States Medical Licensing Examination.

Making an account to use ChatGPT is free, allowing people from many industries to try it. Conversations about bringing ChatGPT and AI into the health-care field have been going on for years; now anyone can put the technology to the test. Try some of these medical prompts (a short code sketch for sending them programmatically follows the list):

○  I want you to act as a doctor. I need you to come up with a medical plan on how to take care of a 70-year-old woman dealing with dementia and schizophrenia.

○  What are some side effects of taking the prescription medication metformin?

○  How does chronic stress affect my health?
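For readers comfortable with a little code, prompts like the ones above can also be sent to ChatGPT programmatically. The sketch below is illustrative only: it assumes the openai Python package is installed, an API key is set in the OPENAI_API_KEY environment variable, and the gpt-3.5-turbo model is available; none of these details come from the examples above.

from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# One of the sample medical prompts from the list above.
prompt = ("What are some side effects of taking the prescription "
          "medication metformin?")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)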

One of ChatGPT’s core strengths is its ability to understand context, whether that means writing entirely new prose or reworking information provided to it. In a medical setting, the uses are immediate: converting dictated bullet points into a formal letter or compiling a comprehensive past medical history from an electronic medical record could speed up documentation considerably, allowing clinicians to focus more on patient encounters and medical decision-making. For example, Stuart Blitz, the COO of the U.S. company HoneHealth, tweeted a demonstration in which ChatGPT wrote a letter, complete with references, contesting an insurance company's denial of an investigation a patient needed. Kashif Pirzada, a Toronto doctor, recorded a sample blood pressure check on a simulated patient and asked ChatGPT to write a progress note, which it produced with excellent accuracy.
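As a rough sketch of the documentation workflow described above, the hypothetical example below asks the same API to expand dictated bullet points into a formal letter. The bullet points, prompt wording and model name are all assumptions for illustration, not a real clinical tool.

from openai import OpenAI

client = OpenAI()

# Hypothetical dictated bullet points from a clinical encounter.
dictated_notes = [
    "70-year-old woman, routine follow-up",
    "blood pressure well controlled on current medication",
    "continue metformin, recheck HbA1c in three months",
]

# Ask the model to turn the terse points into formal prose.
prompt = ("Convert the following dictated bullet points into a formal "
          "letter to the patient's family physician:\n- "
          + "\n- ".join(dictated_notes))

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)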

However, ChatGPT and other LLMs still have many kinks to work out. Reports have noted that it is a confident liar, generating false statements and bogus citations in a process computer scientists call “hallucination.” CNET published dozens of articles written exclusively by AI but then had to issue several lengthy corrections and clarifications. Google recently demonstrated its own LLM tool, Bard, which presented false information in a demo posted to official Google social media accounts.

As well, although ChatGPT can provide reliable information, its training data ends in 2021. This cutoff causes the system to fail at questions about recent events, such as updates on the COVID-19 pandemic or medical statistics from the past year. Although OpenAI CEO Sam Altman has said the tool will become more efficient and less error-prone with its latest update, it is still missing the core memory piece needed for development in the health-care industry. Microsoft has announced the coming integration of ChatGPT into its search engine Bing, with an up-to-date knowledge base and the capacity to provide hyperlink citations for its sources. This update would give users responses that are both conversational and accurate.

There are also ethical concerns. Springer Nature, which publishes thousands of scientific journals, has said it will not accept articles written using AI, though it will accept papers that use AI in the research itself. KoKo, a non-profit specializing in emotional support, used ChatGPT during private client conversations to gauge the tool's capacity to provide text-based support. The advice ChatGPT generated reportedly earned higher satisfaction ratings than advice written by humans, underscoring its ability to come across as empathetic and caring. Notably, KoKo ran this trial without consent, sparking an ethical debate, with outsiders and users calling it a breach of privacy. Ethics aside, the episode further demonstrates how easily AI can be integrated into the health-care industry.

Further advances have come from Google, which released MedPaLM, a medical AI tool focused solely on medical advice and diagnoses. Within its first month, it posted excellent results, with answers rated 92.6 per cent in agreement with clinicians' advice. The tool combines six medical data sets to formulate precise and accurate answers, and unlike ChatGPT, it is continually updated with incoming medical information. Its use could cut costs for hospitals by reducing the load on clinical staff.

Nuance, creator of the popular dictation application Dragon Medical One, has built an “ambient clinical intelligence” tool called Dragon Ambient eXperience that uses microphones and AI to automatically document a clinical encounter. Glass Health has released Glass AI, which lets clinicians enter information about a patient presentation and receive differential diagnoses and clinical plans. And dotphraise has a dictation tool available for trial that converts a clinician's brief dictation about a patient into text ready for charts and information letters. Using AI to minimize the time physicians spend on paperwork allows for a deeper patient-doctor relationship, as long as robust quality checks are in place.

Artificial intelligence is knocking on the door of your doctor’s office and the hospital wards. Extending its capabilities to clerical work, such as scheduling appointments and filing reports, is only a matter of time. Allowing AI to provide care requires informed consent, a thoughtful rollout and extensive quality control. Ultimately, artificial intelligence will not replace human clinicians, but careful application of this technology will allow them to be more human.

Authors

Vanessa Duong

Contributor

Vanessa Duong is a first-year health sciences student at the University of Waterloo. Her research interests include health informatics and artificial intelligence.

Colin Whaley

Contributor

Colin Whaley is an incoming internal medicine resident physician at the University of Toronto and a final-year medical student at McMaster University. He previously completed a Master of Science in Pharmacy at the University of Waterloo, evaluating the addition of a medication's indication to prescriptions and medication labels.
