An AI-based tool analyzes responses to mental health concerns and helps clinicians and others craft accurate and empathetic responses, researchers say. Photo courtesy of Tim Althoff
NEW YORK, May 25 (UPI) — Patients with medical questions who reach out to online providers may soon get the answers they need faster, thanks to communications tools enhanced with artificial intelligence, or AI, experts told UPI.
However, these new technologies, called chatbots, won’t replace human responses entirely, and shouldn’t, the experts say.
Instead, they will assist doctors and other clinicians in crafting more “empathetic” and informative responses and delivering them in a timely manner, they said.
“Our doctors are getting inundated with patient messages, and patients expect a response, and rightfully so,” said Dr. Christopher A. Longhurst, chief medical officer and chief digital officer at UC San Diego Health, which is piloting an AI-enhanced communication platform through its electronic health record system.
A new, AI-based program at UC San Diego Health allows the system’s doctors to respond to emails from patients more efficiently and “the patients love it” because they get the information they need, Longhurst told UPI in a phone interview.
Chatbots’ role in healthcare
Chatbot technologies, such as OpenAI’s ChatGPT, which debuted late last year and already has more than 100 million monthly users, use AI to generate detailed, human-like text based on specified source material, such as websites, according to Boston College.
Although research suggests the technology has been used to spread health misinformation online, it also has several potential positive applications in medicine, including helping providers write responses to patient questions, according to Longhurst.
It can also be used to train providers in how to craft messages that are clearly written, clinically accurate and empathetic, Tim Althoff, an assistant professor of computer science and engineering at the University of Washington, told UPI in a phone interview.
With UC San Diego Health’s AI-enhanced communications program, participating physicians see emails from patients seeking test results or asking questions about health conditions and/or symptoms in the electronic health record system, according to Longhurst.
On a section of the screen, the AI software generates a draft email response, which physicians can then choose to use as a starting point or write their own “from scratch,” he said.
“There’s no option to just send — either way, the doctors have to write or edit a response,” Longhurst said.
Responses generated with input from AI include a disclaimer that acknowledges that “part of this message was automatically generated in a secure environment and edited by your doctor,” he said.
“We’re being transparent with our patients because we know it would be creepy otherwise,” Longhurst said. “We’re helping the doctors save time, but this is still the doctor’s message, because they’re writing or editing it.”
In one example, he said, the technology improved the “quality” of a doctor’s response to a patient concerned about daily marijuana use by identifying resources for quitting and counseling services that the physician had not been aware of.
EHR vendor Epic Systems announced last month during the Healthcare Information and Management Systems Society conference that it is working with Microsoft to integrate “generative” AI into its software to create a similar platform for physician users.
Positive data
UC San Diego Health began its pilot of AI-assisted communication in January after seeing the initial results of a study co-authored by Longhurst and his colleagues at the University of California-San Diego.
The study, published by JAMA Internal Medicine on April 28, asked physicians to compare AI-generated responses to patient health questions on Reddit’s r/AskDocs with those crafted by healthcare professionals.
Physicians in the study preferred the chatbot-generated responses nearly 80% of the time and indicated that the AI messages were generally more “empathetic” than those written by humans alone, the researchers said.
“You’ve got millions of people out there who are desperate for information and posting questions anywhere and everywhere,” study co-author John Ayers, chief of innovation with the division of infectious diseases and global public health at the Qualcomm Institute at University of California-San Diego, told UPI in a phone interview.
“This technology can help [clinicians] respond to these questions — so how do we use it in a responsible manner?” he asked rhetorically.
Chatbots and mental health
Althoff and his team, meanwhile, have been working with Mental Health America, a nonprofit focused on people living with mental illness, to develop a tool called “Changing Thoughts with an AI Assistant.”
In a study scheduled to be presented in July during the Annual Meeting of the Association for Computational Linguistics, the researchers asked more than 2,000 visitors to Mental Health America’s website to describe any negative thoughts they may have been experiencing.
The chatbot tool then generated messages that suggested ways in which these negative thoughts could be “reframed,” Althoff said.
“We are testing and developing a self-help tool that uses [AI] in a very specific way: to identify cognitive distortions that may be present in a user’s negative thoughts and to suggest ways to reframe these thoughts to be more positive, realistic and helpful,” Mental Health America’s director of communications, Katie Lee, told UPI in an e-mail.
In another pilot program, Althoff and his colleagues are using chatbot technology with Talklife, a global peer support network for mental health that provides counseling services, to train the platform’s peer supporters to write effective and empathetic responses to peer questions and concerns.
In an example Althoff provided, a visitor to the peer support platform enters the following message into the chat feature: “My job is becoming more stressful with each passing day.”
Without input from the AI-based platform, a peer responds: “Don’t worry, I’m there for you.”
“‘Don’t worry’ is not the worst response, but it can come across as invalidating,” meaning it can be interpreted as being dismissive of someone’s concerns or symptoms, Althoff said. “If they’re seeking help, they’re already worried.”
The chatbot tool developed by Althoff and his colleagues instead suggests the following response: “That must be a real struggle.”
It also recommends that the peer supporter try to learn more about the situation, while offering the visitor some advice, by adding: “Have you tried talking to your boss?”
“We train these language models to use as much of [a peer’s] original message as possible, so that the responses are authentic and to increase their confidence in their ability to support others,” Althoff said.
Why AI?
By streamlining the response-writing process, chatbot technology can help clinicians respond to more patient requests submitted by email or online, and to do so more quickly, Longhurst said.
This is an important consideration, given that healthcare providers receive dozens or even hundreds of queries about medical problems daily, he said.
In fact, the volume of patient communications has prompted some health systems to charge fees to patients for expert responses “to disincentivize this type of communication,” he added.
Support from an AI chatbot “could change that equation,” meaning patients ultimately would be more likely to get the information they need, UC San Diego’s Ayers said.
The technology can also correct typos, grammatical errors and even mistakes in medical information so that patients “ultimately get more solid and accurate advice,” he said.
Saving clinician time isn’t only about efficiency, though, according to Longhurst.
With burnout among healthcare providers on the rise nationally since the COVID-19 pandemic, the support provided by the technology can also reduce stress, allowing more time for patient care, he said.
Freeing up providers may also give them the ability to see and evaluate more patients, addressing access-to-care issues caused by specialist shortages in many parts of the country, Althoff added.
“We know there is a huge gap in access to mental health services in certain areas and, as a result, the public is turning to online resources and social media,” Althoff said.
“Platforms that are using AI, responsibly, to enhance communications with these patients will be able to achieve better results, which ultimately benefits patients,” he said.