Your doctor or therapist might not be the only one listening in during your next visit. Artificial intelligence may be tuning in as well.
Why it matters: Health care is racing to incorporate generative AI and natural language processing to help wrangle patient information, provide reliable care summaries and flag health risks. But the efforts come with quality and privacy concerns that people developing these tools acknowledge.
The big picture: AI has been tested and used in the background of health care for years, reading radiology scans, assisting in diagnoses and converting faxed records into machine-readable data.
However, the launch of OpenAI’s ChatGPT last year unlocked a host of new capabilities, said Heather Lane, senior architect of the data science team at Athenahealth.
“We can absolutely do things now that we could not do a year ago,” Lane told Axios.
Driving the news: On Thursday, digital health company Hint Health announced a product built with OpenAI that will allow doctors to record an appointment, automatically transcribe the conversation and generate a summary that can be embedded directly in the patient’s medical record.
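The workflow Hint describes can be sketched as a three-stage pipeline: transcribe the audio, summarize the transcript, then append the summary to the record. The sketch below is a hypothetical illustration, not Hint’s or OpenAI’s actual implementation; `transcribe` and `summarize` are trivial stand-ins for a speech-to-text model and a generative summarizer, and `MedicalRecord` is an invented placeholder for an EHR entry.

```python
from dataclasses import dataclass, field

@dataclass
class MedicalRecord:
    """Hypothetical patient record that visit summaries get appended to."""
    patient_id: str
    notes: list = field(default_factory=list)

def transcribe(audio_clip: str) -> str:
    # Stand-in for a speech-to-text model (a real system would call an ASR service).
    return f"Transcript of {audio_clip}"

def summarize(transcript: str) -> str:
    # Stand-in for a generative-AI summarizer.
    return "Summary: " + transcript

def process_visit(audio_clip: str, record: MedicalRecord) -> str:
    """Record -> transcribe -> summarize -> embed in the medical record."""
    transcript = transcribe(audio_clip)
    summary = summarize(transcript)
    record.notes.append(summary)  # embedded directly in the patient's record
    return summary

record = MedicalRecord(patient_id="p-001")
summary = process_visit("visit_recording.wav", record)
```

The point of the structure is that the summary lands inside the clinician’s existing record-keeping workflow rather than in a separate tool, which is the acceleration Holdsworth describes below.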
“All of that happens in the workflows they are currently using. We’re just accelerating those workflows by a ton,” CEO Zak Holdsworth told Axios.
It joins a number of companies, including Amazon, Microsoft, and Google, racing to capture the health care documentation market using generative AI.
Between the lines: It’s also among a growing number of AI applications interacting directly with patients.
Mental health platform Talkspace has for three years run an AI tool that scans patients’ chats with their therapists for signals that a patient may be at increased risk of self-harm and alerts the clinician.
Another startup, Kintsugi, which has raised more than $28 million from investors and the National Science Foundation, uses AI-powered voice analysis to look for signs of clinical depression and anxiety in short clips of speech from interactions with a range of providers.
Qualtrics — which pulls together data to help health systems understand and improve the patient experience — has been using AI to summarize calls with customer service agents and billing representatives. It plans to bring ambient listening to the clinic to help inform doctors about the patient experience, Adrienne Boissy, chief medical officer of Qualtrics, told Axios.
“These moments with clinicians are one of the most powerful pieces of the patient experience,” she said.
She said it’s critical the tools are designed and deployed carefully “to respect the safety and privacy and the sacredness of that conversation.”
“That’s a responsibility I take very seriously — and I know Qualtrics does — but the field at large also needs to be very careful about as we move forward.”
What to watch: The use of AI in patient encounters raises a number of privacy concerns, as well as worries about the accuracy of the data and potential bias.
Advocates have raised alarms that AI tools are being launched with little or no oversight, or even standards for when patients should be notified about their use.
One of the more immediate privacy concerns, Athenahealth’s Lane points out, is that AI systems are trained on large amounts of real-world data, raising the question of whether patients’ data could be used for such training in the future.
“Is there a privacy concern there? Probably, and people who are the privacy experts should be looking at weighing in,” Lane said.