Patients are guilted into allowing doctors to use it. I have gotten pushback when I asked to have it turned off.
The messaging is that it all stays local. In reality it doesn't; when I last looked it was running on Azure OpenAI in Australia.
I spoke to a practice nurse a few days ago to discuss this.
She said she didn't think patients would care if they knew the data would be shipped off-site. She said people's problems are not that confidential and their health data is probably online anyway, so who cares.
If you're wondering why this guy doesn't just check the AI scribe notes: well, probably because, with the amount of detail it writes, he'd be better off writing a quick SOAP note.
I've had similar experiences in Australia. I emailed one of my docs' practices asking whether they use Heidi AI (or anything similar) and stating that I do not consent. They were using it without my consent.
In the consultation, he tried to give me the spiel, including the 'it stays local' line. The Heidi AI website has scripts for clinicians; he ran through them all.
Oh, and their documentation for clinicians mentions every two sentences that patient/client consent is not required at all. I wonder why they keep saying that? Hmm.
This doctor knows I am a developer. When I asked him to explain what he meant by 'local data', he said the servers were in Australia. I almost flipped the desk. Aside from the fact that Australian hosting is mandatory (it's the law! they do not have a choice!), it's kind of meaningless where the servers are, especially when he (on behalf of Heidi AI) was trying to sell it as a security or privacy feature. When I pointed that out, he just couldn't wrap his head around it. Of course he couldn't; he doesn't understand any of this.
AHPRA's "Meeting your professional obligations when using Artificial Intelligence in healthcare" guideline [0] (not any kind of enforceable requirement, unfortunately) has great stuff in it. It encourages using these tools only with patients' informed consent. But even if my doctor read it, agreed with it, and cared about getting consent, how the hell can he sufficiently inform patients when he has absolutely no idea about, well, anything?
He keeps pushing it and asking whether I've changed my mind about letting him use it. No! He keeps asking questions that only confirm he hasn't done even a perfunctory web search on why some people distrust LLMs, especially in the context of PII and PHI.
I really do feel for clinicians, but these products are not the answer.
[0] https://www.ahpra.gov.au/Resources/Artificial-Intelligence-i...
I’d go as far as saying she’s right. And we’re in a tiny minority for even thinking about it.
That memo is how you make staff hide things instead of asking for help.
The scarier part though is that LLM-written clinical notes probably look fine. That's the whole problem. I built a system where one AI was scoring another AI's work, and it kept giving high marks because the output read well. I had to make the scorer blind to the original coaching text before it started catching real issues. Now imagine that "reads well, isn't right" failure mode in clinical documentation.
Nobody's re-reading the phrasing until a patient outcome goes wrong.
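For the curious, the fix looked roughly like this (a minimal Python sketch; call_llm, the function names, and the prompts are illustrative stand-ins, not my production code):

    from typing import List

    def call_llm(prompt: str) -> str:
        # Stand-in for whatever completion API is in use.
        raise NotImplementedError("wire up a real model client here")

    def extract_claims(source_text: str) -> List[str]:
        # Run against the SOURCE once, up front, so the scorer never
        # has to see the source prose itself.
        response = call_llm(
            "List every concrete factual claim in the following text, "
            "one claim per line, no commentary:\n\n" + source_text
        )
        return [line.strip("- ").strip()
                for line in response.splitlines() if line.strip()]

    def blind_score(candidate: str, claims: List[str]) -> str:
        # The scorer is blind: it sees the candidate plus a terse
        # checklist, never the original well-written text, so fluent
        # phrasing can't earn unearned marks.
        checklist = "\n".join(f"- {c}" for c in claims)
        return call_llm(
            "For each claim below, answer SUPPORTED, CONTRADICTED, or "
            "MISSING based only on the candidate text. Ignore style "
            "and fluency.\n\nCandidate text:\n" + candidate
            + "\n\nClaims:\n" + checklist
        )

The point is simply that the judge shouldn't share inputs with the thing it's judging; once mine stopped seeing the polished prose, it started flagging the substantive misses.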
I think there is a chance these systems lead to a change where the note is no longer the fundamental record of the encounter. Instead, different artifacts get created for each entity that needs them: billing gets its view, scheduling gets its, and so on. That would, hopefully, give practitioners a chance to get back to focusing on the patient rather than making sure the note captured one more billable code.

Of course, the negative is also likely to happen here. As practitioners spend less time on notes, they probably won't get that time back with individual patients; they'll just see more patients. It will also likely mean higher bills as health systems squeeze more out of every encounter. There is no perfect outcome when profit is the driving motivator, but with this much change happening I can only hope the industry shakes up enough to land in a new, better optimum.
This is somewhat what an EHR already does. The discrete data elements in the DB, and the way they are displayed in the system, are a better record than free-text notes.
The problem is creating standards so this data is easily exchanged. Anyone can read and parse a free-text note, but if we had real interchange standards that fallback would be less necessary.
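To make "discrete data elements" concrete: something like an HL7 FHIR Observation carries the fact as structured fields rather than prose. A toy Python illustration (a simplified subset of fields, not a complete resource):

    free_text = "BP today was a bit high, around 150 over 95, will recheck."

    # The same fact as a simplified, FHIR-style Observation
    # (illustrative field subset; a real resource carries coded
    # concepts and more metadata).
    observation = {
        "resourceType": "Observation",
        "code": {"text": "Blood pressure"},
        "component": [
            {"code": {"text": "Systolic"},
             "valueQuantity": {"value": 150, "unit": "mmHg"}},
            {"code": {"text": "Diastolic"},
             "valueQuantity": {"value": 95, "unit": "mmHg"}},
        ],
    }

    # Any system that speaks the standard can pull the value without
    # parsing prose.
    systolic = observation["component"][0]["valueQuantity"]["value"]
    print(systolic)  # 150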
We had to correct them at the end of the consultation.
I wrote the email entirely myself, with absolutely no use of AI, but after I hit send I realised there was a pretty silly typo. Nothing grave, but it irked me.
Out of boredom, I decided to see whether my email would be flagged as AI-written, since it was probably going to pass through a million filters these days. I popped it into an online checker (I don't know the quality of these, so who knows) and it told me with 75% certainty it was written by AI.
It was not at all. It was written overly hastily on a phone on public transport. So I wonder how someone who is grammar-orientated and particular about semantics would prove otherwise.
I can see a company needing any excuse to let people go saying, "Well, the AI says you used AI to do your work, so we're letting you go."
This is just about not using free/public AI tools.
Heidi is frustratingly consistent at hallucinating stuff. I've seen it in almost all of the dozen or so summaries I've had from medical people recently (surgeon, physio, consultant). A GP I know tried for a month and then was like 'it's not worth the risk exposure to me or my patients'.
In fact, it's human transcribers who choose whether to forget the details of a case or whether to share the details of an especially funny patient with their buddies at the bar.
Enterprises are OK sharing their codebases with OpenAI. I think it should be OK for patients too.