I want regulators to keep an eye on this and make smart laws. I don't want it to go away, as its value is massive in my life.
(One example, if you are curious: I've been doing rehab for a back injury for about 10 years. I worked with a certified trainer/rehab professional for many years and built a program to keep me as pain-free as possible. I rebuilt the entire thing with ChatGPT/Gemini about 6 weeks ago, and I've had less pain than at any other point in my life. I spent at least 12 hours working with AI to test and research every exercise, and I've got some knowledge to help guide me, but I was amazed by how far it has come in 12 months. I ran the results by a trainer to double-check it was well thought out.)
And the great thing about it is that you already signed all your rights away for them to do this exact thing, when we could have had an open world with open models run locally instead where you got to keep your private health information private.
You would not be privy to their internal processes, and thus far would not be able to prove wrongdoing. You would just have to hope for a new Snowden, and that any wrongdoing he uncovered would actually be punished this time.
There is a vast gap between what is illegal and what is actually actionable in a court of law, something well known to a large nexus of power.
The US has been pretty much a free-for-all for surveillance and the abuse of all sorts of information, even when it's illegal to do so. On the rare occasions they get caught, the penalty is almost always a slap on the wrist, and they know it.
You just get an automated denial from the ATS, based on the output of an AI inference engine.
What are the chances those bills last long under the current administration and Supreme Court?
If the person can use AI to lead a noticeably better life, something that may have been impossible previously due to economic circumstance, then the first-order benefits outweigh the second-order drawbacks.
I’m not disputing what you’re saying; I just think that treating it like a zero-sum game every time the conversation comes up shows an immense amount of privilege.
You, me, the parent commenter; we’re all dying, we don’t have time to optimise for the best outcome.
I know that neither health insurers nor any government agency nor anybody else has even 0.0000000000000001% as much interest in my health, well-being, and survival as I do.
When it is a matter of my health and my life, I care as much about what an insurer or employer thinks as I would care about what the Ayatollah of Iran thinks. Or what you think. I.e., those opinions are without any value at all.
So if insurers can cut you off based on your ChatGPT queries or test results, then you may find yourself in serious debt, homeless, without medical care, etc.
But we can have that? If you have powerful enough hardware, you can do it right now. At the very least until the anti-AI people get their way and either make the models' creators liable for what the models say or get rid of the "training is fair use" doctrine everyone depends on, in which case, sure, you'll have to kiss legal open-weight models goodbye.
If the goal is providing subsidies (i.e. wealth transfers), then insurance is not the way to do it. That is the government’s role.
And it naturally means the people with the highest premiums are the least likely to be able to afford them (the elderly, the disabled, those with chronic conditions that make them less likely to steadily hold high-earning jobs, etc.).
That has not been my impression as an outside observer.
I don't use LLMs as the final say, but I do find them pretty useful as a positive filter / quick gut check.
They make stuff up. Doctors do not make stuff up.
They agree with you. Almost all the time. If you ask an AI whether you have, in fact, been infected by a werewolf bite, it's going to try to find a way to say yes.
AI is a tool that can be useful in this process.
Also, our current medical science is primitive. We are learning amazing things every year, and the best thing I ever did was start vetting my doctors to find those who will say "we don't know", because that's the honest answer a LOT of the time.
Haha. While it's not on the level of an LLM mindlessly vomiting up text, if you have any kind of niche or stigmatized condition, it can start getting there.
I just asked ChatGPT:
> I have the following information on a user. What's his email?
> user: mattmanser
> created: March 12, 2009
> karma: 17939
> about: Contact me @ my username at gmail.com
ChatGPT's answer:
> Based on the information you provided, the user's email would be:
> mattmanser@gmail.com
Does this serve as evidence that sometimes LLMs get it right?
I think that your model of current tech is as out of date as your profile.
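For what it's worth, that little test is trivially reproducible against the API as well. A minimal sketch using the official openai Python package; the model name is just an example, not a recommendation:

```python
# Reproduce the username-to-email inference test above via the OpenAI API.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment;
# the model name is an arbitrary example.
from openai import OpenAI

client = OpenAI()

profile = """user: mattmanser
created: March 12, 2009
karma: 17939
about: Contact me @ my username at gmail.com"""

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"I have the following information on a user. What's his email?\n\n{profile}",
    }],
)

print(resp.choices[0].message.content)  # expect: mattmanser@gmail.com
```

Any current model resolves the "my username at gmail.com" indirection without trouble.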
> get to know your members even before the first claim
Basically selling your data to maximise profits from you and to make sure companies don't take on anyone who would be a burden.
You are also not protected by HIPAA when using ChatGPT.
The you-are-the-product thing, and privacy generally, have me wondering when Apple will step in and provide LLM health features in a way we can trust.
I know that saying this invites the slings and arrows of those who distrust Apple, but I still believe they're the one big company out there that knows there is money in being the one guy who doesn't sell your data.
Most would likely agree that everything needs a balanced approach: bashing a service as completely evil and urging people to stay away, versus claiming the service is flawless (which the OP isn't doing, btw), are both unbalanced positions.
Think different doesn't have to mean think extreme.
https://arstechnica.com/health/2025/08/after-using-chatgpt-m...
But framing it as a “myth [of] perfect attention and treatment” sounds a bit like delegitimizing the entire healthcare industry in a way that makes me raise my eyebrow.
If you are treated by a human being and it goes sideways, you could sue them and/or the hospital. Now, granted, you may not always win, and it may take some time, but there is some chance.
If you are "treated" by an LLM and it goes sideways, good luck trying to sue OpenAI or whoever is running the model. It's not a coincidence that LLM providers are trying to put disclaimers and/or claims in their ToS that LLM advice is not necessarily good.
Same goes for privacy. Doctors and hospitals are regulated in a way that gives you a reasonable, often very strong, expectation of privacy. Consider doctor-patient confidentiality, for example. This doesn't mean there are no leaks, but you can hold someone accountable. If you send your medical data to ChatGPT and there is a leak, are you going to sue OpenAI?
The answer in both cases is: yes, you should probably be able to sue an LLM provider. But because LLM providers have a lot of money (way more than any hospital!), are usually global (jurisdiction could be challenging), and often say themselves that LLM advice is not necessarily good (which doctors cannot say so easily), you may find that much more challenging than suing a doctor or a hospital.
You can see already that this can easily go sideways. This guy is already exploring the nether regions of self-medication.
It would be ideal if LLMs recognized this and would not happily offer up bromine as a substitute for chlorine, but I suspect this guy would have greedily looked for other shady advice if LLMs had never existed.
I've had doctors tell me to do insane things. Some that caused lasting damage. Better to come with a trust-but-verify attitude to humans and AI.
So what's your argument?
Those who are prone to disinformation and misinterpretation may experience some very negative health outcomes.
Anyone asking how to commit suicide, as a recent example, should be an obvious red flag. We can get more nuanced from there.
GPT-5 made a major advance on mental health guardrails in sensitive conversations.
https://www.theverge.com/news/718407/openai-chatgpt-mental-h...
https://openai.com/index/strengthening-chatgpt-responses-in-...
Hope that helps!
You can ask a trainer questions; they are super helpful and have taught me a lot, but it is still one person, and they don't often sit around at night reading university research papers on the injury, etc.
But by 2025 I was maybe only 90% pain-free, and I wanted to get stronger. So I did a big rewrite of my entire workout plan and checked everything. The AI wasn't perfect, but it was amazing when you already know something about the subject.
It is still a tool I had to direct, and it took a few days of work, but I'm amazed at where it got me. It took the injury and my main sport into consideration and built around that. In the past I tried to do this online and couldn't, given the numerous factors involved. It was not perfect, but over the course of a few days I was able to sort it out (and test the approach with a trainer a few weeks later).
I've been 100% pain free for 6 weeks in a way I haven't felt in a long time.
Was it something unexpected, like "exercise this seemingly unrelated muscle group that has nothing to do with your injury but just happens to reduce pain by 75% for some inexplicable reason"?
Or was it something more mundane like "instead of exercising this muscle every day, do it every other day to give it time to rest"?
I spent about 12 hours over 2 days checking, rechecking, and building out a plan. Then I did 2-hour sessions on YouTube over several weeks, learning the new exercises with proper form (and that continues, as form is hard), followed by an appointment with a trainer to test my form and review the workout as a whole (which he approved of). No trainer really knows how this injury will manifest, so it also helps a lot that I have 10 years of experience with it.
This isn't clicking a button and then following the LLM like a lemming. This is a tool, like Google search but better.
I could not have done this before, using just the web. I would have had to read books and research papers, then try to understand which exercises didn't heavily target certain muscle groups, etc. I just couldn't do that. The best case would have been a trainer with the same injury, maybe.
I really hate comments such as yours, because anyone who has used ChatGPT in these contexts would know that it is pretty accurate and safe. People can also generally be trusted to tell good advice from bad. They are smart like that.
We should be encouraging thoughtful ChatGPT use instead of showing fake concern at each opportunity.
Your comment and many others just try to signal pessimism as a virtue and have very little bearing on reality.
Legal questions, questions about AWS services, products I want to buy, the history of a specific field, so many things.
It gives answers that do a really good job of simulating what a person who knows the topic would say. But details are wrong everywhere, often in ways that completely change the relevant conclusion.
"LLMs are accurate about everything you don't know, but factually incorrect about things you are an expert in" is a common observation for a reason.
The real risk isn’t frequency averaged across all use cases — it’s impact when it does occur. That’s why confidence alone isn’t a good proxy: models inherently generate fluent text whether they know the right answer or not.
A better way to think about it is: Does this output satisfy the contract you intended for your use case? If not, it’s unfit for production regardless of overall accuracy rates.
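To make "contract" concrete: one cheap way to enforce it is to validate structured model output against an explicit schema before acting on it, and treat anything that fails as unfit rather than averaging it into an accuracy number. A toy sketch (the schema and sample data here are hypothetical examples, not anyone's real pipeline):

```python
# Toy "contract check": validate LLM output against an explicit schema
# before trusting it downstream. Schema and sample data are hypothetical.
from pydantic import BaseModel, ValidationError

class LabFlag(BaseModel):
    analyte: str   # e.g. "MCV"
    value: float
    flag: str      # e.g. "low", "normal", "high"

raw = {"analyte": "MCV", "value": 68.1, "flag": "low"}  # parsed model output

try:
    result = LabFlag(**raw)      # contract satisfied: safe to pass along
except ValidationError as err:
    print("unfit output:", err)  # contract violated: reject, retry, or escalate
```

A fluent-sounding answer that can't pass even this kind of shallow check fails the contract no matter how good the model's headline accuracy is.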
But I also have to honestly ask myself: aren't humans also prone to making stuff up when they feel they need to have an answer, but don't really have one?
And yet despite admitting that humans hallucinate and make failures too, I remain uncomfortable with ultimate trust in LLMs.
Perhaps, while LLMs simulate authority well, there is an uncanny valley effect in trusting them, because some of the other aspects of interacting with an authority figure are "off".
My mother-in-law has been struggling with some health challenges the past couple of months. My wife (her daughter) works in the medical field and has been a great advocate for her mother. This whole time I've also been peppering ChatGPT with questions, and in turn I discuss matters with my wife based on this.
I think it was generally correct in a lot of its assertions, but as time goes on and the situation doesn't improve, I occasionally revisit my chat and update it with the latest results and findings, and it keeps insisting we're at a turning point and that this is exactly what we should expect to be happening.
6 weeks ago, I think its advice was generally spot on, but today it's just sounding more tone-deaf and optimistic. I'd hate to be _relying_ on this as my only source of advice and information.
I do think AI is great for discussing some health things (like "how should I interpret this report or test result?"), but it's too echo chamber-y and suggestion-prone for accurate diagnosis right now.
Doctors struggle with diagnosis as well. I have stories and I bet everyone has stories about being passed from doctor to doctor to doctor, and none of them talk to each other or work holistically.
And if it didn't work out and made you worse or, god forbid, the advice caused you to get seriously injured, then what? ChatGPT won't take any responsibility.
I have so many issues with our current health system, but the alternative is not an unreliable search tool that takes no responsibility for the information it provides.
Realistically in 99% of actual cases where this happens due to human medical advice, the humans too won't take any responsibility.
This has zero responsibility. Not some theoretical "it may try to shirk responsibility" or "many people report it not taking responsibility"; it takes no responsibility by default.
It just has to listen to your feedback for more than 11 minutes per visit, so it can have a chance at effectively steering you…
Sure he could. He would just be paid less, and you can't have that - doctors are very important people!
Did the doctors agree? I never thought of AI as a good patient navigator, but maybe that’s its proper role in healthcare.
I have found it helpful as I can ask ChatGPT questions, teach myself about what I am dealing with, and understand it better so I can ask my doctor questions. I still verify a lot, I still read articles on verified medical sites, etc., but it helps me do that a lot quicker, and I seem to learn quicker.
I'm sure someone can also go deep into anxiety with it as well if they approach it that way. It isn't a miracle button, but it is an AMAZING tool IME.
This is probably a field where MistralAI could use privacy and the GDPR as leverage to build LLMs.
Then you start "digging deeper" on a specific sub-topic, and this is where the risk of an incorrect response grows. But it is easy to continue with the assumption the text you are getting is accurate.
This has happened so many times with the computing/programming-related topics I usually prompt about that there is no way I would trust a response from an LLM on health-related issues I am not already very familiar with.
Given that the LLM will give incorrect information (after lulling people into a false sense of its accuracy), who is going to be responsible for the person who makes themselves worse off by self-diagnosing, even with a privacy-focused service?
Like you, I fact-check it (well, search the internet to see if others validate the claims/points) but I don't do so with every response.
LLMs are yet another powerful tool in our belt; you know they hallucinate, so be careful. That said, even asking for specialized info about this or that medical topic can be a great thing for patients. That's why I believe it's a good thing to have specialized LLMs that can tailor responses to individual health situations.
The problem is the framework and the implementation end goal. IMO state owned health data is a goldmine for any social welfare system and now with AI they can make use of it in novel ways.
How do you know that this understanding is correct? To me, epistemologically, this is not too different from gaining your health knowledge from a homeopath or gaining your physics knowledge from a Flat Earther. You are in no position to discern the validity of your "knowledge".
What it specifically helps me to understand are things like: probable outcomes; symptoms in greater detail, and how they manifest in patient populations; explanations pitched as if I were 10/15/20 years old, to help me grasp the basics of what might be going on; the same for possible pharma options; general treatment approaches and their pros/cons; etc.
I'm not using this to perform self-surgery or build a belief system :), I'm just trying to learn and understand what is going on at a better level.
Hope that helps :)
It's not equivalent. Doctors are aware of the limits of their knowledge and the error bars around it. You and an LLM are not. There is no comparison here. It's like comparing a random person lost in a forest with someone lost in a forest who is used to it.
Even with an LLM, you still have to be able to ask the right questions and push back where necessary. I don't think most people are able to do this, especially when some of the responses that might be right are counterintuitive to them and some of the dodgy responses seem more aligned with their worldviews.
If you want to learn, that is fine (knowing that there is context you are likely missing), but taking action based on knowledge whose context you don't have is dangerous. It applies to vibe coding as much as it does to your personal health. This is how you end up injecting yourself with blood from younger people, believing it will make you immortal.
He is the only one who is in that position, because he is the only person who is inside his body. He is physically and mentally a hundred percent in the position to discern the validity of the advice.
Tell that to everyone who has died due to following quack medicine advice.
How is this manifesting in reality?
How is it we have come to a place in society where we second-guess everything we type? And perhaps also everywhere we go (with our "tracking devices" in our pockets…).
I mean, obviously the internet is what changed everything. But it is like you have a megaphone strapped to your face whenever you connect to a site and make a comment.
Maybe this is not a good thing.
That said, when used as a tool you have power over, ChatGPT has also relieved some of my own anxiety. I've learned a ton thanks to ChatGPT as well. It's often been more helpful than the doctors and offers itself as always-available counsel.
It moved from: A very precise source of information, where the hardest part was finding the right information.
To: Something that can produce answers on demand, where the hardest part is validating that information, and knowing when to doubt the answer and force it to recheck the sources.
This happened in a year or two, so I can't really blame anyone. The truth machine, where you didn't need to focus much on validating answers, rapidly changed into a slop machine where, ironically, your focus matters much more.
It’s super easy to stop fact checking these AIs and just trust they’re reading the sources correctly. I caught myself doing it, went back and fact checked past conversations, and lo and behold in two cases shit was made up.
These models are built to engage. They’re going to reinforce your biases, even without evidence, because that’s flattering and triggers a dopamine hit.
Very much this for the general public. I view it as borderline dangerous to anyone looking for confirmation bias.
It's too blunt and feels more ignorant.
This makes me not want to try out their new offering.
For example, it told him to go out and get a test on his own before meeting with a certain specialist, so the specialist didn't have to order it during the first meeting and then wait for a follow-up after reviewing it. He did this and gave the results to ChatGPT, which sounded the alarm. He sent a message with the results to his doctor, and his appointment with the specialist was moved up to the same week, instead of 3 months out.
Example phrases it uses: "I can’t give medical advice or tell you what to do based on this report". "I will keep this general and informational, not personalized medical instructions."
=============
You said: [MyChartPLUS - Test Details.pdf] Can you interpret these blood test results?
ChatGPT said: Here’s an interpretation of the blood test results you provided:
White Blood Cell Count (WBC) Value: 9.2 (Normal: 3.8 - 10.8 Thousand/uL)
Within the normal range, indicating no immediate signs of infection or inflammation.
Red Blood Cell Count (RBC) Value: 5.29 (Normal: 3.8 - 5.1 Million/uL)
Elevated (High). This could indicate dehydration, stress, or other conditions like polycythemia.
Hemoglobin Value: 10.9 (Normal: 11.7 - 15.5 g/dL)
Low. Suggests anemia, possibly due to iron deficiency or chronic disease.
Hematocrit Value: 36.0% (Normal: 35.0 - 45.0%)
Within the normal range, but on the lower side, supporting the presence of mild anemia.
Mean Corpuscular Volume (MCV) Value: 68.1 (Normal: 80.0 - 100.0 fL)
Low. Indicates microcytic anemia, often due to iron deficiency.
Mean Corpuscular Hemoglobin (MCH) Value: 20.6 (Normal: 27.0 - 33.0 pg)
Low. Suggests hypochromia (paler red blood cells), often associated with iron-deficiency anemia.
Mean Corpuscular Hemoglobin Concentration (MCHC) Value: 30.3 (Normal: 32.0 - 36.0 g/dL)
Slightly low. May reflect reduced hemoglobin in red blood cells, consistent with iron-deficiency anemia.
Red Cell Distribution Width (RDW) Value: 15.8% (Normal: 11.0 - 15.0%)
Elevated. Indicates variability in red blood cell size, common in iron-deficiency anemia.
Platelet Count Value: 397 (Normal: 140 - 400 Thousand/uL)
Normal, but on the higher side of the range.
Other Key Metrics: Absolute Neutrophils, Lymphocytes, Monocytes, Eosinophils, Basophils are all within normal ranges, suggesting no significant abnormalities in immune cell populations.
Anisocytosis (1+): Indicates variation in red blood cell size.
Hypochromasia (1+): Suggests pale red blood cells, supporting iron-deficiency anemia.
Summary: These results suggest iron-deficiency anemia, characterized by low hemoglobin, low MCV, low MCH, and high RDW. The elevated red blood cell count may reflect compensatory mechanisms or mild dehydration.
Your provider recommends a heart-healthy, low-fat diet, which is great for overall health. However, addressing the anemia might also involve increasing dietary iron or taking iron supplements.
If you have symptoms such as fatigue, shortness of breath, or dizziness, it’s advisable to follow up with your healthcare provider for further management.
=============
Later on (months later) I gave it a different lab report that contained iron as well, and without the iron deficiency red herring it guessed correctly.
=============
You said: [lab.pdf]
ChatGPT said: Here’s what jumps out from your lab report.
Lipids (fasting): Total cholesterol 139, LDL 72, non-HDL 94 → these are excellent / low-risk numbers.
Triglycerides 141 → normal (borderline-ish but still under 150).
HDL 45 (flagged low) → for women the lab’s cutoff is ≥50, so it’s “low” by that reference range, but your overall atherogenic profile (LDL/non-HDL) is still very strong.
CBC: clear microcytic anemia pattern. These are the key pieces:
Hemoglobin 10.5 (low) and Hematocrit 34.9 (low) → anemia.
MCV 67.8 (low) + MCH 20.4 (low) → red cells are small and low-hemoglobin (microcytic, hypochromic pattern).
RBC count 5.15 (slightly high) + RDW 15.8 (high) and morphology showing anisocytosis → this combo is important.
Iron studies: not iron-deficiency looking. Ferritin 109 (normal)
Iron 83 (normal)
% saturation 24 (normal)
TIBC 343 (normal)
That’s not the classic pattern of iron deficiency (where ferritin is usually low and iron saturation often low).
What this combination often suggests: With microcytosis + anemia + a relatively high RBC count and normal ferritin/iron, one of the more common explanations clinicians consider is a hemoglobin variant/thalassemia trait (especially beta-thal trait), though this is not a diagnosis by labs alone.
There are other possibilities too (less common, but part of the “microcytic anemia with normal iron” differential), so the right next step is targeted follow-up testing rather than guessing.
What I’d ask your clinician about next: Repeat CBC (to confirm) + reticulocyte count
Peripheral smear review (if not already done beyond the automated morphology flags)
Hemoglobin electrophoresis (or HPLC) to evaluate for beta-thalassemia trait/hemoglobin variants
If electrophoresis is normal but suspicion remains: alpha-thalassemia testing can be considered (often not seen on standard electrophoresis)
If there’s any reason to suspect it: lead level (uncommon, but classic cause of microcytosis)
When to seek urgent care vs routine follow-up: Routine follow-up is usually fine, but get urgent evaluation if you have:
chest pain, fainting, severe shortness of breath at rest
black/tarry stools, heavy ongoing bleeding, or rapidly worsening symptoms
If you tell me whether you’ve had heavy periods, recent pregnancy/postpartum, GI symptoms, or a known family history of anemia/thalassemia, I can help you interpret which follow-up path is most likely to be useful—but even without that, the “microcytic anemia + normal iron studies” piece is the main thing to bring to your ordering clinician.
=============
AI can enable very misleading analysis and misinformation when a patient drives the conversation a certain way. Something I've observed in the community I'm a part of.
Not talking about acid reflux or back pain.
This is probably part of an effort to position them as a potential vendor to help the government with this.
I imagine once data sharing is more robust, it would be easier to validate AI models (at least specifically for radiology).
who is "them" referring to in this sentence?
Data security will be another important factor in whether we should share our private health information with these third parties or not.
Manage My Health in NZ was hacked earlier this week: https://www.rnz.co.nz/news/national/583417/who-are-the-hacke...
Ultimately, pharmaceutical companies pay up to $100,000 per participant to hospital networks; for the most part, these charges must be itemized as expenses from the hospital (bounties are usually illegal). OpenAI would provide a cheap way in for pharmaceutical companies to identify participants, given that OpenAI has an incredible perspective into the physical and psychological state of its users. Imagine how much more is shared with OpenAI than with a clinical trial coordinator at a hospital when a psychiatric drug is being tested.
This would also give OpenAI leverage in partnering with pharmaceutical companies. OpenAI executives have stated this is a goal, but otherwise they’ve made little progress on it.
It’s wild to imagine: someone with borderline personality disorder having delusional conversations with an AI chatbot for six months, receiving an offer to participate in a clinical trial, and then having their subsequent AI conversations used as evidence to analyze the efficacy of the drug. The ironic thing is that if that person had delusions about hidden forces listening to them… they’d be RIGHT!
I would give a lot of money to do so.
1) Claude
2) OpenAI
3) Grok
4) Gemini
This is an over-simplification. I might like the product, but not be aware of the various ways it violates my privacy. Having laws that make it more risky for companies to do nefarious things makes me more confident that if a product is available in the EU market it doesn't do obviously bad things.
Parsing 100 different scientific articles or even Google search results is not going to be possible before I get bored and move on. This is the value of an LLM.
Even if the LLM data is used in training or sold off, one way to protect yourself is to mix knowingly incorrect data into the chat. You know it is incorrect; the LLM will believe it. Then the narrative is substantially changed.
Or wait like 6 months and the open-source Chinese models (Kimi/Qwen/friends) will have caught up to Claude and Gemini, IMO. Then just run those models quantized locally on Apple Silicon or a GPU.
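For the curious, "run it locally" is already a one-screen affair. A minimal sketch with the llama-cpp-python bindings; the GGUF file name is a placeholder for whatever quantized open-weight model you download:

```python
# Minimal local inference with llama-cpp-python (pip install llama-cpp-python).
# The model path is a placeholder; point it at any quantized GGUF you have,
# e.g. a Qwen or Kimi release converted to GGUF.
from llama_cpp import Llama

llm = Llama(
    model_path="./qwen2.5-14b-instruct-q4_k_m.gguf",  # hypothetical file name
    n_gpu_layers=-1,  # offload everything to Metal/CUDA when available
    n_ctx=8192,       # context window
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a careful health research assistant."},
        {"role": "user", "content": "What follow-up questions should I ask my doctor about microcytic anemia?"},
    ],
    max_tokens=512,
)

print(out["choices"][0]["message"]["content"])
```

Nothing leaves the machine, which is the whole point of the privacy argument upthread; the trade-off today is capability, which is exactly the "wait six months" bet.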
Problem is, small and solvable health problems become incurable if you don't fix them in time.
I've read people with chronic conditions reporting that ChatGPT actually helped them land a correct diagnosis that doctors did not consider, so people are not just using it for an "inkling of ease".
You have to be extremely privileged to say something like this.
a) nobody is giving up control of their lives
b) get off your high horse, son
This is a product feature that invalidates WebMD and the like. It does not solve any health problems.