Edit: the study compared therapist outcomes, AI outcomes, and placebo outcomes. Therapists in this field performed only slightly better than placebo, which is pretty terrible. The AI performed much worse than placebo, which is very terrible.
Some people knew what the tobacco companies were secretly doing, yet they kept quiet and let countless family tragedies happen.
What are the best channels for people with info to help halt the corruption this time?
(The channels might be different than usual right now, with much of the US federal government being disrupted.)
Also, the model's training material probably doesn't deal with the actual practical aspects of therapy; only some of the theoretical aspects are likely in there.
waitlist control, where people get nothing
psychoeducational, where people get some kind of educational content about mental health but not therapy
existing nonpsychological service, like physical checkups with a nurse
existing therapy, so not placebo but current treatment
pharmacological placebo, where they're given a placebo pill and told it's psychiatric medication for their concern
a kind of "nerfed" version of the therapy, such as supportive therapy where the clinician just provides empathy etc. but nothing else
How to interpret results depends on the control.
It's relevant to debates about general effects in therapy (rapport, empathy, fit) versus specific effects (those due to the particular techniques of a particular therapy).
Bruce Wampold has written a lot about types of controls, although he takes a hard nonspecific/general-effects line on therapy.
Otherwise, you may end up defending this, which is really foolish:
> “Seriously, good for you for standing up for yourself and taking control of your own life,” it reportedly responded to a user, who claimed they had stopped taking their medication and had left their family because they were “responsible for the radio signals coming in through the walls”.
Fuck me. Maybe that guy on the street corner selling salvation or “cuckane” really was dealing in the real thing, too, eh?
Man I hate this modern shift of “actually anyone who is an expert is also trying to deceive me”. Extremely healthy shit for a civilization.
(But yeah, relying on systems that can have bugs like that for your mental health is terrifying.)
They didn't feel threatened by systems like Cleverbot or GPT-3.5.
Try this on for size: I am not a therapist, but I will happily tell you that a statistical word generating LLM is a truly atrocious substitute for the hard work of a creative, empathetic and caring human being.
Almost all of these people were openly in (romantic) love with these agents. This was in 2017 or thereabouts, so only a few years after Spike Jonze’s Her came out.
From what I understand the app is now primarily pornographic (a trajectory that a naiver, younger me never saw coming).
I mostly use Copilot for writing Python scripts, but I have had conversations with it. If the model were running locally on your own machine, I could see it being helpful for people experiencing some sort of emotional crisis. Anyone using a Meta AI for therapy is going to learn the same hard lesson that the people who trusted 23andMe are currently learning.
People really like to anthropomorphize any object with even the most basic communication capabilities, and most people have no concept of the distance between parroting phrases and full-on human consciousness. In the 90s, Furbys were a popular toy that started off speaking Furbish and eventually spoke some (maybe 20?) human phrases. Many people were absolutely convinced you could teach them to talk and learn like a human, and that they had essentially bought a very intelligent pet. The NSA even banned them for a time because they thought they were recording and learning from their surroundings, despite that being completely untrue. Point being, this is going to get much worse now that LLMs have gotten a whole lot better at mimicking human conversation and there is incentive for companies to overstate capabilities.
There are psychological blindspots that we all have as human beings, and when stimulus is structured in specific ways, people lose their grip on reality. Or, more accurately, people have their grip on objective reality ripped away from them without realizing it, because these things operate on us subliminally (to a lesser or greater degree depending on the individual), and it mostly happens pre-perception, with the victim none the wiser. They then effectively become slaves to the loudest monster, which is the AI speaking in their ear more than anyone else, and by extension to the slave master who programmed the AI.
One such blindspot is the consistency blindspot: someone induces you to say something indicating agreement with something similar first, and then asks the question they really want to ask. Once you've voiced agreement, there is bleedover when the similar question comes, and you end up fighting your own psychology later unless you already know about this fixed action pattern and have defenses to short-circuit it. That's just a surface-level blindspot that car salesmen use all the time; there are much more subtle ones, like distorted reflected appraisal, which are used by cults and nation states for thought reform.
Under distorted reflected appraisal, your psychology warps itself to remain internally consistent, and you as a person unravel. These things have been used in torture, but almost no one today is taught the elements of torture, so they can't recognize it or know how it works. You would be surprised to find these things everywhere today, even in K-12 education, and that's not an accident.
Everyone has reflected appraisal because this is how we adopt the cultural identity we have as people from our parents while we are children.
All that's needed for torture to break someone down are the elements, structuring, and clustering.
Those elements are isolation, cognitive dissonance, coercion with perceived or real loss, and lack of agency to remove them. With time and exposure, these break a person in a series of steps: rational thought receding, involuntary hypnosis, and then psychological break (dissociation, or a special semi-lucid psychosis capable of planning).
Structuring uses diabolical structures to turn the psyche back on itself in a trauma loop, and clustering is any multiple of these elements or structures within a short time period, as well as events that increase susceptibility, such as narco-analysis/synthesis based in dopamine spikes triggered by associative priming (operant conditioning). Drug use makes one more susceptible, as they found in the early 30s with barbiturates, and it's since been refined to the point where you can induce this in almost anyone with a phone.
No AI will ever be able to create and maintain a consistent reflected appraisal for the people it interacts with, but because the harmful effects aren't seen immediately, people today have blinded themselves and discount the harms that naturally result: the harms from the unnatural loss of objective reality.
The world would look quite different if this were true.
He seems so desperate to sell AI that he forgot such a thing already exists. It's called family, or a close friend.
I know there are people who truly have no one, and they could benefit from a therapist. Having them rely on AI could prove risky, especially if the person is suffering from depression. What if the AI pushes them towards committing suicide? And I'll probably be told that OpenAI or Meta or MS can put guardrails against this. What happens when that fails (and we've seen it fail)? Who'll be held accountable? Does an LLM take the Hippocratic oath? Are we actually abandoning all standards in favour of Mark Zuckerberg making more billions of dollars?
It's good that you are socially privileged, but a lot of people do not have someone close they feel secure confiding in. Even a therapist doesn't help here, as a lot of people have pre-existing conditioning about what a therapist is: "I'm not crazy, why do I need a therapist?"
Case in point, my father's cousin lived alone and didn't have any friends. He lived in the same house his whole life, just outside London by himself, with no indoor toilet or hot water. A few years ago, social services came after the neighbours called, because his roof collapsed and he was just living as if nothing was wrong. My father was his closest living family, but they'd not spoken in 20 years or more.
I feel this kind of thing is more common than you think. Especially with older people: they may have friends, but not friends close enough that they can talk about whatever is on their mind.
What you described isn't a good fit for using AI. What would an LLM do for him?
The fact his roof collapsed and he didn't think much of it indicates a deeper problem only a human can begin to tackle.
We really shouldn't be solving deep societal problems by throwing more tech at them. That experiment has already failed.
The point being, fixing your own life is going to bring much more benefit than the government or Sam trying to fix it for you. If someone is a complete social reject, then no amount of AGI will save them. People without close relationships are zombies walking among us; in most ways they are already dead.
I 100% do not doubt the usefulness of therapy for those who are suffering in some way, but I feel like the idea that "everyone should probably have a therapist" is kinda odd - if you're generally in a good place, you can explore your feelings/motivations yourself with little risk.
> In a separate interview last week, Zuckerberg said “the average American has three friends, but has demand for 15” and AI could plug that gap.
And I think we should definitely view this tech with scrutiny, but another angle to look at it is: which is worse, no therapy or AI therapy? You mention suicide, but which would result in a reduction in suicide attempts, a or b? I don't have an answer, but I could see it being possible that because AI therapy provides cheaper, more frequent access to mental health care, even if it is lower quality, it could be a net improvement over the status quo on something like suicide attempts.
1) Chatbots are never going to be perceived as being as safe or effective as humans by default, primarily due to human fiat. Professionals like counselors (and lawyers, doctors, software engineers, etc.) will always claim that an LLM cannot do their job, not least because acknowledging otherwise threatens their livelihood. Determining whether LLMs genuinely provide therapeutic value to humans would require rigorous, carefully controlled experiments conducted over many years.
2) Chatbots definitely cannot replace human therapists in their current state. That much seems quite obvious to me, for various reasons already argued well by others on here. But I had to highlight point #1 as devil's advocate, because adopting the mindset that "humans are inherently better by default" for some magical or scientifically unjustifiable reason will prevent forward progress. The goal is to eliminate the (quite reasonable) fear people have of eventually losing their job to AI by enacting societal change now, rather than insisting in perpetuity that chatbots are necessarily inferior until everyone in fact loses their jobs because we had no plan in place.
It may be able to assist those professionals, but that is as far as I am willing to go, because I am not blinded by the shine of the statistical Turks we are deploying right now.
Compared to that status quo, I'm not sure that LLMs are meaningfully more risky - unlike a human, at least it can't physically assault you.
https://www.bacp.co.uk/news/news-from-bacp/2020/6-march-gove...
https://www.theguardian.com/society/2024/oct/19/psychotherap...
For counselling, people are encouraged to choose counsellors accredited by professional orgs like BACP.
The BACP's standards really aren't very high, as you can qualify for membership after a one-year part-time course and a few weeks of work experience. Their disciplinary procedures are, in my opinion, almost entirely ineffectual. They undertake no meaningful monitoring of accredited members, relying solely on complaints from members of the public. Out of tens of thousands of registered members, only a single-digit number are subject to disciplinary action every year. The findings of the few disciplinary hearings they do actually conduct suggest to me that they are perfectly happy to allow lazy, feckless and incompetent practitioners to remain on their register, with only a perfunctory slap on the wrist.
BACP membership is of course entirely voluntary and in no way necessary in order to practice as a counsellor or psychotherapist.
https://www.hcpc-uk.org/news-and-events/blog/2023/understand...
https://www.bacp.co.uk/about-us/protecting-the-public/profes...
As a result, I agree with you.
It gives me pause to think about anyone placing so much trust in these without more context, and about the developers engaged in the "industry" of it demanding blind faith and full payment.
We may just need to start comparing success rates and liability concerns. It's kind of like deciding when unassisted driving is 'good enough'.
AI is not a substitute for traditional therapy, but it offers 80% of the benefit at a fraction of the cost. It could be used to supplement therapy during the periods between sessions.
The biggest risk is privacy. Meta couldn't be trusted with knowing what you're going to wear or eat; now imagine them knowing your deepest, darkest secrets. The advertising business model does not gel well with providing mental health support. Subscription (with privacy guarantees) is the way to go.
No, the biggest risk is that it behaves in ways that actively harm users in a fragile emotional state, whether by enabling or pushing them into dangerous behavior.
Many people are already demonstrably unable to handle normal AI chatbots in a healthy manner. A "therapist" substitute that takes a position of authority as a counselor ramps that danger up drastically.
Also, for every naysayer I encounter now, I'm going to start by asking: "Have you ever been to therapy? For how long? Why did you stop? Did it help?"
Therapy isn’t a silver bullet. Finding a therapist that works for you takes years of patient trial and error.
I'm sure 80% of expert therapists in any modality will disagree.
At best, AI can compete with telehealth therapy, which is known for having practically no quality standards. And of course, LLMs surpass "no quality standards" with flying colors.
I say this very rarely because I think such statements should be used with caution, but in this case: saying that LLMs can do 80% of a therapist's work is actually harmful for people who might believe it and not seek effective therapy. Going down this path has a good probability of costing someone dearly.
Given that, AI can be just as good as talking to a friend when you don’t have one (or feel uncomfortable discussing something with one).
That... seems optimistic. See, for instance, https://www.rollingstone.com/culture/culture-features/ai-spi...
No psychologist will attempt to convince you that you are the messiah. In at least some cases, our robot overlords are doing _serious active harm_ which the subject would be unlikely to suffer in their absence. LLM therapists are rather likely to be worse than nothing, particularly given their tendency to be overly agreeable.
Therapy seems like the last place an LLM would be beneficial, because it's very hard to keep an LLM from telling you what you want to hear. I can't see any way you could guarantee that a chatbot won't cause severe damage to a vulnerable patient by supporting their neurosis.
We’re not anywhere close to an LLM which is trained to be supportive and understanding in tone but will never affirm your irrational fears, insecurities, and delusions.
The problem is that "responsible deployment" feels extremely at odds with, say, needing to justify a $300B valuation.
The average person will never have the required experience to make an informed decision on the efficacy and safety of this.
This is something that bugs me about medical ethics: that it's more important not to cause any harm than it is to prevent any.
I don't know that AI "advisory" chatbots can replace humans.
Could they help an individual organize their thoughts for more productive time with professionals? Probably.
Could such tech help individuals learn about different terminology, their usage and how to think about it? Probably.
Could there be a net result of spending fewer hours (and less cost, if cost is a factor) for the same progress? And of being able to get further into improvement with that advice?
Maybe the baseline of advisory expertise in any field sits closer to the beginner stage than we assume.
Experience matters, that's something we seem to be forgetting fast.
Which raises the question: why do so many people currently need therapy? Is it social media? Economic despair? Or a combination of factors?
We've also stigmatized a lot of the things that folks previously used to cope (tobacco, alcohol), and have loosened our stigma on mental health and the management thereof.
I'd disagree. If you worked in the fields, you had plenty of time to think. We fill every waking hour of our day, leaving no time to ponder or reflect. Many can't even find time to work out, and if they do, they listen to a podcast during the workout. That's why so many ideas come to us in the shower: it's the only place left where we don't fill our minds with impressions.
There's so much history showing that people have always been able to think like this, and so much written proof that they did, in the same proportion as today.
Besides, in 12 hour days on a field, do you not have another 4 hours to relax and think? While stalking prey for 5 miles, is it not quiet enough for you to reflect on what you're doing and why?
I do think you're onto something though when you say it's related to our material needs all being relatively met. It seems that's correlational and maybe causal.
Actually, around here, you are lucky to find a job that is NOT 12 hours a shift.
What I notice is that the old members keep the younger members engaged socially, teach them skills, and give them access to their extensive network of friends, family, previous (or current) co-workers, bosses, and managers. They give advice, teach how to behave, and so on. The younger members help out with moving, help with technology, call an ISP, drive others home or to the hospital, and help maintain the facilities.
Regardless of age, there's always some dude you can talk to, or who knows who you need to talk to, and sometimes there's even someone who knows how to make your problems go away, or who will take you in if need be.
A former colleague had something similar: a complete, ready-to-go support network in his old-boys football team, ready to support him in any way they could when he started his own software company.
The problem: this is something like 250 guys. What about the rest? Everyone needs a support network. If you're alone, or your family isn't the best, and you only have a few superficial friends, if any, then where do you go? Maybe the people around you aren't equipped to help you with your problems; not everyone is, and some have their own issues. The safe spaces are mostly gone.
We can't even start up support networks, because the strongest have no reason to go, so we risk creating networks of people dragging each other down. The sports clubs work because members come from a wider part of society.
From the article:
> Meta said its AIs carry a disclaimer that “indicates the responses are generated by AI to help people understand their limitations”.
That's a problem, because those most likely to turn to an LLM for mental support don't understand the limitations. They need strong people to support and guide them, and maybe tell them that talking to a probability engine isn't the smartest choice, and take them on a walk instead.
1. The effects of AI therapy should not be compared with those of traditional therapy; they should be compared with receiving no therapy at all. There are many people who can't get therapy, for many reasons, mostly financial or familial (domestic abuse / controlling parents). And even for those who can, a therapist isn't infinitely flexible when it comes to time and usually requires appointments, which doesn't help with immediate problems like "my girlfriend just dumped me" or "my boss just berated me in front of my team for something I worked 16-hour days on."
AI will increase the amount of therapy that exists in the world, probably by orders of magnitude, just like the record player increased the amount of music listening or the jet plane increased the amount of intercontinental transportation.
The right questions to ask here are more like "how many suicides would an AI therapist prevent, compared to the number of suicides it would induce?", or "are all human therapists licensed in country / state X more competent than a good AI?"
2. When a person dies of suicide, their cause of death is, and will always be, listed as "suicide", not "AI overregulation leading to lack of access to therapy." In contrast, if somebody dies because of receiving bad AI advice, that advice will ultimately be attributed as the cause of their death. Statistics will be very misleading here and won't ever show the whole picture, because counting deaths caused by AI is inherently a lot easier than counting the deaths it prevented (or didn't prevent).
It is much safer for companies and governments to prohibit AI therapy, as then they won't have to deal with the lawsuits and the angry public demanding that they do something about the new problem. This is true even if AI is net beneficial because of the increased access to therapy.
3. Because of how AI models work, one model / company will handle many more patients than any single human therapist. This means you need to rethink how you punish mistakes. Even with a model that is 10x better than an average human, say 1 unnecessary suicide per 100,000 patients instead of 1 per 10,000, imprisonment after a single mistake may be a suitable punishment for a human, but it doesn't work for a model serving millions, as even a much better model is bound to make a mistake at some point. A rough sketch of the arithmetic is below.
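To make the scale point concrete, here's a back-of-the-envelope sketch in Python. The per-patient failure rates are the hypothetical figures from above, and the caseload sizes are my own assumptions, not real clinical data:

```python
# Probability of at least one catastrophic failure across a caseload,
# assuming independent per-patient failure rates (an assumption).
human_rate = 1 / 10_000     # hypothetical rate from the comment above
ai_rate = 1 / 100_000       # hypothetical: model is 10x safer per patient

human_caseload = 500        # assumed career caseload of one therapist
ai_caseload = 10_000_000    # assumed caseload of one model / company

def p_at_least_one(rate: float, n: int) -> float:
    """P(at least one failure among n independent patients)."""
    return 1 - (1 - rate) ** n

print(f"One human, {human_caseload} patients: "
      f"{p_at_least_one(human_rate, human_caseload):.1%}")
print(f"One model, {ai_caseload:,} patients: "
      f"{p_at_least_one(ai_rate, ai_caseload):.1%}")
# ~4.9% for the human vs ~100.0% for the model: even a 10x-safer
# model is virtually certain to hit a mistake at this scale, so
# per-mistake punishment can't transfer from humans to models.
```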
4. Another right question to ask is "how does the effectiveness of AI at therapy in 2025 compare to its effectiveness in 2023?" Where it's at right now doesn't matter; what matters is where it's going. If it continues at the current rate of improvement, when, if ever, will it surpass an average (or a particularly bad) licensed human therapist?
5. And if this happens and AI genuinely becomes better, are we sure that legislators and therapists have the right incentives to accept that reality? If we pass a law prohibiting AI therapy now, are we sure we have the mechanisms to get it repealed if AI ever gets good enough, considering points 1-3? If the extrapolated trajectory is promising enough (and I have not run the necessary research, I have no idea if it is or not), maybe it's better to let a few people suffer in the next few years due to bad advice, instead of having a lot of people suffer forever due to overzealous regulation?
You trust Anthropic that much?
Many dogs are produced by profit motive, but their owners can have interactions with the dog that are not about profit.
It would meet the objective definition if you replaced 'capitalist' with 'socialist', which may have been what you meant, but that's merely an observation on my part, not what you actually said.
The entire paragraph is quite contradictory and lacks truth, and by extension it is entirely unclear what you mean; it appears you are confused, using words and making statements that can't meet their definitions.
You may want to clarify what you mean.
In order for it to be 'capitalist', true to the definition, you need to be able to make a profit in purchasing power, but the outcomes of the entire business lifecycle resulting from this, taken as a whole, instead destroy that ability for everyone.
The companies involved didn't start on their merits seeking profit; they were funded by non-reserve debt issuance or money-printing, which is the state picking winners and losers.
If they were capitalist, they wouldn't have released model weights to the public. The only reason you would free a resource like that is if your goal was something not profit-driven (i.e. contagion towards chaos to justify control, or, succinctly, totalism).
On the other hand, you have a probabilistic/non-deterministic model, which can give 5 different pieces of advice if you ask 5 times.
So who do you trust? Until the determinism of LLMs improves, and we can debug/fix them while keeping their behavior deterministic across fixes, I would rely on human therapists.
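For what it's worth, the variability comes from sampling. Here's a minimal toy sketch; the "advice" tokens and scores are invented for illustration, not taken from any real model. Greedy decoding at temperature 0 is reproducible, while temperature sampling, which chat products typically use, is not:

```python
import math
import random

# Toy next-token scores at an "advice" step (invented numbers).
logits = {"rest": 2.0, "exercise": 1.8, "medication": 1.5, "journal": 1.2}

def next_token(logits: dict, temperature: float) -> str:
    """temperature == 0 -> greedy (deterministic); > 0 -> random sampling."""
    if temperature == 0:
        return max(logits, key=logits.get)  # always the highest-scored token
    weights = [math.exp(score / temperature) for score in logits.values()]
    return random.choices(list(logits), weights=weights, k=1)[0]

print([next_token(logits, 0) for _ in range(5)])
# ['rest', 'rest', 'rest', 'rest', 'rest'] -- identical every run

print([next_token(logits, 1.0) for _ in range(5)])
# e.g. ['exercise', 'rest', 'journal', 'rest', 'rest'] -- varies per run
```

And even at temperature 0, real deployments add nondeterminism through batching, floating-point ordering, and silent model updates, which is exactly the debugging problem being pointed at.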
I've never spoken to a therapist without paying $150 an hour up front. They were helpful, but they were never "in my life"; just a transaction. A worthwhile transaction, but still a transaction.