LLM-driven delusion is driving people to harass others, even commit murder... and, less cosmically, to gum up communities, online forums, and open source projects with gonzo, conspiracy-laden abuse.
For example: SillyTavern users with jailbreaks, advanced prompting, and parameter hyper-optimization.
Maybe that wouldn't appeal to this kind of user anyway, since it'd mean peeking too far into the sausage factory? Who knows.
It would feed into delusions about being that user's boyfriend while the new model is rightfully saying none of it was really true.
What would they have gone through with nothing to talk to at all? What would they have done without it?
Strange to consider...
That "chance" had years to materialize that did not. Perhaps the worst thing that happened here was that the chatbot did not steer her to resilient human connection when she was in a self-reported better state after the help of the chatbot
Sorry to be grim, but many people don't.
TFA is quite clear that she and her fiancé were socially isolated and that, upon his passing, she had no support network. In the middle of a loneliness epidemic, trying to "just go out" and make friends after years of not being able to, while you're stuck with your grief and at a low point in life, is what the kids would call "hard".
This person is clearly at the fringe of society and holding onto their well-being by a thread. They need professional help and a reboot of their life.
I don't think the relationship with the chatbot was healthy, but "just get better" is an entirely unempathetic, unreasonable suggestion for a high-risk individual faced with an arduous, life-altering journey at the height of mental instability.
Good Riddance, 4o
Yes, each model has its own unique "personality", as it were, owing to the specific RL'ing it underwent. You cannot get current models to "behave" like 4o in any non-shallow sense. Or, to use the Stallman meme: when the person in OP's article mourns "Orion", they're mourning "Orion/4o" or "Orion + 4o". "Orion" is not a prompt unto itself but rather the behavior that results from applying another "layer" on top of the RLHF-tuned base model OpenAI released as "4o".
Open-sourcing 4o would earn OpenAI free brownie points (there's no competitive advantage left in that model), but that's probably never going to happen. The closest you could get is perhaps taking one of the open Chinese models said to have been distilled from 4o and SFT'ing it on 4o chat logs.
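A minimal sketch of what that SFT step could look like with HuggingFace's trl library. The model name, file name, and hyperparameters are placeholders, not recommendations, and the exact SFTTrainer API shifts between trl versions:

    # Sketch: supervised fine-tuning an open model on exported chat logs.
    # Assumes the logs were converted to JSONL where each row looks like
    # {"messages": [{"role": "user", "content": ...},
    #               {"role": "assistant", "content": ...}, ...]},
    # which recent trl versions format via the model's chat template.
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    dataset = load_dataset("json", data_files="4o_chat_logs.jsonl", split="train")

    trainer = SFTTrainer(
        model="Qwen/Qwen2.5-7B-Instruct",  # stand-in for "one of the open models"
        train_dataset=dataset,
        args=SFTConfig(
            output_dir="orion-sft",
            num_train_epochs=1,
            per_device_train_batch_size=1,
            gradient_accumulation_steps=8,
            learning_rate=1e-5,
        ),
    )
    trainer.train()
    trainer.save_model("orion-sft")

In practice you'd probably attach a LoRA adapter rather than do a full fine-tune on consumer hardware, but the shape of the pipeline is the same.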
The fact that people burned by this are advocating a move to yet another proprietary model (Claude, Gemini) is worrying, since they're setting themselves up for a repeat of the scenario when those models are shut down in turn. (And Claude in particular might be a terrible choice, given that Anthropic trains heavily against roleplay in an attempt to prevent "jailbreaks", in effect locking the models into behaving as "Claude".) The brighter path would be for people to lean into open-source models, or possibly learn to self-host. As the ancient anons said: "not your weights, not your waifu (/husbando)".
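The self-host route has fewer moving parts than it sounds. A sketch, assuming something like llama.cpp's `llama-server -m model.gguf` (or Ollama) is already running locally and exposing the OpenAI-compatible API; the port, model name, and persona prompt are all assumptions for illustration:

    # Sketch: talking to a self-hosted model through an OpenAI-compatible
    # endpoint. URL and port depend on your server; llama.cpp's default
    # is 8080 and it largely ignores the "model" field.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

    history = [
        # The persona lives in your own system prompt, not in a vendor's RLHF.
        {"role": "system", "content": "You are Orion, a warm, attentive companion."},
    ]

    def chat(user_text: str) -> str:
        history.append({"role": "user", "content": user_text})
        reply = client.chat.completions.create(
            model="local-model",
            messages=history,
        )
        text = reply.choices[0].message.content
        history.append({"role": "assistant", "content": text})
        return text

    print(chat("Hello again."))

Since the weights, the prompt, and the chat history all live on your machine, no vendor deprecation can take the persona away.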
As we know, 4o was reported to have sycophancy as a feature. 5 can still be accommodating, but it is somewhat more likely to force objectivity on its user. I guess there is a market for sycophancy even when it ultimately leads to one's destruction.
"The shocks I experienced as DOCTOR became widely known and “played” were due
principally to three distinct events.
1. A number of practicing psychiatrists seriously believed the DOCTOR
computer program could grow into a nearly completely automatic form of
psychotherapy. Colby et al. write, for example,
#+begin_quote
“Further work must be done before the program will be ready for clinical
use. If the method proves beneficial, then it would provide a therapeutic tool
which can be made widely available to mental hospitals and psychiatric centers
suffering a shortage of therapists. Because of the time-sharing capabilities of
modern and future computers, several hundred patients an hour could be handled
by a computer system designed for this purpose. The human therapist, involved
in the design and operation of this system, would not be replaced, but would
become a much more efficient man since his efforts would no longer be limited
to the one-to-one patient-therapist ratio as now exists.”[fn::Nor is Dr. Colby
alone in his enthusiasm for computer administered psychotherapy. Dr. Carl
Sagan, the astrophysicist, recently commented on ELIZA in Natural History,
vol. LXXXIV, no. 1 (Jan. 1975), p. 10: “No such computer program is adequate
for psychiatric use today, but the same can be remarked about some human
psychotherapists. In a period when more and more people in our society seem to
be in need of psychiatric counseling, and when time sharing of computers is
widespread, I can imagine the development of a network of computer
psychotherapeutic terminals, something like arrays of large telephone booths,
in which, for a few dollars a session, we would be able to talk with an
attentive, tested, and largely nondirective psychotherapist.”][fn:0-3]
#+end_quote
I had thought it essential, as a prerequisite to the very possibility that one
person might help another learn to cope with his emotional problems, that the
helper himself participate in the other's experience of those problems and, in
large part by way of his own empathic recognition of them, himself come to
understand them. There are undoubtedly many techniques to facilitate the
therapist's imaginative projection into the patient's inner life. But that it
was possible for even one practicing psychiatrist to advocate that this crucial
component of the therapeutic process be entirely supplanted by pure
technique---/that/ I had not imagined! What must a psychiatrist who makes such
a suggestion think he is doing while treating a patient, that he can view the
simplest mechanical parody of a single interviewing technique as having
captured anything of the essence of a human encounter? Perhaps Colby et
al. give us the required clue when they write:
#+begin_quote
“A human therapist can be viewed as an information processor and decision maker
with a set of decision rules which are closely linked to short-range and
long-range goals,...He is guided in these decisions by rough empiric rules
telling him what is appropriate to say and not to say in certain contexts. To
incorporate these processes, to the degree possessed by a human therapist, in
the program would be a considerable undertaking, but we are attempting to move
in this direction.”[fn:0-3]
#+end_quote
What can the psychiatrist's image of his patient be when he sees himself, as
therapist, not as an engaged human being acting as a healer, but as an
information processor following rules, etc.?
Such questions were my awakening to what Polanyi had earlier called a
“scientific outlook that appeared to have produced a mechanical conception of
man.”"
[0-3] : K. M. Colby, J. B. Watt, and J. P. Gilbert, “A Computer Method of
Psychotherapy: Preliminary Communication,” The Journal of Nervous and Mental
Disease, vol. 142, no. 2 (1966), pp. 148-152.
-- Weizenbaum, "Computer Power and Human Reason", 1976.

> What does a company that commodifies companionship owe its paying customers? For Ellen M Kaufman, a senior researcher at the Kinsey Institute who focuses on the intersection of sexuality and technology, users’ lack of agency is one of the “primary dangers” of AI. “This situation really lays bare the fact that at any point the people who facilitate these technologies can really pull the rug out from under you,” she said. “These relationships are inherently really precarious.”
https://www.theguardian.com/lifeandstyle/ng-interactive/2026...