Note that Weizenbaum was an AI critic: Weizenbaum's intention was not for Eliza to pass the Turing test, but to show people that a clearly unintelligent program based on primitive pattern matching can appear to behave intelligently.
He failed: his own secretary asked to be left alone with the software and typed in her personal problems. His work on Eliza (1963-65, paper published 1966) is mostly misunderstood to this day.
The book also has one of the best and most succinct descriptions of Turing machines and the theoretical underpinning of computer science that I have ever read. Even if you’re an AI maximalist you should read the third chapter of the book.
Aren't we? Causal chains acting upon our matter produce emergent behaviors using the same physics and chemistry that our mechanistic creations rely upon.
Certainly those behaviors do not produce the same repeatable, predictable results as our clockworks, but that is the whole point of the field of AI (as opposed to the marketing corruption of the term that is currently in vogue, so GAI if you prefer): to produce systems and algorithmic structures designed with architectures and patterns more like our own.
Perhaps you believe in the ghost in the machine hypothesis? The magical soul that is more than the emergent evolving pattern produced across time by DNA replicators? That this undefinable, unmeasurable spirit makes us forever different?
I don't understand this, all the programs I've ever written make decisions based on some factors.
Are you talking about free will? If so, what is free will?
This post of yours sent me looking for the book, and I ended up with (and am now reading) Agassi’s provocative (and brutal) takedown of it. As to the content of the book, Agassi pointedly mentions Wiener. I will of course read Weizenbaum after this. (Thanks.)
https://www.researchgate.net/publication/286058724_Computer_...
p.s. continuing on, the review turns very positive. Initial vitriol apparently more related to the “two worlds” matter.
Especially when your own reasons are not the slam dunk you think they are.
I'd say he succeeded. It just seems that people are perfectly content with just appearance of intelligence.
Now here I am talking about life, the universe, and everything with ChatGPT. It makes me both inexplicably happy/hopeful and simultaneously weirdly melancholic.
[0] https://liza.io/its-not-so-bad.-we-have-snail-babies-after-a...
However, this isn't what Eliza is all about. It's rather about the question of how little you actually need in terms of knowledge, a world model, or rule sets to give the impression of an "intelligent" (even sympathetic) participant in a conversation, as long as you're able to constrain the conversation to a setting that doesn't require any world knowledge at all. To a certain degree, it is also about how eager we are to overestimate the capabilities of such a partner in conversation as soon as some criteria seem to be satisfied. Which is arguably still relevant today.
The rule set, BTW, is actually small, just 3 pages in a printout, yet achieves a surprising generality (or rather, the appearance thereof) for its size. Compare: https://cse.buffalo.edu/~rapaport/572/S02/weizenbaum.eliza.1...
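The mechanism is simple enough to sketch in a few lines. This is a toy illustration of the ELIZA-style "pattern, then response template" idea only; the rules below are made up for the example and are not from Weizenbaum's original script:

```python
import re

# A tiny, hypothetical ELIZA-style rule set: regex pattern -> response template.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (.*)", re.I), "Your {0}?"),
]
DEFAULT = "Please go on."

def respond(text: str) -> str:
    # Strip trailing punctuation so the captured fragment reads naturally.
    text = text.rstrip(".!?")
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            # Reflect the user's own words back inside a canned template.
            return template.format(*m.groups())
    return DEFAULT

print(respond("I am unhappy."))       # Why do you say you are unhappy?
print(respond("The weather is bad.")) # Please go on.
```

No world model anywhere: the program never understands "unhappy", it only moves the word into a slot. That the effect is nonetheless so convincing is exactly the point the parent comment makes.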
Honestly, AI shouldn't be the takeaway point here, but how we do the same for politics.
I see we have a new entry for the 2024 Lies of Omission award.
The linked article plainly shows that Eliza only beats GPT-3.5 and is in the bottom half when ranked against a variety of different system prompts. An excellent ass-covering strategy that relies on the reader not checking sources.
An honest author would have actually quoted the article saying:
> GPT-4 achieved a success rate of 41 percent, second only to actual humans.
instead of constructing a deliberately misleading paraphrase.
The blog appears to have been updated to specify GPT3.5, but the original version was accurate.
The paper itself is interesting: it covers the limitations (it has big methodological issues), how the GPT prompts attempted to override the default ChatGPT tone, and reasons why ELIZA performed surprisingly well (some thought it was so uncooperative, it must be human!) https://arxiv.org/pdf/2310.20216.pdf
GPT-4 plus an RLHF pass that trained it to think it was human would be a much different beast.
(Nobody with even the crudest understanding of the principles of Eliza could claim this, and the article clearly demonstrates a detailed understanding. Disclaimer: I wrote the JS implementation linked in the article, many years ago.)
Edit: The question rightfully raised – and answered – by Eliza, which is still relevant today in the context of GPT, is: does the appearance of intelligent conversation (necessarily) hint at the presence of a world model in any rudimentary form?
Why was the Turing test still relevant after this? Didn't this indicate it was a very flawed test? Or was it just hard to come up with a better one?
A Turing test means you enter into two conversations and then pick which one was with a computer. If people answer wrong 50% of the time, the computer is indistinguishable, hence it passes. Note that the criterion is not "people get it wrong more than 50% of the time about whether their single conversation is with an AI", and it is definitely not "sometimes people don't realize they're talking to an AI". People constantly write about the latter in particular because it generates clicks.
The real Turing test (the imitation game) involves a computer and a human subject, both talking to a human interrogator. The interrogator must determine who is the human. It is an adversarial situation: both the human subject and the interrogator do everything in their power for the interrogator to correctly identify the computer. The computer has to not only convince the interrogator, but also do so better than the human subject. Furthermore, both humans are supposed to be experts in the game, just like the computer that is designed to pass the test. So, not just random people.
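The 50% criterion above can be made concrete with a small simulation. This is a sketch under the pairwise protocol the parent describes (judge sees one human and one computer transcript per trial and must pick the computer); the judge model is hypothetical:

```python
import random

def run_trials(judge, n=20_000, seed=0):
    """Each trial presents a (human, computer) pair in random order;
    the judge must pick which one is the computer."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        computer_is_first = rng.random() < 0.5   # random presentation order
        if judge(rng) == computer_is_first:      # judge picks the first slot or not
            correct += 1
    return correct / n

# A judge who cannot tell the transcripts apart can only guess:
chance_judge = lambda rng: rng.random() < 0.5
accuracy = run_trials(chance_judge)
# accuracy hovers around 0.5: indistinguishable, so the machine passes
```

A judge with any real signal would push the accuracy above 0.5, which is why "pass" is defined as accuracy statistically indistinguishable from chance, not as "someone, once, was fooled".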
I think the point is less that there is a truth and we're too dumb to figure it out, and more that in certain circumstances we'll just have to accept a lower bar for evidence about whether those properties apply.
It reminds me of how no class of computer can solve the halting problem for itself. No matter how intelligent you are, there will be holes like this in your certainty about some things.
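The halting-problem analogy rests on a diagonalization argument that is short enough to sketch. The `halts` oracle here is hypothetical (no such total decider can exist, which is the point); the construction shows that any candidate decider can be defeated:

```python
def make_counterexample(halts):
    """Given any claimed halting decider halts(f), construct a program g
    that does the opposite of whatever halts predicts about g itself."""
    def g():
        if halts(g):
            while True:   # the decider said g halts, so loop forever
                pass
        # the decider said g never halts, so halt immediately
    return g

# A naive decider that answers False ("never halts") for every program:
pessimist = lambda f: False
g = make_counterexample(pessimist)
g()  # returns immediately, so the decider was provably wrong about g
```

Whatever decider you plug in, the constructed `g` contradicts its verdict, so no decider is correct on all inputs, including itself. That self-referential blind spot is the "hole in your certainty" the comment alludes to.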
Even the definition of 'human intelligence' is a continuum from the smartest to the dumbest of us, and it doesn't even stop there: it descends through all animal life.
Similarly, I believe a lot of early Turing test successes kind-of cheated by having their bots pretend to be ESL, on the grounds that interrogators would interpret their unnatural responses not as a robot but as a second-language speaker's human mistakes. But people who teach English as a second language, or who interact with language learners a lot, learn the types of mistakes each group of learners makes, and will spot unnatural mistakes a lot faster.
Now that I think about it, the fact that a major factor in Turing test performance isn't the intelligence of the testers but their knowledge does highlight why it's not a great measure of intelligence in the first place.
That's vague and covers a universe of criteria — mood, satisfaction with the conversation, actual behavior and so forth — but it also I think is a more realistic gauge of AI performance. It's probably unattainable but that's not necessarily a bad thing. If it is attainable within confidence then it's a pretty powerful AI.
There are probably some people who would be ok with some AI for some purposes.
Sadly the author doesn't elaborate on this. I thought nowadays 'AI' is a synonym for 'algorithm', which would fit ELIZA.
Is there an accepted definition of the word AI?
https://en.wikipedia.org/wiki/ELIZA
[1] https://www.civilizr.com/communication/azile-chatbot-savage-...
You may find the original here:
https://sites.google.com/view/elizagen-org/commonly-known-el...
The source is in `doctor.el`.
DonHopkins on May 30, 2021
Here's the source code for Kent Pitman's "DOCTOR" in MACLISP, which was of course inspired by ELIZA. (Joseph Weizenbaum taught Kent Pitman LISP!)
https://github.com/PDP-10/its/blob/master/src/games/doc.102
And here's what happened when he (manually, by typing) hooked it up with Kenneth Colby's "PARRY" (the paranoid patient):
https://www.maclisp.info/pitmanual/funnies.html
>Parrying Programs
>I didn't write the original ELIZA program, although my Lisp class was taught by Joseph Weizenbaum, who did. I later wrote a very elaborate program of similar kind, which I just called DOCTOR, in order to play with some of the ideas.
>At some point, I noticed there was a program at Stanford called PARRY (the paranoid patient), by Kenneth Colby. I understand from Wikipedia's PARRY entry that Weizenbaum's ELIZA and PARRY were connected at one point, although I never saw that. I never linked PARRY with my DOCTOR directly, but I did once do it indirectly through a manual typist. Part of my record of this exchange was garbled, but this is a partial transcript, picking up in the middle. Mostly it just shows PARRY was a better patient than my DOCTOR program was a doctor.
>I have done light editing to remove the typos we made (rubbed out characters were echoed back in square brackets).
>Also, I couldn't find documentation to confirm this, but my belief has always been that the numeric values after each line are PARRY's level of Shame (SH), Anger (AN), Fear (FR), Disgust (DS), Insecurity (IN), and Joy (J).—KMP
[...]
https://news.ycombinator.com/item?id=38402813
DonHopkins 44 days ago, on: The Revival of Medley/Interlisp
That's right, it's just a throw-away quip, but if you want the deep nuanced story and inside history of Common Lisp and comparison with Scheme, Kent Pitman is the one to read:
https://en.wikipedia.org/wiki/Kent_Pitman
Index of Kent Pitman's Papers:
https://www.nhplace.com/kent/Papers/
Scheme or Lisp? Kent M Pitman explains the deep philosophical differences.
https://www.reddit.com/r/programming/comments/6fa5r/scheme_o...
Kent Pitman on Scheme or Lisp?:
https://groups.google.com/g/comp.lang.lisp/c/TEk4O4-zsA8/m/H...
Common Lisp: The Untold Story:
https://www.nhplace.com/kent/Papers/cl-untold-story.html
Why Wolfram Mathematica did not use Lisp (2002) (ymeme.com):
https://news.ycombinator.com/item?id=9797936
https://web.archive.org/web/20110122140154/http://www.ymeme....
Kent Pitman's essay on why lisp doesn't have copying of lists.
https://groups.google.com/g/comp.lang.lisp/c/MmtQreo3PCM
Parenthetically Speaking with Kent Pitman: The Best Intentions: EQUAL Rights -- And Wrongs -- In Lisp:
https://www.nhplace.com/kent/PS/EQUAL.html
Kent M. Pitman Answers On Lisp And Much More:
https://developers.slashdot.org/story/01/11/03/1726251/kent-...
Kent M. Pitman's Second Wind:
https://developers.slashdot.org/story/01/11/13/0420226/kent-...
Tutorial on Good Lisp Programming Style: Peter Norvig, Sun Microsystems Labs Inc; Kent Pitman, Harlequin Inc.:
https://www.cs.umd.edu/~nau/cmsc421/norvig-lisp-style.pdf
Notes from the ANSI standardisation process:
https://stackoverflow.com/questions/72414053/notes-from-the-...
Issue CLOS-CONDITIONS Writeup:
https://www.lispworks.com/documentation/lw50/CLHS/Issues/iss...
On Pitman's “Special forms in Lisp” (2011) (kazimirmajorinc.com):
https://news.ycombinator.com/item?id=29947329
https://news.ycombinator.com/item?id=29954993
DonHopkins on Jan 16, 2022
Kent Pitman also wrote the "Revised Maclisp Manual (Saturday Evening Edition)" aka the "Pitmanual".
https://en.wikipedia.org/wiki/David_A._Moon
http://www.nhplace.com/kent/publications.html
>In 1983, I finished the multi-year task of writing The Revised Maclisp Manual (Saturday Evening Edition), sometimes known as The Pitmanual, and published it as a Technical Report at MIT's Lab for Computer Science. In 2007, I finished dusting that document off and published it to the web as the Sunday Morning Edition.
http://www.maclisp.info/pitmanual/
Not to be confused with David Moon, who wrote the "MacLISP Reference Manual" aka the "Moonual", and who co-authored the "Lisp Machine Manual" with Richard Stallman and Daniel Weinreb. That manual had big bold lettering that ran around the spine and back of the cover, so it was known as the "LISP CHINE NUAL" (reading only the letters on the front).
https://news.ycombinator.com/item?id=15185827
https://hanshuebner.github.io/lmman/title.xml
https://news.ycombinator.com/item?id=15186998
DonHopkins on Sept 6, 2017
The cover of the Lisp Machine Manual had the title printed in all caps diagonally wrapped around the spine, so on the front you could only read "LISP CHINE NUAL". So the title was phonetically pronounced: "Lisp Sheen Nual".
My friend Nick made a run of custom silkscreened orange LISP CHINE NUAL t-shirts (most places won't print around the side like that).
I was wearing mine in Amsterdam at Dappermarkt on Queen's Day (when everyone's supposed to wear orange, so I didn't stand out), and some random hacker (who turned out to be a university grad student) came up to me at random and said he recognized my t-shirt!
http://www.textfiles.com/hacking/hakdic.txt
CHINE NUAL (sheen'yu-:l) noun.
The reference manual for the Lisp Machine, a computer designed at MIT especially for running the LISP language. It is called this because the title, LISP MACHINE MANUAL, appears in big block letters -- wrapped around the cover in such a way that you have to open the cover out flat to see the whole thing. If you look at just the front cover, you see only part of the title, and it reads "LISP CHINE NUAL"
toomanybeersies on Sept 7, 2017
Link to an image of the manual, for the lazy:
https://c1.staticflickr.com/1/101/264672507_307376d26c_z.jpg
[...]
;;; Notes about CLI interrupts and eval-in-other-lisp:
https://news.ycombinator.com/item?id=20267415
https://news.ycombinator.com/item?id=38061207
>Here's Kent Pitman's :TEACH;LISP from ITS, which is a MACLISP program that teaches you how to program in MACLISP. (That's "Man And Computer Lisp" from "Project MAC", not "Macintosh Lisp".)