This.
Back in the late 1980s and early '90s the debate du jour was between deliberative and reactive control systems for robots. I got my Ph.D. for simply saying that the entire debate was based on the false premise that it had to be one or the other, that each approach had its strengths and weaknesses, and that if you just put the two together the whole would be greater than the sum of its parts. (Well, it was a little more than that. I had to actually show that it worked, which was more work than simply advancing the hypothesis, but in retrospect it seems kinda obvious, doesn't it?)
If I were still in the game today, combining generative-AI and old-school symbolic reasoning (which has also advanced a lot in 30 years) would be the first thing I would focus my attention (!) on.
Chess was a game for humans.
It was very briefly a game for humans and machines (Kasparov had a go at getting "Advanced Chess" off the ground as a competitive sport), but soon enough having a human on the team made the program worse.
But at least the evaluation functions were designed by humans, right? That lasted a remarkably long time; first Stockfish became the strongest engine in the world by using distributed hyperparameter search to tweak its piece-square tables, then AlphaZero came along and used a policy network + MCTS instead of alpha-beta search, then (with an assist from the Shogi community) Stockfish struck back with a completely learned evaluation function via NNUE.
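For readers who haven't seen one, here is a toy sketch of what a hand-designed piece-square-table evaluation looks like. This is not Stockfish's actual code, and the table values are invented for illustration: just a material value plus a per-square bonus, exactly the kind of human-authored heuristic that NNUE later replaced with a learned function.

```python
# Toy piece-square-table evaluation (illustrative only, not Stockfish's code).
# Score = material value + a per-square bonus from a hand-tuned table.

PIECE_VALUE = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}

# 8x8 bonus table for a white knight: knights score higher near the center.
# These numbers are made up for the example.
KNIGHT_PST = [
    [-50, -40, -30, -30, -30, -30, -40, -50],
    [-40, -20,   0,   5,   5,   0, -20, -40],
    [-30,   5,  10,  15,  15,  10,   5, -30],
    [-30,   0,  15,  20,  20,  15,   0, -30],
    [-30,   5,  15,  20,  20,  15,   5, -30],
    [-30,   0,  10,  15,  15,  10,   0, -30],
    [-40, -20,   0,   5,   5,   0, -20, -40],
    [-50, -40, -30, -30, -30, -30, -40, -50],
]

def score_knight(rank: int, file: int) -> int:
    """Centipawn score for a white knight on (rank, file), 0-indexed."""
    return PIECE_VALUE["N"] + KNIGHT_PST[rank][file]

# A centralized knight outscores one stuck in the corner:
assert score_knight(3, 4) > score_knight(0, 0)
```

Tuning systems like Stockfish's distributed testing framework searched over exactly these kinds of table entries; NNUE's move was to throw the hand-written tables away entirely and learn the whole evaluation.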
So the last frontier of human expertise in chess is search heuristics, and that's going to fall too: https://arxiv.org/abs/2402.04494.
The common theme in all of this is that the stuff we used before is, fundamentally, a set of hacks to get around _not having enough compute_, and those hacks make the system worse once you no longer have to make tradeoffs around inductive biases. Empirical evidence suggests that raw scaling has a long way to run yet.
I'm really baffled by this statement and genuinely curious.
How come studying GOFAI as an undergraduate and graduate student at several European universities, doing a PhD, and working in the field for several years _never_ exposed me to EURISKO until last week (thanks to HN)?
I had heard about Cyc, and about many formalisms and algorithms related to EURISKO, but never its name.
Is EURISKO famous only in the US?
It was featured in a BBC radio series on AI made by Colin Blakemore [1] around 1980, and the papers on AM and EURISKO were in the library of the UK university that I attended.
[1] https://en.wikipedia.org/wiki/Colin_Blakemore#Public_engagem...
I discussed ChatGPT with my yoga teacher recently, but I bet not even my IT colleagues would have a clue about EURISKO. :-)
https://www.lesswrong.com/posts/rJLviHqJMTy8WQkow/recursion-...
> This is a road that does not lead to Friendly AI, only to AGI. I doubt this has anything to do with Lenat's motives - but I'm glad the source code isn't published and I don't think you'd be doing a service to the human species by trying to reimplement it.
To my mind -- and maybe this is just the benefit of hindsight -- this seems way too overcautious on Yudkowsky's part.
[1]: https://www.lesswrong.com/posts/t47TeAbBYxYgqDGQT/let-s-reim...
Why is Yudkowsky taken seriously? This stuff is comparable to the "LHC micro black holes will destroy Earth" hysteria.
There are actual concerns around AI like deep fakes, a deluge of un-filterable spam, mass manipulation via industrial scale propaganda, mass unemployment created by widespread automation leading to civil unrest, opaque AIs making judgements that can't be evaluated properly, AI as a means of mass appropriation of work and copyright violation, concentration of power in large AI companies, etc. The crackpot "hard takeoff" hysteria only distracts from reasonable discourse about these risks and how to mitigate them.
  Trivialities    Annoyances    Immediate harm    X-Risk
  |--------------------------------------------------|
  \----stuff you mention----/
             \------stuff Eliezer wrote about-------/
> The crackpot "hard takeoff" hysteria only distracts from reasonable discourse about these risks and how to mitigate them.

IDK, I feel the endless hand-wringing about copyright and deepfakes distracts from risks of actual, significant harm at scale, some of which you also mentioned.
I will be heavily downvoted for this, but here is how I remember it:
1) The LHC was used to study black holes and probe predictions like Hawking radiation.
2) The LHC was supposed to be safe because of Hawking radiation (which was only an unproven theory at the time).
So the unpopular question: what if Hawking radiation didn't actually exist? Wouldn't there be a risk of us dying? A small risk, but still some risk? (Especially as a potential micro black hole would have the same velocity as Earth, so it wouldn't fly away somewhere into space.)
On a side note: how would EURISKO evaluate this topic?
Since I read about this secretive Cyc (why do you have to email asking for it, with the source not hosted anywhere?): couldn't any current statistics-based AI be used to feed the Cyc program/database with information? Take a dictionary and ask ChatGPT to fill in information for each word.
People like religion, particularly if it doesn't affect how they live their life _today_ too much. You get all of the emotional benefits of feeling like you're doing something virtuous without the effort of actually performing good works.
I agree.
> I suspect its achievements were slightly overblown and heavily guided by a human hand
So do I. We'll find out how much of its performance was real, and how much bullshit.
> the unreasonably effectiveness of differentiable programming and backpropagation has sucked up much of the oxygen in the room
The Bitter Lesson -- http://www.incompleteideas.net/IncIdeas/BitterLesson.html
Unfortunately it starts with the passing of Douglas Lenat. But that enabled Stanford to open up its 40-year-old archive, which it still had, of Lenat's work.
Somehow, someway, someone not only stumbled upon EURISKO, but also knew what it was. One of the most notorious AI research projects of the age, one that actually broke out of the research labs of Stanford and into the public eye, with impactful results. Granted, for arguably small values of "public" and "impactful", but for the small community it affected, it made a big splash.
Lenat used EURISKO to find a very unconventional winning configuration and went on to win a national gaming tournament. Twice.
In that community, it was a big deal. The publisher changed the rules because of it, but Lenat returned victorious again the next year. After a discussion with the game and tournament sponsors, he never came back.
Apparently EURISKO has quite a reputation in the symbolic AI world, but even there it was held close.
But now it has been made available. Not only made available, but made operational. EURISKO is written in an obsolete Lisp dialect, Interlisp. But, fortunately, we have today machine simulators that can run versions of that Lisp for those long-lost, 40-year-old machines.
And someone was able to port it. And it seems to run.
The thought of the tendrils through time that had to twist their way for us to get here leaves me, at least, awestruck. So much opportunity for the wrong butterfly to have been stepped on to prevent this from happening.
But it didn’t, and here we are. Great job by the spelunkers who dug this up.
Basically, with the Traveller tournament Lenat appears to have stumbled onto a story that caught the public's imagination, and then milked it for all he could, to give his project publicity and to make it appear more successful than it actually was. And if that required embellishing the story or just making shit up, well, no harm no foul.
Even when something is technically true, it often turns out that it's being told in a misleading way. For example, you say that "the publisher changed the ruleset". That was the entire gimmick of the Traveller TCS tournament rules! The printed rulebook had a preset progression of tournament rules for each year.
I wrote a bit more about this a few years ago with some of the other details: https://news.ycombinator.com/item?id=28344379
Doug Lenat's sources for AM (and EURISKO+Traveller?) found in public archives - https://news.ycombinator.com/item?id=38413615 - Nov 2023 (9 comments)
Eurisko Automated Discovery System - https://news.ycombinator.com/item?id=37355133 - Sept 2023 (1 comment)
Why AM and Eurisko Appear to Work (1983) [pdf] - https://news.ycombinator.com/item?id=28343118 - Aug 2021 (17 comments)
Early AI: “Eurisko, the Computer with a Mind of Its Own” (1984) - https://news.ycombinator.com/item?id=27298167 - May 2021 (2 comments)
Some documents on AM and EURISKO - https://news.ycombinator.com/item?id=18443607 - Nov 2018 (10 comments)
Why AM and Eurisko Appear to Work (1983) [pdf] - https://news.ycombinator.com/item?id=9750349 - June 2015 (5 comments)
Why AM and Eurisko Appear to Work (1984) [pdf] - https://news.ycombinator.com/item?id=8219681 - Aug 2014 (2 comments)
Eurisko, The Computer With A Mind Of Its Own - https://news.ycombinator.com/item?id=2111826 - Jan 2011 (9 comments)
Let's reimplement Eurisko - https://news.ycombinator.com/item?id=656380 - June 2009 (25 comments)
Eurisko, The Computer With A Mind Of Its Own - https://news.ycombinator.com/item?id=396796 - Dec 2008 (13 comments)
This is amusing: https://www.saildart.org/D.SAI[1,DBL]
And it looks like he wrote a story called "Lethe" as a grad student: https://www.saildart.org/LETHE.DOC[1,DBL]
What can one do with EURISKO? The fact of its recovery after its author's passing is interesting in and of itself, but why is EURISKO, specifically, worth the effort of understanding?