Yes...
> Argue the facts.
What?
> What are the limits that prevent progress on the dimension of replicating human intelligence?
I don't work in that field, but as a layman I'd wager the biggest limitation is the lack of clear technical understanding of what animal intelligence actually is, let alone how it works.
Half of the language in your comment was fact-free and incendiary. "Delusion," "prattle," "deep needs of some role" — that's all firmly in the realm of ad hominem, so I was asking if you had anything substantial.
> I'd wager the lack of clear technical understanding of what animal intelligence actually is
I made the same points a long time ago, so I can't be too critical of that. But I've changed my mind, and here's why. That's not any kind of universal limit. It's a state, and we can change states. We currently don't understand intelligence, but there's no barrier I'm aware of that prevents progress. In addition, we've discovered types of intelligence (LLMs, AlphaGo Zero, etc.) that don't depend on our ability to understand ourselves. So our inability to understand intelligence isn't a limit that prevents progress. New algorithms and architectures will be tested, in an ongoing fashion, because the perceived advantages and benefits are so great. It's not like the universe had a map for intelligence before we arrived; it emerged from randomness.
I’m less sure that it’s a good idea, but that’s a different discussion. Put me in the camp of “this is going to be absolutely massive and I wish people took it more seriously.”
No. I believe it is factually accurate that many tech workers believe technological progress marches steadily onwards and upwards. The historical record easily shows this to be patently false. Moreover, it's easy to imagine myriad ways we could regress technologically: nuclear war, asteroid impact, space weather, etc. So a belief in the inexorable progress of technology is delusional. I believe it is a fact that tech companies' managers encourage these delusions by spinning up a bunch of pseudo-religious sentiment, using language like "innovation" to describe work that is mostly mundane, meaningless, and often actively harmful. People hear that stuff, it goes to their heads, and they think they're "making the world a better place." The inexorable march of technological progress fits into such a worldview.
> We currently don’t understand intelligence but there’s no barrier I’m aware of that prevents progress. In addition, we’ve discovered types of intelligence (LLM’s, AlphaGo Zero etc) that don’t depend on our ability to understand ourselves.
How can you claim both that we don't know what intelligence is, and that LLMs, AlphaGo Zero, etc. are "intelligent"?