https://chatgpt.com/share/683a3c88-62a8-8008-92ef-df16ce2e8a...
An LLM only learns through input text. It doesn't have a first-person 3D experience of the world. So it can't execute physical experiments, or even understand them. It can understand the texts about it, but it can't visualize it, because it doesn't have a visual experience.
And ultimately our physical world is governed by physical processes. So at the fundamentals of physical reality, LLMs lack understanding, and will therefore stay dependent on humans to educate and correct them.
You might get impressively far with all kinds of techniques, but you can't cross this barrier with LLMs alone. To do so, you would have to give the model human-like senses, so it has an experience of the world, and make it understand those experiences. Sure, they're already working on that, but that is a lot harder to create than a pure machine learning algorithm.
> An LLM only learns through input text.
This is false. There already exist LLMs that understand more than just text. Relevant search term: multimodality.
> It doesn't have a first-person 3D experience of the world.
Again false. With multimodality it is straightforward to create such an experience: just set up an input device that streams it to the model.
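The "stream an input device" idea can be sketched in a few lines. This is a minimal sketch, not a working driver: `capture_frame` and `describe_frame` are hypothetical stubs standing in for a real camera read and a real multimodal-LLM call.

```python
def capture_frame() -> bytes:
    # Stub: a real implementation would read from a camera device
    # (e.g. via OpenCV's VideoCapture).
    return b"\x89PNG...fake-frame-bytes"

def describe_frame(frame: bytes) -> str:
    # Stub: a real implementation would send the image to a
    # vision-capable (multimodal) LLM and return its description.
    return f"a frame of {len(frame)} bytes"

def perceive(n_frames: int) -> list[str]:
    """Turn a stream of raw visual input into model 'observations'."""
    return [describe_frame(capture_frame()) for _ in range(n_frames)]

observations = perceive(3)
print(observations[0])
```

The point of the loop is structural: any device that yields bytes can be slotted into `capture_frame`, which is all "giving the model an experience of the world" requires at the interface level.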
> So it can't execute physical experiments, or even understand them.
Here you get confused again. It doesn't follow from someone's perceptual modality that they can't do or understand experiments. Helen Keller was blind, but she could still do an experiment.
Beyond just being confused, you also make another false claim: current LLMs already have the capacity to run experiments, and they do. Search terms: tool use, ReAct loop, AI agents.
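The ReAct pattern those search terms point at is simple enough to sketch. In the sketch below the "model" is a scripted stand-in (a real agent would call an LLM at that point); the harness alternates Thought → Action → Observation, executing each Action with a real tool, until the model emits a final answer.

```python
def calculator(expression: str) -> str:
    """Tool: evaluate a simple arithmetic expression."""
    # Sketch only -- never eval untrusted model output in production.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

# Scripted stand-in for an LLM call; a real agent would generate these turns.
SCRIPTED_TURNS = iter([
    "Thought: I need to compute 12 * 7.\nAction: calculator: 12 * 7",
    "Thought: I have the result.\nFinal Answer: 84",
])

def fake_llm(prompt: str) -> str:
    return next(SCRIPTED_TURNS)

def react_loop(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        turn = fake_llm(prompt)
        prompt += turn + "\n"
        if "Final Answer:" in turn:
            return turn.split("Final Answer:", 1)[1].strip()
        if "Action:" in turn:
            tool_name, arg = turn.split("Action:", 1)[1].strip().split(":", 1)
            observation = TOOLS[tool_name.strip()](arg.strip())
            prompt += f"Observation: {observation}\n"  # fed back to the model
    return "no answer"

answer = react_loop("What is 12 * 7?")
print(answer)  # -> 84
```

Swap the calculator for a lab-equipment API or a robot controller and the same loop is literally an LLM running a physical experiment, which is the point at issue.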
> It can understand the texts about it, but it can't visualize it, because it doesn't have a visual experience.
Again, false!
Multimodal LLMs can already generate images.
> And ultimately our physical world is governed by physical processes. So at the fundamentals of physical reality, LLMs lack understanding, and will therefore stay dependent on humans to educate and correct them.
Again false. The same sort of reasoning would imply that Helen Keller couldn't read a book, yet braille exists. The ability to acquire information from outside one's umwelt is exactly what intelligence enables.
My example about visualization was just one example to prove a point. What I ultimately mean is the whole, complete human experience. And besides, if you give it eyes, what data are you going to train it on? Most videos on the internet are filmed with a single lens, which doesn't give you a 3D view. So you would have to train it like a baby growing up, by trial and error. And even then we're talking only about vision.
Helen Keller wasn't born blind, so she did have a chance to develop her visual brain functions. Most people can visualize things with their eyes closed.