(Not a brain scientist btw)
Later research showed that models know when they don't know certain pieces of information, but the fine-tuning constraint of always providing an answer did not give them the ability to express that they didn't know.
Asking the model questions against known information can produce a correct/incorrect map detailing a sample of facts that the model does and does not know. Fine-tuning the model to say "I don't know" in response to the questions it got wrong can allow it to generalise, connecting that response to its internal concept of the unknown.
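A minimal sketch of that procedure in Python (ask_model and is_correct here are hypothetical stand-ins for whatever inference call and answer grading you actually use):

    # Build a correct/incorrect map over known QA pairs, then turn the
    # incorrect ones into "I don't know" fine-tuning examples.
    def build_idk_dataset(qa_pairs, ask_model, is_correct):
        """qa_pairs: list of (question, reference_answer) tuples."""
        examples = []
        for question, reference in qa_pairs:
            answer = ask_model(question)  # sample the model's own answer
            if is_correct(answer, reference):
                # The model knows this fact: keep reinforcing the right answer.
                examples.append({"prompt": question, "completion": reference})
            else:
                # The model got it wrong: teach it to abstain instead.
                examples.append({"prompt": question, "completion": "I don't know."})
        return examples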
It is good to keep in mind that the models we have been playing with are just the first ones to appear. GPT 3.5 is like the Atari 2600: you can get it to provide a limited experience of what you want, and it's cool that you can do it at all, but it is fundamentally limited and far from an ideal solution. I see the current proliferation of models as being like the Cambrian explosion of early 8-bit home computers: exciting and interesting technology which can be used for real-world purposes, but you still have to operate with the limitations forefront in your mind and tailor tasks so they can perform the bits they are good at.

I have no idea of the timeframe, but there is plenty more to come. There have been a lot of advances revealed in papers, and a huge number of those advances have not yet coalesced into shipping models. When models cost millions to train, you want to be using a set of enhancements that play nicely together, and some features will be mutually exclusive. By the time you have analysed the options to find an optimal combination, a whole lot of new papers will be suggesting more options.
We have not yet got the thing for AI that Unix was for computers. We are just now exposing people to the problems that drive the need to create such a thing.
Seems pertinent, and now I will try to read it again. Perhaps it will be useful for reference by others.
I once gave an LLM the riddle of the goat, the cabbage and the wolf, but changed the rules a bit: I prompted that the wolf was allergic to goats (and hence would not eat them). Still the LLM insisted on not leaving them together on the same river bank, because the wolf would otherwise sneeze and scare the goat away.
My conclusion was that the LLM solved the riddle using prior knowledge plus creativity, instead of clever reasoning.
I feel this analogy is confirmed by the fact that chain of thought works so well. That is what (most?) people do when they actively "think" about a problem: they have a kind of inner monologue.
Now, we have already reached the point where LLMs are much smarter than the language areas of humans, but not always smarter than the whole human. I think the next step towards AGI would be to add other "brain areas": a limbic system that remembers the current emotion and feeds it as an input into the other parts. We already have dedicated vision and audio AIs. Maybe we also need a system for logical reasoning.
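A toy sketch of what that might look like in Python (everything here is made up for illustration; query_llm stands in for any text-generation call):

    # A persistent "limbic system": a module that carries an emotion state
    # across turns and feeds it into every call to the language model.
    class LimbicSystem:
        def __init__(self):
            self.emotion = "neutral"

        def update(self, event):
            # Crude keyword appraisal; a real system would learn this mapping.
            if "threat" in event:
                self.emotion = "anxious"
            elif "reward" in event:
                self.emotion = "pleased"

    def respond(query_llm, limbic, user_msg):
        limbic.update(user_msg)
        # The remembered emotion is given to the "language area" as context.
        prompt = f"[current emotion: {limbic.emotion}]\n{user_msg}"
        return query_llm(prompt)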
That implies to me that a larger brain immediately benefited our ancestors. That is, going from 400 to 410 cc conferred an evolutionary advantage, and so did 410 to 420, etc.
That implies that once the brain architecture was set, you could increase intelligence through scale.
I bet there are some parallels to current AI there.
Seems to me (not a neuroscientist) like there's a flaw in that experiment: how would the right hemisphere understand the meaning of the words, if language is only processed by the left? I also recall reading that the more "primitive" parts of our brains don't have a concept of negation.
But maybe they have been considering this and it's no issue?
Which also makes me consider: Is there some sort of consciousness in my own brain I'm not aware of? Sure, my brain does a lot I'm not conscious of—but is there a quiet, thinking awareness that exists in my skull, that I, the mind writing this, do not know of?
For our particular brain, the logical ones usually have immediate reactions that suck (like borderline personality disorder), and the emotional ones tend to have mish-mash "words-es" feelings.
(idk how to write that; it's like "words-es": the s at the end acts like a gender marker for emotional-feelings words, so all of them tend to have it)
This perspective fails to establish that the brain produces consciousness, as it relies on the mistaken assumption that "mind" and "consciousness" are interchangeable. While brain activity may influence the mind, consciousness itself could be a more fundamental aspect of reality. Rather than generating consciousness, the brain might function like a radio, merely receiving and processing information from an all-pervasive field of consciousness.
In this view, a split-brain condition would not create two separate consciousnesses but instead allow access to two distinct streams of an already-existing, universal consciousness.
"I think consciousness arises from the brain."
"I think the music I dance to arises from the radio."
I tend to agree, but it doesn't fully explain Benj Hellie's vertiginous question [1]. Everyone seems to have brains, but for some reason only I am me.
If we were able to make an atom-by-atom accurate replica of your brain (and optionally your body, too), with all the memories intact, would you suddenly start seeing the world from two different pairs of eyes at the same time? If not, why? What would make you (the original) different from your replica?
Don't push the argument. It's not coming from a place of rationality even though he's deliberately not using the word "spirit".
(Edit: Michael Graziano is who I was trying to remember - he uses the words "schematic" and "model")
Your view is called "pan-psychism". It's interesting, but there isn't anything that makes it necessary. Everything we're finding suggests that most or all thinking happens outside of consciousness, with the results bubbling up into it as perception. Consciousness does seem to be universal within the brain, though.
I find pan-psychism interesting just because of its popularity - people want something spiritual, knowingly or not. I would advise not to insist that consciousness==soul, however, as neuroscience seems to be rapidly converging on a more mundane view of consciousness. It's best to think of one's "true" self according to the maxim that there is much more to you than meets the mind's eye.
Secondly, it wouldn’t really explain anything. The “consciousness field” would presumably obey some kind of natural laws like the known fields do, but the subjective experience of consciousness would remain as mysterious as before (for those who do find it mysterious).
Experiments demonstrating an external source of consciousness would be very interesting.
Not a teapot in this case!
It's effectively akin to talking about mass. Despite the fact that mass is observable as a distinct phenomenon in any object, it's obviously not accurate to say that you "produce mass" or that it's "your mass" in some private, ontologically separate way; it just appears that way, by definition, if we look at particular manifestations of it.
So that's very interesting that you mention that.
> LaSota believed that humans have two minds, a left and right hemisphere, and each hemisphere can be good or evil, according to posts on her blog. Eventually, LaSota came to believe that only very few people — she among them — are double good. [1]
[0] https://www.usatoday.com/story/news/nation/2025/02/19/zizian...
[1] https://www.nbcnews.com/news/us-news/german-math-genius-get-...