What sets my brain apart from an LLM though is that I am not typing this because you asked me to do it, nor because I needed to reply to the first comment I saw. I am typing this because it is a thought that has been in my mind for a while and I am interested in expressing it to other human brains, motivated by a mix of arrogant belief that it is insightful and a wish to see others either agreeing or providing reasonable counterpoints—I have an intention behind it. And, equally relevant, I must make an effort to not elaborate any more on this point because I have the conflicting intention to leave my laptop and do other stuff.
Humans know what they want to express; choosing words to express it might be similar to an LLM's process of choosing words, but the LLM doesn't have that "here is what I want to express" part, which I guess is the conscious part?
Not really. More often than not my thoughts take form as sense impressions that aren't readily translatable into language. A momentary discomfort making me want to shift posture - i.e., something in the domain of skin-feel / proprioception / fatigue / etc, with a 'response' in the domain of muscle commands and expectation of other impressions like the aforementioned.
The space of thoughts people can think is wider than what language can express, for lack of a better way to phrase it. There are thoughts that are not <any-written-language-of-choice>, and my gut feeling is that the vast majority are of this form.
I suppose you could call all that an internal language, but I feel as though that is stretching the definition quite a bit.
> it seems like that would be pretty analogous to how humans think before communicating
Maybe some, but it feels reductive.
My best effort at explaining my thought process behind the above line: trying to make sense of what you wrote, I got a 'flash impression' of a ??? shaped surface 'representing / being' the 'ways I remember thinking before speaking' and a mess of implicit connotation that escapes me when I try to write it out, but was sufficient to immediately produce a summary response.
Why does it seem like a surface? Idk. Why that particular visual metaphor and not something else? Idk. It came into my awareness fully formed. Closer to looking at something and recognizing it than any active process.
That whole cycle of recognition as sense impression -> response seems to me to differ in character from the kind of hidden chain of thought you're describing.
As a shortcut, my brain "feels" that something is correct or incorrect, and then I logically parse out why I think so. I can only keep so many layers in my head, so if I feel nothing is wrong in the first 3 or 4 layers of thought, I usually don't feel the need to discredit the idea. If someone tells me a statement that sounds correct on the surface, I am more likely to take it as correct, even though digging deeper may show it to be provably incorrect.
The thinking-slow version would indeed be thought through before I communicate it.
Maybe the reason you give is actually a post hoc explanation (a hallucination?). When an LLM spits out a poem, it does so because it was directly asked. When I spit out this comment, it’s probably the unavoidable result of a billion tiny factors. The trigger isn’t as obvious or direct, but it’s likely there.
One of the big problems with discussions about AI and AI dangers in my mind is that most people conflate all of the various characteristics and capabilities that animals like humans have into one thing. So it is common to use "conscious", "self-aware", "intentional", etc. etc. as if they were all literally the same thing.
We really need to be more precise when thinking about this stuff.
Brains are always thinking and processing. What would happen if we designed an LLM system with the ability to continuously read/write to short/long term memory, and with ambient external input?
What if LLMs were designed to run in a loop continuously, not to just run one "iteration" of a loop?
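For concreteness, here is a minimal sketch of what such a loop might look like. Everything in it is hypothetical: `call_llm`, `get_ambient_input`, and the memory structures are illustrative placeholders, not any particular framework's API.

```python
# A continuously running LLM loop with short/long-term memory and ambient input.
# All names here are hypothetical placeholders, not a real API.
from collections import deque

def call_llm(prompt: str) -> str:
    """Placeholder for a call to some LLM backend."""
    raise NotImplementedError

def get_ambient_input() -> str:
    """Placeholder for ambient input: sensors, incoming messages, timers, etc."""
    raise NotImplementedError

short_term = deque(maxlen=20)   # rolling window of recent observations and thoughts
long_term = []                  # append-only store; a real system might use a vector DB

def run_forever():
    while True:
        observation = get_ambient_input()
        short_term.append(f"observed: {observation}")

        prompt = (
            "Long-term memory:\n" + "\n".join(long_term[-5:]) + "\n\n"
            "Recent context:\n" + "\n".join(short_term) + "\n\n"
            "Produce a thought or action. Optionally include a line starting "
            "with REMEMBER: to store something in long-term memory."
        )
        response = call_llm(prompt)
        short_term.append(f"thought: {response}")

        # Let the model decide what is worth committing to long-term memory.
        for line in response.splitlines():
            if line.startswith("REMEMBER:"):
                long_term.append(line.removeprefix("REMEMBER:").strip())
```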
ReAct one-line summary: This is about giving the machine tools that are external interfaces, integrating those with the LLM and teaching it how to use them with a few examples, and then letting it run the show: fulfilling the user's ask/question using whatever tools are available.
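A rough sketch of that reason/act loop is below. The control flow follows the idea in the paper, but the prompt format, the `call_llm` placeholder, and the toy tools are my own illustrative assumptions, not the paper's exact setup.

```python
# ReAct-style loop: the model interleaves 'Thought' and 'Action' steps, tool
# results come back as 'Observation' lines, until it emits a final answer.
# `call_llm` is a hypothetical placeholder; the tools are toy examples.
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # some LLM backend

TOOLS = {
    "search": lambda q: f"(pretend search results for {q!r})",
    "reverse": lambda s: s[::-1],  # stand-in for any real tool
}

def react(question: str, max_steps: int = 8) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_llm(
            transcript +
            "Respond with a 'Thought:' line, then either "
            "'Action: <tool>[<input>]' or 'Final Answer: <answer>'."
        )
        transcript += step + "\n"

        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()

        if "Action:" in step:
            # Parse a tool call like: Action: search[population of France]
            action = step.split("Action:", 1)[1].strip()
            name, _, arg = action.partition("[")
            tool = TOOLS.get(name.strip(), lambda _: "unknown tool")
            transcript += f"Observation: {tool(arg.rstrip(']'))}\n"

    return "No answer within the step budget."
```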
Reflexion one-line summary: This builds on the ideas of ReAct. When the agent detects that something has gone wrong, it stops and asks itself what it might do better next time. The result of that reflection is added to the prompt and it starts over on the same ask, repeating up to N times. This simple expedient improved its performance by a surprisingly large amount.
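And a sketch of the Reflexion-style outer loop on top of it, reusing the hypothetical `react` and `call_llm` from the sketch above; the success check is whatever the task provides (a unit test, a validator, a grader), which I leave as a stub here.

```python
# Reflexion-style retry loop: on failure, ask the model what it would do
# differently, feed that reflection into the next attempt, repeat up to N times.
# Reuses the hypothetical react() and call_llm() from the ReAct sketch above.
def check_success(answer: str) -> bool:
    raise NotImplementedError  # task-specific check: unit test, validator, etc.

def reflexion(question: str, n_trials: int = 3) -> str:
    reflections = []
    answer = ""
    for _ in range(n_trials):
        hints = "\n".join(f"Lesson from an earlier attempt: {r}" for r in reflections)
        answer = react(hints + "\n" + question if hints else question)

        if check_success(answer):
            return answer

        # Self-reflection: what should be done differently next time?
        reflections.append(call_llm(
            f"You attempted: {question}\nYour answer was: {answer}\n"
            "It was judged incorrect. In one or two sentences, what should "
            "you do differently next time?"
        ))

    return answer  # best effort after N trials
```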
As a quick aside, one thing I hear even from AI engineers is "the machine has no volition, and it has no agency." Implementing the ideas in the ReAct paper, which I have done, is enough to give an AI volition and agency, for any useful definition of the terms. These things always devolve into impractical philosophical discussions though, and I usually step out of the conversation at that point and get back to coding.
[1] ReAct https://arxiv.org/pdf/2210.03629.pdf
[2] Reflexion https://arxiv.org/pdf/2303.11366.pdf
How is this different from and/or the same as the concept of "attention" as used in transformers?
It’s when LLMs start asking the questions rather than answering them that things will get interesting.