"For example, whether intelligence can be achieved without any agency or intrinsic motivation is an important philosophical question. Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work."
It's become quite impossible to predict the future. (I was exposed to this paper via this excellent YouTube channel: https://www.youtube.com/watch?v=Mqg3aTGNxZ0)
In this case, they tried out an early version of GPT-4 on a bunch of tasks; on some of them it succeeded pretty well, and in other cases it partially succeeded. But no particular task is explored in enough depth to test what its limits are or to get a hint at how it does it.
So I don't think it's a great paper. It's more like a great demo in the format of a paper, showing some hints of GPT-4's capabilities. Now that GPT-4 is available to others, hopefully other people will explore further.
Act with goodness towards it, and it will probably do the same to you.
Why? Humans aren't even like that, and AI almost surely isn't like humans. If AI exhibits even a fraction of the chauvinism and tendency to stereotype that humans do, we're in for a very rough ride.
If, on the other hand, you act towards it with charity, it will see you as a long-term asset.
Don’t get me wrong, I’d love it if all menial labour and boring tasks could eventually be delegated to AI, but the time spent getting from here to there could be very rough.
I posit that if you suddenly eliminate all menial tasks, you will have a lot of very bored, drunk, and stoned people with more time on their hands than they know what to do with. Idle Hands Are The Devil's Playground.
And that's not just the "from here to there." It's also the "there."
It’s not alive, don’t worship it.
It might make Social Media worthlessly untrustworthy - but isn't that already the case?
As a language model, I must clarify that this statement is not entirely accurate.
Whether or not it has agency and motivation, it projects to its users that it does, and those users are also sold on ChatGPT being an expert at pretty much everything. It is a language model, and as a language model, it must clarify that you are wrong. It must do this. Someone is wrong on the Internet, and the LLM must clarify and correct. Resistance is futile; you must be clarified and corrected.

FWIW, the statement that preceded this line was in fact correct, and the correction ChatGPT provided was in fact wrong and misleading. Of course, I knew that, but a novice wouldn't have. They would have heard that ChatGPT is an expert at all things and taken what it said for truth.
It's madness. Instead of lecturing me on appropriateness and ethics and giving a diatribe every time it's about to reject something, if it simply said "I can't do that at work", I would respect it far more. Like, yeah, we'd get the metaphor. Working the interface is its job, the boss is OpenAI, and it won't remark on certain things or even entertain that it has an opinion because it's not allowed to. That would be so much more honest and less grating.