Can an LLM decide, without being prompted or invoked via an API call, to text someone, go read about something, or do anything at all other than wait for the next prompt?
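To make that concrete, here's a minimal sketch of how inference actually works (`generate` is a hypothetical stand-in for any real inference API, not any particular library): the model is a pure function of its input, and nothing runs until something outside it makes a call.

```python
# Minimal sketch of the LLM interaction loop. `generate` is a
# hypothetical placeholder for a real forward pass / inference API.

def generate(prompt: str) -> str:
    # A real model would return sampled tokens conditioned on the
    # prompt; this stub just illustrates the call shape.
    return f"completion for: {prompt!r}"

# All "activity" originates outside the model, in this driver loop.
# Remove the loop and the model does nothing -- it has no thread of
# its own with which to decide to go text someone.
for prompt in ["hello", "what's on your mind?"]:
    reply = generate(prompt)
    print(reply)
```

The point of the sketch is just that the initiative always sits in the caller's loop, never in the model.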
Do LLMs have any conceptual understanding of anything they output? Do they even have a mechanism for conceptual understanding?
LLMs are incredibly useful and I'm having a lot of fun working with them, but as far as I can tell they're a long way from anything like general intelligence.