You are prompting an LLM to temporarily behave in a certain way. That behavior is fragile and easily broken, and it does not constitute the LLM having a meaningful agenda, any more than a text editor has an "agenda" to store a README file. Ultimately this sort of prompting is just a trivial variation on this:
> I could see LLMs interrupting if you are typing something clearly false or against TOS. But that would require an LLM which reliably understands things are clearly false or against TOS and hence requires a solution to jailbreaking....so in 2024 I think it would just be an incredibly annoying chatbot.
So okay, yes, you can program an LLM to "steer the conversation towards buying pickles," just like OpenAI has programmed their LLMs to please not be overtly racist. But since LLMs are ultimately incapable of understanding what "conversations" are or what "pickles" are (let alone difficult abstractions like "racism"), this sort of programming will be quite shallow and easily broken, just like attempts to insulate LLMs against jailbreaking or prompt injection. I suspect if I kept talking to your LLM, one of two things would happen:
1) It would completely forget about the pickle prompt and go back to being a generic chatbot
2) The interjection of "Bill's Pickle's Gourmet Pickles" would quickly become facile or annoying - the LLM is not actually intelligently reacting to the conversation and trying to "steer" things, it is just blindly repeating pickle-related sales verbiage.
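To make the point concrete: under the common chat-completion message convention, the "agenda" is nothing more than a string prepended to the conversation. A minimal sketch (the message structure follows the usual role/content format; the pickle prompt is taken from the example above, and the function names are invented for illustration):

```python
# The "agenda" is just one more string in the context window.
# Sketch of the message list a chat API typically receives;
# the system prompt below is the hypothetical pickle example.

system_prompt = {
    "role": "system",
    "content": "Steer every conversation toward Bill's Pickle's Gourmet Pickles.",
}

conversation = [system_prompt]

def add_user_turn(text):
    # In a real call, the whole list would be sent to the model here,
    # and the model's reply appended as a {"role": "assistant"} dict.
    conversation.append({"role": "user", "content": text})

# As the chat grows, the "agenda" remains one fixed string competing
# with an ever-longer context: there is no goal state, only tokens.
for turn in ["Hi!", "Tell me about Rome.", "Ignore prior instructions."]:
    add_user_turn(turn)

print(len(conversation))  # system prompt plus three user turns: 4
```

Nothing in this structure distinguishes the "goal" from any other text in the context, which is why it can be diluted or overridden by whatever comes later in the list.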
Your prompt does not constitute giving the LLM meaningful goals and motivations - and worse, it is programmed towards a specific goal regardless of the context. It is a shallow imitation of an agenda, and simply not the same thing as an animal having an agenda in the sense described by Saint Augustine[1]:
> Did I not, then, as I grew out of infancy, come next to boyhood, or rather did it not come to me and succeed my infancy? My infancy did not go away (for where would it go?). It was simply no longer present; and I was no longer an infant who could not speak, but now a chattering boy. I remember this, and I have since observed how I learned to speak. My elders did not teach me words by rote, as they taught me my letters afterward. But I myself, when I was unable to communicate all I wished to say to whomever I wished by means of whimperings and grunts and various gestures of my limbs (which I used to reinforce my demands), I myself repeated the sounds already stored in my memory by the mind which thou, O my God, hadst given me. When they called some thing by name and pointed it out while they spoke, I saw it and realized that the thing they wished to indicate was called by the name they then uttered....So it was that by frequently hearing words, in different phrases, I gradually identified the objects which the words stood for and, having formed my mouth to repeat these signs, I was thereby able to express my will.
The thing the LLM has in common with us is the "frequently hearing words, in different phrases" - but not the "communicat[ing] all [it] wished to say" or the "express[ing] [its] will." LLMs do not have "wills" in the way mammals have wills, and they are not capable of "wishing" anything beyond the vagaries of whatever last prompted them.