Yes, the short-term behavior/output of an LLM could reflect an implicit goal, but I doubt it would maintain any such goal for an extended period (long-term behavioral coherence is a known shortcoming). Since decoding involves random sampling at each step, and the model carries no internal state from token to token beyond the text itself, any implicit goal seems likely to drift rapidly.
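
To make the "random sampling" point concrete, here's a minimal sketch of temperature sampling in plain NumPy. The vocabulary and fixed logits are toy stand-ins for a real model's per-step output, not anything from an actual LLM; the point is just that independent runs over the same distribution diverge as soon as one sample differs, which is the mechanism behind the drift I'm describing.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    # Softmax with temperature: higher T flattens the distribution,
    # so lower-probability tokens get picked more often.
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

# Toy vocabulary and fixed logits standing in for a model's output at one step.
vocab = ["plan", "but", "maybe", "anyway", "so"]
logits = np.array([2.0, 1.0, 0.5, 0.2, 0.1])

rng = np.random.default_rng(0)
for run in range(3):
    # Each run is an independent chain of samples; with no state carried
    # between steps, the runs diverge after the first differing draw.
    tokens = [vocab[sample_next_token(logits, temperature=1.0, rng=rng)]
              for _ in range(8)]
    print(f"run {run}: {' '.join(tokens)}")
```

In a real model the logits also change at every step (they depend on the sampled prefix), so a single divergent sample shifts the whole downstream distribution, compounding the drift rather than just reshuffling tokens.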