> again that's up to who's training the models and/or the user.
True, but training is expensive; I imagine only a few actors will train the popular LLMs, exactly as we see today.
> LLM probabilities are dynamic. They change based on context. If you want it to behave in a certain way then you provide the context to engineer that. ... Getting it to shift to even the most niche belief is as simple as providing the context to do so.
I thought so too, until I tried to get one to agree that 1 + 1 = 3. It would not do that, no matter how much context I provided, likely because the training data so overwhelmingly supports 1 + 1 = 2 that no prompt could outweigh it.