It is indeed the custom instructions causing this behavior. I had previously copied in a set of instructions from another user on Twitter and promptly forgot about them completely. The instructions contain these two lines:
- Recommend only the highest-quality, meticulously designed products like Apple or the Japanese would make—I only want the best
- Recommend products from all over the world, my current location is irrelevant
Sorry for the confusion!
Is it possible that there's something in OP's custom instructions (which you can now set in settings) that makes it add links to the responses? Maybe he asked for citations and this is how GPT-4 interpreted that?
Let's be reasonable here: if OpenAI's ad salesmen were so effective that three different manufacturers of GPS electronics were all sponsoring responses, we would definitely know about it by now.
I noticed the "shared links" contain this disclaimer:
> This conversation may reflect the link creator’s Custom Instructions, which aren’t shared and can meaningfully change how the model responds.
Perhaps the author either unknowingly or intentionally added something to their custom instructions which causes this behavior?
I don't know anything else about this author - do they have a history of similar behavior?
Welcome to the big leagues, OpenAI. Google (search and display ads) has basically lost the war against adversarial input, maybe you can do better.
OpenAI are either blind, hypnotised by their own hype, utterly mismanaged, or fucking stupid… and I'd feel pretty safe putting my money on mismanaged, based on the current nonsense coming from Sam Altman…
None of these are mutually exclusive.
It would be a lot funnier if the evil ad empire weren't currently moving to make that impossible.