In my experience, LLMs hallucinate citations like crazy. More than half the time I've checked, the citation either didn't exist, or it did exist but didn't support the LLM's assertion.
This is true not just of chat responses, but of Google AI summaries as well.
When the references are wrong more often than not, you can understand why many people will simply downvote you for bringing LLM citations into the conversation. Why quote a habitual liar?
(If you look at my other comments, I'm actually in favor of using LLMs in some capacity for HN comments. Just not in this case.)