People put too much faith in organizations that depend on commercializing everything.
Most general-purpose AI systems seem built around sustaining engagement rather than providing the best possible answers. This is unhealthy because it takes the people least able to recognize this behavior in AI and reinforces whatever they're already talking about.
This is absolutely unhealthy, and it is a conscious choice by the AI overlords, because they fully have the ability to put in filters or adjustments based on their ethical guidelines. For whatever reason, making a best effort at the truth isn't one of those guidelines. I've seen some AIs whose ethical guidelines specifically contradict the truth.
Far fewer people want to hear the truth than you'd like to believe.
This sounds like a term from an Arthur C. Clarke novel. Also reminds me of Urasawa Naoki's Pluto.
> Information Utility Burnout
These days, every time I search for something, I have to use 100% of my brain just to work out which results are slop and which are not, even on Kagi. A few days ago, I had to search for something on Google without an adblocker. It was a remarkable experience. Like, 60% of my screen was ads or AI slop.
And it did end up helping me make a nice chicken dinner the other day, so thanks AI.
What a journey. Just say you use AI to do your writing. It's OK, or at least it's preferable to the above, which is just the 2020s version of "I always read the linked sources on the Wikipedia article."
I guess it's not their fault. I did put in a comment somewhere else, "Beep boop, I am a robot," and I'm being punished for it.