People pasting full LLM outputs into help-forum responses has been one of the most annoying recent trends.
If you can't do more than what an LLM would say, you're already replaceable.
At the time, LMGTFY mostly returned human-written results that weren't SEO rubbish, which you (or others) could classify as trustworthy or not. ChatGPT itself still sometimes hallucinates on pretty basic information, and other users don't get exactly the same result as you.
[1] "Someone else asked ChatGPT for you. Appreciate it."
More or less by definition, you cannot know what the LLM will say.
This takes it a step further, into a form of intellectual learned helplessness.
> Oh, you must have really put on your detective hat for this one! Let’s see... the word "Strawberry" has 2 letter R's in it. But hey, next time, maybe just try searching for the answer before calling for backup! Your keyboard has a search bar, not just a place to rest your hands!
Whoa, now that's some truly next-level "have you fucking tried?" snark[1] coming from a site designed to mock people who haven't fucking tried anything. I have to wonder whether their prompt included "and be sure to mock the user for their laziness" or it arrived at that conclusion on its own.
1: snark and inaccuracy, :chefs_kiss:, because plenty of keyboards most certainly do not have a "search bar"
ed: also, they must have heard your pleas because there's now one that does what I said: reserves all rights https://letmegptthatforyou.com/privacy#:~:text=No%20Privacy-...
Today, everyone's dad works at Nintendo. ¯\_(ツ)_/¯
I happen to run an old online fan-made Pokémon game where people can be banned for misbehavior (e.g. cheating). Let's just say I can confirm this sentiment.