Meta should be (and is) in the business of policing third-party spam on their forums that does exactly this. We can infer what must've happened - the model must've been fine-tuned on forum comments, and this would be the likely format for a response to that question. This sort of thing should've been caught by a wrapper/guard model, and will probably make a good case to add to such a model's instructions/training.
(btw: is it "an LLM" or "a LLM"? I guess I should ask an LLM which it prefers to be called)
The choice between "a" and "an" is based on the sound that immediately follows the article, not the letter. If the next word begins with a vowel sound, use "an"; if it begins with a consonant sound, use "a".
Because "LLM" is pronounced "el el em", its first sound is "eh", a vowel sound, so it takes "an".
The same first letter can take a different article depending on how the word is pronounced: "an LLM" vs. "a layer".
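As a toy illustration of the sound-not-letter rule, here's a minimal Python sketch. It only handles initialisms that are read letter by letter, and the function name and the set of letter names that begin with a vowel sound are my own assumptions:

```python
# Letters whose spoken English names start with a vowel sound:
# "ay", "ee", "ef", "aitch", "eye", "el", "em", "en", "oh", "ar", "es", "ex".
# Note "U" is "yu" (a consonant sound), so it's excluded.
VOWEL_SOUND_LETTERS = set("AEFHILMNORSX")

def article_for_initialism(term: str) -> str:
    """Pick "a" or "an" for an initialism read letter by letter,
    based on the spoken name of its first letter."""
    first = term[0].upper()
    return "an" if first in VOWEL_SOUND_LETTERS else "a"

print(article_for_initialism("LLM"), "LLM")  # an LLM
print(article_for_initialism("GPU"), "GPU")  # a GPU
```

This breaks down, of course, for abbreviations pronounced as words ("a NASA probe") — real style checkers need pronunciation data, not a letter table.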
See someone smarter than me explain it here: https://www.merriam-webster.com/grammar/is-it-a-or-an
And when I'm reading a text for any purpose other than memorizing article choices (so, 99.999% of cases), the articles don't get enough of my attention to be remembered - it's the meaning of the other words that gets my attention, and a big part of that is what gets remembered, not the articles used before them.
Being able to look at such a list every few days for, say, a month, would definitely help me remember most of these cases.
Is anyone still surprised by this? If so, let me repeat: LLMs are Internet simulators. They will give you simulations of good replies you might get on the Internet.
(sorry I am just venting my annoyance with it too, I don't mean for this to come off as hostile if it does!)
At least "E-Commerce" turned out to be real, eventually.
The makers of Velcro (brand name) would love for people to use “hook and loop fasteners” when referring to velcro (generic), but once enough people are using a term you might as well try to fight the tide.
No it doesn't. It can't. Only people (or companies, which require people) can meaningfully "claim" things. LLMs are still not people, despite our persistent attempts to personify them.
This is merely a sexier headline than "Hallucination machine hallucinates." And even that word personifies a bit too much!
But this is different. The subtext of the headline is clearly "Facebook's dumb chatbot had a very dumb glitch." I believe laypeople would immediately understand the AI is just plagiarizing a Facebook mom. Policing the language here seems more about pedantry than correcting actual misconceptions.
(You might say that some people would read this headline and jump to a Her fantasy, where Meta's poor AI is desperate for human connection or whatever. But these people are not going to be swayed by technical accuracy. They will just interpret language like yours as euphemism and denial.)
Feel free to blame me, if it helps. I've got broad shoulders.
However the fact that we're (collectively) losing the mass "mindshare battle" doesn't imply bad faith. Some of us are still fighting the good fight, and I don't see a problem with that.
Personally I think this only means we should fight harder against these dangerous beliefs, not throw in the towel (or worse, friendly fire against fellow educators).
And yes, it's human-side beliefs that are dangerous, not the tech itself. If an LLM "suggests" to kill <group of people> and we know what an LLM really is, then it's harmless. However if a large fraction believe an LLM is some infallible AI oracle or genie (a surprisingly common belief), then this "suggestion" could cause catastrophic harm.
I heard so much about the Kevin Roose stuff - is there a breakdown somewhere of what actually happened?
From the way that podcast presented it, Microsoft had the Bing bot untethered in a way that it kept taking in more and more context and was just taking it in as-is.
This goes against my current, admittedly very simple, understanding of how LLMs/GPT work.
What actually happened there?
It's definitely a vivid example of Meta being irresponsible with the tech today and of what we can expect a lot of the internet to be polluted with in the future.
At the end of the day, it's turning a mathematical crank. LLMs have no more intentionality than a jack-in-the-box.
The way information about AI has been presented and sold to pump up stock prices is to the detriment of public opinion. Curious to see how things go.