Perhaps the biggest “needs citation” statement of our time.
Not in any weirdly-self-aggrandizing "our tech is so powerful that robots will take over" sense, just the depressingly regular one of "lots of people getting hurt by a short-term profitable product/process which was actually quite flawed."
P.S.: For example, imagine having applications for jobs and loans rejected because all the companies' internal LLM tooling is secretly racist against subtle grammar-traces in your writing or social-media profile. [0]
We don't have to imagine such things, really, as that's extremely common with humans. I would argue that fixing such flaws in LLMs is a lot easier than fixing it in humans.
I currently work in the HR-tech space, so suppose someone has a not-too-crazy proposal of using an LLM to reword cover letters to reduce potential bias in hiring. The issue is that the LLM will impart its own spin on things, even when a human would say two inputs are functionally identical. As a very hypothetical example, suppose one candidate always writes out the Latin, like Juris Doctor, instead of acronyms like JD, and that causes the model to land on "extremely qualified at" instead of "very qualified at".
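To make the scenario concrete, here's a toy sketch of how one might catch that kind of drift. Everything here is hypothetical: `reword()` is a stand-in for whatever rewriting model the pipeline would call (no real LLM is invoked), and it deliberately fakes the credential-spelling bias described above. The point is just that you can diff the rewrites of two inputs a human would call functionally identical:

```python
# Hypothetical sketch of a wording-drift check for an LLM rewriting pipeline.
# reword() is a fake stand-in for the model, hard-coded to exhibit the
# "Juris Doctor" vs. "JD" bias described above.
import difflib

def reword(cover_letter: str) -> str:
    # Stand-in for an LLM call: spelled-out credentials earn stronger praise.
    if "Juris Doctor" in cover_letter:
        return "Candidate is extremely qualified in contract law."
    return "Candidate is very qualified in contract law."

def drift(a: str, b: str) -> list[str]:
    """Return the word-level differences between two rewrites."""
    diff = difflib.ndiff(a.split(), b.split())
    return [d for d in diff if d.startswith(("-", "+"))]

letter_a = "I hold a Juris Doctor and practiced contract law."
letter_b = "I hold a JD and practiced contract law."

print(drift(reword(letter_a), reword(letter_b)))
# Functionally identical inputs, yet the rewrites differ:
# ['- extremely', '+ very']
```

In a real audit you'd run many such near-identical input pairs through the actual model and flag any pair whose rewrites diverge in evaluative wording.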
The issue of deliberate attempts to corrupt the LLM with prompt injection or poisoned training data is a whole 'nother can of minefield whack-a-moles. (OK, yeah, too far there.)
To continue one of the analogies: Plenty of people and industries legitimately benefited from the safety and cost-savings of asbestos insulation too, at least in the short run. Even today there are cases where one could argue it's still the best material for the job--if constructed and handled correctly. (Ditto for ozone-destroying chlorofluorocarbons.)
However, over the decades its production grew and it came to be overused and misused in a great many ways, including--very ironically--in respirators and masks that the user would put on their face and breathe through.
I'm not arguing LLMs have no reasonable uses, but rather that there are a lot of very tempting ways for institutions to slot them in which will cause chronic and subtle problems, especially when they are being marketed as a panacea.
We have a term for that: "Luddite". The Luddites were English weavers who would break into textile factories and destroy weaving machines at the beginning of the 1800s. With extremely rare exceptions, all cloth is woven by machines now. The only handmade textiles in modern society are exceptionally fancy rugs and knit scarves from grandma. All the clothing you're wearing now was woven by a machine, and nobody gives this a second thought today.
The Luddites were actually a fascinating group! It is a common misconception that they were against technology itself; in fact, your own link does not say as much, and the idea of "Luddite" meaning anti-technology only appears in the description of the modern usage of the word.
Here is a quote from the Smithsonian[1] on them:
>Despite their modern reputation, the original Luddites were neither opposed to technology nor inept at using it. Many were highly skilled machine operators in the textile industry. Nor was the technology they attacked particularly new. Moreover, the idea of smashing machines as a form of industrial protest did not begin or end with them.
I would also recommend the book Blood in the Machine[2] by Brian Merchant for an exploration of how understanding the Luddites now can be of present value
1 https://www.smithsonianmag.com/history/what-the-luddites-rea...
2 https://www.goodreads.com/book/show/59801798-blood-in-the-ma...
They had very rational reasons for trying to slow the introduction of a technology that was, during a period of economic downturn, destroying a source of income for huge swathes of working class people, leaving many of them in abject poverty. The beneficiaries of the technological change were primarily the holders of capital, with society at large getting some small benefit from cheaper textiles and the working classes experiencing a net loss.
If the impact of LLMs reaches a similar scale relative to today's economy, then it would be reasonable to expect similar patterns: unrest from those who find themselves unable to eat during the transition to the new technology, who ultimately lose the battle while more profit flows toward those holding the capital.
No, that's apples-to-oranges. The goals and complaints of Luddites largely concerned "who profits", the use of bargaining power (sometimes illicit), and economic arrangements in general.
They were not opposing the mechanization by claiming that machines were defective or were creating textiles which had inherent risks to the wearers.
Maybe it would have been better for humanity if the Luddites won.
https://en.wikipedia.org/wiki/I%27m_alright,_Jack
Except, we are all Jack.
> "our tech is so powerful that robots will take over"
> "lots of people getting hurt by a short-term profitable product/process which was actually quite flawed."
Your response assumes the former, but it's my understanding that the Luddites' actual position was the latter.
> Luddites objected primarily to the rising popularity of automated textile equipment, threatening the jobs and livelihoods of skilled workers as this technology allowed them to be replaced by cheaper and less skilled workers.
In this sense, "Luddite" feels quite accurate today.
Some not-problems, presented as though they are:
"How can we prevent the untimely eradication of Polio?"
"How can we prevent bot network operators from being unfairly excluded from online political discussions?"
"How can we enable context-and-content-unaware text generation mechanisms to propagate throughout society?"
For example, MKUltra tried to solve a problem: "How can I manipulate my fellow man?" That problem still exists today, and you bet AI is being employed to try to solve it.
History is littered with problems such as these.
Yes, we are clearly talking about things that are mostly still to come here. But if you assign a 0 until it's a 1, you are just signing out of advancing anything that's remotely interesting.
If you are able to see a path to 1 on AI, at this point, then I don't know how you would justify not giving it our all. If you see a path, and in the end it takes all of human knowledge up to this point to make AI work for us, then we must do that. What could possibly be more beneficial to us?
This is regardless of all the issues that will have to be solved and the enormous amount of societal responsibility this puts on AI makers — which I, as a voter, will absolutely hold them accountable for (even though I am actually fairly optimistic that they all feel the responsibility and are somewhat spooked by it too).
But that does not mean I think it's responsible to try and stop them at this point — which the copyright debate absolutely does. It would simply shut down 95% of AI, tomorrow, without any other viable alternative around. I don't understand how that is a serious option for anyone who roots for us.
Firstly, *skeptics.
Secondly, being skeptical doesn't mean you have no optimism whatsoever, it's about hedging your optimism (or pessimism for that matter) based on what is understood, even about a not-fully-understood thing at the time you're being skeptical. You can be as optimistic as you want about getting data off of a hard drive that was melted in a fire, that doesn't mean you're going to do it. And a skeptic might rightfully point out that with the drive platters melted together, data recovery is pretty unlikely. Not impossible, but really unlikely.
Thirdly, it is highly optimistic to call OpenAI's efforts thus far a path to true AI. What are you basing that on? Because I have not a deep but a passing understanding of the underlying technology of LLMs, and even so, I do not see any path from ChatGPT to Skynet. None whatsoever. Does that mean LLMs are useless or bad? Of course not, and I sleep better too, knowing that an LLM is not AI and is therefore not an existential threat to humanity, no matter what Sam Altman wants to blither on about.
And fourthly, "wanting" to stop them isn't the issue. If they broke the law, they should be stopped, simple as. If you can't innovate without trampling the rights of others then your innovation has to take a back seat to the functioning of our society, tough shit.
I don't think the consumer LLMs that OpenAI is pioneering are what need optimism.
AlphaFold and other uses of the fundamental technology behind LLMs need hype.
Not OpenAI
I think you raise some interesting concerns in your last paragraph.
> enormous amount of societal responsibility this puts on AI makers — which I, as a voter, will absolutely hold them accountable for
I'm unsure of what mechanism voters have to hold private companies accountable. For example, when YouTube uses my location without my ever consenting to it, where is the vote to hold them accountable? Or when Facebook facilitates micro-targeting of disinformation, where is the vote? Same for anything AI. I believe any legislative proposal (with input from large companies) is far more likely to create a walled garden than to actually reduce harm.
I suppose there's no need to respond; my main point is that I don't think there is any accountability through the ballot when it comes to AI and most things high-tech.
Oh, the humanity! Who will write our third-rate erotica and Russian misinformation in a post-AI world?