If you are going to potentially ruin the lives of 770 people, you should have a better reason than being afraid of a chatbot and a laundry buddy.
Not surprised Hacker News identifies and sympathizes more with the software folks than with everyone else who has to live with the systems they build/disrupt, though.
Yeah, this is the common trend among these insane utilitarians like you and Helen Toner. It's all a game of "how can I justify doing heinous shit to people". IF a superintelligent AGI is created that is as disruptive as you imagine, it also means we find cures for diseases and solve many other potentially world-ending problems (such as climate change).
So since you brought up the hypothetical disruption of a technology that doesn't exist, can I now lay at your feet the blame for the millions of lives that are lost due to disease and climate change since we didn't move fast enough on AGI and advancing technology?
This is why these absurd, hysterical fear-mongering arguments are worthless.
Where did I say anything about a "superintelligence"?
The extant statistical inference engines aren't going to build killer androids. No, I'm terrified of what profit-motivated humans are going to do to each other using machines designed to match patterns and generate convincing bullshit.
Think less "Terminator", more like Cambridge Analytica, Russian-style firehose-of-falsehood, 2018 YouTube algorithmic radicalization, 2014 Facebook "emotional contagion" experiment, 2023 TikTok censorship, doctored videos/evidence, crossing an inflection point in declining trust in democratic institutions, fully automated fraud and identity theft in an arms race with increasingly invasive countermeasures, scientific atrophy, artistic and cultural homogenization, etc.
FFS, SAG-AFTRA and the WGA just wrapped up an unprecedented strike motivated in large part by the broadly anticipated destructive impact of ML on their industry. You wanna talk about jobs? Wikipedia says that cost 45,000 of them, held by people who were in many cases barely getting by, while you whinge about 770 in-demand professionals who already have high-paying offers from at least two Fortune 500 companies. From what I've seen, the voice acting industry, although less able to advocate for itself, is in disarray too, already directly impacted by the spread of models trained without consent on actors' past work. Plus major publications like CNET have already been caught canning their staff and dishonestly publishing error-ridden drivel with a misguided faith in LLMs, Google search (and thus access to most textual information online) is already basically useless because of SEO blogspam, etc.
Framing climate change as "we didn't move fast enough on AGI" is… bizarre. Windmills ain't new tech, and neither are hydro dams, nuclear reactors, nor LRT. Framing disease as an "AGI" problem is… delusional, as well. Some academics may certainly find domain-specific ML models useful, but were I a bit more vindictive, I'd pay good money to watch you try to get ChatGPT to fold a novel protein correctly.
> It's all a game of "how can I justify doing heinous shit to people".