We were contributing to natural resource destruction in exchange for salary and GDP growth before GenAI, and we’re doing the same after. The idea that this has somehow 10x’d resource consumption or emissions or anything is incorrect. Every single work trip that requires you to get on a plane is many orders of magnitude more harmful.
We’ve been compromising on those morals for our whole career. The needle moved just a little bit, and suddenly everyone’s harm thresholds have been crossed?
They expect you to use GenAI just like they expected accountants to learn Excel when it came out. This is the job, it has always been the job.
I’m not an AI apologist. I avoid it for many things. I just find this sudden moral outrage by tech workers to be quite intellectually lazy and revisionist about what it is we were all doing just a few years ago.
Are you not reading the writing on the wall? These things have been going on for a long time, and finally people are starting to wake up to the fact that it needs to stop. You can't treat people in inhumane ways without eventual backlash.
1. Many tech workers viewed the software they worked on in the past as useful in some way for society, and thus worth the many costs you outline. Many of them don't feel that LLMs deliver the same amount of utility, and so they feel it isn't worth the cost. Not to mention, previous technologies usually didn't involve training a robot on all of humanity's work without consent.
2. I'm not sure the premise that it's just another tool of the trade for one to learn is shared by others. One can alternatively view LLMs as automated factory lines are viewed in relation to manual laborers, not as Excel sheets were to paper tables. This is a different kind of relationship, one that suggests wide replacement rather than augmentation (with relatively stable hiring counts).
In particular, I think (2) is actually the stronger of the reasons tech workers react negatively. Whether it will ultimately be justified or not, if you believe you are being asked to effectively replace yourself, you shouldn't be happy about it. Artisanal craftsmen weren't typically the ones also building the automated factory lines that would come to replace them (at least to my knowledge).
I agree that no one really has the right to act morally superior in this context, but we should also acknowledge that the material circumstances, consequences, and effects are in fact different in this case. Flattening everything into an equivalence is just as intellectually sloppy as pretending everything is completely novel.
I can understand someone telling me I'm an old man shouting at clouds if (2) works out.
But at least (2) is about a machine saving someone's time (we don't know at what cost, and for whose benefit).
My biggest problem with LLMs (and the email Rob got is an example) is when they waste people's time.
Like maintainers getting shit vibe-coded PRs to review, and when we react badly, “oh you're one of those old schoolers who have a policy against AI.”
No kid, I don't have an AI policy, just as I don't have an IDE policy. Use whatever the hell you want – just spare me the slop.
Ah yes, crypto, Facebook, privacy destruction, etc. Indeed, they made the world such a nice place!
As for what your individual prompts contribute, it is impossible to get good numbers, and it will obviously vary wildly between types of prompts, choice of model, and number of prompts. But I am fairly certain that someone whose job is prompting all day will generally emit several plane trips' worth of CO₂.
Now, if this new tool allowed us to do amazing new things, there might be a reasonable argument that it is worth some CO₂. But when you are a programmer and management demands AI use so that you end up doing a worse job, while having worse job satisfaction, and spending extra resources, it is just a Kinder egg of bad.
[1] https://ourworldindata.org/grapher/annual-co-emissions-from-... [2] https://en.wikipedia.org/wiki/Gas-fired_power_plant [3] https://www.datacenterdynamics.com/en/news/anthropic-us-ai-n...
I don't know about the gigawatts needed for future training, but this comparison of prompts with plane trips looks wrong. Even making a prompt every second for 24 hours amounts to only about 2.6 kg of CO₂ on an average Google LLM evaluated here [1]. Meanwhile, typical flight emissions are 250 kg per passenger per hour [2]. So it would take parallelization across 100 or so agents, each prompting once a second, to match that, which is quite a serious scale.
[1] https://cloud.google.com/blog/products/infrastructure/measur...
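The comparison above can be sanity-checked with some back-of-the-envelope arithmetic. This sketch assumes the figures the comment cites (roughly 0.03 g CO₂e per median prompt, implied by the 2.6 kg/day number, and 250 kg CO₂e per passenger per flight hour); both are rough published estimates, not measurements:

```python
# Back-of-the-envelope check of the prompt-vs-flight comparison.
# Assumptions (from the figures cited above, not measured here):
#   ~0.03 g CO2e per median prompt
#   ~250 kg CO2e per passenger per flight hour

G_PER_PROMPT = 0.03          # g CO2e per prompt (assumed median)
PROMPTS_PER_DAY = 24 * 3600  # one prompt every second, all day

daily_kg = G_PER_PROMPT * PROMPTS_PER_DAY / 1000
print(f"1 prompt/s for 24h: {daily_kg:.1f} kg CO2e")  # ~2.6 kg

FLIGHT_KG_PER_HOUR = 250     # kg CO2e per passenger-hour of flying
agents = FLIGHT_KG_PER_HOUR / daily_kg
print(f"Parallel agents needed to match one flight-hour per day: {agents:.0f}")  # ~96
```

Under those assumptions, a single person prompting once a second nonstop for a day emits about 2.6 kg, so it takes on the order of 100 such agents running in parallel to match a single passenger-hour of flying.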
Basic "ask a question" prompts indeed probably do not cost all that much, but they are also not particularly relevant in any heavy professional use.
https://nvidianews.nvidia.com/news/openai-and-nvidia-announc...
I'm fairly certain that your math on this is orders of magnitude off, unless you define "prompting all day" in a very non-standard way while not doing the same for plane trips, and that 99% of people who "prompt all day" don't even amount to 0.1 plane trips per year.
No, this is not the same.
[0]: https://www.tomshardware.com/tech-industry/artificial-intell...
Those are all real things happening. Not at all comparable to Muskian Vaporware.
Yes!
> The needle moved just a little bit
That's where we disagree.
(Not a tech worker, don't have a horse in this race)
It's similar to the Trust Thermocline. There's always been concern about whether we were doing more harm than good (there's a reason jokes about the Torment Nexus were so popular in tech). But recent changes have made things seem more dire and broken through the Harm Thermocline, or whatever you want to call it.
Edit: There's also a "Trust Thermocline" element at play here too. We tech workers were never under the illusion that the people running our companies were good people, but there was always some sort of nod to greater responsibility beyond the bottom line. Then Trump got elected and there was a mad dash to kiss the ring. And it was done with an air of "Whew, now we don't even have to pretend anymore!" See Zuckerberg on the right-wing media circuit.

And those same CEOs started talking breathlessly about how soon they wouldn't have to pay us, because it's super unfair that they have to give employees competitive wages. There are degrees of evil, and the tech CEOs just ripped the mask right off. And then we turn around and a lot of our coworkers are going "FUCK YEAH!" at this whole scenario.

So yeah, while a lot of us had doubts before, we thought that maybe there was enough sense of responsibility to avoid the worst, but it turns out our profession really is excited for the Torment Nexus. The Trust Thermocline is broken.
Yes!
> it’s important for us to understand why we actually like or dislike something
Yes!
The primary reason we hate AI with a passion is that the companies behind it intentionally keep blurring the (now) super-sharp boundary between language use and thinking (and feeling). They actively exploit the -- natural, evolved -- inability of most people on Earth to distinguish language use from thinking and feeling. For the first time in the history of the human race, "talks entirely like a human" does not mean at all that it's a human. And instead of disabusing users of this -- natural, evolved, understandable -- mistake, these fucking companies double down on the delusion -- because it's addictive for users, and profitable for the companies.
The reason people feel icky about AI is that it talks like a human, but it's not human. No more explanation or rationalization is needed.
> so we can focus on any solutions
Sure; let's force all these companies by law to tune their models to sound distinctly non-human. Also enact strict laws that all AI-assisted output be conspicuously labeled as such. Do you think that will happen?
Maybe this will force humans to raise their game and start to exercise discernment. Maybe education will change to emphasize this more. The ability to discern sense from pleasing rhetoric has always been a problem. Every politician and advertiser takes advantage of it. Reams of philosophy have been written on this problem.
Nvidia to cut gaming GPU production by 30-40% starting ...
https://www.reddit.com/r/technology/comments/1poxtrj/nvidia_...
Micron ends Crucial consumer SSD and RAM line, shifts ...
https://www.reddit.com/r/Games/comments/1pdj4mh/micron_ends_...
OpenAI, Oracle, and SoftBank expand Stargate with five new AI data center sites
https://openai.com/index/five-new-stargate-sites/
> Every single work trip that requires you to get on a plane is many orders of magnitude more harmful.
I'm a software developer. I don't take planes for work.
> We’ve been compromising on those morals for our whole career.
So your logic seems to be, it's bad, don't do anything, just floor it?
> I’m not an AI apologist.
Really? Have you just never heard the term "wake up call?"
You are right, thus downvoted, but still I see current outcry as positive.
We tech workers have mostly been villains for a long time, and foot stomping about AI does not absolve us of all of the decades of complicity in each new wave of bullshit.
Or has the bar been lowered in such a way that makes different people regard it as unsavory in different ways that wouldn't happen if everyone was more rational across-the-board?
I have yet to meet a single tech worker who isn't so