To your second point -- with as much capital as is going into data center buildout, the increasing availability of local coding LLMs that approach the performance of today's closed models, and the continued innovation on both open and closed models, you're going to hang your hat on the possibility that LLMs will only be available in a 'limited or degraded' state?
I think we simply don't have similar mental models for predicting the future.
We don't really know yet; that's my point. There are contradictory studies on the topic: see for instance [1], which finds a productivity decrease when AI is used, while other studies show the opposite. We are also seeing the first wave of blog posts from developers abandoning LLMs.
What's more, most people are not masters, and this is critically important. If only masters see a productivity increase, others should not use it... and they will still be employed, because the masters won't fill all the positions. In this hypothetical world, masters who don't use LLMs also have a place, by construction.
> With as much capital as is going into
Yes, we are in a bubble. And some are predicting it will burst.
> the continued innovation
That's what I'm not seeing. We are seeing small but very costly improvements on a paradigm that I consider fundamentally flawed for the tasks we are using it for. LLMs still cannot reason, and that's IMHO a major limitation.
> you're going to hang your hat on the possibility that LLMs are going to be only available in a 'limited or degraded' state?
I didn't say I was going to, but since you are asking: oh yes, I'm not putting my eggs in a basket that could abruptly disappear or become very costly.
I simply don't see how this thing is going to be cost efficient. The major SaaS LLM providers can't seem to reach profitability, and maybe at some point the investors will get bored and stop sending billions of dollars their way? I'll reconsider when and if LLMs become economically viable.
But that's not my strongest reason to avoid LLMs anyway:
- I don't want to increase my reliance on SaaS (or very costly hardware)
- I have not yet caved in to participating in this environmental disaster, or in this work-pillaging phenomenon (well, for that last part I guess I don't really have a choice: I can see the dumb AI bots hammering my forgejo instance).
[1] https://www.sciencedirect.com/science/article/pii/S016649722...
AI presently has a far smaller global footprint than the meat industry -- the US beef industry alone far outpaces the impact of AI.
As far as "work pillaging" goes -- there is cognitive dissonance in supporting freedom of information and cultural progress while simultaneously desiring to restrict a transformative use (as multiple US judges have deemed it) of that information.
We don't see eye to eye, and I think that's ok. We can re-evaluate our predictions in a year!
That's quite uncharitable.
I don't need to use it to make these points. While I might show a lack of perspective, I don't need to do X to reasonably think X can be bad. You can replace X with all sorts of horrible things; I'll let the readers' creativity fill in the gap.
> AI presently has a far lower footprint on the globe than [X]
We see the same kind of argument for planes, cars, anything with a big impact, really. AI still has a huge (and growing) environmental impact, and the question is: do the advantages outweigh the drawbacks?
For instance, if a video call tool allowed you to have a meeting without taking a plane, the tool had a positive impact. But then there's also the ripple effect: if, without the tool, the meeting wouldn't have happened at all, the positive impact is less clear. And if the meeting was about burning huge amounts of fuel, the positive impact is even less clear; likewise, LLMs might just allow us to produce attention-seeking, energy-greedy, shitty software at a faster speed (if they indeed work well in the long run).
And while I can see how things like ML can help (predicting weather, etc), I'm more skeptical about LLMs.
And I'm all for stopping the meat disaster as well.
> We don't see eye to eye, and I think that's ok. We can re-evaluate our predictions in a year!
Yep :-)
I don't know how else one gets information and forms an opinion on this technology except through media consumption or hands-on experience. Note that I count "social media" as media.
My proposition is that without hands-on experience, your information is limited to media narratives, and the "AI is net bad" narrative seems to be the source of your perspective.
Skepticism is warranted, and there are a million ways this technology could be built for terrible ends.
But I'm of the opinion that: A) the technology is not hype, and it is getting better; B) it can, and will, be built (time horizon debatable); and C) for it to result in good outcomes for humanity, it requires good people to help shape it in its formative years.
If anything, more people like you need to be engaging with it to have grounded perspectives on what it could become.
Beef has the benefit of having an end in sight, though. Populations are stabilizing, and people are only ever going to eat so much. Since methane has a 12-year atmospheric lifetime, in a stable environment today's methane emissions simply replace the emissions from 12 years ago. The carbon lifecycle of the animals themselves is neutral, so that part is immaterial. It is also easy to fix if we really have to go to extremes: cull all the cattle, and in 12 years it is all gone!
AI, by contrast, theoretically has no end to its emissions even once it stabilizes. Those emissions are essentially permanent, so even if you shut down all AI when forced to take extreme measures, the effects will remain "forever". There is always hope that we'll use technology to avoid that fate, but you know how that usually goes...
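To make that asymmetry concrete, here is a minimal back-of-envelope sketch in Python. The only figure taken from the argument above is the ~12-year methane lifetime; the constant 1-unit-per-year emission rate is made up for illustration, and treating CO2 as permanent is a simplification:

    # Toy model of the asymmetry described above.
    # Assumptions (illustrative only): ~12-year methane lifetime,
    # CO2 treated as effectively permanent, and a constant emission
    # rate of 1 unit/year for both gases.
    METHANE_LIFETIME_YEARS = 12
    EMISSION_PER_YEAR = 1.0  # hypothetical rate, same for both gases

    methane_stock = 0.0
    co2_stock = 0.0
    for year in range(1, 101):
        # Roughly 1/lifetime of the methane stock decays each year, so
        # under constant emissions it plateaus near rate * lifetime = 12.
        methane_stock += EMISSION_PER_YEAR - methane_stock / METHANE_LIFETIME_YEARS
        # CO2 has no comparable decay term: it just accumulates.
        co2_stock += EMISSION_PER_YEAR
        if year in (12, 50, 100):
            print(f"year {year:3d}: methane ~{methane_stock:4.1f}, CO2 {co2_stock:5.1f}")
    # Methane levels off around 12 units (new emissions replace decayed ones);
    # CO2 climbs linearly and persists even after emissions stop.

In this toy model, culling the cattle sends the methane line back toward zero within a lifetime or two, while shutting down the data centers only stops the CO2 line from climbing further.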
There's also a clear difference between users of this site who come here for all types of content and users who have "AI" in their usernames.
I think that the latter type might just have a bit of a bias in this matter?