I'm surprised they didn't make the point that is especially painful for open-source projects: AI might reduce coding effort, but it increases review effort, and maintainers generally spend the majority of their time on review anyway. Making it easier to generate bad pull requests is seen as well-poisoning.
Dumb or evil, either way they are not people to be celebrated.
>FOSS projects like Asahi Linux cannot afford costly intellectual property lawsuits in US courts
seems quite practical and non-Luddite.
Second, that page is obviously meant to shed light on AI issues from a lot of different viewpoints; it would have been a serious omission not to mention environmental concerns.
This is massively overstated. We ought to be more careful in performing such calculations.
But the huge amount of fresh water going to waste for cooling makes me very uncomfortable. In an ideal world it would go the other way, with the heat from the DC used to desalinate salt water.
why is that, may I ask? Always interested to learn about alternative viewpoints.
how so?
The impact of increased energy consumption from non-zero-emission sources on global warming is a highly practical matter based on established science.
AI is causing, and will continue to cause, a significant increase in energy consumption (still largely powered by fossil fuels) at a time when it's well established that we're supposed to be reducing emissions (Paris Accord, etc.)
Using that term throughout the article makes it hard to take seriously. I know nothing of this project, but right off the bat it seems to have little credibility just because of the tone used throughout.
There's no need to turn it into a full-on tirade against this set of technologies either. Is this an appropriate place to complain about Reddit comments?
Ironically, the author could well benefit from running this slop through an LLM to make it more professional.
Personally I think the term is well deserved and am glad it continues to be popularised.
_____________
As for the individual points:
The initial concerns about copyright are convincing.
The point about resource impact ending with "these resources would literally be better spent anywhere else" devolved into meaningless grandstanding. I wouldn't mind seeing a project take a stand because of environmental impact, but again it just ends up sounding like the author has a bone to pick rather than a genuine concern about the environment. If that's not the case, then that's a prime example of why tone matters in communication.
The Reddit comment paragraph where the author berates users for using LLMs on social media is just odd and out of place. Maybe better suited to the off-topic section of their community forum/discord.
And the last point I simply disagree with. Highly knowledgeable people in fields that require precision use LLMs every day. It's a tool like any other. I use it in financial trading (e.g. it's great for scanning reams of SEC filings and earnings-report transcripts), I know others who use it successfully in trading, and I know firms like Jane Street have it deeply integrated into their process.
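To make the filings use case concrete: a common pattern is to pre-filter a long document into relevant chunks before sending anything to an LLM, which keeps costs down. This is a minimal sketch with hypothetical helper names (`chunk_filing`, `relevant_chunks` are illustrative, not from any library), not the commenter's actual setup:

```python
def chunk_filing(text, max_chars=2000):
    """Split filing text into paragraph-aligned chunks of roughly max_chars."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            chunks.append(current)
            current = ""
        current += para + "\n\n"
    if current:
        chunks.append(current)
    return chunks

def relevant_chunks(chunks, keywords=("risk", "guidance", "litigation")):
    """Cheap keyword pre-filter: keep only chunks worth sending to an LLM."""
    return [c for c in chunks if any(k in c.lower() for k in keywords)]

# Toy filing with three paragraphs; only the middle one mentions a keyword.
filing = "Revenue grew.\n\nLitigation risk remains.\n\nDividend unchanged."
hits = relevant_chunks(chunk_filing(filing, max_chars=20))
print(len(hits))  # → 1
```

Each surviving chunk would then be passed to the model with a targeted question, rather than paying to process the entire filing.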
It's an impressive one, to say the least. It's worth taking a closer look and weighing the excellence created by the human mind before completely dismissing the article's arguments.
> Ironically, the author could well benefit from running this slop through an llm to make it more professional.
True, that would effectively strip out all the heart and soul from the prose.
Have you considered that it is not the intent of the author to appear professional? That running it through the Slop Generator would obfuscate their intent to be snarky towards those who outsource all their thinking to Slop Generators?
They have no obligation to sound professional but if they intended to appear incredibly childish then they succeeded, and people will judge them accordingly.
https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...
Reading the first two vulnerability reports makes it very clear.
The author's lack of professionalism is a reasonable counter to the completely unhinged mainstream takes on AI/LLMs that we hear daily.
I think the Reddit example provides useful, generally relatable context that would otherwise be missing for the average reader.
My opinions are not to detract from the use of the tech or engineers working in the space, but motivated by a disgust for the hype.
Marketing is completely ridiculous when it comes to this topic, but when isn't it with the next shiny thing? They even extolled the life-changing virtues of 3D TVs for one of the cycles.
I honestly hear far more unhinged AI-doomer stuff and constant pessimism (which makes me sort of sad; after all, it's cool tech that will do a lot of new things) than AI sycophantism. Do you not? If so, where? Granted, this is a US perspective, where there is currently a deep-seated pessimistic undercurrent about just about everything.
Call it slop all you want, doesn't change the unreasonable effectiveness that some individuals seem to have with such systems.
A problem with the slop coding movement is that they are happy living with wishful thinking.
On the rest, especially the confidently-incorrect argument... not so much.
Firstly, models are stochastic parrots, but that truth is irrelevant because they're useful stochastic parrots.
Second, hallucinations and confidently incorrect outputs may yet be a thing of the past, so we should keep an open mind. It's possible that mechanistic interpretability research (a fancy term for "understanding what the model is thinking as it produces your response") will allow the UI to warn the user that uncertainty was detected in the model's response.
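One plausible signal such a UI could surface is the model's own per-token uncertainty: when the next-token probability distribution is spread out rather than peaked, the model is less sure. This is a toy sketch of that idea using Shannon entropy (the function names and the threshold are my own illustrative choices, not any real product's API):

```python
import math

def token_entropy(probs):
    """Shannon entropy (bits) of one next-token probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def flag_uncertain(token_dists, threshold=1.0):
    """Indices of tokens whose distribution entropy exceeds the threshold."""
    return [i for i, dist in enumerate(token_dists)
            if token_entropy(dist) > threshold]

# Hypothetical per-token distributions from a model:
dists = [
    [0.97, 0.01, 0.01, 0.01],  # peaked: model is confident (~0.24 bits)
    [0.3, 0.3, 0.2, 0.2],      # spread out: model is uncertain (~1.97 bits)
]
print(flag_uncertain(dists))  # → [1]
```

A UI could highlight the flagged spans so the reader knows where to double-check. High entropy doesn't prove a hallucination, of course; it's just one cheap proxy that research in this area builds on.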
Unfortunately none of that matters because the IP point is a blocker. Bummer.
Yes, the article might have some wording issues, but for an operating-system project, one that inherently needs good security, to disallow AI-written code and instead opt for “think before you write and fully understand what you are doing” doesn't strike me as an invalid choice.
I wouldn't want to get on a plane where half the core systems were written by a hallucinating AI.
The introduction where they claim LLMs are useless for software engineering is just incorrect. They are useful for many software engineering tasks. I do think that vibe coding is rubbish however, and more junior SWEs very regularly misuse LLMs to produce nonsense code.
The only substantive point is that an LLM may regurgitate pieces of proprietary training data, although it seems unlikely that such pieces would be incorporated wholesale into the codebase in a way that matters or opens the project up to liability.
I do question whether LLMs would even be useful for such a niche project -- but I think this should be left to developers to figure out how the tools complement their workflows, rather than ruling out all uses of LLMs.
EDIT: I want to point out that I think the Asahi Linux project is a jewel of engineering and is extremely impressive.
A small disclaimer: I am not an AI booster. I think LLMs do have issues, and one should be careful with them.
I've found that there's a large group of people who strongly dislike LLMs and claim they are totally useless or even pernicious. That view is grounded in truth -- they are trained on copyrighted work, they are used for spreading misinformation, and they can produce sh^t code. But some folks take a radical, extremist approach and dismiss them entirely as useless -- often without actually using LLM-powered tools in any meaningful way.
They are useless for many applications, but programming is not one of them. I think a blanket ban on LLMs is somewhat unfounded/radical because they do have practical applications in writing code. Tab-completion models, for example, are extremely useful.
For this more niche project I would think LLMs might not be as useful as they are for other projects. This said, I still think there would be a variety of use-cases where they could be helpful.
And yet, weather prediction works. Therefore, LLMs work?
What shocks me most is that we have found something less useful than bitcoin mining. Remember all the articles about the environmental impact of bitcoin? That is peanuts compared to what the world's largest companies are building to power the next LLM.