Another aspect: if someone takes a much simpler, more readable, and more concise version of the content and fluffs it up with an LLM, it can come across as rude and time-wasting, like articles that are artificially padded to maximize ad impressions.
Skimming it, I get that impression too. It's poorly organized and lacks the care a human writer would normally bring: it overuses phrasings like "X world" (16 times) and even uses "labyrinthine" 5 times.
For an article on an important topic like dependency risk, you want some deeper competency informing it, so the reader gets the most actionable advice with the least wasted time, or at least something that provokes high-quality discussion of the topic.
Whatever the case, generated or not, it's a bad article. There is room to sarcastically imitate LLM output for comic effect, but even if that were the intent, this article wouldn't be the place for it.