> When code production gets cheap, the cost doesn't disappear. It migrates.
> It was true then. It is unavoidably true now.
Edit: In my observation, opinions that disagree with you tend to get labeled "AI generated" more often than opinions that agree with you.
That's kind of similar to written content being posted and linked. There's an expectation that you're asking someone to take the time to read it, and with LLMs the cost of generating things to be read is now much lower, but our attention and capacity to read them remain the same.
One giant PR versus dozens of smaller ones: what's the difference? LLMs are going to send code your way whether you like it or not. No one is going to argue that using LLMs leads to less code needing review than before, are they? That's by design, since you're able to produce more code now, remember?
> There's an expectation that you're asking someone to take the time to read it, and with LLMs the cost of generating things to be read is now much lower, but our attention and capacity to read them remain the same.
I could understand this argument if a 500-word blog post had been expanded out to 50K words, but that's not the case here. And who's to say the author didn't write most of it and just have an LLM do a little polishing?
There are many apps with AI-generated ideas, specs, and functionality. Nobody uses them, because of the contempt.
In either case, the part that's user-facing is AI-resistant.
The user interacts with the code, and if it's sloppy AI-generated code, it's going to impact the user somehow, be it through poor performance, bugs, security holes, you name it.
Maybe I was naive in thinking the bar was higher than "as long as I can't tell an LLM wrote it that's good enough for me."