So yes, they are proposing marking bad AI content (from the user's perspective), not all AI-generated content.
Matt also shared insights about the other signals we use for this evaluation here: https://news.ycombinator.com/item?id=45920720
And we are still exploring other factors:
1/ is the reported content AI-generated?
2/ is most content on that domain AI-generated (+ other domain-level signals) ==> we are here
3/ is it unreviewed? (no human accountability, no sources, ...)
4/ is it mindlessly produced? (objective errors, wrong information, poor judgement, ...)
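For what it's worth, the escalating checklist above could be sketched roughly like this. This is purely illustrative: the `Report` fields, the `should_penalize` name, and the 0.5 domain threshold are my own invention, not HN's actual logic or thresholds.

```python
from dataclasses import dataclass

@dataclass
class Report:
    is_ai_generated: bool         # 1/ the reported content itself looks AI-generated
    domain_ai_ratio: float        # 2/ share of the domain's content that looks AI-generated
    is_unreviewed: bool           # 3/ no human accountability, no sources
    is_mindlessly_produced: bool  # 4/ objective errors, wrong information, poor judgement

def should_penalize(r: Report, domain_threshold: float = 0.5) -> bool:
    """Flag only when the content-level and domain-level signals all agree.

    Hypothetical combination rule; the actual weighting of these
    signals is not described in the thread.
    """
    return (
        r.is_ai_generated
        and r.domain_ai_ratio >= domain_threshold
        and r.is_unreviewed
        and r.is_mindlessly_produced
    )
```

The point of requiring all four signals is that "AI-generated" alone isn't a penalty — only AI content that is also unreviewed and sloppy, on a domain full of the same, would get marked.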
I take it to mean they’re targeting that shit specifically and anything else that becomes similarly prevalent and a plague upon search results.