Not true. If human debate between two subject matter experts had this much room for error all the time, we would not have the progress of civilisation that we have now.
There is certainly room for error at the very frontier of knowledge, where really no one knows what is what. There, yes, people can be and have been blatantly wrong.
First off, why have you moved the goalposts, expecting LLMs to be not just at human level but at subject-matter-expert level?
And second, I would appreciate recommendations of good debates where both sides have a lot to offer and don't fall into errors; we do need more of those.
Because that's the basic assumption. I'd want a discussion of academic papers from subject matter experts; otherwise, reading a paper aloud alone is of no value, let alone commentary on it with no understanding of it whatsoever.
There's a spectrum of value here. I don't think we're at the SME level yet, but I'd say we're at least at the level of a presentation you'd get in an undergrad seminar module, and I already see significant value in that.