I see this conversation pretty frequently, and I think the root of it is that we have mental heuristics for deciding whether we need to fact-check another human because they're a bullshitter, an idiot, a charlatan, etc., but most people haven't really developed that sense for AIs.
I think the current state of AI trustworthiness ("very impressive and often accurate, but occasionally extremely wrong") triggers mental pathways similar to meeting a true sociopath or pathological liar for the first time in real life. That experience can be intensely disorienting, and as people try to comprehend this type of person, it can make them question their trust in everyone else too.