When a human doctor prescribes the wrong medication, it's a mistake. One doesn't conclude the world would be better without human doctors, because human beings are capable of thought, memory, perception, and awareness, and when they don't make mistakes - which is most of the time - it's the result of training and talent.
Meanwhile, AIs don't possess anything akin to thought, memory, perception, or awareness. They simply link text tokens stochastically. When an AI makes a mistake, it's doing exactly what it's designed to do, because AIs have no concept of "reality" or "truth." Tell an AI to prescribe medication and it has no idea what "medication" is, or what a human is. When an AI doesn't make a mistake, it's entirely by coincidence. Yet humans are so hardwired with pareidolia and gaslit by years of science fiction that such a simple parlor trick leads people to want to trust their entire lives to these things.
>The fact is simple. Professional diagnosing is such a scarce resource that people buy over-the-counter drugs all the time. It's not AI vs doctors; it's AI vs no doctor.
That's not a fact, it's your opinion, and I'm assuming you've got some interest in a startup along these lines, because I honestly cannot fathom your rationale otherwise. You're either shockingly naive or you have a financial stake in putting poor people's lives in the hands of machines that can't even be trusted to count the fingers on a human hand.
I have no doubt the future you want is going to happen, and I have no doubt we're all going to regret it. At least I'm old enough that I'll probably be dead before the last real human doctor is put out to pasture.