If I could logically prove that I feel empathy, I would be much more famous.
I get your nuanced point that “thinking” one feels empathy is enough to be bound by the norms of behavior that empathy would dictate, but I don’t see why that would make AI “empathy” superior to human “empathy.”
The immediate future I see is a chatbot that is superficially very empathetic but programmed never to go against its owner’s interests. Where a human’s empathy might move them to make an exception and act sacrificially in a crisis, this chatbot could never make such an exception, because the empathy it displays is only a surface performance.