Right, I know the example is somewhat hyperbolic - but one could argue that it's quite possible to build a microservice architecture around ZeroMQ or RabbitMQ - and those systems should (in theory) let you max out modern hardware.
So given that you might not really need a microservice architecture for your regular "new thing" (i.e. it runs on a modern server with 128 threads and fits in half a terabyte of RAM, or some such "medium" size workload) - it's interesting to see how Dapr affects you if you do need to scale.
If you struggle with N-to-M messaging, you might not want to multiply every one of those N*M paths by extra sidecar hops - because that can make your solution harder, not easier, to scale.
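To make that concrete, here's a back-of-envelope sketch. All numbers are made-up assumptions for illustration, not measured Dapr figures - the point is just that putting a sidecar on each side of a brokered path roughly doubles the network hops per message:

```python
# Back-of-envelope: extra hops from per-service sidecars in N-to-M messaging.
# The hop counts below are illustrative assumptions, not Dapr measurements.

def hops_per_message(direct: bool) -> int:
    """Direct: producer -> broker -> consumer (2 hops).
    Sidecar: producer -> sidecar -> broker -> sidecar -> consumer (4 hops)."""
    return 2 if direct else 4

def total_hops(n_producers: int, m_consumers: int, msgs_each: int, direct: bool) -> int:
    # Assume each producer fans out every message to all M consumers.
    return n_producers * m_consumers * msgs_each * hops_per_message(direct)

direct = total_hops(10, 10, 1000, direct=True)    # 200_000 hops
sidecar = total_hops(10, 10, 1000, direct=False)  # 400_000 hops
print(direct, sidecar, sidecar / direct)          # prints: 200000 400000 2.0
```

Double the hops isn't fatal on its own, but it compounds with N and M - which is exactly the multiplication the paragraph above is worried about.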
That said, I think there's a valid case to be made that it can be worth suffering a microservice architecture and the possibly massive signaling overhead - not to "scale", but to gain resiliency and stability: failover, easier deployment of new code, etc.
Basically anything that can give you "works like big iron" on whatever scraps of commodity hardware you're able to rent.