Yeah, and we have plenty of other general-purpose databases which can run laps around Postgres and SQLite.
There are plenty of optimisations Postgres and friends leave on the table. For example, the Postgres client driver could precompile queries and do query planning on behalf of the database. It would introduce some complexity, because the database would need to send information about the schema and layout at startup (and any time it changed). But that would remove a lot of unnecessary, repeated make-work from the database process itself. Application servers are much easier to scale than database servers, and application servers could much more easily cache the query plan.
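To make the idea concrete, here's a toy sketch of what a client-side plan cache keyed by schema version might look like. To be clear, nothing like this exists in the stock Postgres driver; all the names here are hypothetical, and the "planner" is just a stand-in for the parsing/rewriting/join-ordering work the server would otherwise repeat.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QueryPlan:
    sql: str
    schema_version: int  # a plan is only valid for the schema it was built against

class ClientPlanCache:
    """Hypothetical client-side plan cache, keyed by query text.

    In this sketch, the server sends its current schema version at
    connect time (and pushes a new one whenever the schema changes);
    any plan built against an older version is discarded and re-planned.
    """

    def __init__(self, schema_version: int):
        self.schema_version = schema_version
        self._plans: dict[str, QueryPlan] = {}
        self.plan_calls = 0  # counts how often we actually had to plan

    def _plan(self, sql: str) -> QueryPlan:
        # Stand-in for real planning work (parse, rewrite, choose joins...).
        self.plan_calls += 1
        return QueryPlan(sql=sql, schema_version=self.schema_version)

    def get_plan(self, sql: str) -> QueryPlan:
        plan = self._plans.get(sql)
        if plan is None or plan.schema_version != self.schema_version:
            plan = self._plan(sql)
            self._plans[sql] = plan
        return plan

    def on_schema_change(self, new_version: int) -> None:
        # Server told us the schema changed; all cached plans are stale.
        self.schema_version = new_version
        self._plans.clear()

# Repeated executions of the same query only pay the planning cost once,
# and a schema change invalidates the cache:
cache = ClientPlanCache(schema_version=1)
cache.get_plan("SELECT * FROM users WHERE id = $1")
cache.get_plan("SELECT * FROM users WHERE id = $1")  # cache hit, no re-plan
print(cache.plan_calls)  # 1
cache.on_schema_change(2)
cache.get_plan("SELECT * FROM users WHERE id = $1")  # stale, re-planned
print(cache.plan_calls)  # 2
```

The interesting part is the invalidation protocol: the server only has to broadcast a version bump, and every application server can keep its own cache warm independently, which is exactly the kind of work that scales horizontally.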
Weirdly, I think part of the problem is economic. There are a lot of very smart people around with the skills to dramatically improve the performance of databases like PostgreSQL. Across the whole economy, it would be way cheaper to pay them to optimize the code than to have companies everywhere rent and run additional database servers. But it might not be worth it for any individual company to fund the work themselves. Without coordination (i.e., open-source funding), there's not enough money to hire performance engineers, and the work just doesn't happen.