They store up saved programming time and then spend it all at once when you hit the edge case.
If you never hit the case, it's great. As soon as you do, it's all returned with interest :)
But it's only really efficient if the code runs right next to the data with fast access - ideally on the same machine. The moment the DB runs on separate hardware, or far away from the client, every query pays a round-trip: a loop issuing a thousand queries over a 1 ms link spends a full second on the network alone.
SQL is a very compact way to communicate what you want from a complex database, in a way that can be statically analyzed and dynamically optimized. It's also sandboxable. Not so easy to replace.
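For a feel of the compactness, a single hypothetical statement like this (customers/orders is a made-up schema) replaces a hand-written join-and-aggregate loop, and the planner is free to pick indexes and join order:

    SELECT c.name, SUM(o.total) AS lifetime_value
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    WHERE o.placed_at >= DATE '2024-01-01'
    GROUP BY c.name
    HAVING SUM(o.total) > 1000
    ORDER BY lifetime_value DESC;

The same intent in application code means shipping rows over the wire and looping; here the database gets the whole question at once and can optimize it.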
Oracle is a prime example of this: according to Oracle's documentation, stored procedures are the place to put all your business logic.
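A minimal PL/SQL sketch of that style, with a hypothetical orders table and a made-up business rule:

    CREATE OR REPLACE PROCEDURE apply_discount (
        p_order_id IN NUMBER,
        p_percent  IN NUMBER
    ) AS
    BEGIN
        -- Business rule lives next to the data: cap discounts at 50%.
        IF p_percent > 50 THEN
            RAISE_APPLICATION_ERROR(-20001, 'discount above 50% not allowed');
        END IF;
        UPDATE orders
           SET total = total * (1 - p_percent / 100)
         WHERE id = p_order_id;
    END apply_discount;
    /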
That advice caused a backlash from developers trying to escape vendor lock-in, who declared that business logic should never live inside the database.
There's no ideal solution, just tradeoffs.
I mean, that already happens. It's quite rare to see someone migrate from one database to another. Even if they stuck to pure SQL for everything, it's still a pretty daunting process, since the SQL dialects of PostgreSQL and MSSQL aren't the same thing.
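Even something as mundane as pagination diverges (orders is a hypothetical table):

    -- PostgreSQL: rows 21-30
    SELECT * FROM orders ORDER BY id LIMIT 10 OFFSET 20;

    -- MSSQL (T-SQL): the same page
    SELECT * FROM orders ORDER BY id OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;

Multiply that by date functions, upsert syntax, and the procedural extensions (PL/pgSQL vs T-SQL), and the migration stops being a find-and-replace job.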
Absolutely not.
That which is asserted without evidence can be dismissed without evidence.