I'd say that most of my work use of ChatGPT does in fact save me time but, every so often, ChatGPT can still bullshit convincingly enough to waste an hour or two of my time.
The balance is still in its favour, but you have to keep your wits about you when using it.
PostgreSQL developers are opposed to query execution hints, because if a human knows a better way to execute a query, the devs want to put that knowledge into the planner.
> PostgreSQL developers are opposed to query execution hints, because if a human knows a better way to execute a query, the devs want to put that knowledge into the planner.
This thinking represents a fundamental misunderstanding of the nature of the problem (query plan optimization).
Query plan optimization is a combinatorial problem, compounded by partial information (e.g. about cardinality), that tends to produce worse results as complexity (and the search space) grows, because search time is limited.
Avoiding hints won't solve this problem because it's not a solvable problem any more than the traveling salesperson is a solvable problem.
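To give a rough sense of the blow-up: even just counting join orderings (ignoring access methods, join algorithms, and index choices entirely) grows factorially with the number of tables. A minimal sketch, using the standard combinatorial counts for left-deep and bushy join trees:

```python
from math import factorial

def left_deep_orders(n: int) -> int:
    # Each permutation of the n tables is a distinct left-deep join order.
    return factorial(n)

def bushy_plans(n: int) -> int:
    # Bushy plans: n! leaf orderings times the (n-1)-th Catalan number,
    # which counts the distinct binary tree shapes over n leaves.
    catalan = factorial(2 * (n - 1)) // (factorial(n - 1) * factorial(n))
    return factorial(n) * catalan

for n in (4, 8, 12):
    print(n, left_deep_orders(n), bushy_plans(n))
```

At 4 tables there are 24 left-deep orders; at 12 tables there are already nearly half a billion, before the planner has considered a single join algorithm or index. This is why planners rely on heuristics and dynamic programming with pruning rather than exhaustive search, and why no amount of planner improvement makes the problem go away.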
AI in general just needs a way to identify when it's about to "make a coin flip" on an answer. With humans, we can at least quickly preface our bullshit with a disclaimer.
As an experiment I asked it if it knew how to solve an arbitrary PDE and it said yes.
I then asked it if it could solve an arbitrary quintic and it said no.
So I guess it can say it doesn't know if it can prove to itself it doesn't know.