I think it's worth stepping back here and re-examining the hurdles you're setting for your own understanding.
The essential question here is: is ChatGPT useful to people? What you seem to be implying with your question is: I will not use ChatGPT unless it can solve problems of X level of difficulty for me. Why have you set that prerequisite? Would it not still be useful to you if it simply increased your efficiency at simple day-to-day tasks that you're not blocked on?
I'm still waiting for concrete examples that back up the above commentary about how much of a game-changer it is.
> I've essentially got 15-20 high-priced world-class consultants in every field that I chose to pull from, working at my beck and call
In other words, this comment from above needs backing up. I've seen such descriptions everywhere but no one has ever provided examples that corroborate this.
> > I've essentially got 15-20 high-priced world-class consultants in every field that I chose to pull from, working at my beck and call
I'll give you that: many of the claims are hyperbolic & ridiculous, including that particular one by the gp. The only way I can think of that statement being remotely true is if "high-priced" has no bearing on the quality of work of those consultants (though tbh that is often the case in reality).
Personally I have not found ChatGPT particularly good at anything I need to do. I have, however, found it very passable at many things I don't enjoy doing. Its reporting boilerplate is far more tailored & novel than anything I could ever put into a reusable reporting template. I dread writing reports: it's not a technical challenge for me, but I gain no stimulus from it, so reducing the task to heavily editing something ChatGPT prepared for me is frankly incredible. The same goes for formal cold-call emails to people I don't know: another menial, unpleasant but necessary task.
It's also great at spitballing and, as others have mentioned, at quick shallow topic summaries of the kind you'd normally get from a quick google & scan (timely, as Google gets worse and worse).
These are microgains throughout your day, nothing truly revolutionary, but that's true of much of the successful tech we take for granted.
I wanted to use my GPX files to generate my own private hiking/running log.
With basically zero specific knowledge of GPX files, D3.js, or Mapbox before starting, I was able to quickly write a plugin that displays a path on a Mapbox map and generates an elevation profile.
Doing this the old way, by googling and reading documentation through trial and error, would have taken so long that I would have abandoned the project.
Now I have a working system.
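The data side of a plugin like that mostly comes down to pulling trackpoints out of the GPX XML and accumulating (distance, elevation) pairs to feed the profile chart. The original plugin was written against D3.js and Mapbox; this is only an illustrative Python sketch of that extraction step, with function names of my own choosing:

```python
import math
import xml.etree.ElementTree as ET

# GPX 1.1 files put all elements in this XML namespace.
GPX_NS = "{http://www.topografix.com/GPX/1/1}"

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def elevation_profile(gpx_xml):
    """Return [(cumulative_distance_m, elevation_m), ...] from a GPX string."""
    root = ET.fromstring(gpx_xml)
    profile, dist, prev = [], 0.0, None
    for pt in root.iter(f"{GPX_NS}trkpt"):
        lat, lon = float(pt.get("lat")), float(pt.get("lon"))
        ele = float(pt.find(f"{GPX_NS}ele").text)
        if prev is not None:
            dist += haversine_m(prev[0], prev[1], lat, lon)
        profile.append((dist, ele))
        prev = (lat, lon)
    return profile
```

The resulting list of pairs is exactly the shape a line-chart library (D3 or otherwise) wants for an elevation profile.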
For me, personally, it has replaced Google Search for the type of query that looks like initial research on a specific subject. "What is X?" then ask a couple follow up questions and I can quickly get a rough idea of some subject that I happen to need at the moment.
Then I go on HN and read these paeans from other technologists who say ChatGPT has completely changed how they work and is 10x better than using Google. I'd like to have that too! I just don't get it, it doesn't match my experience at all. And yes I did try the paid ChatGPT model for a month.
This is exactly my experience and what I am trying to get at. I start feeling that maybe I'm missing out, so I go give it a try. My direct experience shows that it spits out gibberish, but then people say things like it's a godsend without giving any examples, beyond maybe that it generated some HTTP API requests for them.
The data analysis mode in ChatGPT Plus gives GPT-4 access to a Jupyter back-end.
I was dealing with a noisy sensor in a factory production line.
I asked GPT to remind me which averaging functions might be usable to smooth out the noise, within the constraint of a PLC with limited memory.
I got GPT to simulate the output from the noisy sensor, apply each averaging function, and supply graphs of inputs and outputs. It was then pretty easy to eyeball which function was most suitable for the task.
I then gave GPT a code-style example and asked it to provide an (IEC 61131-3) Structured Text implementation using that code style. Which it did. This turned out to be pretty close to the final implementation (after careful reading and testing.)
Because it's so cheap to do (time- and money-wise), I used GPT to generate quite a lot of throwaway code on the way to the final result. I probably wouldn't have considered this particular approach if doing it by hand.