It has way more 'general inherent knowledge' than any human, just as a starting point.
What they do is find all the solutions someone else has already produced and mix and match them in a mediocre way, closer to a search engine with recombination than to thinking outside the box or reasoning specifically about your situation. The latter is hard for them anyway, because there will always be some detail missing from the context, and if you really had to dump all that context from your brain every time, the tool would no longer be fast to use. Humans do that part infinitely better. At least nowadays.
Now you will tell me that the information is there, so you can bias LLMs to think in more (or less) disruptive ways.
So now your job is to tweak the LLM until it behaves exactly how you want. But that is nearly impossible to do for every situation, because what you actually want is for it to behave differently depending on the context, not in one predefined way all the time.
At that point I wonder whether it is better to burn all your time tweaking and cross-checking alternative LLMs, whose answers are not guaranteed to be reliable anyway, or to just keep learning the domain yourself, absorbing real knowledge instead of playing at tweaking (and not losing that knowledge by outsourcing it to machines). It is just stupid to burn several hours building an 'expert' whose output you cannot verify instead of using that time to really learn about the problem itself.
This is a trade-off, and I think LLMs are good for stimulating human thinking quickly. But they are not better at thinking or reasoning or any of that. And if you just rely on them, the only thing you will end up being professional at is prompting, which a 16-year-old untrained person can do almost as well as any of us.
LLMs can look better if you have no idea about the topic you are discussing. However, when you go and check, maybe the LLM hallucinated 10 or 15% of what it said.
So you cannot rely on them anyway. I still use them, but with a lot of care.
Great for scaffolding. Bad at anything that deviates from the average task.
That's not quite how AI works.
Second - You'll have to provide some comparable reference for how 'humans' come up with creative solutions.
Remember - as a 'starting point', AI has 'all of human knowledge' ingested, accessible instantly. Everything except for a few contemporary events.
That's an interesting advantage.
I never, ever got from an LLM a solution that either I could never have thought of myself or that was not available almost verbatim on the internet (take this last one with a grain of salt, since we know how they can combine and fake it, but essentially they produce solutions that look like templates of existing things, often hallucinating features that do not exist or cannot be done, inventing parameter names for APIs that do not exist, etc.).
When I give a problem some extra thought (almost 20 years in the software business), the solutions I come up with are often simpler and less convoluted, while the LLMs hand you a lot of extra code that is not even needed, as if they were guessing even when you ask for something narrow. Well, guessing is what they are actually doing, via interpolation.
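To make the "extra code that is not even needed" point concrete, here is a toy illustration (the task and both versions are hypothetical examples I wrote for contrast, not actual LLM output): a verbose, LLM-flavored solution to "remove duplicates from a list while preserving order", next to the simpler one-liner a person familiar with the language would reach for.

```python
# Hypothetical verbose version: the kind of boilerplate-heavy code
# an LLM might generate for a simple deduplication task.
def dedupe_verbose(items):
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

# Simpler equivalent: dicts preserve insertion order in Python 3.7+,
# so dict.fromkeys does the whole job in one expression.
def dedupe_simple(items):
    return list(dict.fromkeys(items))

print(dedupe_verbose([1, 2, 1, 3, 2]))  # [1, 2, 3]
print(dedupe_simple([1, 2, 1, 3, 2]))   # [1, 2, 3]
```

Both are correct, but the second is the kind of compact, idiomatic answer that in my experience tends to come from knowing the domain rather than interpolating over templates.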
This makes them useful for "bulky", move-fast, first-approach problems, but the cost comes later and lands on you: maintenance, understanding, modification, and so on.