What natural language processing does is give us a much smarter (and, in many ways, dumber) parser: one that can attempt to infer intent and can be instructed how to recover from mistakes.
Personally I'm a skeptic, since I've seen some hilariously bad hallucinations in generated code. And unlike a human engineer, who will say "idk but I think this might work", the model will say "yessir, this is the solution!" If you have to double-check every output manually, it's not that much better than learning the material yourself. However, at least with programming tasks, LLMs are fantastic at giving wrong answers with the right vocabulary, which makes it possible to verify and find a solution through authoritative sources and references instead of blindly analyzing a problem or paying a human a lot of money to answer your query.
For example, I don't use LLMs to give me answers; I use them to explore a design space, particularly by getting the vocabulary to ask better questions. And that's the real value of a conversational model today.