How are you going to ship a tool you don't understand? What are you going to do when it breaks? How are you going to debug issues in a language you don't understand? How do you know the code the LLM generated is correct?
LLMs absolutely help me pick up new skills faster, but if you can't hold a discussion about Rust and Svelte, no, you didn't learn them. I'm making a lot of progress learning deep learning, and ChatGPT has been critical to that. But I still have to read books, research papers, and my framework's documentation, and it's still taking a long time. If I hadn't read the books, I wouldn't know what questions to ask or how to evaluate whether ChatGPT is completely off base (which happens all the time).