It takes far more time to explain, and then re-explain, and then re-re-re-explain to the LLM what I want the code to do. No, it isn't because I don't understand LLMs; it's because LLMs don't understand, period. Trying to coax a fancy word predictor into outputting the correct code can be extremely frustrating, especially when I already know how to write the code myself.