Often I find myself cursing at the LLM for not understanding what I mean - which is expensive in both lost time and token cost.
It is easy to say: then just don't use LLMs. But in practice it is not so easy to break out of these loops of re-explaining, and it is extremely hard to judge in advance when an LLM won't be able to finish the task.
I also find that LLMs consistently fail to follow guidelines, e.g. never use type coercions in TypeScript (a rogue `as` always sneaks in somewhere) - so I can't trust the output and need to be extra vigilant when reviewing.
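To illustrate why a rogue `as` is dangerous: the coercion silences the compiler without checking anything at runtime. A hypothetical sketch (the `User` shape and `isUser` guard are made up for illustration) contrasting the coercion with a runtime type guard:

```typescript
interface User {
  id: number;
  name: string;
}

// Unsafe: `as` tells the compiler to trust us. The parsed object is
// missing `name`, but TypeScript accepts it and the bug surfaces later.
const unsafe = JSON.parse('{"id": 1}') as User;

// Safer: a type guard that actually validates the shape at runtime.
function isUser(value: unknown): value is User {
  if (typeof value === "object" && value !== null) {
    const v = value as Record<string, unknown>; // narrowing helper only
    return typeof v.id === "number" && typeof v.name === "string";
  }
  return false;
}

const parsed: unknown = JSON.parse('{"id": 1}');
console.log(isUser(parsed)); // false - missing `name` is caught, not coerced
```

The guard costs a few extra lines, which is presumably why an LLM optimizing for short-looking output tends to reach for `as` instead.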
I use LLMs for what they are good at: sketching up a page in React/Tailwind, sketching up a small test suite - anything that can be framed as a translation task.
I don't use LLMs for reasoning-heavy tasks: data modelling, architecture, large complex refactors - things that require deep domain knowledge and sustained reasoning.