I'll put it concisely:
Trying to build predictable results on top of unpredictable, not fully understood mechanisms is an extremely common practice in every single field.
But anyway, you think LLMs are just a coin toss, so I won't engage with this sub-thread anymore.
Nothing in the current AI world is as predictable as, say, the medicine you can buy or get prescribed. None of the shamanic "just one more prompt, bro" rituals has the predictive power of the laws of physics. Etc.
You could reflect on that.
> But anyway you think LLM is just coin toss
The person telling me to "try to read comments" couldn't read and understand mine.
Do you know there are approved drugs that were put on the market to treat one ailment and have since proven effective against another, or have been shown to have unwanted side effects, and have therefore been repositioned or withdrawn? The whole drug _market_ is full of them, and all that is needed is enough trials to prove the desired effect...
The LLM's output is yours to judge as relevant to your work or not, but it seems your experience is consistently subpar compared with what others have reported.