Twiddling with prompts is not "teaching"; it's guess-and-check whack-a-mole until you get something barely good enough to ship, at least temporarily... until it Disregards All Previous Instructions and clucks like a chicken, or until the training data shifts enough that you need to bolt on a new set of arbitrary-but-influential phrases.
The lack of easy analogies for controlling an LLM isn't because it's an amazingly good control scheme; it's because the scheme is so weirdly, unreliably awkward that humans don't even try to build anything like it into our other machines and systems. It'd be like designing an industrial stamping machine whose controls were buttons laid out on a Twister™ mat: quite novel, but not in a good way.