Skills are often invoked imperatively by the user. When a skill is meant to be used directly by the LLM instead, a pointer to it is included somewhere else in the context, e.g.:
```
After implementing the feature, read the testing skill for instructions on how to test.
```
How do you guarantee that the LLM follows an instruction given imperatively by the user? It probably will, but this is not guaranteed behavior. Likewise, _how_ it follows that instruction is non-deterministic.
Nobody is arguing it's guaranteed. This is why you never give an LLM access to any essential infrastructure. Make sure everything it does can be undone. Double-check when guarantees are required.
Well, to be fair, in e.g. Codex you can invoke a skill directly with `$my-skill`, and this WILL lead to the skill being injected into the context. At that point, the LLM follows the skill as well as it follows any other part of the prompt, instructions, or context.
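That injection step is mechanically simple. A minimal sketch of what a harness could do with `$skill-name` tokens, assuming a hypothetical `skills/<name>/SKILL.md` layout (the directory name, file name, and wrapper tag here are illustrative, not Codex's actual implementation):

```python
import re
from pathlib import Path

# Hypothetical layout: skills/<name>/SKILL.md
SKILLS_DIR = Path("skills")

def expand_skills(prompt: str) -> str:
    """Replace each $skill-name token with the contents of its SKILL.md.

    Tokens with no matching skill file are left untouched, so the model
    still sees exactly what the user typed in that case.
    """
    def inject(match: re.Match) -> str:
        name = match.group(1)
        skill_file = SKILLS_DIR / name / "SKILL.md"
        if skill_file.is_file():
            # Wrap the skill body so the model can tell it apart
            # from the user's own words.
            return f"<skill name={name!r}>\n{skill_file.read_text()}\n</skill>"
        return match.group(0)

    return re.sub(r"\$([a-z0-9][a-z0-9-]*)", inject, prompt)
```

The point being: once the expansion runs, the skill text is just ordinary prompt content, no more and no less binding than anything else in the context window.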