sorry, but this is wrong. the only time “prompt injection” is not a security risk is if no untrusted data is ever passed as input to the model, or if the model’s output has no bearing on anything in the world anywhere.
in which case, why bother with the model/system in the first place?
you can exclude this security risk from your threat model. but what you’re saying then is that you don’t believe it’s likely anyone would want to (or could) run an attack, or that an attack would have sufficiently low impact/severity. possibly because you’ve put mitigations in place …
> It's about what you give the LLM access to. You can give it access to your complete DB and APIs, or you can only allow it to operate on a very specific piece of data.
aha! mitigations!
limiting the blast radius and/or limiting what untrusted data reaches the model’s input. neither of which removes the security risk, but both reduce the possible impact/severity and/or likelihood.
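by way of illustration, here’s a minimal python sketch of the blast-radius mitigation. everything here (the `orders` table, the `make_scoped_lookup` helper) is hypothetical, not any particular framework: the point is that trusted application code, not model output, decides which record the tool can touch.

```python
# hypothetical sketch: the model never gets a general-purpose DB handle.
# it gets one narrow, read-only tool pre-bound to a single record.

import sqlite3

def make_scoped_lookup(db_path: str, order_id: int):
    """Return a zero-argument tool the LLM can call.

    order_id is fixed by trusted application code before the model runs,
    so injected instructions can't widen the query to other rows/tables.
    """
    def lookup_order_status() -> str:
        conn = sqlite3.connect(db_path)
        try:
            row = conn.execute(
                "SELECT status FROM orders WHERE id = ?", (order_id,)
            ).fetchone()
        finally:
            conn.close()
        return row[0] if row else "not found"
    return lookup_order_status

# the agent/tool-calling layer only ever sees this closure:
tool = make_scoped_lookup("shop.db", order_id=4242)
```

note this doesn’t remove the risk: the model still sees whatever is stored in that status field, so a poisoned value can still steer its output. it just shrinks what an attack can reach.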
it’s all about the threat model ;)