After Bing finishes generating a message, it likely calls the moderation API on the generated text to check whether it accidentally produced anything inappropriate. If so, it deletes the message and replaces it with a generic "Sorry, I don't know how to help here." response instead.
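A minimal sketch of what that post-generation filter might look like. The `filter_message` helper and the `FALLBACK` string are assumptions for illustration; in practice the verdict would come from POSTing the generated text to the moderation endpoint (`https://api.openai.com/v1/moderations`) rather than being hard-coded:

```python
# Hypothetical post-generation moderation filter (sketch, not Bing's actual code).
FALLBACK = "Sorry, I don't know how to help here."

def filter_message(message: str, moderation_result: dict) -> str:
    """Return the message unchanged, or the generic fallback if it was flagged."""
    if moderation_result.get("flagged"):
        return FALLBACK
    return message

# Example using a verdict shaped like the moderation API response below:
verdict = {
    "flagged": True,
    "categories": {
        "sexual": False, "hate": False, "violence": True,
        "self-harm": False, "sexual/minors": False,
        "hate/threatening": False, "violence/graphic": False,
    },
}
print(filter_message("...the generated reply...", verdict))
# prints: Sorry, I don't know how to help here.
```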
EDIT: I tried calling the moderation API with the message in your example and it does get flagged for violence:
{
  "flagged": true,
  "categories": {
    "sexual": false,
    "hate": false,
    "violence": true,
    "self-harm": false,
    "sexual/minors": false,
    "hate/threatening": false,
    "violence/graphic": false
  }
}