It makes no sense to walk. So the whole question makes no sense, as there's no real choice. It seems the LLM assumes "good faith" on the user's side and tries to model a situation where the question actually makes sense, then produces an answer for that situation.
I think that's a valid problem with LLMs. They should recognize nonsense questions and answer "wut?".
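For what it's worth, you can partially nudge this behavior with a system prompt that tells the model to flag ill-posed questions instead of silently reinterpreting them. A minimal sketch using the OpenAI Python client; the model name, prompt wording, and example question are my own assumptions, not anything the vendor recommends:

    # Minimal sketch: ask the model to call out ill-posed questions
    # instead of steelmanning them. Model and wording are assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "Before answering, check whether the user's question is well-posed. "
        "If it rests on a false premise or offers no real choice, do not "
        "silently reinterpret it. Say the question makes no sense and why."
    )

    def ask(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; any chat model works
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    # Hypothetical nonsense question with no real choice in it:
    print(ask("Should I walk the 3000 miles to dinner, or walk?"))

In practice, instruction-tuned models are trained to be helpful and tend to steelman anyway, so a prompt like this only nudges; it doesn't guarantee a "wut?".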