To me there does seem to be some nuance here worth noticing. Some examples of this type of response are indeed too cheap and can be chalked up to a lack of training data or something.
But in other cases it's actually not immediately obvious whether the answer was the user's fault for not specifying that they expected code that works without additional supporting libraries.
A language model can't reasonably be expected to infer a standard of usability or fitness for purpose in a context the user never specified.