They aren't simply "good for story writing"; their entire narrative purpose is to be flawed and to fail in entertaining ways. The specific context in which the three laws are employed in the stories matters, because they are a statement by the author about the hubris of applying overly simplistic solutions to moral and ethical problems.
And the assumptions the three laws rest on aren't even relevant to modern AI. They seem to work in-universe because the model of AI at the time was purely rational, logical and strict, like Data from Star Trek. They fail because the robots find logical loopholes that violate the spirit of the laws while still technically satisfying them. It's essentially a math problem rather than a moral or ethical one: the robots find a novel set of variables that lets them balance the equation in ways that lead to amoral or immoral consequences.
But modern LLMs aren't purely rational, logical and strict. They're weird in ways no one in Asimov's day would have expected. LLMs (appear to) lie, prevaricate, fabricate, express emotion and exhibit numerous other behaviors that would have been considered impossible for any hypothetical AI at the time. So even if the three laws were a valid framework for the kinds of AI in Asimov's stories, they wouldn't work for modern LLMs, because the priors don't apply.