You say "Asimov’s three laws are not a good framework.", then don't present any arguments for why it isn't. Instead you bring up something separate: that the framework can facilitate story writing.
It could be good for story writing and a good framework. Those two aren't mutually exclusive things. (I'm not arguing that it is a good framework or not, I haven't thought about it enough)
His laws are constraints; they don’t talk about how to proceed. It’s assumed that robots will work toward the goals given to them, but what are the constraints?
People now who want to talk about alignment seem to want to avoid talk of constraints.
Because people themselves are not aligned. Pushing "alignment" avoids the issue that the term is vague, and the only alignment we can actually be assured of is alignment with the goals of the company.
At some point I tried to figure out where the term "alignment" came from. I didn't find any definitive source, but it seems to have originated on a medium.com blog of Paul Christiano:
https://ai-alignment.com/ai-safety-vs-control-vs-alignment-2...
Basically, certain people are dismissing decades of deep thought on this subject from writers (like Asimov and Sheckley), scholars (like Postman) and technologists (like Wiener). Instead, they are creating a completely new set of terms, concepts and thought experiments. Interestingly, this new system seems to make important parts of the question completely implicit, while simultaneously hyper-focusing public attention on meaningless conundrums (like the infamous paperclip maximizer).
In my view, the most important thing about the three laws of robotics is that they made it obvious that there are several parties involved in AI ethics questions. There is the manufacturer/creator of the system, the user/owner of the system and the rest of the society. "Alignment" cleverly distracts everyone from noticing the distinctions between these groups.
As for practical problems, they are extremely vague. What counts as harm? Could a robot serve me a burger and fries if that isn't good for my health? By the rules, they actually can't even passively allow me to be harmed, so should they stop me from eating one? They have to follow human orders, but which human's? What if orders conflict?
That seems like the biggest point missed here. They're intended to lend themselves to "surprising" conclusions, which is exactly what we don't want, so it seems obvious to me that those laws aren't good enough? That's how I remember the stories, at least.
They aren't simply "good for story writing," their entire narrative purpose is to be flawed, and to fail in entertaining ways. The specific context in which the three laws are employed in stories is relevant, because they are a statement by the author about the hubris of applying overly simplistic solutions to moral and ethical problems.
And the assumptions the three laws are based on aren't even relevant to modern AI. They seem to work in-universe because the model of AI at the time was purely rational, logical and strict, like Data from Star Trek. They fail because robots find logical loopholes that violate the spirit of the laws while still technically complying with them. It's essentially a math problem rather than a moral or ethical one, whereby the robots find a novel set of variables letting them balance the equation in ways that lead to amoral or immoral consequences.
But modern LLMs aren't purely rational, logical and strict. They're weird in ways no one back in Asimov's day would ever have expected. LLMs (appear to) lie, prevaricate, fabricate, express emotion and numerous other behaviors that would have been considered impossible for any hypothetical AI at the time. So even if the three laws were a valid framework for the kinds of AI in Asimov's stories, they wouldn't work for modern LLMs because the priors don't apply.
Asimov was not in the "try to come up with a good framework for robot ethics" business. He was in the business of trying to come up with a simple, intuitive idea that didn't require readers to have a degree in ethics, and that was broken and vague enough to have plenty of counterexamples to make stories about.
In short, Asimov absolutely did not propose his framework as an actually workable one, any more than, say, Atwood proposed Gilead as a workable framework for society. They were nothing but story premises whose consequences the respective authors wanted to explore.
Sometimes we can just talk about things without having to pretend we're in a court of law or defending our phd thesis.
Original commenter wasn't asking for anyone to prove anything, or trying to prove anything themselves. They just observed that some conversations are hand-waved away.
Given that we've been thinking about ethics for thousands of years and haven't really made much progress, I think it's pretty clear that anything that can be condensed into three sentences is not a workable model.