> the whole process of building AI looks very different from what we pictured back then.
Right, and so do the harm risks. We need a framework centered on how humans will use AI/robots to harm each other, not on how AI/robots will autonomously harm humans.
Why so? Even for simpler, better-understood machines, autonomous harm is a critical part of the safety framework. We wouldn't declare a steel mill safe just because there are plenty of safeguards against humans intentionally using its machines to harm each other.