I think it's just a case of dealing with something that has no precedent. We have never had to determine where the line falls between a tool and an employee when both can be instructed with natural language. If we evaluated AI as if it were in a contract with us, exchanging its time and effort for something of consideration, it would be an easy ruling. If we evaluated AI as if it were a tool that operates purely as an extension of the operator's skill, with no independent contribution, it would also be an easy ruling. But since we now have a tool that can produce results beyond our ability to produce them with any former class of tools, we have to create entirely new models for mapping these tools onto the complexity of real-life conflicts, where people have different goals and where we must decouple fairness from intentions.