But people talking about tools usually mean tools in the concrete sense. Even very complicated tools in the concrete sense, e.g. compilers or CNC mills, only act as multipliers of the operator's ability. A consultant is an agent, not a tool, and an agent is a better comparison for modern AI. The distinction matters because an agent can fail, or act against your interests, through no fault of your own. The ability multiplier is unpredictable and potentially negative.
Well, I don't think getting hung up on such definitions will be fruitful. But here is the point I was trying to make: humans, as individuals and as collectives, have a lot of experience outsourcing intellectual jobs. They do this knowing full well that the "expert" they're employing is not a deterministic box, and may in fact be secretly working against their interests. None of those problems or potential issues changes whether the expert is human or silicon.
The human agent has a physical body like yours and shares your evolutionary history. They have reasons to care about things like reputation and social status. An AI agent only "cares" about maximizing or minimizing a number, and it's much harder to determine whether that number aligns with your interests.
Maybe. But a human agent also has personal needs, desires, and self-interest that may motivate treachery. Humans have an evolutionarily proven propensity for duplicity and deceit. It may turn out that some silicon experts are more neutral, less prone to betrayal, and worthy of trust.