> Liability, penalties, trust, and responsibility are means we use to try to influence the application of the processes that do. They do not directly affect reliability. They can be applied just as much to a team using AI as one that does not.
Yes and no; see the next point.
> You can ask the same thing about all the supporting staff around the experts in your team.
For a human-based process, I have a good idea of the shape of the errors, the costing, and the type of QA/QC team that has to be formed around it. We have decades, if not centuries, of experience working with humans, whose equivalents (or superiors) LLMs are promised to be.
I think you and I would both agree with the statement "use the right tool for the job".
However, the current hype cycle has created expectations of reliability from LLMs that drive "Automated Intelligence"-styled workflows.
On the other hand:
> part of initiatives to increase reliability of human teams
is a significantly more defensible use of LLMs.
For me, most deployments die on the altar of error rates. The only people using them to any effect are those who have an answer to "what happens when it blows up?" and "what is the cost when something goes wrong?"
(There is no singular thread behind my comment. I think we probably agree more than not; it's more a question of finding the precise words to describe the shapes we perceive.)