I don’t see that ever going away. Humans have learned to trust other humans over a long time scale, with rules in place to control behaviour.
We've now compared more than a hundred replies to those of GPT Pro, and the quality is roughly the same. Sometimes a little worse, sometimes a little better. Always more detailed. Never unacceptable.
But how do we convince our customers that we have the right technology and know how to use it appropriately? We're trying, but it's not easy.
Part of that's accountability. In the event of the LLM producing rubbish, as rare as that may be, who is accountable? There is no person, with their reputation on the line, attached to it.
Being able to hold someone liable for a screw-up is how we've been able to function as a society and get to where we are today.
You won’t be able to scale out and make as much money, though. But surely you’re not only concerned about profit, right? What’s the point of life if you’re just trying to get rich?
Once we know what they can do well, how to get them to do it well, and what they can't do, you could say we "trust" them with the first category and simply stop asking them to do the second.