AI models hallucinate, and by their black-box nature no real safeguards can be built in, as evidenced by the large body of research on prompt jailbreaking.
AI also inherently operates in a non-deterministic environment, while its computational architecture is constrained by determinism and decidability. The two are foundationally incompatible for reliable operation.
Language is also one of those trouble areas, since meaning is floating. It seems quite likely that a chatbot will get stuck in an infinite loop (halting problem), with the paying customer failing to be served and, worse, the company imposing personal cost on them in the process (frustration and lack of resolution). If the company eliminates everything but the chatbot as a single point of contact, either in structure or in informal process, I don't see how you can control costs sufficiently once the lawsuits start piling up.
> A tribunal last year ordered Air Canada to honour a discount that its customer service chatbot had made up.
If a human sales representative had made that mistake instead of a chatbot, would companies try to recover that cost through insurance? Or perhaps AI insurance won't cover the chatbot for that either?
A customer relied on the policy the chatbot provided to make a decision; the tribunal held that it was reasonable for the customer to rely on that policy, and that the airline had to honour it.
Say you were responsible for something, like child care, and you just decided to go for a beer and leave the child with an AI. The house burns down, but because you had insurance you are not responsible. You just head along to your next child care job and don't worry too much about it.
Don't take the right safety precautions and burn down a customer's house - liability insurance
Click on a link in a phishing email and open up your network to a ransomware attack - cyber insurance
Forget to lock your door and get burgled - property insurance
Write buggy software which leads to a hospital having to suspend operations - PI (or E&O) insurance
Fail to adequately adhere to regulatory obligations and get sued - D&O insurance
Obviously there will be various conditions etc which apply but I've been in Insurance a long time and cover for carelessness and stupidity is one of the things which keeps the industry going. I've dealt directly with (paid) claims for all of the above situations.
It doesn't absolve responsibility though, it just protects against the financial loss. I suspect if you leave a child alone with an AI and the house burns down that's going to be the least of your problems.
I’m pretty sure this will be the same for the other insurance types you mentioned, but for property insurance, if you left your front door open you will have a hard time getting the insurer to actually pay out your claim. At least here, they require a burglar alarm and require it to be armed when nobody is on site, or they will absolutely decline the claim.
Insurance insures against risk, but there’s a threshold to that, and if you prove to be above it they will decline your claim or void your insurance entirely.
Insurance will only pay out if you can show that you have done everything a reasonable person would be expected to do to avoid the loss/damage.
> Don't take the right safety precautions and burn down a customer's house - liability insurance
You mean someone burnt a customer's house down /because of something like an electrical or equipment malfunction that they could not have reasonably foreseen or prevented/, right?
> Forget to lock your door and get burgled - property insurance
That seems unlikely. Compare this: https://moneysmart.gov.au/home-insurance/contents-insurance
> It's worth checking what isn't included. For example, damage caused by floods, intentional or criminal damage, or theft if you leave windows or doors unlocked.
Happy to be shown that I'm wrong but please do not give people the impression that liability insurance or property insurance will absolve them of losses no questions asked.
You could. Insurance companies will sell you insurance for just about anything, in custom situations they figure up the risk somehow. You likely wouldn't like how much it'd cost you though.
More generally, I think “if something is bad, we should not be able to insure it, because then we incentivise it” is not right.
Aside from the fact that your insurance rate just went up, possibly by a lot.
Would you want to insure people who think they have no responsibility because they've delegated it to an AI? They might as well have delegated the responsibility to a child or a dog. To sell them insurance, you as the insurer are making a financial bet on the ability of the dog to take care of anything that does go wrong.
And still, as the insured, handing your responsibility to an AI risks horrible outcomes that could ruin your life. The AI has no life to ruin; it was never really responsible.
Insurance just covers financial damage, and it's the insurer making a bet with you that they will profit off the premiums they calculated for your particular coverage instead of you causing an insurance payout that would be in the red for them.
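That bet can be sketched with a toy expected-value calculation. All the numbers below are illustrative assumptions I've made up, not real actuarial figures:

```python
# Toy sketch of how an insurer prices its bet on your risk.
# Every number here is an assumption for illustration only.

claim_probability = 0.02   # assumed chance you file a claim this year
average_payout = 50_000    # assumed average cost of a claim
loading_factor = 1.4       # assumed markup for expenses, profit, uncertainty

# The insurer's expected loss on your policy for the year.
expected_loss = claim_probability * average_payout

# The premium they quote: expected loss plus their markup.
premium = expected_loss * loading_factor

# On average the insurer pockets the gap between premium and expected loss;
# you only come out ahead in the years a covered loss actually happens.
print(f"expected loss: {expected_loss:.2f}, premium: {premium:.2f}")
```

If your particular coverage looks like a losing bet for them, the premium goes up until it isn't, which is the mechanism behind the rate increases mentioned elsewhere in this thread.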
And if you intentionally committed an act that would cause a payout, the insurance would almost certainly void your coverage and claim.
We know this stuff isn’t ready, is easily hacked, is undesirable to consumers… and will fail. Somehow, it’s still more efficient to cover losses and degrade service than to approach the problem differently.
If you’re doing it wrong to a meaningful extent, you won’t be able to get insurance, or it will be very expensive.
https://www.forbes.com/sites/jackkelly/2024/05/31/google-ai-...
How about MAGA insurance that covers injecting disinfectant, or eating horse dewormer pills, or voting for tariffs?
For all my being pwned, I should get paid for my part in purifying the gene pool and raising average human intelligence by a minuscule fraction of a percentage, as well as the right to smugly say I told you so! Because that was ALL part of the plan. Don't let the coffin lid hit you on the way out.
The Conservatives Who’d Rather Die Than Not Own the Libs
https://www.theatlantic.com/ideas/archive/2021/09/breitbart-...
>Rarely has so significant a faction in American politics behaved in a way that so directly claims the life of its own supporters. [...]
>In Nolte’s account, however, a conspiracy of evil leftist elites are to blame for vaccine skepticism on the right. “I sincerely believe the organized left is doing everything in its power to convince Trump supporters NOT to get the life-saving Trump vaccine,” Nolte writes. They are “putting unvaccinated Trump supporters in an impossible position,” he insists, “where they can either NOT get a life-saving vaccine or CAN feel like cucks caving to the ugliest, smuggest bullies in the world.” [...]
>Nolte theorized:
>In a country where elections are decided on razor-thin margins, does it not benefit one side if their opponents simply drop dead? If I wanted to use reverse psychology to convince people not to get a life-saving vaccination, I would do exactly what Stern and the left are doing … I would bully and taunt and mock and ridicule you for not getting vaccinated, knowing the human response would be, Hey, fuck you, I’m never getting vaccinated! …
>Have you ever thought that maybe the left has us right where they want us? Just stand back for a moment and think about this … Right now, a countless number of Trump supporters believe they are owning the left by refusing to take a life-saving vaccine—a vaccine, by the way, everyone on the left has taken. Oh, and so has Trump.