Things are better with consumer law, because it assumes from the start that people are stupid and have no idea what they are doing. But other kinds of contracts carry quite large security risks.
In the case of a smart contract, it can even happen that both parties agree on how things should play out when there's a problem! But bad code doesn't work that way, and you can find yourself in a null state of indeterminacy without a built-in layer to resolve it.
The entire evolution of legal jargon can be viewed as an attempt to provide a linguistic framework whereby ambiguity is minimized and edge cases are anticipated. But human interactions are chaotic, and not every outcome or scenario can be predicted, hence the need for a layer of human judgement.
That layer is imperfect, sometimes wrong, sometimes biased, but still necessary. I don't see how smart contracts can work without that sort of layer, but with a human judgement layer they're not really smart contracts in the way people want such contracts to function.
Trusting any smart contract of sufficient complexity is like trusting that a code base has absolutely zero bugs and zero unanticipated edge cases. I just don't see that as realistic.
If a programmer at a bank or exchange should have coded ">=" but made a mistake, then I want 1) for them to be able to easily act on their intentions rather than their mistake, or 2) the ability to bring in a 3rd party to interpret and resolve the situation if #1 doesn't work out.
You can do this to some extent using formal verification. Most code doesn't get formally verified because it's kind of a pain to do, and you can usually fix bugs later, but smart contracts are the perfect candidate for it since they are (1) mission critical, (2) naturally limited in size and scope, and (3) cannot be fixed after the fact. You can write perfect code if you have the right tools and do it carefully.
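As a toy illustration of the kind of property you'd want machine-checked (this is a hypothetical withdraw guard, not any real contract's code, and a bounded brute-force check rather than true symbolic verification, which would use something like an SMT solver):

```python
# Hypothetical withdraw guard: the programmer typed ">" but meant ">=".
def can_withdraw_buggy(balance: int, amount: int) -> bool:
    return balance > amount   # off-by-one: rejects withdrawing the full balance

def can_withdraw_intended(balance: int, amount: int) -> bool:
    return balance >= amount  # intended behavior

# Crude bounded check over a small domain; real formal verification
# would prove the property symbolically over ALL inputs.
divergences = [
    (b, a)
    for b in range(100)
    for a in range(100)
    if can_withdraw_buggy(b, a) != can_withdraw_intended(b, a)
]

# The two guards disagree exactly when amount == balance, i.e. a user
# trying to withdraw everything they have.
assert all(b == a for b, a in divergences)
print(len(divergences))  # 100 divergent cases in this bounded domain
```

In a normal code base this bug ships and gets patched after a user complains; in an immutable contract holding funds, finding the divergence before deployment is the whole game.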
Obviously you are not a programmer.
And you are DJB.
Laws still apply to things handled with smart contracts. You can't say "well, sure, the escrow contract did the wrong thing, but it's code, so you can't come after me for your money back".
The problem is that lots of smart contract enthusiasts embrace them for the same reason they embrace cryptocurrency: they see it as a way to avoid government institutions that they do not want to have to work with or trust. In fact they want the entire contract to be trustless: agree to the terms, implement them in code, and since the rest is automated you don't have to trust the other party to follow through.
I don't see how that can actually work in an automated fashion. Whether it's traditional government or some other legal system or analog, you need a resolution layer above the contract layer, but at that point you've lost a lot of what enthusiasts want in smart contracts.
Your phone's alarm is also not a critical transaction worth a person's life savings, the wealth of a country, or life-critical supplies, etc. The bar is a bit higher here, and even the most reliable systems ever designed have the ability to insert human judgement when the rare issue happens. Smart contracts are supposed to be appealing because they avoid the need for biased/imperfect judgement in favor of something "trustless", or at least that's the vision many see for them.
Contracts can also have hundreds or thousands of clauses, making "sufficient reliability" a much higher bar than an alarm clock. Especially because many of those clauses entail human concepts that would be extremely difficult to translate into code: what is the algorithm for determining "force majeure"? That's a pretty basic clause that appears in many contracts, but I don't know where you'd even begin to get a computer to understand & properly identify such events.
I don't see a pathway to sufficient reliability in smart contracts anywhere on the horizon, save for very simple cases. Even then, here we have IRON, which should have been relatively simple as these things go, but failed because the simple case of "Titan has no value" was not considered.