I don't think 100.0% accuracy is a reasonable requirement for anything. The bar should at the very least be "do I end up with the correct information more often than when I don't use this tool", and realistically there's also going to be some case-dependent trade-off between accuracy and convenience. Maybe for language translation a professional human translator is 98% accurate compared to 95% for an LLM/transformer, but the latter is far more accessible to most people, to the extent that such tools see frequent usage; it feels like a mistake to rule it out as "useless" when a huge number of people are already making use of it.
The other side of that coin is the use cases where inaccuracies are not easily tolerated: legal documents, transcripts, translations, summaries, news reporting, medical diagnosis, and such. In fact, some of the most memorable flops have happened in those domains. The specific example of translation is a fuzzy one, since it is indeed nice to get the gist of a news story written in, say, German. But if you are translating a legal document or a critical piece of intelligence, LLMs are simply not good enough.
I don't believe that most people have any real need for something that isn't 100% accurate. I can just ask a buddy if I merely wish for an opinion ;)
Or where accuracy does matter and ML-based tools are the most accurate of the available options. Material/product defect detection, weather forecasting/early warning systems, OCR, spam filtering, protein folding, medical segmentation, interaction prediction, etc.
> The other side of that coin is the use cases where inaccuracies are not easily tolerated: legal documents, transcripts, translations, summaries, news reporting, medical diagnosis, and such. In fact, some of the most memorable flops have happened in those domains.
Translation (like Google Translate) and transcription (like Whisper) are huge and successful uses of transformers. Not necessarily because they're more accurate than a human (though they sometimes are), but because there's generally some point, varying by scenario, at which their advantages in speed/cost/accessibility outweigh the disparity in accuracy.
> I don't believe that most people have any real need for something that isn't 100% accurate. I can just ask a buddy if I merely wish for an opinion ;)
Is a 99% accurate weather forecast, one which is better than the available alternatives, useless? Is anything 100% accurate?
It is, for some things: it's a matter of peace of mind, of trust even.
I don't want my water pipe to leak (yes, physically, a valve never seals to exactly 0% flow, but as far as I am concerned, and as far as my water bill is concerned, when the faucet is closed, it's 100% closed - or it's up for repair).
I don't want my pencil to draw 2% away from where my hand goes: I want it to be the exact result of what I do, imperfections included, because that's what matters to me: it's of my own making (that's why I really despise drawing/painting tools that _automatically_ smooth lines and brushes: I need this to be an option).
Nobody will consider a "sorry I'm late, my phone AI gave me almost the right address for the appointment" as anything other than a flimsy excuse. And nobody will ever trust their phone AI again after a failed interview.
I may tolerate a text generator screwing things up or meddling with my input, but only because/if/when I use it in that spirit: as a kind of dice/spaghetti thrower, as a toy, not as something I would depend on.
> it feels like a mistake to rule it out as "useless" when a huge number of people are already making use of it.
Popularity has no bearing on how each person will value those tools for themselves.
1/ if you can tolerate the failure margin of the tools you use (and the consequences that come with it), then of course they may be of use to you.
2/ if you feel you cannot _trust_ the machine that listens to you, stores your data, and drives some of your life, it is worse than useless: it's literally cruft, a burden, given all the care and worry (and cost) that go along with it.
If you have a pencil that, due to the graphite core being loose, draws 0.5mm away from your intended position, and you're considering swapping to another pencil, then I'd say the correct line of consideration for the new pencil is whether it's more accurate than the alternatives (and to weigh that against other factors like cost/comfort). If you reject it for not having absolutely zero imprecision, you may just inadvertently be sticking with an option that has greater imprecision.
Totally fine to decide not to use some tool if it really is less accurate than the alternatives (and you don't consider that inaccuracy to be made up for by other factors) - and I agree plenty of use cases would fall into that category - just that it should be compared in this way and not against some "it must never make errors" standard. I see the same logic applied to autonomous vehicles ("they should be penalized/not allowed on roads until they cause zero deaths"), computer-vision quality assurance ("the system is unacceptable if it misses any defects"), or even vaccines ("the manufacturer must be sued into the ground for every side effect").
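To make that comparison concrete, here's a minimal sketch in Python, with made-up error rates that are purely illustrative, of judging an imperfect tool against the realistic alternative rather than against a zero-error standard:

    # Hypothetical error rates, purely for illustration -- not real
    # statistics about any actual tool.
    uses = 1_000                 # how often the task comes up
    incumbent_error_rate = 0.05  # the option you already rely on
    candidate_error_rate = 0.02  # the imperfect new tool being judged

    incumbent_failures = uses * incumbent_error_rate  # 50 expected failures
    candidate_failures = uses * candidate_error_rate  # 20 expected failures

    # Rejecting the candidate because 0.02 > 0 means keeping the incumbent
    # and accepting ~30 extra failures per 1,000 uses.
    print(incumbent_failures, candidate_failures)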
> Nobody will consider a "sorry I'm late, my phone AI gave me almost the right address for the appointment" as anything other than a flimsy excuse.
In 2025 if I give someone an address, I pretty much expect them to use some system with ML-based query-understanding/routing/map-updating/etc. like Google Maps, and would find it more odd if their excuse was that they were having trouble with their paper maps.
This makes the point:
> If it’s 100% accurate, it’s a fantastic time saver. If it is anything less than 100% accurate, it’s useless. Because even if there’s a 2% chance it’s wrong, there’s a 2% chance you’re stranding mom at the airport, and mom will be, rightly, very disappointed. Our moms deserve better!
Introducing errors is not "what technology wants" [0]
The upside of this is that people are realizing that unreliable technology is not for everyone. All it takes is one gaffe to break trust, and there are too many gaffes.