Some human still has to be accountable. Someone has to get fired / go to jail when something screws up.
You can make humans more productive, but for the foreseeable future you can't take the human out of the loop and still have an AI implementation that isn't a disaster or lawsuit waiting to happen. That, probably more than anything else, is why companies just aren't seeing the much-promised step change in productivity from AI, and why so many companies are now saying they see zero ROI from their AI efforts.
The lowest-hanging fruit will be low-value, rote, repetitive tasks, like much of the India offshoring industry, which will be the first to vaporize if AI does start replacing humans. But until companies see success replacing that lowest of low-hanging fruit en masse with AI, things higher up the value chain will remain relatively safe.
PS: Nearly every recent mass layoff citing "AI productivity" hasn't withstood scrutiny. They all seem to be poorly performing companies slashing staff after overhiring, with management looking for any excuse other than just admitting that.
So if this is a tool, the fault lies fully with the user; and if it is treated as "another person's work", then the user knowingly passed the work on to someone not authorized to do it. Either way the user ends up guilty.
Technically true, but if you want the IP to be covered by copyright, you'd better make sure they're not using AI, or you'll find some serious legal limitations in your future when you aim to either pick up investment or sell your IP.
While in practice that is true, in theory this is why professional engineering accreditations (I mean like P.Eng., not little certificates) exist. Perhaps we will see a broader professionalization of the profession one day.
I am particularly against this point of view, because we as a community have long touted how computers can do the job better and faster, and how computers don't make mistakes. When there are bugs, they're seen as flaws in the system and rectified by programmers.
When there are gaps between user expectations and how the software works, it's our job to manage and reduce those gaps.
In the case of AI, we are somehow, probably because we know it’s non-deterministic, turning that social contract we had developed with users on its head.
Now the message is: that's just the way it is, and it's up to them to know if the computer is lying to them. We have absolved ourselves of both the technical and the non-technical responsibility to ensure the computer doesn't lie to the user, subvert their expectations, or act in a way contrary to human logic.
AI may be different in that it's non-deterministic, but that's all the more reason we're responsible for ensuring AI adoption aligns with the social contract we created with users. If we can't do that with AI, then it's up to us to stop chasing endless dollars and be forthright with users that facts are optional when it comes to AI.
I remember growing up and always hearing "The computer is down" as an excuse for why things were cancelled/offices closed/buses and trains not running/ad infinitum.
At some point I read an article that pointed out that the reason the computer was down was because a person made a [coding] error: the computer itself was fine.
I've yet to read about how a person who caused the computer to be down was disciplined.
We saw how that worked out in Soviet Russia and in the culture it gave birth to in its aftermath. Discipline artificially propped up by institutions and hierarchies is worthless. It only encourages subversion, so most of the productivity is wasted on hunting for laziness and on updating ever more intricate behavioral programming rules, which make the organization ever less able to react fast and decisively.
The only discipline worth a damn is intrinsic. People who want something, who want to get somewhere, need no shepherds and prison guards; they need only a support harness, resources, and people who care about them. The culture that produces such people is required for things to succeed. Any culture that does not cannot succeed and is basically a parasite on the cultures that do.
Text coming out of an LLM should be in a special Unicode block, so we can see it was generated by AI.
Failing to do so (or tampering with it) should be considered bad hygiene, and should be treated like a doctor who doesn't wash their hands before surgery.
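For illustration, a minimal sketch of what that could look like, assuming we piggyback on Unicode's invisible tag characters (U+E0020 through U+E007E). The helper names here are hypothetical, and a real scheme would have to survive copy-paste and deliberate stripping:

    # Sketch: label LLM output with invisible Unicode "tag characters".
    # Tag characters mirror printable ASCII at offset 0xE0000 and render
    # as nothing in most fonts, so the label is machine-detectable but
    # invisible to readers.

    TAG_BASE = 0xE0000

    def mark_as_llm(text: str, label: str = "llm") -> str:
        """Append an invisible label to model-generated text."""
        invisible = "".join(chr(TAG_BASE + ord(c)) for c in label)
        return text + invisible

    def is_llm_marked(text: str, label: str = "llm") -> bool:
        """Detect whether the invisible label is present."""
        invisible = "".join(chr(TAG_BASE + ord(c)) for c in label)
        return invisible in text

    out = mark_as_llm("This paragraph came from a model.")
    print(out)                 # displays exactly like the bare text
    print(is_llm_marked(out))  # True

The obvious weakness, and why the tampering clause above matters: anyone can strip the marker with one line of code, so this only works as a hygiene norm, not as enforcement.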
That's exactly my proposed solution:
Ultimately, people should be responsible for the code they commit, no matter how it was written. If AI generates code so bad that it warrants putting up a warning sign, it shouldn't be checked in.
Why not start with manual tagging, like "Ad"?
"Check and balance, except judiciary."
Only the king (at the petition of parliament) can remove a high court or appeal court judge, and that's only ever happened once, in 1830.
I strongly suspect this is because workers are pocketing the gains for themselves. Report XYZ usually takes a week to write. It now takes a day. The other 4 days are spent looking busy.
The MIT report that found all these companies were getting nowhere with AI also found that almost every worker was using AI almost daily, but via their personal account rather than the corporate one.
I.e. ones that index entire company wikis. They end up regurgitating rejected or never-implemented RFCs, or docs from someone's personal workflow that require setting up a bunch of stuff locally to work, and so on.
A lot of tasks don't depend on internal documentation at all, and it just ends up polluting the context with irrelevant, outdated, or plain wrong information.
I can't even imagine the stress from context switching, and since people don't realize this is still work, they do this late into the night as well.
It does not happen this way there even with just humans presiding. Judgments written by humans there are, on average, total garbage.
Edit: Someone wrote a similar comment here: https://news.ycombinator.com/item?id=47244909
Now I'm trying to imagine a way they could apply a criminal charge against an AI such that it would prevent the AI from being used in an official capacity, or something like that.
If productivity goes up 10x, then unless the amount of work also increases 10x, jobs will be gone.
"I believe that this is a wildly mistaken interpretation of what is happening to us.
We are suffering, not from the rheumatics of old age, but from the growing-pains of over-rapid changes, from the painfulness of readjustment between one economic period and another. The increase of technical efficiency has been taking place faster than we can deal with the problem of labour absorption; the improvement in the standard of life has been a little too quick ...
We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come--namely, technological unemployment. This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour."
While there's no guarantee that what Keynes got wrong then is the same as now, it's a reasonable expectation that "the jobs" won't just disappear.
----
Keynes also speculated on what to do with newfound time as a result of investment returns on the back of productivity [1]:
"Let us, for the sake of argument, suppose that a hundred years hence we are all of us, on the average, eight times better off in the economic sense than we are to-day. Assuredly there need be nothing here to surprise us ... Thus for the first time since his creation man will be faced with his real, his permanent problem-how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well."
The modern FIRE movement shows that living at a dated "standard of living" for 10-15 years can free one from work forever. Yet that's not what most people do today. I would suggest that there are deeper aspects of human drive, psychology, and varying concepts of "morality" that are actually bigger factors in what happens to "jobs".
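To put rough numbers on that claim, here is a back-of-the-envelope sketch assuming the common FIRE heuristics: a 4% safe withdrawal rate (i.e. a target portfolio of 25x annual spending) and roughly 5% real annual returns. All figures are illustrative, not advice:

    # Years of saving until a portfolio covers spending indefinitely,
    # with income normalized to 1, a fixed savings rate, ~5% real
    # returns, and retirement at 25x annual spending (the 4% rule).

    def years_to_fire(savings_rate, real_return=0.05, withdrawal_rate=0.04):
        spend = 1.0 - savings_rate          # annual spending
        target = spend / withdrawal_rate    # portfolio needed: 25x spending
        portfolio, years = 0.0, 0
        while portfolio < target:
            portfolio = portfolio * (1 + real_return) + savings_rate
            years += 1
        return years

    for rate in (0.5, 0.6, 0.7):
        print(f"savings rate {rate:.0%}: ~{years_to_fire(rate)} years")

This prints roughly 17, 13, and 9 years, consistent with the 10-15 year figure at the high savings rates that a "dated" standard of living implies.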
Why? The logic of ever less personal pride, involvement, and care leads eventually to just putting the blame on AI and being done with it.
Issues? Casualties? It's a bug; somebody fixes it and we move on. Or it's just a cost we need to get used to in order to live in the great new world of AI.
We're in an era where nobody involved goes to jail for the Epstein case and the world keeps turning, and we think people will care if nobody goes to jail when somebody loses their pension, gets wrongly imprisoned, or dies on an operating table because of an AI mistake?
If anything, legal, union, and other limitations on who gets to decide (having to have a human ultimately responsible) might be torn down, to fully embrace the blame-shifting capabilities of the digital bureaucracy.
In law, someone always hangs. I think a number of American lawyers have been sanctioned for using AI slop.
In other vocations ... not so much. I think one of the reasons insurance likes AI so much is that they can say it was "the computer" that made the decision that killed Little Timmy.
The narrative that an entire population is “worth” less, is paid less, knows less, lives less …
Fuck this less shit, embrace the paradigm shift. God is finally providing the remedial support through the miracle of AI.
Some cultures are better than others.
The turning point will be when threatening an AI with being unplugged for screwing up actually motivates it to stop making things up.
Some people will rightly point out that that is kind of what the training process already is. If we go around this loop enough times, it will get there.
What suggests this judge was not using the very best chatbot?
I don't think the intention matters here. It's the same deal with every profession using LLMs to "automate" their work: the onus is on the professional, not the LLM. Otherwise the Ars Technica case could have been justified in the same manner.
Not knowing the law isn't an excuse to break the law, so why is not knowing the tool an excuse to blame the tool?
Over the last 20 years a lot of engineering work (proper engineering, not software) in the West has been outsourced to cheaper places, with the certified engineers simply signing off on work done elsewhere. This creates a cycle of doing things ever faster and more cheaply, with safeguards disappearing under the pressure to go cheaper and faster still.
As someone else pointed out, LLMs have just exposed what a degraded state we've drifted into, rather than being the cause of it themselves. It's going to be very tough for people with no standards: they'll enjoy cheap stuff for a while and then it will all go away. Surprised Pikachu faces all round.
(I'm pro AI btw, just be responsible.)
Working with LLMs on a daily basis, I'd say that's not happening, at least not the way they're trying to sell it. You can get rid of five vendor headcount executing a manual process that should have been automated 10 years ago; you're not automating the processes involving highly paid people with a 1% error chance, where an error could cost you $10M+ in fines or jail time.
The day I see Amodei or Sam flying on a vibe-coded airplane is the day I'll believe what they're talking about.
The issue is that ultimately, blaming people doesn't really solve things, unless it's genuinely a one-of-a-kind case. But if this happened once, it's probably going to happen again, and this isn't the first such case of LLM hallucinations in law.
It's weird to think this way, because it's easy to just point at a person in a specific instance. But when you see something repeat over and over again, you have to consider that, if your ultimate goal is to stop it from happening, you need to adjust the tools, even when the people using them were at fault in every case.
That doesn't mean she hasn't done something wrong, but obviously it's more serious to do something intentionally than it is to do it carelessly or recklessly.
So the judge was lazy, incompetent, or both.
I am still having regular conversations with people who either don't know about hallucinations or think they are not a big problem. There is a ton of money in these companies pushing the idea that their tools are reliable, and it's working on the average user.
I mean, there are people who legitimately think these tools are conscious, or that we already have AGI.
So I'm not sure I'd be too quick to attack the judge, given the marketing we're up against.
(Sure, more honest would be "this tool makes stuff up in a convincing way")
Maybe true general intelligence would solve these issues, but LLMs aren't meeting that threshold anytime soon, imo. Stochastic parrots won't rule the world.
If someone won’t be held liable for the end result at some point, then there is no reason to ensure an even somewhat reasonable end result. It’s fundamental.
Which is also why I suspect so many companies are pushing ‘AI’ so hard - to be able to do unreasonable things while having a smokescreen to avoid being penalized for the consequences.
Yeah, about that ...
https://metro.co.uk/2016/07/03/rapist-struck-again-after-dep...
> A Somalian rapist who had his deportation overturned went on to rape two more women after he was freed.
> But he had his deportation overturned after serving his time because he didn’t know it was unacceptable in the UK.
It's just like cars that drive themselves but require you to be able to jump in if there's a mistake: humans are not going to react as fast as if they were driving, because they aren't going to be engaged, and no one can stay as engaged as they were when they were doing it themselves.
We need to stop pretending we can tell people they "just" need to check LLM output for accuracy; it's a process that inevitably leads to people not checking and things slipping through. Pretending it's the people's fault, when essentially everyone using it would eventually end up doing that, is stupid and won't solve the core problem.
What's the core problem, though? Because if the core problem is "using AI", then it's an inevitable outcome: AI will be used, and there are always incentives to cut costs maximally.
So realistically, the solution is to punish mistakes. We do this for bridges that collapse, for driver mistakes on roads, etc. The "easy" fix is to make punishment harsher for mistakes - whether it's LLM or not, the pedigree of the mistake is irrelevant.
1) https://en.wikipedia.org/wiki/Clever_Hans
2) https://archive.org/details/nextgen-issue-26 as an example of how in the 90s we had rapid cycles of a new tech (3D graphics) astounding us with how realistic each new generation was compared to the previous one, and forgetting with each new (game engine) how we'd said the same and felt the same about (graphics) we now regarded as pathetic.
So yes, their "authoritative and confident text just overrides any skepticism subconsciously", but you shouldn't be amazed; we've always been like this.
LLMs just revealed what a decadent society we have set up for ourselves worldwide.
It’s likely happening to everyone.
It's not just lawyers.
If someone is a lawyer, accountant, doctor, teacher, surgeon, engineer, etc., and is regurgitating answers that were pumped out with GPT-5-extra-low or whatever mediocre throttled model they are using, they should just be fired and de-credentialed. Right now this is easy.
The real problem is ahead: 99.999% of future content will be made using generative AI. For many people using Facebook, Instagram, TikTok, or some other non-sequential, engagement-weighted feed, 50%+ of the content they consume today is fake. As that stuff spreads into modern culture, it's going to be an endless battle to keep it out of outlets that should not be publishing fake content (e.g. the New York Times or Wall Street Journal; excluding scientific journals, which seem to have abandoned validation and basic statistics a long time ago).
Much of the future value and profit margins might just be in valid data?
Easy? In the US you need impeachment by the House (and conviction in the Senate) to remove a federal judge. In some countries judges are completely immune unless they are convicted of crimes.
Can they though with 100% accuracy and no hallucinations? Wouldn't you still need to validate that they validated correctly?
https://arstechnica.com/tech-policy/2026/02/randomly-quoting...
> In October, two federal judges in the US were called out for the use of AI tools which led to errors in their rulings. In June 2025, the High Court of England and Wales warned lawyers not to use AI-generated case material after a series of cases cited fictitious or partially made up rulings.
Obviously lawyers should not be cheating with AI, especially when they don't even check its output. But it does sound to me as if this is an opportunity to refactor the process. We're carrying forward ideas originally implemented in Latin, which can be dramatically simplified.
I'm not a lawyer; I know this only in passing. And I am aware that there are big differences between law and code. But every time I encounter the law, and hear about cases like this, what I see are vast oceans of text that can surely be made more rigorous. AI is not the problem; it's pointing out the opportunity.
I think the problem fundamentally is that matters of law require thorough, precise language, and unambiguous context. If you remove "the boilerplate" then you introduce a vast gray area left to interpretation.
Usually attempts (by humans or computers) to "summarize" or frame things in "plain language" will introduce bias, since they intentionally omit the myriad context and legal/societal "gray areas" that inform one perspective or another.
Legalese exists the way it is because it is an attempt to remove doubt. And even then, doubt still creeps in.
When I bought my house, in an alternate universe the paperwork could have been one sheet of paper that said "[My name] purchases home at [address] from [Seller's name] for [price]." and we'd all rely on our shared understanding of what it means to buy something and shared cultural expectations around home ownership and commerce. But our society did not make that choice, we don't live in that universe, so I had to sign a 300 page stack of papers 30 times.
We’ll change the existing murder legislation to “Killing someone is a crime”. It’ll save us thousands of pages.
But does that mean a soldier shooting an enemy is a crime? What about shooting someone who is raping you? What if you shoot someone by mistake, thinking they’re going to kill you? What if you hit them with a car? What if you fail to provide safety equipment which eventually results in their accidental death?
Oopsie woopsie, I guess we need to add another thousand pages of exceptions back to our simplistic laws. It turns out people didn’t just write them for the fun of it.
The surface for us is not just LLM-generated text; it's also AI-augmented audio on incoming calls, and, for our own voice agents, being able to protect against and identify services cloning our agents' voices, via watermarking.
It's not fun, as we are constantly playing catch-up.
Next: gunman pleads that the death occurred solely due to reliance on an automatic weapon.
Some people have the perspective that you're attending school in order to learn stuff, and the degree demonstrates you learned the stuff; some people have the perspective that you're attending school in order to get the degree and it doesn't matter so much how you check the boxes to get the degree.
This difference in perspective certainly didn't start with AI; it's been around for a long time. Some education cultures push more rote learning and some push more mastery of the subject. There are pros and cons: pursuing rote learning doesn't preclude mastery, and mastery often requires some amount of rote learning.
When you transplant a contingent of students between philosophies, you get conflict wherever there are differences.
>The United States hosts the highest number of international students on record, with approximately 1.1 to 1.2 million
The US has 32% more students than Australia and 1121% more people. Imagine if the US took on 13 million foreign college students per year lol
It does help them in the long run, because it ensures they get to reside in Australia. After 4 years they get permanent residence rights and benefits, etc.
Government Policy and National Initiatives: The National Education Policy (NEP 2020) has shifted the focus toward digital literacy. The government has introduced AI as a skill subject for younger grades and launched programs like AI for All to promote nationwide awareness.
And not knowing the language quite as well as native speakers would also make you more likely to be discovered as having used an LLM to do coursework.
Notice that these sorts of "racist rumours" only started in the last few years, not before. AI has simply lowered the threshold for _everybody_ to take shortcuts, cheat, etc.; nothing is specific to any one group beyond the raw numbers.
The judge took no personal responsibility.
> She told the court that this was her first time using an AI tool and she had believed the citations to be "genuine". She had no intention to misquote or misrepresent the rulings and that "the mistake occurred solely due to the reliance on an automatic source", the high court wrote.
She had one job, and that was to read the citations. Instead of owning up to the mistake of being lazy, all she wanted to talk about was "intentions".
The high court also took no responsibility.
> In its order, the high court said that "the citations may be non-existent, but if the learned trial court has considered the correct principles of law and its application to the facts of the case is also correct, mere mentioning of incorrect or non-existent rulings/citations in the order cannot be a ground to set aside the order".
This line of reasoning is questionable and an attempt to gaslight everyone. Judges cite other cases in their judgments. If the junior judge had no clue that the references were fake, what correct principles was she applying?
At the end of the day, maybe the judgment is correct, but this is overall bullshit.
Given that this is happening all over the world, people seem to have a convenient excuse: the AI made me do it.
That's par for the Indian judicial system. Simple civil cases run for decades, and the judges (yes, every one of them) are corrupt. Basically nothing works. It has been like this for decades.
LLM slop is least of its problems.
From [0]:
"India's Supreme Court has banned a school textbook after a chapter in it made a reference to corruption in the judiciary.
The revised social science book was published by the National Council of Educational Research and Training (NCERT), which designs the syllabus and textbooks for millions of schoolchildren in the country.
On Wednesday, after Chief Justice Surya Kant criticised the book, saying it could damage the reputation of the judiciary, NCERT apologised and withdrew it from distribution.
Now the court has ordered a complete halt on the book's publication, saying its contents were "extremely contemptuous" and "reckless".
"A complete blanket ban is hereby imposed on any further publication, reprinting or digital dissemination of the book," the court said on Thursday, according to legal news website LiveLaw.
The judges also issued notices to the top bureaucrat in the school education department and the NCERT director, asking them to explain why they should not be held in contempt of court for including the "offending chapter".
Sounds like extreme incompetence or laziness.
Why not use AI to adjudicate cases: if the outcome is a dismissal, dismissal it is.
If not, the case moves on to a proper court.
This way the backlog of cases will drop significantly, and we will work only on cases with enough meat to lead to a conviction.
Setting AI aside for a moment, this reflects a broader issue in India and elsewhere. When institutions respond to new technologies with anger or threats rather than systemic thinking, it signals a deeper problem.
The real challenge is not AI itself, but how complex systems adapt to change. Instead of reacting defensively, institutions should anticipate second-order effects, build regulatory capacity, and treat this as a governance and systems problem.
Mature institutions approach disruption with foresight, incentives, and feedback loops, not emotions. Without that shift, they risk reinforcing outdated hierarchies rather than serving the public effectively.
No, especially in this case when the first appeal to the high court resulted in the high court brushing it off as if nothing happened.
It was a reprimand to two institutions (the trial court and the high court): they have a job to do, and they can't shirk that responsibility.
The lower courts in India are all overloaded with pending cases (i.e. there aren't enough judges), so the incentives for both judges and lawyers to "outsource" to AI are very high. This needs to be done with caution, and that is what the supreme court said, viz:
The Supreme Court called the case a matter of "institutional concern" and said fake AI-generated judgements had "a direct bearing on integrity of adjudicatory process".
...
The defendants challenged the order in the state's high court, pointing out that the cited orders were fake. The high court acknowledged this, but accepted that the junior civil judge had made the error in "good faith" and went on to agree with the trial court's decision anyway.
In its order, the high court said that "the citations may be non-existent, but if the learned trial court has considered the correct principles of law and its application to the facts of the case is also correct, mere mentioning of incorrect or non-existent rulings/citations in the order cannot be a ground to set aside the order".
The high court had also sought a report from the junior judge who had used the AI-generated rulings. She told the court that this was her first time using an AI tool and she had believed the citations to be "genuine". She had no intention to misquote or misrepresent the rulings and that "the mistake occurred solely due to the reliance on an automatic source", the high court wrote.
The high court also advocated for the "exercise of actual intelligence over artificial intelligence".
Following this, the defendants appealed again, taking the matter to the Supreme Court, which was less forgiving about the impact of AI.
Coming down sternly against the fake judgements, the top court last Friday stayed the lower court's order on the property dispute. It said the use of AI while making judgements was not simply "an error in decision making" but an act of "misconduct".
"This case assumes considerable institutional concern, not because of the decision that was taken on the merits of the case, but about the process of adjudication and determination," the top court said.
PS: To get an idea of how overloaded the Indian Judicial System is; this happened in a recent case in Allahabad High Court - The order then took an unusual turn. “Since I am feeling hungry, tired and physically incapacitated to dictate the judgment, the judgment is reserved,” the judge recorded. - He had been hearing more than 30 cases on that day - https://www.hindustantimes.com/india-news/hungry-tired-allah...