
India's Top Court Angry After Junior Judge Cites Fake AI-Generated Orders (bbc.com)

(Tuesday March 03, 2026 @05:00PM (BeauHD) from the here-we-go-again dept.)


An anonymous reader quotes a report from the BBC:

> India's Supreme Court has threatened legal consequences after a judge was [1]found to have adjudicated on a property dispute using fake judgements generated by artificial intelligence. The top court, which was responding to an appeal by the defendants, will now examine the ruling given by the lower court in the southern state of Andhra Pradesh. The Supreme Court called the case a matter of "institutional concern" and said fake AI-generated judgements had "a direct bearing on integrity of adjudicatory process."

>

> [...] Coming down sternly against the fake judgements, the top court last Friday stayed the lower court's order on the property dispute. It said the use of AI while making judgements was not simply "an error in decision making" but an act of "misconduct." "This case assumes considerable institutional concern, not because of the decision that was taken on the merits of the case, but about the process of adjudication and determination," the top court said. The court said it would examine the case in more detail and issued notices to the country's Attorney and Solicitor General, as well as the Bar Council of India.



[1] https://www.bbc.com/news/articles/c178zzw780xo



dependence (Score:5, Insightful)

by SumDog ( 466607 )

The thing that bothers me so much right now is the absolutely insane amount of dependence people have on these random word machines. I know people who use them instead of search engines, or who read the generated results at the top of Google/DDG as fact, without question. There is little literacy about how these models really work, and even developers who should know better still treat the current generation of machines as something closer to the movie "Her" than the semi-deterministic feedback systems they truly are.

The trust people put into these things is frightening. If the right attorney or judge doesn't dig through and discover these completely fabricated court cases, rulings like this could rest entirely on incorrect standing and precedent. There are people who do not use chatbots at all, or only in a limited capacity to generate images, video and segments of code they clean up. There are others who use them for everything from recipes to fitness advice to generating entire applications they submit for code review without manually reviewing and fixing the output to match their code style.

When the crunch hits the LLM industry, some of these people will easily shell out $500/month to keep their bots, and it will require that much to keep some of these companies afloat. The lack of decent off-line models is troubling. Some companies will refuse to shell out the $1k~$2k/employee increases I think we're likely to see. This is a disaster brewing, and we're not ready as a society to deal with it.

Since we're finding some of these models can reproduce entire chapters of Harry Potter at 95% fidelity to the original, I can only imagine the actual models used by Anthropic/OpenAI are probably 400GB ~ 1TB in size. They're not telling us what's really in these models. I suspect they are huge, and at some point, it just turns into lossy JPEG/mp3 compression with weird realistic artifacts.
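The "semi-deterministic feedback system" point can be made concrete with a toy next-token sampler: the same weights and prompt always yield the same probability distribution, but sampling with nonzero temperature makes each run differ. The vocabulary and logit values below are invented purely for illustration and come from no real model:

```python
import math
import random

# Toy next-token logits a model might produce for some prompt.
# These values are hypothetical, chosen only for illustration.
logits = {"plaintiff": 2.1, "defendant": 1.9, "court": 0.7, "banana": -3.0}

def sample_next_token(logits, temperature=1.0, rng=random):
    """Softmax over logits, then sample; temperature=0 means greedy (deterministic)."""
    if temperature == 0:
        return max(logits, key=logits.get)
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())                       # subtract max for stability
    exps = {t: math.exp(l - m) for t, l in scaled.items()}
    total = sum(exps.values())
    r = rng.random()
    acc = 0.0
    for tok, e in exps.items():
        acc += e / total
        if r < acc:
            return tok
    return tok  # fallback for floating-point edge cases

# Greedy decoding is fully deterministic:
print(sample_next_token(logits, temperature=0))    # always "plaintiff"
# Sampled decoding varies from run to run:
print(sample_next_token(logits, temperature=1.0))
```

The distribution itself is fixed by the weights; only the draw is random, which is why the same prompt can produce confident-sounding but different "judgements" each time.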

Re: (Score:3)

by nightflameauto ( 6607976 )

> The thing that bothers me so much right now is the absolutely insane amount of dependence people have on these random word machines. I know people who use them instead of search engines, or who read the generated results at the top of Google/DDG as fact, without question. There is little literacy about how these models really work, and even developers who should know better still treat the current generation of machines as something closer to the movie "Her" than the semi-deterministic feedback systems they truly are.

>

> The trust people put into these things is frightening. If the right attorney or judge doesn't dig through and discover these completely fabricated court cases, rulings like this could rest entirely on incorrect standing and precedent. There are people who do not use chatbots at all, or only in a limited capacity to generate images, video and segments of code they clean up. There are others who use them for everything from recipes to fitness advice to generating entire applications they submit for code review without manually reviewing and fixing the output to match their code style.

>

> When the crunch hits the LLM industry, some of these people will easily shell out $500/month to keep their bots, and it will require that much to keep some of these companies afloat. The lack of decent off-line models is troubling. Some companies will refuse to shell out the $1k~$2k/employee increases I think we're likely to see. This is a disaster brewing, and we're not ready as a society to deal with it.

>

> Since we're finding some of these models can reproduce entire chapters of Harry Potter at 95% fidelity to the original, I can only imagine the actual models used by Anthropic/OpenAI are probably 400GB ~ 1TB in size. They're not telling us what's really in these models. I suspect they are huge, and at some point, it just turns into lossy JPEG/mp3 compression with weird realistic artifacts.

Everything you've said here is part of why some of us "fear" what's happening with AI. It's not so much the technology itself as that people have become so inane, so unable to think clearly, so utterly dependent on technology for the simplest of things, that we're (collective we here) bound to put these systems into decision-making positions. And they aren't decision makers.

Sometimes I wonder if it's a coincidence that we're having this AI surge right at the point where the vast majority of people in the

Re: (Score:3)

by alvinrod ( 889928 )

That's always been a problem, and the only thing new is that people are blindly trusting an AI, as opposed to the past when they did the same, only it was god, the state, etc. whose orders they were following without giving it too much thought. People have always been lazy as well, so it should come as little surprise that people might use an LLM to do their work for them. The only difference is that with LLMs the barrier to entry is much lower and that they're trained to be very agreeable with the hu

Re: (Score:3)

by CubicleZombie ( 2590497 )

> When the crunch hits the LLM industry, some of these people will easily shell out $500/month to keep their bots

This. AI is cheap right now, to get everyone hooked on it. When AI has replaced all office workers, expect the price to go WAY up, because it can.

The first hit is always free.

Re: (Score:2)

by nikkipolya ( 718326 )

I don't think AI with all its smarts is going to destroy humanity. Humans believing the word of dumb AI will destroy themselves.

Re: dependence (Score:3)

by drinkypoo ( 153816 )

"I know people who use them instead of search engines"

I know search engines which have created this situation deliberately.

Google results have been getting ever more shit so I guess their game plan is to blame it on AI from now on?

Just highlights what we all knew already (Score:3)

by nedlohs ( 1335013 )

Clearly the only way that happens is if you have already made up your mind and just want to find some justification for it.

Which is expected from the lawyers - they are trying to win the case for their side, so obviously they are looking for evidence and precedent that supports what they want. If they use a genAI tool, it will gladly make something up for them.

But the judge - they're supposed to be looking at precedent to guide them, not making up their mind and then finding prior cases that agree with them (and again a genAI tool will gladly make some fiction for them).

Re: (Score:1)

by sabbede ( 2678435 )

I was going to say one thing, but as I thought about it, so long as the issue is not novel, an LLM might be able to produce excellent judgements. Train it on the laws and precedents, and what is it going to spit out?

Though that's just a thought. I don't want to replace judges with robots.

Re: (Score:2)

by HiThere ( 15173 )

The problem with that is the laws are really atrocious. They're designed with the apparent intent of selective enforcement. Judges are supposed to rein in excessive use of this.

N.B.: "Apparent intent": This isn't necessarily actual intent, but it's the sort of thing that happens when different groups with different agendas pass laws without bothering to repeal those in conflict with their agenda.

Re: (Score:3)

by SumDog ( 466607 )

Because it really does not work that way at all. You can train on law databases for billions of compute hours and even add 1,000 hours of real-lawyer feedback reinforcement, but at the end of the day, it's just billions of weights mapping parts of words to parts of other words. It's randomly guessing the next best word. A RAG store would at least point you to documents, but a lawyer needs to read those documents, not an LLM summary.

Cases are complex and your answer shows you don't know how the random token m
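The RAG distinction above can be sketched in a few lines: the retrieval step returns pointers to actual documents a lawyer must still read, rather than generated text. The corpus, case names, and keyword-overlap scoring below are entirely hypothetical stand-ins for a real legal database and a real ranking function:

```python
# Minimal sketch of the retrieval step in RAG: score stored documents
# against a query and return the documents themselves, not a summary.
# Case names and text are invented for illustration only.
corpus = {
    "Smith v. Jones (1998)": "property dispute boundary easement ruling",
    "Doe v. State (2005)": "criminal appeal evidence suppression",
    "Rao v. Rao (2012)": "property partition family dispute judgement",
}

def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q & set(kv[1].split())),
        reverse=True,
    )
    return [title for title, _ in scored[:k]]

# The output is a list of real documents to go read, not generated prose:
print(retrieve("property dispute judgement", corpus))
# → ['Rao v. Rao (2012)', 'Smith v. Jones (1998)']
```

Even with retrieval, the generation step can still paraphrase or distort, which is the point above: the documents, not the summary, are the source of truth.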

Fire the judge (Score:3)

by stabiesoft ( 733417 )

As a start. Then ban the judge from doing any legal work (consulting or otherwise). Make an example.

Two human philosophies (Score:1)

by gurps_npc ( 621217 )

1) No harm no foul.

2) You do the crime, you do the time.

Surprisingly, it seems the DNC are the "do the crime, do the time" people in this generation. The GOP has embraced no harm, no foul, and even pardoned people.

Us Democrats are tough on crime. We care about the principle, not the result.

I personally am shocked that a judge would think it was OK to use AI at all when crafting his argument. Just do not do that. Do the job you were paid to do, you cannot hire someone else to do it, nor can you get an AI t
