
Judge Slams Lawyers For 'Bogus AI-Generated Research'

(Wednesday May 14, 2025 @05:20PM (msmash) from the brave-new-world dept.)


A California judge slammed a pair of law firms [1]for the undisclosed use of AI after he received a supplemental brief with "numerous false, inaccurate, and misleading legal citations and quotations." From a report:

> In a ruling submitted last week, Judge Michael Wilner imposed $31,000 in sanctions against the law firms involved, saying "no reasonably competent attorney should out-source research and writing" to AI, as pointed out by law professors Eric Goldman and Blake Reid on Bluesky.

>

> "I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them -- only to find that they didn't exist," Judge Wilner writes. "That's scary. It almost led to the scarier outcome (from my perspective) of including those bogus materials in a judicial order."



[1] https://www.theverge.com/news/666443/judge-slams-lawyers-ai-bogus-research



Yup (Score:3)

by Ol Olsoc ( 1175323 )

In a few cases, I've seen decent AI generated texts.

Most of the time, it seems to just make shit up, as this example proves.

I wonder if Altman et al would be willing to place their freedom on the line, in such a case?

Re: (Score:2)

by Randseed ( 132501 )

It boarded the [1]USS Make Shit Up [youtu.be] immediately.

[1] https://youtu.be/GUx2C7Pn7ZE?si=kPGx_rEPFRUjW587

Re: (Score:2)

by Fons_de_spons ( 1311177 )

AI is getting better. I am surprised at the progress they made. Sure it is hyped, but it is here to stay.

Re: (Score:2)

by Ol Olsoc ( 1175323 )

> AI is getting better. I am surprised at the progress they made. Sure it is hyped, but it is here to stay.

What do you think of the AI when it references itself? Truth will become quite malleable. Groups are already poisoning AI - so I for one will be quite skeptical about AI ascending over all other fields.

But let's say that problem is overcome -- what would be the rationale for any human getting an education when you just speak into the computer and AI does it all for you?

Re: (Score:2)

by OrangeTide ( 124937 )

You're probably right. but I still hope that AI ends up on the same list as 3D Blu-ray and Livestrong bracelets.

Re: (Score:2)

by hey! ( 33014 )

What I've been saying all along is that the biggest problem with the technology isn't going to be the technology per se. It's going to be the people who use it being lazy, credulous, and ignorant of the technology's limitations.

The bottom line is that as it stands LLM isn't any good for what these bozos are using it for: saving labor creating a brief. You still have to do the legal research and feed it the relevant cases, instructing it not to cite any other cases, then check its characterization of that c

Re: (Score:2)

by geekmux ( 1040042 )

> In a few cases, I've seen decent AI generated texts.

> Most of the time, it seems to just make shit up, as this example proves.

> I wonder if Altman et al would be willing to place their freedom on the line, in such a case?

Altman et al are the new John Moses Browning. They might be credited for “inventing” AI, but they won’t be blamed for what happens next.

The human mind invented the double-edged sword. We should probably remember we’re teaching AI to get that irony even if we don’t.

AI is the new priesthoo ... (Score:5, Insightful)

by drnb ( 2434720 )

AI is the new priesthood we go to for answers, and place too much trust in. Very typical behavior for humans, another iteration on the appeal to authority fallacy.

Re: AI is the new priesthoo ... (Score:4, Funny)

by devslash0 ( 4203435 )

Thanks. That's my new favourite AI dig now.

Plug & PrAI

Re: (Score:2)

by drnb ( 2434720 )

> Plug & PrAI

That is a good one.

On the plus side ... (Score:2)

by drnb ( 2434720 )

On the plus side, the attorneys' billable hours were rather low.

Re: (Score:1)

by Big Hairy Gorilla ( 9839972 )

That's not how it works. Not how lawyers work. They get a low-paid lackey, paralegal, or whatever they call the junior junior people to do the work at, say, $25/hour, then they bill you $375/hour. Now they would pocket the $25 and have an effective billing rate of $400/hour. You don't think they'd pass the savings along to the customer, I mean chump, do you?

You probably just meant that as a joke, but that's how it works out in the wild.

Re: (Score:2)

by registrations_suck ( 1075251 )

Never really understood why anyone would think anyone else would "pass on" savings to them.

Why would they?

If you're willing to pay me $350/hr, why would I charge you less, regardless of how much I lower my costs?

Why create savings if I am just going to "pass it along" ?

Re: (Score:2)

by Big Hairy Gorilla ( 9839972 )

Oh, exactly... remember we are talking about lawyers here.

This model has been adopted in many areas, Dentistry for one.

In the last few years I've noticed TONS of effing dentist shops around town popping up.

You hire 4 dental hygienists at bottom dollar, make sure you have a lot of treatment rooms, and book in people like crazy, and the entry level people do the work, and the dentist walks around from room to room to inspect the work. Cha-ching.

So the dentist is still the one doing the drilling and root canal

Re: (Score:2)

by DamnOregonian ( 963763 )

Capitalism 101.

You pass on the savings so that you're ahead, competitively, of your rivals.

There are a hundred asterisks required for this logic to actually hold, since here in real life, capitalism isn't as pure as some would pretend that it is, but it does basically hold in most situations.

If you coordinate with your competitors to NOT lower prices, that's price-fixing, and against the law.

Not surprising (Score:2)

by Anachronous Coward ( 6177134 )

I've developed software for lawyers. I can confirm that at least some of them are really dumb and lazy.

Re: Not surprising (Score:1)

by CustomBuild ( 2891601 )

Turns out lawyers are human, and as such represent the full gamut of personality, at scale.

I don't get it. Law firms should be (Score:3)

by hdyoung ( 5182939 )

scared sh*tless of this. Let's say a judge gets fooled by hallucinated crap submitted by lawyers and puts in some sort of wrong/flawed judicial order that results in a death, or some sort of massive financial or reputational damage. People would probably be on the hook for real prison time or 10^(insert many zeros) dollars of liability, and I'd be surprised if any insurance policy would cover it.

Re: (Score:3)

by taustin ( 171655 )

Any litigator worth the name will be checking every single reference his opponent cites at this point, even if the judge doesn't. You can't ask for an easier win.

In an adversarial court system, this is a self correcting problem.

I am legitimately terrified (Score:1)

by rsilvergun ( 571051 )

Of AI-powered lawyers. Eventually the tech will work and it will stop hallucinating.

That's going to put millions of lawyers out of work and while that sounds great at first those guys aren't going to just go quietly into that good night.

It's like we're going to have tens of thousands of people trained to kill with no jobs. When we do that with the military we go out of our way to make sure ex-military have jobs but we're obviously not going to do that with lawyers.

So they're going to start looki

Re: (Score:2)

by Big Hairy Gorilla ( 9839972 )

tell that to the AI judge.

I'll leave it to 93escortwagon to insert appropriate Futurama quote.

Re: (Score:3)

by XopherMV ( 575514 )

Maybe it'll start working without errors aka "hallucinations." The problem is that the technology is fundamentally disconnected from reality. It reads the writing that we've bothered to put online. But, AI has no means of independently verifying that writing. It has no independent means of verifying the truth or falsehood of a section of text. I don't see a good means for it to get that data without first changing the way in which it interacts with the real world.

Re: (Score:2)

by rknop ( 240417 )

^^^ This.

The whole terminology of "hallucinations" is misleading. It suggests some sort of anomaly, a failure to function normally. But it's not. It's just LLMs behaving exactly the way they are designed to, and not happening to give the right answer.

We have to remember that LLMs, as *designed*, are bullshit generators : [1]https://link.springer.com/arti... [springer.com]

Just like college student papers, sometimes that bullshit is correct. If the students are really good at bullshitting, it's often correct. But that doesn't chang

[1] https://link.springer.com/article/10.1007/s10676-024-09775-5
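The "behaving exactly as designed" point can be sketched with a toy next-word sampler (every word, probability, and name below is invented for illustration, not taken from any real model): the generator only knows which word tends to follow which, and nothing in the loop ever checks whether the output corresponds to a real case or a real fact.

```python
import random

# Toy "language model": probabilities of the next word given the previous
# one, as if learned purely from word co-occurrence in training text.
MODEL = {
    "the":   [("court", 0.5), ("case", 0.5)],
    "court": [("held", 0.6), ("cited", 0.4)],
    "case":  [("held", 0.3), ("cited", 0.7)],
    "held":  [("that", 1.0)],
    "cited": [("Smith", 0.5), ("Jones", 0.5)],
}

def next_word(word, rng):
    """Sample the next word by probability alone -- there is no step
    anywhere that verifies the output against reality."""
    words, probs = zip(*MODEL[word])
    return rng.choices(words, weights=probs, k=1)[0]

def generate(start, length, seed=0):
    """Generate up to `length` words, stopping only at a dead end."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and out[-1] in MODEL:
        out.append(next_word(out[-1], rng))
    return " ".join(out)

print(generate("the", 4))
```

Whether the sampled citation happens to be real is pure accident of the training distribution, which is the Springer paper's "bullshit generator" point in miniature.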

Good enough is always good enough (Score:3)

by rsilvergun ( 571051 )

This is something old people always have a really fucking hard time with. Especially anyone who, thanks to survivor bias, has never faced a lot of layoffs.

I would point out that outsourcing and layoffs are coming and that we need to prepare for it and position ourselves so that we aren't as likely to be on the chopping block, and the old folks would always say that we were utterly irreplaceable. I would then watch as they are forced into retirement at the age of 55 with no job prospects whatsoever and re

Re: (Score:2)

by techno-vampire ( 666512 )

> Eventually the tech will work and it will stop hallucinating.

I don't think you understand that in AI, hallucination is just a euphemism for mistake. In order for an AI to stop hallucinating it would have to stop making mistakes, ever, and that's not very likely to happen. Maybe you should take some time to find out what AI really is and how it works so that you can stop putting your foot in your mouth.

Now... (Score:3)

by larwe ( 858929 )

... what about all the other cases where the judge didn't think s/he needed to check the references? I guarantee you AI garbage is already in published judgements. There's no way it couldn't be.

Re: (Score:3)

by laughingskeptic ( 1004414 )

I'm sure you're correct. If only all legal decisions were easily accessible by the public so that this could be investigated.

Re: (Score:2)

by geekmux ( 1040042 )

> ... what about all the other cases where the judge didn't think s/he needed to check the references? I guarantee you AI garbage is already in published judgements. There's no way it couldn't be.

Then I suppose it won’t be long before we’re looking at other elements of our legal system that should be automated.

Like lawyers. Not like they’re leaning on their education or experience anymore.

And if judges are having a hard time judging what’s real or not, then perhaps they’re next.

Re: (Score:2)

by larwe ( 858929 )

> And if judges are having a hard time judging what’s real or not, then perhaps they’re next.

Judges currently expect that lawyers use fact-based research tools to find precedents and other related case law from *real records*. There is no slack in the system for judges to have to independently verify that everything in a filing is factual vs being an AI hallucination.

This is a cascading cluster. Precedents will be set based on AI hallucinations. Precedents probably already HAVE been set based on AI hallucinations. The next round of research will dig up the *real* judgements containing *fake* data a

Going to get worse, until (Score:3)

by oldgraybeard ( 2939809 )

You add two zeros to the sanctions against the law firms ("imposed $31,000 in sanctions against the law firms") and submit the lawyers who did not even review things for legal action/disbarment.

Re: (Score:2)

by hyades1 ( 1149581 )

Exactly what I was thinking. Give them a fine a little higher than their coffee budget.

Re: (Score:3)

by taustin ( 171655 )

A more likely escalation is suspended or revoked licenses, or even criminal prosecution.

And that's as it should be.

How long (Score:2)

by JustAnotherOldGuy ( 4145623 )

How long before the AI generates and posts all the supporting "decisions" it cited to bolster its 'case'? It's a trivial step, really. All it would need is write access to a few places.

So, it'll try to hack into whatever it 'needs' to in order to gain access and it'll succeed at least some of the time. AI generated pollution of the internet at large (as if it wasn't already filled with AI slop).

Maybe it'll generate fake personalities, complete with backstories, history, etc to 'back up' its fiction. Who kno

MAGA / Conservatives should be happy (Score:2)

by fahrbot-bot ( 874524 )

> "I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them -- only to find that they didn't exist," Judge Wilner writes.

The judge did his own research. :-)

Fundamentally Similar to Fake Quotes (Score:5, Interesting)

by careysub ( 976506 )

I have some recent experience with trying to use multiple chatbots to find quotations on particular topics.

It seemed a promising approach -- knowledge of everything that has ever been published (more or less) and semantic matching, not just text matching. And I got a list of good to great quotes right off the bat.

Only problem: none of them were real (though they were falsely attributed to people). So I asked only for quotes that had sources, and I got a list of good quotes with sources.

Only problem, none of the sources were real either. I could never get any of them to stop just making up quotes.

It may not seem that looking up quotes is the same as fabricating legal decisions, but it is -- especially to the LLM. It is all just tokens to the LLM and a fake legal ruling and citation is no different from a fake quote and reference.

This is not even the first time (Score:2)

by Vlad_the_Inhaler ( 32958 )

This has happened before, I think it was some time last year but it may even have been in 2023.

Those crappy lawyers should have known that AI results need to be checked at the very least, that is what paralegals do. I suppose a second possibility was that the paralegal was the one who "delegated the research" but in that case, they are toast.

Re: (Score:3)

by taustin ( 171655 )

Citing legal precedents goes back to long before AI, or even the internet. I recall reading about a case involving maritime law, in which one attorney cited G. Gordon Liddy's autobiography because Liddy once owned a boat, and the other cited a case that not only didn't exist, but was supposed to be in a volume of case law that didn't exist. (The judge warned both attorneys to not run with scissors, and stop filing briefs written with crayons.)

There's nothing new about incompetent, stupid attorneys making sh

Re: This is not even the first time (Score:1)

by Tschaine ( 10502969 )

June of '23: [1]https://apnews.com/article/art... [apnews.com]

[1] https://apnews.com/article/artificial-intelligence-chatgpt-fake-case-lawyers-d6ae9fa79d0542db9e1455397aef381c

Plot twist: the judge doesn’t exist (Score:3)

by VaccinesCauseAdults ( 7114361 )

The story is AI generated and the judge doesn’t exist.

Well okay, maybe not, but it would be awesome.

Re: (Score:2)

by Ecuador ( 740021 )

Parent post seems AI generated to me, I suspect the poster does not exist.

Re: (Score:2)

by VaccinesCauseAdults ( 7114361 )

Okay fair cop. I’ll pay that. That was actually pretty funny.

However, my knowledge is only updated as of November 2024. Let me know if you have any other questions. Is there anything else you’d like to know?

Work smarter not harder (Score:2)

by RogueWarrior65 ( 678876 )

Fundamentally, I have no problem using AI to do research as long as you verify the results. I regularly use AI to research things like programming questions or to check somebody else's claims about something. I look at it as a far more efficient google search. Instead of wading through a lot of search results that are often very dated, I get something pithy. But I test the results. If the AI gives you a result that should in theory be correct but isn't because the programming language doesn't have the
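The "test the results" habit can be shown in a few lines. Suppose an assistant suggests a helper function (the `median` function below is a hypothetical AI suggestion made up for this sketch, not anything from the story): treat it as untrusted and probe the edge cases before using it.

```python
# Hypothetical AI-suggested helper -- do not trust it until it is tested.
def median(values):
    """Claimed by the assistant to return the median of a list of numbers."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    # Odd length: middle element; even length: average the middle pair.
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

# Don't take the claim on faith: check the cases most likely to break.
assert median([3, 1, 2]) == 2          # odd length
assert median([4, 1, 3, 2]) == 2.5     # even length averages the middle pair
assert median([7]) == 7                # single element
```

If any assertion fails, the "pithy" answer was wrong, and you found out in seconds instead of in production.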

Generative "AI" is an oxymoron. (Score:2)

by butlerm ( 3112 )

Generative "AI" is an oxymoron. There is no there there - it is just delusions or possible delusions all the way down. That makes it good for entertainment and in the hands of people who are diligent and clearly smarter than the AI, but for most serious applications AI is somewhere between dangerous and dangerously useless.

Not a fine, disbarred and maybe felony charges (Score:2)

by joe_frisch ( 1366229 )

There have been enough cases in the media that any lawyer should know that AI generated statements can be false. In addition the EULAs for AI almost certainly say that they cannot be used in this way, and if attorneys aren't reading the EULAs then what is the point?

This sort of mistake can lose someone their life's savings, send them to prison. The AI did not make this statement, the attorneys did, and so they made false claims in court. At an absolute minimum they should be disbarred, and possibly charge

It's not that they used AI (Score:2)

by Local ID10T ( 790134 )

It's that they filed falsified documents with the court.

It does not matter if it is AI generated, intern generated, or written by your drunk nephew; whatever an attorney submits to the court is their responsibility.
