
Google's 'AI Overview' Wrongly Accused a Musician of Being a Sex Offender (www.cbc.ca)

(Monday December 29, 2025 @05:44AM (EditorDavid) from the hallucination-nation dept.)


An anonymous reader shared [1]this report from the CBC:

> Cape Breton fiddler Ashley MacIsaac says he may have been defamed by Google after it recently produced an AI-generated summary falsely identifying him as a sex offender. The Juno Award-winning musician said he learned of the online misinformation last week after a First Nation north of Halifax confronted him with the summary and cancelled a concert planned for Dec. 19. "You are being put into a less secure situation because of a media company — that's what defamation is," MacIsaac said in a telephone interview with The Canadian Press, adding he was worried about what might have happened had the erroneous content surfaced while he was trying to cross an international border...

>

> The 50-year-old virtuoso fiddler said he later learned the inaccurate claims were taken from online articles regarding a man in Atlantic Canada with the same last name... [W]hen CBC News reached him by phone on Christmas Eve, he said he'd already received queries from law firms across the country interested in taking it on pro bono.



[1] https://www.cbc.ca/news/entertainment/ashley-macisaac-ai-accusation-9.7026786



Misinformation is the new information! (Score:2)

by TheMiddleRoad ( 1153113 )

Lies are the new truth, and it's not even on purpose. Evil companies pump out AI-generated pages by the galaxy-load to attract clicks for ads, so whatever they scrape, they repeat. Google shoves to the top whatever it sees the most. Eventually we will need private, paid-for, human-curated internets.

Re: (Score:2)

by houstonbofh ( 602064 )

How they do it does not change accountability for doing it. I hope he wins bank!

Re: (Score:2)

by PPH ( 736903 )

It's not how. It's why. If they can't prove intent, what sort of judgement do you think he'll get?

Re: (Score:2)

by swillden ( 191260 )

>> It's not how. It's why. If they can't prove intent, what sort of judgement do you think he'll get?

> It will depend on the amount of harm he can demonstrate.

Not under US law (not sure about Canada). He's a public figure so defamation requires "actual malice", which means that the defamer knew it was false or published it with a reckless disregard for the truth. Since this was a case of mistaken identity, it's going to be very hard to prove knowledge of falsity or reckless disregard. Without that, a trial would never get to the point of determining damages.

Re:Misinformation is the new information! (Score:4, Interesting)

by Dragonslicer ( 991472 )

> Since this was a case of mistaken identity, it's going to be very hard to prove knowledge of falsity or reckless disregard.

The summary says that it was someone with the same last name but a different first name. It's a pretty reasonable argument for reckless disregard when simply checking the first names of the two people would have solved the mistaken identity problem.

Re: (Score:3)

by sjames ( 1099 )

On the other hand, AIs are well known for hallucinating and Google certainly has enough expertise to be well aware of that, so that might constitute recklessness.

Re: (Score:2)

by bsolar ( 1176767 )

> It's not how. It's why. If they can't prove intent, what sort of judgement do you think he'll get?

In Canada "lack of intent" is not a defense against defamation. The possible defenses are:

Truth: does not apply as the statement in question is agreed to be false.

Fair comment: does not apply as the statement is clearly not presented as opinion but fact.

Privilege: does not apply as this defense mainly covers public proceedings and this is not one.

Responsible communication: this applies if the matter is considered of public interest, but it requires the defendant having exercised responsible diligence, which

Re: (Score:2)

by ClickOnThis ( 137803 )

Turning lies into "truths" is not a new problem. It goes back centuries.

I'm reminded of the book Nexus by Yuval Noah Harari, which discusses the consequences of (mis)information handled by currently emerging AI technologies. He gives an overview of the history of information handling in human society, leading up to the present. In short, humans have struggled to find a balance between more central control of information (which helps to maintain order) and more distributed systems (which support self-correction).

Re: (Score:2)

by martin-boundary ( 547041 )

Don't be so melodramatic! Google's AI probably just trained on the latest Epstein files, and "generalized" (I'm generalizing)

Re: (Score:2)

by Mr. Dollar Ton ( 5495648 )

Implausible. If that were the case, it would have accused the guy of being a ___ ___, and not a sex offender.

Re: (Score:2)

by Mr. Dollar Ton ( 5495648 )

Why are you replying to me?

And why are you pretending to quote something that I never said?

We see that you have people living in your head rent-free and all, but please, make your own post about it.

Q: The difference between a violin and a fiddle? (Score:3)

by hackertourist ( 2202674 )

A: The cost of the lessons!

Only a fool believes... (Score:2)

by MpVpRb ( 1423381 )

...google AI overview without verification

It has a long history of errors

Re: (Score:2)

by gweihir ( 88907 )

There are tons of fools around though.

Re: (Score:2)

by Sebby ( 238625 )

> There are tons of fools around though.

Most of them working at Alphabet, obviously.

To be fair (Score:2)

by Luckyo ( 1726890 )

To be fair, the error made here is one that a human could reasonably make as well.

So long as it's an honest mistake, I doubt there's much of a case to be had here. They'll probably have to settle, because that's how this usually goes, and there's novelty in the case in that the error was made by an AI rather than a human. But the error of "two people share the same name, and one of them is X" is indeed a reasonable one to make.

Re: (Score:2)

by fleeped ( 1945926 )

I think that's easy to fix. Just set the parameter "has_an_army_of_expensive_lawyers" to true for anybody, not just Disney, Musk, Trump, Zuckerberg, Bezos et al, and the AI will start being a bit more careful in what it spouts for people.

Misplaced expectations (Score:5, Insightful)

by gurps_npc ( 621217 )

No it is not. That error is fine for an individual blogger to make.

But information services are supposed to have this thing called fact checking.

The fact their business model forgoes it does not remove their responsibility. If you design a new vehicle that does not have brakes, you do not get a pass on crashes.

Re: (Score:2)

by Luckyo ( 1726890 )

Nonsense. Search doesn't "fact check". Search delivers a list of things that fit the search criteria. That's it.

Re: (Score:3)

by organgtool ( 966989 )

In the past, search delivered a list of links and excerpts from websites related to the user-generated query, and contained no unique material generated by the search provider. However, an AI-generated summary contains information gathered by the search provider's algorithms and is published as a statement of fact by the search provider. It may be possible that they have a claim in the ToS that they have no liability for the veracity of that information, but simply making a claim doesn't mean it

Re: (Score:2)

by Luckyo ( 1726890 )

> is published as a statement of fact by the search provider.

This is a self-evident lie. Every Google AI embedded summary includes a footnote which clearly states the following:

> AI responses may include mistakes.

Followed by a link to this support article.

Your entire argument hinges on an obvious falsehood.

Re: (Score:2)

by Luckyo ( 1726890 )

And formatting fail. This is the support article in question:

[1]https://support.google.com/web... [google.com]

[1] https://support.google.com/websearch/answer/14901683?hl=en-FI&visit_id=639025977468564536-250476963&p=ai_overviews&rd=1#zippy=%2Chow-to-control-your-data

Re: (Score:2)

by ItsJustAPseudonym ( 1259172 )

From the article, here is Google's "oopsie" statement:

> Google Canada spokesperson Wendy Manton issued a statement saying Google's "AI overviews" are frequently changing to show what she described as the most "helpful" information. "When issues arise — like if our features misinterpret web content or miss some context — we use those examples to improve our systems, and may take action under our policies."

So they are saying they might evolve the results.

This guy is a musician whose public profi

Re: (Score:2)

by Luckyo ( 1726890 )

It's a corpospeak copypasta saying "we admit no fault, we improve all the time".

If you're gleaning any deep meaning from a corpospeak copypasta, you have a problem.

Re: (Score:2)

by gweihir ( 88907 )

This is not search though. This is a summary and hence falls under the fact-checking requirement. As Google is about to find out.

Re: (Score:2)

by Luckyo ( 1726890 )

As noted above, whatever "requirement" you believe exists is unlikely to exceed a "would a human make a similar error" standard.

And in this case, it's self-evident that humans would (and many times in the past have) made the same error. Even in journalism, where there's an editorial standard, etc.

Re: Misplaced expectations (Score:2)

by Anamon ( 10465047 )

To be clear, they do *not* have the same name. It seems extremely unlikely that a human would've made that mistake. Conflating two people with different names because part of one person's name was mentioned in the vicinity of part of another person's name is not a mistake humans tend to make.

If they did, it would clearly be gross negligence and very likely result in consequences. As it should here, because "the algorithm did it" has never been, and will never be, a loophole to escape liability.

Re: To be fair (Score:2)

by hey! ( 33014 )

What's interesting here is that as a professional musician, this guy is a public figure, and the "actual malice" standard for defamation applies: a standard that was designed when defamation could only be done by a human being.

This requires the defendant to make a defamatory statement either (1) knowing it is untrue or (2) with reckless disregard for the truth.

Neither condition applies to the LLM itself; it has no conception of truth, only linguistic probability. But the LLM isn

Re: (Score:2)

by gweihir ( 88907 )

Google knowingly and recklessly put a mechanism in place that can make statements that are untrue or disregard the truth. I think that qualifies nicely, even if a bit indirectly. If you pay somebody to defame somebody else for you, you are in hot water as well. Same principle.

Re:To be fair (Score:4, Insightful)

by ArchieBunker ( 132337 )

Notice how AI slop never calls Sergey Brin or Larry Page a pedophile.

Re: (Score:3)

by swillden ( 191260 )

> Notice how AI slop never calls Sergey Brin or Larry Page a pedophile.

Because there aren't any pedophiles with the same names?

Re:To be fair (Score:4, Informative)

by ArchieBunker ( 132337 )

Several people named Larry Page here [1]https://www.nsopw.gov/ [nsopw.gov]

[1] https://www.nsopw.gov/

Re:To be fair (Score:5, Insightful)

by dskoll ( 99328 )

> To be fair, the error made here is one that a human would reasonably make as well.

Certainly not. In Canada, at any rate, Ashley MacIsaac is pretty well known and nobody would have made that mistake. Also, any human putting out a statement that "$SOMEONE is a sex offender" had better make sure of their facts first; a modicum of Internet searching on a platform other than Google would have cleared this up.

Re: (Score:2)

by Luckyo ( 1726890 )

You're assuming that people are into that specific kind of fame.

Most people don't have any idea about musicians. It's not in their field of interest, just like most people on slashdot have no idea who the famous paleontologist from their nation is.

Re: To be fair (Score:2)

by devslash0 ( 4203435 )

Maybe. But a human would hopefully double-check the information before making public accusations of this magnitude.

Re: (Score:2)

by Luckyo ( 1726890 )

First time on the internet I see. Welcome.

Re: (Score:2)

by organgtool ( 966989 )

True, and yet a human would likely still be liable for making the false statement and a defense of "I heard it somewhere" will likely not get the person very far in court.

Re: (Score:1)

by Tablizer ( 95088 )

If a human goes around accusing one of being a sex offender, they can be sued for defamation. (There are certain exceptions in the US for accusations against celebrities and politicians.)

Serious question (Score:2)

by VaccinesCauseAdults ( 7114361 )

Is it possible the LLM was more likely to hallucinate that accusation because the word for the musician's instrument is also used in a slang term for the accusation?

Re: (Score:2)

by ClickOnThis ( 137803 )

Serious answer: No?

Longer answer: I think even an LLM ought to know the difference between the slang name for a musical instrument (or the act of playing it) and a sexually-deviant activity. Oh, and TFA explains that the actual sex-offender had a name similar to the fiddler.

Re: (Score:2)

by gweihir ( 88907 )

> I think even an LLM ought to know

Let's break that down: First, LLMs do not "know" things. They are not knowledge engines. And, second, you clearly do not "think" here.

Re: (Score:2)

by ClickOnThis ( 137803 )

I am pretty sure you are a human. I have seen many of your posts here, and often find myself agreeing with you. And I'm also pretty sure we both know how to think.

And that's why I'm confident in assuming you know what it means to speak figuratively. Which is what I was doing.

Re: (Score:2)

by swillden ( 191260 )

> Is it possible the LLM was more likely to hallucinate that accusation because the word for the musician's instrument is also used in a slang term for the accusation?

The LLM didn't hallucinate the accusation. It was a case of mistaken identity, per the summary. The accusation was real, just for a different person of the same name.

Re: (Score:2)

by allo ( 1728082 )

In principle, yes. There are two main mechanisms behind it. First, word embeddings group similar words together, so that "cat", "dog", and "bird" sit closer to each other than "cat", "chair", and "winter"; a text embedding lets you calculate similarity by meaning (instead of, e.g., alphabetical order). Second, self-attention weights the existing words in the context against the words about to be generated.

In practice this is unlikely, and even smaller LLMs won't get confused by that. Many also understand these things well enough to co
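The word-embedding point above can be sketched with a toy example. The three-dimensional vectors below are made-up numbers purely for illustration; real models learn embeddings with hundreds of dimensions:

```python
import math

# Toy 3-d "embeddings" (invented numbers, purely illustrative).
embeddings = {
    "cat":   [0.90, 0.80, 0.10],
    "dog":   [0.85, 0.75, 0.20],
    "chair": [0.10, 0.20, 0.90],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, ~0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# In this space, "cat" is far closer to "dog" than to "chair".
assert cosine(embeddings["cat"], embeddings["dog"]) > \
       cosine(embeddings["cat"], embeddings["chair"])
```

With these toy vectors, "cat" scores close to "dog" and far from "chair", which is the kind of semantic proximity that could, in principle, nudge a model toward related words.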

Good luck... (Score:2)

by nospam007 ( 722110 ) *

...proving malicious intent.

Re: Good luck... (Score:3)

by YetanotherUID ( 4004939 )

Wrong. Even in the U.S., "actual malice" means only reckless disregard for truth or falsity. The laws are MUCH more plaintiff-friendly in Canada.

Re: (Score:2)

by gweihir ( 88907 )

As this has happened before and Google will know that, no problem.

Consequences? LOL! (Score:3)

by dskoll ( 99328 )

Of course there will be no meaningful consequences for Google. Doing something that would ruin the average shmo is just swatted away as an annoyance by the oligarchs.

Re: (Score:1)

by Tablizer ( 95088 )

I hope this guy can successfully sue the royal pants off Google. Google won't change unless they get a big juicy metal boot to their wallet.

Boo hoo (Score:1)

by Bahbus ( 1180627 )

An LLM cannot defame someone. And unless you can prove that Google set up their LLM specifically to defame the artist, there isn't a case to be had. If anyone should be sued, it's the venue, for cancelling based on faulty and unproven information.

Re: (Score:2)

by ClickOnThis ( 137803 )

> An LLM cannot defame someone.

That depends on whether you grant the LLM any agency. It can certainly output something that is defamatory. Which in fact happened in this case.

> And unless you can prove that Google setup their LLM specifically to defame the artist, there isn't a case to be had.

I don't think the bar is that high. If you can prove that Google failed to take adequate precautions against their LLM doing something like this, then it seems to me that you can base your suit on negligence.

> If anyone should be sued it's the venue for canceling based off of faulty and unproven information.

The venue (the Sipekne'katik First Nation) already apologized to MacIsaac quite sincerely, and it appears MacIsaac accepted the apology, because he'd still like

Re: (Score:2)

by Bahbus ( 1180627 )

> That depends on whether you grant the LLM any agency. It can certainly output something that is defamatory. Which in fact happened in this case.

Eh, it can output material that can be used to defame someone, but it falls short. You need 4 things:

- False statement, which they have.

- Published/communicated, which, yeah.

-Fault, this is where it's unlikely. Depending on whether he is counted as a "public figure" or not, they need either proof of malice or proof of negligence. As much as people would like to believe this is Google being negligent, I'm more than positive they've worked really hard to try and prevent cases like this from happening. And there

Re: (Score:2)

by ClickOnThis ( 137803 )

You clipped the part of my post where I address potential negligence on Google's part, and then you appear to raise the issue as though I hadn't. Here's a reminder of what I posted:

> If you can prove that Google failed to take adequate precautions against their LLM doing something like this, then it seems to me that you can base your suit on negligence.

You are confident that Google "worked really hard" to keep this from happening. And yet it did. So, it appears to me that Google's precautions were inadequate.

Even if Google made a good-faith effort to keep their LLM from making defamatory statements, the courts could still find them negligent, because their efforts weren't enough.

Re: (Score:2)

by Bahbus ( 1180627 )

I was acknowledging your address of it with my questions. I just didn't feel like breaking the continuity of my post to quote different portions of yours.

> And yet it did. So, it appears to me that Google's precautions were inadequate.

And it probably will happen again, despite whatever fixes I'm sure they implemented immediately after finding out about this. It's not about whether the precautions were adequate to prevent the occurrence; it's about whether they were reasonable precautions. And, like I said, given this hasn't happened more often or with bigger, actually well-known celebrities

Re: (Score:3)

by gweihir ( 88907 )

If you set up and run a machine that defames somebody, you are 100% liable. Seriously. This does not even need computers involved.

Re: (Score:2)

by Bahbus ( 1180627 )

> If you set up and run a machine that defames somebody, you are 100% liable.

Sure, if I set it up negligently or to purposely defame someone. Can you confidently say Google took no precautions against this sort of thing? I can't. If they didn't take any precautions, this would happen way more often and with bigger, real celebrities. And, if the defamed person rises to "public figure" status, then you'd need to prove that I did it maliciously . I'm not sure if MacIsaac counts as a "public figure", but if they do...do you honestly think Google maliciously targeted a random minor Canadi

Re: (Score:2)

by gweihir ( 88907 )

This is a professional product, so the standard for simple negligence applies. Google is quite guilty of that, because their AI tools have done it before. All it needs is that their thing did it (it clearly did) and that they did not have adequate safeguards in place (they clearly did not). And three years into this AI hype they cannot even argue that this was surprising behavior, which is basically the only defense they have left.

Re: (Score:2)

by Bahbus ( 1180627 )

> This is a professional product, so the standard for simple negligence applies.

Which doesn't matter if MacIsaac counts as a "public figure", because they'll need proof of malice instead.

> Google is quite guilty of that, because their AI tools have done it before.

Prior guilt does not prove current guilt.

> did not have adequate safeguards in place

Adequate doesn't matter. It's whether or not reasonable safeguards are in place, which you have zero factual intel about because you haven't seen the source code.

> And 3 years in this AI hype they cannot even argue that this was surprising behavior, which is basically the only defense they have left.

There are probably plenty of other defenses beyond arguing fault. Could easily argue that the output of Google's AI Overview counts as neither a form of publication nor communication. If that doesn't f

Re: (Score:2)

by gweihir ( 88907 )

>> Google is quite guilty of that, because their AI tools have done it before.

> Prior guilt does not prove current guilt.

What is it with you mindless, insightless idiots? Obviously I was pointing out that Google _knows_ their machine can do it because it has done it before. Are you completely dumb or just a massive Stockholm syndrome sufferer? At least make a MINIMAL attempt.

Attention everyone suing Google for AI slop... (Score:1)

by mrsam ( 12205 )

... please form an orderly line on the right.

Google [1]has already been sued for this very thing [bloomberglaw.com].

[1] https://news.bloomberglaw.com/litigation/google-sued-by-robby-starbuck-over-false-ai-crafted-biography

Important Information missing in the summary (Score:2)

by allo ( 1728082 )

You can stop speculating how the AI failed. The key quote of the article is: "The 50-year-old virtuoso fiddler said he later learned the inaccurate claims were taken from online articles regarding a man in Atlantic Canada with the same last name."

Looks like most of the information was right (or as right as the claims against his double are), but it was linked to the wrong person with that name. I think such things happened with Google's knowledge graph well before LLMs were a thing. I also read some time ago an int

wrong kind of fiddler (Score:2)

by DuroSoft ( 1009945 )

wrong kind of fiddler
