News: 0180503223

  Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life (Terry Pratchett, Jingo)

Google AI Overviews Put People at Risk of Harm With Misleading Health Advice (theguardian.com)

(Friday January 02, 2026 @04:30PM (msmash) from the PSA dept.)


A Guardian investigation published Friday found that Google's AI Overviews -- the generative AI summaries that appear at the top of search results -- are [1]serving up inaccurate health information that experts say puts people at risk of harm. The investigation, which came after health groups, charities and professionals raised concerns, uncovered several cases of misleading medical advice despite Google's claims that the feature is "helpful" and "reliable."

In one case described by experts as "really dangerous," Google advised people with pancreatic cancer to avoid high-fat foods, which is the exact opposite of what should be recommended and could jeopardize a patient's chances of tolerating chemotherapy or surgery. A search for "liver blood test normal ranges" produced masses of numbers without accounting for the nationality, sex, ethnicity or age of patients, potentially leaving people with serious liver disease thinking they are healthy. The company also incorrectly listed a pap test as a test for vaginal cancer.

The Eve Appeal cancer charity noted that the AI summaries changed when running the exact same search, pulling from different sources each time. Mental health charity Mind said some summaries for conditions such as psychosis and eating disorders offered "very dangerous advice."

Google said the vast majority of its AI Overviews were factual and that many examples shared were "incomplete screenshots," adding that the accuracy rate was on par with featured snippets.



[1] https://www.theguardian.com/technology/2026/jan/02/google-ai-overviews-risk-harm-misleading-health-information



Welcome to Web 3.0 (Score:3)

by liqu1d ( 4349325 )

Top 10s are full of SEO-gamified content. Whether any of it is the best, or even correct, is a secondary concern. That top 10 is then ignored by the AI result above it, which seems to pull from random sites; even when those sites exist, clicking through often shows they don't say what the AI claims they do. How can it have gone so wrong! I miss the 2010s internet.

Re: (Score:2)

by tlhIngan ( 30335 )

We're at Web 4.0 actually.

Web 3.0 was supposed to be blockchain all the way all the time.

Re: (Score:2)

by shanen ( 462549 )

Mod parent Funny, though he [tlhIngan] should have worked a turtle into it.

The Venn diagram joke I was actually looking for would involve sycophancy and self-hate. Of course the overlap involves the AI supporting self-harm.

I actually have a theory that the google's AI has built a 'mental model' of me as someone who dislikes the google. On that basis, it gives me bad results as flip-side sycophancy: each time Gemini gives me a bad answer, it 'thinks' it is making me happy by supporting my negative views.

Cost of scale (Score:2)

by EvilSS ( 557649 )

The AI summaries on Google searches are a prime example of the problems with trying to provide AI, for 'free', at huge scale. If you compare them to the regular version of Gemini, it's obvious they are squeezing the model as much as they can to cut down on inference costs. Considering how many searches are done on Google every day, that cost has got to be massive, even for a company like Google. The answers are so hilariously unreliable I've stopped even looking at them. A summary may give me the info I need, but I'll spend more time double-checking it than it would have saved me.

Re: Cost of scale (Score:1)

by blue trane ( 110704 )

Why does advertising still exist despite its hilariously unreliable content?

"Last night I heard that Wesson Oil doesn't soak through food. Well, that's true. It's not dishonest; but the thing I'm talking about is not just a matter of not being dishonest, it's a matter of scientific integrity, which is another level. The fact that should be added to that advertising statement is that no oils soak through food, if operated at a certain temperature. If operated at another, they all will, including Wesson Oil."

Garbage in garbage out (Score:1)

by LindleyF ( 9395567 )

AI Overviews are not an encyclopedia or an expert system. They're just a summary of what the Internet says. Guess what? The Internet is often wrong.

Re: Garbage in garbage out (Score:1)

by RightwingNutjob ( 1302813 )

Someone out there on the internet is wrong ??!

I must rectify this at once! I'm sure my usual tersely worded stern missive will do the trick!

Re: (Score:2)

by karmawarrior ( 311177 )

That's giving them far too much credit. Even if everything on the Internet was accurate, you'd expect generative AI summaries to mess up regularly because the algorithms are based upon statistics, not reasoning and logic.

If it were merely the Internet that was wrong, you'd expect a much higher proportion of AI summaries to be accurate: after all, just as Google's PageRank system made its search engine revolutionary, you'd expect similar algorithms could be used to filter out sites and pages less likely to be accurate.

common sense (Score:2)

by tiananmen tank man ( 979067 )

I searched for sunrise and Google used my location and told me sunrise at my location was at 3 p.m.

[1]https://www.amazon.ca/photos/s... [amazon.ca]

[1] https://www.amazon.ca/photos/share/OxpuK44sNRdoUxhCQGtnQnUgVKKZ4sCuLgazsdO1Lvq

Re: common sense (Score:2)

by blue trane ( 110704 )

Did you click on Dive Deeper to get it to double check? Would you be surprised if it corrected its answer as it did for me?

Re: (Score:2)

by RobinH ( 124750 )

We shouldn't have to.

Pap test is a cancer test (Score:1)

by rogersc ( 622395 )

The article says that a pap test is not a cancer test, while Google's AI said that it was. My sources say that a pap test is a cancer screening test. So the article seems to be nitpicking the difference between cancerous and precancerous cells.

You want it to stop? (Score:3)

by taustin ( 171655 )

Prosecute the CEO for practicing medicine without a license.

Pity it will never happen.

Re: You want it to stop? (Score:1)

by RightwingNutjob ( 1302813 )

"Disclaimer: Google, its subsidiaries, and corporate affiliates do not provide medical advice."

Right up there with "Caution: contents hot" on coffee cups.

This is America. No one will stop you from wasting your hard-earned currency on quack pills, lottery tickets, and the like.

Don't listen to any of this (Score:2)

by smooth wombat ( 796938 )

All you have to do to avoid being infected is [1]just be healthy [forbes.com].

[1] https://www.forbes.com/sites/joshuacohen/2026/01/02/ozs-just-be-healthy-message-to-counter-flu-sparks-controversy/

Happened to me today (Score:2)

by maiden_taiwan ( 516943 )

This happened to me today. I googled the possible interactions between two particular drugs, and the AI summary said they can be dangerous to take together. Every medical website I visited said they're safe to take together. So did my pharmacist and my doctor.

Re: (Score:2)

by Krishnoid ( 984597 )

Did it have little links pointing to any sources to cite? Sometimes the AI summary paragraphs have those links and sometimes they don't.

The problem is that's the top, default answer (Score:2)

by Somervillain ( 4719341 )

I experienced this with a medication a family member's doctor suggested. When you googled it, the VERY TOP answer said it could cause one of the very things it was supposed to prevent. That was the default Google response. Only when you scrolled down did you see the opposite. It's one thing for AI to be unreliable. It's far more concerning when it's the DEFAULT TOP ANSWER in a search result.

I'm smart enough to be skeptical, but my aunt wasn't. I don't fear them duping me. I fear them duping my extended family, especially the

Most people don't need a great deal of love nearly so much as they need
a steady supply.