Google AI Overviews Put People at Risk of Harm With Misleading Health Advice (theguardian.com)
- Reference: 0180503223
- News link: https://tech.slashdot.org/story/26/01/02/188203/google-ai-overviews-put-people-at-risk-of-harm-with-misleading-health-advice
- Source link: https://www.theguardian.com/technology/2026/jan/02/google-ai-overviews-risk-harm-misleading-health-information
In one case described by experts as "really dangerous," Google advised people with pancreatic cancer to avoid high-fat foods, which is the exact opposite of what should be recommended and could jeopardize a patient's chances of tolerating chemotherapy or surgery. A search for liver blood test normal ranges produced masses of numbers without accounting for nationality, sex, ethnicity or age of patients, potentially leaving people with serious liver disease thinking they are healthy. The company also incorrectly listed a pap test as a test for vaginal cancer.
The Eve Appeal cancer charity noted that the AI summaries changed when running the exact same search, pulling from different sources each time. Mental health charity Mind said some summaries for conditions such as psychosis and eating disorders offered "very dangerous advice."
Google said the vast majority of its AI Overviews were factual and that many examples shared were "incomplete screenshots," adding that the accuracy rate was on par with featured snippets.
[1] https://www.theguardian.com/technology/2026/jan/02/google-ai-overviews-risk-harm-misleading-health-information
Cost of scale (Score:2)
The AI summaries on Google searches are a prime example of the problems with trying to provide AI, for 'free', at huge scale. If you compare it to the regular version of Gemini, it's obvious they are squeezing it as much as they can to cut down on inference costs. Considering how many searches are done on Google every day, that cost has got to be massive, even for a company like Google. The answers are so hilariously unreliable I've stopped even looking at them. It may give me the info I need, but I'll spend more time verifying the answer than I would have saved.
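For a sense of why the inference bill matters, here is a rough back-of-envelope sketch in Python. Every number in it is an assumption invented for illustration (searches per day, the share of queries that trigger an AI Overview, the cost per generated summary); none of these figures comes from Google or from the article.

```python
# Back-of-envelope estimate of daily AI Overview inference cost.
# All inputs below are assumptions for illustration only.

searches_per_day = 8.5e9      # assumed global Google searches per day
overview_fraction = 0.15      # assumed share of searches that show an AI Overview
cost_per_overview = 0.003     # assumed inference cost per generated summary, in USD

daily_cost = searches_per_day * overview_fraction * cost_per_overview
yearly_cost = daily_cost * 365

print(f"~${daily_cost:,.0f} per day, ~${yearly_cost / 1e9:.1f}B per year")
```

Even with deliberately modest assumed per-query costs, the total comes out to over a billion dollars a year, which is consistent with the parent's point that there is strong pressure to squeeze whatever model is serving these summaries.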
Re: Cost of scale (Score:1)
Why does advertising still exist despite its hilariously unreliable content?
"Last night I heard that Wesson Oil doesnâ(TM)t soak through food. Well, thatâ(TM)s true. Itâ(TM)s not dishonest; but the thing Iâ(TM)m talking about is not just a matter of not being dishonest, itâ(TM)s a matter of scientific integrity, which is another level. The fact that should be added to that advertising statement is that no oils soak through food, if operated at a certain temperature. If operated
Garbage in garbage out (Score:1)
AI Overviews are not an encyclopedia or an expert system. They're just a summary of what the Internet says. Guess what? The Internet is often wrong.
Re: Garbage in garbage out (Score:1)
Someone out there on the internet is wrong?!
I must rectify this at once! I'm sure my usual tersely worded stern missive will do the trick!
Re: (Score:2)
That's giving them far too much credit. Even if everything on the Internet were accurate, you'd expect generative AI summaries to mess up regularly, because the algorithms are based on statistics, not reasoning and logic.
If it were merely the Internet that was wrong, you'd expect a much higher proportion of AI summaries to be accurate: after all, just as Google's PageRank system made its search engine revolutionary, you'd expect similar algorithms could be used to filter out sites and pages less likely to be accurate.
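To make the PageRank comparison concrete, here is a minimal power-iteration sketch in Python over a tiny made-up link graph. The graph, damping factor, and iteration count are illustrative assumptions and say nothing about how Google actually selects sources for AI Overviews.

```python
# Minimal PageRank via power iteration over a toy link graph.
# The sites and link structure below are invented for illustration.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # Dangling page: spread its rank evenly across all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

toy_graph = {
    "nhs.example": ["charity.example"],
    "charity.example": ["nhs.example"],
    "seo-blog.example": ["nhs.example", "charity.example"],
}
print(pagerank(toy_graph))  # heavily linked-to pages end up with most of the rank
```

The point of the analogy is that a link-based authority score like this could, in principle, be used to weight which sources a summary draws from; that is a separate problem from getting a language model to reason correctly over those sources.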
common sense (Score:2)
I searched for sunrise and Google used my location and told me that sunrise at my location is at 3 pm.
[1]https://www.amazon.ca/photos/s... [amazon.ca]
[1] https://www.amazon.ca/photos/share/OxpuK44sNRdoUxhCQGtnQnUgVKKZ4sCuLgazsdO1Lvq
Re: common sense (Score:2)
Did you click on Dive Deeper to get it to double check? Would you be surprised if it corrected its answer as it did for me?
Re: (Score:2)
We shouldn't have to.
Pap test is a cancer test (Score:1)
The article says that a pap test is not a cancer test, while Google's AI said that it was. My sources say that a pap test is a cancer screening test. So the article seems to be nitpicking the difference between cancer and precancerous cells.
You want it to stop? (Score:3)
Prosecute the CEO for practicing medicine without a license.
Pity it will never happen.
Re: You want it to stop? (Score:1)
"Disclaimer: Google, its subsidiaries, and corporate affiliates do not provide medical advice."
Right up there with "Caution: contents hot" on coffee cups.
This is America. No one will stop you from wasting your hard-earned currency on quack pills, lottery tickets, and the like.
Don't listen to any of this (Score:2)
All you have to do to avoid being infected is [1]just be healthy [forbes.com].
[1] https://www.forbes.com/sites/joshuacohen/2026/01/02/ozs-just-be-healthy-message-to-counter-flu-sparks-controversy/
Happened to me today (Score:2)
This happened to me today. I googled the possible interactions between two particular drugs, and the AI summary said they can be dangerous to take together. Every medical website I visited said they're safe to take together. So did my pharmacist and my doctor.
Re: (Score:2)
Did it have little links pointing to any sources to cite? Sometimes the AI summary paragraphs have those links and sometimes they don't.
The problem is that's the top, default answer (Score:2)
I experienced this with a medication a family member's doctor suggested. When you googled it, the VERY TOP answer said it could cause one of the things it was supposed to stop. That's the default Google response. When you scroll down, you see it's the opposite. It's one thing for AI to be unreliable. It's more concerning when it's the DEFAULT TOP ANSWER in a search result.
I'm smart enough to be skeptical, but my aunt wasn't. I don't fear them duping me. I fear them duping my extended family, especially the older relatives.
Welcome to Web 3.0 (Score:3)
Top-10 lists are full of SEO-gamed content; whether it's the best, or even correct, is a secondary concern. The top 10 is then ignored by the AI result above it, which seems to pull from random sites, and even when those sites exist, clicking through shows they often don't say what the AI claims they do. How can it have gone so wrong! I miss the 2010s internet.
Re: (Score:2)
We're at Web 4.0 actually.
Web 3.0 was supposed to be blockchain all the way all the time.
Re: (Score:2)
Mod parent Funny, though he [tlhIngan] should have worked a turtle into it.
The Venn diagram joke I was actually looking for would involve sycophancy and self-hate. Of course the overlap involves the AI supporting self-harm.
I actually have a theory that the google's AI has built a 'mental model' of me as someone who dislikes the google. On that basis, it gives me bad results as flip-side sycophancy: each time Gemini gives me a bad answer, it 'thinks' it is making me happy by supporting my negative views.