News: 1765242991

  ARM Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life (Terry Pratchett, Jingo)

Google says Chrome's new AI creates risks only more AI can fix

(2025/12/09)


Google plans to add a second Gemini-based model to Chrome to address the security problems created by adding the first Gemini model to Chrome.

In September, Google [1]added a Gemini-powered chat window to its browser and [2]promised the software would soon gain agentic capabilities that allow it to interact with browser controls and other tools in response to a prompt.

Allowing error-prone AI models to browse the web without human intervention is dangerous, because the software can ingest content – perhaps from a maliciously crafted web page – that instructs it to ignore safety guardrails. This is known as “indirect prompt injection.”

[3]

Google knows about the risks posed by indirect prompt injection, and in a Monday [4]blog post Chrome security engineer Nathan Parker rated it as “the primary new threat facing all agentic browsers.”

[5]

[6]

"It can appear in malicious sites, third-party content in iframes, or from user-generated content like user reviews, and can cause the agent to take unwanted actions such as initiating financial transactions or exfiltrating sensitive data,” Parker wrote.

The seriousness of the threat recently led IT consultancy Gartner to recommend that companies [7]block all AI browsers .

[8]

The Chocolate Factory, having invested billions in AI infrastructure and services, would prefer that people embrace AI rather than shun it. So the ad biz is adding a second model to keep its Gemini-based agent in line.

Parker refers to the oversight mechanism "a User Alignment Critic."

"The User Alignment Critic runs after the planning is complete to double-check each proposed action," he explains. "Its primary focus is task alignment: determining whether the proposed action serves the user's stated goal. If the action is misaligned, the Alignment Critic will veto it."

[9]

According to Parker, Google designed the Critic so attackers cannot poison it by exposing the model to malicious content.

[10]Publishers say no to AI scrapers, block bots at server level

[11]Block all AI browsers for the foreseeable future: Gartner

[12]Meta and Google turn to NextEra to feed insatiable datacenter power hunger

[13]IBM drops $11B on Confluent to feed next-gen AI ambitions

Enlisting one machine learning model to moderate another has become an accepted pattern among AI firms. [14]Suggested by developer Simon Willison in 2023, it was formalized in a Google DeepMind [15]paper published this year. The technique is called "CaMeL," which stands for "CApabilities for MachinE Learning."

Parker adds that Google is also bringing Chrome's origin-isolation abilities to agent-driven site interactions.

The web's security model is based on the [16]same-origin policy – sites should not have access to data that comes from different origins (e.g. domains). And Chrome tries to enforce [17]Site Isolation , which puts cross-site data in different processes, away from the web page process, unless allowed by [18]CORS .

Google extended this design to agents using tech called Agent Origin Sets that aims to prevent Chrome-based AI from interacting with data from arbitrary origins. The Register understands that Chrome devs have incorporated some of this work, specifically the origin isolation extension, into current builds of the browser, and that other agentic features will appear in future releases.

Additionally, Google aims to make Chrome's agentic interactions more transparent, so user directives to tackle some complicated task don't end in tears when things go awry. The model/agent will seek user confirmation before navigating to sites that deal with sensitive data (e.g. banks, medical sites). Also, the robo-browser will also seek confirmation before letting Chrome sign-in to a site using the Google Password Manager. And for sensitive web actions like online purchases, sending messages, or other unspecified consequential actions, the agent will either ask for permission or just tell the user to complete the final step.

To ensure that security researchers put Chrome's agentic safeguards to the test, Parker says Google has revised its Vulnerability Rewards Program (aka bug bounties) to offer payouts for folks who find flaws.

"We want to hear about any serious vulnerabilities in this system and will pay up to $20,000 for those that demonstrate breaches in the [19]security boundaries ," said Parker. ®

Get our [20]Tech Resources



[1] https://www.theregister.com/2025/09/18/google_chrome_ai_browser/

[2] https://www.youtube.com/watch?v=khX5-VHoYds&t=206s

[3] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_software/aiml&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=2&c=2aTes61ep7AKPD7pP5gcCOgAAAAw&t=ct%3Dns%26unitnum%3D2%26raptor%3Dcondor%26pos%3Dtop%26test%3D0

[4] https://security.googleblog.com/2025/12/architecting-security-for-agentic.html

[5] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_software/aiml&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=4&c=44aTes61ep7AKPD7pP5gcCOgAAAAw&t=ct%3Dns%26unitnum%3D4%26raptor%3Dfalcon%26pos%3Dmid%26test%3D0

[6] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_software/aiml&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=3&c=33aTes61ep7AKPD7pP5gcCOgAAAAw&t=ct%3Dns%26unitnum%3D3%26raptor%3Deagle%26pos%3Dmid%26test%3D0

[7] https://www.theregister.com/2025/12/08/gartner_recommends_ai_browser_ban/

[8] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_software/aiml&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=4&c=44aTes61ep7AKPD7pP5gcCOgAAAAw&t=ct%3Dns%26unitnum%3D4%26raptor%3Dfalcon%26pos%3Dmid%26test%3D0

[9] https://pubads.g.doubleclick.net/gampad/jump?co=1&iu=/6978/reg_software/aiml&sz=300x50%7C300x100%7C300x250%7C300x251%7C300x252%7C300x600%7C300x601&tile=3&c=33aTes61ep7AKPD7pP5gcCOgAAAAw&t=ct%3Dns%26unitnum%3D3%26raptor%3Deagle%26pos%3Dmid%26test%3D0

[10] https://www.theregister.com/2025/12/08/publishers_say_no_ai_scrapers/

[11] https://www.theregister.com/2025/12/08/gartner_recommends_ai_browser_ban/

[12] https://www.theregister.com/2025/12/08/nextera_meta_google_datacenter_power/

[13] https://www.theregister.com/2025/12/08/ibm_drops_11b_on_confluent/

[14] https://simonwillison.net/2023/Apr/25/dual-llm-pattern/

[15] https://arxiv.org/abs/2503.18813

[16] https://developer.mozilla.org/en-US/docs/Web/Security/Defenses/Same-origin_policy

[17] https://www.chromium.org/Home/chromium-security/site-isolation/

[18] https://www.w3.org/TR/cors/

[19] https://chromium.googlesource.com/chromium/src/+/HEAD/docs/security/faq.md#ai-features

[20] https://whitepapers.theregister.com/



Pulled Tea

Google plans to add a second Gemini-based model to Chrome to address the security problems created by adding the first Gemini model to Chrome.

[1]There was an old woman who swallowed a fly…

[1] https://en.wikipedia.org/wiki/There_Was_an_Old_Lady_Who_Swallowed_a_Fly#Lyrics

How

Snowy

Do it turn it off!!

xanadu42

So to fix a "known" "AI" security issue you add a second "AI" to fix the first...

And add another "AI" vector for security issues

The logical conclusion to this is that when the second "AI" doesn't solve all of the first "AI" security issues (and adds its own security issues) you add a third...

Then a fourth or fifth?

Why not just remove the new "AI" feature until ALL its issues have been addressed?

Can't have that!! The Browser MUST have "AI"...

bit, n:
A unit of measure applied to color. Twenty-four-bit color
refers to expensive $3 color as opposed to the cheaper 25
cent, or two-bit, color that use to be available a few years ago.