

Google offers bargain: Sell your soul to Gemini, and it'll give you smarter answers

(2026/01/14)


Google on Wednesday began inviting Gemini users to let its chatbot read their Gmail, Photos, Search history, and YouTube data in exchange for possibly more personalized responses.

Josh Woodward, VP of Google Labs, Gemini and AI Studio, announced the beta availability of [1]Personal Intelligence in the US. Access will roll out over the next week to US-based Google AI Pro and AI Ultra [2]subscribers.

The use of the term "Intelligence" is more aspirational than accurate. Machine learning models are not [3]intelligent; they predict tokens based on training data and runtime resources.


Perhaps "Personalized Predictions" would be insufficiently appealing and "Personalized Artificial Intelligence" would draw too much attention to the mechanized nature of chatbots that now attracts [5]active opposition . Whatever the case, access to personal data comes with the potential for more personally relevant AI assistance.


Woodward [8]explains that Personal Intelligence can feed information from Google apps like Gmail, Photos, Search, and YouTube to the company's Gemini model. This may help the model respond to queries using personal or app-specific data from those applications.

"Personal Intelligence has two core strengths: reasoning across complex sources and retrieving specific details from, say, an email or photo to answer your question," said Woodward. "It often combines these, working across text, photos and video to provide uniquely tailored answers."


As an example, Woodward recounted how he was shopping for tires recently. While he was standing in line for service, he didn't know the tire size and needed his license plate number. So he asked Gemini and the model fetched that information by scanning his photo library, finding an image of his car, and converting the imaged license plate to text.

Whether that scenario is better than recalling one's plate number from memory, searching for it on phone-accessible messages, or glancing at the actual plate in the parking lot depends on whether one sees the mind as a use-it-or-lose-it resource. Every automation is an abdication of autonomy.

To Google's credit, Personal Intelligence is off by default and must be enabled per app. If Personal Intelligence is anything like AI Overviews or Gemini in Google Workspace apps, expect notifications, popups, hints, nudges, and recommendations during app interactions as a way to encourage adoption.


Woodward argues that what differentiates Google's approach from rival AI agents is that user data "already lives at Google securely." There's no privacy intrusion when the call is coming from inside the house.

Gemini, he said, will attempt to cite the source of output based on personalization, so recommendations can be verified or corrected. And there are "guardrails" in place that try to avoid bringing sensitive information (e.g. health data) into Gemini conversations, like "I've cancelled your appointments next year based on your prognosis in Gmail."


It's ancient history now, but in 2012, when Google [15]changed its privacy policy to share data across its different services, the move was controversial. The current trend is to encourage customer complicity in data sharing.

Woodward insists Google's aim is to provide a better Gemini experience while keeping personal data secure and under the user's control.

"Built with privacy in mind, Gemini doesn't train directly on your Gmail inbox or Google Photos library," he said. "We train on limited info, like specific prompts in Gemini and the model's responses, to improve functionality over time."

Pointing to his anecdote about his vehicle, he said that Google would not use the photos of the relevant road trip, the license plate in those photos, or his Gmail messages for model training. But the prompts and responses, filtered to remove personal information, would get fed back to the model as training data.

"In short, we don't train our systems to learn your license plate number; we train them to understand that when you ask for one, we can locate it," he said.

Google's [16]Gemini Apps Privacy Hub page offers a more comprehensive view of how Google uses the information made available to its AI model.

The company says that human reviewers (including trained reviewers from partner service providers) review some of the data that it collects for purposes like improving and maintaining services, customization, measurement, and safety. "Please don't enter confidential information that you wouldn't want a reviewer to see or Google to use to improve our services, including machine-learning technologies," it warns.

The [17]personalization with Connected Apps page offers a similar caution.

Google's support boilerplate also states that Gemini models may provide inaccurate or offensive responses that do not reflect Google's views.

"Don't rely on responses from Gemini Apps as medical, legal, financial, or other professional advice," Google's documentation says.

But for anything less consequential, maybe Personal Intelligence will help. ®




[1] https://gemini.google/overview/personal-intelligence/

[2] https://gemini.google/subscriptions/

[3] https://www.apa.org/topics/intelligence


[5] https://www.theregister.com/2026/01/11/industry_insiders_seek_to_poison/


[8] https://blog.google/innovation-and-ai/products/gemini-app/personal-intelligence/




[15] https://www.propublica.org/article/google-has-quietly-dropped-ban-on-personally-identifiable-web-tracking

[16] https://support.google.com/gemini/answer/13594961

[17] https://support.google.com/gemini/answer/16836988




Personalised Predictions

Anonymous Coward

Like Personalized Ads.

And they are shite.

Re: Personalised Predictions

Hubert Cumberdale

"Google on Wednesday began inviting Gemini users to let its chatbot read their Gmail, Photos, Search history, and YouTube data in exchange for possibly more personalized responses."

Nope. Nope. Nope. Nope. Also, Nope.

It seems like using Google or Microsoft is a liability

hx

Your cyber insurance carrier says, "No", probably.

cd

This seems to indicate a looming suit; proffering a carrot to legitimise their rifling through your stuff, if you haven't already left for safer/saner provision. If you're still there it's possible you'll sit still for overt rifling.

Very slow hand clap !!!

Anonymous Coward

Pray tell ... how does 'personalisation' change a 'wrong' answer ???

Holding your hand and stroking your brow while telling you, confidently, that 1 + 1 = 3 is not an improvement !!!

It is just another way to get access to all your data and it may help Google and its 'AI' sound more useful gaining your trust in the process.

'AI' is crap and mind games will not change this !!!

FFS, just drop a bomb on it all and give us all a break from this Horrorshow !!!

:)

Sell your soul ?

Bebu sa Ware

Anyone in the US still possess one ?

I would have thought the only buyer in the market that could both purchase and take possession of a soul would be the Devil which would imply Gemini was Satan himself or an authorised agent (in which case Mephistopheles might be a better naming choice than Gemini.)

Sort of fits with the change in direction of the corporate mission viz. "Do no evil." to "Do only evil."

the bigger issue (to me)

Bluck Mutter

Putting aside whether AI is useful or not, the bigger issue is product churn.

All the AI players are rolling out a never ending number of new or improved "stuff" with basically zero chance that if you build something against it that the next release (assumes the product isn't shutdown in 6 months) isn't compatible with your current model.

IT runs on standards: APIs/interfaces/protocols that change slowly over time and, when they do change, provide backwards compatibility.

I don't see that as a feature of AI.

Bluck

Martin M

I know I'm going to get downvoted to hell and back but hey ho. Baldly stating AIs are not intelligent is a bit binary.

First of all, there are many different definitions of intelligence, from simple to sophisticated. For example, picking from the Wikipedia page the Lloyd Humphreys definition of intelligence as "...the resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills" seems to me to be met. Everyone also seems to have conveniently forgotten the Turing test, which served as the benchmark for years and is certainly met by some conversations, particularly if the human end is not expert in the problem space.

Taking the linked definition, I agree it does rule out current models but it's not far away:

"ability to derive information" - check

"learn from experience" - arguably in a limited form via the context window over a limited span, probably the stickiest point. Being able to fine tune based on a series of interactions might maybe change this but I have no feeling as to when if ever this will be computationally feasible/economic.

"adapt to the environment" - arguably via tools etc.

"understand" - not defined properly and a bit philosophical, but as an example if you have a murder mystery novel ending "the killer was x" predicting x requires learning something approximating a full world model within the transformer network - so learning for token prediction might be akin to understanding under some definitions, even if it's gained in a different (and more brute force) way than humans.

"correctly utilize thought and reason" - what constitutes thought and reason is also a bit philosophical, and if this needs to be consistent it rules me out as well as every human I've met. These models can reason correctly at least some of the time.

It seems to me that most definitions of intelligence are very anthropocentric. It's a valid question to ask what the definition is intended to be useful for. If it's just to say "thinks in the same way as a clever human", that will likely rule out any AI, no matter how it works, unless it's literally simulating neurons.

nobody who matters

I mentioned a few days ago that the term 'AI' seems to have become another 'Humpty Dumpty' word which is now used to mean whatever the speaker wants it to mean.

Your post is a clear demonstration of that effect. Using your quoted definitions, it is apparent that nothing we have now qualifies - the requirement to be able to learn and adapt precludes them to start with, because they clearly don't. Just because someone on t'internet has posted their version of a definition of something, does not automatically make it correct.

Contrary to your beliefs, none of what we currently have classes as actual AI; not even at a simple level.

Martin M

You seem very sure of my beliefs.

You start by saying AI is now used to mean whatever the speaker wants it to mean. I agree, and I think it always did - as far as I can see there has never been a single, uncontested definition of "intelligence", and those put forward differ wildly. If we struggle to define intelligence, we cannot define AI. This renders your final triumphant "none of what we have classes as actual AI" statement utterly meaningless.

I explicitly said learning was the "stickiest point" and in regard to the linked definition of intelligence, so I think we agree on that. I also personally feel learning probably should be included in useful definitions of intelligence. I don't like that Lloyd Humphreys omits it, but he was a professor in psychology and recognised expert in the field of defining intelligence cited thousands of times by other experts, so not "someone on t'internet". If you think there is "one true definition" of AI then please feel free to proffer it and explain why we should trust you/it more than Humphreys and Turing.

For what it's worth, I think it's pretty pointless trying to describe AI in a binary way. Modern chatbots as fielded by OpenAI display, to some extent, a number of behaviours that we've traditionally thought of as key aspects of human intelligence. They fall short (sometimes woefully short) on others. They exceed humans in other ways (sheer breadth of training data). I think it's more useful to ask what attributes of intelligence are useful for success in a given situation, and the reliability and extent to which they need to be expressed.

Please think of the children ....

Anonymous Coward

Down vote ... Check !!!

Although it was 'difficult' because of the noise of the 'stretching sinews' almost breaking as you made your arguments !!!

Every point you have made is a stretch, made to impart 'AI' with attributes it does not have.

If you have to work so hard to make your point, which is virtually always 'via tools etc', you are simply moving the debate to 'the tools' and how much they do and with what amount of 'driving' by the 'meatsack' who is using the 'AI'.

Finally, you use 'anthropocentric' then use 'thinks ...' ... same problem: a 'machine' does not 'think' !!!

All I can say is 'Nice try' but you didn't win the 'Kewpie Doll' this time !!!

:)

Re: Please think of the children ....

Martin M

By "tools" I meant in the sense of MCP tools or similar, which can gather information from the environment automatically and allow models to adapt to the environment, which is relevant to the point. Nothing to do with meatsack driving.

With regard to anthropomorphism, I was trying to make the point that if intelligence is just defined as "thinks like a clever human" then there will *never be* an AI without a precise simulation of the human brain (and probably embodiment etc.). This doesn't seem likely to come about, so in that case we may as well just drop the term.

Re: Please think of the children ....

Anonymous Coward

I do get what you are trying to say BUT it was/is wrong to use the words 'Artificial Intelligence' ... it does not apply no matter what definition you use.

NO-one is using 'AI' to the degree that it is delivering a real/repeatable benefit that can be measured by money (the only measure that counts in the 'AI' world).

Focused 'AI' in well-defined knowledge areas using well-curated data as the 'fuel' for the 'AI' does exist BUT this is not the norm and that is the problem ... it is not scaling up to the real world !!!

A useful 'toy' is being generalised to work in the real world and is totally proving to be unsuitable for the task.

Future 'hope and faith' is the thing that keeps the money flowing in ... this bubble must burst eventually due to the money supply being finite.

I too would like 'AI' to be real, just like in the best movies & SciFi ... BUT I lost my belief that it would be possible when I no longer was convinced by the SciFi ideas I read as a child.

'AI' has reached the size that equates to 'Too big to fail !!!' ... BUT what is it used for as it stands ... random answers to random questions is what it does, do we really need a machine to do that !!!???

:)

m4r35n357

You did not mention your technical background - are you expecting to influence this mob of stroppy brit IT _professionals_ with word soup?

Martin M

Brit IT professional, have been the technical lead on some quite large IT systems you probably use several times a week, also intermittently stroppy. On AI: class me an enthusiastic amateur. I spent a while studying it, mostly symbolic but some neural networks, at a university quite well known for it at the time (during the AI winter). Read and mostly understood the Google attention paper just after it came out, for fun, and well before the current froth. Work with and frequently talk to actual experts in the area with published papers etc.

Not expecting to influence anyone in a comments thread, but it's fun to participate.

You?

JessicaRabbit

They put all these warnings in but let's be honest hoi polloi aren't realistically going to read them, let alone take notice. Feels like stating the obvious at this point but this won't end well.

OK, we understand that you have declined our offer

Anonymous Coward

Mind if we just peek anyway?

Spying tonite

Pete 2

> The use of the term "Intelligence" is more aspirational than accurate. Machine learning models are not intelligent

The word carries many meanings, apart from implying cleverness. Nobody seems to care when "intelligence" agencies claim to have "intelligence". Prefixing the word with artificial is just more word soup. Not worth worrying about.

98% lean.