Every question you ask, every comment you make, I'll be recording you
- Reference: 1755511210
- News link: https://www.theregister.co.uk/2025/08/18/opinion_column_ai_surveillance/
When you ask an AI chatbot for an answer, whether it's about the role of tariffs in decreasing prices (spoiler: [2]tariffs increase them); whether your girlfriend is really that into you; or, my particular favorite, "How to Use a Microwave Without Summoning Satan," OpenAI records your questions. And, until recently, Google indexed those shared chats, where anyone who is search savvy could find them.
It's not as if OpenAI didn't tell you that, if you shared your chats with other people or saved them for later use, it was copying them down and making them potentially searchable. The company explicitly said this was happening.
The warning read: "When users clicked 'Share,' they were given the option to 'Make this chat discoverable.' Under that, in smaller text, was the explanation that you were allowing it to be 'shown in web searches'."
But, like all those hundreds of lines of end-user license agreements (EULAs) that we all check with the "Agree" button, it appears that most people didn't read them. Or didn't think it through. Pick one. Maybe both. Hanlon's Razor says it best: "Never ascribe to malice what can be explained by stupidity."
OpenAI's chief information security officer, Dane Stuckey, then tweeted that OpenAI had removed the option because it "introduced too many opportunities for [6]folks to accidentally share things they didn't intend to." The company is also "working to remove indexed content from the relevant search engines." It appears OpenAI has been successful.
So, everything's good now, right? Right? Right!? Oh, you poor dear child, of course not.
For the moment, no one can Google their way to embarrassing questions you've asked OpenAI. That doesn't mean the queries you've been asking couldn't surface in a data breach or somehow resurface in a Google or AI search. After all, OpenAI has been legally required to retain all your queries, including those you've deleted. Or, well, that you thought you deleted anyway.
Oh? You didn't know that? OpenAI is currently under a federal court order, as part of an ongoing copyright lawsuit, that [8]forces it to preserve all user conversations from ChatGPT on its consumer-facing tiers: Free, Plus, Pro, and Team. The court order also means that "Temporary Chat" sessions, which were previously erased after use, are now being stored. There's nothing "Temporary" about them now.
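To make that concrete, here is a minimal sketch in Python of the "soft delete" pattern a preservation order like this effectively forces. It's illustrative only: the ChatStore class and everything in it are hypothetical names, not OpenAI's actual storage code. The point is simply that "delete" can mean "hide from your history" while the text stays on the server.

# Illustrative only: a hypothetical "soft delete", not OpenAI's real code.
from datetime import datetime, timezone

class ChatStore:
    def __init__(self):
        self._rows = []  # every conversation ever sent ends up here

    def save(self, user, text):
        self._rows.append({"user": user, "text": text,
                           "ts": datetime.now(timezone.utc), "deleted": False})
        return len(self._rows) - 1

    def delete(self, row_id):
        # "Delete" only flips a flag so the row vanishes from the user's
        # history view; the text itself is retained under the legal hold.
        self._rows[row_id]["deleted"] = True

    def visible_history(self, user):
        return [r for r in self._rows if r["user"] == user and not r["deleted"]]

store = ChatStore()
row = store.save("alice", "please forget I ever asked this")
store.delete(row)
print(store.visible_history("alice"))  # [] -- gone from your view
print(len(store._rows))                # 1  -- still sitting on the server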
See, this is why you need to follow me so you can keep up to date with this stuff. While I don't think that what you ask ChatGPT is as big a deal as someone who goes by "signull" on Twitter made it out to be when they said, "[10]the contents of ChatGPT often are more sensitive than a bank account," it still matters a lot.
You'll be glad to know that OpenAI is fighting in the courts, but, as someone who has covered more than his fair share of legal cases, I wouldn't count on them winning this point.
This isn't just an OpenAI problem, by the way. Take Google. It has begun rolling out a [11]Gemini AI update, which enables it to automatically remember key details from past chats. Google's pitch is that Gemini can now personalize its responses by recalling your preferences, previous topics, and important context from earlier conversations.
So, for instance, when I ask about "dog treats," Gemini will "recall" that I've asked about Shih Tzus before, so it will give me information about small-dog treats and, Google being Google, ads for the same.
Isn't that sweet and helpful?
But say it recalled me asking how to make 3D-printed guns. You may not want that on your permanent AI record. By the way, OpenAI calls that same feature Memory, and Anthropic has just added it to Claude as well.
On Google, this feature is on by default, but it can be disabled. Of course, people had to actively opt in before OpenAI made their questions publicly searchable, and they blithely went and did just that.
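If you're curious what a "memory" feature amounts to under the hood, here's a minimal sketch, again in Python and again purely illustrative: ToyMemory and its methods are hypothetical names, not Google's or OpenAI's implementation. The mechanics are what matter: every detail you volunteer is written down, and anything loosely related to a later question gets pulled back in as context.

# Illustrative only: a toy memory layer, not any vendor's actual design.
from collections import defaultdict

class ToyMemory:
    """A toy store of key details from past chats, keyed by topic."""
    def __init__(self):
        self.facts = defaultdict(list)

    def remember(self, topic, detail):
        # Every detail you volunteer gets written down and kept.
        self.facts[topic].append(detail)

    def recall(self, query):
        # Anything loosely related to the new question is pulled back in
        # as "personalization" context.
        q = query.lower()
        return [d for topic, details in self.facts.items() if topic in q
                for d in details]

memory = ToyMemory()
memory.remember("dog", "owns a Shih Tzu")                   # from an earlier chat
memory.remember("printing", "asked about 3D-printed guns")  # also still in there

# A later, innocuous question gets "personalized" -- and the awkward stuff
# stays in the store, ready to be recalled, subpoenaed, or breached.
print(memory.recall("what are good dog treats?"))  # ['owns a Shih Tzu']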
[12]LLM chatbots trivial to weaponise for data theft, say boffins
[13]UK expands police facial recognition rollout with 10 new vans heading to a town near you
[14]I started losing my digital privacy in 1974, aged 11
[15]Suetopia: Generative AI is a lawsuit waiting to happen to your business
This isn't just a personal concern. As Anthropic pointed out recently, Large Language Models [16](LLMs) can be used to steal data just as if they were company insiders. The more data you give any of the AI services, the more that information can potentially be used against you. Remember, all the mainstream AI chatbots record your questions and conversations by default. They've been doing this for service improvement, context retention, product analytics, and, of course, to feed their LLMs.
What's different is that, now that you're used to AI, they're letting you benefit from all this data as well, while hoping you don't notice just how much the AIs know about you. I shudder to think what [17]Meta, with its AI policies allowing chatbots to flirt with your kids, will do. Let me remind you that [18]Meta declined to obey the EU's voluntary AI safety guidelines.
So, kids, let's not be asking any AI chatbot whether you should divorce your husband, how to cheat on your taxes, or if you should try to get your boss fired. That information will be kept, it may be revealed in a security breach, and, if so, it will come back to bite you in the buns. ®
[1] https://www.fastcompany.com/91376687/google-indexing-chatgpt-conversations
[2] https://www.bushcenter.org/catalyst/opportunity-road/rooney-tariffs-rising-prices
[6] https://x.com/cryps1s/status/1951041845938499669
[8] https://cdn.arstechnica.net/wp-content/uploads/2025/06/NYT-v-OpenAI-Preservation-Order-5-13-25.pdf
[10] https://x.com/signulll
[11] https://www.theverge.com/news/758624/google-gemini-ai-automatic-memory-privacy-update
[12] https://www.theregister.com/2025/08/15/llm_chatbots_trivial_to_weaponise/
[13] https://www.theregister.com/2025/08/13/uk_expands_police_facial_recognition/
[14] https://www.theregister.com/2025/08/13/digital_privacy_senseless_data_preservation/
[15] https://www.theregister.com/2025/08/12/genai_lawsuit/
[16] https://www.anthropic.com/research/agentic-misalignment
[17] https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/
[18] https://www.theregister.com/2025/07/18/meta_declines_eu_ai_guidelines/
Re: So
The US is governed by Russian assets.
Re: So
January 2017 doesn't qualify as "new".
That headline had a bit of a sting in it
Someone should call the Police
"Never ascribe to malice what can be explained by stupidity."...
More like: "Never ascribe to stupidity what can be explained by greed."
Well, duh
If this is news to anyone, they deserve to feature prominently on Google (and everywhere else).
Have fun training your replacement
Let me be clear: every time you use an AI to assist you, you're not delegating. You're donating. You're feeding it your workflow, your decision tree, your intellectual property—and in doing so, you're essentially handing over your job in a neat little JSON file.
And renting AI to run your business? That’s not innovation. That’s corporate self-cannibalization. You're not just outsourcing labor, you're outsourcing legacy. You're teaching the AI how to replicate your business model, optimize it, and—brace yourself—eliminate you from it. Congratulations, you've become the beta test for your own obsolescence.
Hanlon's razor is bullshit
Also known as the "Mayor West Defence". How about "Always ascribe to malice what can be explained by stupidity." ?
Hanlon's Razor excuses reckless and/or wilful negligence.
Re: Hanlon's razor is bullshit
A stupid person doesn’t need to intend harm - their inability to grasp consequences is the harm. It’s like handing a toddler a loaded gun and saying, “Don’t worry, he’s not malicious.” Malice with intent and malice through incompetence both end up with a hole in the wall and someone bleeding on the floor. Pretending one is somehow excusable is just self-delusion.
The statement used to be: If it is free, you are the product
Now this can simply be shortened to:
You are the product!
let's not be asking any AI chatbot whether...
Let's just stop right there, kids.
How to Use a Microwave Without Summoning Satan
Does this mean if I use a lower wattage, say, thawing some food, that I will only get a lesser demon?
Re: How to Use a Microwave Without Summoning Satan
After an unexpected LOL, signed in just so I could upvote your post
Re: How to Use a Microwave Without Summoning Satan
Ah! That's why water in a bowl sometimes explodes when heated on full power. The devil was being summoned and trapped within the water. Never knew that. This new learning is the wonder of our age.
I suppose if you give the oven less welly you could expect a lukewarm demon. Crowley?
Searching for this phrase is a revelation in itself. Doesn't give one much confidence in the Intelligence or sanity of one's fellow man.
the option to 'Make this chat discoverable.'
I'm sorry, this is 2025.
How many times do you need to be warned that social media is just milking you to get the message ?
If you can't understand by now, then there's only a cluebat to the face - or a great big hole in your bank account - that will get the message across.
And I'm sorry if I do not sympathize.
It's worse than this (a lot worse)
Most all (pretty much all) devices that are "smart" are listening to your every word (and have been for many years). That ingestion is now simply better and more exploited. And yes, they listen to everything. Everything.
[...does anyone actually read the terms and conditions?
So many lines there to view
To make sure no-one dares to sue
But does anybody read
All of it? Don't know 'bout you
But my ass avoids it ev'ry time
Quick-scrolling down to find out where I
Can make it all go away, the little box that they've placed
To get on with this program's use
And I know that some tiny part of this will go
And bite me right in the ass, it blows
But it remains the truth, can't change it now
Reading the terms? Won't happen, no doubt]
Original Song Title: "The Words"
Original Performer: Christina Perri
Parody Song Title: "The Terms"
Parody Written by: the_conqueror_of_parodies
If you don't know this instinctively
then I'm not surprised if you're the sort of person who peppers "I asked ChatGPT" into conversation and then expects anyone to take any further utterances seriously.
Full Public Disclosure
I am more than happy to post every question I have asked ChatGPT, CoPilot, Gemini, et al in public here on El Reg.
Here is the complete, unabridged list in full:
The Bear of Little Brain
I would turn Pooh loose on ChatGPT with a slice of Cottleston Pie ... Why does a chicken, I don't know why.
So if OpenAI is being forced to store data on non-US citizens, does that not invalidate EU Safe Harbour? And according to ChatGPT, the answer is yes:
EU Safe Harbor and Its Replacement (Privacy Shield):
The EU-US Safe Harbor Framework was initially a mechanism for ensuring that data could flow between the European Union and the United States while still complying with EU data protection standards.
The Safe Harbor framework was invalidated by the European Court of Justice in 2015, largely because it did not adequately protect EU citizens' privacy rights, especially with respect to U.S. government surveillance practices.
Following that, the EU-U.S. Privacy Shield was introduced as a replacement, which aimed to address those concerns by providing stronger protections for EU citizens' data. However, in 2020, the European Court of Justice again invalidated the Privacy Shield, citing concerns over U.S. surveillance laws and insufficient protections for EU data subjects.
The Current Situation:
The primary legal mechanism for data transfers between the EU and the U.S. is now based on Standard Contractual Clauses (SCCs), which are used to ensure that personal data is adequately protected when transferred outside of the EU.
However, SCCs still face scrutiny, particularly in light of the Court of Justice ruling that certain aspects of U.S. law (like government surveillance) could make such transfers non-compliant with EU law.
OpenAI and Non-U.S. Citizens' Data:
If OpenAI is indeed storing data in ways that involve non-U.S. citizens, there are key considerations:
Where the data is stored: If data about EU citizens is stored on servers outside of the EU, including in the U.S., the transfer and storage need to comply with EU data protection laws, including the General Data Protection Regulation (GDPR).
Access by U.S. entities: If OpenAI or other U.S.-based entities can access this data, there might be concerns over the extraterritorial application of U.S. surveillance laws (such as the Foreign Intelligence Surveillance Act), which could invalidate EU-based protections, even if data is stored on non-U.S. servers.
So, if OpenAI (or any U.S.-based company) is storing data on non-U.S. citizens and that data is subject to access by U.S. authorities, this could undermine the protections intended by the Safe Harbor or Privacy Shield frameworks, and likely complicates compliance with the EU’s data protection laws.
Ultimately, the legal landscape around data transfers is in flux, and companies are actively working to ensure compliance through measures like SCCs, but as the rulings show, the legality of such data flows may continue to be challenged.
So
The water is wet, the Pope is Catholic, the bear shits in the woods, and most people are stupid, erm... sorry... too lazy to think. What else is new?