Employees regularly paste company secrets into ChatGPT
- Reference: 1759868285
- News link: https://www.theregister.co.uk/2025/10/07/gen_ai_shadow_it_secrets/
- Source link:
In its [1]Enterprise AI and SaaS Data Security Report 2025, LayerX blames the growing, largely uncontrolled usage of generative AI tools for exfiltrating personal and payment data from enterprise environments.
With 45 percent of enterprise employees now using generative AI tools, 77 percent of these AI users have been copying and pasting data into their chatbot queries, the LayerX study says. A bit more than a fifth (22 percent) of these copy-and-paste operations include PII/PCI.
"With 82 percent of pastes coming from unmanaged personal accounts, enterprises have little to no visibility into what data is being shared, creating a massive blind spot for data leakage and compliance risks," the report says.
About 40 percent of file uploads to generative AI sites include PII/PCI data, it's claimed, with 39 percent of these uploads coming from non-corporate accounts.
LayerX monitors data in the browser via an enterprise browser extension, meaning the company sees only web-based AI interactions, not API calls from apps.
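The report doesn't detail the detection mechanics, but a paste-monitoring extension is easy to picture. Below is a minimal, hypothetical TypeScript sketch of a content script that inspects paste events on known AI hosts for PII/PCI-like patterns; the host list and regexes are illustrative assumptions, not LayerX's actual product logic.

```typescript
// Hypothetical sketch only: how a browser extension's content script could
// spot PII/PCI-looking text pasted into generative AI sites. The host list
// and patterns below are illustrative assumptions, not LayerX's logic.

const AI_HOSTS = new Set(["chatgpt.com", "chat.openai.com", "gemini.google.com"]);

// Deliberately simple patterns; real detectors use validated classifiers
// (e.g. Luhn checks for card numbers) to cut false positives.
const PII_PATTERNS: Record<string, RegExp> = {
  ssn: /\b\d{3}-\d{2}-\d{4}\b/,          // US Social Security number format
  card: /\b(?:\d[ -]?){13,16}\b/,        // payment-card-length digit run
  email: /\b[\w.+-]+@[\w-]+\.[\w.-]+\b/, // email address
};

document.addEventListener("paste", (event: ClipboardEvent) => {
  if (!AI_HOSTS.has(location.hostname)) return;

  const text = event.clipboardData?.getData("text/plain") ?? "";
  const hits = Object.keys(PII_PATTERNS).filter((k) => PII_PATTERNS[k].test(text));

  if (hits.length > 0) {
    // A real product would report to a policy engine and possibly block the
    // paste with event.preventDefault(); this sketch just logs the match.
    console.warn(`Possible PII/PCI pasted into ${location.hostname}:`, hits);
  }
});
```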
Asked by The Register whether AI data leakage has caused actual harm, LayerX CEO Or Eshed pointed to Samsung's 2023 decision [6]to temporarily ban staff use of ChatGPT after an employee reportedly uploaded sensitive code to the chatbot. He said enterprise data leaking via AI tools can raise geopolitical issues (e.g. with Chinese AI models like Qwen) and regulatory and compliance concerns, and can lead to corporate data being inappropriately used for training if exposed through personal AI tool usage.
Users embrace ChatGPT, shun Copilot
The LayerX report says that app usage through non-corporate accounts (shadow IT) is common not only for generative AI (67 percent), but also for chat/instant messaging (87 percent), online meetings (60 percent), Salesforce (77 percent), Microsoft Online (68 percent), and Zoom (64 percent).
In a surprising endorsement of shadow IT, Microsoft recently said it will [7]support personal Copilot account usage in corporate Microsoft 365 accounts. That may be a reflection of Microsoft's discomfort with the dominance of OpenAI's ChatGPT, which LayerX says has become the de facto enterprise standard AI tool.
"Amongst all AI apps, ChatGPT dominates enterprise AI usage, with over 9 in 10 employees accessing it compared to far lower adoption of alternatives like Google Gemini (15 percent), Claude (5 percent), and Copilot (~2–3 percent)," the report says, adding that most people (83.5 percent) use just one AI tool.
"We see that users have a preferred AI platform and even if the business has an 'official' AI or a licensed one, users pick whatever they want," Eshed told The Register in an email. "In this case, it is overwhelmingly ChatGPT. In other words, users prefer ChatGPT."
Asked about the survey's figures on Microsoft Copilot adoption in enterprises, Eshed cited [9]a report claiming that Microsoft had "a 1.81 percent conversion rate across the 440 million Microsoft 365 subscribers" and noted that number "is almost identical to our findings (about 2 percent)."
ChatGPT's enterprise penetration comes to 43 percent, LayerX's report says, approaching the popularity of applications like Zoom (75 percent penetration) and Google services (65 percent) while surpassing the penetration of Slack (22 percent), Salesforce (18 percent), and Atlassian (15 percent).
Overall, the LayerX report finds AI usage in the enterprise is growing rapidly, accounting for 11 percent of all application usage, just behind email (20 percent), online meetings (20 percent), and office productivity applications (14 percent).
Employee affinity for generative AI, the security firm argues, means that CISOs have to get serious about enforcing Single Sign-On (SSO) across every business-critical application if they want visibility into data flows.
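As a toy illustration of that visibility argument, the sketch below (hypothetical names and record shape, not any particular IdP's or LayerX's API) filters browser-observed sessions for business-critical apps reached without going through the corporate identity provider, i.e. the unmanaged personal accounts the report describes.

```typescript
// Hypothetical sketch: with SSO enforced, any session on a sanctioned app
// that did NOT authenticate through the corporate IdP stands out as shadow
// usage. The record shape and app list are assumptions for illustration.

interface ObservedSession {
  user: string;
  app: string;            // e.g. "ChatGPT", "Salesforce"
  viaCorporateSSO: boolean;
}

function findShadowSessions(
  sessions: ObservedSession[],
  criticalApps: Set<string>,
): ObservedSession[] {
  return sessions.filter((s) => criticalApps.has(s.app) && !s.viaCorporateSSO);
}

// Example: one personal-account ChatGPT session among otherwise managed logins.
const critical = new Set(["ChatGPT", "Salesforce", "Zoom"]);
const shadow = findShadowSessions(
  [
    { user: "alice", app: "ChatGPT", viaCorporateSSO: false },
    { user: "bob", app: "Zoom", viaCorporateSSO: true },
  ],
  critical,
);
console.log(shadow); // -> [{ user: "alice", app: "ChatGPT", viaCorporateSSO: false }]
```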
Asked to provide specifics about the number of customers contributing data for the report, a LayerX spokesperson replied that the company did not want to reveal exact figures on its customer base.
Eshed said LayerX's client base consists of "dozens of global enterprises and large enterprises (1,000-100,000 users) primarily in financial services, healthcare, services and semiconductors. Most of our customers are in North America but we have customers in all 5 continents and any vertical." ®
[1] https://go.layerxsecurity.com/the-layerx-enterprise-ai-saas-data-security-report-2025
[6] https://www.cnbc.com/2023/05/02/samsung-bans-use-of-ai-like-chatgpt-for-staff-after-misuse-of-chatbot.html
[7] https://www.theregister.com/2025/10/01/microsoft_consumer_copilot_corporate/
[9] https://www.wheresyoured.at/the-case-against-generative-ai/#even-microsoft-is-failing-at-ai-with-only-8-million-active-paying-microsoft-365-copilot-subscribers-out-of-440-million-users
we're cooked.
I work for a non-profit healthcare organization and AI terrifies me. The number of users we support who either constantly request access to chatbots or have found ways to circumvent our policies to access them is staggering, and I can only imagine what kind of sensitive information they're sharing. No data we produce, collect, or manipulate has any business being near an AI chatbot, and frankly, the people who are potentially sharing this information SHOULD KNOW THIS.
We've had trainings for years about HIPAA and privacy laws, and I can't get some of these people to forward me an email because they're worried about privacy laws, but they'll spill a complete sensitive medical history to ChatGPT.
Re: we're cooked.
I am surprised my health sector employer has had nothing much to say about the subject wrt end users, but is very interested in Alexa-style products transcribing clinical notes. If only Dr House was there to remind us that everybody lies, especially chat bots.
Re: we're cooked.
At my last job (also for a healthcare provider) we were asked to implement AI transcription programs, and I was really skeeved by the fact that it was an opt-out system for patients that they didn't really announce.
Not new.
We've been pointing out that AI is a massive security risk on here for ages. El Reg has carried stories of AI queries being searched and monitored. It is one thing for them to know the sites you surf to via your browser - your ISP knows that. Another if they are scanning images on your device. But AI gets considerably more access to your system when you tick the box to use it. Nobody in Govt., the military, or commerce should be touching this stuff, and individual users should realise how much access they are offering. AI and Windows 11 (which has it built in) are too great a security risk to use for this reason.
100% confused by the usage of mixed percentages
"ChatGPT dominates enterprise AI usage, with over 9 in 10 employees accessing it compared to far lower adoption of alternatives like Google Gemini (15 percent), Claude (5 percent), and Copilot (~2–3 percent),"
Let's see: 90 + 15 + 5 + 2-3 = 112-113% - is this of all employees, or only those using AI?
- presumably, the answer is "some employees use more than one AI platform" (see the toy sketch below for how overlaps push the sum past 100 percent), but it would help if that were stated, especially given the other percentages:
"The LayerX report says that app usage through non-corporate accounts (shadow IT) is common not only for generative AI (67 percent), but also for chat/instant messaging (87 percent), online meetings (60 percent), Salesforce (77 percent), Microsoft Online (68 percent), and Zoom (64 percent)"
"ChatGPT's enterprise penetration comes to 43 percent, LayerX's report says, approaching the popularity of applications like Zoom (75 percent penetration) and Google services (65 percent) while surpassing the penetration of Slack (22 percent), Salesforce (18 percent), and Atlassian (15 percent)."
"Overall, the LayerX report finds AI usage in the enterprise is growing rapidly, accounting for 11 percent of all application usage, just behind email (20 percent), online meetings (20 percent), and office productivity applications (14 percent)."
OK, I'll bite. Let's say I want to see these percentages defined, so as to become less confused. Following the link results in:
"By submitting this form, you agree to receive communications from us"
Ah, now I understand. Click bait. From the "If you can't dazzle them with your brilliance, baffle them with bullshit" department.
Ah. We've finally understood the profound security risk in old "cut-n-paste".
This is not just a new AI phenom. How many sensitive user data items have been copied from an on-screen form to a local spreadsheet, or in the other direction? One or the other (or neither) covered by some privacy restrictions?
So cramming a patient's full medical record into the non-certified-compliant maul of an AI vendor is not much different.
HIPAA is a label only. Governance is a buzzword. Privacy is a joke.
The only real thing is profits (but not for you.)
AI coding
I work for a large vendor of EDA software & solutions - IC design, layout, verification, yield optimization and so on. It goes without saying that it's a rigorous and exacting field where precision is everything and bugs can be catastrophically expensive.
Earlier in the summer we interviewed a candidate for a role in the team. Practically his first question to us was, wide-eyed and enthusiastic, "are you using AI vibe coding yet to develop your software?"
The reaction of our chief developer was priceless. I'd always thought the phrase "he sneezed coffee out of his nose" was just a metaphor.