

Study Finds 50% of Workers Use Unapproved AI Tools

(Friday April 18, 2025 @11:30PM (BeauHD) from the tools-of-the-trade dept.)


An anonymous reader quotes a report from SecurityWeek:

> An October 2024 study by Software AG suggests that [1]half of all employees are Shadow AI users, and most of them wouldn't stop even if it were banned. The problem is the ease of access to AI tools, and a work environment that increasingly advocates the use of AI to improve corporate efficiency. It is little wonder that employees seek out their own AI tools to improve their personal efficiency and maximize their potential for promotion. It is frictionless, says Michael Marriott, VP of marketing at Harmonic Security. 'Using AI at work feels like second nature for many knowledge workers now. Whether it's summarizing meeting notes, drafting customer emails, exploring code, or creating content, employees are moving fast.' If the official tools aren't easy to access or if they feel too locked down, they'll use whatever's available, often via an open tab in their browser.

>

> There is almost never any malicious intent (absent, perhaps, the mistaken employment of rogue North Korean IT workers); merely a desire to do and be better. If this involves using unsanctioned AI tools, employees will likely not disclose their actions. The reasons may be complex, but they combine a reluctance to admit that their efficiency is AI-assisted rather than natural with the knowledge that use of personal shadow AI might be discouraged. The result is that enterprises often have little knowledge of the extent of shadow AI use or of the risks it may present.

According to [2]an analysis from Harmonic, ChatGPT is the dominant gen-AI model used by employees, with 45% of data prompts originating from personal accounts (such as Gmail). Image files accounted for 68.3%. The report also notes that 7% of employees were using Chinese AI models like DeepSeek, Baidu Chat and Qwen.

"Overall, there has been a slight reduction in sensitive prompt frequency from Q4 2024 (down from 8.5% to 6.7% in Q1 2025)," reports SecurityWeek. "However, there has been a shift in the risk categories that are potentially exposed. Customer data (down from 45.8% to 27.8%), employee data (from 26.8% to 14.3%) and security (6.9% to 2.1%) have all reduced. Conversely, legal and financial data (up from 14.9% to 30.8%) and sensitive code (5.6% to 10.1%) have both increased. PII is a new category introduced in Q1 2025 and was tracked at 14.9%."



[1] https://www.securityweek.com/the-shadow-ai-surge-study-finds-50-of-workers-use-unapproved-ai-tools/

[2] https://www.harmonic.security/resources/the-ai-tightrope-balancing-innovation-and-exposure



Re: (Score:2)

by test321 ( 8891681 )

It's about the people not using the approved tools. Say your company has subscribed to "Copilot", and the contract says it's OK to use it to summarize, review, or generate internal documents. But the employees will use "ChatGPT" for the same purpose, without a contract. Not that I personally trust the contract, but your legal department does. It's about the same as saying you don't like Outlook and prefer Gmail, so you send work emails from your personal address.

Re: Oh noes, the proles are getting one over on us (Score:2)

by Z00L00K ( 682162 )

I'm on the other end of the spectrum and don't trust any AI tool with job-related information.

More often than not those tools can misinterpret the situation and cause trouble.

Noble, sure, but unauthorized data disclosure (Score:2)

by ebunga ( 95613 )

SpungRuAI's local agent is going to help make the Nelson report so much faster... oh shit, why did all my files just disappear?

Just because you use the tools (Score:2)

by wakeboarder ( 2695839 )

doesn't mean you are sending sensitive company info to AI tools. I still use unapproved tools, but I don't send any code or info that would be sensitive. Why? Because Gemini isn't that great and that is the only approved tool.

Re: Just because you use the tools (Score:2)

by Z00L00K ( 682162 )

Way too many do send enough information to those tools that, when stitched together, it will reveal patterns that could be used to influence stock market value.

Re: (Score:1)

by Anonymous Coward

The Cheeto has that beat by enough to make your fantasy market-manipulation seem laughable.

It's irresistible (Score:2)

by Big Hairy Gorilla ( 9839972 )

I like Linux. Everything is customized down to the details. Other people like Apple. Many would argue it's a good brand. Everyone sticks with the brand they like. I doubt there is a way of hiding the use of your favorite brand from your employer... and it's a lesson in hypocrisy, because management wants to use AI to both monitor you and make you more efficient... just with the one *we* paid for. Also, I'm guessing that people form strong emotional attachments to the AI with the longest memory. So this is going to

Copy Paste (Score:2)

by Mean Variance ( 913229 )

My work was blocking integrations, but you could still go to ChatGPT or similar and get the code snippet you wanted. Not me, of course. We seem to have settled that at the corporate level. Fine. I will follow the rules as long as current tools are available.

Still, what gets me is the tech screens I do for interviews. It's comical when a question requires some thinking. It gets very quiet. Then the eyes wander (remote Zoom interviews) and suddenly, nirvana! A solution: I think we should use the "leaky
