Enterprises neglect AI security – and attackers have noticed
(2025/07/30)
- Reference: 1753899313
- News link: https://www.theregister.co.uk/2025/07/30/firms_are_neglecting_ai_security/
- Source link:
Organizations rushing to implement AI are neglecting security and governance, IBM claims, with attackers already taking advantage of lax protocols to target models and applications.
The findings come from Big Blue's [1]Cost of a Data Breach Report 2025, which shows that AI-related exposures currently make up only a small proportion of the total, but these are anticipated to grow in line with greater adoption of AI in enterprise systems.
Based on data reported by 600 organizations globally between March 2024 and February 2025, IBM says 13 percent of them flagged a security incident involving an AI model or AI application that resulted in a breach.
Almost every one of those breached organizations (97 percent) indicated it did not have proper AI access controls in place.
About a third of those that experienced a security incident involving their AI suffered operational disruption and saw criminals gain unauthorized access to sensitive data, while 23 percent said they incurred financial loss as a result of the attack, with 17 percent suffering reputational damage.
Supply chain compromise was the most common cause of those breaches, a category that includes compromised apps, application programming interfaces (APIs), and plug-ins. The majority of organizations that reported an intrusion involving AI said the source was a third-party vendor providing software as a service (SaaS).
IBM's report draws particular attention to the danger of unsanctioned or so-called shadow AI, which refers to the unofficial use of these tools within an organization, without the knowledge or approval of the IT or data governance teams.
Because shadow AI may go undetected by the organization, there is an increased risk that attackers will exploit its vulnerabilities.
The survey for the report found that most organizations (87 percent) have no governance in place to mitigate AI risk. Two-thirds of those that were breached didn't perform regular audits to evaluate risk, and more than three-quarters reported not performing adversarial testing on their AI models.
This isn't the first time that security and governance have been raised as issues when it comes to corporate AI rollouts. Last year, The Register reported that many large enterprises had [7]hit pause on integrating AI assistants and virtual agents created with Microsoft Copilot because these were pulling in information that employees shouldn't have access to.
Also last year, analyst Gartner estimated that at least [8]30 percent of enterprise projects involving generative AI (GenAI) would be abandoned after the proof-of-concept stage by the end of 2025, due to poor data quality, inadequate risk controls, escalating costs, or unclear business value.
IBM's report appears to show that many organizations are simply bypassing security and governance in their rush to get AI adoption in place, perhaps for fear of being left behind amid the hype surrounding the technology.
"The report reveals a lack of basic access controls for AI systems, leaving highly sensitive data exposed and models vulnerable to manipulation," said IBM's VP of Security and Runtime Products, Suja Viswesan.
"As AI becomes more deeply embedded across business operations, AI security must be treated as foundational. The cost of inaction isn't just financial, it's the loss of trust, transparency and control," she said, adding that "the data shows that a gap between AI adoption and oversight already exists, and threat actors are starting to exploit it." ®
[1] https://www.ibm.com/reports/data-breach
[7] https://www.theregister.com/2024/08/21/microsoft_ai_copilots/
[8] https://www.gartner.com/en/newsroom/press-releases/2024-07-29-gartner-predicts-30-percent-of-generative-ai-projects-will-be-abandoned-after-proof-of-concept-by-end-of-2025