
Cross-border Gen AI misuse to cause 40% of AI data breaches by 2027: Gartner

Nurdianah Md Nur • 2 min read
Gen AI tools pose security risks if sensitive prompts are sent to AI tools and application programming interfaces (APIs) hosted in unknown locations. Photo: Pexels

More than 40% of data breaches related to artificial intelligence (AI) will stem from improper use of generative AI (Gen AI) across borders by 2027, according to Gartner.

“Unintended cross-border data transfers often occur due to insufficient oversight, particularly when Gen AI is integrated into existing products without clear descriptions or announcements. While [Gen AI] tools can be used for approved business applications, they pose security risks if sensitive prompts are sent to AI tools and application programming interfaces (APIs) hosted in unknown locations,” says Joerg Fritsch, VP analyst at Gartner.

To mitigate the risks of AI data breaches, particularly from cross-border Gen AI misuse, organisations should:

  • Ensure compliance with international regulations and monitor unintended cross-border data transfers by extending data governance frameworks to include guidelines for AI-processed data. This includes incorporating data lineage and data transfer impact assessments within regular privacy impact assessments.
  • Establish governance committees responsible for technical oversight, compliance and risk management to ensure transparency in AI deployment and data handling.
  • Use advanced technologies, encryption, and anonymisation to protect sensitive data. For instance, verify Trusted Execution Environments in specific geographic regions and apply advanced anonymisation technologies, such as Differential Privacy, when data must leave these regions.
  • Plan and allocate budgets for trust, risk, and security management (TRiSM) products and capabilities tailored to AI technologies. This includes AI governance, data security governance, prompt filtering and redaction, and synthetic generation of unstructured data. Gartner predicts that by 2026, enterprises applying AI TRiSM controls will consume at least 50% less inaccurate or illegitimate information, improving decision-making.
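To illustrate the "prompt filtering and redaction" control mentioned above, the sketch below shows one simple approach: masking common sensitive patterns in a prompt before it is sent to an external AI API. The patterns, placeholder labels, and function names here are illustrative assumptions, not a Gartner-recommended implementation or a complete data-loss-prevention policy.

```python
import re

# Hypothetical patterns for common sensitive data; a real deployment
# would use a vetted DLP ruleset rather than these examples.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "NRIC": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),  # Singapore ID format
}

def redact_prompt(prompt: str) -> str:
    """Replace each match of a sensitive pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

prompt = "Contact jane.doe@example.com, NRIC S1234567A, card 4111 1111 1111 1111."
print(redact_prompt(prompt))
# → Contact [EMAIL], NRIC [NRIC], card [CREDIT_CARD].
```

Redacting locally, before the prompt leaves the organisation's environment, is what keeps the sensitive values from reaching AI tools or APIs hosted in unknown locations.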

With no global standard for AI and data governance, enterprises are developing region-specific strategies. However, Fritsch notes that localised policies add complexity to data management and can create operational inefficiencies.

He advises organisations to invest in advanced AI governance and security to protect sensitive data and ensure compliance. “This need will likely drive growth in AI security, governance, and compliance services markets, as well as technology solutions that enhance transparency and control over AI processes.”

© 2025 The Edge Publishing Pte Ltd. All rights reserved.