To mitigate the risks of AI data breaches, particularly from cross-border GenAI misuse, organisations should:
- Extend data governance frameworks to cover AI-processed data, ensuring compliance with international regulations and monitoring for unintended cross-border data transfers. This includes incorporating data lineage and data transfer impact assessments into regular privacy impact assessments.
- Establish governance committees responsible for technical oversight, compliance and risk management to ensure transparency in AI deployment and data handling.
- Use advanced technologies such as encryption and anonymisation to protect sensitive data. For instance, verify that processing takes place within Trusted Execution Environments in specific geographic regions, and apply advanced anonymisation techniques, such as differential privacy, when data must leave those regions.
- Plan and allocate budgets for trust, risk, and security management (TRiSM) products and capabilities tailored to AI technologies. This includes AI governance, data security governance, prompt filtering and redaction, and synthetic generation of unstructured data. Gartner predicts that by 2026, enterprises applying AI TRiSM controls will consume at least 50% less inaccurate or illegitimate information, improving decision-making.
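To make the differential-privacy recommendation above concrete, here is a minimal Python sketch of the Laplace mechanism applied to a counting query. The query, the epsilon value, and the function name are illustrative assumptions, not a production-calibrated configuration; real deployments would use a vetted library and a managed privacy budget.

```python
import math
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy via the
    Laplace mechanism. A counting query has sensitivity 1, so the
    noise scale is 1/epsilon."""
    scale = 1.0 / epsilon
    # Inverse-CDF sampling of Laplace(0, scale) from a uniform draw.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

random.seed(42)  # fixed seed for a reproducible demonstration only
print(dp_count(1000, epsilon=0.5))  # noisy count, close to the true value
```

Smaller epsilon values give stronger privacy but noisier answers, which is the trade-off an organisation would tune before letting aggregate statistics cross a regional boundary.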
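The prompt filtering and redaction capability mentioned above can be sketched in a few lines. The patterns and placeholder labels below are simplified assumptions for illustration; a real AI TRiSM deployment would rely on a dedicated PII-detection service rather than regexes alone.

```python
import re

# Illustrative patterns only: real PII detection needs far more coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact_prompt(prompt: str) -> str:
    """Replace detected PII with placeholder tokens before a prompt
    leaves the organisation's boundary for an external GenAI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact_prompt("Contact jane.doe@example.com or +44 20 7946 0958."))
```

Redacting at the boundary keeps the external model useful while ensuring the sensitive values themselves never join a cross-border data flow.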
With no global standard for AI and data governance, enterprises are developing region-specific strategies. However, Fritsch noted that localised policies add complexity to data management and can create operational inefficiencies.
He advises organisations to invest in advanced AI governance and security to protect sensitive data and ensure compliance. “This need will likely drive growth in AI security, governance, and compliance services markets, as well as technology solutions that enhance transparency and control over AI processes.”