As New Zealand agencies rush to integrate Generative AI into their workflows, due diligence often focuses on immediate data security: encryption, access controls, and data residency. However, a subtle but critical risk remains largely undiscussed: the operational reality of breach notifications.
For organisations relying on OpenAI’s Enterprise and API services, an assurance gap has emerged: the contractual promises are in place, but the independent evidence required to validate them is missing.
The Controller-Processor Disconnect
Under the European Union General Data Protection Regulation (GDPR) framework used by OpenAI to position its services, OpenAI (the vendor) acts as the data processor while the customer acts as the data controller. This dynamic places the primary legal obligation for breach notification on the customer, not the vendor.
To meet statutory deadlines (such as the 72-hour window under GDPR or “as soon as practicable” under the NZ Privacy Act), customers are entirely reliant on the processor notifying them immediately after detecting an incident.
The problem? There is no verified evidence that this notification workflow actually works under pressure.
OpenAI’s third-party audits and Trust Centre documentation indicate that no personal information-impacting incidents occurred during recent audit periods. While a clean security record is positive, it creates an assurance blind spot: the breach notification controls have never been tested for operating effectiveness in a real-world scenario. Unlike standard security protocols, OpenAI provides no evidence of simulated incidents, breach drills, or red-team exercises to validate that their notification channels function correctly.
The New Zealand Context: Agency and Liability
For New Zealand organisations, this is not just a technical oversight; it is a compliance trap.
While the Privacy Act 2020 does not use GDPR-style controller/processor terminology, Section 11 establishes an analogous relationship. This applies where an agency (the customer) provides personal information to a company (in this case, OpenAI) that does not use the information for its own purposes (such as training its AI models). Under this arrangement, OpenAI acts as an “agent” of the customer.
Crucially, Section 121(4) of the Privacy Act dictates that anything relating to a notifiable privacy breach that is known by an “agent” is to be treated as being known by the principal agency.
This creates a high-stakes scenario:
- If OpenAI detects a breach but fails to escalate it due to untested internal processes,
- The NZ customer remains unaware and fails to notify the Office of the Privacy Commissioner,
- Under NZ law, the customer is deemed to have known about the breach, potentially resulting in regulatory penalties and reputational damage for ignoring a breach they were never told about.
The Mixpanel Precedent
Despite the lack of audit evidence, real-world behaviour suggests OpenAI is capable of rapid response.
During a November 2025 security incident involving their data analytics sub-processor Mixpanel, user profile information (including names and emails) was exposed. In this instance, OpenAI triggered their notification process within 24 hours of receiving the dataset and subsequently severed ties with the vendor.
While this response is encouraging, reliance on past behaviour is not the same as formal assurance. Independent mechanisms for confirming compliance with data protection laws remain immature compared with standard InfoSec audits such as SOC 2, so agencies are essentially operating on trust rather than verification.
Strategic Takeaways for NZ Agencies
The lack of formal testing for notification processes is a risk factor that must be acknowledged, not ignored. We recommend NZ agencies take the following steps to offset this assurance gap:
- Formal Risk Registration: Acknowledge the ambiguity of breach notification effectiveness in your organisation’s risk register.
- Contractual Awareness: Ensure you understand the specific clauses in the OpenAI Data Processing Addendum. OpenAI commits to notifying customers without undue delay; confirm who in your organisation actually receives that signal.
- Strict Data Minimisation: The most effective control is limiting exposure. Minimise the input of personal information to AI tools, and ensure that any settings allowing model training on customer data are switched off (see the sketch after this list).
- Privacy Impact Assessments (PIA): Do not skip the PIA. Use it to educate kaimahi (staff) on appropriate use and to document the specific notification risks associated with the tool.
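Data minimisation can also be enforced in code, before a prompt ever leaves your environment. The Python sketch below is a minimal illustration of that idea, assuming a simple regex-based approach; the PII_PATTERNS table and redact helper are hypothetical names, not part of any OpenAI SDK, and a production deployment would use a dedicated PII-detection library with patterns tuned to your data (for example, IRD numbers).

```python
import re

# Hypothetical patterns for illustration only. A production redactor would
# use a dedicated PII-detection library rather than hand-rolled regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "NZ_PHONE": re.compile(r"(?<!\d)(?:\+64|0)[2-9]\d{7,9}\b"),
}

def redact(text: str) -> str:
    """Replace matched personal information with placeholder tokens
    before the text leaves the agency's systems."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# The redacted prompt, not the raw one, is what would be passed to the
# vendor's API (e.g. an OpenAI chat completion call).
prompt = "Summarise this complaint from jane.doe@example.co.nz, ph 0211234567."
print(redact(prompt))
# Summarise this complaint from [EMAIL REDACTED], ph [NZ_PHONE REDACTED].
```

The design point is defensive: if personal information never reaches the vendor, an untested notification workflow has far less to fail on.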
Moving Forward
This uncertainty around breach notification highlights the broader challenge New Zealand agencies face: navigating the intersection of immature privacy audit frameworks and rapidly evolving AI capabilities.
At Bastion Security, we specialise in bridging the gap between Information Security and Privacy. We are uniquely qualified to guide agencies through these emerging risks, ensuring that your innovation does not come at the cost of compliance.
