Emerging risk and insurance considerations from AI misuse and data exposure events

Snapshot

Recent incidents show that the problem is not the use of AI itself, but weak governance and quality assurance around AI-assisted work.

What happened:

A consultancy firm’s report contained errors, including non-existent references and a fabricated court quote. The firm has since acknowledged that Azure OpenAI was used in drafting; the report was corrected and a partial refund agreed.

A contractor within a government department uploaded a spreadsheet containing thousands of rows of flood-victim data to ChatGPT, exposing personal information.

Key takeaway: the failure lies in governance and quality assurance (QA), not in the use of AI itself.

The most effective risk mitigants are policy-plus-practice:

  • Disclose when AI assists;

  • Enforce human verification of facts/quotes;

  • Lock down data inputs to approved tools.

Further, contracts should bake in QA artefacts and data-handling obligations, and insurance programmes should be reviewed for how professional indemnity (PI), technology errors and omissions (Tech E&O) and cyber policies respond to AI-assisted errors and data leakage.

The risks highlighted by the incidents

Data privacy and compliance risk:

The government department contractor incident shows how easily sensitive personal data can be exposed when AI tools, which are often cloud-hosted and managed by third parties, are used without sufficient controls.

Key risk points include:

  • Unintended data leakage through AI prompts.

  • Lack of control over data residency and processing by external AI systems.

  • Potential breach notifications, regulatory investigations and fines.

Accuracy and professional liability risk:

The AI-generated report errors demonstrate the risk of relying on AI for professional judgement and outputs.

Mistakes in AI-assisted deliverables can cause financial loss, reputational damage and contractual liability.

Key risk points include:

  • Inaccurate or misleading AI outputs causing client losses.

  • Reputational harm resulting in loss of trust and future business.

  • Possible claims for professional negligence or breach of contract.

Broader implications for tech and consultancy organisations

Operational risk exposure:

Many tech and consultancy firms are incorporating AI tools to improve efficiency. However, these incidents show that operational risk increases without strong governance. Common gaps include:

  • Insufficient staff training on AI use and data handling.

  • No clear policies restricting sensitive data input into AI tools.

  • Overreliance on AI without proper human validation.

Reputational and client trust risk:

Clients expect organisations to maintain confidentiality and deliver accurate advice.

AI errors and data breaches risk damaging these relationships, with long-term effects on revenue and market standing.

Insurance considerations

These emerging risks require careful review of existing insurance cover and potential updates.

Cyber Insurance:

  • Coverage: Data breaches, privacy fines, breach response costs and notifications.

  • Consideration: Policies must explicitly cover AI-related data leakage, including data shared with AI vendors.

Professional Indemnity Insurance (Errors and Omissions):

  • Coverage: Claims arising from negligent acts, errors or omissions in professional services.

  • Consideration: Insurers may require clarity on AI use in service delivery and introduce endorsements or exclusions related to AI outputs.

Technology Errors and Omissions Insurance:

  • Coverage: Claims arising from failures or defects in technology products and services, including software and AI offerings.

  • Consideration: Important for firms developing or reselling AI solutions to cover liability from defects or failures.

Emerging AI risk policies:

Some insurers are creating specialised AI policies to cover risks such as model failures, algorithmic bias and autonomous decision errors.

Potential fallout:

  • Regulatory fines and legal action following data breaches.

  • Contractual penalties and refunds due to AI errors, as seen with Deloitte.

  • Reputational damage leading to lost clients and business.

  • Increased scrutiny from insurers, possibly resulting in stricter terms and higher premiums.

Strategic recommendations:

  • Implement robust AI governance with clear policies on AI use and data input restrictions.

  • Provide comprehensive employee training on AI risks and data privacy.

  • Review and update insurance programmes to cover AI-related risks.

  • Conduct regular AI risk assessments to check for accuracy and security issues.

  • Develop incident response plans for data breaches or AI errors.

Final thought

These two incidents serve as important warnings for organisations in tech and consultancy.

While AI offers great potential, it also brings new risks around data privacy, accuracy and liability.

Organisations must manage these risks proactively with strong governance, updated insurance and careful oversight to protect themselves and maintain client trust in this changing landscape.

A practical QA checklist for AI-assisted deliverables

  1. Declare assistance: Internally tag every deliverable where AI is used; externally disclose when material.

  2. Source-of-truth control: Pull citations from approved repositories; ban unverifiable sources.

  3. Quote and citation verification: Two-person check for quotations, case law and references to catch invented items.

  4. Hallucination traps: Require human review for numerical/statistical claims produced by AI.

  5. Client-side facts: Anything about the client, regulator or court must be human-verified before publication.

  6. Change log and artefacts: Save prompts, drafts and checklists to evidence diligence.

  7. Data input guardrails: Only use organisation-approved AI endpoints with enterprise controls, and never paste sensitive data into consumer tools (see the sketch after this list).

  8. Red-team review: For reports to regulators/boards, run a pre-issue challenge (what’s wrong, missing, overstated?).

  9. Role clarity: Separate author, checker, approver.

  10. Post-issue corrections: Document fixes, re-issue with version note, advise stakeholders promptly.
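
To make item 7 concrete, below is a minimal sketch of what a pre-submission data input guardrail might look like. It is illustrative only: the allow-list and helper names (APPROVED_ENDPOINTS, guarded_submit) are invented for this example, and in practice such a control would typically sit in a proxy or data loss prevention (DLP) layer rather than in application code.

    import re

    # Hypothetical allow-list: only enterprise AI endpoints covered by
    # contractual data-handling terms (e.g. a DPA) belong here.
    APPROVED_ENDPOINTS = {"https://ai.internal.example.com/v1/chat"}

    # Illustrative patterns only; a production control would use a
    # proper DLP/PII classifier rather than a handful of regexes.
    SENSITIVE_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "nine_digit_id": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
    }

    def scan_for_pii(text):
        # Return the names of any sensitive patterns found in the prompt.
        return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

    def guarded_submit(endpoint, prompt):
        # Block prompts aimed at unapproved tools or containing likely personal data.
        if endpoint not in APPROVED_ENDPOINTS:
            raise PermissionError(f"{endpoint} is not an approved AI endpoint")
        hits = scan_for_pii(prompt)
        if hits:
            raise ValueError(f"Prompt blocked: possible personal data ({', '.join(hits)})")
        # ... forward the prompt to the approved endpoint here ...
        return "submitted"

    # Example: a pasted spreadsheet row containing an email address is blocked.
    try:
        guarded_submit("https://consumer-chat.example.com", "Flood claim for jane@example.com")
    except (PermissionError, ValueError) as err:
        print(err)

The design point is that the check runs before any data leaves the organisation; the same hook is also a natural place to capture the prompt log contemplated by item 6.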

Contractual levers to make governance stick

  • Scope and acceptance criteria: State when AI assistance is permitted; require a QA artefact pack (citations log, checker sign-off) as a deliverable.

  • Disclosure clause: Warrant that any AI assistance will be disclosed where material and will pass human verification before delivery.

  • Data handling and vendors: Limit data sent to third-party AI; require enterprise instances, data processing agreement (DPA) terms and audit rights.

  • Remedies: Link re-work/fee-adjustment to failed QA gates rather than AI per se; avoid over-broad “AI use” prohibitions that stifle productivity.

The contents of this publication are provided for general information only. Lockton arranges the insurance and is not the insurer. While the content contributors have taken reasonable care in compiling the information presented, we do not warrant that the information is correct. The contents of this publication are not intended as a legal commentary or advice and should not be relied on in that way. It is not intended to be interpreted as advice on which you should rely and may not necessarily be suitable for you. You must obtain professional or specialist advice before taking, or refraining from, any action based on the content in this publication.

© 2025 Lockton Companies Australia Pty Ltd.