
- Deloitte admitted AI-generated report for Australian government contained serious errors
- Report for the Department of Employment and Workplace Relations included fabricated citations and a misattributed quote
- Australian government reissued corrected report removing false references and fixing typos
Consulting giant Deloitte has agreed to refund part of a $440,000 consultancy fee to the Australian government after admitting that a report it delivered, which included AI-generated content, was riddled with serious errors. The report, commissioned by the Department of Employment and Workplace Relations (DEWR) to assess the "Future Made in Australia" compliance framework and associated IT system, was published in July 2025. Subsequent scrutiny revealed fabricated academic citations, false references and a quote wrongly attributed to a Federal Court judgment.
Deloitte acknowledged that it used a generative AI model (Azure OpenAI GPT-4o) during early drafting. The company said human review refined the content and that the substantive findings and recommendations were unaffected.
In response, the Australian government reissued a corrected version: more than a dozen fictitious references were removed or replaced, the reference list was updated, and typographical errors were fixed.
Sydney-based welfare law academic Christopher Rudge, who first flagged the issues, called them AI "hallucinations", instances where generative models fill gaps, misinterpret context, or invent plausible but incorrect details.
While a partial refund is underway, the Australian government has indicated that future consultancy contracts may now include stricter AI-usage clauses.
India And Elsewhere: When AI Errors Draw Fines And Backlash
While the Deloitte case is among the most high-profile instances of AI errors in consultancy work, it is not the first time the firm's professional standards have come under regulatory scrutiny.
In December 2024, India's National Financial Reporting Authority (NFRA) imposed a penalty of Rs 2 crore on Deloitte Haskins & Sells LLP, along with fines on two chartered accountants, for lapses in the audit of Zee Entertainment Enterprises Ltd (ZEEL) during the 2018-19 and 2019-20 financial years. The penalty was for neglecting red flags, failing to exercise due diligence, and not adhering to audit standards. While that action did not involve AI, such cases foreshadow the accountability challenges that AI-assisted reporting may bring; no direct AI-error case by the NFRA has been reported publicly yet.
In China, Deloitte's affiliate was fined $20 million by US regulators in September 2022 for violating auditing standards, specifically for allowing clients to audit themselves, a breach of professional ethics.
In September 2023, Deloitte's Colombian affiliate, Deloitte & Touche SAS, was penalised $900,000 by the Public Company Accounting Oversight Board (PCAOB) for audit quality control failures. In Canada, Deloitte admitted to violating ethical and audit conduct rules in Ontario, paying over CAD 1.5 million in 2024 for the "deliberate backdating" of audit workpapers.
In the United States, concerns over AI in professional work have drawn regulatory and professional scrutiny. State bar associations, for instance, are investigating whether legal briefs generated partly by AI misstate case law or misattribute sources, and several have issued guidance or created task forces to address AI. The American Bar Association (ABA), in a formal opinion issued last year, offered a framework for lawyers on managing competence, confidentiality, communication with clients and supervisory duties. It has also set up a Task Force on Law and Artificial Intelligence.
Similarly, academic journals have retracted papers whose authors used AI tools but failed to verify generated references, damaging trust in scholarship.
Root Causes And Risk Landscape
Several experts have pointed out that generative AI models, including large language models (LLMs), are prone to hallucinations: they are probabilistic machines, not truth machines, and may craft plausible-sounding content without real sources or factual basis.
In consulting and reporting, pressure to turn in deliverables can tempt over-reliance on AI to speed up drafting, especially when staff work under tight deadlines. Without rigorous human checks, invented citations slip through.
The Deloitte incident spotlights another problem: a lack of traceability. When the corrected version simply swapped one hallucinated reference for another, it suggested that the underlying claims lacked robust evidentiary support.
Lessons And The Road Forward
With AI tools proliferating across professions, it is important to have checks and balances in place to avoid being caught off guard. Experts have suggested several steps to avoid Deloitte-like situations:
1. Stronger AI-use clauses in contracts: Clients must stipulate when and how AI can be used, mandate transparency and demand attestations of human review
2. Audit and traceability: Every claim in a report should be traceable, with human-verifiable sources (a minimal automated screening sketch follows this list)
3. Cross-jurisdiction regulatory frameworks: Nations will increasingly require guidelines on AI in professional services; in India, the NFRA or SEBI may step in
4. Training and literacy: Human reviewers must be AI-literate and capable of spotting hallucinations or implausible references
5. Ethical risk management: High-stakes reports (government policy, welfare systems, court judgments) demand extra safeguards when AI is involved
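On the audit-and-traceability point, part of the checking can be automated before human review. The sketch below is a minimal illustration in Python, assuming cited works carry DOIs and that the `requests` library is available; it queries the public Crossref API to flag references that do not resolve at all. The reference list and the `doi_exists` helper are hypothetical examples, not a description of how Deloitte or DEWR verify reports.

```python
# Illustrative sketch only (not part of any Deloitte or DEWR process):
# a first-pass automated screen that checks whether DOIs cited in a report
# resolve in the public Crossref registry. It cannot prove that a citation
# supports a claim; it merely flags references that do not exist at all,
# leaving everything else to human review.

import requests

def doi_exists(doi: str) -> bool:
    """Return True if the DOI is registered with Crossref."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Hypothetical reference list mapping citations to their claimed DOIs.
references = {
    "Example & Author (2021)": "10.1000/example-doi",
}

for citation, doi in references.items():
    status = "resolves" if doi_exists(doi) else "NOT FOUND, flag for human check"
    print(f"{citation}: {status}")
```

Such a screen would have caught references that were invented outright, but a human reviewer is still needed to confirm that each real source actually says what the report claims.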