Deloitte Issues Refund For Error-Ridden Australian Government Report That Used AI
- Reference: 0179681310
- News link: https://slashdot.org/story/25/10/06/1622238/deloitte-issues-refund-for-error-ridden-australian-government-report-that-used-ai
- Source link: [2] non-paywalled source. From a report:
> The Big Four accountancy and consultancy firm will repay the final instalment of its government contract after conceding that some footnotes and references it contained were incorrect, Australia's Department of Employment and Workplace Relations said on Monday. The department had commissioned a A$439,000 ($290,300) "independent assurance review" from Deloitte in December last year to help assess problems with a welfare system for automatically penalising jobseekers.
>
> The Deloitte review was first published earlier this year, but a corrected version was uploaded on Friday to the departmental website. In late August the Australian Financial Review reported that the document contained multiple errors, including references and citations to non-existent reports by academics at the universities of Sydney and Lund in Sweden. The substance of the review and its recommendations had not changed, the Australian government added. The contract will be made public once the transaction is completed, it said.
[1] https://www.ft.com/content/934cc94b-32c4-497e-9718-d87d6a7835ca
[2] https://www.theregister.com/2025/10/06/deloitte_ai_report_australia/
"repay the final instalment" (Score:2)
They will repay the final instalment.
What about the payments for the work (apparently) not done prior to the final instalment?
"Oops, you caught me. Here's a discount on the final payment."
Doesn't seem to be making the injured party whole.
Re: (Score:1)
I can tell you didn't read thoroughly.
Re: (Score:1)
"The Big Four accountancy and consultancy firm will repay the final instalment of its government contract after conceding that some footnotes and references it contained were incorrect,"
Did you miss that? It's right there in the summary.
Re: (Score:2)
I didn't miss that.
What I saw is that they identified signatures indicating one part of the report was AI-written. My conclusion is that if they caught an AI writing one part of it, then they probably used an AI to write all of it. That final section just happened to be the part that got caught.
Re: (Score:1)
Which has nothing to do with repaying the money they were paid to (not) write that report.
They got paid money in the past to write a report that was later found to have been written with AI.
They have agreed to repay or not collect the final payment for this report.
My point is that they were paid money in the past for this same (faulty) report that they have apparently not agreed to repay.
Re: (Score:2)
> My point is that they were paid money in the past for this same (faulty) report that they have apparently not agreed to repay.
Agreed. They should pay back the money they were paid in the past as well.
They should pay back the entire payment, not just the final installment; the report should be trashed, and a new report should be commissioned that is written by humans.
AI lies (Score:2)
Do you like being lied to? Just use an AI. Do you like putting your job at risk by delivering lies to your clients? Just use an AI.
Re: (Score:3)
> Do you like being lied to? Just use an AI. Do you like putting your job at risk by delivering lies to your clients? Just use an AI.
Hmm, what you said made sense. To assuage my fears, I posted your comment to copilot and asked it to respond.
> That’s a strong statement, and it sounds like there’s some real concern behind it. AI tools—like me (copilot)—are designed to assist, not deceive. When used responsibly, they can enhance productivity, improve accuracy, and support decision-making. But like any tool, they need to be used with transparency and critical thinking. If you're worried about misinformation or job risk, those are valid concerns. The key is to treat AI as a supporting resource, not a replacement for expertise or accountability. Would you like help understanding how to verify AI-generated outputs or how to use AI safely in client-facing work?
I feel better now, don't you? (throws phone out window and moves to a forest)
Re: (Score:2)
I take issue with the "improve accuracy" claim, with the exception of improving grammar. Many people lack the ability to write grammatically correct sentences, so it does help there, but so does a simple spell checker and tools we've already had forever. Though the text "AI" produces is technically correct, it always sounds like a bot wrote it. There's just something hollow about it.
> But like any tool, they need to be used with transparency and critical thinking
Well then we're fucked from the start.
When it comes to Artificial Intelligence, (Score:1)
LLMs are a dead end. Now, if you want to talk about automation, that's a different story.
AI - Artificial Incompetence (Score:2)
I think it's time we made this term more popular.
Re: AI - Artificial Incompetence (Score:2)
Or "Artificial Information"
Re: (Score:3)
I like using the term "Augmented Idiocy" to describe this phase of CEO stupidity.
Re: (Score:2)
I'm stealing that. Well put.
Fireable Offense (Score:2)
Giving Deloitte money should be a fireable offense. In fact, it should be automatic. No performance review, and no explanations:
1) Did you authorize an expenditure on Deloitte?
2) Yes.
3) Your personal property will be boxed up and made available for pickup at the Customer Service desk. Now GET OUT!
Digital Asbestos (Score:2)
And that, ladies and gentlemen, is why AI-generated text is "digital asbestos" that we'll be scraping from the internet as well as corporate and governmental databases for decades to come.
In the past, misinformation was confined to certain websites and blogs that were easy to avoid, much as, before asbestos, pollution was confined to certain industrial or mining sites that were easy to avoid. But just as asbestos brought pollution into every home (in multiple places, from oven mitts to piping to insulation), AI-generated text is bringing misinformation into everything we read.
Welcome to the future (Score:2)
> help assess problems with a welfare system for automatically penalising jobseekers.
So humans outsourcing their judgement to a robot to write a report about humans outsourcing their judgement to robots.
From Deloitte's perspective, the problem here is that human judgement is still involved in the payment processing chain.
Re: (Score:3)
The solution is easy. A contract for a deliverable like this should include a provision imposing a very large penalty for each and every hallucination found in the final product.
Want to use AI?
Fine.
But check the final product before shipping.
The other benefit is that every citation will essentially have to include a webpage reference so that the citations can be checked before release.
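That kind of pre-release check is trivial to automate. Here's a minimal sketch in Python (standard library only; the URL list and function name are made up for illustration) that flags citations whose links don't resolve:

import urllib.request
import urllib.error

def check_citation(url: str, timeout: float = 10.0) -> bool:
    """Return True if the cited URL responds with a non-error HTTP status."""
    req = urllib.request.Request(url, method="HEAD")  # HEAD: don't download the body
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        # Covers DNS failures, timeouts, HTTP 4xx/5xx, and malformed URLs.
        return False

# Hypothetical citation list pulled from a report's footnotes.
citations = [
    "https://www.ft.com/content/934cc94b-32c4-497e-9718-d87d6a7835ca",
    "https://example.edu/nonexistent-lund-university-paper",
]

for url in citations:
    print(url, "OK" if check_citation(url) else "FAILED -- check by hand")

Of course this only catches links that don't resolve at all (and some servers reject HEAD requests, so a failure still warrants a manual look); a citation can load fine and still not support the claim, so a human still has to read what's actually cited before release.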