KPMG lodges complaint after AI-generated material was used to implicate them in non-existent scandals

A Senate committee has warned artificial intelligence may have seriously undermined its integrity, after consultancy giant KPMG lodged an official complaint about factually inaccurate information receiving parliamentary privilege.

A group of academics has apologised to the big four consultancy firms after their submission to an inquiry contained false allegations of serious wrongdoing, which were collated by an AI tool and not factchecked.

One academic has claimed responsibility for the errors, which were generated by the Google Bard AI tool. The tool fabricated case studies that never occurred and cited them as examples of why structural reform was needed.

The submission accused firms of involvement in scandals that either didn’t exist or that they had nothing to do with. It referenced partners being dismissed by firms that had never employed them.

The inaccurate submission has been attacked by the firms, which believe their reputation has been unfairly tarnished. KPMG’s chief executive, Andrew Yates, complained to the parliamentary joint committee on corporations and financial services, which approved and uploaded the document.

“The livelihoods of the more than 10,000 people who work at KPMG can be affected when obviously incorrect information is put on the public record – protected by parliamentary privilege – and reported as fact,” Yates wrote to the committee.

“We are deeply concerned and disappointed that AI has been relied upon, without comprehensive factchecking, for a submission to such an important parliamentary inquiry.”

The submission had falsely accused KPMG of being complicit in a “KPMG 7-Eleven wage-theft scandal” that led to the resignation of several partners. It also accused KPMG of auditing the Commonwealth Bank during a financial planning scandal. KPMG never audited the Commonwealth Bank.

In a statement, the committee said the incident had been reported to the Senate clerk and that it “raised important questions” about the use of artificial intelligence.

“Emerging tools within the artificial intelligence space, whilst appearing to reduce workload, may present serious risks to the integrity and accuracy of work if they are not adequately understood or applied in combination with detailed oversight and rigorous factchecking,” the committee’s statement said.

The committee said that when used incorrectly, tools like Google Bard were capable of “seriously undermining the integrity of submissions and the committee process”.

“Ultimately, it is the role of the committee, throughout its inquiry and the right of reply offered to those referenced within submissions, to appropriately weigh the evidence provided based on its truthfulness and accuracy,” the statement said.

In a letter to the Senate, emeritus professor James Guthrie claimed responsibility for the error, absolving the other academics of blame.


“Given that the use of AI has largely led to these inaccuracies, the entire authorship team sincerely apologises to the committee and the named Big Four partnerships in those parts of the two submissions that used and referenced the Google Bard Large Language model generator,” Guthrie said in the letter.

The Sydney University academic Jane Andrew, who was not responsible for the AI research but did contribute to the submission, was a co-author of a 2022 article in the journal Critical Perspectives on Accounting, entitled: “The perils of artificial intelligence in academic publishing”.

“We are concerned particularly with the gradual removal of human involvement in journal editor and reviewer roles, as artificial intelligence and automated expert systems become increasingly influential across a range of tasks and judgments historically carried out and performed by people,” the journal article said.

A University of Sydney spokesperson noted Guthrie had taken full responsibility for the use of AI and said Andrew was not given an opportunity to review the submission.

“She did not have a role in reviewing the submission before it was sent, and would have objected to the inclusion of all parts produced by Bard AI if she’d had the opportunity to do so,” the spokesperson said.

Deloitte also complained to the committee about the submission. The firm’s general counsel, Tala Bennett, said “it is disappointing that this has occurred, and we look forward to understanding the committee’s approach to correcting this information”.

Guthrie said the factual errors were “regrettable” but insisted “our substantive arguments and our recommendations for reform remain important to ensure a sustainable sector built on shared community values”.
