6.50pm, 1 October 2024
Our speaker: Tom Rogers has been the AEC Commissioner since 2014, having previously served as Deputy Electoral Commissioner and as the Australian Electoral Commission's State Manager and Australian Electoral Officer for NSW.
Sometime between now and May 2025, Australia will hold an election.
Currently, there are no laws in Australia against the use of deepfakes, other forms of digital manipulation, or fake news in a political campaign.
Unsurprisingly, given the explosive growth in the use of AI since the introduction of ChatGPT and other large language models, this has the Australian Electoral Commissioner worried.
On 20 May he told the Senate Select Committee on Adopting Artificial Intelligence:
“The AEC does not possess the legislative tools or internal technical capability to deter, detect, or then adequately deal with false AI-generated content concerning the election process – such as content that covers where to vote, how to cast a formal vote and why the electoral process may not be secure or trustworthy.” [1]
Social concern about the misleading effects of disinformation and misinformation has risen sharply since the beginning of 2024. [2] At the same time, larger-than-ever numbers of people, here and overseas, are adopting AI tools and gaining expertise in generating the very fakes that cause the concern. High-level AI is itself also able to deceive autonomously. [3]
But can digital disinformation even be regulated? Bans are unlikely to work because they cannot reach overseas actors, and most other curbs can be circumvented by technology.
What can be done to protect our democracy from the weight of deliberately generated electoral confusion?
Date: Tuesday 1 October 2024. ART AGM at 6:00pm; dinner and speaker event commence at 6:50pm. Venue: Woodward Centre, Law Building, 10th floor, 106/185 Pelham St, Carlton VIC 3053.
[1] https://www.theguardian.com/australia-news/article/2024/may/20/aec-australian-electoral-commission-ai-deepfakes-election
[2] In January and February 2024, McAfee surveyed more than 1,000 consumers in each of the US, UK, France, Germany, Australia, India and Japan about the impact of AI-generated content.
They found:
• Nearly two in three (66%) people are more concerned about deepfakes than they were a year ago.
• More than half (53%) of respondents say AI has made it harder to spot online scams.
• A large majority (72%) of social media users find it difficult to spot AI-generated content such as fake news and scams.
[3] https://futurism.com/ai-systems-lie-deceive