A new research report by Adobe highlights growing concerns among Australians regarding the influence of deepfakes on the democratic process, particularly ahead of the upcoming federal elections.
The "Authenticity in the Age of AI 2025" report, based on a survey of 1,010 voting-aged Australians, found that 77% of respondents had noticed an increase in political deepfakes over the past three months. Only 12% expressed confidence in their ability to detect such content online.
The report underscores that 86% of Australians feel that generative AI complicates the task of discerning the authenticity of digital content.
In response, many are adjusting their behaviour: 45% are choosing to ignore content they suspect is a deepfake, while a minority continue to share such potentially misleading content without verification.
Jennifer Mulveny, Director for Government Relations and Public Policy Asia Pacific at Adobe, addressed the broader implications of deepfakes within the electoral context.
"In the era of generative AI, voters may be acutely aware of the spread of harmful deepfakes. It has the power to influence voter views and more citizens need to be equipped with the digital media literacy and skills to stop, check and verify content," she commented.
This concern over misleading content is reflected in the fact that 68% of respondents have reconsidered their stance on a candidate or issue based on information encountered online. The difficulty in verifying trustworthiness is significant, with 83% admitting challenges in assessing online content.
A large majority of Australians are calling for stricter measures to manage political deepfakes. Specifically, 78% advocate for regulations requiring that AI-generated political content be clearly identified.
This sentiment is compounded by the perception that government protections are inadequate, with more than 80% of respondents saying current governmental action falls short.
Social media platforms are also seen as critical players in addressing the spread of misleading content, with 86% believing that platforms should be proactive in combatting deepfakes.
The effects of such concerns are tangible, as nearly half of the respondents have reduced their use of social media due to the prevalence of deepfakes, identifying Facebook (29%) and X (15%) as significant sources of misleading content.
Mulveny further highlighted the potential solutions for empowering voters, including tools that offer content origin tracking to enhance trust.
"Tools like labelling, tagging, and embedded Content Credentials can empower Australians to more easily track the origins and integrity of the content they encounter. Widespread adoption of these tools is essential to provide the public with verifiable information about what they view online," Mulveny explained.
Respondents have also expressed a desire for self-directed verification, with 72% indicating the importance of having additional context—such as location, time produced, or alterations made—to verify content authenticity. Moreover, 70% suggested that such details would enhance their trust in election-related content.
Overall, the findings point to a strong need for comprehensive measures from both government and technology companies to ensure transparency and trust in the digital content voters consume, especially in the critical period leading up to elections.