New research from Graphika has found that Chinese state-linked operations have become more aggressive in their efforts to influence the US 2024 election. The campaigns were seen using fake profiles of US voters on social media platforms to discuss sensitive social issues and push talking points that spread divisive rhetoric ahead of the presidential election.
The influence operations (IOs) were found to have used AI-generated images of American voters, featuring lifelike avatars that were likely produced using a UK-based commercial AI video creation platform.
Despite the convincing content, the campaigns were primarily categorized as ‘spamouflage’ – the videos were low quality and ‘spammy’ in nature. None of the videos reviewed received over 300 views, and they generated very little authentic engagement, which highlights the difficulties in producing convincing political content.
Fake news is old news
The report shows evidence of “coordinated amplification”, by which the fake accounts reshared the same content and posts across a misinformation network. The content typically looked to undermine US democracy and the political process, and pushed debate about sensitive topics like the legitimacy of the 2020 election and anti-establishment messaging.
This is not the first report to establish that China has been running (or at least backing) campaigns to influence US citizens. The state seems to be less interested in backing a specific candidate, and more focused on dividing the American public and creating distrust in the US political system.
The use of AI in political influence campaigns is here to stay, and as the technology evolves, it will become harder to detect. Most AI content creation platforms state that their services are not for political use, but moderating how they are applied is tricky.