In the days leading up to the election on November 5th, the use of AI was undeniable, from images of Trump depicted as Superman to images of Trump and Harris squaring off in brightly coloured uniforms.
Since 2018, every election has been marred by fears of disruptions stemming from AI, whether an AI-generated image of a poll worker destroying ballots in order to ignite a riot, or a doctored recording released to swing a key state either way.
Although these predictions never came to fruition, they were made before ChatGPT and other advanced generative AI models existed. Since ChatGPT's release in 2022, generative AI has frequently been used to target political leaders all over the world.
In June 2023, a group of political scientists at Purdue University, comprising Daniel Schiff, Christina Walker, and Kaylyn Jackson Schiff, began tracking politically oriented AI-generated images and videos in the U.S., focusing on deepfakes and cheapfakes.
Somewhat surprisingly, the Purdue team's analysis of AI-generated media from election week found that it was used mostly for emotional outbursts, satire, and transparent propaganda.
According to Dartmouth political scientist Brendan Nyhan, the AI images he saw ‘were obviously AI-generated, and they were not being treated as literal truth or evidence of something. They were treated as visual illustrations of some larger point’.
Generative AI has provided a cheap and relatively easy way to produce large volumes of doctored images that push propaganda. ‘Even though we’re noticing many deepfakes that seem silly, or just seem like simple political cartoons or memes, they still have a big impact on what we think about politics’, said Kaylyn Jackson Schiff.
By Marvellous Iwendi.
Source: The Atlantic