the Walter Cronkite School of Journalism at Arizona State University. Amna: From robocalls to deepfakes, artificial intelligence is already playing a role in the 2024 election. Today, The Washington Post and Axios reported that companies like Meta, Google and TikTok have committed to labeling misleading AI-generated content on their platforms.
Laura Barrón-López has been covering what this means for the upcoming election. How have we seen AI already play a role in the election? Laura: Last week, it was ruled that robocalls using AI-generated voices are illegal.
That comes after the New Hampshire attorney general launched an investigation into robocalls that used AI to impersonate President Joe Biden's voice leading up to the New Hampshire primary, and that has so far traced the robocalls back to a Texas company called Life Corp. The investigation is ongoing.
We saw a number of ads using AI-generated content, with the RNC using imagery and video to depict a dystopian future under a second Biden term. We also saw the super PAC aligned with Ron DeSantis' campaign use AI, and Donald Trump's campaign put out video using AI that impersonated Ron DeSantis' voice. You are seeing a suite of AI tools being used by Republicans. The Biden campaign said it has legal experts prepared to combat this content.
Amna: What are the concerns? What have experts told you about how AI is a potential threat to democracy? Laura: This is a change in degree.
It is not that AI has not been used before, but generative AI tools are now more widely available and sophisticated. AI threats in 2024 include robocalls that can clone a voice, phishing templates, realistic deepfake video and photography, and spoofed accounts impersonating officials, offices and news outlets. Unlike in 2016, AI content is faster, cheaper and easier to make because of the widely available generative AI tools.
I spoke to the senior counsel at the States United Democracy Center, a nonpartisan group focused on election security. She summed up the dangers. >> Election officials are already doing their jobs in such an elevated threat environment.
They are facing harassment, threats of physical violence and disruptions to their administration of elections, and they are having trouble recruiting staff and poll workers. They do not have enough resources. Adding artificial intelligence is potentially going to make these election officials' jobs even more difficult.
It is like pouring accelerant on an already very flammable substance. Laura: One example: in the aftermath of 2020 and in 2022, Republicans and others circulated debunked video of what they called poll workers cheating or throwing away ballots. AI allows them to manipulate such video to make it look real.
Amna: Those emails and robocalls, are they being targeted at certain groups? Who is most at risk? Laura: The new power of AI allows bad actors to target specific groups.
In 2020, minority communities were targeted with robocalls that discouraged them from voting. Now, because generative AI tools are more sophisticated, bad actors can tailor content to specific communities and make emails and calls more convincing. Amna: Meta announced they will be flagging AI-generated images and content on their platforms.
Is enough being done to safeguard against this kind of content? Laura: Even though those companies are deciding to label the content, they are not outright banning it. Notably, X, formerly Twitter, has not agreed to label content that could be fake. I spoke to an expert, a director at the Brennan Center for Justice.
He told me that labeling AI imagery and video is a good first step, but that ultimately, it is on the companies to be the gatekeepers and to protect democracy. >> They have the responsibility to ensure, to the extent possible, that anything generated by AI is labeled for the public, but also to increase their trust and safety teams to be on the lookout for coordinated bot activity that could be disinformation campaigns and for fake news sites, and to take those down. I would like to see them take as much responsibility as possible for the integrity of our democracy.
Laura: Policing this is all on the tech companies, because there is no federal legislation mandating that they do it. They have to do it of their own accord, and there is no federal legislation banning the use of AI content in political ads. Even if there were, it would not stop foreign actors from using it.
Amna: What can people do to stay vigilant and not get fooled? Laura: This technology is very confusing for a lot of people, and many may not understand even the labels companies say they will put on AI-generated content. It is not easy to spot on ads, videos or photographs.
The advice experts give is to trust known sources. If you see something that might be fake floating around on the internet, on social media or from an influencer, go to a known news outlet. And if it is a question about voting, go to your local, state or county election officials' websites.
Amna: Great advice. Thank you.