Artificial intelligence (AI) has become an integral part of our lives, and its influence is expanding at a rapid pace. With that rapid growth, however, come concerns about its potential negative impacts. In a recent interview on WERC, AI commentator Raven Harrison highlighted the dangers and implications of AI in the context of the upcoming 2024 elections.
WERC July 17th, 2023 with Raven Harrison - Highlights
During the interview, Harrison shed light on the risks of using artificial intelligence in politics, drawing attention to the fact that AI is not just automation or simple algorithms but a powerful tool capable of generating automatic responses and mobilizing information within minutes. Political attacks and counter-responses can now be devised and disseminated almost instantly, drastically reducing the time it takes for misleading or fabricated content to go viral.
Harrison pointed out that anyone can now be a content creator and launch protected campaign speech, making it possible to manipulate videos, audio recordings, and text to falsely attribute statements to candidates. This amplifies the risk of misinformation, as fabricated content can easily be crafted to damage a candidate's reputation or manipulate public perception.
- JT: Joining us now, Raven Harrison, our buddy back in to talk about what's going on with artificial intelligence and the elections. Hey, Raven, welcome back in. Thanks for being here.
- Raven: Good morning. Good to be here.
- JT: So, yeah, we've seen it, and I'm sure you have: artificial intelligence is real. It's in our lives and it's moving at the speed of light. Lot of good things with this, but as even Elon Musk said, we need to slow it down, put the brakes on it, raise some red flags. What about the bad side of artificial intelligence? That's where I'm afraid we're going to be, with artificial intelligence and Democrats specifically using it to potentially, I don't know, influence elections. Your thoughts on where we are with AI and next year's election?
- Raven: I highly recommend everybody rewatch I, Robot. And we talk about the dangers of what's going on with AI. A lot of people don't even understand what that means. They're saying, oh, it's automated, artificial intelligence. No, it's more like, you know, automatic information, and that's where we are. So what this means is, what I can tell you: Biden announced that he is running again for reelection, and within minutes there was a response, an automatic, AI-generated response from Republicans, this attack ad that was out and across the social media platforms. And that's what you're looking at right now. So AI means that we no longer have to go through these specific content coordinators. Information can be pulled across the Internet, across multiple platforms, and mobilized against a candidate within minutes, taking the response time from days to minutes. That is huge. So in layman's terms, JT, I could criticize you before you even finish the speech.
- JT: Right. Right.
- Raven: Now criticism is out, and anybody can be a content creator now. That means that you don't even have to have special tools. Anybody can start launching protected campaign speech against or for a candidate.
- JT: Artificial intelligence is like Photoshop or radio and video editing on steroids. You can take people's voices, you can take videos, you can do anything you want to make a candidate look horrible and make it sound or look like they're speaking words that were completely made up. So you could have a candidate that you hate jumping on there going, you know what, I can't stand the farmers of America, and when I become president, I'm going to destroy the entire industry. Next thing you know, it goes viral and everybody's thinking, what, how can that candidate say that? So what's the watchdog on all this, Raven? Who's going to make sure, who's going to fact-check that everything that's out there is real? I mean, talk about a daunting task.
- Raven: Well, you got that right. Who polices the police? And right now we're in a thing where we've got question marks on our DOJ and everything else. You know, hey, we found blow in the White House. We may never find out, wink, wink, wink, who it is. And at the end of the day, that's an area that has facial recognition software. They have video surveillance, metal detectors, and military details, and they're saying they don't know who brought coke into the White House. But with this, oh, this should be fine. Don't worry about this. You know, there are no regulators. The FEC is saying, we're not going to touch this. Okay, we're just going to let it go and see what happens. What could go wrong? So it just depends on what generation you're talking to. Hey guys, what do you think, we just automate these elections? The younger generation: woohoo. The older generation is like, what's that now? What's that you're getting ready to do to our vote and to our elections? We are in a really, really dangerous area. To say that this is a slippery slope would be an understatement; this has the gravity of a black hole.
- JT: Yeah. I'm telling you right now, any major group of people that are supporting a certain candidate, all the other team has to do is come up with an artificial intelligence speech from that candidate saying they hate that group. And next thing you know, the group bails on them and it swings the election. I mean, this is really serious stuff here. And I hope we're going to have some fact-checkers out there and people on top of this. But I think we all have to realize going into this that what you see and hear may not be the truth over the next election cycle. Raven Harrison, thank you so much. 8:30, Alabama's Morning News. We've got traffic and weather together. We've got Leah Brandon and Fox News in 3 minutes.
Artificial Intelligence Wrap-up
Given the potential misuse and manipulation of AI in political campaigns, there is an urgent need for robust fact-checking mechanisms and regulations. However, the question of who will ensure the authenticity of online content remains unanswered. With doubts surrounding the efficacy of existing regulatory bodies, such as the FEC, the responsibility falls on the public, journalists, and technologists to critically analyze and fact-check the information they encounter.
As we approach the 2024 elections, it is crucial for voters to be vigilant and informed consumers of media. Relying solely on social media platforms or unchecked sources could lead to the unwitting propagation of false narratives. By encouraging media literacy and demanding transparency and accountability from both political actors and technology companies, we can safeguard the integrity of our democratic processes and ensure that AI serves as a tool for positive change rather than a weapon.
To combat the potential threats posed by AI in politics, we must take proactive measures. Stay skeptical of content you encounter online and verify information through credible sources. Support organizations working on fact-checking initiatives and advocate for increased transparency and accountability in political campaigns. By actively engaging in media literacy and pushing for responsible AI use, we can work towards safeguarding our democratic processes from the manipulation and misinformation that could be facilitated by artificial intelligence.
Remember, as technology evolves at an unprecedented pace, we must also adapt our safeguards and regulations to ensure a fair and informed electoral system. The future of our democracy depends on it.