Tech company OpenAI is warning that state actors around the world are using generative artificial intelligence to run covert propaganda campaigns. The company told the Washington Post that it found groups in Russia, China, Iran, and Israel all using its technology to try to build and launch social media campaigns. OpenAI has since addressed the issue, but warns that there are still more groups out there using the technology. For more, let's bring in Washington Post tech reporter Gerrit De Vynck. So Gerrit, first of all, just remind us what generative AI is, and what are the concerns as
more of these tools become available? Yeah, so generative AI tools are things like image generators and text generators. We even have video generators now that use AI, where you put in a prompt, say, "Can you make me a photo of a cat riding a skateboard?" and it generates that photo. Probably the most popular tool that people will know of or have interacted with is ChatGPT from OpenAI. That's a chatbot that you can have conversations with, ask it all sorts of questions, ask it to do your homework, ask it to, you know,
try to write the bar exam. And what OpenAI found was that, you know, propagandists from countries like Iran and Russia, these groups that are already well known by cybersecurity experts for trying to push propaganda and, you know, influence discourse online, have started using ChatGPT and other OpenAI tools to help them do their job: to actually generate new posts in English in a way that sounds like a native English speaker, to translate them into multiple languages, and then to use that to sort of boost their propaganda campaigns. So
talk to us then about how bad the implications of that can be. Because when people are thinking about cats on skateboards, or creating some text when you don't want to do your homework, or, you know, writing news copy, I'm just kidding, but in those cases that doesn't sound nefarious. And in this case OpenAI is really concerned. Explain the possible negative implications of this. Yeah, so we've had to deal with these campaigns for years, and a lot of these groups, which are sometimes loosely tied to national governments
themselves, and even employed or funded by those national governments, will, you know, go online, they'll respond to comments, they'll try to stoke up debate. We've seen this in US elections, we've seen this in elections all over the world, as these countries try to, you know, pit people against each other by stoking up debate online. And I think a lot of people who have spent time on social media have maybe even interacted with some of these things, and they say, oh, this is a bot. You know, most of us,
maybe not all of us, but a lot of people can sort of say, okay, this person seems a little bit too interested in, you know, making me angry or making me believe something that I don't think is true. And so I think this stuff has already been out there, it's already a problem, it's already something people should be aware of. But with these generative AI tools, the concern is that they can improve these things. So in the same way that you might get a fraudulent email asking you for your Social Security number,
you know that if the English is very poorly written, it's probably not from your bank. Now you might be getting an email that is written in perfect, professional business English because of a tool like ChatGPT. And the other implication, potentially, in addition to the misinformation, is how believable it becomes, particularly with deepfakes. So what are OpenAI and other technology groups doing to try and address that? Yeah, I mean, I think the companies talk a lot about it. They talk a lot about how they're concerned. You know, there are other companies,
very powerful tech companies, Google, Microsoft, Facebook, that are building these tools and putting them out into the wild. And they do talk about trying to create deepfake detectors, so using their own technology to be able to tell, oh, this image was created with a deepfake generator, or maybe even putting some kind of signal on real images so that, you know, if a photo was taken by a camera, there's a way to track that and to know that. But at the same time, experts aren't sure whether these technologies are actually going
to work, and the companies keep pushing these tools out because it's in their interest to sell more AI, to make it seem like they're at the cutting edge of technology so that Wall Street is more interested in them. And so I think with anything coming out of the big tech companies, you need to be skeptical. And you need to know that when you're online, there's going to be stuff that might look real but probably isn't, and you need to use all those same tools that people have had to use for years now with disinformation
online: double-check things, see where information's coming from, ask whether you trust that source. None of that changes. All right, Gerrit De Vynck, thank you.