Hi, everyone. I actually had a close personal friend of mine record a little bit of an intro for me. Hold on, let me get this all set up.
Hello, San Diego. This is President Joe Biden here. Sorry, I couldn't make it to Chatting GPT.
It sounds like a fantastic and very important event. But it's my honor and privilege to introduce our next speaker who asked me to say a few words. Now, in my time in public service, I've gotten to know some exceptional, brilliant minds.
But when it comes to the impacts of artificial intelligence on truth and knowledge, there's really only one person I turn to, and that's Professor Stuart Geiger, [LAUGHTER] who teaches both in communication and data science here. Not only is he the most brilliant person who I've ever met, [APPLAUSE] he is also my very best friend in the entire world. [LAUGHTER] Please don't tell Barack that.
I met Stuart back in 2010 when he was studying misinformation and Wikipedia for his PhD. Anybody could edit Wikipedia, so I kept changing the photo of Mitch McConnell to a turtle [LAUGHTER]. Within two minutes, he caught me, but let me off with a warning.
Stuart has dedicated his life to unraveling the mysteries of AI and how it impacts our lives. He's a lot like that scrappy kid from Scranton fighting for the little guy, cutting through the malarkey. Although I never had hair like that majestic mane.
[LAUGHTER] Straight Talk folks. Technology is moving faster than a high-speed train, but are we sure it won't get derailed and hurt people? And where are the tracks going anyway?
That's no joke. We are more divided than ever, and we know people will use AI to pull us farther apart. Sure, we've always had misinformation, which back in my day we just called malarkey, or propaganda.
Doctored photos, quotes taken out of context. Some people feel no shame about straight-up lying to your face. But something seems different.
Every week, it seems, there's a new AI released that can bring benefits for sure, but can also act in dangerous ways and bring real disruptions to people's lives. There's a hot new AI startup that makes a voice-changing app. It's like ChatGPT, but for audio. They make money letting anyone impersonate politicians and celebrities.
They say they only need a 15-minute recording of anyone's voice to build a model that lets you speak in real time as if you were them. They encourage users to upload recordings of other people's voices. There's a whole gallery to choose from.
The most popular model, the one that they put the most time into that works the best, and the one that you don't have to pay for? It's my voice. They stole it from me.
They say it's only for parody and entertainment. But we all know what's going to happen with this technology in this political climate. You're wrong, Joe, this is brilliant because now if I get recorded saying anything, I can just say fake AI, it's the new fake news.
I love it. Hold on. I thought we banned Donald from the Discord.
We did. Yes, we banned Donald from the Discord server. But that's not him speaking.
Look at the icon. It's your buddy Barack. [LAUGHTER] I got you there, Joe.
Isn't it crazy what these AI technologies can do these days? Hold on. Give me a minute.
I'm just trying to record an intro for a talk that Stu is going to give at this Chatting GPT event at UC San Diego. Stu Geiger, I love that guy. [LAUGHTER] He has really smart things to say about AI.
Well, yeah, that's exactly what I'm saying, or trying to say. Is he going to talk about fake audio and video? Did you see that TikTok of you, me, Trump, and Drake, where we were all at McDonald's arguing about what to order?
That was hilarious. You know I'm not on TikTok, Barack. But, yes, I saw it on Reddit [LAUGHTER].
Did you know that I made it? With a little help from Sasha and Malia, but it's really easy to do now. I didn't have to do any coding. I just downloaded this app, chose my politician, and spoke into it.
But this isn't just fun and games anymore, Barack. You're not in office anymore. I am.
If there's fake audio that makes people think the President of the United States is saying something, there are real political consequences. Well, look. Back when I was president, they didn't need AI to push the conspiracy theory that I was some secret radical Muslim born in Kenya.
Did they? No, but just imagine if they did. There will always be people who think the only way to win is to lie and manipulate the truth, no matter what technology is out there or how easy it is to make and spread fake news.
Look, what is crucial are institutions that we trust to anchor us to the truth, to report on the world accurately and reliably, to filter out all the BS. Do you just want to take over and give Stuart's talk for him?
Well, after all those late nights the three of us spent playing Minecraft together, talking about the future and the ethics and politics and sociology of artificial intelligence, I probably could give his talk for him. Really? Let me just finish up this intro and then we can go to Area 51 and take a joy ride on the UFOs.
All right, sorry about that, Stu. Here we go again. We are living in a brave new world.
I don't know what to trust anymore and I bet you don't either. But you know who I do trust? Professor Stuart Geiger.
[LAUGHTER] Give it up for the eminent scholar who's making sure we don't get lost in the digital sauce. Stu, what do we do? [APPLAUSE] That took me about four hours, and that included the first time I ever used this particular app.
I'm not going to tell you what it is because I don't want to give them free publicity, but it was the first time I ever used it, and a lot of that time was also spent coming up with the script. You can see here on their gallery: famous politicians, celebrities. Again, you can upload voices, and again, it requires no programming experience, no coding, just the ability to take a 15-minute audio clip and turn it into this for anyone else on the Internet to use.
This is going to build on some of the things that Professor David Danks was talking about earlier, and this should terrify you. But I also want to note something that I did, which, and this pains me a little, was to put a disclaimer there. I thought that was a responsibility I ought to take.
Well, I'm a big fan of parody. I'm a big fan of satire, of irony. I think that plays a really important role in society; it has for millennia. By the way, if you haven't seen the Supreme Court brief that was filed by The Onion in support of a recent case around parody, you should read it, because it's a masterwork of the genre that makes the argument that parody performs a very important social function precisely by tricking people into thinking it is real.
I wanted to have something where, if I had played that for you without giving the disclaimer, you would have had to exercise your critical thinking skills. You might have actually believed, oh, maybe this guy does know Joe Biden, I don't know.
But then slowly, as I pivoted in the style of parody and satire from the serious to the funny and back and forth, making it more and more ridiculous, you would exercise your critical thinking faculties, and therefore have the ability to critique and comment on this. But I chose to undermine that. Again, I really believe in humor. I feel it's important.
I wanted to make you all laugh, but I decided that wasn't as important as putting up a disclaimer, given the context of where I am. We're at a university, we've got UCTV here, we're presenting to a very large audience. I am speaking directly in my capacity as a professor at a research institution, and so the thing that I was absolutely terrified of, and the thing that I decided to do to be a responsible user of this technology, was to put up a disclaimer, because this is the last thing that I want.
This is also fake. I'm pretty good at making fake stuff. [LAUGHTER] Well, I also didn't want something like this to happen, or if it did, I wanted there to be some original record.
That way the fact checkers could go back and say, no, this actually was part of this long presentation, here's the official canonical video, in the same way that we've been debunking misinformation since the beginning of recorded history through institutions. I wanted to have Barack Obama say that about institutions because I want to believe that we live in a world in which our institutions of truth can protect us from this, that they are flexible. We had things like this that you might have seen go viral.
This was one of those things that went viral. It was actually published before the arrest actually happened, which was a key piece of evidence used by all the fact checkers to say, you might have seen this photo circulating around. They didn't use any algorithmic techniques; this wasn't a technical solution to a technical problem. It was a social solution to a social problem: they relied on our existing institutions to triage and say, we had a lot of news organizations there, and all of them showed him in different clothing.
These things have been core to journalistic institutions and other institutions whose role in society is to anchor us to the truth and to filter out all the BS. We've been doing this for quite a long time. This is the first famous war photo, from back in the days when photos took many minutes, sometimes hours, to produce; you couldn't catch action shots.
It turns out this photo was staged a little bit. There's evidence that in the first photo he probably took, there are no cannonballs on the road, while in the photo that ended up getting published, there were a whole bunch of cannonballs. There's a lot of evidence in the record suggesting that this photograph, the famous first war photograph ever taken, spread across Europe during the Crimean War, was staged in a certain way. We've been having to figure these things out as a society for a long time.
We've also had propaganda for a long time. We've had nation-state-level actors and large multinational corporations with the resources to develop and spread whatever messages they want. We still do it today in analog form, even dropping off pamphlets to spread a particular message with a particular impact.
But one of the things that's happening today with these automated generative technologies is that the scale is increasing and the cost of labor is going significantly down. It used to be the case that you had to be the security services of a nation-state, or a large multinational corporation, to have the human labor power necessary to carry out what are called influence operations.
But one of the things that we're seeing is that we used to be able to catch people who tried to do this with fewer resources, or who did it sloppily, not very well at scale. For example, this is something that made the rounds during some of the Amazon unionization fights: this is clearly a fake account, it's got no history. You can do a reverse image search and find it's a stock photo of a guy who just happens to be a happy Amazon employee who doesn't want to shell out a hundred a month for lawyers.
This was caught pretty easily because they probably didn't put much work into it. But soon we're going to have entire accounts with histories that seem real going back years, and the techniques we were used to for identifying fake accounts and things like that won't be enough. We're going to have to adapt, we're going to have to get better at that.
Our institutions are going to have to get better; we're all going to have to get better. But the problem is, our institutions aren't in great shape right now. I wish we had a thriving, functioning set of institutions: enough of them to keep each other accountable, enough of them well-resourced enough to go not just where the news headlines are, but where the next headline might be.
But currently, from education to science to journalism, we are in moments of crisis around this. We are going to be spending a lot more time being suspicious of each other, and that also takes a lot of labor. We can't just put a magnifying glass up; we're going to need new technologies around detection.
Some of that is going to have to come from these developers themselves. One thing that I want to note, though, is that this takes a substantial amount of work. Let me shift gears; this next one isn't a misinformation story, but there's a famous case of a sci-fi magazine that had to cut off submissions because they kept getting flooded with AI-generated stories. You can see this graph of the number of submissions per month: it peaks in December and keeps rising.
The editor said something that I empathize with. He said he wanted to believe, and wanted to live in a world, where he could just say: whether it's AI-generated or not shouldn't matter; if it's good fiction, we'll publish it, and if it's bad fiction, we won't. I want to live in that world where we can apply that, but that's not the world that we live in. This publisher didn't have the labor to be able to do that.
It takes time, it takes effort, it takes money, it takes dedication, it takes specialized roles, and so we're seeing this being overwhelmed time and time again, sector by sector. I'll introduce you to Brandolini's law, also known as the bullshit asymmetry principle. It's not a law law, but the idea that the effort of debunking misinformation is much greater than the effort of creating it in the first place: the amount of energy needed to refute BS is an order of magnitude, ten times, more than what is needed to produce it.
Again, not an exact figure, but the point is that it's harder to debunk misinformation than it is to spread it. In fact, there's an interesting study that was done. Lots of parts of the federal government are required to have a public notice-and-comment period before making major changes to policy.
It's in the law. If they don't do this, it's a violation of the Administrative Procedures Act, so they have to receive comments from the public. One researcher basically used generative AI to create a whole bunch of comments and actually submitted them to the FCC's notice-and-comment website.
They did not get detected, and he took them down right before the end of the notice-and-comment period. No one detected them, and he had a range of different people look at the text, and they weren't able to detect it either. But the thing is, he wasn't using ChatGPT, GPT-3.5, or GPT-3. He was actually using GPT-2. And it would also be the case for GPT-0, I guess, or GPT-negative-one, which is just a Mad Libs-style thing where you're randomly filling in a template sentence over and over again.
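The Mad Libs-style approach just described can be sketched in a few lines. This is a minimal illustration with invented phrase lists, not the researcher's actual tooling; the point is only how cheaply a template can churn out superficially varied comments:

```python
import random

# All phrase fragments below are invented for illustration.
OPENERS = ["As a concerned citizen,", "Speaking as a taxpayer,", "As a local resident,"]
STANCES = ["I strongly oppose", "I cannot support", "I urge you to reject"]
TOPICS = ["the proposed rule change", "this policy revision", "the draft regulation"]
REASONS = [
    "because it harms consumers.",
    "because it ignores public input.",
    "because its costs outweigh its benefits.",
]

def generate_comment(rng: random.Random) -> str:
    """Fill the fixed template with randomly chosen fragments."""
    return " ".join([rng.choice(OPENERS), rng.choice(STANCES),
                     rng.choice(TOPICS), rng.choice(REASONS)])

# Generate a small batch of template comments with a seeded RNG.
rng = random.Random(42)
comments = [generate_comment(rng) for _ in range(5)]
for c in comments:
    print(c)
```

Even this trivial generator can produce dozens of distinct paragraph-sized comments, which is exactly the scale at which human reviewers struggle to spot the pattern.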
Humans can't detect this either, especially when we're talking about paragraph-level comments or text the size of social media posts. But one thing is that this idea also aligns and intersects a bit with classical political organizing techniques. In one sense, this is actually savvy political organizing.
There are guides on how to create an email template to help your social media followers flood politicians with messages about a protest. This is one from Amnesty International around the No Ban No Wall movement at the beginning of the Trump administration; it has sample Facebook posts to copy and paste and maybe modify a little bit, where it's also acting as a kind of artificial organizing. This has been a constant debate, again pre-AI, about what counts as legitimate versus illegitimate forms of political organizing: is this astroturfing, or is this savvy organizing skills in the digital age?
One of the things that I think is also going to be happening is second-order effects, independent of whether or not these things are actually happening on social media. We actually don't have a lot of evidence right now to know: is the GPT misinformation apocalypse upon us, and how much is it actually happening? We don't really have good methods to detect that.
But even the fear of this, I think, is leading to certain initiatives. For example, there's a lot more talk around requiring real-name policies or ID requirements to be able to post on social media. I don't quite know how I feel about that, because requiring a government-issued ID to post on the Internet can lead down some pretty dangerous roads as well.
I wanted to end by talking a little bit about the responsibilities of developers in this. This is the AI voice tool that I shared earlier. They actually have an ethics page on their website and so I clicked it as one does.
They first said, "Yes, we want to help focus on the positive uses of voice technology." Every technologist thinks their technology is ethical because it does what they designed it to do. We always think the things we want to do are good things, or else we wouldn't be doing them.
They said, "We want to help people who maybe have disabilities, or are trans and maybe want to try out a different voice." They had a couple of these positive use cases. But they also said, "We want to help prevent the misuse of voice technology. We provide an API to detect fake speech and prevent misuse."
If they did that, it would actually be some good due diligence: keep a record of everything that's generated, have some ledger that the fact checkers can go back to, something that can plug into our existing institutions that anchor us to truth, and provide resources and documentation so that we can say, this clip is a variation, slightly blurred to get around the detectors, but we found it was actually generated here before. So I contacted them on their public forum. I said, this is great, where can I find this?
They said, "Well, sorry, the feature is currently being finalized. It's not available at the moment." So their ethics page contains, am I allowed to say it?
A lie. [LAUGHTER] To conclude, I think we should be thinking about this in terms of a common problem in society that we've thought about how to tackle before in different sectors, which is externalities: the idea that there is a private benefit and a public cost. These companies are making money and getting massive amounts of attention, while causing harms for the entire rest of society.
Think about how many labor hours those of you who are in education have had to put into rethinking your own curriculum and curriculum design because of GPTs. That is a cost the public is bearing for private gains in innovation. Same with pollution: a private gain to the factory, a harm to the rest of society.
I think that we should see these as actually quite similar to other public policy and technology problems we've been dealing with for a very long time. I also want to conclude by saying that this is something that plugs into existing institutions and existing markets. There's a market for disinformation.
There is demand for disinformation from society. The fact that supply has increased, that it's a lot easier to generate, also depends on the fact that there's demand for it, that there's big money to be made. The Federal Trade Commission, for example, has authority and standing from Congress to regulate unfair or deceptive trade practices.
I think it might be worth considering whether certain disingenuous or unlabeled uses of this technology, in a commercial context or in the context of a political campaign, might constitute unfair practices, which we already have laws regulating. This might not stop every single person, but if you want to use this technology as part of existing institutions, existing organizations, existing companies, we have strong mechanisms like whistleblower protections. I think there's a lot to be done in talking about technological solutions.
But I don't want us to forget that we have social and political institutions that have been trying to deal with problems like this for a long time, not just fake news, but everything from tax fraud to insider trading to violations of internal government policies. It only took one person to expose the NSA's entire surveillance apparatus when Edward Snowden leaked those documents. There are all these different avenues.
I think that we need to be thinking about these things together and thinking about especially the role that technologists play and their responsibilities to society. Thank you very much.