This film was produced by the Proceedings of the National Academy of Sciences and was made possible by support from the Pulitzer Center.

The world, it seems, is in a battle for truth. Social media have become an accelerant for the spread of both misinformation and disinformation about everything from climate change to public health to politics, and the rise of generative AI will only make the problem more intractable. Left unchecked, misinformation will do real damage to people and institutions. But now, at multiple forums around the world, interdisciplinary researchers are coming together to pinpoint the nature of this infodemic and discuss solutions.
I'm Sander van der Linden. I'm a professor of psychology at the University of Cambridge, and I study how people are influenced by misinformation and how we can counter it. I think misinformation really impacts us in two different ways: it can be direct, in terms of threatening our own health, but it can also be more indirect, in terms of eroding trust in democracy and the electoral process.

So what exactly is misinformation? It's defined as any sort of false or misleading information, whether on social media or even in a journal paper. Disinformation, on the other hand, is false information spread with nefarious intent. Whatever the classification, deceptive news has an impact, and the extent of the impact and the spread has as much to do with human psychology as it does with the rancor of the message itself.

Social media platforms are designed for engagement. They want more people to be on the platform, and they want these people to engage with one another. When they see that there is some engagement going on on a post, they spread this post even further, so that they will maximize the engagement.
One researcher set up an experiment: she rewarded one group of people for sharing accurate information online, and she rewarded another group for sharing misinformation. When we took the rewards away, we looked at their behavior. Those rewarded for accurate information continued sharing accurate information even when the rewards went away. But what is even more interesting is that, when asked, everybody was super motivated to share accurate information, even when they didn't do it. This is telling us that it's not our motivation that's driving our behavior on these platforms; it's the habits that we are building on these platforms that drive the ultimate behavior.
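A toy simulation can illustrate the habit account (a sketch of the idea only, not the study's actual model or data): rewards strengthen a sharing habit, and the habit, rather than stated motivation, keeps driving choices after the rewards stop.

```python
import random

# Toy illustration of habit-driven sharing (a sketch of the idea, not the
# study's actual model or data): rewards strengthen a sharing habit, and the
# accumulated habit persists once rewards are removed.

def simulate(rewarded_trials=100, unrewarded_trials=100, seed=1):
    random.seed(seed)
    habit = {"accurate": 1.0, "misleading": 1.0}  # initial habit strengths

    def choose():
        total = habit["accurate"] + habit["misleading"]
        return "accurate" if random.random() < habit["accurate"] / total else "misleading"

    # Phase 1: sharing accurate posts is rewarded, which reinforces the habit.
    for _ in range(rewarded_trials):
        action = choose()
        if action == "accurate":
            habit[action] += 1.0

    # Phase 2: rewards removed; the accumulated habit keeps driving choices.
    shares = {"accurate": 0, "misleading": 0}
    for _ in range(unrewarded_trials):
        shares[choose()] += 1
    return shares

print(simulate())  # accurate sharing persists even though the rewards are gone
```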
The research underscores the notion that at least some of the onus is on the platforms, and they do in fact have teams in place to mitigate misinformation.

Content moderation, at the end of the day, is always going to be about trade-offs, and about trying to figure out how to balance those trade-offs. I think it's really important for us to have policies in place and be transparent about those policies, and also to make sure that we are getting the balance between free expression and safety right.

Facebook spends lots of time and money on moderation, but is it doing enough? For one former employee, the answer is an emphatic no.

Facebook does not allocate spending based on absolute need; it does so based on fear of consequences. I don't think Facebook ever set out to intentionally promote divisive, extreme, polarizing content. I do think, though, that they are aware of the side effects of the choices they have made around amplification, and they know that algorithmic ranking, that is, engagement-based ranking, keeps you on their sites longer: you have longer sessions, you show up more often, and that makes them more money. Amplification algorithms, things like engagement-based ranking on Instagram, can lead children from very innocuous topics like healthy recipes to anorexia-promoting content over a very short period of time.

In October 2023, more than 40 states sued Meta and its platforms, claiming they're detrimental to children's mental health. In January, Meta announced controls that it says will hide harmful Facebook and Instagram content from kids' feeds.

Oftentimes it's not enough to tell people the truth; after all, one person's truth can be another person's conspiracy theory.
So how do you convince people they've been misled, without you yourself becoming an arbiter of truth? One approach: reveal the strategies of those attempting to deceive.

My colleague Carl Bergstrom and I, many years ago, started gathering lots and lots of examples of things we were seeing in our personal world and our professional world when it comes to misinformation. We wanted to teach students, and the public more broadly, how to call BS. Here's an example from a scientific paper. What this paper purports to do is show a relationship between the prevalence of autism and the level of coverage in a population by the measles, mumps, and rubella vaccine, the MMR vaccine. At first glance, it looks kind of bad, until you notice that the two axes are completely different: one goes from 0 to 6%; the other starts at 86% and goes up to 95%. Here's a more reasonable graph, again with two different axes. Fine, the numbers are on different scales, but in this case we're including zero on both sides, and what we see is that there are very small fluctuations in MMR coverage and minuscule changes in autism prevalence, if anything indicating that these are unlikely to be causally connected.
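The axis trick is easy to reproduce. The short sketch below uses invented, illustrative numbers (not the paper's data) to show how a truncated axis makes a nearly flat series appear to move with another, while zero-based axes reveal that nothing is happening:

```python
import matplotlib.pyplot as plt

# Invented, illustrative numbers (not the paper's data): near-flat MMR
# coverage and a slowly rising reported autism prevalence.
years = list(range(2000, 2011))
autism_prev = [0.66, 0.80, 0.90, 1.00, 1.10, 1.16, 1.25, 1.35, 1.47, 1.52, 1.60]
mmr_coverage = [91.0, 91.4, 90.8, 91.2, 91.5, 90.9, 91.3, 91.1, 91.6, 91.0, 91.2]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Misleading version: the coverage axis runs only 86-95%, so a nearly flat
# line is stretched until it seems to track the autism series.
ax1.plot(years, autism_prev, color="tab:red")
twin1 = ax1.twinx()
twin1.plot(years, mmr_coverage, color="tab:blue")
ax1.set_ylim(0, 6)
twin1.set_ylim(86, 95)
ax1.set_title("Truncated coverage axis (misleading)")

# More reasonable version: still two scales, but zero is included on both
# sides, and the coverage line is revealed to be essentially flat.
ax2.plot(years, autism_prev, color="tab:red")
twin2 = ax2.twinx()
twin2.plot(years, mmr_coverage, color="tab:blue")
ax2.set_ylim(0, 6)
twin2.set_ylim(0, 100)
ax2.set_title("Zero included on both axes (honest)")

plt.tight_layout()
plt.show()
```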
The course really arose out of our concern at seeing misinformation traveling in what appeared to be science, in what appeared to be data-driven, evidence-driven kinds of arguments.

The course has proliferated online and at other universities, but it can reach only so many people, and typically a particular demographic: college students. Are there other ways to quickly and simply teach people to detect deception?
Could offering the right sort of insights actually protect people from being susceptible to misinformation? Could it inoculate them, in a sense?

The process of psychological inoculation follows the medical analogy almost exactly. Just as you expose someone to a weakened or inactivated strain of a virus to try to trigger the production of antibodies, helping confer resistance against future infection, we found through research that you can do the same with the brain, by exposing people to severely weakened doses of misinformation, or of the techniques used to produce misinformation.

But how to administer this vaccine?
It started with a game.

We decided to gamify the approach, so we created a game. The initial game was called Bad News, and it was one of the first fake news games: a social media simulation that allows people to step into the shoes of an online manipulator and experiment with weakened doses of the more general tactics used to deceive people online. The issue is that people often don't have the right mental defenses in place when they're confronted in the moment with manipulation, and that's really what the game aims to address.

The game works by encouraging players to find unscrupulous ways to garner as many new followers as possible. The game went viral, and van der Linden saw encouraging results; people even seemed to have some immunity to new variants of misinformation tactics they hadn't seen before.
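The follower-versus-credibility trade-off at the heart of such games can be sketched in a few lines (a loose illustration only, not the actual Bad News code; the tactics and payoffs here are invented):

```python
import random

# Loose sketch of the game's core mechanic (invented tactics and payoffs, not
# the actual Bad News code): manipulative tactics gain followers quickly but
# erode credibility, mirroring the trade-off the game teaches players to see.

TACTICS = {
    # tactic: (follower gain, credibility change)
    "balanced report":  (5,  +2),
    "emotional appeal": (25, -5),
    "fake expert":      (40, -10),
    "false dilemma":    (35, -8),
    "conspiracy tease": (50, -15),
}

def play(rounds=5, seed=7):
    random.seed(seed)
    followers, credibility = 0, 100
    for _ in range(rounds):
        tactic = random.choice(list(TACTICS))
        gain, cred = TACTICS[tactic]
        followers += gain
        credibility = max(0, credibility + cred)
        print(f"{tactic:16s} followers={followers:4d} credibility={credibility}")
        if credibility == 0:
            print("Your audience stops believing you. Game over.")
            break

play()
```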
But a realization set in: not everyone wants to play a game. So the researchers devised animations that humorously illustrated common tactics.

An example would be a false dilemma: you present two options and pretend there are only two, when in fact there are many more. Politicians love to use that technique, but so do disinformation producers. You cut out all of the nuance: either you stop watching the lamestream media, or you want all puppies to die.

Makes sense, right? No? Good, because it shouldn't. It's a common manipulation technique called a false dichotomy, or a false dilemma. It's designed to make you think you've only got two choices to pick from, when in reality there are more. As with our little dilemma at the beginning, there's no reason why you can't watch mainstream media and want all puppies to live; the two don't rule each other out. And by presenting you with an option that is clearly undesirable next to the option the manipulator wants you to pick, your choices are narrowed down for you.

Sometimes they use sensationalism; sometimes they appeal to people's emotions. And that can lead people to calibrate their judgments and say, well, I don't find this headline as reliable now that I know they're trying to use my emotions to influence me. But according to our definition, that's okay; we want people to be aware of manipulation, regardless of where it's coming from.

Van der Linden refined the videos before inserting them into time slots normally used for ads on YouTube. The campaign reached millions. Those exposed, the researchers found, were 5 to 10% better than controls at recognizing the manipulation technique in a test headline. It's not a big effect, but it's a potentially important start, especially given what's at stake: if social media has become an accelerant for misinformation, emerging AI tools promise to spread the deception even further, and with alarming precision, allowing bad actors to perfectly tailor their morsels of misinformation for maximum impact.
When we get into the world of generative AI, you can now generate 10,000 different lies and figure out what goes viral. Do you want to talk about abortion, LGBT+ issues, gun control? You can have an AI generate misleading arguments on those issues really fast, and then you can use that to tailor them to different audiences. In fact, you can even link some of the output to bot accounts and automate the whole process.

The Syrian regime has whole rooms full of people who report journalists' content to try and alter our information space, say on Facebook or on Twitter. Now, instead of having to hire people, you have generative AI systems that can affect this space. This is a massive risk, and it's absolutely one that we need to be concerned about, and it is novel in its amplification, its scale, and its cheapness.

Regulations, technical controls, educating about AI capabilities: all of these approaches are in play. Some are interested in embedding a type of watermark in AI-created content to make it easier to identify. Ironclad solutions, however, remain elusive.
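The film doesn't specify a watermarking scheme, but one family of research proposals biases generation toward a pseudorandom "green list" of tokens and then tests for that bias statistically. A minimal sketch of the detection idea, with hypothetical parameters:

```python
import hashlib

# Toy sketch of a statistical "green list" watermark, one family of research
# proposals (hypothetical parameters, not any vendor's actual scheme): a
# generator prefers tokens whose hash, given the previous token, lands in a
# "green" set; a detector flags text whose green fraction is improbably high.

def is_green(prev_token: str, token: str) -> bool:
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0  # half the vocabulary is green in any context

def green_fraction(text: str) -> float:
    tokens = text.lower().split()
    flags = [is_green(a, b) for a, b in zip(tokens, tokens[1:])]
    return sum(flags) / max(1, len(flags))

# Ordinary human text should hover near 0.5; watermarked generation would be
# pushed well above it, so a high fraction is evidence of machine authorship.
print(f"{green_fraction('the quick brown fox jumps over the lazy dog'):.2f}")
```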
How do you design social networks to be safe by design? How do we study how information flows? How do we put in moments of intentionality and friction? How do we make these systems ones that we don't need to censor, because they're not amplifying the most extreme stuff?

To some extent, the way to tackle AI-fueled misinformation involves AI itself. AI has been instrumental for content moderation and continues to be instrumental for content moderation; without AI, a tech platform like Meta or some other tech companies would not be able to do content moderation at scale and in a timely fashion.
Unfortunately, moderation, AI-aided or not, may not be enough. Researchers are testing various ways to incentivize the sharing of accurate information.

How are we going to bring this rewarding for accuracy to life? We are now testing different reward structures. What happens if I add a trust button, and if a sharer also sees how many trust rewards he or she got, based on the old posts they shared? And what happens to their future posting behavior?
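A minimal sketch of what such a reward structure might look like in code (a hypothetical design, not the researchers' actual platform): readers award "trust" to a share, and the running tally is shown back to the sharer as a reward signal for accuracy.

```python
from dataclasses import dataclass, field

# Hypothetical design sketch (not the researchers' actual platform): posts
# accumulate "trust" awards, and the sharer sees a running trust score that
# rewards a history of accurate sharing.

@dataclass
class Share:
    author: str
    url: str
    trust_awards: int = 0

@dataclass
class Sharer:
    name: str
    history: list = field(default_factory=list)

    def trust_score(self) -> int:
        # Displayed to the sharer, so future posting is shaped by past awards.
        return sum(post.trust_awards for post in self.history)

alice = Sharer("alice")
post = Share("alice", "https://example.org/article")
alice.history.append(post)
post.trust_awards += 3  # three readers pressed the trust button
print(alice.name, "trust score:", alice.trust_score())
```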
By combining these different interventions at smaller, more reasonable amounts, we saw the most drastic drop in the spread of misinformation.

West looked at interventions that remove posts related to a deceptive one, at ways to automatically shut off sharing if a post spikes in popularity, and at nudges that slow readers down by asking them to consider a claim's accuracy. Each had limitations, but taken together, West found significant knock-on effects.
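The automatic shut-off idea is sometimes described as a virality circuit breaker. A toy sketch, with made-up thresholds (not any platform's actual mechanism): if a post accumulates shares faster than a set rate, resharing is paused so the claim can be reviewed before it spreads further.

```python
from collections import deque
import time

# Toy "virality circuit breaker" (made-up thresholds, not any platform's
# actual mechanism): pause resharing when a post's share rate spikes.

class CircuitBreaker:
    def __init__(self, max_shares=500, window_seconds=3600):
        self.max_shares = max_shares  # shares allowed per sliding window
        self.window = window_seconds
        self.share_times = deque()    # timestamps of recent shares
        self.paused = False

    def record_share(self, now=None):
        now = now if now is not None else time.time()
        self.share_times.append(now)
        # Drop shares that have fallen out of the sliding window.
        while self.share_times and now - self.share_times[0] > self.window:
            self.share_times.popleft()
        if len(self.share_times) > self.max_shares:
            self.paused = True        # spike detected: pause resharing
        return not self.paused

# Simulated spike: the 501st share within the hour trips the breaker.
post = CircuitBreaker()
allowed = [post.record_share(now=float(i)) for i in range(600)]
print("breaker tripped at share #", allowed.index(False) + 1)
```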
In March, West and colleagues expanded their approach to high school students, emphasizing media literacy via games, lectures, and interactive programs. Van der Linden is working on better inoculation delivery; he hopes trusted messengers, whether influencers, role models, or community leaders, can help maximize uptake. And he has started collaborating with educational psychologists, exploring the use of inoculation to make young kids more immune to manipulation before they spend lots of time online.

This past spring, Google embarked on an initiative prior to the EU elections to help prebunk commonly used manipulation tactics, using, for example, ads that served as informative animations.

We are all at risk of manipulation online right now. One tactic used to manipulate opinion is decontextualization. Can you spot the signs? Generative AI now means that anyone can create video, audio, and images that seem real but are not. But there are a few signs to watch out for: surprising, shocking, or out-of-the-ordinary content; a source that doesn't look reliable; video, audio, or images that have been edited or repurposed.

The company says its media literacy resources and educational animations reached millions of potential voters in several countries. Google researchers are still working to determine the extent of the project's reach and efficacy.

Given the considerable challenges, can we really make significant inroads in the fight to stem the spread of misinformation? Can policymakers and platforms orchestrate that balancing act between free expression and safety?

The technology for spreading misinformation is getting better. It's getting harder and harder to respond to just the scaling aspects of this problem.

In the end, a public health approach might be the best we can do: inoculate as much as possible; monitor for outbreaks and quash them before they spread; encourage platforms to improve moderation approaches to meet changing demands. These are the best weapons in the fight to stem misinformation. The stakes are clear: nothing less than a shared sense of truth.