I think something that was less obvious to me at the beginning of this project, but that I keep coming back to now, is that whether something is accurate is not a binary ("this is true, or it's false"), and it's not even a trinary ("it's true, it's misleading, or it's false"); it exists on a continuum. So when you're evaluating a story, it's important to think about it in that continuous way. Today he's going to speak to us on understanding and reducing the spread of misinformation online. I normally would give you an overview of the talk, but it's actually an amalgamation of a variety of different studies that test different interventions against misinformation and so-called fake news on social media. So please join me in welcoming David Rand.

It's really great to be here. I'm excited to be back in Boston, and one element of that is being able to connect with the Shorenstein Center and all the great stuff happening around the Kennedy School. Before I start, I want to say that everything
that I'm presenting today is joint work with Gordon Pennycook, who is my very close collaborator on all things misinformation, and he's really excellent. The other thing I want to do is start with a bit of definition, because when people use some of these terms they mean different things by them. When I say "fake news" or "false news," I'm not talking about anything I disagree with, which is one popular definition; I'm talking specifically about entirely fabricated stories presented as if they're true. I think this is the form of misinformation that has received the most attention, because it's the most striking and, in some sense, the easiest to define, but I actually think it's probably not the most important. There's also what we call hyper-partisan news: biased or misleading coverage of events that did actually happen. My guess is that hyper-partisan content is both much more widespread and probably more pernicious, because it's less obviously false. So the question I'm interested in is: what can
we do to fight misinformation of all varieties? In our research we ask basic-science questions about how people relate to misinformation, and we also evaluate different interventions that try to reduce it. In this talk I'm going to focus on the intervention half of the work. I'll start with the thing that maybe comes most easily to mind, which was one of the first things social media platforms started doing after the 2016 election, when suddenly everybody was worried about misinformation on social media: you can take a post that's false and put a warning on it that says something like "disputed by third-party fact-checkers." Now, there are a lot of problems with this. Some of them are basic user-interface things: nobody is going to pay attention to that or understand what it means; it's very subtle. But I think there's a much more fundamental problem, which is that even if this were done in a really effective way, it's just not scalable.
There's no way professional fact-checkers can keep up with the volume of misinformation being produced. That means, on the one hand, that many (probably most) false stories never get flagged, and even for the ones that do, the process is slow, so during the peak virality period they're not flagged. That immediately limits the impact an intervention like this can have. But it's actually somewhat worse than that. We identified something we call the implied truth effect. Imagine a context where some of the false content you see is flagged with warnings, but other false content is missed. We did some experiments, one of which I'll show here, with about 1,600 Americans. We showed them a series of headlines, some false, some true, some left-leaning, some right-leaning. In the control condition, that was it: for each one we just asked, would you consider sharing this on social media or not? Of the false headlines, they were willing to share a little less than 30 percent. Then, in the treatment, we put warnings on three quarters of the false stories, and we did it in a much more serious way than the little "disputed by third-party fact-checkers" label, to give the intervention the best chance: a big FALSE stamped over the things that are false. What we saw is that for stories with the warning, people were substantially less likely to say they would share them, roughly cut in half. And we didn't see any kind of
evidence of motivated reasoning or reactance, where a warning on content aligned with someone's ideology makes them like it more; we saw nothing like that. Things that got a warning were just a lot less likely to be shared. But then the question is what happened to the fraction of false stories that didn't have warnings on them, the ones modeling the situation where the fact-checkers miss a story. What we find is a significant increase in the probability of sharing the unlabeled false stories. This implied truth effect (the idea being that the absence of a warning implies the story may have been checked and validated) was about a third of the size of the warning effect. So to the extent that those effect sizes generalize, if the fact-checkers are missing more than a third of the stories, they may actually be increasing belief in false content rather than decreasing it. So I think warnings are not the best approach.
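As a back-of-the-envelope illustration of why coverage matters so much here, you can compute the net change in sharing of false stories as a function of how many of them the fact-checkers flag. The specific numbers below (a 14-point warning effect, the one-third ratio) are illustrative assumptions in the spirit of the effects just described, not the study's actual estimates:

```python
def net_sharing_change(coverage, warning_effect, implied_truth_ratio=1 / 3):
    """Toy model: expected change (percentage points) in sharing averaged
    over all false stories when a fraction `coverage` of them get warnings.

    Flagged stories drop by `warning_effect`; unflagged ones rise by the
    implied-truth effect, assumed to be about a third of the warning effect.
    """
    implied_truth_effect = implied_truth_ratio * warning_effect
    return -coverage * warning_effect + (1 - coverage) * implied_truth_effect


# Illustrative numbers: warnings cut sharing by ~14 points (roughly half of
# a ~28% baseline); the implied-truth boost is ~1/3 of that.
high_coverage = net_sharing_change(coverage=0.75, warning_effect=14.0)  # negative
low_coverage = net_sharing_change(coverage=0.10, warning_effect=14.0)   # positive
```

Under this toy model, flagging only a small fraction of false stories makes the average false story more shared overall, because the implied-truth boost on the unflagged majority outweighs the drop on the flagged few.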
Another thing that got a lot of attention, which some platforms have implemented and a lot of civil-society organizations have advocated for, is emphasizing sources. For example, Facebook does show you the source an article comes from, but in a way that seems maximally designed for you not to notice: small, light gray letters in the lower corner. So we wanted to know what happens if you put a big banner with the source website, the publisher, across the bottom. The intuition is that part of the reason people believe misinformation is that they don't notice it's from a bad source, and if they did notice, they would say, "oops, OK, I guess I shouldn't believe this." In a first study to test this, we recruited about 2,000 Americans and again showed them a set of headlines, half true, half false, half right-leaning, half left-leaning. In the control condition we presented the source information as usual on Facebook, and in the logo condition we put the big logo across the bottom. Looking at the fraction of headlines people judged to be accurate, we found completely no effect: control or logo, there was basically no influence on the probability of believing either the false or the true headlines. We also have data on what happens if we completely remove the source information: also totally no effect. We were quite taken aback by this, and we wanted to understand what was going on. We thought about it,
and we realized: well, what should the effect of revealing or emphasizing the source be, just conceptually? You can imagine a space defined by how reputable the outlet is and how accurate the headline seems if you don't know the source. So we ran experiments where we show people headlines without the source, and show people sources without any specific articles, to see how plausible the headlines are and how much the sources are trusted. What makes sense conceptually, and what we in fact see in these experiments, is that you get an effect when there's a mismatch between the plausibility and the source's reliability. If a headline seems plausible but comes from a distrusted outlet, emphasizing the source makes people trust it a bit less; I think that's what people had in mind when they were envisioning this intervention. Similarly, if it seems implausible but it's from a trusted source, people trust it a bit more. But if it's in the middle space where there's no mismatch, where the trustworthiness of the source is aligned with the plausibility of the headline, emphasizing the source doesn't do anything, because it just confirms what you already would have thought about the headline before you knew the source. So what we conjectured is that maybe most headlines are in that middle region, where you're not getting any additional information.
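That conceptual picture can be sketched as a toy model, where emphasizing the source pulls belief partway from the headline's standalone plausibility toward the trust placed in the source. The `weight` parameter is my assumption for illustration, not anything estimated in the studies:

```python
def emphasized_belief(plausibility, source_trust, weight=0.3):
    """Toy model: belief in a headline after the source is emphasized.
    Belief moves from the headline's standalone plausibility partway
    toward the trust placed in the source. All quantities on a 0-1
    scale; `weight` (an assumption) is how much the source cue counts.
    """
    return (1 - weight) * plausibility + weight * source_trust


def source_effect(plausibility, source_trust, weight=0.3):
    """Change in belief caused by emphasizing the source: zero when trust
    and plausibility are aligned, negative for a plausible headline from
    a distrusted outlet, positive for an implausible headline from a
    trusted outlet."""
    return emphasized_belief(plausibility, source_trust, weight) - plausibility
```

The model reproduces the qualitative pattern: the effect of emphasizing the source is proportional to the mismatch, and vanishes along the diagonal where trust and plausibility agree.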
And in fact, for the 24 headlines we used in the study I showed you a second ago, where emphasizing the source didn't do anything, they were all along this diagonal. So we wanted to know how representative that is of the general social media ecosystem. We took a list of 60 sources, some mainstream, some hyper-partisan, some fake news, and we pulled the 10 best-performing (highest-engagement) articles from each of those sources over the prior year. Then we had people assess either how much they trusted the outlets, or how plausible they thought the headlines were when they didn't know the source. We did it with Americans recruited from Amazon Mechanical Turk, which is a totally non-representative but very easy-to-get sample (well, more representative than undergrads), and we also used Lucid, which does give a sample quota-matched to the US on age, gender, ethnicity, and geographic region. What we found is that in both cases the vast majority of the articles were along this
diagonal, where the trust placed in the source matches the plausibility of the headline. Just learning the source doesn't provide any extra information for anything in that region, so you'd actually expect emphasizing the source to do nothing for most of those articles. But you also see there's basically nothing in one corner: an outlet that is trusted basically never produces headlines that seem implausible. That's interesting; presumably part of how outlets come to be trusted is by not producing crazy-seeming headlines. And I think it's maybe a bit of a lesson for mainstream outlets that are getting tempted into clickbaity headlines: there may be some negative consequences to that. But there is actually a good chunk of headlines in the region where they seem plausible but come from sources people don't trust, so emphasizing the source could actually make people believe those headlines less. So maybe this would actually be OK. But then we went and manually fact-checked the hundred-ish
articles in that section, with a very strict truth criterion: if any of the claims in the headline or lede were at all false, we called the whole thing false. Even with that conservative criterion, the stuff up there was predominantly true, more than twice as likely to be demonstrably true as even partially false. That means low-quality outlets produce the whole range of content, so emphasizing the outlet would overall make people's beliefs less accurate, because it would mostly make them discount the true stuff the bad outlets produce, rather than helping them identify the bad content. So I think emphasizing sources is also not the most promising approach. What I'm going to take the rest of the time to talk about is a couple of alternatives we've been exploring that seem more promising. For the first one, we start by taking a more cognitive-science approach, to try to understand what it is that
drives people to share misinformation in the first place; if we understand that, it can guide us to interventions that might actually work. The simplest answer to the question of why people share misinformation is that they just can't tell what's true and what's not: people are not smart enough, or not sufficiently digitally literate, to figure out what's going on. That would be a simple explanation. To get some insight into that
question, we ran a study with about a thousand Americans, from Mechanical Turk again. We showed them a set of headlines, half true, half false, half right-leaning, half left-leaning, and for half of them, in the accuracy condition, underneath each article we asked: to the best of your knowledge, is the claim in the above headline accurate, yes or no? Sorry, one important thing I should say: in all of our studies, all of the headlines we use are actual headlines from social media. We're not making up our own fake news; we're using real fake news, which I think is actually a super important methodological point, because the media ecosystem selects for particular kinds of content in ways that we don't really understand but that are profound. The kind of stories I would make up would be very unlikely to do well on social media, just because most stories are unlikely to do well. So to the extent that you want to understand how people interact with the content that's out there in the world, it's important to really study that content. Anyway, OK: so
half the people get asked, is it accurate, yes or no, and the other half get asked, would you consider sharing this story online, for example through Facebook or Twitter? In these studies we also ask at the beginning, would you ever consider sharing political content on social media, and we drop the people who say no, to get a better signal (although it doesn't make much difference). So now the question is whether there's some difference between their accuracy judgments and their sharing judgments. Start with accuracy: the question is whether they just can't tell true from false, and in particular you might think that all they can tell is whether things align with their ideology. I'm going to show you the fraction of headlines people said were accurate, based on whether they were false or true and whether they were discordant or concordant with the person's ideology. What you see is that when you ask about accuracy, the results are actually pretty encouraging: people rate the true stories as vastly more accurate than the false stories regardless of whether they align with their ideology, and the effect of ideological alignment is really quite small. This suggests that when people are actually trying to evaluate the accuracy of content, they're reasonably good at it, and not that swayed by politically motivated reasoning or partisan bias or anything like that. When you look at sharing, on the other hand (this is the same plot, but showing the fraction of stories people said they would consider sharing rather than the fraction they thought were accurate), the pattern is strikingly different: there's essentially no difference between false and true, and people are much more willing to consider sharing things that align with their ideology than ones that don't. The particularly striking comparison is the politically concordant but false headlines, where people are about twice as likely to say they would consider sharing one as to say that it's true.
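The gap between those two plots can be summarized with a simple "discernment" measure: the difference between rates for true and false items. A minimal sketch, with made-up illustrative rates rather than the study's actual numbers:

```python
def discernment(rate_true, rate_false):
    """Discernment: how much more often true items get a positive
    response (rated accurate, or shared) than false items."""
    return rate_true - rate_false


# Illustrative rates in the spirit of the plots (assumed, not the
# paper's numbers): accuracy judgments discriminate strongly between
# true and false; sharing intentions barely do.
accuracy_discernment = discernment(rate_true=0.60, rate_false=0.20)
sharing_discernment = discernment(rate_true=0.32, rate_false=0.28)
```

The same function applies to both judgment types, which is what makes the accuracy-versus-sharing comparison so direct.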
So I think this suggests the problem is not that people simply can't tell what's false. Then what might it be? The most natural conclusion that comes to mind when you look at those data is: oh, we're in this post-truth world, and although people can tell what's accurate, they just don't care much about accuracy; how partisan something is matters more to them than what's accurate when deciding what to share. They might say to themselves, I know this is false, but I like it, it aligns with my view of the world, so I'm going to share it anyway. To get a preliminary piece of evidence on whether that's an accurate characterization of people's motivations, we just asked them: when deciding whether to share a piece of content on social media, how important is it to you that the content is surprising, politically aligned, funny, interesting, or accurate? (We randomized the order across people.) If people really didn't care about accuracy, and were willing to tell us so (which obviously is a caveat with these particular data that we'll come back to in a minute), you might expect, based on the sharing pattern in the previous slide, that people would rate sharing politically aligned stuff as a lot more important than sharing accurate stuff. The results are strikingly inconsistent with that: people say accuracy is way more important than basically everything else, they say political alignment is really, overall, not very important, and something like 80 percent of subjects said there was no dimension they thought was more important
than accuracy. So in terms of people's explicitly held values, people do tend to think accuracy is important when making sharing decisions. Now, like I said, one possibility is that people are just lying. But put that possibility in contrast with an alternative: maybe people do actually care a lot about accuracy, but the context of social media is basically distracting them, focusing their attention on other things they care about. Say you're someone who cares a lot about accuracy and a bit about political alignment and humor or something like that. If you thought about all the different attributes, you would say, well, this false but politically aligned content is false, so I shouldn't share it. But our idea is that people have cognitive constraints; we can't consider everything at the same time, and the context does a lot to determine what we pay attention to versus not. The idea is that the social media context in particular shines the attentional spotlight on things like partisan alignment, humor, and surprisingness, and away from accuracy, so people essentially forget to consider accuracy when making their sharing decisions. But if they were just reminded of accuracy, so that it got into their consideration set, they would say, oh, actually, if it's false I don't want to share it. The reason we think social media is particularly distracting from accuracy is that there are all these other things that are really at the core of the social media
experience: social validation and reinforcement, the desire to attract more followers, the desire to signal things about yourself to all the people out there listening to you. And there are aspects of platform design that make these things particularly salient, by giving you immediate quantitative feedback on how much people liked what you posted, and by putting you in an echo-chamber situation where you get lots of positive feedback. You also get selective feedback, because in general the people who agree with you are much more likely to say "yeah, awesome" than the people who disagree; some of those will get upset and yell at you, but a lot of them just say, OK, whatever. So it creates a biased feedback system that focuses people on these things that are not accuracy. If that's true, then the prediction this account makes is that if you just make the concept of accuracy more salient to people, more top of mind, they'll be more likely to take it into account when deciding what to share.
To test this idea, we started with some survey experiments. The design starts the same as all the other experiments I showed you before: here's a headline; if you were going to see it on Facebook, how likely would you be to share it? That's the control condition, and in the control condition people are only slightly more likely to say they would share true headlines than false ones, again consistent with what I showed you before. The treatment is exactly the same, except at the beginning of the study, before they know anything about what the main study is about, we say: hey, before you start, can you do us a favor and help us pretest one item for another study we're running that's about accuracy? We show them one politically neutral headline and ask, in your opinion, is the above headline accurate? Then we say, OK, thanks, now you're done, go on to the main task. The point is, we didn't tell them accuracy is important, please try to be more accurate, don't share fake news, fake news is not your friend, blah blah. We just activated the concept of accuracy in their minds, such that when they went on to do exactly the same sharing task, they were more likely to be thinking about accuracy; the attentional spotlight was swung in the direction of accuracy. As a result, they were significantly less likely to say they
would consider sharing false content, but not true content. That translates into something like a 20 percent decrease in the fraction of false stories they said they were willing to share, and more than a tripling of the difference in sharing probability of true versus false, which is what we call discernment. That was the first study we did on this. It works both for headlines that align with your ideology and for headlines that don't; it works for Democrats and Republicans; and it doesn't change answers to the question about how important accuracy is, which suggests it's not manufacturing an accuracy motive, just getting people to pay attention to the concept. We replicated it with a different set of headlines, so it isn't unique to the particular ones in the first study, and we replicated it in a more representative sample using Lucid, so it's not something funny about MTurkers. In the Lucid study we also ran an active control, where at the beginning, instead of asking how accurate you think the headline is, we ask how funny you think it is, and that doesn't do anything. So it's not just that rating headlines makes people more discerning; it's specifically getting them to think about accuracy. We were happy with those results, but these survey experiments are only so compelling, particularly in the context of sharing. We want to know whether this actually works in the wild, where people are making real sharing decisions, so it's not just about wanting to look good to us. And also, as I said at the beginning, I think
the real challenge here is not just blatantly false stuff but also misleading stuff, so we want to know whether this works beyond totally fake news. So what we did next in this project was a field experiment, where we wanted to know whether sending someone a direct message on Twitter about the concept of accuracy gets them to actually improve the quality of what they share. The way we did this is we created a bunch of Twitter accounts, these twelve cooking-bot accounts, trying to simulate something in the general vein of what the experience would be like if the social media platform implemented this. We wanted people to know that the question we sent them wasn't coming from some person, just from some kind of automated account. So the cooking bot says, "I'm a bot that shares and retweets awesome stories about food and famous chefs." The idea was: it's not political, so we're trying not to make people defensive, but just, you know, say, oh,
here's some random thing, whatever. OK, so we've got our bot accounts, and then we found about 136,000 people who had retweeted a link to either Breitbart or Infowars in the preceding few months. We focused on those sites because prior work, including by people here, has suggested that in this current moment in the social media ecosystem, people on the right are much more likely to be sharing low-quality content, so it would give a higher signal; and one might also worry they would be less receptive to accuracy cues, so we're setting ourselves up with a hard test case. So we followed those 136,000 Breitbart and Infowars retweeters, and about 11,000 of them followed us back, for whatever reason. Our evidence suggests that the people who follow us back are from the spammier end of Twitter (they do worse on bot-detection measures), so even within this set, the subset that followed us back are the ones I would think of as less likely to respond to our treatment. We did our best to weed out accounts that really looked like bots: in some waves we threw out accounts scoring above 0.5 on a bot-or-not detection algorithm, and for others we threw out high-frequency tweeters, as an approach to ruling out bots, or political operatives, who we also wouldn't expect to respond to this, because they're not making a mistake; they know what they're doing. OK, so we wind up with a total of about 5,400 users. Now that they're our followers, we can send them private direct messages, and we sent each person a direct message that's basically the exact same thing as in the survey experiment: here's some random headline, and hey, thanks for following me, can I ask you a favor? I'm wondering how accurate the above headline is, and I'm doing a survey to find out, you know, based on the headline,
how accurate you think it is. Basically nobody responds, and basically nobody clicks on the survey, but that's fine, because the point is just that as long as they opened the thing and read the top line, they're treated; that's enough to activate the concept of accuracy. Then we want to know what effect this has on their subsequent tweeting. In order to do causal inference and estimate the actual effect, we used a stepped-wedge experimental design, where everybody receives the treatment, but people are randomly assigned to the day on which they get messaged. That means the people randomly assigned to be treated on day one are the day-one treatment group, and everybody who hasn't gotten the message yet is the control; on day two, that day's recipients are the treatment and the not-yet-messaged people are the controls; so you basically have a mini-experiment on each day (except on the last day, when there's no control left to compare to). Then you compare the quality of the stuff tweeted by people on their treatment days versus control days, looking at the 24-hour chunk after the message is sent, because that's the periodicity with which we were sending the messages. Basically this is a proof of concept to see whether it works; if it does, we can look at things like how the effect decays and how to make it more effective. So the question is: what is the quality of the news shared during the 24 hours after receiving the DM?
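A minimal sketch of that stepped-wedge comparison might look like this. The data structures are hypothetical, and the real analysis used proper statistical models rather than raw means:

```python
def stepped_wedge_contrasts(treat_day, outcomes, num_days):
    """Sketch of a stepped-wedge comparison.

    treat_day[user] -> the (randomly assigned) day the user was messaged.
    outcomes[user][day] -> that user's news-quality score in the 24-hour
    window after `day` (hypothetical data structure).

    For each day except the last (which has no controls left), compare
    users treated that day against users not yet treated. Returns a list
    of (day, treated_mean, control_mean) tuples.
    """
    users = list(treat_day)
    contrasts = []
    for day in range(num_days - 1):
        treated = [outcomes[u][day] for u in users if treat_day[u] == day]
        control = [outcomes[u][day] for u in users if treat_day[u] > day]
        if treated and control:
            contrasts.append((day,
                              sum(treated) / len(treated),
                              sum(control) / len(control)))
    return contrasts
```

Note the design choice this mirrors: because everyone is eventually treated, the control group on each day is simply the set of users whose random treatment day hasn't arrived yet.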
Compared to the people in that same chunk of time who didn't get the DM, that is. In order to do that, we need some way of quantifying the quality of the news. The way we did it is we selected a set of 60 sites: 20 mainstream outlets, taken from a list of the top online news providers in the US (as you'll see on the next slide, some of these are not the best, but they are mainstream, and as we will see, they're better than the other ones); 20 hyper-partisan outlets; and 20 fake news outlets. For the latter two, we put together lists assembled by other people of hyper-partisan sites and of fake news sites, and we said a site counts as hyper-partisan if it's on at least two hyper-partisan lists, and counts as fake news if it's on at least two fake news lists. Then, from each of those collections of sources, we picked the ones with the largest number of unique links on Twitter, basically trying to pick out the most important outlets in each category.
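That list-membership rule could be sketched like this (the domain names and lists below are placeholders, not the actual sources used):

```python
def classify_domain(domain, hyperpartisan_lists, fake_news_lists):
    """Label a domain using the rule described in the talk: it counts as
    hyper-partisan (or fake news) if it appears on at least two of the
    independently assembled lists of that type."""
    hp_hits = sum(domain in lst for lst in hyperpartisan_lists)
    fn_hits = sum(domain in lst for lst in fake_news_lists)
    if fn_hits >= 2:
        return "fake news"
    if hp_hits >= 2:
        return "hyper-partisan"
    return "unclassified"


# Hypothetical example lists; real ones were assembled by other researchers.
hp_lists = [{"siteA.com", "siteB.com"}, {"siteA.com"}, {"siteB.com"}]
fn_lists = [{"siteC.com"}, {"siteC.com", "siteD.com"}]
```

Requiring agreement between at least two independently assembled lists is a simple way to avoid leaning on any single list-maker's judgment.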
So here are the 60 outlets we wind up with. The mainstream ones have things like the New York Times and the Washington Post, and then also Fox, and also things like the Daily Mail, but OK. The hyper-partisan ones have left-leaning things like Daily Kos and Crooks and Liars, and right-leaning things, and we also included Breitbart and Infowars, because we found people were particularly familiar with them, even though their actual volume of Twitter URLs wasn't as high as some of the others. The fake news sites you've probably never heard of, because they're just weird random sites. Then we had this set of 60 sites rated by eight professional fact-checkers, indicating how much they trust each site, and this is going to be our quality score: the fact-checker trustworthiness rating of the sources you tweet. To see what this distribution looks like: the New York Times and the Washington Post are up here, Fox is here, Breitbart is here, Infowars is down here, and all the random ones you've never heard of are down there. One observation is that there's a lot of variation within the mainstream category, but essentially everything in the hyper-partisan and fake news categories is not trusted. I set the zero point here at the average quality of links shared by the users in our experiment; that means these aren't people who one time tweeted a Breitbart link by accident and then went back to tweeting the New York Times all the time.
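The scoring just described, average fact-checker trust over the rated outlets a user links to, re-zeroed at the sample average, might be sketched as follows. The ratings and domains are invented for illustration:

```python
def user_quality(shared_domains, trust_ratings):
    """Average fact-checker trust rating over the rated domains a user
    shared; domains outside the rated set are ignored. Returns None if
    the user shared no rated domains (quality is unmeasurable)."""
    rated = [trust_ratings[d] for d in shared_domains if d in trust_ratings]
    return sum(rated) / len(rated) if rated else None


def center_on_sample(qualities):
    """Re-zero quality scores at the sample average, as in the plot
    where 0 marks the average quality of links shared by users in
    the experiment."""
    mean = sum(qualities) / len(qualities)
    return [q - mean for q in qualities]
```

Centering on the sample average makes the "these users are worse than any mainstream outlet" observation visible directly from the zero line.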
These are people whose average tweet is worse than any mainstream outlet in terms of the fact-checker quality ratings. So we successfully isolated people who are sharing substantial amounts of low-quality content. Okay, so the big question then is: what was the effect of getting our message on the content that they tweet? One way I'm going to represent this for you is at the level of the domain. There's going to be one dot for each of the different news sources in our list that people share, and the size of the dot is going to be proportional to how frequently it was shared in the control, that is, before they got the treatment. On the x-axis is how trustworthy the fact-checkers said the outlet was, and on the y-axis is how much the frequency of that outlet changed after getting the DM, that is, what fraction of the person's total tweets to these rated sites was made up by that news site. And what you see is that
receiving that one random little message, asking how accurate a single random, nonpolitical headline is, increased the fraction of tweets from these Breitbart-heavy tweeters that went to the New York Times, CNN, the Washington Post, and Fox News, and decreased the amount going to the Daily Caller, Breitbart, the Daily Wire, and the Western Journal. I know that Fox News is maybe a border case here, but when you look at the things that link to foxnews.com, those are mostly links to the news side of Fox News and not so much to the talking-head, opinion side. So I think I would definitely prefer people to be reading Fox than Breitbart, although I'm curious whether other people agree. But anyway, the treatment had the effect we were hoping for. You can quantify it: if you split things into the number of tweets to mainstream versus hyper-partisan or fake news sites, the treatment doubles the difference between fake and hyper-partisan versus mainstream sites, and if you look at just the average quality among users who actually tweeted, and therefore whose quality we can measure, it was about a 5% increase in the average quality of the content shared. And because social media data is way more complicated than a survey experiment, there are many, many different ways you could choose to analyze this. So just to show that the results are robust, we tried 96 different analytic approaches, where you vary the outcome measure, the model you're using, how you calculate significance, a whole bunch of different dimensions like which tweets you include or not. And you see that, overwhelmingly, the estimates fall below your
magical p &lt; 0.05 threshold: basically, whatever analysis you choose is in general showing evidence of an effect. So I think this is a proof of concept that this kind of approach could really be helpful, and it's something that I think platforms could very easily implement. In one version of it, while you're scrolling through your newsfeed (and forgive me, my expertise is in running experiments, not clip art and graphic design, so this is my very rough mock-up) a little thing pops up that basically says: help us inform our algorithms, how accurate is this headline? You say inaccurate or accurate and then keep going. From the perspective of this intervention, just asking the question is what's important; you could throw away the responses, because just by asking, it gets people thinking more about accuracy. So if this works, it could provide a sort of distributed solution to help reduce the spread of fake news that doesn't require some kind of centralized definition of what's true or what's false that you then censor. Instead, it relies on people's own ability to tell what's accurate, if they bother to think about it, and their own desire to not share inaccurate content, if they're thinking about accuracy. It's also scalable, in that because each person is kind of doing it to themselves, you don't need an army of fact-checkers. Now, there's clearly a lot of room to optimize the size of the treatment effect; the thing that we did is just the first random thing that popped into
my head. So we're doing some work now with Jigsaw, Google's R&D unit, to try to figure out ways to optimize that treatment effect. Another thing that's cool about this is that it's something platforms could do themselves, and I think that would be really, really good; but in the absence of that, or let's say in the interim until we manage to convince them to do it, it's also something that could potentially be deployed using ads by any citizen group that felt like it, which could buy ads on Twitter or Facebook or wherever, targeted at people who they think are likely to be sharing misinformation, and deliver these accuracy reminders. So we're also working with some people from the Omidyar Network to develop and test delivering this intervention using targeted ads. Of course, the challenge with this is that it might work once, but pretty quickly people could start just ignoring the messages. So one thing that needs to get worked on is how you avoid banner blindness and keep people engaging with the content, but I think that's a tractable UX problem that just needs to get worked out. I'd say again, this is basically a proof of concept, and now we hope that the tech companies, or people who are good at that sort of thing, will get inspired by this and try to actually implement it. All right, so that's promising intervention number one, and it connects to promising intervention number two. When I proposed my little "help us inform our algorithms" pop-up, I said you could just throw away the responses and this would
still be helpful, because it would get people thinking about accuracy. But the other question is: should we throw away the responses, or might people's responses actually also be useful? That is, can you get lay people to make fact-checking, or quality assessment, scalable by crowdsourcing it rather than relying only on experts? This was something we originally got interested in because a couple of years ago Facebook said they were going to start doing exactly this: surveying users to determine which news sources were trustworthy, and then up-ranking content from those sources. The idea, if it works, is not like Reddit, where anyone can just click to up- or down-vote sources, because that is fairly easily gamed through coordinated attacks and things like that. Rather, you randomly sample people and say, hey, give us your opinion, what do you think of these outlets? Then you use the outputs from the crowdsourcing as inputs into the ranking algorithm: if it's something that people say is not trustworthy, then down-rank it and make it so people are less likely to see it. That also means it's not about labeling; you don't have to rely on people paying attention to the rating, since you just use it as the input into the ranking. Now, of course, this is censoring, right? We're talking about down-ranking the things that people say are not true, so that is something to be wrestled with. From my perspective, inherent in almost any of the solutions to social media problems is a trade-off between the amount of censoring that's going on and the amount of misinformation that's circulating, and so to me the question here is at least: does this help on the misinformation side? If it does, then you can have a conversation about the extent to which you think that's worth it on the censoring side. And if it worked, it would be scalable, because again you don't need to rely on experts; you can recruit large numbers of just regular old people to do it. At least it doesn't rely on Facebook deciding what to censor; it's letting the people decide what to censor. Okay, so this is the argument for why it's a good idea. And then, when Facebook said they were going to start doing this, everybody freaked out and said this is the worst idea I've ever heard, including me. Actually, the way this whole project started is that a journalist called me in January 2018 and said, hey, Facebook just said they're going to start doing this, what do you think
about it? And I said, well, it sounds like a pretty bad idea, because lay people's trust judgments are probably very influenced by partisanship, and people are probably going to say they trust the most partisan sites, not the sites that are most accurate; also, people are probably not very good at assessing the accuracy of content, which is why we have a problem in the first place; and also, people are probably not familiar with most of the outlets involved, and levels of media literacy are pretty low. So why should we expect people to be able to do a good job at this? I said all those things, and then I thought, hmm, now that I think about it, this is actually a very simple empirical question: why don't we just test it? So that night Gord and I put together a survey. We got our list of 60 news outlets, recruited a thousand people, and had them rate the outlets. What I'm going to show you here is a follow-up that we did that is better than the original one, although the results are essentially identical. We showed them that same set of 60 news outlets that I showed you before, the twenty mainstream, 20 hyper-partisan, and 20 fake news, and we asked: are you familiar with each of these outlets, and how much do you trust each of these outlets? Again we used Lucid, so these are Americans who were quota-sampled to be representative on age, gender, ethnicity, and geographic region. What I'm going to show you is one point per news outlet, showing the average trust among the Democrats on
the x-axis and the average trust among the Republicans on the y-axis. Up here is trusted by both, down here is distrusted by both, over here is trusted more by Democrats than Republicans, and up here is trusted more by Republicans than Democrats. Remember, the fear going into this, or at least one fear, is that people are going to trust the low-quality outlets that align with their ideology more than the high-quality outlets that don't. What we found, I guess to our surprise, is this pattern. I labeled all the mainstream ones and the hyper-partisan ones that more than a third of the people said they were familiar with. What you see is, first of all, that there are some clear partisan differences: the Republicans trust Fox way more than they trust everything else, and the Democrats trust literally every other mainstream outlet more than Republicans do. But to us, the thing that was really striking here is that these partisan differences are actually the second-order effect, and the first-order difference is that everybody trusts the mainstream outlets more than the hyper-partisan or fake outlets. That means, for example, that although the Republicans trust Fox more than CNN, the New York Times, and NBC, they trust CNN, the New York Times, and NBC more than they trust Breitbart and Infowars. Similarly, although the Democrats trust Fox News less than they trust more or less all of the other mainstream outlets, they trust Fox News more than they trust things like Daily Kos or Common Dreams, the left-leaning hyper-partisan sites. So what that means is that if you were to create a politically balanced layperson rating, or a representative layperson rating, every mainstream outlet would do better than every fake or hyper-partisan outlet. And because we also had those eight professional fact checkers rate the set of 60 sites, we can make a politically balanced layperson rating and ask how it correlates with the rating from the fact checkers. What we found was a correlation of 0.9, which is the largest correlation ever observed in anything I've ever done in my life. And the thing that's driving that really high correlation relates to another important point.
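As a sketch of how a politically balanced layperson rating could be computed and compared against the fact checkers, here is a Python illustration (the outlet names and all trust numbers are entirely hypothetical, invented for this example; this is not the study's data):

```python
# Sketch: average Democrat and Republican mean trust per outlet into a
# "politically balanced" rating, then correlate it with fact-checker scores.
def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# outlet -> (mean Democrat trust, mean Republican trust, fact-checker score)
ratings = {
    "mainstream_a":    (4.1, 3.0, 4.5),
    "mainstream_b":    (3.8, 3.4, 4.2),
    "hyperpartisan_l": (2.2, 1.4, 1.3),
    "hyperpartisan_r": (1.3, 2.4, 1.2),
    "fake_a":          (1.2, 1.3, 0.4),
}

# Balanced rating: unweighted average of the two partisan means per outlet.
balanced = [(d + r) / 2 for d, r, _ in ratings.values()]
fact = [f for _, _, f in ratings.values()]
r = pearson(balanced, fact)
```

Averaging the two partisan means cancels the mirror-image partisan disagreements, so the mainstream-versus-everything-else gap dominates and the correlation with the fact checkers comes out high.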
If what we're talking about is using these ratings to generate inputs to the ranking algorithm, then the only thing that matters is the relative rank. It doesn't matter if all of the trust levels are low or all the trust levels are high; you're just asking, relative to each other, what's the spread like and what's the ordering like. So although the fact checkers are using the whole scale, from trusting a lot to not trusting at all, and the lay people are only going from trusting somewhat to trusting barely, the ranking is pretty similar, because all of the fake and hyper-partisan sites get really low trust ratings from Republicans, Democrats, and fact checkers alike. There's much less agreement within the mainstream category: basically, people can't agree on what's good, but they can agree on what's bad. So this suggests that if your goal was to reduce circulation from misinformation sites, and you wanted to do it in a way that was politically defensible, so people couldn't complain that "you liberals or whoever are picking which sites to censor" (which is what a lot of the right-wing sites did after Facebook started doing this), you could say: look, we just surveyed the people, and the people say these sites are not good, so we down-rank them. This suggests that would be reasonably effective, and in particular it suggests that the partisan element in these trust judgments is a lot smaller than what I might have expected at the beginning. But the thing is, there is a problem here, which is that we also asked them: are you familiar with
each outlet or not? If you split the ratings based on whether people said they were unfamiliar or familiar (so it's the fraction of ratings that were "not at all," "barely," "somewhat," "a lot," or "entirely" trusting, split by familiarity), what you see is that for sites they were unfamiliar with, in purple, people overwhelmingly distrust them, whereas for sites they are familiar with, there's a whole range: they're making a judgment based on whatever they know about the site. What that means is that if you implement a system like this, it basically punishes unfamiliar sites, so if you are a niche site or a new site, you're going to get smashed by this. That's not great. You might think one approach to that is, well, just restrict to people who said they're familiar with the outlet, which is actually what Facebook said they were going to do when they were implementing this, although who knows what they actually did, because those two things are not necessarily related. They said: we're only going to look at the ratings from people who say they're familiar. So now I'm going to show you the same plot, one dot per outlet, Democrats versus Republicans, but only using ratings from people who said they were familiar with the outlets, and it's terrible. This might be surprising, because you might think these are the more informed people. But if you think about it, there's a selection problem, because who is it that selects into being familiar with hyper-partisan sites? It's people that like hyper-partisan content.
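That selection problem shows up even in a toy simulation (my own illustration, with entirely fabricated proportions and trust levels): if the people familiar with a hyper-partisan site are mostly its fans, restricting to familiar raters inflates the site's score.

```python
# Toy simulation of the familiarity selection problem: fans of a
# hyper-partisan site are the ones familiar with it, so the
# "familiar raters only" rule biases its trust score upward.
import random
from statistics import mean

random.seed(2)

population = []
for _ in range(10_000):
    fan = random.random() < 0.10                 # 10% like this kind of site
    familiar = fan or random.random() < 0.05     # fans know it; few others do
    trust = random.gauss(5.5 if fan else 1.5, 0.5)  # 1-7-style trust rating
    population.append((familiar, trust))

overall = mean(t for _, t in population)                 # everyone's rating
familiar_only = mean(t for fam, t in population if fam)  # restricted rule
```

Under these made-up parameters, the restricted average comes out several points higher than the population average, which is the direction of bias described in the talk.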
And who selects into being familiar with fake news sites? People that like fake news sites. So that's not a good solution to this problem. Here's where this led us. One limitation is what I said: familiarity is a prerequisite for trust, so the approach punishes niche outlets. Another issue, as we saw in the earlier study, is that the site is a coarse level of graining, because low-quality sites produce a whole range of content, so if you're doing this sort of down-ranking essentially at the level of the site, you're going to miss a lot of things. So the last thing that we wanted to know is: what if we have people try to rate the actual articles themselves, rather than the sources? That seems like a much harder problem, because you need a lot more specific information; you're not aggregating up to the level of the source, you have to actually look at specific headlines. What we did is this: we had some people at Facebook who were also interested in this crowdsourcing thing, and they gave us a set of articles that a sort of state-of-the-art, Facebook-internal "is this misinformation or not" algorithm had flagged as sketchy and worth investigating, so, the kind of thing a platform would want to get rated. First we hired three fact checkers to fact-check each of these 209 articles: to read the whole article and do detailed research. Then we also got a bunch of MTurkers to rate them, but instead of having them read the whole article, we just showed them the headline and the lede, so basically what you would get out of a Facebook post. That's (a) because this is about scalability, and asking people to do research takes a long time, and (b) because it's actually not totally clear to me that research is going to help in a lot of these cases. We wanted to know: if you do the most scalable thing, just have people look at the headline and the lede, how well can they do? And rather than just asking "is this true or false," what we did for both the fact checkers and the MTurkers is have them answer seven questions, each on a one-to-seven scale: to what extent does this article describe an event that actually happened, to what extent is it accurate, is it reliable, is it trustworthy, is it true, is it written in an unbiased way, and is it objective? The idea is basically that you ask the same question seven ways and average the results, and that gives you a less noisy measure than if you just picked the best-performing one.
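The seven-item averaging just described can be sketched in a few lines of Python (question labels paraphrased from the talk; the example ratings are made up):

```python
# Sketch of the seven-item accuracy composite: ask the "same" question
# seven ways on a 1-7 scale and average, for a less noisy measure.
from statistics import mean

QUESTIONS = ["happened", "accurate", "reliable", "trustworthy",
             "true", "unbiased", "objective"]

def composite(responses):
    """responses maps each question to a 1-7 rating; returns the average."""
    assert set(responses) == set(QUESTIONS), "need all seven items"
    return mean(responses.values())

rating = composite({
    "happened": 6, "accurate": 5, "reliable": 5, "trustworthy": 6,
    "true": 5, "unbiased": 4, "objective": 4,
})
```

Because each item carries its own response noise, the mean of seven near-synonymous items has a smaller error variance than any single item on its own.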
Something that was less obvious to me at the beginning of this project, but that I now keep coming back to, is that whether something is accurate or not is not a binary (it's true or it's false, full stop), and it's not even a trinary (true, misleading, or false); it exists on a continuum. So when you're evaluating a story, it's important to think about it in that continuous way. So we get all these ratings, and then what we
want to know is: what's the correlation between the average layperson rating from just reading the headline and the average fact-checker rating from going and researching the whole article? And because we want to see how scalable this is (say you get a really high correlation but you need 10,000 lay people to rate every story; that's not going to work), the question is how the correlation between the lay people and the fact checkers varies with the number of lay people you have rating each headline, which we estimate using bootstrapping simulations. We're going to look at it for a politically balanced layperson rating, and then also, because it's interesting, at what happens if you split up and look at Democrats and Republicans separately. What you see is that if you have only two people rating each headline, there is very little correlation between the lay people and the fact checkers, but as the number of lay people increases, pretty quickly you get an increase in the correlation and then a leveling off.
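The shape of that curve can be reproduced with a toy simulation in the spirit of those bootstrapping estimates (all numbers here are fabricated; the noise level and scales are arbitrary assumptions, not fitted to the study):

```python
# Toy simulation: how the crowd / fact-checker correlation grows with the
# number of lay raters per headline, when each rater is individually noisy.
import random

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

N_ARTICLES = 60
RATER_NOISE = 3.0  # assumed sd of one lay rating around the "true" score

# Stand-in for the fact-checker score of each article (1-7 scale).
truth = [random.uniform(1, 7) for _ in range(N_ARTICLES)]

def mean_correlation(k, reps=200):
    """Average crowd/fact-checker correlation with k lay raters per article."""
    total = 0.0
    for _ in range(reps):
        crowd = [t + sum(random.gauss(0, RATER_NOISE) for _ in range(k)) / k
                 for t in truth]
        total += pearson(crowd, truth)
    return total / reps

r2 = mean_correlation(2)    # only two raters per headline: weak correlation
r20 = mean_correlation(20)  # twenty raters: much stronger, then it plateaus
```

Averaging k raters shrinks the noise variance by a factor of k, so the correlation rises steeply at first and then levels off, just like the curve on the slide.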
You also see, as you might expect based on everything I showed you beforehand and what you know about the current state of the world, that the correlation between the Democrats and the fact checkers was substantially higher than the correlation between the Republicans and the fact checkers. But be that as it may, this politically balanced rating performs pretty much as well as the Democrats alone, and it gives you a correlation in the best case of around 0.65. So now the question is: is that good? Is that good enough for us to say, great, this seems useful? Our initial feeling was, oh, that's not great. For a psychology experiment a correlation of 0.65 is good, but if you're trying to make real assessments about what's good content or not, it's not very good. But then we thought: wait, that's sort of the numerator, but what's the right denominator? What should we be comparing it to? That was part of what motivated us to get three fact-checkers rating each article rather than just one: we wanted to know the level of inter-fact-checker agreement. If you do this same analysis and ask what the correlation is between the ratings of the three fact checkers, it's 0.62. So that means that at about twenty lay people just reading the headline and the lede, not even researching the article, they're as correlated with the fact checkers as the fact checkers are correlated amongst themselves after going out and doing research. And part of this is because there are legitimate disagreements amongst the fact checkers. So for our three fact checkers: this is the plot I just showed you, of how the lay people relate to the average of the fact checkers, but now splitting it out for the three separate fact checkers. For fact checkers number one and number two, it looks pretty similar to the average, but for fact checker three, the Republicans are actually more correlated with fact checker three than
the Democrats are. We did a bunch of follow-up, giving the fact checkers test cases and asking, what do you think about this? There were something on the order of a fifth of the cases where this fact checker disagreed with the other ones, and we basically said, hey, what's the deal? He gave these really long responses, saying, I stand by my ratings, and the logic that he gave was reasonable. My feeling about it is that he was being sort of pedantic, extremely nitpicky, but not wrong, in the sense that his reasoning was clearly argued and defensible. It wasn't like he was just saying, well, I like this, so it's true. So I think the point is that there are different ways, even for people with a lot of experience at this, of deciding what's true and what's not. And that's something to really keep in mind when thinking about all of this: on the one hand, there's not a clear-cut "this is true and this isn't," but on the other hand, there is real signal. Even across these fact checkers, there was a reasonable amount of agreement that some stuff really is not good. So my take-home from all of this is that accuracy writ large is a construct that exists, it's a continuous construct, and by getting a reasonable, not that large, number of lay people to make estimates, you can come up with a pretty good rating of the quality of the content.
I guess the very last thing I'll say is that you can also use these article-level ratings as a way to create source-level ratings. If you're worried about doing ratings at the level of the source, because of this familiarity issue where people say they don't trust things they're not familiar with, you can create source-level ratings by just averaging the ratings of all of the headlines that people rate from those sources. Part of what that would do, if platforms were to implement it, is incentivize outlets to create headlines that seem plausible on reading, which I think would be a good thing: essentially, it would disincentivize clickbait and sensationalism. I think if people wrote plausible fake news, no one would ever share it, and a lot of what makes misinformation spread is that it's crazy and sensational-seeming. So if you can create incentives to make headlines more reasonable, I think that would lead to good outcomes. Okay, so just to summarize. I think that relying on professional fact-checkers to fact-check things is not a long-run solution. I think our studies suggest that lay people actually are surprisingly good at assessing the quality of information if they bother to pay attention. One feature of that is that if you make accuracy salient and get them to think about it, they get more discerning in their sharing; and crowdsourcing is surprisingly effective at identifying both low-quality sites and low-quality articles. What these things suggest is scalable solutions that don't rely on a centralized authority deciding what to censor: the accuracy-salience thing is basically getting people to censor themselves.
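The source-level aggregation described a moment ago, averaging crowd ratings of a source's individual headlines into a source score, can be sketched as (outlet names and numbers are hypothetical):

```python
# Sketch: derive source-level scores by averaging the crowd ratings of
# each source's individual headlines.
from collections import defaultdict
from statistics import mean

# (source, headline_rating) pairs as they might come out of the crowd task.
article_ratings = [
    ("outlet_a", 5.5), ("outlet_a", 6.0), ("outlet_a", 5.0),
    ("outlet_b", 2.0), ("outlet_b", 1.5),
]

by_source = defaultdict(list)
for source, rating in article_ratings:
    by_source[source].append(rating)

source_scores = {s: mean(rs) for s, rs in by_source.items()}
```

Because the score is built from headlines people actually rated, a source is judged by its output rather than by name recognition, which sidesteps the familiarity problem.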
And the crowdsourcing is letting the crowd make some decisions. I should also say that in thinking about misinformation, there's no silver bullet; when I say "solution," what I mean is a thing that people can have in their toolkit as part of an integrated solution. A major caveat that I should give to literally everything I said today is that we tested it all in the US. We've done the accuracy-salience survey experiments in Italy, France, Canada, and the UK, which are only very slightly not the US, but I think checking how these things work, particularly in places that don't have traditions of free press or much media literacy or basic education, is super important. Also part of the cross-cultural issue, in the crowdsourcing context, is that in the US, at least when you're thinking about politics, the two dominant groups are roughly equally powerful; you need to think about how you get these mechanisms to work in places where you don't want a majority to be able to declare that lies about a minority are true. There are various approaches to that being thought about, but I'm just flagging it as another major issue. Okay, so thanks. As I said, all of this is with Gord, and these are the great people in or associated with my research group who worked on various of the other projects, and thanks to the people who gave us money, and thanks to all of you. The floor is wide open for questions. [Audience question] Have you encountered this organization called NewsGuard,
and what they're doing? I'm trying to see where that fits in the continuum of what you're talking about, since they are not trying to check the facts of the story but rather rate the reliability of the sources of the stories. Yeah, it's interesting. I think of NewsGuard as generally in the source-salience or source-information category: it's providing information about how much you should trust a source. I think the same story applies there, where that could have an impact on the extent to which people believe things when that information is mismatched with what they would get just from reading the headline by itself. Our analysis suggests that for most of the content that's doing well out there, there's not actually that much of a mismatch, and so I feel like I'm less enthusiastic about that than I am about other things, except insofar as providing that accuracy information and those sorts of ratings might just get people to think about the concept of accuracy more generally. That could be good, but it's not really, I think, why they're doing it. [Audience question] On the experiment with the DMs and the accuracy prompt, have you noticed change from the first day to the third day? How big is the recency effect? We don't really have enough statistical power to say much that's credible past the first day, because the disadvantage of the stepped-wedge-like design is that the further out you go, the less of a control condition you have to compare
it to. So basically, in these experiments that we ran, we're not powered to answer that question, but it's clearly a super important thing to know when trying to actually deploy this. [Audience question] I had a question about the experiment you did with the Breitbart followers. I think you showed a scale of plus 0.04 to minus 0.04 in how likely they would be to change what they would share, which seems like a very small range. Would that be statistically significant at this level, and what if you were to scale it up? I'm asking how to contextualize what these numbers mean. So first of all, there are two different things: there's statistical significance and there's practical significance, right? If you have a really large sample, you can have an extremely precisely estimated, very statistically significant, but essentially meaningless effect. The plot I showed is the statistical significance part: it's definitely statistically significant. So then the question is whether it's practically significant. The way I think about interpreting these numbers is to think about the increase in the average quality of what's being shared. We get an effect that's around a 5% increase, and there's this question of how important that is. When you think about the effect size estimates, there are important things to keep in mind. The first is that this is likely a dramatic underestimate of
the true effect size, for a few different reasons. One thing is that the minute we send the message, we start counting everything for the next 24 hours as treated, but we don't know when they see the message; we don't even know if they see the message at all. Presumably a good chunk of our subjects never even saw the message, but they're all included as treated. Another thing is that, as I mentioned, the design is just the first random thing that I thought of, and I'm not much of a graphic designer; my graphic design skills come from making albums for my punk band back in the day. So the hope is that by doing better design, you can make more effective stimuli that create a bigger effect. And the third point is the thing about network effects: if the treatment gets the person you send it to to reduce their sharing by a bit, it also means that all of their followers are exposed to a bit less.
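That follower-cascade intuition can be sketched with a crude geometric model (the parameters here are purely illustrative assumptions, not estimates from the data):

```python
# Crude geometric model of network amplification: the treated user shares a
# bit less, so followers see a bit less and re-share a bit less, and so on.
def total_reduction(direct_delta, reshare_rate, depth):
    """Direct reduction plus downstream re-share reductions over `depth` hops."""
    return sum(direct_delta * reshare_rate ** d for d in range(depth + 1))

direct = 1.0    # reduction in the treated user's own low-quality shares
reshare = 0.3   # assumed fraction of exposures that become re-shares
amplified = total_reduction(direct, reshare, depth=5)
```

Under any re-share rate above zero, the total reduction exceeds the direct one, which is why per-user effect sizes likely understate the network-level effect.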
also means their followers are a bit less likely to share you know the things they were supposed To and their followers and so on so there are potentially way there were these translate into bigger effects so like I said I don't think this one thing is going to solve the entire problem but it seems like it looked like the Republicans have a real problem with not trusting anything yes and that no but that's the worth of a big follow-up because you have a big chunk of the country that doesn't trust anything that they're seeing and
that would seem to Have some correlation to problems with sharing false information if you don't believe there's any source that's reliable but that's not due to what comes with one one side coming on that which is we actually fairly consistently find over the studies that we run where we have people rate the accuracy of specific headlines which we've run like a million eighty studies at this point we don't find that the Republicans are more likely to believe false stories but We do find that Republicans are pretty consistently less likely to believe true stories which is
probably a bigger problem, actually, and an interesting story in there. But the piece I want more clarity on is how you see the relationship between the perceived accuracy of a story and its actual factuality. Perceived accuracy is just my prediction, based on my knowledge of the world up to now, of what could plausibly be true. But you could have a Markov model that just took in the news and kept generating plausible news, none of which is true; it could all be possible, and that's what you're asking people to rate: is this story plausibly true, not did you actually go check the facts. The important part, for a long history of humans looking at sources, is that interesting news is exactly the stuff you wouldn't think is plausible, because it's a novel event in some way. Two weeks ago, if you'd said the Speaker of the House is going to come to the State of the Union address, rip the speech up, and basically throw it at the president, is that plausible? There's good reason to say that's a very implausible story, yet it happens to be factually true. So when something is a mismatch, that would seem to be exactly when you need the source, and I'm having a little trouble understanding how you're equating plausibility and factuality.

Right, so I think there are a few different things in there. One is that we are having people rate essentially plausibility; I agree, and that's the central
DV: most of what we're measuring, other than in the sharing studies, is how plausible people think a thing is. And one of the results that comes out is that plausibility is reasonably related to how true things actually turn out to be, so plausibility is a reasonably good proxy. Then in terms of the surprising things, I think there are two places in this diagram where you get some kind of mismatch. My sense is that in the misinformation and disinformation discussion, the focus is mostly on stuff that is plausible but from a bad source: the things people would fall for, but if they get reminded it's from a bad source, they say, okay, maybe I shouldn't fall for that. That would be down here. But the issue you're raising, with the Speaker ripping up the, I've conveniently blocked it out, is down here, where it's an implausible headline but from a credible source. For that stuff the source is useful, but there's not much of it. And I guess the other comment I would make is that, although that is also interesting and important, it's not misinformation; I think it's in the same category as your first observation, of people not believing things that are true, which is also an important thing to be talking about. But my sense is, well,
at least most of what we've been focusing on is people believing things that aren't true, which is the converse. It's interesting to think about the extent to which those are different problems.

I'm curious about something: everything in this study seems to be a formalized source, in the sense that it's presenting itself as news in some way. But one of the other disturbing trends is kind of below the surface, like forums drawing connections between things that may not necessarily be there, which can then reach some of these other sources. So I'm curious whether you've studied any of the non-formal news sources online.

We haven't, and I think that's a super interesting and important direction. Something else in that vein is memes, you know, and just tweets, where some guy says something on Twitter and then that's news. Basically, we haven't gotten there yet, but I think it's super interesting and important.

Can you give an overview of internet regulation? No, sorry, that's not my area of expertise;
basically my sense is nothing, maybe. People are talking about it, and there are legal people here who know much more about those things than I do, but it's a fairly new area, so my sense is there's not much regulation yet, and the tech companies, I don't know whether they're scared of it or want it, depending.

I worked on a project around deepfakes, and wondered whether any of your results would translate to, well,
manipulated videos, deepfakes, and what you thought good techniques for countering the spread of that kind of video content could be.

We actually have a project on deepfakes, which originated from the question of how you fight deepfakes. But before we started thinking about that, the more fundamental question was how much we should actually be worried about deepfakes. There are a lot of different dimensions to that, but the one we looked at is this: I think the deepfake panic, or even the cheap-fake panic, is premised on the idea that video is a lot more compelling and persuasive than text. Although there is a bunch of work in the communications literature comparing modalities, it's kind of all over the place, and there isn't much looking at political content in particular. So with Adam Berinsky and a couple of graduate students, we have a paper that we just submitted today, and we'll post it as a working paper, that basically takes some videos, and we either play the video
or have people read text, randomizing across stories whether you see each one as video or text, and then ask how much you believe the thing described actually happened, plus basically some measure of how persuaded you are by it. What we find is that across both political and non-political content, people are very slightly more likely to believe that things actually happened when they're shown video versus text, and because we have a big sample size we were able to estimate it precisely: it's a tiny effect. Then when you look at the actual persuasion part, there is a tiny positive persuasive effect of video over text for commercial stuff, like stain removers, or Gwyneth Paltrow saying you should use oil of oregano when you get sick, or whatever, but there was zero persuasive advantage of video over text for any of the political things we looked at. We have to see how this generalizes to a wider range of content and all that, but I think the basic message is that maybe people shouldn't be as freaked out about deepfakes, relative to other kinds of misinformation, as they
are. So that's a long answer, but also, any of this stuff should probably work for deepfakes, because basically my feeling is that it's just another form of misinformation.

If there were some sort of public-facing accuracy score for users on Facebook, where they got feedback on how accurate the things they were sharing were, whether that was sourced through crowdsourcing or professional fact-checkers scoring some subsample, do you think people would care and respond to that? Or do you think they would basically just discount any negative feedback they got, based on disagreement with the fact-checker? Do you have any data, or any sense of whether that would be effective?

Yeah, I don't know. Most times I give this talk, the idea of some kind of badging, essentially a shaming reputation system, comes up. I'm overall somewhat skeptical of it, both because I'm worried about the reactance kind of things you're talking about, where people, particularly problem people, will wear a bad reputation as a badge of honor, but also just because I think people will hate it, and that will make it kind of infeasible. So we haven't really looked into that, but we are starting to discuss in general the idea of giving people feedback, which could be public or private, and wondering what effective feedback looks like in terms of the things you're saying: is it from fact-checkers, from the crowd, from people like you, and so on. It was not
the first place we went, but it is something we're looking into now.

We have a question about stratifying the content you're showing people. Looking at misinformation in health, we talk a lot about entrenched issues versus the latest misinformation or conspiracy theory of the day, which is a lot easier to debunk than people's beliefs about vaccines. So have you experimented a bit with what you're putting in front of people, or is that a possibility?

Yeah, it's a great question. We have mostly focused on things in the latter category: not another piece of evidence bearing on a deeply entrenched belief, but rather new, novel evidence. And I agree that the problem is a lot harder for deeply entrenched issues, not just because of motivated reasoning, that is, people wanting to defend their prior commitments in some motivated way, but also on purely rational grounds. If you're trying to have maximally accurate beliefs, and there's some uncertainty about the reliability of information sources, which is always true, it's actually sort of Bayesian-optimal to ask how well a claim aligns with your priors, and if it's totally unaligned with your priors, to conclude that the source is probably crap rather than that everything you know about the world is wrong. So that is also, I think, in the category of things we haven't started on yet but are very interested in, like health misinformation in particular.

A methodological sidebar: if you work on Twitter, did that violate the Twitter Terms of
Service, or did you not think about that?

We did actually think about it, and it was a major methodological obstacle; in fact, the thing that took essentially all of the time in doing that experiment was figuring out how to avoid getting blocked by Twitter. When we were blocked, it was not for violating the Terms of Service but just for them not liking us: there's a difference between violating the Terms of Service and them saying, oh, you look spammy. So no, I don't think it violates the Terms of Service.
We're very interested in the public's trust in science in particular, and I was wondering whether you made a distinction between specifically political stories versus non-political stories. In particular, if there is this lower belief in trusted sources, is that mostly for political news, or does it go across the board? In other words, does this also affect, for instance, stories that have scientific backing?

Yeah, it's a good question. We've really focused on political content here, so that's another area, like health misinformation, where science misinformation and information are things we think are really interesting and that we haven't looked at, except insofar as they intersect with political news. The one science-related thing I can say is that in our experiment where we have people rate the accuracy of one headline at the beginning, which then makes them more discerning in their sharing of political content, we tried a bunch of different stories at the beginning, and the one we found most effective was actually a science one: scientists discover some new subatomic particle, or some new galaxy, I forget what it was, something along those lines. For both Republicans and Democrats, that was particularly effective at getting them to think about accuracy, which is a different question. And Dan Kahan has done work on this, where the thing that seems to most protect against these false beliefs is curiosity about science,
more so than science knowledge, which is not really an answer to your question, but it's related.

In the surveys, are you measuring more than just political orientation, for example analytical skills or something like that, in the hope that there might be something deeper at the root of the problem?

Yes, totally. That's actually where we started with all of this, before we got into evaluating interventions: looking at analytical skills. The basic result is that more analytical thinking means less belief in fake news and less belief in hyper-partisan news, regardless of whether it aligns with your ideology or not. That actually ran counter to a lot of the narratives, because a lot of the narrative out there is that your reasoning abilities are held captive by your partisanship, so if you think more, you're just going to talk yourself into believing all the crazy stuff. We totally didn't find that. People who engage in more thinking are less likely to believe false stuff, and, experimentally, if you distract people and make them do less thinking, they're more likely to believe false claims, but not more likely to believe true claims,
and if you then give them a chance to think about it, thinking actually causes reduced belief in false claims. So critical thinking is good.

Toward the end, as you were showing some of the newer work, you mentioned a systematic asymmetry between Democrats and Republicans in terms of trust patterns. Do you have the data to go in, particularly with the accuracy priming, and see what the displacement is? Because if you're moving people from accurate stories on Breitbart to accurate stories on Fox News, maybe it doesn't matter; but if you're moving them from inaccurate stories on Breitbart to inaccurate stories on Fox News, while prompting them to raise the accuracy assessments of their network, you could actually have a bad outcome in terms of network diffusion. You can probably assess, at least with the tools you've been describing, what the displacement effect of the priming is.

It's a great question, and the problem, basically, is that, as you know, David's group and the other groups in the space
have, like everything we're doing, done all the classification at the level of the source rather than the level of the article, because there are just too many articles. In the Twitter experiment, there are 1.1 million tweets in the dataset, and there are no good classifiers, so you can't just plug in an is-this-true-or-not classifier and have it sort everything out. I would really like to know the answer to that question, and I just don't know a feasible way of getting is-this-a-good-claim-or-not ratings on a million tweets. But if you have ideas, let me know.
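[Editor's illustration] The source-level classification approach described at the end can be sketched in a few lines: rather than fact-checking each of a million tweets, every tweet inherits a quality score from the news domain it links to. The domain ratings, example URLs, and helper names below are purely hypothetical illustrations, not the actual data or code from these studies.

```python
# Source-level (rather than article-level) scoring of shared links.
from urllib.parse import urlparse

# Hypothetical source-level quality ratings, e.g. from professional
# fact-checkers rating whole outlets, on a 0-1 scale.
SOURCE_QUALITY = {
    "reliable-news.example": 0.9,
    "hyperpartisan.example": 0.4,
    "fakenews.example": 0.1,
}

def domain_of(url):
    """Extract the domain from a tweet's embedded link."""
    return urlparse(url).netloc.lower().removeprefix("www.")

def tweet_quality(url):
    """Score a tweet by its source; None if the domain is unrated."""
    return SOURCE_QUALITY.get(domain_of(url))

def mean_shared_quality(urls):
    """Average source quality of a user's shared links, skipping unrated ones."""
    scores = [q for q in (tweet_quality(u) for u in urls) if q is not None]
    return sum(scores) / len(scores) if scores else float("nan")

shared = [
    "https://www.reliable-news.example/story1",
    "https://fakenews.example/shocking",
    "https://hyperpartisan.example/outrage",
]
print(round(mean_shared_quality(shared), 3))  # → 0.467
```

The trade-off this makes explicit is the one raised in the question: a high-quality source can publish an inaccurate article and vice versa, so per-user averages of source scores are only a proxy for the veracity of what was actually shared.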