Let me start by asking you a question: do you remember the last time you read or watched something online that you really disagreed with? It could be a news story, a comment on Facebook, or a YouTube clip. It wasn't a very nice feeling, was it? Especially if you felt that, although you disagreed with someone, they actually made a good point. I, at least, think it's an awful feeling. But although it's uncomfortable, we all know that it might be wise to expose yourself to disagreement. TED Talks and political discussions are all about learning new ideas and perspectives, and not only the ideas we agree with. In many well-functioning democracies around the world, people are encouraged to step out of their comfort zones and listen to those they disagree with, to get a more complete picture and perhaps learn that they are sometimes wrong. And here in Norway, it's not only wise to expose yourself to disagreement; it is also a democratic ideal anchored in the constitution. But in my talk today, I ask the question: is new technology making this ideal of exposure to disagreement harder or easier to accomplish?
To answer that question, I want us to go back in time. In 2011, ten years ago, the writer and journalist Eli Pariser stood on a stage similar to this one and gave a TED Talk that now has almost six million views. He talked about how algorithms on the web tailor information to each and every one of us and filter out ideas and perspectives we disagree with. For instance, your Facebook news feed will be different from my Facebook news feed, not only because you have different contacts on Facebook, but also because Facebook's recommender algorithms seek to show you content that is especially relevant to you. Pariser warned that such algorithms have a capacity to learn how we behave online. And we know through decades of research that people tend to seek out information they agree with rather than information they disagree with, and this also shapes our online behavior. I mean, think about it: how often do you click "like" or share a Facebook post that you disagree with? And because these algorithms are often programmed to learn your preferences and then show you more relevant content, they could also learn our preference for content we agree with. If so, these algorithms could also filter out opinions and ideas we disagree with, because they assume that like-minded content is more relevant and more engaging. Then we are heading into what Pariser coined our own "filter bubble": a world online, created by recommender algorithms, where you rarely meet or see content you disagree with, because the algorithm narrows what content you consume.
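To make that feedback loop concrete, here is a minimal sketch in Python of how an engagement-trained recommender could drift toward like-minded content. Everything in it is an assumption for illustration: the stance scores, the single-number user model, and the update rule are invented here, and are not how Facebook's actual system works.

```python
# A minimal, illustrative feedback loop: an engagement-trained recommender
# drifting toward like-minded content. All numbers and the model are assumed.

articles = [("pro op-ed", 0.9), ("neutral report", 0.0), ("contra op-ed", -0.9)]

TRUE_OPINION = 0.8   # the reader's actual view on some issue (assumed)
LEARN_RATE = 0.5     # how strongly one engagement shifts the model (assumed)
user_pref = 0.0      # what the algorithm has learned so far; starts neutral

def relevance(stance: float) -> float:
    """Predicted relevance: higher for stances near the learned preference."""
    return 1.0 - abs(stance - user_pref)

for round_no in range(4):
    # Show the two articles the model predicts to be most relevant.
    feed = sorted(articles, key=lambda a: relevance(a[1]), reverse=True)[:2]
    # People tend to engage with what they agree with, so the click goes
    # to the shown article closest to the reader's true opinion.
    clicked = min(feed, key=lambda a: abs(a[1] - TRUE_OPINION))
    user_pref += LEARN_RATE * (clicked[1] - user_pref)
    print(f"round {round_no}: feed={[t for t, _ in feed]}, "
          f"learned preference={user_pref:.2f}")
# After a few rounds the contra op-ed never makes the feed: a tiny filter bubble.
```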
But during the last ten years we have learned, through tons of great research, a lot about the negative effects of such algorithms, and the findings from these studies may come as a surprise to many of you. In fact, it seems that the problem of filter bubbles is quite small. Most of us still see a lot of stuff in our social media feeds that we disagree with. I should also mention that there are some studies that show evidence of a filter bubble, but the general picture is that if there really is something like online filter bubbles, only a few people live in them. And importantly, much of this isn't only about the algorithms, but also about our own behavior and who we have as contacts on social media. It seems that only a small part can be blamed on the algorithms, at least directly. For instance, a large study of Facebook in the US showed that, on average, American partisans were about six percent less likely to see content they disagree with due to Facebook's algorithm. But although we know a lot about the extent of the problem, much of the algorithmic technology is still a black box. We can study the effects of different algorithms, such as those on Facebook, but we don't know how those algorithms work; we only get to peek in from the outside. And this also means that although we know that algorithms don't tend to filter out information we disagree with, we don't know why. And without addressing this question of why, we don't know whether these algorithms are simply bad at doing their jobs, or whether most algorithms are designed to both provide people with relevant content and show people information they disagree with. This means that we lack the knowledge to make informed choices about how algorithms should be designed, if the goal is to prevent the filter bubbles that Pariser warned about.
And this is where my team and I come in. At the MediaFutures research centre at the University of Bergen, we work together with the media industry to find out how we can develop responsible media technology for the future. In a brand-new project called the NewsRec project, we will address the why question by studying the conditions under which algorithms shape people's exposure to disagreement. To do so, my team and I will need full control of the design of the algorithms we study, and because it is virtually impossible to gain full access to tech firms' algorithms, we will design our own. We are now in the process of developing the first recommender algorithm that is equipped with factors that should increase or decrease people's exposure to disagreement. And to ensure that our results are valid not only in the lab but also in the real world, we will collaborate with the news industry and test our algorithm on real-world news sites, in a way that lets us as researchers study the effects of the algorithm.
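As a rough, hypothetical illustration of what such a "factor" could look like, here is a sketch of a re-ranker with a single tunable weight: negative values downplay content the reader disagrees with (the bubble direction), positive values highlight it. The scoring function and the stance labels are my own assumptions for illustration, not the project's actual design.

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    topical_relevance: float  # 0..1, how well it matches the reader's interests
    stance: float             # -1..1, position on a contested issue (assumed label)

def rank(articles, user_stance, disagreement_weight):
    """Re-rank a feed.

    disagreement_weight < 0: downplay content the reader disagrees with
    (the filter-bubble direction); > 0: highlight it (the diversity
    direction); = 0: pure topical relevance.
    """
    def score(a: Article) -> float:
        disagreement = abs(a.stance - user_stance) / 2.0  # 0 = agrees, 1 = opposes
        return a.topical_relevance + disagreement_weight * disagreement
    return sorted(articles, key=score, reverse=True)

feed = [
    Article("Column you'd agree with", 0.7, +0.8),
    Article("Straight news report", 0.8, 0.0),
    Article("Column you'd disagree with", 0.7, -0.8),
]

for w in (-0.5, 0.0, +0.5):
    top = rank(feed, user_stance=+0.8, disagreement_weight=w)[0]
    print(f"weight={w:+.1f}: top story -> {top.title}")
```

The point of the single parameter is that the same pipeline can push in either direction; which way it pushes is a design choice, not a property of the technology.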
And although the results of this work lie in the future, we have already started doing studies on how such algorithms could work. This spring, we worked together with a leading Norwegian news company and ran a survey with a representative sample of the Norwegian adult population. Inside this survey, people could browse a news site that looked similar to a well-known Norwegian news site, and, importantly, where my team and I had full control of what was going on. We found that if people use a news site that is programmed to highlight content they agree with and downplay content they disagree with, then people are less likely to expose themselves to disagreement. In other words, we can see signs of the filter bubbles that Pariser warned about ten years ago. But if we instead program the algorithm to do the opposite, to highlight content they disagree with and downplay content they agree with, then people are more likely to expose themselves to disagreement, and there is no sign of a filter bubble. This means that people are more or less likely to start on the path towards the filter bubble depending on what the algorithm is designed to do.
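One plausible way to quantify the outcome such a study looks at, sketched under assumed stance labels on a -1 to 1 scale: measure the share of a reader's clicks that cross their own position. The threshold and the data here are invented for illustration.

```python
def disagreement_share(clicked_stances, user_stance, threshold=1.0):
    """Fraction of clicked articles whose stance is at least `threshold`
    away from the reader's own stance (both on a -1..1 scale).
    A falling share over time is one possible sign of a forming filter bubble.
    """
    if not clicked_stances:
        return 0.0
    crossed = sum(1 for s in clicked_stances if abs(s - user_stance) >= threshold)
    return crossed / len(clicked_stances)

# Illustrative reader with stance +0.8 who clicked five articles:
print(disagreement_share([0.9, 0.7, 0.0, -0.6, -0.9], user_stance=0.8))  # 0.4
```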
This is important, because after Pariser's TED Talk, many tech firms started taking the issue of filter bubbles seriously. In 2018, Twitter CEO Jack Dorsey was quoted saying: "I think Twitter does contribute to filter bubbles, and I think that's wrong of us. We need to fix it." And what my team and I have started to show in our research is that this is possible: exposure to disagreement can either be amplified or narrowed depending on the choices that tech firms make when they design their algorithms. Over the coming four years, my team and I will use this knowledge to further investigate how tech firms and the media industry can design algorithms that give us relevant and engaging content without also narrowing our exposure to disagreement. My hope is that the work we will carry out here in Bergen can prepare the ground for a more nuanced discussion of the promise and perils of recommender algorithms. Because the negative or positive effects of algorithms aren't really about the algorithmic technology itself; they are about how algorithms are designed and what they are designed to do. Thank you.

[Applause]