Too Long; Didn't Read, brought to you by The Alan Turing Institute, the national institute for data science and AI.

Welcome to Too Long; Didn't Read: AI news, developments and research delivered directly to your ear holes from the experts and me. I'm Jonah, a content producer here at the Turing. And I'm Smera, a researcher in data justice and global ethical futures. Smera, in season one you were the resident expert, brilliantly answering questions on everything AI, from the ethics of AI labour to the history of the chip wars, even briefly stopping to advise Santa on a potential AI workflow. But you have been given, I'm going to say, a promotion: you're my co-presenter now. Yes, that's right. We have a slightly new format this season. This time you and I will be discussing the AI news, but we'll also be seeking out various expert voices from a wide range of AI and data science disciplines, ultimately to create an even more comprehensive podcast. Exciting stuff. On this episode of TL;DR we will be talking about misinformation, not a rising star in an academic beauty pageant but a very serious threat to democracy. We'll also
see if the age of dynamic, real-time robotics is actually upon us and what it means for your job, and we'll look beyond the headlines when talking about generative AI and sex. That's some funky music on the guitar. 2024 is the year of elections: over 80 countries and half the global population will be voting this year. Since there's no collective noun for a group of elections, let's go with "election-tastic". Yeah, taking on elections is a huge endeavour. Take India, seen as the world's largest democracy: elections began in late April and will go on
till June, and we're talking a massive population of nearly a billion people. Yeah, and as if sorting through the candidates' campaign promises wasn't tricky enough before, we now have good old AI to consider, and how it can help people spread falsehoods. Yeah, and we discussed some of this in the first episode of our first season, remember Jonah, way back when. But essentially we face three strands of information manipulation. First we have disinformation, the big bad boy, wherein information is falsely construed with the intent of manipulating audiences. Then we have misinformation, which is much the same but without the intention of actually causing harm. Finally we have the complexities of malinformation, where information is exaggerated or conflated to obscure the truth or the narrative; this is also where secret or classified information is often shared at a strategic time just to influence voters. Check out series one, episode one for the fuller explanation; we'll link it in the show notes. And there are also a few points to keep in mind when it comes to the misuse of data and information. For one, any group or individual manipulating information doesn't necessarily have
to follow a single route. Even if they're intentionally planning on manipulating information, one can begin by exaggerating historical events and follow it up with intentionally false and misleading information, only to galvanise voters towards their cause. For instance, I mentioned India early on, and I was actually in India during the elections; there's a good chance that by the time this recording is out, India will still be counting the votes. But if one were to track the misinformation and disinformation campaigns in the country, it's rarely a day without reports of false information making the rounds on social media platforms or communication channels, be it Twitter (or X) or even WhatsApp. In fact, the World Economic Forum said India has the highest risk of mis- and disinformation in 2024. Wow. So it's rife and it's relevant, and we're going to deal with it. It's probably time we brought on our special guest to navigate this.

TL;DR expert guest! This month we are joined by an expert who has worked as an analyst within the defence and security research group at RAND Europe. She's led projects which assess the
impact of emerging technologies on the information environment, and worked to identify the impact of disinformation and conspiracy theories in Europe. That sounds cool. Her research has informed strategy and policy at the UK Home Office, the UK Ministry of Defence, the European Commission and the United Nations Development Programme. From the Turing's Centre for Emerging Technology and Security, CETaS, we are very happy to welcome research associate Megan Hughes. Woo! That sounded a bit like the beginning of Blind Date, where Graham brings them on: "from the Centre for Emerging Technology..." Hello Megan, I'll let you speak now. Hello! Hi Megan. Thank you so much for having me, looking forward to hopefully an interesting discussion. Definitely. So can you give us a very brief explanation of what CETaS is and what a research associate does? Sure. CETaS is the Centre for Emerging Technology and Security at The Alan Turing Institute, and I'm a research associate within the team. We work on policy research relating to emerging technology and security, so we look at the implications of emerging technologies and we try to advise
actors like the government on what they should do in response. Okay, so we've learned quickly in the intro from Smera about mis-, dis- and malinformation, but could you tell us a bit more about how it plays out during election time? Sure, so I'll kick us off with a traditional influence operation. If we look back at the 2016 US presidential election, we can see quite a typical example of a state-sponsored influence operation. This was when Russian actors looked to influence US voters ahead of the elections, and they did a number of things, so it wasn't just a misinformation and disinformation campaign; it was much broader than that. We had things like hack-and-leak techniques, where hackers got into the Clinton campaign emails and then shared those emails over a period of a few weeks to distract from the main campaign messages. But specific to misinformation, Russian actors created a network of fake accounts, of bots, and we're looking at about 50,000 of them. They were all sharing divisive content and fake news stories, and reposting hashtags to make them go
viral, like "Hillary for prison", that was one of them. And they were also publishing political advertisements criticising Clinton. So that's looking a few years back, at something that, like I said, I'd term a traditional influence operation. When we look at the past few years, and at AI examples from elections that have taken place since the start of 2023, which is the period my research covers, I can talk to you about three examples of AI misinformation. First, you've got AI-generated voice clones. I don't know if you saw coverage of the Biden robocalls; this is where we had deepfake audio clips of Joe Biden urging voters not to turn out and vote in the New Hampshire primaries. We've also got an example from Poland, where in their recent election the opposition party actually published a deepfake audio clip of the Prime Minister reading a set of real leaked emails, but the audio was fake. So you can see the kind of generated voice content we're seeing come up. There's general AI-generated content as well: over in the US there are reports of whole news sites that have been generated by AI, sharing completely fake news stories. That's more text-based content, and it's quite easily shareable. And lastly, coming closer to home and looking at the London mayoral elections, we saw AI-powered bots, similar to the tactics in the Russian operation, circulating hashtags; the hashtag "London voter fraud" was circulated quite a lot ahead of the elections. So those are some examples of techniques and tactics that we've seen being employed.
Right, and is there any evidence of them having the desired effect? So that's really interesting. When we look at misinformation generally, if we take the AI out of the context, a lot of studies have shown that only a small minority of people actually see the majority of misinformation. I think there was a study in 2016 on X, formerly Twitter, which showed that only 1% of users were exposed to 80% of the fake news content on the platform. And if you're exposed to misinformation, it doesn't necessarily mean you'll be persuaded by it. Fake news, we know, is more likely to enhance existing views; it's not as likely to radically change your behaviour, so it's not as likely to influence voting intentions. Studies have quite consistently found that misinformation hasn't meaningfully impacted the outcomes of elections, and that's because there are loads of factors that contribute to someone's voting choices. What we can look at is what's new with AI. Looking forward to upcoming elections, AI might make a difference in the amount of disinformation and
misinformation that might be disseminated, so it might help actors reach more people. It also might help to personalise misinformation. This is called microtargeting, a concept where personalised campaigns are aimed at individuals or groups, and it has been shown to be quite effective. I think something that's quite relevant is the platforms on which people are finding their news. We know that for young people between 16 and 24, the majority find their news online, I think it's 80%, and most of that is through social media. Not to scare anyone, because it's perfectly easy to see BBC News on social media; it doesn't mean that people are just getting all of their news from fake sites. But what's important is that traditional social media sites use graph models, where they show you content based on what your social network is sharing, liking and engaging with. When we look at TikTok, which is obviously going to be a big player when it comes to sharing information before elections, TikTok doesn't use that model so much. TikTok actually shows you information that comes from outside your social network; it uses algorithmic recommendations to bring in new content. So if we look at what's new with AI, in terms of being able to personalise disinformation or misinformation and being able to reach more audiences, could we see more effective use of misinformation on platforms like TikTok? Maybe, but that's not a cause for worry just yet. So would you say TikTok's the answer to echo chambers, then? We're breaking
what things were before? I wouldn't recommend spending all your time looking for information on TikTok. Maybe it's the answer to echo chambers, but I know that groups like Meta are exploring changing their use of the social graph models, so who knows. But I think you're definitely right that echo chambers exist on traditional social media sites, and we know that people are likely to share things they agree with, with people who agree with them. Right, so surely voters are used to being sold something when it comes to electoral promises and manifestos; that's the basis of electoral campaigning. But doesn't that mean we've always been vigilant towards such trends in our communication? Sure. In the interest of talking about a really timely topic, can I suggest we go back to ancient Rome? Yeah, it's just down the road. Just down the road, you know, finding the really relevant facts here. But there is an anecdote, and there's a point. So if we go back to the Roman Republic, it's facing
civil war. Octavian, who is Caesar's adopted son, wants to get the public onside so he can win against Mark Antony, one of Caesar's most trusted advisers. So what does he do? He spreads a bunch of rumours that Mark Antony is a drunk, and that because he's having an affair with Cleopatra, he doesn't have any of the traditional Roman values that would make a good leader. I hope you see the point now: this is a very early example of a misinformation, even disinformation, campaign. So we can trace this back thousands of years. Misinformation is definitely not a new problem; it's something that, as you say, we've been dealing with for a while. When we look at the impact of new technologies like AI, there are some differences. I mentioned being able to disseminate misinformation more easily and to more people, but there's also a concept called the liar's dividend. I'm not sure if you've come across this? No. So this concept was coined by a couple of US law professors, and it's the idea that people can now claim that true information is
false, and avoid accountability by relying on public scepticism and the belief that the information environment is completely inundated with false information. That's something we might expect to see, and we've actually seen an example of it in relation to elections in Tamil Nadu in India. A clip came out of a minister accusing his own party members of illegal financing, of fraud, basically, and he came out and said, no, I dismiss that, that's not true, I never said that. But a later analysis of the clip by technical experts found it was quite likely that the clip was authentic. So that's one example. We've not seen lots of examples of this, but there's definitely potential for it to happen. Yeah, so as that begins to happen, people's trust in truth will, bit by bit, break down. It's funny, isn't it: you think of this as a sort of highbrow topic, but it's basically just playground tactics. Completely. With all of this, how do we authenticate real information? I know you said
a couple of technical experts analysed that clip, but if there's so much of this going around, are there any ways we can ascertain the truth, at least for an audience that might not have that much time? Is there maybe someone out there doing this work for them? I think there are a few things: there are things that platforms can do and things that we can do. The first piece of advice I'd give is to maintain a healthy level of scepticism. It's important not to believe all the hype and not to worry too much, because just as you mentioned, Jonah, if we get really confused about the state of the information environment and we think the waters are completely muddy and we can't find true information anywhere, that's not going to help anyone, and it creates a sense of public anxiety that might actually undermine things like real election results. In terms of practical things that people can do, and platforms can do as well, we've seen that prebunking is a method that can be quite effective. This is a prevention-rather-than-cure method, where you anticipate the use of disinformation, warn people about it before it spreads, and provide factual information on a topic, so people are aware that disinformation might be coming their way. I read about prebunking; that's not a word I'd encountered before. Yeah, and it's actually been effective: there have been some early studies on climate disinformation, and I think platforms like Meta have actually started using prebunking techniques online. So it's proven
effective, and platforms are deploying these techniques. Looking at the stat you gave us about how few people are actually exposed to misinformation, does that mean the majority of information we're getting is real information, and we should be told, yeah, you can believe a lot of what you're getting? Is that happening? I think you're completely right, and it's really important that we encourage trust in the information environment. When you log on to Facebook in the campaign period, if you share a post that's to do with a political party, for example, if I remember rightly a little comment comes up saying, have you checked this source, or have you checked the content? I think that's a great example of something that could be done to make people pause and think. Okay, so on the different methods that we can use, either as individuals or that platforms are taking on: I've also heard about the Coalition for Content Provenance and Authenticity, essentially content watermarking. Is that going to have any real impact? What can we see in the future when it comes to C2PA? I think it's a great question. I think C2PA is a step in the right direction. It's a group of organisations that have come together and committed to developing technical specifications to be able to trace the origin of media, and there's lots of ongoing research on watermarking. But there are a lot of problems with it. There's the adoption problem: if one platform adopts a form of watermarking and they're putting notices out saying, oh, this
content is AI-generated, there might be an assumption by users that any content that isn't watermarked is legit, and that might not be strictly true. So there's an adoption problem there. And even if watermarking becomes very good, I think we can assume that sufficiently capable and sufficiently motivated actors will get around it. So it's a step in the right direction, but it won't be a great solution that solves all of our problems. So Megan, could you tell us about this CETaS report? Sure, this has been a great project to work on, and it's ongoing: we've got a publication coming out soon, that's a briefing paper, and then a longer-form report due out later this year. What we've been looking at is the impact of AI-enabled threats to the security of elections, and we've been looking at examples of AI misuse from 2023 to date. The takeaway I'd like listeners to think of is that examples are quite scarce, and where they do exist, they're really hyped up by mainstream media. So the risk isn't really in AI use during elections; there's a small risk, but the major risk is the heightening of public anxiety and the undermining of the general information environment. What we don't want is for people to lose trust in genuine, authentic sources and information. That's the key top line I'd want people to take away from our reports. Yeah, that's a really good point. Let's make sure that we're not contributing to the hype about misinformation with this podcast. So
I suppose that kind of leads us to any final thoughts from you, a concluding statement if you will. Sure. I think the key message is that misinformation has been around for thousands of years. AI is relatively new to us all, but it is just a tool, so people will use it for good and for bad. But please don't worry that it's going to hugely impact all of the upcoming elections in this very important year for democracy. There's a lot of hype, but we're yet to see any real evidence that AI has actually impacted any election result. So just think critically, check your sources, think about the content of news, and that's it. All right, so just before we leave, there's one final question: hypothetically, in a world where you are standing in our prime ministerial elections in the UK, Megan, what legislation would you pass, should you be elected? Oh, that's a really good question. I have to think really carefully, because there's a lot of public accountability with a public podcast. I think that the Online Safety Act has made some good steps, but I would like to see stronger legislation surrounding pornographic deepfakes. We've spoken about AI in the context of election security, but 95% of online deepfakes are pornographic material, often of women, so that's a huge problem. I think it got discussed a lot with what happened to Taylor Swift, but the conversation spiked and has since dropped down a bit. So I think that's a really important topic that we need to
have really strict laws in place to deal with. I mean, that's a great point; I'd vote for you just on that. There's my campaign! And we'll actually be coming to a bit of the deepfake stuff later in the episode. Thank you very much, Megan, you've been a wonderful guest; we'll let you get back to saving the world. Thank you very much for having me, this has been lots of fun.

Okay Jonah, so for our second story I really wanted to talk about robotics. Robotics, sorry. So I saw a really interesting video the other day from Figure AI about their new robot named Figure 01, and OpenAI software has been integral to the development of this robot. The reason I think I was so surprised by it is the way the robot responds to some of the tasks the person is asking it to do, not only in terms of its movement but also the way the robot spoke. I think that was the first time I actually confronted the fact that this isn't something that's a
few decades away, but something that we're actively working on right now. Yes, it's a pretty amazing video; we will of course link it in the show notes for our listeners that haven't seen it. The launch video for Figure 01 has someone asking this shiny chrome robot for some food. It gives him an apple and then proceeds to clean up a mess while explaining why it chose the apple: because it was the only edible thing on the table. I know the task of giving someone an apple doesn't sound hugely impressive, but you do need to watch it to see how different it is, at least from how I thought humanoid robots were progressing. It's mad. Yeah, and this startup Figure AI is backed by some of the biggest names in tech, Jeff Bezos, Microsoft; a lot of companies have invested, I think over a billion, into the development of this technology. What's key to this new shift is that OpenAI's recent generative AI software has been a key part of the entire puzzle. It's making the robot more dynamic, and it's making that natural-language speech a lot more impressive for the general audience. I think it really shows how quickly tech has been evolving. I mean, if you compare the Industrial Revolution times of the early and mid-1800s to the rapid jumps that we saw from the year 2000 to now, where we had some basic computing and now we have really, really smart phones, I just wonder, if we're seeing this right now, what we can expect
in the next two or three years. Yeah, so are we going to see a massive increase in robots around us now? Are we prepped for this? As I said before, generative AI has been instrumental in giving the robotics industry that boost, making it more dynamic and able to respond in real time. But if you watch the product videos, it's far from our imagined idea of a perfectly mobile robot that's able to respond that quickly, especially if you see some of the videos of the ones that look like little dogs. Yeah, it's a bit creepy, to say the least. But that's just talking about the more performance-related aspects. I think there are also the general challenges of generative AI, some of which we've already covered. Yes: is it the impact on vulnerable communities, or the biases, or the safety concerns, or the explainability? Or all of the above? Yes, it's pretty much all of that. I mean, this isn't to say there aren't great uses for robotics, though. We can use them to navigate difficult terrains; for instance, NASA is working on a robot to navigate celestial bodies, so you don't need to put a human at risk on the Moon. Instead, a robot might be able to walk around and pick up some space material to bring back for research purposes. But it is a giant leap for machines. Jokes aside, there are studies showing that there is success with AI and robotics in healthcare, for mobility access and so on. Interestingly, we can also integrate them into the larger Internet of Things network infrastructure, and this might bring us one step closer to what
we envision as smart homes and smart cities, where all our devices are interconnected and perpetually consuming data about our every movement and every decision. You know where I'm going with this. You say it like it's a bad thing, but I feel like I'm still so naive about how this data collection really impacts me; it's too easy to accept the T&Cs we're bombarded with. So what can we expect in the next few months? Well, for the next few months, for manufacturers, and this ranges from Amazon to Boston Dynamics to Hyundai to Nvidia to Tesla, everyone's getting in on it; it's a rather even playing field as of now. So if we're imagining a sort of Jetsons-esque future, then presumably the production costs need to come down. Well, if we continue on an unregulated path to the point where robots are affordable, it would actually come at the simple cost of your data, your agency, or even your job. Who needs them? Do you think it is that dire? It is interesting, especially from a market-analysis point of view. If you take the language of these robotics websites, it might lead you to believe we need these machines to fill up these jobs, and that we are in fact the lazier humans, but that's me reading between the lines. Fundamentally, though, many of the repetitive manufacturing jobs which robots could replace are not only very low-paying but incredibly taxing. So if one wanted to upskill and move out of, say, working in a warehouse with rather repetitive tasks, they might not have the time, because they're stuck in endless shifts just to
make ends meet. That's creating the working poor. Side note, or side thought: if you were to lose work like production lines, you could lose the creativity that's born from them, right? Here's an interesting nugget. Berry Gordy, who founded Motown, was inspired by the production line he worked on building cars in Detroit. He thought you could do the same with a musician: bring them in, send them up the production line, and come out with a hit. He even had a quality-control system, like the car factory did, where they would make sure each song was the best it could be before it left the Hit Factory, even re-recording songs with different singers and things like that. So yeah, remove all repetitive jobs and we might not get another Motown. Oh wow. But are you saying we should continue keeping workers in very repetitive factory jobs, Jonah, in case we get another Motown? Easy for me to say, yeah. Although I must say, I used to be a very unskilled builder's labourer, and that is easily the time I've been most prolific in making music and art and feeling really creative. Not quite the Motown standard. But in all seriousness, there needs to be a lot more analysis and review of what's going to happen to the state of our markets, and what economic models will look like with greater automation. We have a lot of fundamental assumptions about labour costs, about knowledge, about information and so forth, but it really needs a proper deep dive as we see greater and greater automation.

Clickbait! I know what you're up to, with your tantalisingly open-ended
question, an air of seductive mystery. I thought I was kind of impervious to it, until this month, when I found myself paragraphs deep into an article titled "OpenAI is exploring how to responsibly generate AI porn". Let me guess: they're not actually exploring how to generate porn at all. Basically, you're right. So what happened was, this month OpenAI released draft guidelines for how the tech inside ChatGPT should behave, and with regards to not-safe-for-work content it says, basically, we don't do that. However, the article I read, which was in Wired and also the Guardian, focuses on this note lifted from the document, and I quote: "We're exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT. We look forward to better understanding user and societal expectations of model behaviour in this area." So you can kind of see where the article got excited. Yes, I can see where they got it from, but they were also told by an OpenAI spokesperson that "we do not have any intention for our models to generate AI porn". So this segment is at risk of becoming clickbaity itself: clickbait of a clickbait. But it does raise some important questions, I think, about the future of generative AI and where we need to be more careful. The platforms want users to have maximum control, but also don't want them to be able to violate laws or other people's rights. I think we touched upon it last series, where we looked at deepfakes being used for generative porn, and since then there have been the very, very public deepfakes of Taylor Swift. Yes, as
Megan touched on earlier in the episode. And we'll obviously link the episode from last series where Smera and Jesse talk about that in the show notes. So a month or so ago, the UK government created a new offence that makes it illegal to make sexually explicit deepfakes of over-18s without consent, and OpenAI are very clear that they do not want to enable users to create deepfakes, but it is happening on some platforms. I read an unpleasant article about the rapid rise in the number of schools reporting children using AI to create indecent images of other children in their school. Which is very sad. I know, and we're talking about something within schools. In April we saw the first of what will hopefully be a larger crackdown on sex offenders using AI: a 48-year-old man from the UK was prosecuted and banned from using AI tools after creating more than a thousand indecent images of children. Yeah, so we need better tech, better regs, and a better education towards sex and respect in general. Aside from the illegal
and abusive uses of AI, when we're talking about sex, I can't see a future where some form of pornography isn't created by AI. I imagine it's often the fringe communities that the tech isn't specifically made for who improvise to make what they want, and end up discovering some new use case that no one thought of. Surely it's going to play a part somewhere in the future of AI. I would actually be more worried about AI-driven porn. There is no transparency on the data used to train some of the generative AI models, and we also have the problem of poor explainability, if we can even say there's any form of explainability in this case. There's a chance that someone's photographic data has been used to train a model, and maybe somewhere down the line there's some generated porn which looks oddly familiar to you. I personally do not want to wake up to a future 20 years down the line where a photo I uploaded to Facebook, completely non-harmful, ends up being part of a training dataset that has very unwelcome uses. Yeah, and I wonder if there's something in the idea that if AI companies do explore the more questionable avenues, the resulting new architecture developed could enable people with ulterior motives to jailbreak the system and use it for their own, even more dubious, means. Oh yeah, definitely. I mean, better tech doesn't mean we eradicate crime, as much as criminal justice AI systems might make you want to believe. The more interconnected our networks, the more risks I think there are of cyber operations, be it data theft, data leaks, or even model replication, where they can reproduce some of these models and the
outcomes, at the risk of the person whose data is being used. Yeah, okay, let's wrap it up there. I suppose, just to bring it full circle back to clickbait, and having learned from Megan about being aware of what we read and where we get our information, the message here is to be vigilant. Although this clickbaity headline led us down a valid rabbit hole, sometimes you could find yourself in a more spurious place, so think before you click. At least we're on the right track when it comes to the law. It's good to say that there are active steps being taken to make sure that people are protected, and that there are court rulings now that can be upheld in future cases. Hopefully it's not the case, but knowing how the world tends to use tech, it won't be surprising if we hear more about this as the technology improves. Yeah, we'll keep you posted.

Well, that's about it for this month. But before we go, Smera, I want to continue a tradition from the last series, and that is our positive news segment. So
what made you feel optimistic about AI this month? There's been a lot happening, but there's one story I want to focus on: the big breakthrough with DeepMind's AlphaFold 3. Yes, I've heard of it. So the big breakthrough is that this AI system can now map out protein structures quicker than ever, to help find cures for diseases, so essentially to improve drug discovery. Would you like to know exactly how that works? Because I spent some time going into the physics and the biology. I absolutely would, because I did read the headline of this story and thought, that sounds positive, but then I read the rest and understood nothing, so I would love some help. Okay, keep in mind I'm not a doctor by any means; if I was, my parents would be so proud of me. But basically, proteins are the workhorses of the cell; they're important for everything, and each protein is made up of complex amino acid sequences. The issue is that these sequences, and how they make up the protein, are governed by very complex physical and chemical interactions, which has meant that humans trying to map it all out have taken a lot of time; apparently it's been a 50-year grand challenge for medicine and biology. But now there's a computer that can do it for us, and if it can map out proteins, it's the future of drug discovery. Why, you ask, is it the future of drug discovery? It's because drug molecules bind to specific sites on proteins, so if you know where those sites are on a protein to bind the drug molecule to, then we can find a way to make that drug effective. Very nice. Shout out
AlphaFold 3. Shout out AlphaFold 3, I like it. So that's it for this month. Thank you very much again to Megan Hughes, our excellent guest; thank you to Jesse behind the scenes; thank you to Smera. I should also just mention that, Smera, this week I watched you perform at the Pint of Science event in London, where you were performing your imagined future: you came from Mars, from the year 2060 or something? Yeah, 2064. I came down from Mars. It was a very hectic moment of travelling for me; I don't usually come back down to terrestrial Earth, but I luckily got the funds from a specific sponsor. Your sponsor was Little, right? Yeah. It was really good, and for those interested, our YouTube will have the Pint of Science video in the future. Well done, Smera. Thank you to everyone who listened this far, and we can't wait to see you next month with a new set of stories that we will cover in detail. Bye!