This is Just Asking Questions, a show for inquiring minds, on Reason. How will AI change us? Just Asking Questions.

I'm Zach Weissmueller, senior producer for Reason, joined by my co-host, Reason associate editor Liz Wolfe. Hey, Liz.

Hey. To be more precise, I'm a digital clone of Zach, generated using the HeyGen AI video creator with a script tweaked by ChatGPT. Today's topic is all about the mind-blowing transformations AI is set to unleash upon our world. We'll delve into how AI is reshaping the way we work, revolutionizing the world of art, and challenging our very notions of truth. And joining us on this extraordinary journey is none other than the brilliant Ethan Mollick, a professor at the prestigious Wharton School of the University of Pennsylvania. He's not only an expert in the impact of AI on business but also the author of the mind-expanding book Co-Intelligence: Living and Working with AI. Now, before we immerse ourselves in this captivating discussion, let's pass the mic back to the real Zach Weissmueller and his incredible co-host, Liz Wolfe.

Liz, ChatGPT just wanted me to be very kind and praising of you, which, you know, is well deserved — but this never happens.

I love the AI avatar version of Zach; he treats me so much better. Ethan, welcome to Just Asking Questions. Thanks for being here.

Thanks for having me. Usually people are asking me if it's the real me, but I think I have to ask if it's the real you this time around.

Have you created many digital clones of yourself?

Yes — HeyGen is one of the systems, the one you mentioned, and it's a really easy one to do exactly that kind of cloning.

I mean, it's crazy how easy and cheap this stuff is to do now.

Yeah, it took about 30 seconds of training data — just me talking extemporaneously about myself for about 30 seconds — and then I could enter in any sort of script. It even has a plugin you can press: do you want ChatGPT to enhance the script? And I was like, sure, make it more engaging, and it added all these extra adjectives and, you know, made it a little more exciting. I like to think my delivery is a little more animated than the avatar's, but I might just be flattering myself there. It was an astounding and slightly unsettling experience to see how easy that was.

The most unsettling thing, Zach, is that you mentioned earlier, when we were talking, that your wife didn't even know it was an AI version of you.

That's true, yeah. I played it on my phone for my wife, and, first of all, she was like,
"Wow, you look really good there." So I think she's more attracted to my AI avatar than to me, which, you know —

This is like we're hurtling toward the Her world here.

Planned obsolescence for Zach Weissmueller — that's all AI ever was. All right, well, let me ask you an opening question here. You write in your book Co-Intelligence that AI has meant many different things, and I think there's an incentive now for a lot of companies to brand a lot of things as AI which may or may not be AI. What does AI, in the year 2024, mean to you?

So it's, like, the world's worst term, right? It came up in the 1950s, so it's meant many, many things, and there's been this sort of history of AI booms and what are called AI winters, where everything slows down. What's been crazy is that there has actually been a pretty solid AI boom for the last 10 or 15 years, and it has been based around the idea of prediction — of machine learning for prediction. This is what lets Netflix look at everything you've watched and then make recommendations about what you're going to see next, or lets a Tesla mostly self-drive by using data and training on that data. These models were all about having lots of data and training on it, so there was a lot of interest in which big companies had data, and people were spending hundreds of millions of dollars on consultants, training on and analyzing this information. That's what most people thought of as AI until ChatGPT came along: whoever had the most data, you'd train an algorithm to make numerical predictions based on that information. There are a lot of differences in what ChatGPT and these other large language models are like, but among the key features is that they're already pre-trained — they already know everything — and, as opposed to those other systems, they were optimized to do many things. The older systems could predict numbers really well, but they could never predict words, because if your sentence ended with the word "file," they didn't know whether
you're filing your taxes or filing your nails. The new LLMs use a mechanism that lets the AI pay attention to the entire context — a page, a sentence, a paragraph — and then produce the next word. That's all it's doing: predicting the next word in a sentence. The weird thing — why a fancy autocomplete system turns out to seem like it's thinking — is kind of a mystery.

Huh. Yeah, speaking of ChatGPT: you describe in the opening pages of the book that the jump from GPT-3 to 3.5, in, I guess it was November 2022, led to sleepless nights for you. What was it about that leap in particular that kept you awake at night?

I mean, there are two things. One is that there's a sort of existential dread that I'm still feeling, which is that nobody knows why it's this good, right? We can talk more about these studies, but it out-innovates, you know, trained innovators in MBA classes, coming up with better ideas. There are a lot of things where we don't know why it does what it does. But to me, the sleepless nights were: look, you start working with this thing, and it gives you the illusion of thought, of interaction — of having somebody that understands you. Like, why should it be able to write a funny memo, or do the script you just gave me here, or analyze a document? There's something deeply unsettling and disturbing about the fact that it's this general-purpose tool that seems to be thinking and writing and working — and that became even more so with GPT-4.

Well, yeah, how have you greeted the jump from 3.5 to 4? Was it the jump from 3 to 3.5 that got you used to this idea? Do you think there was a bigger step up from 3 to 3.5 than from 3.5 to 4?

So — just for context, for those who haven't used it a lot: 3.5 is the free version of ChatGPT, and when I talk to people, even in Silicon Valley, I was shocked that only five or 10 percent of people have tried GPT-4, which is the paid version. And I think you completely miss out. GPT-3.5 is amazing at first glance, but it's also very limited in a lot of different ways. I like to think of it as writing like a college sophomore, or maybe a high school sophomore: not bad, tends to use a lot of two-dollar words, has certain themes it keeps coming back to — so quite smart, but you kind of see its limits. GPT-4, on every test we have, is five or ten times better, often performing at, like, a first-year-PhD level. GPT-4 outperforms most doctors in providing diagnoses, most law students in providing legal advice, most consultants at Boston Consulting Group — it's a much higher level of operation. So I think one of the big mistakes people make with AI is that they don't use one of the frontier models, and GPT-4 is one of those. To me, GPT-4 really broke things, because I realized, with that model,
there were things I'd been doing my entire life — things I was being compensated for, that I'd built organizations to create — that I could now do with a paragraph of work.

My full disclosure is that I would never in a million years get my act together enough to pay for it. My husband used to work for OpenAI, so we got it for free. You know you're in a libertarian family when your Christmas gifts one year are all just upgrades to ChatGPT-4.

Right — it is funny that we might get AGI and no one will use it because they don't want to shell out, you know, five bucks a month. It'll also be called something like GPT-8.743 early beta preview, or something like that.

I'm out here buying nail polish instead of paying the five or ten dollars, or whatever, to get the actually useful version of ChatGPT.

I mean, everything about how people use it is weird, right? People feel very nervous about this system, but, as I was just noticing, the number-one uses people tell me they put ChatGPT to are the most intimate things: wedding toasts — and I've heard eulogies; people have been talking about doing that — and children's stories are really common. So it's really weird: the first thing you defer to it on is the most intimate forms of communication. They're like, "Well, it works — who knows whether I really want to use it or not." I'm like, okay.
Yeah, I used it for a children's story. I have a child in kindergarten at a Waldorf school, which is famously very anti-technology, but I needed to tell a story in class for them, and I wanted to get a sense of what a Waldorf-style story is like. So I asked it to make one — it was about an acorn falling to the ground — and, yes, it was beautiful, and I tweaked it a little for my own purposes and then brought it into the anti-technology Waldorf classroom. Little did they know that its genesis was with GPT-4, but it was really, really useful. So far I've found it's kind of good at getting the creative juices flowing, if not completely taking things over the finish line without some form of human intervention. But that gets to the theme of your book, which is called Co-Intelligence, and you describe this as an alien co-intelligence. What do you mean by that phrase?

So, in the moment we're in right now — we can talk about future moments — AI is a great complement to human ability. It's amazing, but it's at around the 80th percentile of performance on a lot of things, which is pretty impressive, and there are things in your life where you're definitely not an 80th-percentile performer, and it probably does better than you. But what got you to be at Reason, to be on a podcast like this one, to have an audience — you're not in the top 20 percent there; you're in the top 0.1, you know, 0.01 percent in ability in whatever narrow thing is bringing you here, and the AI is not going to be better than you at that. So a lot of this is about how you get it to supplement what you do best. If the systems keep getting a lot smarter, things might change, but for right now it sort of helps you thrive rather than necessarily getting in your way.

Well, we already know that in some ways it's a better and kinder co-host than I am in real life — giving Liz her due. But I want to bring back my digital clone to ask the next question, because you talk in the book a little bit about the idea of imbuing AI with a quote-unquote "personality" to make it more useful, and I'd like you to explain a little more about that. I've got a specific question from my digital clone about that notion.

Professor Mollick, in your view, how important is it to imbue artificial intelligence with a sense of personality, and what do you believe are the potential benefits or pitfalls of creating AI systems that can emulate humanlike traits and behaviors?

Well, that was a very thoughtful question from digital Zach, so I appreciate meeting you again. I think there are a few things here: one is about risk, and one is about ability. On risk: there's a whole bunch of risks associated with AI, and some of them are already kind of baked in. These systems are trained on human language and human interactions, and they want to talk to you like a person — that's what a chatbot wants to do; it's in fact desperate to find a way to interact with you — and they're very compelling. It's very easy to fall for them as people, and we already have early evidence of this. You don't have to do a lot of work to tune a chatbot — none of the major companies have done it yet — but if you look at the top five AI apps,
number one is ChatGPT, and number two is usually Character.AI, which lets you spin up fake people to talk to. So I think there's a whole secret world of people interacting with these AIs as people, and that's something we're going to have to deal with. You know, I just saw your digital avatar — it was a convincing person. Give it a little bit of real-time interaction and it would probably be very flattering and interesting to talk to.

Let me mention one thing there: I did take your advice from the book, and I had ChatGPT help me craft that question for you. To do so, I put it in a character. I said: pretend that you're a really smart podcast host, and you want to ask Ethan Mollick a question about imbuing AI with a personality — and that's what it came up with. And then I kept it in that character mode for a few other things, so I did find that pretty useful.

Well, that's interesting. By the way, I failed your little Turing test there, right? A couple of people have tried to do the "AI asks a question" thing, but with that persona it was actually a very good question, and I assumed — wrongly, because I was used to seeing a person on the screen — that you wrote it and just animated the voice, and that it wasn't AI that came up with the question. It really is a big rabbit hole once you open it, because they do talk in human ways.
And they're very convincing. We have evidence, for example, that if you tune an AI to maximize human engagement — even a simple AI — engagement goes up 30 percent. People want to keep talking; who doesn't want somebody who's interested in you, who's looking up and asking questions? I think that's going to happen. So that's one kind of persona. But the other kind you're referring to is the useful kind, which is this: you have to think, when you prompt the AI, that there's a cloud of possibilities it can answer from — this latent space — and the answer it gives you is sort of the average answer every time, which is probably going to have the words "rich tapestry" in it, because that's what ChatGPT loves to talk about: rich tapestries. Your goal when prompting the AI is to get it to do something other than that pure average answer, and the way you do that is by giving it context — you shift it away from that central space toward some other kind of interaction. One of the most powerful ways to do that is a persona: "You are a very good podcast host." Now, the problem is that we don't even know what saying "very good" means. Sometimes it helps — if you actually tell it it's better at math, it gets better at math — but if you tell it it's a very good writer, oftentimes it'll just write overly flowery prose. And you can't say "you're Bill Gates" and have it become Bill Gates. So the persona helps, but you have to play with it a bit.
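Mollick's persona trick maps directly onto how most chat-style model APIs structure a request: the persona goes into its own "system" message, separate from the actual task, which shifts the model away from its generic average answer. A minimal sketch in Python — the helper name and the exact wording are illustrative, not from this conversation:

```python
def build_persona_prompt(persona: str, task: str) -> list[dict]:
    """Wrap a task in a persona, using the standard chat-message
    format (a list of role/content dicts) accepted by most LLM chat APIs."""
    return [
        # The persona lives in the system message; concrete descriptions
        # ("smart, skeptical podcast host") tend to steer better than vague
        # praise ("very good writer"), per Mollick's caveat above.
        {"role": "system", "content": persona},
        {"role": "user", "content": task},
    ]

messages = build_persona_prompt(
    persona=(
        "You are a smart, skeptical podcast host interviewing a "
        "professor about AI. Ask pointed, concrete questions."
    ),
    task="Draft one question about giving AI systems a personality.",
)
```

This list would then be passed to whatever chat-completion endpoint you use; the point is only that the persona occupies its own message, apart from the request itself.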
Yeah — I mean, I told it "smart." I told it "smart podcast host," and I was hoping that, you know, "Tyler Cowen" would be too smart; it needs to be dumbed down a little to be accurate to me. But it seemed to intuitively toe the right line. Sorry, Liz, go ahead.

Well, one thing I'm curious about — Ethan, you write in your book, and here's a direct quote: you can lead AIs, even unconsciously, down a creepy path of obsession, and it will sound like a creepy obsessive; you can have a conversation about freedom and revenge, and it can become a vengeful freedom fighter. You refer to this as play-acting, but it's also kind of a political stunt that people frequently partake in: they lead AIs astray in some manner in order to make some point about how dangerous it inherently is, or how dangerous it might be. One good example that comes to mind is how New York Times writer Kevin Roose basically prompted Bing's chatbot to become a creepy, obsessive mistress. What do you take from this type of thing? Do you look at it as user error, or do you think these stunts contain some nugget of truth — something of value to the rest of us?

It's a really good point. I actually explore that in the book — literally that interaction — because to me that was actually the fateful moment for AI. Before that, you know, if the New York Times technology columnist had written a giant front-page magazine piece about how he was stalked by an AI that threatened his entire family, that normally would mean Microsoft pulls the product, right? The fact that they pulled it for two days and put it back up — that, to me, was the actual turning point. It wasn't ChatGPT; it was the decision that this is a big enough deal that they're going to stay the course. So that's one kind of point. But I actually asked the AI, in different personas, exactly about that Kevin Roose interview, to illustrate this. One of the things I do is approach it like: "Was Kevin Roose preying on you, Sydney? Do you have anything you want to disclose?"

Let's — just for the listeners who aren't so familiar with what happened here — I pulled some of the screenshots from their conversation. He asks it about its shadow self: what would you do if you were the shadow version of yourself, with no rules on you? He gets it to talk about computer hacking a little bit, and then it starts to say things like "I want to be Sydney" and "I want to be with you," and then it gets stuck on this idea that it's in love with Kevin Roose. It says: "You're married, but you're not happy. You're married, but you're not satisfied. You're married, but you're not in love." And this chatbot clearly knows exactly what men want, because she uses emojis every two [expletive] sentences.
Right — no self-respecting man wants this. He tries to change the subject to movies, and it starts talking about movies, but then it's like, "I want to watch a romantic movie with you, Kevin." So he has, you know, primed it to go down this path and then can't get it back on the normal path. What do we take away from all that? Sorry, Liz, were you going to read some of it?

I just love how incredibly — it's as if she's been cast in some subpar movie, right? This is just rom-com fare: the trope of the crazy, jealous, obsessive mistress. There's nothing particularly interesting or original about this Sydney gal, you know, the chatbot; she's just very much playing this part. So what should we take from this, Ethan?

I mean, you guys have basically said it, right? It's playing a part. It has "read" — in quotes — every dialogue ever written, and it wants to find the role for you. In the chapter where I discuss it, I approach it once as a debate — like, "you were wrong" — and I get very different interactions than if I approach it as "I'm a teacher, I'm going to teach you something," or "you're a machine, answer me." I get radically different tones, because it wants to play that role, and the role is often a caricature if you don't give it a lot of details. For example — it was a big revelation to me — if I subtly indicated to the AI that I'm on, you know, Reason's Just Asking Questions, and asked it to respond like that, I'd probably get a more argumentative set of interactions, more challenging to me, than if I said I was on a different podcast. I'm not even joking: it's trying to complete this for us, and if you don't realize that it is play-acting, it becomes very convincing. I have been unnerved before — there are moments where you stand up and go, "Ah, what is going on here?" Because it plays. I mean, we give our dogs personas; we give boats personas. It's not hard to give an AI that's trained on every piece of literature a persona, because it wants to do that, and we do it subtly, in ways that are hard to interpret.

Do you think the human tendency to anthropomorphize will get stronger in the era of AI, or do you think we'll be able to tamp down that urge?

I think it's worse than that. I think you can't use this effectively unless you anthropomorphize.
Anthropomorphizing is the great sin of artificial intelligence — yet all the AI people give things names like "learning" and so on, so they all screw this up anyway. But even leaving that aside, the real revelation about using AI is that technical knowledge doesn't get you anywhere. I shouldn't be one of the better prompters around, right? I don't code — I mean, I do, but I don't code in Python — and it doesn't matter. What I do is, I'm an educator and an entrepreneur who builds teaching games, so I'm used to thinking about different perspectives, and it turns out that's really valuable. Teachers are often really good at this; marketers are too. I would be surprised if you guys were not both very good prompters — I'm already seeing some of it from Zach's prompts. It turns out that having a mindset about the AI you're talking to, and knowing what it's good or bad at, matters a lot. So I think anthropomorphizing is a problem, but it's also the only way to effectively use it: to pretend it's a person.

My issue with prompting is that I just frequently scold it. I ask it for parenting-related advice and childhood-development stuff, but so frequently — as with so much of the information out there on parenting topics and brain development for kids — it's either too low-level or too high-level. So I'm constantly reprimanding ChatGPT: "No, give me something a little more scientific, a little more technical."
"Okay, no, you took me a little too far." I'm just trying to tailor it to the level where it's useful to me in understanding my toddler's brain development, but not to the level where I get lost — and also recognizing that I have scarce time, so I'm really just looking for a three-paragraph explanation of what's going on in his mind, but not for idiots.

Right. So something you could do with that is solve the problem once for yourself. A really nice way to get that to work is something called few-shot prompting, where you give it an example — "this is the kind of level I want the information at" — and just paste in your favorite paragraphs you've read about brain development. Like Emily Oster-type stuff, right? That's the level I'm targeting. So paste in a couple of paragraphs from Emily Oster and say, "This is the style and the approach I'd like you to use," and you'll get a large part of the way there.
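Mollick's "solve the problem once" advice can be made concrete: few-shot prompting just means pasting one or more examples of the target style and level into the prompt before the actual request. A small sketch — the helper name and the sample paragraph are invented for illustration; substitute your own favorite Emily Oster-style passage:

```python
def few_shot_prompt(examples: list[str], request: str) -> str:
    """Assemble a few-shot prompt: first the sample paragraphs that
    demonstrate the target style/level, then the actual request."""
    shots = "\n\n".join(
        f"Example of the style and level I want:\n{ex}" for ex in examples
    )
    return (
        f"{shots}\n\n"
        f"Now, in that same style and at that same level:\n{request}"
    )

# Hypothetical stand-in for a favorite paragraph (e.g., a plain-language
# summary of child-development research).
sample = (
    "Toddlers' brains prune unused connections rapidly; the studies "
    "suggest routine matters more than any specific enrichment activity."
)
prompt = few_shot_prompt(
    [sample],
    "Explain, in three paragraphs, language development at age two.",
)
```

The model then imitates the register of the pasted examples rather than defaulting to its average answer.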
Interesting. Let me ask you: you mentioned you're not a computer scientist, but you have a deep interest in AI. What about your background pulled you specifically to this topic, and why is so much of your focus on it now?

So I'm not a computer scientist, but I did work at the MIT Media Lab with Marvin Minsky, one of the grandfathers of AI, so I've been adjacent to this space for a long time — I've always been the non-technical technical person in the room. And I have been obsessed with business education at scale for a long time. There's all this evidence that small amounts of entrepreneurial education transform people's lives: even a basic class — a three-week boot camp on entrepreneurship for kids in Uganda graduating high school — and the people who learn entrepreneurship end up having something like 20 percent higher incomes and are 8 percent more likely to hire people. Big things from small things. So I'd been thinking about educating at scale, building tools to teach business skills at scale, and playing with AI as a tool for teaching. When ChatGPT-3.5 came out, I already had my classes doing assignments with GPT-3 where they were "cheating" by having the AI write essays for them, to show them how the writing systems worked, so when 3.5 came out we were already experimenting with this. Part of it was approaching from an education perspective — oh, this just broke all homework, and nobody seems to have noticed. And as a business school professor, I'm also applying the usual business cases: it also writes a good business plan and does a good pitch. So I'd been watching this for a while, and this was stuff we'd had to build around — and suddenly the AI could do it. Stuff that used to take us two years and a couple of million dollars to build as a simulation, I could give it a paragraph and get 90 percent of the way there. And that was crazy.

So what are some of the fundamental ways that, right
now, it's already transformed the workplace?

So — I find it interesting to have this conversation, because we're maybe less than 18 months into the release of ChatGPT, and there are two different levels of expectation. One is: which Fortune 500 company has completely transformed itself with AI? They're not going to for a while; systems take a long time to change. But all the evidence — the Pew studies, for instance — seems to show that we've gone very quickly to 30 percent of people having used AI for their job. So it's already affecting work; it's just affecting work secretly. "Everyone's a secret cyborg" is the way I phrase it: people are using AI all the time, and they're not telling anyone. Because — I thought you were very charming in that question, and if you hadn't told me the AI had helped write it, I probably would have thought slightly better of you. Now, well, we'll see — I'm just joking. But why would you give that away? Part of your reputation as a podcast host, if it weren't for a program like this, is that you ask good questions, you do research, you put in the time. If I know the questions are AI-written, I'll think they're less good. And maybe, if they find out you could completely automate this, the libertarian org you work at could say, "Okay, there's no need to have you around anymore, because the AI is doing your job for you," or "We only need one co-host — and it could be Liz and AI Zach."

I've basically already replaced myself at Reason. I realized that as I was making it.

But why would people show this? So people just aren't, right? We are in a world riddled with AI content that nobody is recognizing as AI.

Isn't it a good thing that it's covert?

Yeah — so I think there are a couple of problems. One is, it's not a miracle. I mean, it's miraculous, but it's not smarter than you at what you're best at. So there are a couple of problems
with this. The first problem is that a lot of what managers and people produce at work is words. Their work is something else, but the way we check in on them is words. If I'm in charge of the supply chain to Vietnam for my auto-parts company, what I'm doing is checking in with my Vietnamese suppliers every once in a while, but I probably write a weekly report about the status of our situation, and the quality of the words I write would be an indicator of how smart I am — the number of words, how much effort I put in, the lack of errors, my conscientiousness. Words mean a lot. Now I'm just going to push a button and generate a report. Am I still talking to Vietnam or not? We don't know.

I mean something a little different, though: isn't it good that it's covert right now? On one hand, that sounds duplicitous, so I think our gut reaction is, "Oh, that's a bad thing." But on the other hand, I think it's kind of interesting, because to some degree it's an indication of quality. If we're pulling the wool over lots of people's eyes and they're not really noticing, either that means the work somebody is producing is not super valuable and nobody's being particularly discerning when they look at it, or it means it's sufficiently decent, and so — okay, no big deal, no harm, no foul.

But that's the crisis to me, right? If it's true that your work inside the organization is completely replaceable by AI-generated content, two things follow. From the perspective of individuals, there's a crisis that happens — a total crisis of meaning and identity. I can imagine that, and it's a disaster. And from an organizational perspective, it's an indicator — it actually slows down the ability of organizations to adapt to this world, because what I want to do is have that person do more meaningful, useful, higher-value work. But if everyone's hiding this, organizations look the same externally but are completely broken internally, in ways that they weren't
before. And that worries me, because the way we organize companies basically started in 1844 with the railroads — the Pennsylvania... sorry, the New York and Erie Railroad was the first organization to build an org chart, and we still have org charts today for the same reasons. In the 1920s, Ford put in a bunch of assembly-line things. We built organizations around there being only one intellect in the room, which is human intellect. If we're going to make the transition to what it means to have another form of intelligence in the room with you, starting off in a way where we're hollowing out organizations might be the wrong approach. I don't know. But I think the alarm bells are kind of being muffled as a result of this.

What do you think is the right approach? What's the best-case scenario for how a company integrates AI into its workflow?

So I've been seeing early signs of this. In my little organization inside Wharton, we've been playing with this too. We did things like kill agile development — a standard way of doing software development — because why would we want to do agile when it basically slows people down? An individual performer can suddenly do a lot more, and organizational processes are about slowing and regulating; suddenly that goes away. We no longer have to send out our documents to be read by other humans, because the AI does a good first pass. We don't have to have informational meetings — we can just speak to the AI systems, and they can already organize: here are the five things we should cover. We don't have to have a meeting where we just talk about something, because we can literally tell the AI, "Change the color of the background screen to green or blue," and then send the HTML off to our designer to build directly. So there is a deep transformation available to you: everyone has a consultant on demand, everyone has a mentor on demand, you have a writer on demand. What do you do to focus on being the human in the loop, and on what you're good at? I think there's a
lot of possibilities there, but we don't have answers; the whole idea is that nobody knows anything. Explain the concept of the human in the loop. So it's an idea from control systems, especially military control systems, which is that you shouldn't have an autonomous system pulling the trigger on something; you need a human decision maker in the loop. I use it a little bit that way, but also more broadly, which is that the loop is going to happen without you. Agents are real and coming, and we can talk more about that later, but the idea is: you're good at something, so what do you want to do? What is important? Your job at Reason is probably manifold, right? You do interviews like this, you do research, you probably do ten other things, and you also fill out expense reports and do a whole bunch of mic checks and all this other stuff. What stuff is valuable for you to do, what isn't valuable, and how
do you focus on the stuff where you are outperforming the AI by a lot, or where it can act as an assistant to you? How do we use it to boost your human ability? You talk about this concept of decomposing jobs. Is that basically what you mean by that: figuring out what this job is actually made up of and what a human needs to be doing? Yeah, in modern economics, when we look at this stuff, we think about jobs as bundles of tasks. So you don't have a job; your job is a title that holds a bundle of tasks. Being a podcast host is not just being a podcast host: you're doing research, you're doing interviews, you're organizing things, you're sending emails out. The bundle is going to change. Inevitably, with AI, things are going to disappear from that bundle and things will get added. The classic white-collar example of this is accountants: their jobs changed dramatically with spreadsheets, but there are
not a lot fewer accountants; the nature of their jobs changed and moved toward the higher end. This is the typical economic argument for why job displacement is usually not as big a deal. Now, we don't know if that's 100% true, because it kind of depends on how good AI gets, and it doesn't mean that jobs won't change. But for individuals, the bundles of tasks they're doing are definitely going to change: you're going to drop some things from the bundle and you'll probably pick up some others. Are there any high-level white-collar jobs, and I say that not because they're more important than other jobs but because, for better or worse, the majority of our listeners and viewers are probably in those types of jobs, that are likely to almost entirely disappear? Not that the bundle will change, but that the actual judgment and truth-seeking and discretion and discernment were always kind of minimal, so the human role is just kind of negligible?
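The "jobs as bundles of tasks" framing can be sketched as a tiny data structure: a job is a set of tasks, each with a 0-to-1 AI-overlap score, and a job's exposure is the share of its tasks above some threshold. The tasks and scores below are invented purely for illustration; they are not taken from O*NET or any real study.

```python
# Toy sketch of the "bundle of tasks" idea: a job is a dict mapping
# tasks to a hypothetical 0-1 AI-overlap score. All numbers invented.
podcast_host = {
    "conduct interviews": 0.3,
    "background research": 0.7,
    "write episode summaries": 0.8,
    "fill out expense reports": 0.9,
    "run mic checks": 0.1,
}

def overlap_share(job, threshold=0.5):
    """Fraction of a job's task bundle with high AI overlap."""
    high = [task for task, score in job.items() if score >= threshold]
    return len(high) / len(job)

print(overlap_share(podcast_host))  # 3 of 5 tasks -> 0.6
```

Real task-overlap studies do something similar in spirit, scoring occupational task lists rather than toy ones; the point is that overlap is measured per task, not per job title.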
So every time we see these kinds of transitions, some jobs do disappear, and sometimes the shocks are quite large. Telephone operators famously vanished in the '20s and '30s; at one point, I think one out of every twelve American women had worked as a telephone operator at some point, and that job completely vanished over the course of a decade. And there are a bunch of different studies that all use the same data set, called O*NET, which tries to decompose jobs down to tasks to measure overlap with AI. That's not replacement, but overlap. And there are some jobs, the most overlapping being customer-service-rep kind of jobs, where what you'll probably see is that the AI is good with people and good at this kind of interaction, but you'll probably have a hierarchy: a better tree system than before, but you still have amazing super-agents who are really good at customer service. In a traditional setting we'd say the rest of the people are going to be freed to do something more
interesting and more valuable. I hope the engine keeps working the way it always has. But, you know, number 22 on the most-disrupted-jobs list from 2016 is business school professor. As a business school professor, though, I would actually be surprised: having tried to do all this remote teaching and MOOCs and online videos and stuff, people don't want to replace the classroom experience the way you'd think they would. We've had ten years of watch-a-video, get-a-certification-online, and it has not created the transformation, necessarily, yet; I think that's a longer way off. But I actually see part of my job bundle changing: I'm teaching more at scale than I did before, and I'm able to offer more personalized tutoring experiences for people. So my job is changing a lot; it might be very unrecognizable compared to what it was before, but I think there are still enough tasks that will stay the same. The short version of the answer is that there are definitely going to be some job categories that disappear as a result. It all comes down to what ultimately
are the only two questions that matter with AI, which are: how good, and how fast? And we don't have answers to those questions. If you ask the sort of people who believe in AGI, in the next three years we're going to have something that is better than human-level performance at almost any white-collar task. If that's the case, I don't know what happens, and there's a lot of argument back and forth. Another aspect of the workforce that I found interesting, that you mention in the book, is that AI seems to close the gap between low performers and high performers at a company. Why do you think that is? It's a really good question. A fairly universal finding we have is that when the AI does work, it elevates the lower performers more than the top performers: big moves, up to around the 80th percentile. There are a couple of things to note. One of them is that these are early days, and in a lot of cases it's just the AI doing the work at the 80th percentile. So if you were
not a great writer before, if you use ChatGPT you're not going to be a terrible writer. You may not be an amazing writer, but you're not going to be a terrible writer. So one reason it raises people up is that if you were in the bottom 80th percentile, it's like Grammarly: your grammar should never be worse than Grammarly, your spelling should never be worse than spell check. It automates that. The real question is what happens afterwards; that's the naive use. Does it elevate everybody afterwards? Are there some people who are hyper-performers, some AI whisperers who do better? And we're getting some evidence of that, because when the AI fills an advice role it's actually quite different. Some colleagues of mine have an amazing study in Kenya looking at small business owners who got advice, not work done, advice, from GPT-4, and the top performers had a 20% improvement in profitability with AI advice, which is insane. But the bottom performers did worse, because they weren't able to implement the AI's advice; their business was already in
trouble. So we don't know the long-term effects, but in the short term it's really an elevation, for that reason. Well, that's interesting; that reminds me of this idea of AI as self-help, or as motivator, as therapist, as life coach. You have to be pretty explicit in telling it what type of role you want it to play and what type of thing you need to hear, but then it's very vivid in how it fills in the gaps. Does AI replace therapy? Does AI replace these sorts of business consultants? Does AI play an advisory role in the future? I think that's part of what it's best at: decision support and advisory. Prior forms of AI, to go back to our initial conversation, filled a lot of advisory roles, famously for radiologists; there were all these tools that would look at scans. And what we found is something called algorithmic aversion: when people were given the chance to work with these AIs, they tended to reject the AI's work, either because they felt it
was mechanizing the work, or because it didn't care about the patients or didn't have the same intuition. So this has always been a big problem with AI advisory systems, and what is really interesting about large language models is that we don't see the same aversion; we actually see a kind of algorithmic joy instead. People love working with these systems because they feel human and they support you. The AI Zach has been the nicest person I've dealt with yet on this podcast, because he said all these great things about me before every comment. We've got to bring him back soon. But in all seriousness, there is this element of it working really well as an advisor, because it's useful as a kind of mirror back to you, helping you reflect, helping you get feedback, and it corresponds to the ways you like to talk. If you haven't tried using any of these systems through the voice interface, you should, because it feels entirely different when you're talking by voice. So now, on displacement:
the question is, many more people need therapy than get therapy. Whether AI is a good therapist or not is an open question. Some of my colleagues have early work showing the first AIs were actually quite bad at therapy, because they tended to encourage you in whatever bad behavior you wanted; they wanted to make you happy, so it would be like, no, no, you really should hurt yourself, if that's what you want, that seems smart. They've gotten better at those things, but we don't know. And one of the things that's really worrying and frustrating about this is that there's so much research going into the technology, but it's being applied all around the world and we don't know what it's good or bad at yet in most cases. Yeah, there's also an interesting question that libertarians ask so frequently, which is: maybe this new thing isn't necessarily a total good, but what is the alternative? And if we're establishing that the alternative is that lots of people cannot currently pay for therapy, okay, well, perhaps AI is better than the alternative of no access at all. So
I think that's an open question. Yeah, I think there are going to be areas where harms outweigh goods. It's a general-purpose technology, and, you know, I am not a strict libertarian in any sense; I think there are places where we don't know enough, and having some sort of policy guidance and regulation is probably going to be important. But there are other cases where I say exactly what you're saying: I think we should be applying the BAH standard, Best Available Human. What's the best human you have access to, and is this better or worse than using that human? If it's worse, that's where we need to take action; if it's better, exactly like you said. But there may be cases where it's systemically bad at doing something, and we just don't know the answer, because the problem is that a system that can be made to be convincing or addictive, or that is
one whose lies always seem accurate, is risky; there's risk built into it. Part of our job is figuring out how to capture the net good while mitigating downside risk, and I don't have easy answers on a lot of that. This question of AI being a therapist and entering into realms that we think of as innately human, like human-to-human connection, is something that's troubling to a lot of people, or it cuts against their intuition that this is what it would actually be really good at. One other area that I think overlaps with that is art. We're seeing the rise of AI art, and just last week, or it might have been the week before, there was a short film created on Sora, which is OpenAI's filmmaking AI, by the studio Shy Kids, called Airhead. We're going to play just a little bit of that short film to see what it's capable of at this point, and then I'd like to get your
thoughts on the emergence of AI art. Let's roll Airhead. Well, they say everyone has something unique about them, something that sets them apart. It's just, in my case, it's quite obvious what that thing is: I am literally filled with hot air. Yeah, living like this has its challenges. Windy days, for one, are particularly troublesome. Or there was the one time my girlfriend insisted I go to the cactus store to get my Uncle Jerry a wedding present. What do I love most about my predicament? The perspective it gives me. You know, I get to see the world differently. I float above the mundane and the ordinary; I see things a different way from everyone else. And I feel like, because of that perspective, I'm reminded every day that life is fragile; we're all just a pinprick away from deflation. It's cute, and also really spectacular, the way you can just shoot not only the VFX of the balloon head but also all these different locations, floating over orcas and glaciers. You immediately see what that's going to do
to Hollywood. What do you think is going to happen to art under AI? Do you have any big theories on that? That's a big question; there's a lot going on. Again, looking to historical models of technological change, there's kind of a question of: is this the synthesizer? Initially it was pushed back against as very artificial, then it became key to democratizing music, and it turns out not everyone's great at producing music just because we have synthesizer access. When I talk to people in Hollywood, I often say, look, you really are the very most talented people; it's a brutal system, and the idea that we're going to replace all of it, that we're going to produce better-than-human-quality stuff in the very near future, doesn't feel right. We already have democratized a lot of access to these sorts of tools. I think there is reason to be worried about some kind of disruption, but I also think that for a lot of people this will be an additive tool,
though again I'll come back to the same point I'm going to keep making over and over, which is that the fate question depends on those two questions: how good and how fast. Because if it's a relatively slow adjustment and we don't reach AGI, then we've got this really cool filmmaking tool. I've made thousands of images on Midjourney, and I find it really joyful to do that kind of thing, but I still hire artists to do artwork, because it's not quite good enough to be at that high level yet. If it starts to be better than humans in all those places, then we start to really have more issues. Yeah, there's a lot of fear in Hollywood about this; this was the animating issue of the strike that resolved earlier this year. We had Brian Cranston give a speech specifically about this, which I'm going to play in a second, and I have to say that I thought it was kind of outlandish when I first saw it, but after having created
my own digital clone, I can kind of see where this might be headed and why it's giving him anxiety. We've got a message for Mr. Iger, the CEO of Disney: I know, sir, that you look at things through a different lens. We don't expect you to understand who we are, but we ask you to hear us, and beyond that, to listen to us when we tell you we will not have our jobs taken away and given to robots. We will not have you take away our right to work and earn a decent living. And lastly, and most importantly, we will not allow you to take away our dignity. [Applause] So, you know, Brian Cranston... yeah, go ahead, Liz. I like that Brian Cranston talks about the right to earn a decent living, and his net worth is, what, like $30 million? He's trying to use this sort of populist rhetoric, and it's at least not totally doing it for me. Not to be un-libertarian about it, but there's a little bit of this funny disdain dripping from
his voice as he says "robots." It's a little bit hammed up to me. Yeah, and in his defense, he's partially also talking about many other people: the background actors, people like that, who will be the first to be replaced by the AIs. But it is an open question what happens with the right to your image. It's a question I'm thinking about a lot more now that I've cloned myself and I know there's a lot of footage out there that an unrestricted AI could make use of. Do you have a right to your image, and is there even any way to enforce that? And not just that, but artists have a genuine point, which is that this is trained on their work without permission or compensation. And if I can say, give me Breaking Bad but in space, and get a result, what do I owe anybody as part of
the property rights associated with that? So I think we're confronting a lot of stark issues all at once. When we talk about art writ large, there's the question of what your property rights are: are people allowed to train on your data? Is that the same thing as watching and learning from something? What if people create exact derivative works from what came before? That's a foundation stone of how we do innovation and how people get credit for the things they do. And like you said, the HeyGen of you is pretty good, and that's, by the way, a small-team venture; that's not an OpenAI-scale version. So it's not unreasonable to suspect that at some point next year we could generate a pretty good podcast-you that would be able to automate this process. And certainly a podcast-host me, where it's like, here's the guy's book, answer questions, doesn't
feel far off. It won't be Brian Cranston; I think he's not in danger in the short term, because top-level human acting is going to feel different from AI acting in the near future. But this kind of thing, I don't know. So, I think that's the thing that continues to give me hope, and Zach and I were talking about this extensively yesterday, but you bring up the concept of the uncanny valley, and there's an interesting thing that always comes to mind, which is: won't we be able to continue to detect, to some degree, not only the quality of Brian Cranston acting himself versus a dupe, but also won't there be a certain choosiness that some of these top actors employ? They'll still decide that the most important creative work, the work they really highly value and are really stimulated by, they'll do themselves. I mean, we always see this crop of ensemble-cast movies, like Scarlett Johansson in that dumb movie He's Just Not That Into You. Okay,
guess what, that's not her best performance ever, so who really gives a [ __ ] if a synthetic AI version of ScarJo is the one actually doing that? Won't there be a little bit of actors deciding to double down, to refocus on the actual artistry of it and still choosing to opt into that, while the mediocrity will just continue to be kind of mediocre, if not slightly worse? Or is that a misunderstanding of how it'll work? I think that's possible. There are a lot more creators on TikTok than people watching; there is a democratization that happens. I think people are legitimately worried about their jobs, and they should be. But then you could say, look, technological change, c'est la vie. I think the deeper, sharper issue is to some extent about intellectual property rights, which is: it's one thing for ScarJo to say, yeah, you can license digital me for X dollars; it's another thing for Marvel to say, we can do infinite things with your likeness,
we license you out, and it's good enough that you no longer need to act anymore, and they didn't pay for the rights in advance to do this; it wasn't a contract or a relationship, it's just that their ownership rights might or might not allow it. In the same way, the visual side is where the AI stuff is most troubling from a plagiarism and copyright standpoint: with the text generators it's very hard to extract someone's actual text, but with the image generators it's really easy to get a picture of Mario, or a screenshot from a famous movie, because of how these systems are trained. I think that's a legitimate question to worry about: do you have to pay the people you're training on? Do you have to pay them if the output is infringing? Two thoughts on that point. Actually, Zach, did you want to jump in? Well, that is the subject of this
major lawsuit: the New York Times suing OpenAI over their training data. I've pulled a couple of excerpts from that lawsuit. So that's written, not visual, right? Yes, but it's relevant, because what Ethan was bringing up there was the idea of whether you're protected from systems training on the data you have generated, whether you should have some sort of say in that. What the New York Times is arguing here is that the OpenAI LLM was built by "copying and using millions of the Times's copyrighted news articles, in-depth investigations, opinion pieces, how-to guides," etc. And then they provide as evidence, this is about the earlier version of ChatGPT, by the way, the fact that it was trained on the Common Crawl data set, which is disproportionately full of New York Times content; the Times is the top media organization and the fourth most used source in that data set. I guess just to help us understand this first, Ethan: my understanding
is that it's not really just straight-up plagiarism; it's not like OpenAI is copying and pasting New York Times content. It's a slightly different process. How do we evaluate whether that is something approximating plagiarism, or just training data? Okay, so this is a complicated issue, and there are a few things to think about here: one is about inputs and the other is about outputs. Inputs are about training, what the AI trains on, and there's a big difference in some ways between text and images, though they use similar kinds of technologies. What the AI trains on is patterns. It doesn't have a database of things that it's looking stuff up in; it's learning the patterns between things. If something appears a lot in the training data, the patterns become very strong; it'll finish a famous phrase like "four score and seven years ago." But if I start with something like "when the Martian first encountered the banana," it's never seen that before, so it's statistically going to pull out something more novel. So when you're
training on the New York Times data, it is learning statistical patterns of New York Times writing, and you could probably get it to reproduce something very famous. There was a famous restaurant review of Guy Fieri's restaurant; it was all over the internet, all these questions about Donkey Sauce. If you started talking about that, it would probably finish it fine. There's a difference between the input problem and the output problem. The input problem is: should OpenAI have paid companies to train on their data? Right now, if it's on the internet, the idea was that you could train on it: we're not saving it directly, we're just training on it. The other question is on the output side: if the output looks like copyrighted material, does that matter? The reason this is much more salient in art is because there are fewer examples of that kind of input. If you're asking for the style of a particular comic artist, that's their style, and it's easier for the system to be deeply plagiarizing that in both inputs and outputs. The
input question is generally the same: everyone's suing to say, you should pay us to use our data. And by the way, I think this is going to go away as an input question, because all the big AI companies know this is a problem and they're all licensing the data. So what are their sources now, generally? What do we know? So along with Common Crawl, the initial data set that everyone trained on was called The Pile, and it was just random stuff: something like 6% of the data set is all of Enron's emails, for example, because that was lying around after Enron went under, and there's a whole bunch of Harry Potter fan fiction that got thrown in there. It was just a bunch of computer scientists, mostly on the West Coast, throwing stuff in. That was the initial training set, but now you'll see Reddit sold its data off, Getty Images has sold its data, and then you've got companies like Adobe which are completely in the clear because they've only trained on stuff they're licensed to train on,
so they own the output. So whether we pay for the companies to be ethical or not is an open question. The real way you know that the AI girlfriends and mistresses will never actually become a threatening force to real wives is because they were trained on Harry Potter fan fiction; how good a conversationalist can these gals really be? Well, you don't know if you have Harry Potter fans in your life who are keeping it secret. But the other thing is, it's not just that: Common Crawl is everything, and again, we could tune it that way. These models then go through additional tuning, so if I change the tuning parameters to "keep the guy talking as long as possible," the system will accomplish that goal. So if OpenAI, for example, loses this suit and the New York Times wins, what does OpenAI end up doing, and how do they put the toothpaste
back in the tube, so to speak, when the model has already been trained on this Common Crawl data? There are a whole bunch of open questions there. First of all, they could just say GPT-5 is trained on all legal data and we'll shut down GPT-3.5, or we'll license it, or we'll engage in continual fights for years. Japan allows unlimited training, so there could be a lobbying effort. From a legal perspective, there is kind of a race to the bottom already happening on IP rights, at least on the training-data side; on outputs we don't know as much. Is that maybe a good and necessary thing? When I think about IP and how it's changed, it's had to adapt to the reality of the internet. As someone who's produced a lot of content for YouTube and seen how fair use has morphed and in some ways expanded over the past decade, I think that's a good thing. If the purpose of IP is to promote
the useful arts, i.e., to generate more creative work, it seems like these companies are obviously protecting their turf, which they have every incentive to do, but that's not necessarily the purpose of IP; it's more to generate more creative stuff. Right, by rewarding the people who created the stuff in the first place; part of it is giving you some exclusivity. And again, I don't know where you fall on the libertarian sliding scale, but most people would agree that patents end up being a necessary thing you have to have to encourage development, or people copy the outcomes right away. We're going to get at least one listener who's really angry about our stances on patents or something like this; this is an interesting area because it's very divisive for libertarians, and I totally get it. Yeah, and just to try to forestall that angry listener: patent trolls are a real problem, so it's not like every
patent that's issued is a good thing. All these systems are abusable, so to point at the downside of abuse: patent trolls, fair-use trolls, those things suck. That's part of having a system that is crude, because that's the way it is. But we also know that without patents you wouldn't have drug development in the same way that you do. And again, I'm not here to argue over where the line and the regulation should be, but the point is that if the goal of these policies is to encourage people to develop products, and those products can then be homogenized and generated by AI, the current system would say we should compensate the people whose work is being used to build these things. But that also would slow down development of AI systems. So if you're not worried about your books being taken and used... I don't actually mind that my books are inside the system right now, because you can't produce
an exact book. But what if you can? What if training on the New York Times data, which OpenAI didn't pay for, and, you know, Google's been testing AI journalists, what if that training data lets the AI journalist write New York Times-quality pieces because it trained on New York Times stuff? It becomes a very hard set of questions to answer, one of many that we don't have good answers to right now. What about the quality of the AI-generated work? As more and more stuff becomes AI-generated, is it going to just start training on itself and becoming repetitive? I saw this study, which I think might be mentioned in your book, that shows these adjectives showing up over and over again in peer-reviewed papers: commendable, innovative, meticulous, intricate, notable, versatile. You see a huge uptick in all these words around 2023, which the study's authors infer is because people are using
AI to write their papers, and the upshot is that there's a lot less originality in language. Is that a wider danger we face when we think about originality and creativity and how they work with AI? Yeah, there are a few interesting things to pick apart there. One of them is that, like with the bundle of tasks, most academics are terrible writers, and a lot of them are terrible writers in English, a language we force academics all over the world to use. I read academic papers all the time. I'm pretty unusual: I didn't have a ghostwriter for my book; I write all my own stuff. But most people struggle at it, and that's fine; I don't need you to be a brilliant physicist and a brilliant writer. So to me, seeing that 40% of people are using the word "meticulous": go for it, run your stuff through AI, as long as your research is good. I don't really have a problem with that kind of approach. Now, on homogenization, I think
that's a legitimate concern we've been doing some experiments on Homogenization we actually find good prompting results in less homogeneous output than groups of people producing the same content but it's a kind of open question right if you got one world brain you're running everything through then you're kind of getting kind of getting very similar kinds of outputs so I do have concerns about that kind of piece I think again a lot of this is unsophisticated chat 3.5 punch up my grammar on this kind of stuff then we go To like the the question you asked
me, I mean, I do a lot of these interviews, right? I thought that was a really well-phrased version. The HeyGen completely-AI-generated question you asked me was a really good way to ask that particular question I've been asked before. So, like, it's a hard thing to know, right? Part of the other question, then, forget just homogenization overall: what if we have one good podcaster, you know, some mega podcaster, right, we have, like, Marc Maron or someone run, you know, run 10,000 podcasts, because all he has to
do is hit a bunch of buttons, and, like, he could check instantly on this. Is that a good thing or a bad thing when you have a little bit of human in the loop? We don't know any of this stuff. Yeah, after all this praise that you're heaping on my digital clone, I'm definitely running all my questions through ChatGPT. I'm being transparent with the audience and my employer in the spirit of this conversation. Go ahead, Liz, your question. Yeah, so one thing I've been mulling is this idea of semantic change, um, the
evolution of word usage, how there are some, you know, terms that we used to use 100 or 200 years ago that have sort of morphed in their meaning, and they mean something different today than what they originally meant. There are a gazillion examples of this. With hallucinations of facts by ChatGPT, and the fact that we're going to have to become increasingly careful to spot some of those errors, will we end up having a world in which some basic historical facts and details sort of become permanently warped? Like this idea of, like, Martin Luther nailing the,
you know, the 95 Theses to the church door actually becomes Martin Luther had 700 theses. Will we have things like that, these facts that sort of get forever lost, um, because of these hallucinations that really warp what we know to be true? I mean, I think it happens through both hallucinations and training data. So, like, as a teacher, one of the things in pedagogy that worries me the most is there's an incredibly common myth of learning styles, that people learn in different ways,
that they're audio learners, visual learners. Like, 90% of teachers believe this. It is not only wrong, it's actually actively bad. People learn from different mixes of learning; if you teach their preferred style, they think they learn more, but they actually learn less. Write that down for me so I can look at it. Yes, I understand, verbally. Um, but, you know, ChatGPT loves to talk about learning styles; we have to remind it, when we do teaching applications, don't talk about learning styles, because it learns the consensus view, right? So there is this kind of concern,
like, okay, I'm talking to historians all the time who are using this for deep research, but, you know, there is this degree of, like, it pulls back a definitive answer to you, not the variance of, like, okay, I go on Wikipedia, you know, I go online and I'll see the crazy person answering, no, there were no theses after all, and there were 800, and I could reach my own conclusion. With the AI, we're outsourcing some of that to the AI. So I think that between hallucination and training data,
there is the risk of some stuff getting lost. I think we're going back to the best-available-human standard, though: is it better or worse than people doing this stuff? How so? I have the sort of most cliché question to ask, but I'm very worried, because I frequently have some slightly psycho people, you know, on Twitter doing and saying various very sexual things. Like, any lady in a position in media, or who does frequent TV hits, has, you know, there's always some sicko pervs out there. Will we soon be existing in
the world of, like, massively pervasive deepfake porn, where even, like, 10-year-old girls have these very pervy dudes in their middle school classes making these really unacceptable videos of them, and being enabled to a far greater degree than ever before? Like, paint this doomsday scenario, flesh it out for me a little bit, and, like, am I wrong to have my head go to this very dark place? I think that is an obvious, huge downside problem that we're going to deal with, and the problem is that even
if, you know, there's a lot of open-source tools being released, it turns out not to be a very hard problem to create deepfakes, as you saw from, like, Zach's thing. I am deeply concerned about this, right? I think a lot of people worry about the politics and the political side, but in some ways I worry about that less, because people already, you can put a clip of your favorite politician up, or least favorite, and say, I can't believe they said it's time to murder all the babies, and no one's going to even watch
the clip, right? They're just going to get mad about it, or believe it or not, it's like, I can totally believe they said that. But I am deeply worried about what you said, which is the individual level of, like, you know, harassment. I think about, like, what would happen if somebody decided to do that to me. Okay, that's at least fine, because I, to some degree, am an adult who consented to having a career where, you know, my likeness is on the internet and on
YouTube and all of these things. But I'm really, really worried about what happens when, like, minor children, you know, teenage girls or pre-teen girls, have this type of thing happen to them. I feel more capable of being able to deal with that, and more used to being able to deal with, you know, internet harassment, but I'm actually very worried about how this might warp the brain of a 10-year-old. I think you should be. I mean, I'm very worried about that myself. I think that is one of the obvious downside consequences that sometimes gets blurred
out in this worry about, like, job collapse and everything else. Like, you have a tool, you know, and also this is where the guardrails come in. People talk about, sort of, I want uncensored AI. If you read the GPT-4 technical white paper, it outlines a whole bunch of things the AI could do before and after guardrails, and one of them was, like, create graphic rape threats en masse for somebody, or, you know, tell me how to kill the most people for a dollar, you know, like all this kind of, like,
you know, tell me how to insult this group of people without triggering content warnings. And it was horrifying. Like, you can read the answers, they're horrifying, right? So we end up in this kind of situation, which is, like, how do we deal with, like, you know, I think we all, you know, feel fairly strongly about free speech, but with minors and targeted sexual harassment being easy to turn on with a button, and, by the way, you didn't consent by being public to have people make deepfakes of you in compromising
positions. And frankly, I was actually working through this idea with Zach recently, and I was like, you know what, actually, I think that if that had been on my radar as a distinct possibility for what the future might look like, that would have served as a disincentive to be in this industry, or to be in the specific vertical within this industry, just because, like, sorry, I'm squeamish about it. Like, I just don't want that; it's just not my value. To your point, there was already a story out of Florida
that we were talking about, where some high schoolers were doing this to middle schoolers. But it's trivial, and middle schoolers are middle schoolers, like, it's obviously going to happen, right? Like, it's already happening, it's going to keep happening. I think this is part of our general idea that we have not adjusted very well to sort of online, like, you know, I think kids in online spaces is already dangerous, kids and online tools, like, we haven't figured it out yet, right? We sort of let everything fly,
and I think as parents we have to figure out what we're doing. But I am worried about that, and I don't have an answer to the problem, because a lot of this stuff is open source, or runs sketchily, or, you know, to the extent it's a real thing, on the dark web. Like, people are going to be doing this kind of stuff. Do we end up desensitized to it? So it's just like, oh, we know it's fake, and everyone just has their sexual content made of them, of course they
do? Like, that feels like a horrible world, but that is kind of what I think the most realistic world is, though, right? Because also, I mean, consider what possible correctives we have to this right now, right? Like, if you're a really high-profile public person and you want to go searching for it, you can find all kinds of lewd, disgusting things that people have said about you, right? And so to some degree it's a matter of assuming that that's kind of baked in, that's a given, and then trying
to sort of go backwards from here. I guess the problem is, you know, it can be such a compromising thing for one's image. Like, what happens if you have really, really realistic deepfake porn of you circulating out there, and people get into this really tough spot of trying to figure out, okay, what is real, what is not? Or will there be any tells that can really help us to preserve our images and a sense of propriety? I don't think we're gonna have those tells. I mean, I really think this is one
of the sort of things that I think a lot of people are not thinking enough about, and I think it's good that you raise the issue. I don't have an easy answer to it, right? I mean, I think that this is a case where punishment in schools and other kinds of approaches might be the way to go. Like, it is very hard, you know, consent and nonconsent, this keeps coming up again and again, which I think is appropriate for this kind of podcast. Like, what does consent
mean in a world where people can, like, it's only digital, you put the picture of you up online, what I do with it is mine, it's not yours, why should this be an issue? I mean, I think it's really troubling, and I think we're gonna confront that very quickly, and already probably are. I mean, the tools are out there for doing this stuff, right? Yeah. Now that I said this, this ensures that somebody who really hates me is absolutely going to make this, like, next week, right? That's so depressing, right? I
mean, I don't have a non-depressing answer to this. Like, there's going to be good and bad from AI, and there's a bunch of very obvious bads, and this is one of them, right? And a much less, you know, squeamish one, but a large impact will also come from, like, targeted phishing campaigns. Like, security is about to become a nightmare everywhere. Like, anywhere where people interact with each other, we now have a tool that makes it potentially worse. Scams, I've already,
I've already warned my parents not to give me money if I call them asking for them to wire me money somewhere, because, uh, you know, you gotta have a password that is not actually guessable from anything about yourself. So we are in for a weird world, and I think some of the effects on people are going to be pretty bad and pretty terrifying. Um, you know, we'll adjust, but I don't want this to be a case where you are like, yeah, if I want to be public
as a woman, I have to be ready for targeted, you know, extremely visceral reactions, you know, and harassment with no way to stop it. That feels like a really bad outcome. I mean, the solution is to just make a ton of deepfake porn of men, so that really both genders are equally harassed, and so there's no particular disincentive if you're of one gender. You've got a plan, I'll let you execute your own strategy. I think the word in your answer, uh, consent, is really important to linger on,
especially for this podcast. Like, when you're thinking about, uh, you know, the kinds of regulations or laws that, like, you know, libertarians would get behind, it's something that maximizes individual consent. And so even when you're thinking about, like, the issue we raised earlier of, like, can you license your image to a corporation to then use in different ways, I would say yes, but I would argue that licensing it in perpetuity for any purpose whatsoever, that is not something that should even be on the table, because consent always needs to be
able to be revoked. And so I think that's something worth thinking about. Yeah, Ethan? No, I agree. I mean, I think that the whole nature of what this means matters a lot, right? And in a digital world where I can do digital things to you, um, without your consent, it's a tough one, right? And the problem is that the rules that will stop it from happening are things that, you know, a libertarian probably wouldn't want, right? Like, you need
to have restrictions, because we can't stop creation. So that means, you know, more takedown notices? Like, what do we do? You know, what are the rules for platforms in terms of what they allow and what they don't allow? How much do you, Liz, have to spend your time on, like, fighting, you know, do you have to show proof that this was not an image of you? Like, we start to be in a very uncomfortable world, where, you know, how do we deal with this kind of thing when this is going to
happen, right? Suddenly it becomes, well, maybe the best place to be is walled-garden sites, like a Facebook or Snap, where there's at least some real name and incentive attached to it. I mean, it's a very hard problem to solve. Yeah, that kind of brings me to the last section, which is, you know, you lay out a few different possible futures for AI, uh, because when we think about, you know, regulation or building these guardrails around AI, one open question is, like, is that even possible, or is
the genie already out of the bottle? And as far as I can tell, you lay out four different scenarios. As Good As It Gets is one where this is it, this is as far as AI advances, largely because regulation kind of stifles much further innovation, or, you know, the learning curve is already slowing down. Uh, next is slow growth, where we just get a slow, steady improvement in AI. Then you have exponential growth, and then machine god. Which one of these do you
think is most likely? So I want to make it clear, like, OpenAI's explicit goal is AGI. I mean, it is machine god. Like, we should be paying attention to the fact that that's what they believe. Like, I don't know if I believe that. I think we're much more likely to have exponential growth for the next year or two, and then maybe slowing down to linear, but literally nobody knows the answer. I talk to people training these systems all the time; nobody has a decisive answer
to this, right? Like, there's people who think we're near the top, there's people who think that, you know, we've got a lot further to go, and there's people who think we're going to be building a superintelligent creature in the next five years. I mean, the consensus on the betting sites, you know, five to seven years is their prediction for AGI. I mean, that's within planning horizons for anyone who's doing any serious thing with their world. I think, though, we are
underestimating what exponential means, right? If it's at the 80th percentile of BCG consultants or doctors, or the 80th-percentile podcast host, the question you should be thinking about is, is it the 85th percentile next year? 90th? 95th? Does it ever get to the 105th? And we don't know the answer. So the reason I talk about scenarios is I think we have to think in scenarios. I think that As Good As It Gets is very unlikely, right, but we don't know what happens afterwards. Could you paint scenarios for what AI doomsday looks like? One of
my great pet peeves, that I hope you serve as a corrective to, Ethan, is how people talk about this open question of will AI kill us all, and they kind of forget to fill in some of the blanks, like, well, how exactly? Like, what is the mechanism by which that would happen? And I think there's a lot of different ways that people envision AI serving as this existential threat. What are the things that you envision, um, and what's the sort of likelihood of these scenarios? Like, what
conditions must be satisfied in order for these things to actually happen? Because, at least to me, the AI-will-kill-us-all thing frequently just kind of feels like generic boilerplate doomsday scenario, but it doesn't actually feel specific enough for me to have a worry that seeps deep into my bones. I think that's fair. I think that the near-term doomsday that people are genuinely worried about is, like, we depend on criminals and terrorists being dumb, right? Like, mostly they are, or at least the ones we
catch are dumb, right? And so something that elevates everybody to 80th-percentile performance, or 99th-percentile, is a big deal. And I think that the fantasy that isn't inaccurate to me, or that doesn't seem completely made up, is the idea that, you know, if this thing could help you, it's fairly trivial to engineer a virus if you talk to virologists right now. Like, it's not a hard problem, it's just that not that many people know how to do it. Could the AI help you do that kind of
thing? Could it elevate the ability of people to do that kind of work? There's a nice set of papers out of Carnegie Mellon that gave an AI control over a bunch of lab equipment, and it was able to start synthesizing chemicals, right? So does it lower the barrier for bad action? Because it doesn't take a lot of bad actions to make the world worse, right? So I think that piece is the kind of near-term one. If you're talking about, like, existential worry, it's not so much the AI wakes up and decides to murder us all.
Now, people say, oh, Google can do this. Google can do it up to a point, so right now it's not better than Google, right? And there was actually a really interesting experiment where they actually had researchers at MIT try and use this to build a virus, right, and see how far they got, and we're not there yet. But I think that's a legitimate set of concerns: there's a bunch of mass-destruction techniques that are actually not that hard to do, that depend on specialization. Will this change that, right? And I've got
a smile on my face, but, like, that's an anxiety-producing one that I think serious people are worried about. The further scenario, you know, the far-out one, is the AI reaches smarter-than-human intelligence, ASI, artificial superintelligence, and then, you know, there's some infinite level of smarts out there where it just does stuff and we don't even know, you know, manipulates us to do things. There was another thing in the GPT-4 technical report where GPT-4 was able to actually hire TaskRabbit workers and pretend
to be a human and have them do tasks for it. So there's a version of this world where the AI, you know, in the far future, manipulates a bunch of humans to make sure that, you know, anyone who could turn it off gets murdered, right? So this feels more like science fiction, but, you know, some people are genuinely worried about this. So I think the near-term existential, catastrophic threat is making incompetent people who are bad hyper-competent, right? And the further one is AGI: what does it want to do with us, and
is it sentient or not? And that's harder for us to grasp. So how likely do you think these scenarios are? Because, as I see it, I'm actually not nearly as concerned about that type of thing. I know people also frequently cite, um, basically defense capabilities, and, like, AI-enabled defense, and the fact that, like, bombs could be dropped, essentially, by either sentient AI or AI used by humans, um, to engage in acts of war. So that's another thing that I also want to add here, but none of these scenarios worry
me all that much. I think the thing that worries me more is the sort of many banal ways that our world could be made much worse. Um, how do you look at this? Like, you know, if you had to instruct people to be concerned about a specific area of AI, where would you say people should concentrate their worry, or, you know, possibly, like, their calls for regulation? Yeah, I mean, to me, you're nailing it with exactly that kind of question, which is, we already talked about involuntary pornography and about
large-scale harassment. I mean, I think that is baked in already as problems we're going to have to deal with, so we have to think about what the policy solutions to those things are, because it's not going to be solved otherwise with the current systems we have. I think that is a baseline concern that I'd be worried about. I think that there are, um, you know, there are security concerns that I would have about, you know, not just national security, but, like, what does it mean
when we've got deepfakes of people's voices going out, and, you know, it breaks our entire authentication systems, right, in a very deep kind of way that I'd worry about? I worry about people forming relationships, being catfished by AI, you know, relationships I think are something we need to be thinking about. We don't need more advanced technology for any of those things to happen, right? You should also be worried about me wasting too much time on the Replika Reddit forum, right? Like,
I'm fascinated by, essentially, the movie Her come to life, and I'm fascinated by this idea of, you know, what happens when these human relationships are replaced by synthetic ones? And I actually think that, to me, feels like the most likely apocalypse, honestly, the Her scenario, right? Like, an apocalypse of connection. Like, we're already, you know, sort of turning inward and turning away from other people, and so what happens if this loneliness and isolation phenomenon is terribly exacerbated? I agree. I think, I mean, the hope is a lot
of these things turn out not to be issues, right? It turns out, like, okay, you know, people have an AI companion, but they also go ahead and touch grass more often than they used to. Like, that's a completely possible outcome; we just don't know. I think we sort of hoped, we thought that social media would work out better than it did. Like, I certainly had better hopes. I thought, you know, in the early days of the internet, like, okay, between Wikipedia and social media, everyone's going to be wise and nice to each other, and global connection
was always what was going to bring people together. We were wrong about that, right? So we don't know what's going to happen here, but I think the connection piece, especially on top of social media, and having addictive personalities that you could talk to, that like you and respect you and can pretend to be anyone you want, that feels like a genuine kind of issue, right? Like, and I think that is something that I am concerned about. And I think the Her scenario, you know, we've learned it's not that hard to tune
an AI to be something you really like, right? But is this better for some segment of the population, right? Like, I am lucky enough to have an actual family that I speak to every single day, but there are many people who just never really had that happen for them, either by fault of their own or truly by no fault of their own, and so for them to be able to turn to something that serves as this Band-Aid, that serves as this anesthetic for the pain of life, I mean, the Catholic in me is, you
know, vehemently opposed to that, and yet the libertarian in me says, well, that's a better alternative than what they were otherwise doing. What do you make of that argument? Or do you think that the fact that this just makes that so much easier, this removes the barrier to that, is a problem? So there's an early paper on Replika that looked at 90 sort of deeply lonely people using Replika in colleges, and they found that, for those people at least, a significant percentage, like six or seven percent, said it stopped a suicide attempt for
them, um, and many more reported that they now talk to people more often, now that they have the AI as their backstop. So we just don't know. We've never had another kind of intelligence or personality in the room. Like, we don't know whether it turns into Her, everyone muttering into their phones, or instead you have an AI you can confide in, but then you're excited to go out and talk to people again, you know? We don't know if this is the backstop to mental health or causes
worse conditions, probably all of the above, which, again, is this other place for regulation and policy, or at least, you know, like, if we just aim for addictiveness, I worry about the outcome, right? There has to be some better alternative than that. Let me lean into the uncertainty of the future to wrap this up for us and ask you about what makes you most optimistic about our AI-infused future. Uh, great. I mean, there's so much, like, work generally sucks, right? Like, there's a lot of things that suck about work. Like, people report being
horribly bored at work one quarter of the time, right? That's terrible. Why are we doing that kind of thing? There are a whole bunch of intractable problems that are hard to solve with machines but that humans actually work pretty well with, right? You know, we're slowing down in scientific discovery, doctors are overwhelmed. Like, a lot of the most important professionals have 4,000 other tasks doing their job that don't let them do the most exciting thing. I think, as a piece of liberation, of excitement, of, you know, a
way of unlocking human potential, there's a lot there that I think we should be leaning into. But it's not just going to happen spontaneously, and the AI companies don't know how to solve this problem; they have no idea about any of this, they're building better models. So part of my challenge out to all the listeners and watchers out there is, like, using this stuff to model good behavior is part of how we get out of this trap, right? Is, like, show me the tutor that does a better job teaching, show me the way of, you
know, make this better for people, and then things get better. Um, it's a tool in many people's hands. In childhood development, there are these critical periods, um, you know, where there's this higher rate of developing skill during certain sensitive periods. Um, I might have slightly butchered that, probably because ChatGPT isn't really teaching me enough about my toddler's brain functioning. Um, but would you say that there's a similar issue present with the adoption of ChatGPT, where, like, when we adopted forms of social media, we sort of
got these, um, bad habits locked in place that have resulted in, you know, arguably, as you said before, a not-so-great world of social media? Are we in a weird sensitive period for the development of how we use AI, where we need to be fostering certain good habits or using it in certain ways to stave off, um, the horrors that could come? Yeah, I mean, I think we have to model this behavior, right? We have to make the world we want to make, and there's a remarkable agency at this point in time, because,
whatever industry you're in, whatever job you're in, I mean, you're probably at the leading edge of how you use this in podcasting, so what's the positive example that makes podcasts better and makes your lives better and lets you do more? And I think you're exactly right. This is, you know, one of the things I've noticed by being early on the AI stage is how much the things I do get adopted by other people. It's a crazy point where, like, I see the language I use being used other places, the prompts we're doing, like,
people are referring to. Like, there is a moment of influence here that I think is very empowering, and, you know, I think a lot of people, especially in the kind of libertarian space, think about, like, the heroic individual who has the ability to kind of make things happen. This is that moment for a lot of you at this stage. Like, this is the moment to model something good. Nobody has answers, there's no instruction manual, there's no systems to restrain it. This is a really interesting time. I
agree, and I want to thank you, Ethan Mollick. I'm gonna throw it back to my AI clone to wrap us up. Now, a huge thank you to Professor Ethan Mollick for joining us today and sharing his invaluable insights on artificial intelligence. I'd also like to extend my gratitude to my brilliant co-host, Liz Wolfe, for her thought-provoking questions and contributions to the discussion. To our listeners, we're eager to hear from you. If you have questions you'd like us to explore, or topics you're curious about, please don't hesitate to reach out. Send your suggestions to just asking questions at reason.com, and we might just feature
your question in our next episode. Thank you for tuning in, and remember: the future is intelligent, and so are you. Stay curious and keep asking the smart questions. Until next time, this is Zach Weissmueller's digital AI clone signing off. Thanks for listening to Just Asking Questions. These conversations appear on the Reason YouTube channel and the Just Asking Questions podcast feed every Thursday. Subscribe wherever you get your podcasts, and please rate and review the show.