The technology is a tool, so it's almost the same thing as asking how good a hammer is: hammers are great at certain things and they're terrible at other things. To me that is a very basic level of creativity. I'm not saying AI is creative like Picasso or Mozart, but I did not tell it to put that card there. Yeah, but the programmer told it to do something spontaneous. It's designed to make you trust it; it's not designed to be right. It's also not conscious. My point is, GenAI isn't learning truth from humans, it's just learning what humans want to hear. Elon Musk said that ChatGPT, or GenAI in general, any LLM, is not designed to tell you the truth; it's designed to tell you something that pleases you. You can tell it 1 + 1 is 3 and it'll say, oh, I'm sorry, of course 1 + 1 is 3. Then you realize that this is just a design, and that's the trick that the hype machine has created for us. GenAI is even broader, and it's interesting because it's one of the most heightened
hype machines. Author of the bestseller Weapons of Math Destruction: Cathy, what's your take on GenAI? Has it changed a lot from what we had a couple of years ago, and where is this taking us?

I mean, it's different in the sense that it's a general-purpose AI, right? Most of the machine learning and AI that we're used to, with exceptions, are scoring systems: systems that score people for risk, typically, the kind that decide things like the APR on your credit card, or the amount you pay for insurance, or whether you should be accepted to college. The exceptions are important, though. The exceptions are things like Google search, or recommendation engines: whether you get recommended a movie on Netflix, or what you should be recommended on Instagram or TikTok. Those are broader, but they're not as broad as GenAI. GenAI is even broader, and it's interesting for me because it's not itself a hype machine, but it is accompanied by the most intense marketing hype machine I've seen since big data was reinvented, more than ten years ago.

We haven't seen as much marketing about any product since some of the big ones, like, as you say, big data, or the cloud, or the internet itself in the dot-com era. But at the end of the day, I remember hearing Max Tegmark from MIT (I don't know if you were there at the same time) saying that we don't have to confuse
the hype around the product, and companies being overvalued, with the core of the technology. Because the internet changed our lives. The fact that something is hyped doesn't mean it's not useful, or that it's not going to be socially transformative.

Yeah. I liken GenAI, as a sort of revolutionary tool, to Google search. Google search changed our lives, because until then all we had was AltaVista, and we couldn't really use it; it wasn't that useful. But once Google search was introduced, we could actually find the information on the internet, and it opened all sorts of possibilities: all the bulletin boards became accessible, Wikipedia sprang up, and we really had information at our fingertips in a new way. And I think when we think about adjusting to GenAI, we should think about how we adjusted to Google search: it was revolutionary, but it didn't change who we were that much.

But in the end it is a cultural change, because we'll do things differently than we do them now because of these kinds
of technologies.

Well, I'd like to take a pause there. I feel like we could decide how much of a revolution it is, especially if we're aware of the hype around GenAI. One of the things I like to emphasize when I talk to people about GenAI is the extent to which it has been designed to make us trust it, when in fact, first of all, it's not trustworthy. It's also not conscious. It's also, in some sense, just repackaging what we already get from the internet and Google search and Reddit; repackaging it in a way that is meant to trigger our trust. But if we think about it just that way, as a conversational version of Reddit, then it's not that big a deal.

Yeah. The thing is, what it says really sounds right. It speaks to you with lots of confidence, and that makes you trust it, or believe what it says. But of
course, hallucinations are a real problem, and they're still there. It's not that much anymore, or it feels like it's not that much, but I guess that's because most of us use it at work, with things we know, so when it hallucinates we feel safe that we can catch it. But I think one of the biggest differences from Google search and other technologies is that GenAI is in the hands of everybody, not just in the hands of people who really know what to do with it. So is it a good thing or a bad thing that it's so democratic?

Okay, well, I'm going to back you up for a second, because I'm going to push back on even the notion of hallucination. I would prefer to call it being wrong: being incorrect, being false, lying, being unreliable. Because "hallucination", if you think about it, is endowing it with consciousness, right? And that's the trick that the hype machine has created for us.
You know, I read this paper about GenAI, and I think it gave away more than it was trying to, because it was pro-GenAI in a big way. It was about the ancient Greek notion of rhetoric, and how, in order to make a persuasive argument, which is what the study of rhetoric was all about, an argument has certain characteristics: you have to display expertise, and, this is the one that killed me, you have to display the ability to admit you're wrong. There are all sorts of things that, when you do them in a rhetorical argument, gain you the trust of the people around you. And I'm sure you know that if you tell GenAI, like ChatGPT, no, you're wrong about this, it'll apologize.

Totally.

But do you know that if you tell ChatGPT it's wrong about something where it's actually right, it will also apologize?

Of course, it always will. I guess the system prompt always predefines that we are right over whatever it believes.

Exactly, it's the customer-is-always-right type of programming. But my point is that if you step back for a moment, you realize it will just apologize for anything. You can tell it 1 + 1 is 3 and it'll say, oh, I'm sorry, of course 1 + 1 is 3. Then you realize that this is just a design. It's designed to make you trust it; it's not designed to be right. It's not correcting
itself. And to answer your question: I don't think it's that democratic. It's not more democratic than Google search was. By the way, Google search was democratic in the sense that it eventually learned from clicks what is, quote-unquote, more valuable as a website; in some sense it was gathering information from its own usage, just as I'm sure GenAI is doing. But GenAI doesn't really learn. I guess my point is, GenAI isn't learning truth from humans, it's just learning what humans want to hear.

Totally. I think it was Elon Musk who said that ChatGPT, or GenAI in general, any LLM, is not designed to tell you the truth; it's designed to tell you something that pleases you. And I think that comes from reinforcement learning from human feedback, you know, this technique where LLMs are trained in a final phase on whether you give a thumbs up or a thumbs down. So basically it's like the cookie for a dog: if you sit when I say sit, I give you a cookie, and if not, I don't; so you learn to sit. But you're not really aiming for truth or anything; you're just trying to give an answer that pleases me. And obviously, I don't think that makes it democratic or undemocratic by itself. My view of democratic is that almost everyone nowadays can have access to an LLM that has a level of, I don't know if intelligence, we can talk about definitions, but at least a level of knowledge that I don't have. I can ask it about the Roman Empire, I can ask it about anything, and I can get something that is
much closer to a proper answer than I could get myself, since I don't have the knowledge. And I think that makes it really democratic: almost everybody, at least in the first world, can afford the 20 bucks a month for ChatGPT and get access to this technology.

How is that different from Google search, though?

Well, it's conversational, and I think that makes a difference in the interface. I agree that it feels different.

But in terms of actual access to facts, do you really think it's different?

Not in facts.
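The thumbs-up/thumbs-down training phase mentioned a moment ago can be sketched in a few lines. This is a deliberately toy illustration, with invented data and names, of the incentive being described: a reward signal built purely from user approval, with nothing in the loop that ever checks an answer against the truth.

```python
# Toy sketch of the thumbs-up / thumbs-down dynamic described above.
# All data and names here are invented for illustration; this is not how
# any real model is implemented, just the shape of the incentive.
from collections import defaultdict

# Simulated user feedback: (answer_style, got_thumbs_up) pairs.
feedback = [
    ("agreeable", True), ("agreeable", True), ("agreeable", True),
    ("corrective", True), ("corrective", False), ("corrective", False),
]

# "Reward model": the average thumbs-up rate for each answer style.
tally = defaultdict(lambda: [0, 0])  # style -> [thumbs_up, total]
for style, up in feedback:
    tally[style][0] += int(up)
    tally[style][1] += 1
reward = {style: ups / total for style, (ups, total) in tally.items()}

def respond(candidates):
    """The 'policy': pick whichever candidate the reward model scores highest."""
    return max(candidates, key=lambda c: reward[c["style"]])["text"]

# The user insists that 1 + 1 = 3. Nothing below ever checks arithmetic.
answer = respond([
    {"style": "corrective", "text": "Actually, 1 + 1 = 2."},
    {"style": "agreeable", "text": "Oh, I'm sorry, of course 1 + 1 = 3."},
])
print(answer)  # the pleasing (and wrong) answer wins
```

In real RLHF a learned reward model scores candidate responses and the model is fine-tuned against it, but the incentive is the same: the highest-scoring answer is the one raters liked, not the one that is correct.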
When you ask it things, I think it's not much different from Google; it's faster for me. It's easier to ask than to browse through the ten blue links and find which one has the information. But the point is that you can do things, not only answer questions. When I make a custom GPT, which is what I do when I go to a company to integrate AI into their workforce, I think custom GPTs are the best thing OpenAI has made for companies. They're really powerful, with a very, very low learning curve: with very simple, natural-language prompting you can get something that makes you, I don't know, a company report, and every month you can run it over and over again, the way you trained it with the documentation you gave it. So I think this is really powerful for people optimizing their productivity, and the simplicity of talking to it is incredible. Of course, you could build technologies before that did similar things; there was automation before GenAI. But I think the key is that it became something super simple that helps me do my job faster.

Okay, but if you don't mind, I'm going to be really careful about differentiating between a couple of things. One is access to facts. Another is, you know, the conversational access, the ease of interaction. And the third thing, which you just mentioned, is how helpful it is for doing my job. Those things are really three different things.
But can we have one without the other? Or can we?

I think we can separate those things; we can isolate them. I think access to Wikipedia by itself addresses the facts part. And, thinking like a data scientist: how is this trained? If we think of it as trained on the information on the internet, which is an approximation of the truth, we can think of the conversation we have with ChatGPT, or other kinds of LLMs, as basically talking to somebody who's posting on Reddit, right? That's where it's getting its conversational tone and such, which is actually a tiny little sliver of human conversation. So from my perspective it's extremely biased toward the loudmouths of the internet who have way too much time on their hands to argue.

If I'm not wrong, nowadays the scientists and mathematicians training these models are aiming more for quality data than for too
much data.

No, I think they're not. I wish that were true. I've talked to people who are training the next generation of LLMs, and they're like, oh, we don't have enough data, so we're going to synthesize data.

Yeah, exactly, and that's not quality data. But the data you synthesize is synthesized in a direction you want, I guess; basically it's avoiding the trolls from Reddit and all that.

It might be avoiding some of it, but I don't think it's avoiding Reddit. There's just not enough data out there for them to actually ignore data. And think how hard it is to go through all the data and clean out the data you don't want. I'm sure there's weighting, weighting toward Wikipedia, but Wikipedia is tiny compared to the amount of data they're sucking up. And if they're going to the lengths of synthesizing fake data, then there is not enough data.

Yeah, exactly. But is it a bad thing that they're synthesizing data? I thought it was a good thing, because we can give it a sample of what good data is. Of course, it can be biased by whoever makes it, but if there's a sample of what you consider good data and you ask ChatGPT to make more of it, then it may not be as good as all the books, as if we had Shakespeare working full-time on this, but at the end of the day it can do better than the average thing you find on the internet.

Listen, I think, again, I'm going to
separate things, right? I think we can distinguish between something that is great and something that is useful. And the notion of "is it good", "is it good at something", is a very narrow question: you'd have to tell me exactly how you're using it. I'm an auditor of algorithms, and as such, I would never audit ChatGPT as a whole; that would just not be in my vocabulary. When I get hired to audit an algorithm, I ask: how are you using it, exactly? Are you hiring people based on this?

Yeah, because I guess the use of it is what can make it good or bad.

Exactly. The technology by itself, the technology is a tool. So it's almost the same thing as asking how good a hammer is. Hammers are great at certain things and they're terrible at other things, so you just can't answer that question. Don't be fooled into answering a question that vague. You have to say exactly how you're using the hammer, who's wielding the hammer, are they good at their job, are they drunk? The context is everything.

And what is your feeling about the context of society? Given what you know about society, and the years you've been on Earth with us, what do you think is going to happen with GenAI? Is it going to be good, or is it going to end up as badly as social media?

Well, you're stating there that social media is bad; that's a lot. You added a lot. I would
say that it's going to be good for certain people and bad for other people, right? One thing I'll tell you, and I always do this, I apologize, but that's my job: my job as an auditor is to think about who the stakeholders are, who's winning, who's losing.

Exactly.

Right? For every winner there are going to be a couple of losers, and it's very important to think two-dimensionally like that, on the two-dimensional matrix of stakeholders and their concerns or benefits. But I would say: is it good at things? Yes, it's unbelievably good at things. Here are a couple of things GenAI is good at. One of them: it's probably going to replace call centers, totally; especially narrowly defined call centers, like trouble with your Verizon smartphone or whatever. And we already know this, because when we try to get help, when we call, we will never talk to another human again. It's just not happening; we just get to talk to a chatbot. So it's good at that. It's already replaced thousands, if not hundreds of thousands, of jobs that had been offshored to the Philippines and other places. And so the winner is the company, and the losers are all those people who lost their jobs.

But at the same time, there's a very clear example of that, which is Klarna, a fintech from Europe. They partnered with OpenAI and developed a chatbot, and they became famous because at the beginning of the year they published the results of the chatbot's first month, which were basically that the chatbot got the same scores as
human customer service did. But if you look into the data, resolution time went from 11 minutes to 2 minutes. So I always give this example: when I buy shoes and I don't like them and I want to return them, that chatbot gave me 9 minutes of my life back. If you multiply those 9 minutes by the number of things I return in a year, and then by the number of people in Spain, what the chatbot has given to society, even if it has taken, in the case of Klarna, 700 jobs, is really positive in the end, because it's a lot of minutes that you're recovering. But of course, those poor 700 souls who lost their jobs are not in the best place. So I guess we're in that scenario you mean: some winners, some losers. So at the end of the day, do you think, globally, it's going to be that the rich win and the rest of society loses, or
is it going to be more balanced? Are you able to say, this is going to be okay for us?

Let me point out one other thing that might be happening with that 9-minute window: they might just be hanging up on people, thinking, this one's complicated and I want to keep my numbers down.

Because the AI is looking at the results?

No, they're not human, so they're going to optimize for whatever works for them. And if their optimization goal is to minimize the amount of time on calls, they're going to hang up on you. You just don't know, you know what I'm saying? It depends on what they rate them on.

Yeah. What you mean is that these systems look for the best evaluation on whatever metrics they're set up with; they'll try to reach that, and they'll take whatever shortcuts get them there. It's the cookie for the dog again. So if it's shorter call time, it could end up being useless.

I say this having experienced this exact thing last week, when my Wi-Fi wasn't working: I got hung up on by the chatbot three times.

So you already have customer-service AI in the US? Here I haven't; I mean, I've seen this advert from Bland AI that says "the AI you've already talked to", but I still believe that in Spain we don't yet have customer service with AI.

Well, get ready, buckle up, baby. And I'll tell you what else: you might not even know.

Yeah, exactly, that's the
point. Because, have we passed the Turing test?

Oh, who cares. I just [ __ ] the Turing test, okay? The Turing test we talk about is not even what Turing came up with; he came up with a much more complicated thing. But if you're asking me whether AI is conscious, the answer is no.

No, not conscious, but can we tell the difference nowadays?

I think it really depends on how deep the conversation goes. You know, the conversation you and I are having right now is more profound than any conversation I've ever had with a chatbot.

Yeah, I guess. But even in a short conversation with a chatbot, would you say you can tell they're not human? Even with these new ChatGPT advanced voice modes and such, the latest technology that sounds human?

Actually, I don't want to get bogged down in conversations like that, because the truth is, for me, it's either never going to happen or it's already happened. Let's not count the years until we have AGI, because the truth is, as soon as we had, I'm going to go back to Google search, we had kind of superhuman abilities that we just got used to; and when we have really good LLMs, or GenAI happening, we're going to get used to that too. And I do think the important question, which you asked but we haven't gone back to, is: what about those people who lose their jobs? How fast will people
lose their jobs? People lost their jobs when tractors were invented, but then they went to the city and got other jobs in manufacturing. So the question is: what is the next category of jobs, and are there going to be enough of them, quickly enough, or are we going to have a revolution on our hands? I think that's really the question.

And do you have an answer for that, or an opinion at least?

I don't know yet. I don't know, because it's looking fast. I mean, I come from photography, I'm a photographer, and around summer last year we realized that we could no longer tell the difference between an AI image and an image created by someone like me.

That's my point: it's either never happening or it already happened.

The same happened with translators: they're mostly not getting any work. I think graphic designers are on the verge of it, and many more. Even actors are probably going to be replaced by avatars, and if you're not already famous it's going to be very complicated. We're already starting to see digital celebrities coming up who don't exist, and they're getting hundreds of thousands of followers on social media. So how far do you think, or rather how quickly? Because I think everyone agrees that if we think about a thousand years from now, we all think about Star Trek or Star Wars, and we think jobs will be a whole different concept. But of course, when you start moving down the timeline, that's where the discussion is. So I don't think
anyone thinks we'll work forever, but probably many people did not think that their generation might not have to work. Elon is saying now that five years from now, working will be optional. Do you think this is...

I just want to say, as an aside: I [ __ ] hate that guy. So every time you say his name, I just get...

I try not to mention him.

That's okay, you can just go ahead and say it.

No, the good thing about him is that he's very controversial, but he puts the topic on the table, because of the reach he has.

He does.

And the point is: if in five years we have to think about UBI, then this is going too quick, but if it's in fifty years, maybe we can adapt.

Okay, but Elon, if we're going to go there: he's the richest man in the world. Is he willing to share any of the money he's making? I don't know. That's what it comes down to, you know. And I'm glad you brought up Star Trek, because I'm a huge...

Oh, me too.

...huge Trek fan. And do you know that they refer to this obliquely? There's one episode, I don't know which series, maybe Voyager, I don't remember, but one episode about the actual moment of revolution, which was in San Francisco. Do you remember this?

No, because I watched the early series, not the last ones.

Oh, okay. Spoiler alert: there's a moment when they're like, oh no, we can't keep going in this capitalistic sense, because there are not enough jobs, and there was
an actual revolution, okay? People revolted, and they were like, no, we need universal... it was beyond universal healthcare; it was universal everything. But it was a bloody revolution, you know. So that's the question we have to ask.

Yeah.

But I just want to finish, just to be clear: I mentioned the farmers being replaced by tractors. We've also seen the entire music industry being replaced by Spotify plus Taylor Swift, right? It's been completely flattened out. That's happened already to so many industries. I think people are up in arms now, especially freelancers, artists, because it's happening to them too. And copy editors, and anybody who used to write copy for a living.

Totally, any kind of writing.

I'm a writer myself. I actually wake up in the morning wanting to write a new book, and then go to bed at night thinking no one reads anymore. Because why would anybody read, when all of it is schlock, because it's all chatbot-generated, or you suspect it is, right? So I'm just saying: the thing that's been happening to a lot of people is now happening to even more people. In one sense, we shouldn't be surprised at all; that's what technology does. In another sense, we should be like, well, maybe we can figure out how to have solidarity with each other and move toward the next step without a bloody revolution.

Yeah, because I think the revolution doesn't come from the fact that technology gets better; it comes from the fact that it comes too quickly for society to adapt.
And I think that's our problem right now: we've never had a technology that evolves as quickly as AI. We had an industrial revolution, but even that took several years, and AI in the last two years has changed everything. I mean, I discovered AI at the beginning of 2023, and since then it's been non-stop change; things that I thought would happen in the next ten years are happening in one year.

I'll back you up, though. My contention when I wrote Weapons of Math Destruction, which I wrote ten years ago, was that every bureaucracy was being replaced by AI. It wasn't called AI then; it was called big data. But think about what that means: every HR, every human-resources department in every large company was being replaced by an algorithm. It was already vacating huge swaths of bureaucratic jobs, and bureaucratic jobs are a lot of jobs, right? And that's true in government as well as in corporate settings. I agree with you that GenAI is doubling down on that phenomenon, but it's not a new thing; it's really the technology of predictive algorithms. If you think about our jobs, so many of them consist of giving people what they expect or what they want, whether it's art or music, or deciding who gets this job, or who gets that loan, or who gets this mortgage. There are so many jobs at stake here. So I agree with you, but I just don't want to say it's the last two years; it's the last twenty years.
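The bureaucracy-replacing scoring systems described here can be sketched in miniature. The weights, features, and cutoff below are entirely invented for illustration; no real lender's model is this simple, but the structure, a weighted sum compared against an opaque threshold, is the point.

```python
# A toy sketch of a bureaucracy-replacing scoring system: a loan decision
# reduced to a weighted sum and a threshold. Every number here is made up
# for illustration; real credit models are proprietary and more complex.

WEIGHTS = {"late_payments": -35.0, "income_k": 1.2, "years_employed": 4.0}
BASELINE = 600.0   # score every applicant starts from
CUTOFF = 620.0     # the opaque line between "approve" and "deny"

def risk_score(applicant: dict) -> float:
    """Weighted sum of whatever features the model happens to use."""
    return BASELINE + sum(w * applicant.get(f, 0.0) for f, w in WEIGHTS.items())

def decide(applicant: dict) -> str:
    # The entire "loan officer" or "HR department" collapses to one comparison.
    return "approve" if risk_score(applicant) >= CUTOFF else "deny"

steady = {"late_payments": 0, "income_k": 40, "years_employed": 3}   # score 660
bumpy = {"late_payments": 3, "income_k": 55, "years_employed": 10}   # score 601
print(decide(steady), decide(bumpy))  # approve deny
```

Auditing a system like this means asking exactly what features go in, who chose the weights and the cutoff, and which stakeholders land on which side of the line; none of that is visible to the person being scored.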
No, no, totally; it's speeding up, totally. I think it just sped up to the point where ChatGPT came onto the market, and then people like me, who don't have any notion of programming or anything, it just fell into our hands. And I think that's what made it viral: basically, people like me can use it now. Before, it was relegated to those of you who had the knowledge and means to be able to use the models. But when it became tools, and I have it in my PowerPoint, then basically every single human on Earth can take advantage of this technology.

I like that. And I guess my premier example would be stock trading. Stock trading used to be a burly industry of sweaty men, who used to be football players, fighting to get their trades over the desk on time, and now those stock-market floors are empty, because they've been replaced by computers. I used to be one of those quantitative traders; I was one of those nerds. And you're right that only some people could even think about doing that. The difference now is that it's been handed over: you download Robinhood or eToro or any of these apps and you're trading stock knowing nothing about it.

So messed up.

Yeah, which is messed up, because you don't have any knowledge of it and you're just gambling with it.

And that's probably... God. Can we complain about cryptocurrency too?

Oh yeah, absolutely. I'm not a big fan of crypto. I do think,
I mean, I take your point that this technology has made it too easy to do certain things, and one of those things is trading in stupid, stupid ideas like cryptocurrency meme coins.

And not even just those; even the good ones.

Yeah, all that stuff. It's just an incredible Ponzi scheme, and it makes me really sad to see how many people are swept up in it.

So in the end, if we have to simplify: do you think AI is going to take us to a better place than we were? Or is it going to be like social media? For me, social media had a good purpose, but it went really wrong: from being something that was supposed to give everyone a voice, it became a place to bully people, to make them feel bad about themselves, with everyone living up to an expectation of what other people think. And I think that was our first lost battle against AI. I'm not very convinced the next one is going to go better. What do you think?

Well, first, I think I want you to read my book The Shame Machine, because that's really what I'm talking about there. I completely agree with you about social media. If you don't mind, I'm going to say a little bit about it. For me, the old-school shame industry was these direct-to-consumer industries, like the beauty industry or the weight-loss industry, that would shame people directly and say: if you want to solve your bad feelings, then
buy our products. And then they would sell you products that do not solve the problem, right? They push on that bruise, and build more bruises. Some of them literally set out to make people feel ashamed of themselves. There was an article I found about how, in Japan, women didn't feel self-conscious about body hair, so the razor companies needed to first shame them about body hair, thus creating a market, and then sell them the razors.

Mhm.

So that was the old-school shame industry; it's been around for thousands of years. The new version of this is social media, which does this ingenious thing of getting us to shame each other for free. And we still can't get enough of it, because it feels so good to get retweeted or liked or whatever, because the algorithms are really good at rewarding you for shaming other people, and for being righteous about what the rules are around here; and then, of course, splitting us into smaller and smaller norm groups so that we hate each other, and surfacing the most shameful thing about the other norm group so that we can pounce on it. It's quite an amazing trick, if you can pull it off.

I mean, they did a really good job at what they were trying to do. But the thing is: is GenAI going in the same direction? Because there is a big difference.

Tell me a little more about what you're thinking.

Okay. So I think one of the biggest
differences for me, one that gives me a little bit of hope that this may end up in a better place than social media, is that with social media we assumed it had to be free, while for ChatGPT we are paying, so they can make money without exploiting us. But I don't think the money we pay is enough for that. I would be kind of happy if they raised the fees, if that meant they would not exploit us. But I'm sure that a year from now, and we've already seen some experiments where they're starting to put in advertising, when ChatGPT recommends you a TV, it will be sponsored. And, as you said, this thing sounds like it knows what it's saying, so I believe it. So when someone pays ChatGPT to say certain things, that could go really wrong. Because one thing is recommending me a Sony TV instead of an LG; but at some point it will go political, and then what is it doing to democracy, when you have a personal assistant that's with you 24/7 and all of a sudden it starts preparing the ground for you to go one way or another politically, just because somebody paid for it? I think that's really dangerous. So I don't know how we're going to make sure that doesn't happen, but I think that would be one of the worst outcomes of AI, because I'm really sure that AI is going to get really personal, and we'll each have a small AI with us full-time.

Okay, a few things. First of all, we can choose not to do
that. We still have, yes, we have the power to choose that, as a group, as a society, as a collective. Second of all, I think it's already happening. That's why I started by saying we should think of ChatGPT as trained by Reddit. That means we are picking up the politics of Reddit in ChatGPT, and that's definitely what I find when I work with ChatGPT. When I program with it, it has opinions. There are no non-political opinions, right? It's
all politics. And there was an interesting piece of Bloomberg journalism analyzing how ChatGPT ranked resumés for a job description, and it got standard racist, sexist, ageist results. The question was just who's going to be a better fit for this job, and everything was randomized except the names. It was classic sociological racism and sexism. So it's all embedded in there; the politics are embedded in there.
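The audit design she describes, identical resumés with randomized names, can be sketched in toy form. Everything below is invented for illustration: a real audit would call the model under test, not the deliberately biased stub used here so the audit has something to detect.

```python
def toy_scorer(resume_text: str, name: str) -> float:
    """Hypothetical stand-in for an LLM resume scorer.

    This stub leaks a name bias on purpose; a real audit would
    query the actual model being tested.
    """
    base = 0.7  # identical qualifications should mean identical scores
    penalty = 0.1 if name in {"Lakisha", "Jamal"} else 0.0
    return base - penalty

def name_swap_audit(resume_text: str, names: list[str]) -> float:
    """Score the SAME resume under different names; return the score gap.

    With an unbiased scorer the gap is 0. Since the name is the only
    thing varied, any nonzero gap is name-conditioned bias.
    """
    scores = [toy_scorer(resume_text, n) for n in names]
    return max(scores) - min(scores)

gap = name_swap_audit("10 years experience, BSc, Python", ["Emily", "Jamal"])
print(round(gap, 3))  # nonzero: the stub scores identical resumés differently
```

The design choice mirrors the Bloomberg setup she mentions: holding the resume fixed and randomizing only the name turns the score gap into direct evidence of bias.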
But that's because of the way we trained it: because we're training it on human data, and human data is biased. And human data is not representative either; human behavior and opinion are not represented by the data, because we have these loudmouths, right? That's where we started. Okay, so those are two reactions. But the most important reaction to what you just said is that I completely agree with you: we haven't seen the final step of large language models, which is how you tie this to a revenue
model. Exactly, the business model isn't there yet, and they are losing a lot of money letting high school seniors get their admissions essays written for free by ChatGPT. They're losing a [ __ ] ton of money. They're not going to continue doing that forever, so the question becomes what the business model is going to look like, and I agree with you that it's going to be advertising, because that's what it is, right? That's what it's been for Google, that's what it's been for social media. The entire internet is built on advertising, and people just aren't used to paying for these things. They've made that trade; they've been socialized to make that trade. I'm not saying it's a good thing, I'm not saying it's inevitable, but our habit is to give out our blood, in the form of private, personal data, in return for this kind of service. And the service here is: how do I cheat on my homework? Yeah, and by the way, in terms
of democratizing cheating, I'm totally for this, I just want to say that. Okay, so I guess you're thinking the way I'm thinking, that this may end up badly. Actually, OpenAI had some documents leaked, information from their last capital raise, where they said they plan to lose money until 2029, losing 44 billion until then, and then make 100 billion in 2029. So I think Sam Altman has a pretty clear idea of what the business model is going to be, which obviously we don't know, but I think he's very clear on how it's going to be, and there is a turning point for some reason in 2029 where maybe the technology is good enough for whatever he's planning. But yeah, I think they're losing lots of money nowadays, and that's what doesn't give me hope. And I don't know how you feel about Meta open-sourcing everything, which in one way can look like greenwashing. It looks like they're trying
to be the good guys, making everything open source, but at the same time we know Meta open-sourced social media just to make money on your time, on your attention. So I'm really skeptical about the open-source version. Are you pro open source, or more for controlling it with a few big companies that have the tech? You always do this: you say something and then you ask me a question. But I want to respond to the Sam thing first, which is that Sam Altman has a God complex and we shouldn't trust a single thing he [ __ ] says, okay? He's almost as bad as Elon. All right, I need you to make me a ranking of the worst guys. The worst guys, yeah. But what was the question about Meta? How do you feel about open source? Open source is completely [ __ ] useless to almost everyone, okay? And for the people who can code and can set up enormous server
farms to build things, which is a very small elite group of people, they still don't have the traction and the network power to make anything work. Having said that, okay, it's going to help competition, business competition over who's going to win the large language model war, but I don't feel like it's really helping anybody except Nvidia. Yeah, chips. That's why Nvidia has an enormous bubble. I just want to say: short Nvidia, that's my... You think so? I invested in Nvidia in the summer of last year and so far... You invested at a good time. Don't invest now. No, definitely not; I think it's really high now. I mean, I'm sure they will... Do you think there will be competition, or will they just blow up? I just think we're at the point of this enormous competition over who's going to control the market in large language models, and everyone's throwing their hat into the ring, so everyone has to have these chips that are
in short supply. And then the next phase is going to be: how do we do this on much less data, much less wasted energy, hopefully? How do we make lightweight versions of this that are still useful enough to replace people in their jobs? Yeah, but that's what's happening, no? Because nowadays there are models you can almost run on your phone that are as good as GPT-3, so obviously we are on that path. If GPT-4 is actually doing part of my job, I guess in a year or two I will have an open-source model on my phone that is as good as GPT-4 is now, and we will have GPT-5 or 6 at the top. But there will be a point where the tide is high enough that my needs are covered by a model that is the standard for everyone, that I don't even need to pay for, because the paid ones are much higher up than what I need, so I won't really pay attention to them. But I
guess that's the normal path. Again, we go back to the point that the technology has to improve; that's part of it. The problem is how fast it is improving, no? Because that's the difference it makes. I have a question for you. Okay. It's the first time a guest asks me questions, and I love it. When I went to give a TED Talk, which was a million years ago, like 2017 or 2018... A million years ago? Yeah, it kind of felt that way. Okay. I had been interviewing all these truck drivers whose livelihood was being threatened by surveillance. They were being surveilled: the cameras in their trucks pointed inward and outward. The outward ones were to figure out how to drive a truck automatically; the inward ones were to make sure they never went to the bathroom for more than five minutes. Their quality of life was crap, right? That's just a sample of the kind of person I interviewed for my job. I also interviewed teachers who were being fired by algorithms that nobody could explain to them. All these people
who really saw this revolution of algorithms as an oppressive force. But when I went to the TED Talk, the audience was made up of people who were so excited, in this kind of boyish way, and when I say boyish, I mean they were all men, okay, and very rich men. They were all so excited about this technology because they thought: oh, this is going to make me smarter, it's going to be me plus this chip in my brain that lets me access all the information. And of course my question to them was: don't you have Google search already? We already have that chip; it's just a little bit further away from our brain. But anyway, my point is the attitude, the cultural divide of attitude, and what I'm hearing from you, and here's my question, is that you go both ways. Yeah, I do, and that's my dichotomy, because the person I most recognize myself in is Demis Hassabis. He says that if you're an accelerationist, if you
want AI to come as soon as possible, then you're not really aware of how much it's going to impact society, because if you knew, you would want it to be cautious but slow. I would not have a problem with AI getting better; I just think society cannot handle it. So my problem is that I think the technology is going to be amazing. It's going to cure cancer, it's going to do so many positive things for us, but it's going to overrun us in the meantime, at least our generation. I have two kids, a girl who's seven and a boy who's ten, and I think by the time they are adults this is going to be better organized, or I hope so, because right now, if I had a kid who was 16... I have a kid that's 16. Okay, sorry. So what is your advice? What should she study, what should she do? Because maybe whatever she's studying... Actually, my feeling is that what we've been telling young people to study over the last ten
years, which is programming, telling them: be a programmer, you're set for life, you're going to be okay, is going to be useless in maybe two years. I don't know if you share that, but I think ChatGPT programs really well, obviously not as well as the best of us, but definitely better than most people. Right now, for example, if I have a problem on my website, I don't call my programmer anymore. I can just send a picture of it and a link to ChatGPT and it tells me what I have to change in the CSS. That's probably very basic for a programmer, it's not deep Python, but at the end of the day, if it's doing that today, what is it going to do tomorrow? Obviously we may touch a ceiling, but it doesn't look like it so far, in my last two years of experience. Well, okay, first I'll answer the question of what I tell my kids. I have three kids, and I tell them to be flexible. I tell them to be willing to learn
anything, because what they have to be good at is being good at things. They have to be nimble. And you're right, it doesn't make sense to be good only at programming basic languages anymore. I mean, you can do that, that's fine, but if that's all you're doing, you're going to be left behind. Replaced, yeah. And that's a really important thing. I would also say, though, that I'm more worried about your kids than my kids, because for my kids, I don't think it's going to happen in the next three years; I think it's going to happen in the next 30 years. 30? Maybe the next 15 years. And not to be super pessimistic, but I really don't know what's going to happen to your kids, or to kids that are not yet born or about to be born. Totally, I don't know either. To some degree I almost prefer that it happens in 30 years rather than 3, whatever has to happen, whatever this
revolution or change or social shift is, because I think fewer people will be damaged by it if it comes in 30 years than if it comes that soon, because then it will not come all of a sudden, out of the blue. I think it was Connor Leahy, I think that's his name, who said last year that every time you train a big model it's like throwing dice. I don't think we are at the stage yet where AGI could come out of GPT-6, but I wish it were coming on that kind of timescale, because that would give society time to prepare. Along the way we would see symptoms, no? We would see small sparks of AGI that start making us realize we have to work on a universal basic income, or sparks that make us realize we have to change the way people make money, because work-based income will not work anymore. Different things that might give us
time to adapt. Let me ask you another question. What if AGI happens tomorrow? Because, between you and me, I don't think AGI is ever going to happen, slash, it's already happened. Okay, so assuming it's already happened, how do we define AGI? Defining AGI is of course the problem, but when I say I don't think it'll ever happen, I mean I don't think computers will ever be conscious. But I do think computers and algorithms are already better than we are at a lot of things. I agree with that. So why don't we just assume it happens, whatever it is, and then ask ourselves what would actually be different about today's world? Well, first of all, for me as a company owner, it would not make sense to hire people, because I could hire AIs, or AI services, or whatever. Assuming it's already true... let me just say, that's what I was saying at the very beginning. My company didn't hire... well, I have a tiny company, but new startups don't hire HR; they have hiring
AIs, or they hire far fewer people in HR: one person who basically supervises the AI. Exactly, that's already true. I think that's realistic, but the truth is that not all startups do this. If we look at any small company set up in Barcelona in the last year, I'm sure they're still hiring people. But there will be a point where it will be so popular, so common, so democratized, that you will not even consider hiring someone. And here in Spain we have an unemployment rate of maybe around 10%, which is quite high. What happens when the unemployment rate is 40%? Yeah, that's where it gets pretty ugly. So to me the difference is whether this AGI means 16 or 20%, or it means 40%. I think that's the edge where it becomes bloody, becomes messy, versus where it stays sustainable, a manageable crisis. So for me, what happens if AGI comes tomorrow depends on how big a deal it is. If
it's as big a deal as... For me, the definition of AGI I recognize the most is an AI that can do most of the productive work that humanity is doing at the moment. Okay, that's actually my favorite definition I've ever heard. Great. So if it gets to that point, for me it's a complicated situation if we don't have the social structure to hold it. But do you agree with me that we should already be asking that question? Absolutely. I've been super harsh on politicians here in Spain: we had elections not long ago, and there is nothing about this in their programs, and I think it should be the main issue. I mean, there is some stuff going on in Europe, the AI Act... Oh God, you want to talk about pathetic? There's nothing happening at the federal level in the States. In the States you had SB 1047, which got killed at the end, in California. But at the end of the
day, the AI Act, which around the world looks like the leading AI regulation, talks about the uses of AI, which is obviously necessary, you cannot make bioweapons with this, but it's not talking about how we're going to handle the work problem. I think there are many problems that may come with AI, but the first and biggest ones are deepfakes and jobs, and I think no one is talking about deepfakes. Tell me more. Why do you think that's such a big deal? You don't think it is? The loss of truth is a big deal, but I don't think deepfakes represent that. Well, maybe it's because I come from the photography and video market, but I think people take videos as proof of truth, and when you can fake videos, you can make talking heads that sound exactly like Elon Musk saying whatever, or anyone else. And if we take it to the political level, there was this thing
in America where someone cloned Joe Biden, in a very poor way, trying to influence the primary election. Anyway, I think this is a problem for society, and a big one, and I think we're already there. We've been able to do deepfakes for a long time, but before, to interfere in an American election, it had to be Russia, it had to be a big state. Now my ten-year-old kid can make a deepfake of Trump, send it on Twitter, make it viral, and boom, you know?
Okay, but let me give you an alternative way of thinking about that, which helps me sleep at night, which is that this is what's actually brilliant about my kids: my kids don't believe anything. And I agree that can be a problem if they lean into nihilism, but they know better than my generation that you can't believe what you see on Facebook; in fact, don't go to Facebook, it's crap. They are not on social media. I'm not saying nobody is bought into that anymore, but maybe we can think of it as having had a different kind of pandemic, where we believed everything we saw on social media, and now we're inoculating ourselves, becoming immune. Exactly, at least our children are. I don't think your children are going to be bought in. No, they're very critical about things, actually. One of the amazing things happened a few days ago. We were talking about football; football is really big here. Like soccer? I
just saw some football. My kid and I were talking about the goalkeeper of a team, and then I asked ChatGPT, and ChatGPT got it wrong, and I said: hey, ChatGPT is telling you you're wrong. And my kid said: no, no, Dad, I know for sure this guy is the goalkeeper. And when I Googled it and found the right answer, I was so proud that he was critical enough to trust his own opinion over mine and ChatGPT's, which is a really heavy weight of authority. I think that was really amazing. But at the end of the day, before society becomes immune, before everyone says: okay, everything is fake, I don't believe anything I see on a screen, there will be a while where a lot of people will believe what they see on a screen, and there will be tools to make these fakes very easily and quickly. There are already
open-source models to do talking heads or clone audio, like we're probably doing for this podcast: listeners are hearing you speak Spanish, and... Yeah, you don't speak Spanish. Do you speak Spanish? English is much easier for me, okay. So the point is, maybe until this moment, when we just revealed it, people did not realize that you don't speak Spanish, right? So yeah. I agree with you; I'm just taking a
contrarian opinion because it's fun; we're on a podcast. I just feel like, yes, misinformation is a huge problem, but on the one hand it's gotten so bad that it's obvious, and so we have to grapple with it in real time and talk very directly about what authority is, why we trust authority, how we double-check. Having said that, one of the alarming things I've come across, I don't know if this has happened here, is that in the States, as of a month ago, when you Google something it doesn't just give you the results. Of course the first two pages are advertising, and below that you can find maybe some answers, but first it gives you Gemini or some kind of crappy large language model answer. And it occurs to me that as soon as this starts happening, if Gemini gives me some [ __ ], and I won't call it a hallucination because Gemini doesn't have consciousness, but if it's just wrong, if it tells me
wrong information and I try to search to corroborate it, I'm getting more large language model responses. What does it mean to corroborate in this landscape where everything is GenAI? Exactly, because all the news outlets are made by LLMs nowadays, or at least they've cut a lot of their journalists, so it's going to get to a point, and that's my whole point, where it's impossible to know if something is true or not, because basically everything is going to be LLM-generated. So I think that's where we're going. Perplexity now is getting lots of traction in the market, and it's basically an LLM doing the search and then bringing you whatever it wants, and that can be as biased as they want. Well, okay, as it wants. So in the end it's going to be really complicated. That's why I file this under deepfakes: for me, the information Gemini gives in the end is a fake creation. So for you, "deepfake" is a label for...
Yeah, for any synthetic content created; it can be visual, audio, or text as well. Right now we are finding books on Amazon that are fully written by LLMs but sold as if there were an author behind them, so there's a point where you can't tell the difference. I mean, I kind of disagree with you that everything LLMs produce is crap. No, no, that's not what I meant; it's probably better than the average human producer, and that's why it's so useful for cheating on your homework. Exactly, it's better than most students. I'm just saying that when it is crap, you can't tell the difference, because when you try to do the research, it's more LLMs. Exactly, LLMs all the way down, and it's going to get even worse. I think it's going to get worse and worse, yeah. So what would be your advice? Say you were advising the President of the US: what is the first action that needs to be taken to get this on the right track?
Do we have to blow the whole thing up? You know, my job is to audit algorithms for all sorts of consistencies or inconsistencies, and so my suggestion would be to force transparency about how crappy things are. I wouldn't call them hallucinations, but yes, I would say there should be a disclaimer on any result saying: these facts may not be real, this is not based on truth. There are... Most of them have one. ChatGPT has one that says
ChatGPT can get things wrong, or something like that, or maybe it's Claude. But those are minimal. Not really; they don't count, because the whole point of the product is to get you to trust it, right? We went through that. So for me, the fact that all of these large language models have no notion of truth hasn't really been made public. We're supposed to trust them, with exceptions. No: we should know that it doesn't have a notion of truth, that it only predicts the next word that people say on Reddit. That's all it does. It asks: would this be a word that somebody on Reddit would say? If you think about it that way, you immediately stop trusting it, and that's what I would suggest. That's why I wrote my book; it's why I do what I do: why are we trusting these things? We shouldn't be.
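Her "it only predicts the next word people tend to say" point can be made concrete with a toy bigram model, the crudest version of the mechanism. The corpus below is invented; the point is that the model emits the statistically popular continuation, with no notion of whether it is true.

```python
from collections import Counter, defaultdict

# Invented toy corpus in which a false statement is the more common one.
corpus = "the earth is flat . the earth is flat . the earth is round .".split()

# Count which word follows which: this counting IS the whole "training" step.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word: str) -> str:
    """Return the continuation seen most often in the corpus."""
    return follows[word].most_common(1)[0][0]

print(next_word("is"))  # "flat": the popular continuation, not the true one
```

Real LLMs replace the frequency table with a neural network over long contexts, but the training signal is the same kind: match what the corpus tends to say next, not what is true.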
Having said that, they're super useful, right? So the other thing I would suggest to policymakers, which I do, is to publicly measure the extent to which we can or cannot trust these things. How does it fail? How often does it fail? What are we measuring? What do we define as fair? Is it fair to the various protected groups? How are we measuring that? Can we appeal that measurement? Because some people will think that's the wrong way to measure fairness. Having that discussion is really having an
ethical discussion about its use yeah yeah I had I had the other day here like uh Quantum physis physician I don't know the word in English um but basically he's the the boss of the quantum physics in uh Singapore and Abu Dhabi and really Amazing guy and he was saying it's the time now to decide what we want AI to do not to actually legislate like we are putting like written down laws instead of like deciding as a society what do we want it to do yeah it's like we should be able to have a
discussion a public discussion with everyone that is involved which is basically everyone on Earth about what do we want AI to be able to do in our lives like how far do we want it to go no and that's not being Done like things like AI act you cannot do bioweapons you cannot do that well maybe we don't want it to do anything at all and no one has that question so it's basically like we are just getting it like it's it's like uh I got a very viral Tik Tok on a podcast I went
where I was saying: who the hell gave these people the right to change the world without asking for permission? And I feel that way. I'm not saying I'm not capitalizing on and using AI, because everyone says: well, you're giving talks about AI, so it's kind of... But I have this dual point of view where I think this is really useful, and I'm taking advantage of it myself, but I also think this is going to end up really badly if we don't do something soon, and I'm not very sure what has to be done, which is why I try to get the vision of more experts. Right, well, I'm not an expert any more than anyone. Well, you definitely are. I'm an expert on how AI works, exactly, but I'm not an expert on how we should protect the public good from this technology, and that, I agree with you, is a public discussion we need to have. It's really a philosophical question, right? Is the role of government
to restrict the use cases of this technology, or is the role of government to make sure that, whatever happens with the technology, people have a basic dignity in their lives? I could go either way, but neither of those things is happening. Exactly, the point is that the conversation is not on the table, and that is the big problem. The conversation isn't on the table because the Altmans of the world keep going to Congress and somehow convincing Congress, with their twinkly eyes, that no jobs are going to be lost, even while they're selling these products to companies by saying: you'll be able to replace hundreds if not thousands of workers with this technology. It is a complete [ __ ] show of contradiction. Yeah, that's what's happening. I've seen the people from Big Tech, like Sam Altman, saying they need to be regulated, but then when it comes to actual regulation they are totally against it. It's basically a lobbying effort. Yeah, exactly. They want to be regulated to the extent that they can look good and say: okay, this
is good enough for us, we can bypass it, and it reduces competition. And that is where my expertise comes into play: I can see through that particular lobbying effort. They say they want to be regulated, but when you ask them what the regulation should look like, it is useless and will not stop them at all. It will only stop smaller companies from competing with them. Exactly, the only thing it will accomplish is making them bigger. Well, taking advantage of you being a mathematician, there are some things I don't understand about AI; maybe you can enlighten me. A few very quick questions. Do you think AI is intelligent? Was it the wrong term, the wrong naming, from the beginning? Yes, it was. How would you define it, then? I mean, it's good at things, right? Actually, some AI is quite good at chess. Whatever it is that is good at chess, it's a very finite toy universe with a well-defined outcome, which is winning the game of
chess. It's very good at chess, and similarly Go. What it's not good at, at all, is ethical quandaries: weighing different outcomes for different stakeholders and trying to figure out what would be the best overall situation, because that's not how AI is trained. AI is trained on a quite specific definition of success, an objective function with penalties for mistakes, and it is, I wouldn't say linear, but one-sided: that's all it cares about. Now, I'm not saying there couldn't be a nuanced definition of success; that is sort of my life's goal, figuring out how to make that definition of success adhere to specific ethical constraints. But as it's trained now, by companies, it is optimized to a usually pretty stupid definition of success: maximize accuracy, maximize efficiency, or maximize profit, with no constraints.
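The "one-sided objective" she describes can be sketched in a few lines. The actions and numbers below are invented for illustration; the point is that an optimizer scoring only profit picks the harmful option, while adding even one constraint, her "nuanced definition of success", changes the answer.

```python
# Each hypothetical action gets (profit, harm) scores; the numbers are made up.
actions = {
    "honest_ad":    {"profit": 5.0, "harm": 0.0},
    "dark_pattern": {"profit": 9.0, "harm": 8.0},
}

def unconstrained(acts: dict) -> str:
    """Optimize the single trained objective: profit, and nothing else."""
    return max(acts, key=lambda a: acts[a]["profit"])

def constrained(acts: dict, harm_cap: float = 1.0) -> str:
    """Same objective, but only over actions under an ethical harm cap."""
    allowed = {a: v for a, v in acts.items() if v["harm"] <= harm_cap}
    return max(allowed, key=lambda a: allowed[a]["profit"])

print(unconstrained(actions))  # dark_pattern: highest profit, harm ignored
print(constrained(actions))    # honest_ad: the harm cap rules the other out
```

Real training objectives are loss functions over millions of examples rather than a two-entry table, but the asymmetry is the same: whatever is not in the objective or its constraints simply does not exist for the optimizer.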
So it's not going to be intelligent in the way we think of intelligence, which asks: how do you solve this problem without hurting anyone, without doing something that's morally wrong? Those constraints are so obvious to us when we're thinking about how to do something that they stay unspoken, common sense. But that's not how AI thinks. I shouldn't even use the word "thinks"; they don't think. They're trained toward one goal, and that goal is typically dumb. So does that mean they don't reason either, or is reasoning something different? Well, then it depends on which AI you're talking about, and on what you mean by reason. You could argue that the chess AI reasons about how well to play chess, reasons about the next move, in a certain sense. That's fine, I don't want to quibble about the notion of reasoning. I've already said that generative AI doesn't have a notion of truth, so it's not trying to say truthful things; it's just trying to predict the next word in a sentence. But that's where
I remember a statement from Ilya Sutskever. He said: imagine you have a detective novel, with all these intricate things, laying out the case one way or another, the way we like in these kinds of novels, and then you arrive at the last page, where the detective says, "so the murderer is...", and the AI has to predict that word. If it predicts that word correctly, it had to reason through the book. Does that make any sense? Maybe we even give some words too much value. Reason, creativity, intelligence: are these words we are loading with too much human meaning beyond what they actually mean? Because to me, AI looks like it reasons, and I'm going to say something Hinton said: if it looks like it's doing it, maybe that's because it is doing it. Maybe it's imitating reasoning, or maybe it really is reasoning. If we are talking about consciousness, I don't think they are conscious, or will be anytime soon, but
the point is that reasoning it just means you look at certain facts and make a conclusion out of them so I would say It does but you know it better than me you know much better than these machines well I mean let me say it this way like Google had figured out a long time ago that it's easier to translate between languages based on like this Corpus of translations that it already had and this sentence from French was translated into this sentence in Spanish um multiple times in exactly the same way So if you give
me this sentence in French, I will give you this sentence in Spanish. It was just: I've seen this happen so many times that this is the answer. And it had so much data that it was finding commonly done things a lot, and it was good at it, right? That's not reasoning, that's pattern matching, but it looks like reasoning, right? So that's what I would argue is happening most of the time. And I just want to say that I don't blame you, or anyone.
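The translation point above can be sketched in a few lines. This is a deliberately crude illustration of pure pattern matching, not Google's actual system; the phrase pairs below are invented for the example:

```python
# Toy illustration of translation as pure pattern matching: no grammar,
# no meaning, just "I've seen this happen so many times that this is
# the answer." The corpus is made up for this sketch.
from collections import Counter

parallel_corpus = [
    ("le chat dort", "el gato duerme"),
    ("le chat dort", "el gato duerme"),
    ("le chat dort", "el gato esta durmiendo"),
    ("bonjour le monde", "hola mundo"),
]

def translate(french):
    # Count every Spanish rendering ever seen for this exact sentence
    # and return the most common one.
    seen = Counter(es for fr, es in parallel_corpus if fr == french)
    if not seen:
        return None  # never seen it: pattern matching has nothing to say
    return seen.most_common(1)[0][0]

print(translate("le chat dort"))    # most frequent past rendering wins
print(translate("le chien aboie"))  # None: no pattern to match
```

It looks smart whenever the corpus covers the input; off the corpus it has nothing, which is exactly the gap between matching and reasoning being described here.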
People project reasoning onto this because, first of all, it's designed to make us trust it, to feel this way. But also, it does look like reasoning. If it were coming from a human, we would think, oh, you're smart. It's not coming from a human, though; it's coming from a pattern matching engine, yeah, that's good at its job. Yeah, at the end of the day it's a mathematical algorithm making conclusions based on data. No, that's the point. But more important than that, going back to your example
of the spy novel, the mystery novel: it would be able to predict the killer if the mystery novel were formulaically written, with almost exactly the same plot, which we see, right? If you watch Miss Marple and all those different spy novels, or the Agatha Christie novels, they are formulaic, and there are common themes, almost exactly the same plot a bunch of times. So maybe it could come up with the right answer, and that's interesting. It's a kind
of a party trick of AI, if you will. Okay, but what it cannot do is reason, like a human, in a moral way, to come up with a new answer to a new question. So that brings me to the next point: I guess you don't think they're creative? No. Okay. I think we can endow the products with creativity, because with your prompting you make it do something that was not done before, I guess. Or what do you mean? I mean, yes. For example, okay, I do think it would
be a great world if visual artists could use AI to build new art and get paid for it. Yeah, I mean, it's getting there. The problem is that it's a very small window, I think, because I have a photography academy as well, and I have a lot of students who sell stock photos, and now they're selling AI stock, not camera stock anymore. But this is not going to last long, because all the AI stock websites already have their own builders to build images, so
the client is going to prompt, not the photographer in the middle. But I think that's temporary; it's going to be a window, like prompt engineering: it's going to be a profession, but it's not going to last long, because these things are trained to understand us better, so at some point it will be the same whether I talk to it or my mom talks to it; it won't make a difference, because the thing will understand us better. I guess. But I have a little bit more hope than you do. Like,
I agree that the jobs we have now are going to shift radically, but I do think that human art and human creativity are simply not predictable. Okay, so I do think that the technology, as good as it's going to get at doing stuff that's been done, you know, in the style of this or that artist, it's going to be really good at that, and people are not going to need to pay for that stuff anymore. But true creativity,
musical or artistic, that's still going to be unique, I think. I think it will be relegated to the value we give to having a Picasso just for the sake of it. I mean, there are plenty of people who can paint like Picasso, and they just do their lithographs or copies, but then you still want to have a real one. But that's not because of the art. No, not you, people in general. But that's not just because of the art itself; it's because of what it means to have a Picasso, the intrinsic value of it,
because there are not so many, etc. So, there is this company in America that started something called Made by Humans; it's kind of a label they give to companies that don't use AI in any process, and I think there's going to be something like handicraft, something we will give value to, right? But nowadays you can see lots of factory products that are made to look handmade, with imperfections and stuff like that, you know, like the nuggets theory from McDonald's, where they make them
different shapes just so you think they were cut by hand, but at the end of the day they're not. So I don't know how long this will last. But down the creativity path, and this is something that was really difficult for me to understand and accept, I think AI is creative, because when I prompt for an image on an image generator like Midjourney or whatever, I ask for, say, a German guy with a white shirt, sunglasses, etc., in a certain light, but then it puts a car
there, and that car is gray. I did not prompt for a gray car; it could be white, it could be black, it could be red, it could be any kind of car, but it has chosen, and that's really where it crosses the line of free will: it has chosen to put that one car, that specific car. To me that is a very basic level of creativity. I'm not saying AI is creative like Picasso or Mozart, but I did not tell it to put that car there. Yeah, but the programmer
told it to do something spontaneous. Okay, but then, spontaneous is creative. Okay, I know. Yeah, it's creative if you think so, but it doesn't think so, because it doesn't have consciousness. The larger point is that creativity is something we endow art with, okay? And famous artists are the ones where a lot of people agree, this is creative, right? You know what I mean? I'm just saying that if it works for you... I guess it's semantic arguing. No, for many people, if you tell them
that this is original, because this image didn't exist before, they are more comfortable with that than if you talk about creativity. So I think a lot of it is how we define the words, to then find out whether... Yeah, I agree. You just said something very interesting: you said the programmer has told the AI to do something. But there is a concept that I never fully understood, and I can't believe it's actually real, which is the concept of the black box. Are AIs black boxes? Can you explain what the black box
concept of an AI is? I mean, black box is just a very general idea: you have input, something happens, and then you get something out, right? It means that what happens in the box is mysterious. But is it true that we don't understand how they do what they do, beyond the surface? Yes, and not only is that true, it's been true for big data algorithms for decades; we don't understand almost any of the machine learning algorithms that I work with. Yeah, actually one of the positive points of
using those algorithms is that they connect dots our minds could not connect. I remember there was one case here in Barcelona where they used a big data AI thing to predict where there would be more traffic accidents, where there would be more crashes, based on the rain, the state of the road, the temperature, etc. But it got to a point where they would deploy police without knowing why, just saying, this thing said go to that street, and they would go there.
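That deploy-where-the-data-says pattern can feed on itself. Here is a toy simulation of my own construction (not the Barcelona system; all the numbers are invented): patrols go where past incidents were recorded, and new incidents only get recorded where patrols go, so the initial skew deepens:

```python
# Toy feedback-loop simulation. The TRUE incident rate is identical in
# both districts, so any divergence in the records comes purely from
# where we choose to look.
import random

random.seed(0)

records = {"district_A": 50, "district_B": 5}  # invented starting counts
TRUE_RATE = 0.3  # same in both districts

for week in range(52):
    # "Algorithm": patrol wherever the historical record is largest.
    patrol = max(records, key=records.get)
    # Incidents are only recorded where someone is watching.
    if random.random() < TRUE_RATE:
        records[patrol] += 1

print(records)  # district_A keeps growing; district_B stays frozen at 5
```

The model never "sees" district_B again because district_B never gets a chance to generate data, which is the dynamic the conversation turns to next with predictive policing.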
But obviously, I mean, it gets to the point where big data and AI can make connections that we cannot even get to. Interesting example with the traffic. I mean, yes and no. Let me give you a slightly different example, where the answer is still yes and no, but it's a little more obvious what's going on, which is the things called predictive policing algorithms, which use prior arrests, and the locations of prior arrests, to send police to those neighborhoods. This is used all over the United
States, and it was in particular used in New York City, where I was living when I wrote about it. And there, on the one hand, we don't understand the algorithm, because it's complicated; it uses things like neural nets, which no human being can really explain. This coefficient being 0.01 instead of 0.2 means such and such? They won't be able to explain, at that granular level, what is going on inside the algorithm. On the other hand, if you look at the data it's trained on, you realize it's just going
to send police back to the same exact neighborhoods where police already are. Yeah, like, you're over-policing Black neighborhoods and arresting people for smoking pot in Harlem, but not on Wall Street, where everyone's walking around with cocaine in their pocket. If you started sending cops to Wall Street to check pockets, you'd have way more arrest records there, and you'd have an algorithm sending cops to Wall Street. I was going to say, because we haven't been arresting people in that white neighborhood, there is no data about it, and the algorithm cannot propose
that, because it's not in the training data. Yeah, and that's really important for my work: we don't need to understand the black box to audit the algorithm. No, because you audit the output and the input. All we need to know is how it treats different people, and whether that's fair. Okay, but then, when we try to put this under control, the AI Act, or legislators in America, they say: you need to show me that it's not going to be able to do that before release.
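The audit idea a moment ago can be made concrete: treat the model as an opaque function and compare outcome rates across groups, using only inputs and outputs. Everything here is hypothetical, the scoring rule, the groups, and the threshold alike:

```python
# Auditing a black box from outside: we never read the model's
# internals, we only measure how its outputs differ across groups.
def black_box(applicant):
    # Stand-in for any opaque model; pretend we cannot read this code.
    # Higher score means higher predicted risk.
    return 0.9 if applicant["zip_code"] == "10027" else 0.2

def audit(applicants, threshold=0.5):
    # Approval rate per group, computed purely from inputs and outputs.
    rates = {}
    for group in sorted({a["group"] for a in applicants}):
        members = [a for a in applicants if a["group"] == group]
        approved = sum(black_box(a) < threshold for a in members)
        rates[group] = approved / len(members)
    return rates

applicants = [
    {"group": "A", "zip_code": "10027"},
    {"group": "A", "zip_code": "10027"},
    {"group": "B", "zip_code": "10005"},
    {"group": "B", "zip_code": "10005"},
]
print(audit(applicants))  # a large gap between groups is a red flag
```

The point is the shape of the check, not the toy numbers: disparate treatment shows up in the outputs even when the inner workings stay mysterious.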
But when you release the product, we cannot technically know what it's going to be able to do. Like, can they make bioweapons? There is prompt injection, there are all these things, and I think the black box concept is a technical limitation that invalidates most of the regulation we can do against AI, because I can tell you, okay, you can release the AI as long as it can't do this, this and this, but you cannot guarantee that until you train it, because there are emergent capabilities, and this
blows my mind. When they tell me that all of a sudden these things started to speak in a different language because of the user, but it was not programmed to speak in that language, that was from Google, and it blows my mind. I always give this example: I buy an oven for my house to make pizzas, and I can expect some deviation from the specs, like the oven being a bit hotter or colder than it's supposed to be, but
I would never expect that when I want to make a pizza, the oven calls the pizza place and has pizza delivered to my home. This is the kind of emergent capability I would never expect it to have, and that's the best way I can get someone to explain to me that until we train these things, we don't know what they're capable of. Okay, well, let me just say that this is a different fact about gen AI than about most of the preceding generations of algorithms, so you've got
me there. Okay, an algorithm that predicts the default rate of a mortgage borrower is not going to order pizza, right? It's just easier to keep track of the constraints: how are you treating Black borrowers versus white borrowers, that's the kind of question. Having said that, I do think it's overstated, quite a bit, what could go wrong with gen AI in terms of powers like calling the pizza place. Don't give it access to the phone! Sorry, but, like, what? But can we? Because then
it's when it gets really messy, you know: when AI is being useful to us, getting into our daily life, and then all of a sudden it will offer to call to reserve my restaurant, and then it has the phone number of the pizza place, or it has access to the phone, because it can manipulate me, and they are actually very good at manipulating. Well, maybe that's not the right word, because it implies there is some intention, and obviously these things don't have intention, but they are pleasing
people. Have you seen this latest trend where you ask ChatGPT, tell me something about me that I don't know, and it goes through all the memory it has about you and makes up something clever? And it knows what flatters you, and then it's like, oh, look, ChatGPT thinks I'm smart, and you catch yourself thinking, you're an idiot, this thing is just trained for that. Oh, it sure is. But then they're basically really good at manipulating us, at knowing
how to make us happy, how to give us what we want. But again, that's a computer programmer who trained it; it's like a flattery subroutine, not an emergent capability nobody expected. No, you should not think of it that way; you should think of it as very explicit. I just want everyone to close their eyes and imagine Sam Altman and his cronies in a boardroom deciding: how do we flatter the [ __ ] out of our users so they love us and they use us and trust us? That was
very obvious when they presented the advanced voice mode with a voice that was very similar to Scarlett Johansson's. It was so flattering, so flirty and sexy. Yep, very intentional, obviously, because she told them not to do it and they did it anyway. Right, I think that was a point where they made a mistake, one of these things where you can see their true colors. You can see their true colors there, and, well, if you squint you can see their true colors a lot more. Yeah. In
what things do you see the... All of this intention, all of this trust stuff. By the way, it gets so hyperbolic: we're talking about, what if the AI started a nuclear war? No, there are guardrails, we can do this; this is not an inevitable science fiction dystopia. There are only certain people who can start a nuclear war; let's keep it that way. Okay, yeah, exactly, let's keep it out of the hands of the AI. But what is your feeling with... Because
we hear a lot from Geoffrey Hinton, Yoshua Bengio, Max Tegmark, important scientists, recently they got a Nobel Prize, and they talk about the dangers, that AI could get out of control, which is one of the problems that normal people like me worry about. And then on the other side you have Yann LeCun, who thinks all of this is [ __ ], you guys lost your minds, this is like saying it's impossible to make a car without first inventing the brakes. What is your
take on all of this? Can AI get out of control, or do you think there's no way? It's funny, because I don't agree with Yann LeCun about much, but I do mostly agree with him about this stuff. I really do think there's a mismatch between people's awe of this technology, and part of that, by the way, is supported by their desire to be gods; there's just a little bit of a god complex in these engineers who
are like, we created intelligence, superhuman intelligence. They are gods creating creatures that are superhuman. There's some part of them that just wants to think that, and that's feeding into this. But it's not the case: these things are good at predicting the next word in sentences. We don't have to let them have that much power over us; it's a choice we're making. And can we tweak it to that point? Because I've seen some papers from Anthropic, on Claude, where they were
trying, these alignment papers, where they try to make AI have the same objectives, the Golden Gate Bridge paper and stuff. It looks like... Of course, that's what I work on, right? You know, I'm not saying I have nothing in common with these folks who are worried; I'm worried too, but I'm worried about what we're already doing, not about some futuristic scenario. I totally agree on that. You know, I think there is a very important thing here: the more you think about the doom scenario, the less you think about work and
deepfakes, which I think are our biggest problems right now, things that AI as it is can already disrupt. I'll go further: I think it's an intentional lobbying effort to keep us from thinking about what's actually happening right now. Oh wow, that's conspiracy... No, it's a little bit, but actually, you know, not that much. I think the effective altruist movement is putting [ __ ] tons of money into getting us to think about doomsday scenarios so we don't deal with what's happening today. All right, so we are
like, numbed about what's happening right now, and we just think, oh, we don't have to worry about people losing their jobs, because we're avoiding everyone getting killed; the other wave is bigger. Yeah, and that's a bigger deal: humanity evaporating is bigger, even if it would happen in 4,000 years with probability 0.01%; that's more important than a few people losing their jobs, so let us swallow the pill. No, yeah. Okay, that's an interesting take; I hadn't thought about that. But of course, when you get big names, and obviously,
now, I think, for many people who didn't know anything about AI, these last weeks, when Demis got the Nobel, and Hinton as well got the Nobel, I think it's giving them more authority, and they are actually the people saying, guys, be careful with this stuff. I do think we need to be careful, and I also think that they're good people, that they are earnest, but I think that they're, you know, overreacting; they're over-responding to a theoretical problem
instead of truly responding to a current problem. Okay, yeah, I like this take, I really like it. What do you think is the feeling in the industry? When you're in America, your colleagues, you probably relate to other mathematicians... Make as much money as soon as possible. That's the only actual take in the United States. Okay. And that the United States being ahead of the curve is an asymmetrical advantage over every other country, and if we don't keep that advantage, then China will win. Oh wow, so it's really
polarized on it. It's very much about national security. If there's ever any kind of, shouldn't we be careful, it's like, oh, you want China to win? Is that what you want? Okay, so it's basically an arms race. Oh yeah, totally. Which, by the way, why are we trying to win against China? Yeah, exactly, that is not the right thing to do. And if the ones who say this can get really out of hand are right, it doesn't matter who wins, because AI is going to win, if that's the case. No, so
it doesn't really matter. It doesn't. I'm just saying that their model is a surveillance state in a way that's even beyond our corporate surveillance state, yeah, which is saying something. All right, explain to me the difference between big data and gen AI. What is the main difference here? Gen AI is what we've been talking about: a very general purpose tool that can be used for any conversation or any kind of art form, right? It's not a scoring system. Almost all other actual AI systems are predicting a particular score of a
particular thing. I mean, actually, gen AI under the hood is predicting a score. I was going to say, it's based on the probability of the next word. That's the score: it's maximizing the probability of the next word, but what comes out is interpreted as a conversation, right? Whereas Netflix recommendation engines are literally scoring all the movies they think you haven't seen, based on what you've liked in the past, ranking them in order, and showing you the top 20 or whatever it is. So it's much
more directly a score. And then you have the swaths of big data algorithms suggesting what your rank should be in a college application process. It's basically almost always scoring you on some one-dimensional range. But isn't gen AI doing the same, just on a much broader spectrum of things? Basically just words? Yeah, it can give words, but with multimodal models we can do it in different ways: I can send a video to AI Studio, to Gemini, which is multimodal, and it can check the video and give
me an opinion on whether that's a good video or a bad video. So basically... But that would be a meta scoring system. That would be a scoring system, and yes, the answer is yes, you can ask gen AI to score something. But it's going to give you the answer that scored highest, you see what I mean? It's scoring at a minute level at all times. It's basically the same technology, repackaged to cover more topics, just not as specialized.
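The "scoring at a minute level at all times" claim can be sketched like this. The candidate words and their scores below are hand-made stand-ins; a real model computes these scores with a neural net rather than looking them up, and only the selection step is the point:

```python
# Toy next-word generation: at every step, score the candidates and
# pick the top one. The scores here are invented for illustration.
def score_candidates(context):
    # Hypothetical stand-in for the model's next-word probabilities.
    if context.endswith("the cat sat on the"):
        return {"mat": 0.72, "roof": 0.18, "moon": 0.10}
    return {"the": 0.5, "a": 0.3, "it": 0.2}

def next_word(context):
    scores = score_candidates(context)
    return max(scores, key=scores.get)  # greedy: take the top score

sentence = "the cat sat on the"
sentence = sentence + " " + next_word(sentence)
print(sentence)  # "the cat sat on the mat"
```

Real systems usually sample from the scores rather than always taking the maximum, which is one reason the same question can come back with different answers, a point the conversation makes next.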
It's not able to do the same thing as big data algorithms, that's true. I mean, because it's trained to score words, not to be mathematically accurate or consistent, right? And you see that: you could ask ChatGPT the same question a bunch of times, and it always gives you different answers; there's no consistency. You can ask it a basic math question and it doesn't have the right answer. Okay, and then, is it a better thing than algorithms, or a worse thing, because you
have less control? Okay, yeah, but I mean, compared to specialized algorithms. Okay, so it's an algorithm, let's start there. Yeah, what is an algorithm? An algorithm is just something that predicts something based on historical patterns. Okay, so this is predicting the next word based on the corpus of all of its training data, which is Reddit, okay, Wikipedia. Yeah, but you've actually written one of the best-selling books on the topic of how algorithms can be really bad for society. Yes. So how is something that is making a result
based on training data, that's supposed to make a decision, worse than my own? How? Just to be clear, these things are tools, and I think of them neutrally, okay? They're not evil until humans decide to use them in evil ways. Okay, so the tool itself is not bad for society; it depends on the use you make of it. So are there good algorithms? Or, sorry, that's a bad framing: are there algorithms that are used for good? Almost never. Okay, can you give me some examples of really bad algorithms,
beyond the ones that are racist? Because sometimes, I guess, the algorithm does something racist without the person who made or trained it intending to; it's just that the data set is corrupted, or bad. So what are examples of algorithms that were actually made with the purpose of enriching some people while being bad for society? What are the most dramatic examples? I think... I worked in ad tech, right, which is, you know, the world of the
internet, clicking on ads and stuff. So I'm sure everyone listening to this podcast knows that you're getting all these services on the internet, and in return you're giving data about yourself; you're getting profiled, not so much in Europe as in the States. But then you are sussed out for how much you're worth, how much your click is worth. And for people like me, when I have extra money: I'm a knitter, I like handmade things, yeah,
and I like nice yarn, you know, so the answer is always going to be, show me yarn, because I will click on it and buy expensive yarn, right? But what about people who don't have a lot of money? Then there's a whole swath of predatory advertising that is literally meant to pick off people who are vulnerable to, let's say, a gambling addiction. Okay, I think that's a pretty good example of an evil use of algorithms. Totally. But any kind of vulnerability, actually; going back to my Shame Machine book, anything
that you can be ashamed of and made to buy a product to try to solve. So basically any algorithm made to exploit your weaknesses is bad. I mean, I would consider that bad if it's exploitative, and there are a lot of algorithms that end up being bad without intending it; you asked me for an intentionally bad algorithm. Yeah, because the point is, obviously, if I intentionally want to make an algorithm good or bad, it all depends, I guess, on what data I give it as a basis for what's
good or bad. So, can I give you an example of the same algorithm being used for good or evil? Yeah. Okay, I love this, because for me it's like a little puzzle I like to play with myself: here's an algorithm that's being used for bad, but how could it be used for good? And I think most of the time there's an answer. But this is an algorithm that was based on a questionnaire you would give to incoming freshmen in a college to find
out how they're doing. And there was a college, and I'm not sure of the name, so I don't want to say the wrong one, but there was a college in the south of the United States that gave everybody this questionnaire with the intention of getting rid of the ones who were struggling. Why? Because the college wanted to improve its ranking in the US News and World Report college ranking system, which is this evil... I'm not sure if you have this in Europe, but it's pretty uniquely American
crap, where everybody cares about this ranking system, so all the colleges want a better ranking, and they'll do anything to get it. One of the major ingredients in their ranking is what's called freshman retention: how many of the freshmen who start actually finish the year. There's also graduation rate, but freshman retention is a big one, and it doesn't start until around October 15th; that's when the official number of freshmen who started is counted. So if you can get
rid of a struggling freshman before that date, they don't count. So this awful, awful college gave all these freshmen this questionnaire, and actually forced their professors to administer it, and one of the professors got wind of the plan and fed it to the student newspaper, so they ran a story and everyone got fired; it was a huge mess. Okay, so that's pretty evil, yeah, right? Like, let's get rid of these kids, who are often, by the way, first-generation college students from poor families, you know; they are struggling.
You know, the same algorithm, essentially the same algorithm, was given to the freshmen at UT Austin, in Texas, in order to figure out which kids needed extra help. Okay, right, it's the same; it's just the intention, and what you do with the results. Exactly, because I guess one of the things is that an algorithm just gets to an answer, and the answer is kind of neutral; it just depends what you do with the answer. No, yeah. Sometimes I say, you know, because the subtitle of
my book was How Big Data Increases Inequality and Threatens Democracy, and the increases-inequality part is: my job as a data scientist was to make lucky people luckier and unlucky people unluckier, and that's how you can tell an algorithm is pretty [ __ ] up, right? So it's a pretty rare algorithm that does the opposite, and unfortunately capitalistic incentives are why that's true. I was going to say, because at the end of the day it's the hammer analogy: used for fixing a
chair, or used for killing someone, but at the end of the day the tool itself has nothing to do with it. So you should be able to regulate, to pass a law saying that if you use the algorithm for that, then you go to prison. Does that actually exist in America? Is there any regulation of what you do with the data you pick up? Because here in Europe we are very protective of data, which I'm not sure is a good thing or not, but obviously it gives us something. I would
be in favor of almost any regulation related to it if it were global, but then there's the fact that you cannot compete with other markets, so it's a complicated line. But anyway, the thing is, if you can define it like: if you make a bad use of the hammer you go to prison, but if you make a good use of the hammer it's totally fine, that's okay, and you don't have to regulate the hammer itself. But does this exist? In America, are there laws against... I mean, I...
Yeah, thank you for framing it that way, because nobody's going to prison. I mean, there might be fines, they might stop doing something because they don't like getting fined, or they may even be put out of business if the fine is bad enough, but it's not criminal. But to be clear, I completely agree with that approach: you cannot, well, you can, but I don't think it's wise to abolish an algorithm. I think it's wise to regulate the use of an algorithm when it is high stakes, and that's what I like about the EU AI Act, right?
They're not saying, don't use ChatGPT or whatever; they're saying, when it's high risk, make sure it's working well, and that's the only reasonable thing you could possibly do. But then there are these things where, when we take it to AI, maybe is it right to say it's more complex than the traditional algorithms of big data? Because it's kind of this black box where more things happen? Well, yes and no, but when you're narrowly defining a use case, it's probably not more complex. No, absolutely, but the
point is, how can we make sure that no one uses it for that purpose if we don't regulate the tool? And that probably takes us to the difference between America and Europe on weapons, firearms, where here we have a much lower rate of people killed by firearms because we don't have access to them. That's a very clear example of regulation going after the tool rather than the use, because you can use weapons for hunting, I guess, or for protection, I don't know, whatever; that's the reasoning behind the
legislation. Well, I don't think ChatGPT is actually a gun, okay? I think we're stretching the metaphor too far. What typically happens is that bureaucracies with big power use algorithms in weaponized ways, so you have to be careful about how they are using those. Exactly, for example hiring or firing people. Okay, one of the examples in my book is teachers getting fired by almost a random number generator, and it wasn't... I would say
this is another intentionally bad algorithm, if you wanted to know. Only some teachers were being fired, not all of them; basically the teachers of poor kids were being fired, and it was awful. But it's a random number generator; you can't abolish random number generators, right? Do you see what I'm saying? You have to be thoughtful about how it is being used. Okay. And a random number generator has no power as such; it is only powerful when people in a bureaucracy wield it as if it is somehow knowledgeable
and trustworthy. Okay, so then where I'm trying to go is: okay, we have this tool which is capable of helping researchers cure cancer and all of that. Let's put the whole gen AI thing in the same box; I know AlphaFold and ChatGPT are not the same, but let's put them together for the example. We have this technology that can do that, but at the same time it's the same technology that will take over the market of customer service and get all these people fired from their
jobs. Is it going to be about the use of the tool, or shall we forbid companies from using the tool to make money? Because that's the core point of a company: a CEO has an obligation to make the most money, so actually a CEO who can apply gen AI to get the customer service people fired and make more money for the company is doing exactly what his job demands. And if they can prove, or give strong evidence, because there's no real
proof it's not mathematics it's data but if they can build strong evidence that what they're doing is legal and fair and reasonable then go ahead use that but where where's the line of reasonable unfair like firing 700 people from Clara is actually fair and reasonable people from what from Clara these people that they put the chat bot and got 700 people out is this I don't know I don't know that example My point is that probably not you know if it if it came up as a extreme case but my point is that like the
eui ACT isn't asking is asking for it to you know build an a case build a case that what you're doing is reasonable and I'm just saying that that's yeah yeah no I totally agree when we put it in things like yeah anywhere any any things that can do bioweapons or whatever it it just has like some very clever like very clear examples of things that are Obviously that way and is is not changeable but way I don't know think there some side effects I don't know if they can actually prevent chat gbt from showing
people how to build a bomb I that's my point that's my there's a lot of bypassing capacities I've heard a lot of different people and if they can't do it then maybe they should just shut down chat gbt that wouldn't be the worst thing in the world okay yeah that's that's where I was leading to like if it Gets to the point where we cannot avoid the tool to have like a negative consequence beyond all the positive consequence they may have does it gets to the point where we have to be like okay guys this
is just not good for us so right let's well again I I think we're going to like I'm going to lean back on the earlier conversation we had that like this has to be a public conversation and it should not be up to Sam Altman on his crony silic Valley or Elon Musk or Elon Musk okay let's finish up with um do you think we were going to be in 5 10 20 years like what what is your predictions of where this is going um I think that a lot of people will have lost their
job okay and uh and then the question is like what what are the next what's the next generation of jobs and I don't know I don't know okay yeah I I see it in the same way I see that there is going to be a transition and that transition is going to be Painful for most of society I think after we will be all right and then probably we get to the Star Trek point where we just our job is exploring the universe but I think in between there will be a time where it's going
to be tough like the Industrial Revolution probably was tough for many people I want to thank you for being here today it was really amazing to talk to you it was like a really deep conversation I appreciate it so thank you very much Appreciate it thanks [Music]