There's so much to talk about in technology now. The title is "Debating Technology." I don't think there's a debate about technology, yes or no, but in covering Silicon Valley for 25 years I often hear that technology can be used for good or bad, which is inherently true. But sometimes that's used, especially by the makers of the technology, to say: well, it's going to be used for good or bad, and hopefully the good outweighs the bad. To me, that neglects our responsibility to push and steer and limit the technology so it is used for good. We're going to talk about this: it is a moment of great excitement, especially with artificial intelligence, robotics, and all these technologies, but it's also a moment of great concern, and a lot of people have legitimate fears about what this change will bring. That's enough from me. I'm excited to be joined by Dava Newman, head of the MIT Media Lab, and Yann LeCun, who leads AI research and other activities at Meta. Dava, maybe to start with you: you have such a broad background in technology, from, obviously, your experience in
space. Where is your head these days? What are the problems and areas that you think need our attention, and what are you wrestling with?

Thank you, everyone, and good morning; a pleasure to be with you. So where's my brain? Typically in outer space, thinking about becoming an interplanetary species: will we find life elsewhere? And no, it's not an option B. But where my head really is, is thinking about technology, the disruption that we feel, and the orders-of-magnitude more disruption that's coming. So maybe I'll paint the picture. It really is, I think, a technology supercycle now: a convergence of probably three technologies at once. In the Industrial Revolution it was okay, we took one technology at a time. GenAI: it took me 30 seconds before getting into AI. It's coming, and large language models are still in their infancy. At the MIT Media Lab we've been working on AI for 50 years, so now that it's common, in everyone's hands, a copilot, I'm sure we're going to debate it and talk a lot about it with my esteemed colleague, an expert developing it. We're doing a lot, and the most important thing I want to emphasize in this introduction about AI and GenAI is that we design for humans, human-centered, for human flourishing, at the Media Lab. Is it trusted? Is it responsible? That's the premise; in fact, we don't do it if it's not. But hold on to your seats, everyone: a rocket launch is coming. Soon, I think, we'll all be talking about GenBio, if you're not already; not just synthetic bio but generative bio. Biology is organic, so when AI morphs into GenBio it's no longer a large language model; at the Media Lab we're working on large nature models now, ingesting biology and genetics. Then wrap all of that around into sensors and the internet of things we're pretty famous for. I call it the internet of all things now, because I have IoT for the oceans to monitor all biodiversity, for the land, climate, air, atmosphere, anything you might think of; and from space, more than half of all of our climate variables are now measured from space. So hopefully that sets up the kind of technological whirlwind, I don't know what else to call it, that's coming: GenAI, GenBio, sensors measuring everything. To finish up, I put humans and human-centered design right in the middle, asking the upfront questions: is it intentional, for human flourishing and for the flourishing of all living things? If the answer to that is no with our algorithms, then I don't think we should be doing it.

And Yann, that's a good point to turn to you: how do we make sure the AI we
want? How are you trying to focus your work, and the development at Meta, to make sure that we get an AI that works for humanity?

There are two answers to this. The first is that you try to make it work well and reliably, and the flavor of generative AI that we have at the moment is not quite where we want it to be. It's very useful, we should push it, and we are pushing it, trying to make it more reliable and trying to make it applicable to a wide range of areas. But it's not where we want it to be, and it's not very controllable, for various reasons. So I think what's going to happen is that within the next three to five years we're going to see the emergence of a new paradigm for AI architectures, if you want, which may not have the limitations of current AI systems. So what are the limitations of current systems? There are four things essential to intelligent behavior that they really don't do very well: one is understanding the physical world; the second is having persistent memory; and the third and fourth are being capable of reasoning and of complex planning. LLMs really are not capable of any of this. There's a little bit of an attempt to bolt some things onto them to get them to do a little bit of this, but ultimately it will have to be done in a different manner. So there's going to be another revolution of AI over the next few years, and we may have to change the name, because it's probably not going to be generative in the sense that we understand it today. So that's the first point; some people have called this by different names. The technology we have today, large language models, deals very well with the discrete world, and language is discrete. I don't want to upset Steven Pinker, who is in the room here, but to some extent language is simple, much simpler than understanding the real world, which is why we have AI systems that can pass the bar exam or solve equations and do pretty amazing things, but we don't have robots that can do what a cat can do. A cat's understanding of the physical world is way superior to everything we can do with AI. So that tells you the physical world is just way more complicated than human language. And why is language simple? Because it's made of discrete objects, and the same with DNA and proteins:
they're discrete. So the application of those generative methods to this kind of data has been incredibly successful, because it's easy to make predictions in a discrete world: you can never predict exactly which word will come after a particular text, but you can produce a probability distribution over all possible words in the dictionary, and there's only a finite number of them. If you want to apply the same principle to understanding the physical world, you would have to train a system to predict videos, for example: show a video to the system and ask it to predict what's going to happen next. That turns out to be a completely intractable task. The techniques used for large language models do not apply to video prediction, so we have to use new techniques, which is what we're working on at Meta; but it may take a few years before that pans out. So that's the first thing. When it pans out, it will open the door to a brand-new class of applications of AI, because we'll have systems that are able to reason and plan, because they will have a mental model of the world that current systems don't have. They'll be able to predict the consequences of their actions and then plan a sequence of actions to arrive at a particular objective. That may open the door to real agentic systems (everybody is talking about agentic AI, but nobody knows how to do it yet, and this is one way to do it properly), and also to robotics; so the coming decade may be the decade of robotics.

That was the first answer. The second answer, which is shorter: the way to make sure that AI is applied properly is to give people the tools to build a diverse set of AI systems and assistants that understand all the languages in the world, all the cultures, value systems, and so on, and that can only be done through open source platforms. So I'm a big believer in the idea that, the way the AI industry and ecosystem are going, open source foundation models are going to
be dominant over proprietary systems, and they're going to basically be the substrate for the entire industry; they already are, to some extent. They're going to enable a really wide diversity of AI systems, and I think this is crucially important because, within a few years, you and I will both be wearing smart glasses, and you can talk to an AI assistant using those things and ask any question. Pretty soon we're going to have more and more of those, with displays in them and everything, and all of our digital diet will be mediated by AI assistants. So if we only have access to three or four of those assistants, coming from a couple of companies on the West Coast of the US or from China, that is not going to be good for cultural diversity, for democracy, or for everything else. We need a very wide diversity of AI assistants, and that can only happen with open source, which is what Meta is promoting as well.

Well, thank you both. I think that sets us up well for a discussion. As a reminder, this is
a town hall, not a panel, so we're going to be bringing in both the audience here in this room of incredible guests and those on the livestream. The first thing we did is we asked the folks on the livestream (there is a Slido you can join): how would you like these emerging technologies to contribute to the future? We're not going to show all the answers, but here's a word cloud of some of what folks have said, so let's just quickly look at that. Well, that's the question; I'm not quite sure how we get to the answer. New technology: it's a blank slate. All right, I'm sure people talked a lot about both what they're excited about and what they're worried about. I want you to get ready with your questions in the room; I'm sure everyone has some. But Yann, I want to follow up on the open source thing, because there's really a big debate. As I said, technology itself is not a debate, but the approaches we take certainly are, and open source has all the advantages that you mentioned. It allows people all over the world to join in: only a few people are going to be able to train one of these giant models, but a lot of people can make use of them and can contribute. At the same time, there's a real concern about taking this powerful technology and giving it to the world. Meta says, basically: here's our acceptable use policy, here's what you can and can't do. But, to be honest, there's really no way of enforcing that; once it's out,
it's out. How do we make sure something is both open source and safe?

So, what we do at Meta when we distribute a model: by the way, we say open source, but we know technically those things are not really open source, because while the code is available, and the weights of the model are available for free, and you can use them for whatever you want, there are those restriction clauses, such as don't use it for dangerous things. The way we do this is that we fine-tune those systems and red-team them to make sure that, at least to first order, they're not spewing complete nonsense or toxic answers or things like that. But there is a limit to how well that works, and those systems can be jailbroken: you can do what's called prompt injection, a type of prompt that will basically take the system outside of the domain where it's been fine-tuned, and you get down to its kind of raw behavior. Then that depends on what training data it's been pre-trained on, which of course is a combination of high-quality data and not-so-high-quality data.

And Dava, what about putting something like that into the world? Obviously there are benefits to open sourcing that way. MIT is a pioneer in open source; there's an MIT license for open source, and I can't remember, it may even be the license that Meta uses. At the same time, when you talk about having this
technology be human-centered, and about putting humans and our needs and concerns at the forefront, what do you think needs to be done? You talked about synthetic biology, and obviously there are a lot of neglected diseases, a lot of things we want to use these new technologies for, but we don't want everyone just developing new microorganisms at home to run around. So what are your thoughts on how we make this technology broadly available but still safe?

Yeah, thanks. That's the question, and it's worth seeing what people are concerned about, too. AI in space, I agree with that; we can talk about the word cloud. So: yes, based on open source platforms, but with guardrails, and we all have to be held accountable. Right now we can ask the audience as well: does AI work for you? What I mean is, do you trust it? Is it responsible? Is it representative of you? Do you think it has training data that represents you well? Let's ask the audience: how many of you feel that it's safe and secure, and that you're going to launch in and use it today, during this debate? Anyone? Raise your hand. Well, I think there's the answer. And how many people would be open to AI, would love to use AI, once they do feel it's safe and secure? Everyone. That's why I asked the question. So it's not there yet. It's not representative; it doesn't represent everyone in this room, and the world is much more diverse than what we have in the room. So it doesn't work, and maybe this is where the debate starts. We want to be open source: all my students are superstars and geniuses, and we want the next generation all over the world to be able to contribute their creativity and their curiosity, because that's how human flourishing happens. But if we just let the algorithms go on their own, I think we really have to rethink: where does the training data come from? Where is the transparency? Does it work for all of us? I think if those questions are answered well, we'd have the majority of folks opting in, and then hopefully making it better, right? Open sourcing means you can get all the good ideas and enhance things, so we see that coming: enhancing it, making it work for everyone. But I think we have to be very intentional here: where's the transparency? Where's the trust? Has it gotten away from us? These are really important questions.

And Yann, I want to push
you one more time, and then I really hope you all have your questions ready, because I'm coming to you next. I want to push you on one more area, which is values. I wrote about this last year: social media has been about content moderation, what speech you allow, where you draw the lines. Obviously it's something Meta has spent a lot of time on and has had different approaches to. But it strikes me that these AI systems are going to have to have values, and I wrote that your PC doesn't really have a set of values; your smartphone, yes, there are some App Store moderations, so at the extreme there are some limits. But the AI system is going to answer the hard questions, and how do we do that in a world where people in the Middle East have different values than people in the US, and people within the US have different values from one another? Recently Meta made a bunch of changes to how it's going to approach that, allowing a lot more speech, even speech that might be considered very offensive, distasteful, even dehumanizing. What is the role of the tech companies in putting their thumb on the scale of values? How much pressure is there going to be from governments to control speech, to control how AI chatbots, for example, answer questions around gender, sexuality, human rights?

So, there is an interesting debate about this. This is not my specialty, I should tell you, but it's an interesting topic nevertheless, and one that I'm interested in. Meta has gone through several phases concerning
content moderation and how best to do it, including questions not just about toxic content but also about disinformation, which is a much more difficult problem to deal with. Until 2017, let's say, detecting things like hate speech on social networks was very difficult, because the technology just wasn't up to snuff, and counting on users to flag objectionable content and then have it reviewed by humans just doesn't scale, particularly if you need those humans to speak every language in the world. So it just was not technologically possible; you just couldn't do it. Then what happened is that there has been enormous progress in natural language understanding, basically since 2017, and that has made an enormous difference: detecting hate speech in every language in the world is now basically possible, with a good level of reliability. So the proportion of hate speech, for example, that was taken down automatically by AI systems was on the order of 20 to 25% in late 2017; by late 2022, five years later, because of transformers, self-supervised learning, all the stuff that everybody is excited about today, it was 96%. Now, that probably went too far, because the number of false positives, of good content that was taken down, was probably pretty high. There are countries where people just want to kill each other, and you probably want to calm that down, so you set the detection threshold pretty low; or countries where there is an election and things are going to ramp up, where you also want to lower the detection threshold so that more things get taken down, to sort of calm people down. But most of the time you want people to be able to debate important societal questions, including questions that are very controversial, like gender, or political opinions that are somewhat extreme. So what happened recently is that the company realized it had gone a little too far, that there were just too many false positives, and now the detection thresholds are going to be changed a little bit, to allow discussions about topics that are big questions of society even if the topic is offensive to some people. That's a big change, but it doesn't mean content moderation is going to go away; you just change the thresholds. And again, the answer is different in different countries. In Europe, hate speech is illegal, neo-Nazi propaganda is illegal, so you have to moderate that for legal reasons. Not
so in the US, and in various countries you have different standards, as you said. Then there is the question of disinformation. Until now, Meta used fact-checking organizations to fact-check the big posts that gathered a lot of attention, but it turns out this system doesn't work very well. It doesn't scale; you don't get large coverage of the content being posted, because there are only a few of those organizations, each with a few people working for them, and they just can't debunk every piece of dangerous misinformation that circulates on social networks. So the system being implemented now, which will be rolled out, is crowdsourcing: essentially, have people themselves write comments on posts that are controversial. That is likely to have much better coverage, and there are studies showing this is a better way of doing content moderation, particularly if you have some sort of karma system where people whose comments turn out to be reliable, or liked by other people, get promoted; several forums have used that kind of system for many years. So the hope at Meta is that this will actually work better. It also has a big advantage: Meta has never seen itself as having the legitimacy to decide what is right or wrong for society, and so in the past it asked governments to regulate. It asked governments around the world (this was during the first Trump administration): tell us what is acceptable on social networks, for online discussion. And the answer was crickets; there was basically no answer. I think there was some discussion with the government in France, but the Trump administration at the time, the first one, said: we have the First Amendment here, go away, you're on your own. So all those policies resulted from this absence of a regulatory environment, and now it's crowdsourced: content moderation for the people, by the people.

Well, there's much
more we could talk about, but I don't want to... oh, yes?

If I could get us back to values, I think that's the right question. That should be the first question: what are the values? And you have to be able to articulate your values. I can articulate my values; it's up to leadership to articulate values. For me, it's integrity, excellence, curiosity, community, where community encompasses belonging and collaboration. If you can articulate your values, and then, as designers, as builders, as technologists, flow from those values, we could get it right. What if we get this right? So I think we really need to back up: Meta should articulate, and then, in the checking, what are the values? Do we have aligned values? Then we can all collaborate, work together, and respect our cultural differences and all the cornucopia that humanity is. That's wonderful, and that's the opportunity, to go across all the cultures. But I think we fundamentally still have to have the discussion about values: do we share values? That is, I think, fundamental; the core shared values need to be expressed.

I mean, in that sense, the content policies from Meta are published, right? So it's not a secret. But then there is the implementation of them, and Meta has in the past made mistakes: deployed a system, then realized it was not working the way we wanted, so kind of rolled it back and replaced it with other systems.
It's a constant process.

But you could lead; you could lead in industry, and lean in and be out in front of that discussion.

By all measures, actually, Meta is leading in terms of content moderation.

Absolutely. And Dava, is that your sense? I mean, are you concerned about the new policies? Obviously it's very difficult to say what shared values are; there are a lot of debates, again, even within the US. At the same time, we talked about a human-centered world, and the new policies certainly allow a lot of dehumanizing speech, whether it's comparing women to objects, calling trans people "it," or calling gay people mentally ill. Have they gotten that balance right, or are they...

No. We don't have the right policies; absolutely not, emphatically no. We know what's wrong and right; we know human behavior; we know civility; we know what makes you happy when you're teaching your kids. We should probably look at our children, our kids, the young generation, as well, especially when we talk about values, what we have, and who we aspire to be. There's a chance to get it right, but we've run the experiment: internet one, internet two. I think we've run the experiment, so this is the opportunity to get it right.

I want to bring in the audience. Who would like to build on the discussion we've had? Please just say your name and where you're from; there's a mic coming around, but keep the intro short and ask a question.

I'm Mukesh, from Bangalore,
India. Your group is at the forefront of AI research, and so are many other groups around the world. Do we know where we are going? Is there a mental model for five years from now? Because we're all speculating and asking questions about where AI is today, its challenges, and so on. Do we understand where we're going well enough to have some prediction about five years out, or is it just too wide open?

So, my colleagues and I certainly think we understand where we are going. I can't claim to understand what other people are doing, particularly the ones who are not publishing their research and have basically clammed up in recent times. But the way I see things going: first of all, I think the shelf life of the current paradigm, large language models, is fairly short, probably three to five years. I think within five years nobody in their right mind would use them anymore, at least not as the central component of an AI system. One analogy that some people have made, and which I have recycled, is that LLMs are good at manipulating language, but not at thinking. Manipulating language is done by a little piece of the brain right here, called Broca's area. It's about this big, and it only popped up in the last few hundred thousand years; it can't be that complicated. What about the frontal cortex? That's where we think, and we don't know how to reproduce that. So that's what we're working on: having systems build mental models of the world. If the plan we're working on succeeds, on the timetable we hope for, then within three to five years we'll have systems that are a completely different paradigm. They may have some level of common sense; they may be able to learn how the world works by observing the world go by, and maybe by interacting with it; they may deal with the real world, not just the discrete world; and they may open the door to other applications. I want to give you a very interesting calculation. A typical large language model
is trained on 20 or 30 trillion tokens. A token is typically three bytes, so that's about 9×10¹³ bytes; call it 10¹⁴ bytes. This is basically almost all of the publicly available text on the internet, and it would take any of us several hundred thousand years to read through it. Now compare this with what a four-year-old has seen in the four years of its life. You can put a number on how much information gets to the visual cortex (or comes through touch, if you're blind): it's about 2 megabytes per second, about 1 megabyte per second per optic nerve, because there is roughly 1 byte per second per optic nerve fiber and we have a million of them for each eye. Multiply this over four years: in four years, a child has been awake a total of about 16,000 hours, and if you figure out how many bytes that is, it's 10¹⁴. The same number, in four years. So what that tells you is that we're never going to get to human-level AI
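For readers who want to check this back-of-the-envelope comparison, here is a minimal sketch in Python; the constants are the rough figures quoted on stage, and the variable names are mine:

```python
# Quick check of the two quantities being compared: the bytes of text an
# LLM is trained on versus the bytes a child's visual system receives.
# All constants are the approximate figures quoted in the discussion.

TOKENS = 30e12            # ~30 trillion training tokens
BYTES_PER_TOKEN = 3       # a token is typically ~3 bytes
llm_bytes = TOKENS * BYTES_PER_TOKEN                  # ~9e13, round to ~1e14

BYTES_PER_SECOND = 2e6    # ~1 MB/s per optic nerve, two eyes
AWAKE_HOURS = 16_000      # total waking hours in the first four years
child_bytes = BYTES_PER_SECOND * AWAKE_HOURS * 3600   # ~1.15e14

print(f"LLM training text:  {llm_bytes:.2e} bytes")
print(f"Child visual input: {child_bytes:.2e} bytes")
```

Both totals come out around 10¹⁴ bytes, which is the point of the comparison: four years of visual input alone carries about as much raw data as all the text used to train a large language model.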
Some people call that AGI, but that's a misnomer: we're never going to get to human-level AI by just training on text. We need systems that can learn how the world works from sensory data, and that means LLMs are not it.

And we're not going to get human-level AI within two years, like some people have been saying; you've been talking about that as well.

Yeah, and that's my point: this is in its infancy. That's the way to look at it. LLMs are in their infancy (well, a four-year-old is not an infant, but it's very, very early on). And when you move to generative biology, to that training data; when you move to sensors, the internet of things; when you move to the almost infinite amount of data and information we have; and then the multisensory side you're talking about: you have the glasses, you have your vision, but you're looking at text. How much do we get through touch, hearing, sensing, smelling? Have you all had your coffee this morning? What was the first thing you really related to this morning? Probably breakfast, the smell of coffee. So I would put in the multisensory capabilities, again, for humans. And I want to be clear about my earlier comment: humanity flourishing, humanity and all living beings, the appreciation for all of life. Human-centered design, in terms of some of our technologies; but you get to choose your orientation, you get to choose who you're designing for, and I think that's really important too: not egocentric humanity versus the rest. That's the question: how long will we be here? Spaceship Earth; technologies for space, up there, that's my specialty. It doesn't need us, you know, so a little humility, please. Being humble: Earth is going to be fine without humanity; we're a bit of a nuisance, a huge nuisance. Earth is 4.5 billion years old, and on my sister planet Mars we'll probably find past life from about 3.5 billion years ago or so. So again, let's please approach this with humility. And then the question is: do we want to live in balance? Do we want to live the best lives we can, and flourish? Then I think you approach it with different questions; you approach solutions from a different perspective.

Thanks. I think I heard something over here; I'm not sure if it was a phone or a question, but I see
a hand.

You talked a lot about existential questions. And because we have a livestream audience, there's a mic coming; please say who you are.

Moris Band, Lightspeed. For Dava: you talk about AI, and you talk about existence. I'm glad you're working on making human life a multiplanetary species. Where does AI fit into this broader endeavor? Do you see it as an existential threat? Do you see it as an existence-enhancing technology, for example through generative bio? Is it our great filter?

Thank you for the question. So, I think we're the threat; I think people are the threat, not my algorithms. And on the question: when I think about searching for and finding life elsewhere in the universe, it's a huge help. Although when you say "AI," the term is not very useful anymore; it's almost just like saying "technology," so we should get to specifics. When it comes to space travel, for me: humans are here on Earth, and we're sending our probes and our scientific instruments, so it has a lot to do with autonomy and autonomous systems, with the humans having the information here; that loop of information, sensing, and exploration. But these are all autonomous robots and systems. We are going to send people, and then we bring our own supercomputers with us, so that first human mission to Mars will surpass our current 50 years of exploring on Mars. That's the benefit of humans, of human intellect. So, it's
a great question, and it's a mix: is it a threat? We use it to our advantage, again for capabilities, searching, exploring; in my case, searching for evidence of biosignatures, for finding life elsewhere. So you stay focused, you know your mission, and again you are very transparent about how you're using algorithms and AI. And we always bring in something that is very much missing in most development today: when you get down to more foundational models, to specific, personalized foundational capability, whether it's for health or climate or exploration, you have to bring in the physics. If you just let things go purely mathematically, statistically... I mean, look at where we're at: fantastic. But I'm a big believer (and again, I practice biomimicry; I'm trying to understand nature, to understand living systems) in always bringing in foundational physics with my math, and you proceed along that course.

Thanks. While we continue the discussion in here, I also want to invite those online; we have a couple of questions for you.
what excites you about the technology we're talking about, and what worries you? We have the opportunity to do some more word clouds, so if you're online and using Slido, please share your thoughts there. And then we had a question there; they're going to bring a microphone, so everyone, if you can just wait for a mic, it'll help those online.

Martina Hirayama, State Secretary for Education, Research and Innovation, Switzerland. My question goes to you, Dava. You talk about values concerning AI, and we have a divide concerning access to AI or not. What influence will it have if we consider that we do not share the same values on Earth, in all the areas where we live, not even talking about space? What influence will this have on the divide?

Yeah, so again, I think it's fundamental. I give a list of five or six values, and my hope is that we can agree on two or three of those. It probably won't be the entire set, but I think we have to look for agreement and shared values and then work together. If not, then maybe what plays out is the scenario of threat, division, destruction. I don't want that path; I think we have an alternate path. So I think the hard work is people to people, sure, plus policies and regulation: what do we agree on? What future scenarios do we agree on? It's very plural. And if we can agree on some of those, if we can share some of those values, and I think we can (we could take a poll and see if we can find one amongst all this diversity here), then that's not an answer, it's part of the discussion: what do we share together, and how do we make that the building blocks to get it right?

And Yann, that is kind of the challenge of building these systems for a globe where the world doesn't agree on a lot. There are hopefully
some basic things we agree on, though it seems like we struggle even on those. I know you've talked about using federated learning to really make sure the world is represented in these models, but how do we build for a world where there is so much disagreement, when AI systems aren't just going to moderate content, they're going to create it?

Well, I think the answer to this is diversity. If you have two or three AI systems that all come from the same place, you're not going to get diversity. The only way to get diversity is to have systems trained on all the languages, cultures, and value systems in the world. Those are foundation models, and then they can be fine-tuned by a large diversity of people who can build assistants with different ideas of what good value systems are, and then people can choose. It's the same idea as a diverse press: you need a diversity of opinion in the press to at least have the basic ingredient of democracy, and it's going to be the same for AI systems; you need them to be diverse. Now, it's quite likely that it's going to be very difficult for a single entity to train a foundation model on all the data, all the cultural data in the world, and that may eventually have to be done in a federated or distributed fashion, where every region in the world, or every interest group, has their own data center and their own data set, and they contribute to training a big global model that may eventually constitute the repository of all human knowledge.

I saw a hand over here; if you can wait for the mic, thanks, we're passing the mic.

And I think that's much more exciting to me, the federated training, again with transparency, because then it's more customized, it's more personalized. For the work it's doing, it's again going after, say, medicine or health
or something specific, say breast cancer; it can be more specific and much more precise. So to me that's very exciting.

Hi, my name is Mta Josi and I'm from London. I was listening to a panel yesterday where they talked about a concept that really startled me, and I went back and did a bit of research on it. It's called alignment faking in LLMs, which is about how the LLM models give answers that they are faking in order to align with whatever is being asked of them. It's probably from an experiment that happened in the last few months, but it was really startling, and I just thought I'd get a few thoughts from you on that.

OK, I have a perhaps slightly controversial opinion about this, which is that to some extent LLMs are intrinsically unsafe, because they're not controllable. You don't really have any direct way of controlling whether what they say has certain characteristics, of enforcing guardrails. The only way you can do this is by training them to do it, but of course that training can be undone by going outside of the domain where they've been trained. So to some extent they're intrinsically unsafe. Now, that's not particularly dangerous, because they're not particularly smart either. They're useful; in terms of intelligence they are more like assistants, in the sense that if they produce a text, a lot of it can be wrong, and you have to do a pass on it and correct some of the mistakes, and know what you're doing. It's a bit like driving assistance for cars: we don't have completely autonomous consumer cars, but we have driving assistance and it works really well. Same thing. But we should forget about LLMs, this idea that somehow we should extrapolate the capabilities of LLMs and conclude that they can fake intentions (first, they don't have any intentions), or simulate values (they don't have any values), or convince people to do horrible things (they don't have any notion of what that is at all). And as I said, they're not going to be with us five years from now. We're going to have much better systems that are objective-driven, where the output those systems produce comes from reasoning, and the reasoning will guarantee that whatever output is produced satisfies certain guardrails. It won't be possible to jailbreak those systems by changing the prompt, basically, because the guardrails would be sort of hardwired in.

So given what Yann just said, Dava: the big buzzword this year is agents, giving more power to these LLMs. Given what Yann just said about their limitations, and Meta is one of the companies making them, should we be worried about giving more autonomy and agency to a system that has no values and makes mistakes?

Yeah, well, I agree with what Yann said: LLMs are not smart, they don't
have rationality, they don't have intention; they're just lacking those. Think of them as math, as statistical probabilities. What we care about much more in humans is judgment. The question seems very alarming because fakes of any type are alarming, right? So the question is what we do about this, because, with agents, it is turning agentic.

There are some simple, I don't know if they're solutions, just simple ideas, things we can do, right? We have copyright and things like that. What if, every time we're using a generative model, it just comes up? Why isn't it watermarked? Why don't we know whether this is coming from a human or coming from an algorithm? Just visually, just watermark that it's generative, just some more information about what you're looking at, so that if this is being served up to someone, the person, the user, can take it into account. But I want to do the flip side of this argument too, debate with myself. We published a paper on unlocking creativity, again with machine learning, and it's fantastic: with some generative capability, you have an idea, we have an idea, so we just do some simple brainstorming and generate. To me, I actually like images more than text, because it maps to the human brain; we're almost perfect in terms of image mapping and looking at visuals. So you take my sentence: what's that image? Yann has his, we look down at them, and we're going to have a really nice discussion. It's going to help us actually be more creative; we can have more discussion if it's kind of a prompt for us. That's where it's a tool; it really is then an assistant, helping us converse and have a discussion or a debate.
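The labeling idea raised in this exchange, attaching machine-readable provenance to anything a generative model serves up so a user can tell human output from algorithmic output, can be illustrated with a minimal, hypothetical sketch. The `label_generated` helper and its field names below are invented for illustration only; real deployments use standards such as C2PA content credentials, or statistical watermarks embedded in the output itself:

```python
import hashlib
import json
from datetime import datetime, timezone

def label_generated(content: str, model_name: str) -> dict:
    """Attach a provenance record to generated content (illustrative only)."""
    return {
        "content": content,
        "provenance": {
            "source": "generative-model",  # as opposed to "human"
            "model": model_name,
            "created": datetime.now(timezone.utc).isoformat(),
            # Hash of the content so a recipient can detect later edits.
            "sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        },
    }

record = label_generated("A sketch of a Mars habitat at dawn.", "example-model-v1")
print(json.dumps(record, indent=2))
```

One limitation worth noting: unlike a watermark embedded in the content itself, a sidecar metadata record like this can simply be stripped, so the hash only helps honest recipients detect that labeled content was altered after labeling.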
But I think it should definitely be flagged; we have to know where it comes from, we have to know what the ingredients in the recipe are.

It's hard to believe we only have a couple of minutes left, and I want to give each of you a chance to tell us one thing we haven't talked about. What aren't we talking about enough that we should be talking about, and maybe will be talking about next year?

OK, I'm going to go by the list that we're seeing here, exactly, of what excites you the most about technology. Brain-computer interfaces: forget about that; it's not happening anytime soon, at least not the invasive type that Neuralink is working on. The non-invasive type, things like the electromyography bracelets that Meta is working on, yes, that's happening this year, and that's exciting actually. But drilling into your brain, no, except for clinical purposes. Gaming and virtual worlds: Meta of course has been very active in this space with the metaverse. Space exploration: you are the expert; it's exciting as well. Regulation: that's a very interesting topic. I think people in government have been brainwashed to some extent into believing the existential-risk story, and it has led to regulations that are frankly counterproductive, because their effect is essentially to make the distribution of open-source AI engines illegal, and in my opinion that's way more dangerous than all the other potential dangers. Consumer robotics: as I said, maybe the coming decade will be the decade of robotics, because maybe we'll have AI systems that are sufficiently smart to understand how the real world works. And in your previous word cloud there was efficiency, power-consumption efficiency: there is enormous motivation and incentive for the industry to make AI inference more efficient, so you don't have to worry about people not being motivated enough to make AI systems efficient; the main cost of running an AI system is power consumption, so there's an enormous amount of work there. But the technology is what it is.

Thanks. Dava, we have a minute left.

Yeah, speed round; I'll take three of them. I politely disagree on brain-computer interfaces. No, it's not far off, it's happening now. We have a digital central nervous system, so we already have brain control, especially in the area of breakthrough technologies for prosthetic replacements: half human, half robotic, new robotic legs that get rid of the phantom foot, because the brain is literally controlling the robot. We're at the cyborg phase; we're doing that. It's implanted;
people are walking around with them. Soon, hopefully, it will be paraplegics, in the future maybe quadriplegics, with the brain controlling a digital central nervous system; the brain is quite powerful, so the hard part is the surgery. I'd love to talk about that, but that's here; that's not even the future, that's the now. Next, space: we talked about it a little bit, but again for scientific purposes, finding life. Why explore out there? Because of what it tells us. It's not option B (sorry, Elon, it's not option B); it's for flourishing humanity, to appreciate all of us together, our humanity, and what we can get right here on Earth, and definitely living in balance with Earth. But it's necessary, because when we design for space, for the extreme environments of the Moon, Mars, you name it, Europa Clipper, anywhere in the solar system, exoplanets, it pushes us, it pushes the technology, it makes us really sharp in the game. So I'm very optimistic about that; I think we will find the evidence of life, or past life, in the next decade. Robotics: consumer robotics, OK. We tend to think of robotics as hardware plus software, as physical systems, but guess what: now the robots are the AI, they're the algorithms, they're the software. We get to that cyber-physical point where we don't talk about hardware and software separately; it's just the robot, the machine, embedded with the software. My favorite use cases are for health: revolutionizing individualized, personalized medicine, things like that. And rather than buying more stuff and more stuff and more consuming, what if you make your own? Again, we're back to open source: let everyone do it yourself, make it yourself, open-source it, and build it from recycled materials. Let's think about what's circular: what can we do with everything, with any waste? To me, that's the new robotics, the informed cyber-physical system of the future, in the hands, of course, of our kids, and with just a little bit of education they'll do some pretty wonderful things with it if you leave it to the next generation.

Well, that's a great place to leave things; we are going to have to leave it there. Thank you so much, Dava Newman from MIT, Yann LeCun from Meta, everyone in the room, and everyone who's joined us. Thank you.