[Music] Well, well, well, welcome everyone. Here we go. Save that for the good stuff; it's coming up. For those of you that I don't know, my name is Dan'l Lewin, and I'm the CEO of the museum; I've been here for about six and a half years. I'm particularly excited about this program tonight. It lines up wonderfully with our new mission, which I'll mention in a moment, but I wanted to start off by thanking all the members, the trustees, the supporters, the volunteers, and the crew that puts the programs together. We couldn't do it without your support and help, so thanks to everyone. We have pretty much a full house here today, and we're streaming online, so welcome to the audience who's watching from afar.

You'll know, if you're not already aware, that the museum evolved its mission about six years ago. We will continue to collect and preserve, like all collecting institutions do, and we will do it for posterity. But we also care a lot about people, because in the beginning people were computers, and then we invented these things called computers, and now life doesn't exist without them. So the mission of the institution has evolved: to decode technology, meaning the computing past, where we get agency because we have the collection; the digital present, which is a moving target and ever-changing; and the future implications on the human condition. The program tonight is going to speak deeply to that from a personal perspective, and I'm very excited to welcome Fei-Fei Li, Dr. Li, to the program tonight. Her story, from this book, The World I See, is one that speaks very deeply at a personal level, but also at a world-class and definitive professional level, about the implications of computing on the human condition. We do believe that all these technologies can be used for the greater good, and that is our goal at the institution: to help people realize and consider the best use of these technologies for the greater good.

I'd also like to thank tonight's sponsor, the Patrick J. McGovern Foundation; I think we'll put that up on stage. We couldn't do these kinds of programs at scale without them, and the McGovern Foundation has been a terrific supporter. Pat McGovern, Sr.,
who passed away some time ago, was one of the founding trustees of this museum, going way back to Boston, where it was initially established, and it obviously moved to the Bay Area some twenty-plus years ago. So we want to thank them for their support of an ongoing series about AI and the human condition.

So, without further ado: I think you're here because you know a lot about, or you're interested in learning more about, what's going on with AI and the human condition. Let me introduce Fei-Fei with a little bit of her background. She's the Sequoia Professor of Computer Science at Stanford and the Denning Co-Director of the Stanford Institute for Human-Centered AI. She was the director of Stanford's AI Lab, and during a sabbatical she served as VP and Chief Scientist of AI/ML, artificial intelligence and machine learning, at Google, and did an immense amount of research during that stay as well. She's also serving on the National AI Research Resource Task Force, which was commissioned by Congress and the White House. So please, I'd like to welcome Dr. Li to the stage. Thank you.

And Tom Kalil is here to moderate the conversation. I had the good fortune of meeting Tom some time ago, when he was working in the White House Office of Science and Technology Policy in the Obama administration. He worked with both the Clinton and the Obama administrations, has a lot of experience in the policy arena, and also worked very actively on the National Nanotechnology Initiative and the BRAIN Initiative. He's now the CEO of Renaissance Philanthropy, which is a new organization focused on the consideration
of this new renaissance which we're living through, and in many cases creating, in this neighborhood. So I want to thank Tom and Fei-Fei for the program. Join me in welcoming them to the stage. Thank you. [Applause]

Okay, everyone has to rush out and buy this book, and get some for your friends and relatives as well. It's a great read. So, Fei-Fei, we've got to see how nerdy this audience is: how many of you could explain to someone else how stochastic gradient descent and backpropagation work? Raise your hand. Okay, all right, great. So, Fei-Fei, one of the things that you talk about in your book is a little bit about the history of AI. I'm wondering if you could start with what was going on in 1956, and how long the researchers then figured it would take to solve artificial intelligence.

Okay. Well, first of all, thank you. Thank you, Computer History Museum; thank you, Dan'l and Tom, for inviting me. I do want
to say that, for those of you who are celebrating lunar calendars: happy Mid-Autumn Festival! [Applause]

Okay, now let's go back to 1956. That was the Dartmouth workshop, wasn't it? Yes? Okay, I thought that was 1959; my memory has faded. There are real historians in this audience, I know that. So, 1956, a steamy summer at Dartmouth College. The founding fathers of AI, John McCarthy, Marvin Minsky, Claude Shannon... who's the fourth person? There's one more person, sorry. They convened a group of computer scientists, under, I think, a small grant, to discuss the future of computing. At that time, I think, John McCarthy had just newly minted this field called artificial intelligence, and they spent the workshop that summer trying to write a white paper on what artificial intelligence is, what it would do, and how we would solve this problem, focusing really on deductive reasoning: trying to make machines think like humans, answer questions, make decisions. It's been quite a journey, seventy-ish-plus years, and we have seen ups and downs.

You think we're in a hype cycle now? We had hype cycles in the '70s about expert systems. We were really starting to see real applications of first-order logic and expert systems in AI at that time, but then that bubble crashed pretty badly, because it didn't deliver. At that time, I think there were magazine covers talking about robots taking over society, in the 1970s, and that didn't deliver. The funding really drained; funding in both academia and industry drained. I think military or defense funding was still there, but some researchers actually shied away from those funding sources. So by and large the whole field shrank.

And then came the 1990s. I would say there was a quiet revolution that started to happen in the field of AI. The public still sees that period as the AI winter, but I personally think that was the early spring, where the green
shoots were coming up. Exactly; the snow hadn't totally melted. But I think that was driven, in my opinion, by statistical modeling, which, combined with computer programming, we started calling machine learning. The field of AI and machine learning found its language, and that language, through statistics, through machine learning, started to crack open individual fields like natural language processing, computer vision, and speech recognition, and researchers started working in these fields in pretty deep ways.

Personally, I entered AI as a PhD student at Caltech in the year 2000. A lot of the public still thinks that was winter time, but for me, two things happened during my PhD that I think were defining for my generation of AI researchers. One is statistical machine learning. My first class in graduate school, literally the first class I walked into, was called Neural Networks and Pattern Recognition. We read backpropagation papers, but we also did support vector machines, Bayesian networks, boosting methods, kernel methods, and all that. That's one thing that was happening to us; we used these tools to start looking at AI problems like computer vision. But another thing that happened, outside of our labs, outside of academia, that came to have a defining role in AI is the internet, because (I think Google was founded in 1999 or 2000) the internet started to give us data. And of course, GPUs started to come in about ten years later. So things were starting to quietly converge, and I think by around 2010 to 2012, the public moment of AI started to happen. At least in Silicon Valley, the public moment started to happen when Google and other companies were trying to acquire this little startup, which probably didn't even have a name, coming out of the University of Toronto, that had won the ImageNet challenge. And since then, we are in the era of modern AI, the rebirth of AI.

Right. So a project that you worked on played a
very important role in changing people's views about what was possible, and that was ImageNet. You worked with your colleagues to create a dataset of 15 million photos and labeled them. So why did that play such an important role in helping to jumpstart this modern wave of AI?

Right. For those of you who don't know, ImageNet was a dataset project that was started back in 2006, took a few years, and was published in 2009. In 2009, it became the biggest dataset in the AI field. It consisted of 15 million internet images, human-sorted, curated, organized, and cataloged across 22,000 natural object categories. Immediately after we published ImageNet as an open-source dataset, we engaged the research community in an annual ImageNet challenge, asking machine learning researchers and computer vision researchers across the globe to participate in an annual challenge of what we call object recognition. That annual challenge began in 2010, and it led to the moment in 2012 when the first-place winner of that year's challenge was what everybody now knows as AlexNet, work done by University of Toronto researchers including Geoffrey Hinton, Ilya Sutskever, and Alex Krizhevsky. That moment is pretty symbolic in the world of AI, because three fundamental elements of modern AI converged for the first time. The first element is neural networks; this is why Tom was quizzing you on backpropagation, which was the underlying mathematics of neural networks. The second element is big data, using ImageNet. The third element is GPU computing, and at that time it was two GPUs.

The significance of ImageNet is kind of trivial today; everybody knows AI is driven by data. But pre-ImageNet, people did not believe in data. Everybody was working on completely different paradigms in AI, with tiny bits of data.

Sometimes not even that; handcrafted feature engineering.

Exactly, exactly. So this very radical idea we had was: scratch all that, and drive high-capacity models with data-driven methods, to drive generalization in AI. That was viewed with deep suspicion by many people.

Right. And so there wasn't this view that, hey, one way to think about these neural nets is that they're universal function approximators, and if you give them enough examples, they can learn a function that will map between the input and the output. That wasn't the mainstream view?

No, it wasn't.
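The two ideas Tom polled the audience on, stochastic gradient descent and backpropagation, are exactly what make the "universal function approximator" view practical. Here is a minimal sketch in Python/NumPy, purely illustrative and not from the talk: the architecture, learning rate, and target function are arbitrary choices for demonstration.

```python
# Illustrative sketch: a tiny one-hidden-layer network trained with
# stochastic gradient descent (SGD) and backpropagation to approximate
# a simple function, the "universal function approximator" idea in miniature.
import numpy as np

rng = np.random.default_rng(0)

# Data: learn y = sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
Y = np.sin(X)

# Parameters of a 1 -> 32 -> 1 network with tanh hidden units.
W1 = rng.normal(0, 1.0, (1, 32))
b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1))
b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # "Stochastic" in SGD: sample a small random minibatch each step.
    idx = rng.integers(0, len(X), 32)
    x, y = X[idx], Y[idx]

    # Forward pass.
    h = np.tanh(x @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # network output
    err = pred - y                    # dLoss/dpred for squared error

    # Backpropagation: apply the chain rule layer by layer.
    dW2 = h.T @ err / len(x)
    db2 = err.mean(axis=0)
    dh = err @ W2.T * (1 - h ** 2)    # tanh'(z) = 1 - tanh(z)^2
    dW1 = x.T @ dh / len(x)
    db1 = dh.mean(axis=0)

    # Gradient descent step.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# With enough hidden units and enough examples, the fit error shrinks.
mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
print(f"final MSE: {mse:.4f}")
```

The radical ImageNet-era bet she describes was essentially this recipe scaled up: more parameters, more data, and GPUs to run the same gradient arithmetic faster.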
I see. Okay. Now, I thought it was interesting, in your book, that a lot of your more senior colleagues were wondering why you were doing this. So I think this is a good example of: if you believe in something, sometimes you should keep doing it, because obviously it had a huge impact, even if you're not getting the love from your colleagues that you would like at the time.

Yeah, but look, I didn't write it from a negative point of view. I think this is scientific progress: being challenged, whether by your senior colleagues, junior colleagues, or your students. I'm constantly challenged by my students, and I probably have 99 stupid ideas every day, and maybe once in a while one good idea. So it was fine that I was challenged; it was an untested idea. But I guess the flip side of the story, especially for the younger people, is that just because you're challenged doesn't mean you should give up.

So that's the important lesson here. Yeah. So, now, going from 2012 to
2024, what are some of the most important advances that you think we've made in the interim in AI?

Right. Believe it or not, 2012 is also the same year Jennifer Doudna and her colleagues discovered CRISPR; she and I had a conversation about that. In 2012, it turned out, two major scientific and technological breakthroughs came. Okay, so, 2012: since then it's been twelve years, and what has happened? Several things. In the research field, I think AlexNet plus ImageNet was a major moment. It really opened the door for the pioneers, including technology companies like Google, to start doubling down on deep learning. It was the beginning of the deep learning era.

Then I think a public moment came in March 2016, when AlphaGo played the Go master Lee Sedol and won the matches. I think that was the first time the public was aware that machines are powerful enough to challenge humans at tasks that humans tend to think are deeply unique to our abilities. It also introduced a new class of algorithms, reinforcement learning, on top of deep learning. So that was a moment. And between 2016 and 2022, I think there was a gradual increase: just more investment in AI, in big tech, in entrepreneurship. It also coincides with the first glimpse of techlash. The techlash, for a lot of us, happened after Cambridge Analytica and the 2016 election, but it was around the time machine learning bias was being called out, and around the time the first self-driving car fatality happened; the earliest, I think, was around 2017. So we started to have a societal conversation, and tension between excitement about tech and concerns about tech.

All this, I think, culminated at the end of November 2022, when ChatGPT happened. For those of us who are researchers, we kind of saw that it was happening. You might be thinking, oh, she's just bluffing, but I'll tell you why: because, wearing the hat of the co-director of the Stanford Institute for Human-Centered AI, in 2021 we actually founded the world's first Center for Research on Foundation Models, because we saw the GPT-2 results. At that time the public was not aware, but researchers like us, my colleagues Percy Liang, Chris Manning, realized: oh my God, this is going to change things. So we immediately put resources in to form this center. So when ChatGPT happened, we were kind of grateful we had started this, but we were also shocked by the meteoric rise of the
attention. I think the difference between the AlphaGo moment and the ChatGPT moment, in terms of public awareness, is not just the number of people; it's that it was the first time AI was that intimately in the hands of individual users. AlphaGo was not in the hands of any user other than, you know, the Go master, but ChatGPT is at your fingertips. And that was an awakening moment not only for every single individual; it was also an awakening moment for governments. Before ChatGPT, part of our Institute's mission was to bridge the gap between the tech world and the policy world; so, you being in Washington, I would not naturally fly to Washington all the time, but I was going to Washington just to continue the conversation. After ChatGPT, it was like Washington was calling us, asking, what's going on?

Exactly. So I think, really, in these ten years, the public sees this as discrete dots of events; we see it as continuous, just a log-log plot, right? Just more and more investments and
movements. Yeah.

So, is there still a debate within the research community about whether these large language models are stochastic parrots, or whether there's actual reasoning going on? What do you think of that debate?

I understand why you use the phrase "stochastic parrots"; it's specifically coming from a paper that is critical of large language models, and I think it's important to recognize that we do need to criticize these models from different angles, in terms of their capabilities, energy consumption, limitations, bias, and all this. But from a scientific point of view, I would use a more neutral tone, rather than calling it either a god or a parrot. It really is a large model that has so much ability not only to pattern-match and pattern-learn, but also to do prediction, and to make predictions with a very capable demonstration of even some level of reasoning, because it's able to explain to you what things are. I know there was just a new release a few days ago, o1; I personally haven't had time to test it, but it takes reasoning even another step further, at inference time. So I think it is fair to say it has pattern-recognition ability, which some people might call parroting, but it also has some level of reasoning. But I'm always so careful; especially being an educator, my responsibility is to be an honest communicator with the public. I'm always so careful about hyping up what this reasoning is, including some of the more hyperbolic extrapolations about sentience or consciousness. So, you
know, what do you think is likely to happen over, say, the next three to five years? What do you think are some of the biggest limitations of the systems as they currently exist, and what are some of the areas where you think we can make real progress in terms of improving their performance?

Right, Tom, I don't know if you're asking narrowly about language models, or about AI in general.

Yeah, so, for example, there are some people who believe that we can make an incredible amount of progress by just buying more GPUs, buying two million GPUs rather than two GPUs, plus more data, more synthetic data, Transformers, "attention is all you need." So there are some people who believe we can make an incredible amount of improvement just by scaling up the technology as it exists today, and there are other people who say, well, today's version of AI has these fundamental limits, and we're going to have to explore new approaches, like neuro-symbolic approaches or something like that.
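The scaling-up position in Tom's question, like the "continuous log-log plot" Fei-Fei describes earlier, rests on empirical scaling laws: loss falls roughly as a power of data or compute, which shows up as a straight line on log-log axes. A small sketch with made-up numbers (the constants 4.0 and 0.3 are purely hypothetical, chosen for illustration, not measured values):

```python
# Illustrative sketch with synthetic numbers: a scaling "law" of the form
# loss = a * N**(-b) is a straight line on log-log axes, so its exponent
# can be recovered with an ordinary least-squares fit in log space.
import numpy as np

# Hypothetical (dataset size, loss) pairs following loss = 4.0 * N**(-0.3).
N = np.array([1e6, 1e7, 1e8, 1e9, 1e10])
loss = 4.0 * N ** -0.3

# Fit log(loss) = log(a) - b*log(N) by linear least squares.
slope, intercept = np.polyfit(np.log(N), np.log(loss), 1)
b = -slope
a = np.exp(intercept)
print(f"recovered exponent b = {b:.2f}, prefactor a = {a:.2f}")

# Extrapolation is the crux of the debate: does the straight line keep
# going as N grows, or does it bend once data (e.g. internet text) runs out?
predicted = a * (1e12) ** -b
print(f"extrapolated loss at N = 1e12: {predicted:.3f}")
```

The disagreement she addresses next is not about this arithmetic but about whether the fitted line can be trusted far beyond the measured range.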
So, do you have a strong view on that debate?

Well, first of all, all good points. The truth is, I do think we're in a real AI digital revolution, so the next three to five years will continue to be very exciting for technology, but also tension-filled for our society, including policy. Everything you've asked is more on the technology side. So first of all, I fundamentally believe that at every point in human history, technology and science are limited; we can always push the frontier forward. Personally speaking, I'm super excited by spatial intelligence, which is way beyond language. If you look at human and animal intelligence, language is only one piece of intelligence. Even if we're looking at advanced intelligence, humans have built civilization on so much that's beyond language: from the construction of the pyramids, to the intricate design of machines of the First Industrial Revolution, to the discovery of the structure of DNA, to the creation of cinematography. A lot of this is built upon spatial intelligence that goes beyond language. So there are definitely new doors opening other than language.

Technically speaking, on scaling laws for data: we're still seeing very healthy evidence of a scaling law for data, but it's also very intriguing that we're hearing more and more about whether we are running into the limits of data, especially text-based data on the internet. It's very likely we're running into that limit. But where I sit, in higher education, I also see that there are so many pockets of scientific discovery where data hasn't even been properly harvested, from digitization of some of these data to modeling of these data. So I actually think in the next three to five years we're going to see a blossoming of scientific discovery in different fields because of AI and ML, not just the commercialization of large foundation models. We're going to see more spatial intelligence; I'm personally involved in that, and I'm excited by that. And the next three to five years are not just the years of technology; they are also the years of how we deploy these models and how we govern these models. Here in our home state of California, there are AI bills being discussed, and personally, I'm both supportive of safety measures and policy measures, but I'm also concerned that even well-intended bills might have unintended negative consequences for the scientific and open-source community. So all this will play out in the next three to five years, for sure.

Yeah. So I definitely want to come back to the policy issues, but I want to maybe have you describe for the audience a little more
about what you mean by spatial intelligence. What does it mean for a computer to be able to see, do, and learn, and how would we know whether we were making progress in spatial intelligence? You know, one of your colleagues at Stanford, Chelsea Finn, said we're still very far away from being able to have a robot show up at a house it's never seen before and make breakfast, for example.

Very far. I can't wait, but it's very far. This audience is so dark, Tom; I cannot see a show of hands, so I won't ask questions. But if you trace back the development of human language (of course this is still an area of scientific study), roughly the earliest proto-language moment happened in the very early ancestors of humans, about one to two million years ago. That's the earliest you can trace back; a lot of people say the language we use today developed within the last 300,000 years. But if you trace back the ability to see (to see space, the 3D world, to understand what's going on, to see obstacles, to see food, to see how you navigate, to reason about all this), it traces back to 540 million years ago. That's when the animal world, underwater, first developed light sensors, and with that ability, perception began. When perception began, animals started to move in an intentional way. Before that, they were just floating around. They were probably touching a few things, because there were early tactile sensors, but it was very, very chill. But once you can see, literally speaking, once you can see...

Did you get that from your kids?

Yes. Well, I work with young students. Once you can see, you start to develop spatial intelligence. You start to plan your life; you start to see food; you start to hide away from being someone else's food. And that evolutionary process of intelligence just began. So spatial intelligence summarizes all this ability. In today's language, I would say it is the ability to understand, reason, generate, and interact with 3D worlds. Now, we live simultaneously in the physical world as well as digital worlds, so this spatial intelligence applies to both physical and digital worlds. Which ties back to: if you ever want a robot that can come to your house to make breakfast, one of the most important things the robot needs to have is spatial intelligence, because the robot needs to know where's your fridge, where's your stove, where's the egg, how do you crack open an egg and put it in the pan. All this is part of spatial intelligence.

Got it. Why is that so funny? [Laughter]

So, there's a lot of
discussion about this concept called artificial general intelligence, and I'm wondering whether you think that is a useful concept or not. What people usually mean by that is (maybe let's set aside robots for a while, because that's a little farther off) that if you imagined a sort of remote worker, you would be able to have an AI that does every economically useful thing that a human does. So, first of all, do you think that's a useful concept, number one? And number two, some people are saying that's going to happen in three years; do you think that's wildly optimistic?

Right, that's a good question. I have to admit it's such a Silicon Valley question.

That's where we are.

I know. Sometimes in my head I have dialogues with the pioneers of AI, John McCarthy, Marvin Minsky, and also, you know, Alan Turing. He probably would not have called himself a pioneer of AI, because when he was daring humanity with the question of thinking machines, which eventually translated into the Turing test, he wasn't thinking about the words "AI" yet; the term hadn't been invented. But when I have these dialogues with the giants, I think their definition of AI would be very similar: that general capability of intelligence. So if they coined "AI" having that in mind, it is hard for me as a scholar to differentiate the word AGI from AI, because I think they're deeply overlapping. And if you look at when AGI as a term came about, it was probably not even ten years ago, and it came more out of the industry marketing world. There's nothing bad about that, but from an academic, scientific, technological researcher and educator point of view (some of you who have read my book know I use the phrase "North Star" a lot), as scientists, we chase the hardest problems, ones we might never solve in our lifetimes, but they inspire us. And I think the North Star of AI as a field has always been that general capability. So what do I think about the word AGI? Nobody asked me when they invented that word; it's fine. But the AI that we as a field love and still believe in is largely overlapping with that definition.

Now, three years: are we going to achieve that? If I were in front of a venture capitalist, I'd say yes, of course.

But you're not.

And you're not. So I think we need to be responsible. What does that mean? Would machines surpass humans at important tasks? We have already done some of that, right? The DARPA Grand Challenge for self-driving cars was 2005: my colleague Sebastian Thrun and his team's car drove 132 miles through the Nevada desert, and that was an incredible capability. Then we have machines that can translate tens of languages; that's just superhuman. We have so many tasks where machines have already surpassed us.

AlphaFold, is it?

Yeah, AlphaFold, AlphaGo, even ImageNet: 1,000 arcane classes of objects, like the star-nosed mole, or so many species of dogs. These are all superhuman capabilities. So I think we have achieved some, and we will continue to achieve some. But without a clear definition, if it means the holistic sense of being human, being as intelligent as a human, being as intricate and complex as a human, then in three years? I do not believe so.

Okay. So let's talk a little bit about what you're doing at Stanford with this initiative on human-centered AI. First of all, what
do you mean by human-centered AI?

Yeah, that's a great question. Human-centered AI, at this point, for me, is a framework to think about my AI work, and yours, because AI is made by people, is used by people, and will impact people's lives; so what is a guiding framework to think about this technology? In March 2018, when I was still chief scientist at Google, I wrote a New York Times article to put a stake in the ground and call this framework human-centered AI, precisely because I was so inspired by my work at Google. I had the chance to interface with so many businesses, from individual developers in Japan, cucumber farmers using AI, all the way to Fortune 500 companies hoping to use AI to revolutionize their entire business models. I realized this technology is bigger than anything I had imagined; it's going to impact our lives, businesses, and world in such a profound way. And that realization actually sent a chill down my spine. It is scary to think that way; it is scary to realize a tool can be that powerful, and we had better think about the implications. To me, that deep implication has to be grounded in the human implication. Once I thought about that, my colleagues and I at Stanford put that stake in the ground and said we need to approach AI with a human-centered framework.

Now, at Stanford HAI, we think about the human impact of AI in three concentric rings: individual, community, and society. I'll give you an example. Individual really has to do with the fact that every single individual matters: how does this technology impact you or benefit you? If you're an artist, how are you using it to augment your work, or is it taking away your intellectual property? If you're a patient, is this technology helping you heal better without taking away your human dignity? If you're a student, how are you learning anything you're interested in with the help of this technology? So there's individual impact.

Then there's community impact. How can AI be used as a tool to help communities that are under-resourced? For example, AI plus telemedicine is a deeply, deeply good use case for communities that don't have access to hospitals and enough doctors. In the meantime, can biased AI impact one community more than another? We're seeing that already. So that's the community aspect.

Then we have society. Today we cannot stop talking about AI's impact on our democratic process this November: how are AI, deepfakes, and information warfare going to change all this? We cannot stop talking about jobs, from software engineers to truck drivers to radiologists; AI is
impacting the whole society so all these are are human issues so math is clean but human world is messy and AI has exited from only that clean math clean programming world to a messy human world yeah somebody once said technology is easy humans are hard yes especially Little ones yeah what are some of the potential benefits and applications of AI like ambient Health that that you're most excited about right thank you for queuing that because it's chapter 10 of my book but there really it is um really it is um boundless I personally I got
very inspired by spending endless hours sitting in primary care, in emergency departments, outside of surgery rooms, in ambulatory care settings, because I have an ailing parent who has been deeply ill for so many decades; I take care of my mom. And I realized our health care system is full of humans taking care of humans, but all these humans, the health care workers from nurses to doctors to caretakers, don't have enough time and don't have enough help. So ambient intelligence in health care settings really came from a collaboration between me and my collaborators at Stanford Medical School, wanting to use technology to provide an extra pair of eyes and ears to help doctors and nurses and caretakers make sure our patients are safe, or that their situation is not deteriorating rapidly. For example, I don't even want to ask for a show of hands, it would just make me sad, but so many of you have family members and friends who have fallen, and that's a deeply painful and costly injury, especially for the elderly. But how do you predict that? How do you raise an alert? How do you help our elders, or patients, when it's hard to have a human watching 24 hours a day? Computers and cameras can help. Ambient intelligence can help monitor a COPD patient's condition and alert doctors when their oxygen has changed rapidly or some situation has changed. So that's just one example of AI being almost a guardian angel to
be augmenting our caretakers to take care of people. But we're seeing exciting use cases in education too. Personalized learning, right? It is so obvious that AI can be a tutor, can be a teaching assistant to our teachers in different learning environments.
I think one of your former graduate students, Andrej, is working on that.
Yes, exactly, I just saw him a few days ago. But there are a lot of use cases in agriculture, believe it or not. I had a former student who, years ago, before the deep learning revolution started, co-founded a startup using computer vision technology to detect weeds in fields so that it can keep the crops healthier. I've heard of salmon farmers using AI to help farm salmon. The positive use cases of AI are just countless.
Right. And so how can we prepare more people to have both a computational background and domain expertise? For example, in the same way that your colleague Daphne Koller has a machine learning background but has also learned a lot about health care and drug discovery. It seems to me that the people who have a foot in both worlds, both the computational expertise and the domain expertise, are going to be in a position to identify more of these compelling use cases.
That's a great point, Tom. I deeply believe in the interdisciplinary and multidisciplinary approach. Even if you don't want to get a PhD at an intersection (I personally got mine at the intersection of AI and computational neuroscience), or the intersection of AI and computational biology, or the intersection of AI and political science, even if you don't go as deep as a PhD, in your journey as a student there is a lot of value in embracing both computing, the STEM fields, and your areas of passion, whether it's biology or art or policy or chemistry and so on. So for students out there, if you're in school, if you're thinking about college, if you're in college, I do think what Tom said is really valuable: embrace that interdisciplinarity. I think,
zooming out a little bit, AI is the new language of computing. I have been quoted as saying that anywhere there's a chip, there is or will be AI: as small as a light bulb, which has a chip in it, and as big as robots and cars and whatever. So given how important this technology is, I do believe in educating our kids young, educating our students from all backgrounds and all walks of life, and educating our public about this technology. At the very least, if not coding, at least know what this is. But last but not least, I also think that even if your passion is not in computing, in computer programming, or in the technical details of AI, if your passion is in the arts, in political science, in law, in medicine, there is a place for you, because it's the domain experts who will be using AI to make a difference in their domains. So don't be afraid of embracing it from your perspective and using it to make a positive difference.
Yeah.
People list a lot of potential risks, and you've already talked about some of them: people are going to lose their jobs, people will use deepfakes to disrupt elections, we'll be reinforcing existing biases. Some people have more speculative concerns, like this idea of instrumental convergence: if we give an AI system an objective function of trying to achieve some goal, then it's going to have subgoals, wanting to make copies of itself and have access to more computational power. Which of the risks that people talk about do you take the most seriously?
Look, there are many risks. Every powerful technology has created harm, has been used for harm, and even when well intended has had unintended consequences, and we have to face that. But if you are forcing me to pick a risk, as an educator I would say the biggest risk of embracing the new era of AI is ignorance. And it's not just the basic ignorance of not knowing how to spell the word AI; even some deeply learned, knowledgeable people, when they ignore details and nuances and communicate AI in hyperbolic ways, that is a risk to society. If we're too ignorant of this technology, we miss the opportunity of using it to our benefit. If we're ignorant of this technology, we cannot call out or recognize the actual risks. If we are spreading an ignorant message, we are also misleading the public and policymaking. So a lot of the root of these issues actually stems from a lack of understanding, so that we're not assessing risks right, or we're hyperbolically communicating them, or we just completely miss them. That's how I would put it.
And what are some examples of that that you see now, where people are saying things that you think are totally off base?
Well, I think anybody who says AI is all good, as if you could swap in the word technology, that technology is all good, it's only good, it can
never do bad, I think that's an ignorance of the past. Look at human history with tools: every tool has been used in harmful ways. So we have to recognize that if your data set is biased, you're going to have really bad downstream impact in terms of fairness. And if you don't know how the AI is made, you might be so ignorant that you're working with deepfakes without your knowledge. So these are all not good.
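Her point that a biased data set shows up as unfair downstream impact can be made concrete with a small audit sketch. This is not anything described in the talk: the function name, group labels, and numbers below are invented for illustration. The idea is simply to compare a model's positive-prediction rates across demographic groups and flag a large gap.

```python
# Hypothetical fairness audit: compare a model's positive-prediction
# rates across two groups (a "demographic parity" check).
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """predictions: 0/1 model outputs; groups: parallel group labels."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Invented example: the model approves group "A" far more often than
# group "B", a red flag that the training data may be skewed.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
rates = positive_rate_by_group(preds, groups)
gap = abs(rates["A"] - rates["B"])  # roughly 0.8 vs 0.2, a gap near 0.6
```

In practice a demographic-parity gap like this is only one of several fairness metrics, and a large gap is a prompt to inspect the training data rather than a verdict on its own.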
But there's also the other swing, which says this is such a demon that it's an existential crisis, it is going to proliferate itself, replicate itself, turn off, I don't know, power grids and all that. I think that is also hyperbolic, and it ignores that AI is not an abstract concept. It actually lives in physical systems. Even if it's virtual software, digital programs, it lives in physical systems: it lives in data centers, it lives in electric grids, it lives in a human society. There are so many things it is tethered to and contextualized by that the hyperbolic assumption just doesn't hold up.
But some of the people raising these more speculative concerns are people like Geoffrey Hinton, who presumably understands the technology. So why do you think there are people who have been deeply involved in the technology who've gotten more concerned over the last couple of years?
First, I really respect Geoff. I've known Geoff since I was a graduate student. Actually, last year I was in Toronto and had a public discussion with Geoff Hinton on this very issue; it's on YouTube. I think it's one of the very few times that Geoff and I, or Geoff and anybody, had a public discourse about this. If you listen to him carefully, he is concerned, and he's also calling out the potential risks. But there is also a layer of amplification of his concern, and we have to dissociate the two. I totally respect that discussion with Geoff, and I agree with him that irresponsible use of this technology would lead to really dire consequences. He has his version of irresponsible use; I have my own version of irresponsible use. I respect every individual for calling out these risks in their own ways, but I also want to be a responsible communicator and educator. I want to let the public know that it is still our human, collective, individual responsibility to harness and govern this technology. There is absolutely time; everything is in our hands, and we shouldn't give that up.
So you talked about governance, and you've played a very important role in getting this idea of a national research cloud onto the political agenda. If you had an opportunity to brief the next president, and they said, Fei-Fei, what should I do, what advice would you give the next president about the most important things the US government could do to promote the benefits but also understand and manage the risks?
I probably
will say the same thing I said to President Biden last June, and also earlier this year when I met him at the State of the Union speech: I believe our nation needs a very healthy AI ecosystem. And when I say ecosystem, it includes the public sector, academia, entrepreneurship, what we now call little tech, as well as big tech. Our country is a very strong democracy, we believe in the value of this democracy, and I believe that having a healthy AI ecosystem plays to our strengths and can have a very positive role.
But what's something we could do? Public investment?
Yeah, public investment is really, really important. Now that I am partially in the private sector, it makes me even more convinced that the discrepancy between private sector investment and public sector investment in AI is just so huge. My Stanford computer vision lab, shared with a couple of other faculty, has zero H100s. It also has zero A100s. We're still using A6000s and other older chips. And yet big tech has, like you said, hundreds of thousands and millions of chips. I think the public sector is where the gardens of ideas, the flowers, blossom. We wouldn't be here today, and I wouldn't be here, if it were not for the public sector. I mean, when did Geoffrey Hinton start working on artificial neural networks? That's how many decades ago, when he was at CMU, or maybe even earlier. ImageNet came from the public sector. And in the next three to five years, as we talk about scientific discovery, we're going to see exciting things coming out, and a lot of them will come out of the public sector. And the best thing that comes out of the public sector and academia, guess what? It's people. Exactly. So we need to invest in the public sector.
Great. Well, we have a very smart audience, so I'm sure you've come up with lots of really good questions. Let's see. One was for your new company: how will you collect enough data to build a spatial map of the world to support real-time localization? So
you might want to address the premise of that question, but clearly you're not going to be able to make progress on spatial intelligence in the absence of data, so maybe you could address that.
Right. We are not publicly discussing the details yet, because we're not ready; when we're ready, we will. I'm a little amused that this person already knows what we're building. That's their version of the story, and I'm not commenting on it. But you're right, AI is driven by data, and data is important. Our company's spatial intelligence is absolutely pixel based, so a lot of pixel data will be driving this technology.
Right. Here's a great question from Amy, and this relates to something you worked on, AI4ALL. She says: I'm a 12-year-old middle school student. What can we do to encourage more girls to study AI and better prepare for the AI era?
Great question. I think every 12-year-old should be encouraged to embrace this, no matter if you're a girl or a boy, whether you live in the rural world or in Silicon Valley. If you love it, embrace it. And for Amy, as well as thinking back to when I was 12 years old: there was no AI, or at least I didn't know there was AI. I loved math, I loved physics. The one thing I'm grateful for today, that my parents and my teachers did for me, and I will say it to Amy and all the students out there, is: follow your passion, follow your curiosity, and be resilient. If there are negative voices, just tune them out.
There are plenty of people, from your parents to your teachers to your friends to your role models, out there to support you. Just keep doing it, keep going.
What's the most important human problem to solve with spatial intelligence?
Beyond making breakfast and lunch? No, just kidding. Spatial intelligence really can power many things, from creation to design: how many of you would want an app where you can just imagine rearranging all your furniture? To robotics, to AR/VR, to specific areas, whether it's teaching and learning, health care, factory manufacturing, and all that. It really is a deeply prevailing, horizontal technology that can impact all of these areas.
We have a question about the combination of small models and AR glasses. Is that something you've thought about?
I'm definitely excited by the new media. I know this is early; again, we're in Silicon Valley, and I'm sure many of you stayed up late to buy the Vision Pro. I was very excited, actually, that Apple called it spatial computing, because at that time I had already been thinking about spatial intelligence for many years, and I was like, yes, because spatial computing needs spatial intelligence. And this form factor of glasses, possibly headsets, but glasses especially, is very exciting to me. Edge compute, or small models, is very exciting too, but small models can be useful not just for glasses and headsets. They're very powerful for edge compute in general, whether it's smart devices or robots, especially home robots: you cannot carry a server
in the trunk, right? So there's a lot of use for small models.
Yeah. I'm very interested in the role that multimodal models and smart glasses could play in workforce development. We don't have enough electricians, for example, so you could imagine earbuds, AI, and smart glasses providing just-in-time, just-enough training as part of an apprenticeship program. What can the research community and companies do to address the fact that languages other than English are underrepresented?
This is a great question, and it goes to data bias and all of this. First of all, when I say public sector investment in AI, I think every country should have public sector investment, and that itself is tied to the local culture and local language. From that point of view, it's important that individual researchers pay attention, but it's also important that governments and big organizations that can deploy large amounts of resources pay attention to this. It's absolutely true that English is dominating, and we should be aware of that. And this goes back to my point about public sector investment, even in this country: I'm sure we have incredible researchers and students out there thinking about other languages, but right now they're lacking data sets and lacking compute resources, and we need to fix that.
So there were some philosophical questions from the audience, and I'm wondering if you can talk about
what sort of effort you've made to engage people in the humanities and the social sciences at Stanford, and what are some examples of insights they've been able to provide that have been interesting to you as a computer scientist?
Actually, this has been the most fun part of my last five years establishing and co-running this institute: reaching across the campus. Stanford particularly has about eight schools, from the law school, the business school, the medical school, now the sustainability school, the humanities and natural sciences school, to the engineering school. Just talking to colleagues and reaching out to students, researchers, and scholars across the campus is extremely fun and illuminating. What have I learned? For example, talking to my humanities colleagues really opened up my understanding of human expression and creativity and what they mean, and how we think about AI's relationship with people who are deeply creators. Especially when ChatGPT and Sora came out: from Hollywood's writers' strike, to the concerns about voices, about artists, about individual copyrights, all the way to artists who were at the avant-garde of embracing this tool. It's just so complex; I didn't have a formal education to even wrap my head around this, and they teach me in thinking about it. One thing I did learn, and this audience is probably deeply technical: I think it's really important that technologists listen and reach out to humanists and social scientists, and also, in your own work setting, to legal, to product, to marketing, to many different functions, because technology doesn't live in a vacuum. It takes a complex human effort to make technology benevolent and good. Going in with that humility and respect, and giving the other side the dignity they deserve, is really the most fundamental thing we can do to form these bridges.
How important is it that we make progress in areas
like explainable and interpretable AI?
That's a great question. By and large it is important, but it's important that we get a little nuanced, because even explainability has different layers. For example, everybody knows Tylenol is good for fever and headache. Explain to me the molecular pathway of Tylenol; in fact, even today scientists don't know all the details. Yet you would never say Tylenol is an inexplicable drug, because there is a whole system around drug development, around regulatory measures, around the approval process of a drug, that provides enough of an explanation that most of the public is convinced and feels trusting. That's one kind of explainability. Another kind of explainability: for example, Tom, you drove from Lafayette over here. If you put the trip into Google Maps, it'll give you choices, right? This route has tolls but it's 4 minutes faster; this route is scenic. I don't know if there's any scenic route from Lafayette to Mountain View right now. Honestly, that doesn't explain to you the algorithm from point A to point B, but as a human user, you feel there is enough explainability in terms of your choices. And again, back to medicine: hardly any of us who are not doctors understand a treatment, yet your doctor uses a certain kind of human language to explain to you what the treatment is. I'm spending time on these examples to share that it's important to think about the use case, and it's also important to think about the definition of explainability; the particular definition and the particular use case really need to match. Sometimes we don't need the mechanistic, molecular-pathway-level explainability; sometimes we need a different explainability. So to answer your question: it is important, but it depends on the use case, and it's important in different ways.
Well, we have a lot of people in the audience who would like to know more details about your business plan for World Labs, but we'll skip those questions.
So those are the VCs in the audience.
Yeah. So there's a question: you
mentioned that in addition to studying AI you also studied neuroscience, and some people are interested in the question of what AI can learn from neuroscience. Convolutional neural networks were at least loosely inspired by how the human visual system works, and people have looked at the dopamine reward circuit as a source of inspiration for reinforcement learning. Are there other areas where you see potential collaboration between neuroscience and AI? Clearly Mother Nature has figured out something about low-power computing, because our brain only uses about 20 watts.
Right, exactly: dimmer than any light bulb in this room. When we founded Stanford HAI, one of the three major research pillars was neuroscience. The cross-disciplinary collaboration between neuroscience and AI is, to me, foundational to the advance of our field, and to the advance of both fields going forward. And I'm very lucky to work with colleagues like Surya Ganguli and Mike Frank and Noah Goodman; a lot of colleagues at Stanford are at the forefront of this interdisciplinary research. For example, young children's development: in their early days, very young children do a lot of curiosity-driven learning. How does that translate to AI systems? That's one inspiration. We also know that backprop is a very, very simplistic translation of what's going on between two neurons in our brain. In addition to synaptic connections, there are a lot of dendritic connections that are deeply electrical, chemical, and very nuanced. No machine learning algorithm today has incorporated any of these complicated, interesting synaptic and neuronal communication channels. And the flip side is also true: our neuroscientist colleagues, whether they're using animal models or cellular models, are collecting massive amounts of data, and machine learning, or AI, is a fascinating way of helping them discover their science. Last but not least, even in my lab, I find it fascinating that we're now collaborating with psychologists and using non-invasive electrical EEG waves from humans to drive robots, completely non-invasively. So the point is, these two fields have a lot of cross-pollination, and to me it's one of the most exciting areas of interdisciplinary research.
Well, Fei-Fei, we have enough questions to keep you here until 10, but please join me in thanking Fei-Fei for a terrific interview.
Thank you. And remember, The Worlds I See.
Yes, that's for sure. So thank you both very much; this was terrific. There are so many takeaways that I have personally. The public
support is really one that's so fundamental, I think, and Tom, you had a lot to do with some of those things in government. I think without it we're going to be at a loss at this stage, because so much of this is tied to the societal implications. The person you were looking for, the fourth person at the Dartmouth conference, was Nathaniel Rochester, who was working at IBM at the time. And I wanted to tell all of you who haven't been in the exhibits lately: there's a Hollerith machine downstairs, a machine that Herman Hollerith built in response to a public call to solve a problem the US government had, which was to tabulate the census of the 1890 era, because the techniques by which the addition was done would not allow the census to be counted in due time, given the population growth. Through the combination of a public call and private initiative, he came up with a machine based on the punch card, which had been designed and built for the Jacquard looms of the industrial revolution to store the patterns by which all those fabrics and drapes were woven. So whether it's DARPA funding or whatever, there needs to be a societal call for this, and if not now, I don't know when. You laid out some wonderful thoughts for everyone on stage. I'd like to thank you both again; please join me one more time. And then don't go away. Thank you.