Even as an AI scientist, I feel that I can hardly catch up with the progress of AI. There's a quote from the 1970s about AI: the most advanced computer AI algorithm will still play a good chess move when the room is on fire.
>> Dr. Fei-Fei Li is a professor of computer science at Stanford University as well as the co-director of the Stanford Institute for Human-Centered AI. We're going to discuss how she's creating eyes for AI with computer vision. >> There's just so much public discourse about AI, and much of it is ill informed, and that's dangerous.
>> Everything that has consciousness has eyes. If AI starts to have eyes, wouldn't it just be that they're living and sentient at that point? >> AI as a technology can be [music] used by bad actors.
So from that point of view, I do have fear. It can go very wrong. If you don't know anything about AI, it is important to educate [music] yourself. >> What's up, Yap gang?
Welcome back to another episode of our AI vault series. Joining me today is none other than the godmother of AI, Dr. Fei-Fei Li. She's a Stanford professor, co-director of the Stanford Institute for Human-Centered AI, and a pioneering AI scientist.
Dr. Li believes that AI is a powerful tool to help us solve important problems, and she believes that AI should empower and enhance our human well-being. In this conversation, we'll talk about how computer vision models are trained, what they can and cannot do, and why ethics in AI isn't optional. It's essential.
You'll hear stories of how AI is already saving lives, spotting disease, and even helping in rescue missions, but also where we face risks and what guardrails we need if AI is going to work for people and not against them. So grab your coffee, grab your notebook, settle in, and join me for this incredible conversation with the godmother of AI, Dr. Fei-Fei Li herself. >> Thank you, Hala.
I'm very excited to join this show. >> Likewise. I'm so honored to talk to somebody like you, given all your credentials.
In fact, Wired named you one of a tiny group of scientists, perhaps small enough to fit around a kitchen table, who are responsible for AI's recent remarkable advances. So it feels like AI is changing every day. There are new developments all the time.
So my first question to you is can you walk us through the development of AI? Like what can it currently do now and what can't it do right now? >> Yeah, great question.
It's true. Even as an AI scientist, I feel that I can hardly catch up with the progress of AI, right? It is a young field of around 70 years old, but it's progressing really, really fast.
So what can it do right now? First of all, it's already everywhere. It's all around us.
Another name for AI that is a little less of a hype name is machine learning. It's really just mathematical models built by computer programs, so that the program can iterate and learn to make the model predict or decide on data better. So it's fundamentally machine learning.
For example, if we shop on the Amazon app, the kind of recommendations we get is through machine learning or AI. If you go from place A to place B, the algorithm that maps out the path for you is machine learning. If you go to Netflix, the recommendations are machine learning.
If you watch a movie, there is a lot of machine learning, computer vision, and computer graphics used to make the special effects and animations. That's machine learning. So machine learning and AI are already everywhere.
What can it not do? Well, no machine today can help me fold my laundry or cook my omelette. It cannot take away complex human reasoning.
It cannot create the way humans create, in the combination of reasoning and logic but also beauty and emotion. There's a quote from the 1970s about AI, and I think that quote is still true today. It says that the most advanced computer AI algorithm will still play a good chess move when the room is on fire. It's a quote to show that machines are programmed to do tasks, but they're unlike humans.
We have a much more fluid, organic, contextual, situational awareness of our own thinking, our own emotion, as well as our surroundings. And that is not what AI is today. >> So insightful. And I love that you said that it's sort of like an evolution of machine learning, because I always wondered, well, what's the difference between machine learning and AI?
It sounds pretty similar. So machine learning was almost like the basics of AI... >> The tool of AI. AI is, you know... think about physics, right?
In Newtonian times, the most important tool of physics was calculus. And yet we call the field physics. So artificial intelligence is a scientific field that is researching and developing technology to make machines think like humans.
But the tools we use, the mathematical and computer science tools, are dominated by machine learning, especially neural network algorithms. >> So good. So AI is actually fresh on my mind, because two days ago I interviewed Dr. Stephen Wolfram.
I don't know if you know him. >> Mathematica. >> Yeah. He did the Wolfram project and its computational language.
Wolfram, yeah. So I just interviewed him and we talked about ChatGPT and how ChatGPT works.
And he was explaining to me that when they were developing ChatGPT, what was surprising is that they found that simple rules would create all this complexity: they could give ChatGPT simple rules and then it could write like a human. And it turns out that we actually still don't really understand how AI learns, which to me is mind-boggling. How did we create something and yet we don't even know how it really works?
Can you elaborate on that a bit? >> Yeah. Really, at the end of the day, there are things we understand and there are things we don't. So it's not that we completely don't understand.
So it's neither a white box nor a black box. I would call it a gray box, and depending on your understanding of the AI technology, it's either darker gray or lighter gray. So what we do know is that it is a neural network algorithm that is behind, say, a ChatGPT model or a large language model.
Of course you hear the names of transformer models, sequence-to-sequence, and all that. At the end of the day, these models take data, like document data, and learn how the words, and sometimes even subwords, right, parts of words, are connected with each other.
There are patterns to see, right? If you see the word "how," it tends to be followed by "are," and then that tends to be followed by "you." So "how are you" is a frequently occurring sequence.
So that pattern is learned, and once you learn enough of them in a huge neural network, your ability to predict the next word when you are given a word is amazingly high, to the point that the model can converse more or less like a human. And because the training data contains so much knowledge, whether it's chemistry or movie reviews or geopolitical facts, it has memorized all of it, and so it can give out very good answers. So those are the things we know. We know how the algorithm works.
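The "how are you" pattern described here can be sketched as a toy next-word predictor. To be clear, this is a hypothetical illustration: a real large language model learns these statistics through billions of neural network parameters, not a lookup table of counts, but the intuition of "frequently seen continuations win" is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "trillions of documents" mentioned above.
corpus = (
    "how are you . how are you . how are they . "
    "how is he . how is she . how are you ."
).split()

# For each word, count which word follows it and how often.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequently observed next word."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("how"))  # "are" follows "how" more often than "is" does
print(predict_next("are"))  # "you" is the most common continuation of "are"
```

The predictor has no idea what any word means; it only tracks frequencies, which is why "how" yields "are" here.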
We know it needs training. We know that it's learning and predicting patterns. What we don't know comes from the fact that these models are huge; they have billions and billions, hundreds of billions, of parameters.
And inside these models there are these little nodes. Each one of them has a little mathematical function that connects it to the others. So how do we know exactly how these billions and billions of parameters learn the pattern, where the pattern is stored, and why sometimes the model hallucinates a pattern versus gives out a correct answer?
There is not yet a precise mathematical explanation. There's no equation that can tell us, oh, I know exactly why at this moment ChatGPT gives you the words "how are you" versus "how is he." So that's where the grayness comes from. These are large models with behaviors that are not precisely explained mathematically. >> So from my understanding, these neural networks are made to sort of replicate how the human brain works. Basically there's... >> I would not use the word replicate.
They're inspired. There is resemblance to the human brain. For example, both are made of small neuron-like nodes.
They're connected in hierarchies. But human brains fundamentally work in a chemical, electrical way. The way neurons communicate is very complex.
Sometimes it's through spikes. Sometimes the spike also releases chemicals. There are these kinds of nuanced functions, and also the connectivity, how one area of the brain is connected to others, is not the same as in a neural network. So we're inspired, but not replicating.
>> That's a really helpful distinction right there. Yes. >> So talk to us about how AI models are trained. Like, how does AI learn, typically?
So typically an AI model is given a vast amount of data, and some of the data is labeled with human supervision. For example, if I give AI models millions and millions of images, some labeled cats, dogs, microwaves, chairs, and so on, they learn to associate the patterns with the labels. Sometimes, especially recently in the language domain, there is what we call self-supervision: you give the model millions and millions, trillions, of documents, and it just keeps learning to predict the next syllable, the next word, because all the training data is showing it these sequences of words. There you don't have to give additional labels, you just give it the documents, and that's called self-supervised learning. So whether it's supervised with additional labels, or supervised without additional labels, which is self-supervised, it starts with data.
Now the data goes into the algorithm, and the algorithm has to have an objective to learn. Typically in a language model the objective is to predict the next syllable as accurately as the training data shows you. In the case of, say, images with cat labels, it is to predict an image that has a cat with the right label, cat, instead of the wrong label, microwave.
And then, because it has this objective, if during training it makes a mistake, you know, if it didn't predict the next word right or it labeled the cat wrong, it goes back and iterates and updates its parameters based on the mistake. It has some mathematical rules, or learning rules, to update, and it just keeps doing that until humans ask it to stop or it no longer updates, whatever the stop criteria are. And then you're left with a ginormous neural network that's already been trained on a ginormous amount of data, and in that neural network are all the mathematical parameters it has learned. Now you can take this, and when a new sentence comes in, it goes through the model, and because of all the parameters it has learned, it predicts what it should say given the new sentence, like "Hello Hala, how is your breakfast today?" and it would predict "I had a great breakfast today," or whatever. So that's how it's going to be used.
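The train-and-update loop just described, make a prediction, measure the mistake, update the parameters, repeat until a stop criterion, can be sketched in miniature. Assume a hypothetical one-parameter model that learns to map an input x to a target y = 2x; real models run the same recipe over billions of parameters.

```python
# Toy training set: inputs x with targets y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0              # the single "parameter" the model will learn
learning_rate = 0.05

for step in range(200):                 # keep iterating...
    for x, y in data:
        prediction = w * x              # objective: predict y from x
        error = prediction - y          # how wrong was this prediction?
        w -= learning_rate * error * x  # update the parameter from the mistake

# After training, the learned parameter generalizes to unseen inputs.
print(round(w, 3))   # the parameter has converged near 2.0
print(w * 5.0)       # prediction for a new input x = 5
```

Here the update rule is gradient descent on a squared error; the fixed step count stands in for the "stop criteria" mentioned above.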
>> Mm, so it's so interesting. Basically, ChatGPT is just predicting the next word and the next word and the next word based on all the different patterns, and trying to figure out what makes sense to come next. So that's super clear. What I don't understand with something like ChatGPT is that it's so good at writing human language, but it's known to make simple math mistakes, right? How is it possible that it's good at human language, but on math, for example, it's known to make stupid mistakes? >> It's because the way we do math in the human mind is different from the way we do language. Language has a very clear pattern of sequence to sequence. If I say the word "how," the word "are" typically follows, but sometimes it doesn't, right? So I have to learn these patterns. But if I say "one plus," it's not that "five" typically follows or "two" typically follows. There is actually a deeper rule that 1 + 2 equals 3. Of course, when it has seen enough of that, it should predict three, and for today's language models it actually does. This is too simple an example, but the point is that math takes a higher level of reasoning than just following statistical patterns, and large language models by and large follow statistical patterns, so some of the mathematical reasoning is lacking. >> Hey, Young and Profiters, I got a question for you. Are you still running your business on a clunky old phone system? Because that's like competing with one hand tied behind your back. I mean, I've been there.
Missing calls, missing deals, but not anymore. Quo, formerly OpenPhone, is the modern way to run your business communications. It's rated the number one business phone system for 2025, with over 3,000 reviews on G2.
You can run everything from one app on your phone or computer. Your team can share one number, like a shared inbox, respond faster, and keep every customer happy. And when you finally log off, Close AI steps in, taking calls, logging notes, and even qualifying leads after working hours.
It's like having a 24/7 assistant that never sleeps. Quo is offering my listeners 20% off your first 6 months at quo.com/profiting.
That's quo.com/profiting. You can even keep your existing phone number for free.
Quo, no missed calls, no missed customers. What's up, Yap gang? If you've ever had to hire, you know the stress of waiting too long to fill a role.
Projects slow down, workloads pile up, and frustration sets in. That's why when it comes to hiring, Indeed is all you need. Other job sites make it tough to get noticed, but Indeed sponsored jobs helps you stand out and hire fast by putting your post right at the top where relevant candidates can actually see it.
And it works. Sponsored jobs get 45% more applications than non-sponsored ones. Plus, you're not locked into contracts or monthly fees.
You only pay for results. And here's how fast Indeed really is. In the minute I've been talking to you, 23 hires were made on Indeed worldwide.
There's no need to wait any longer. Speed up your hiring right now with Indeed. And listeners of the show will get a $75 sponsored job credit to get your jobs more visibility at indeed.com/profiting.
Just go to indeed.com/profiting right now and support our show by saying you heard about Indeed on this podcast.
Indeed.com/profiting. Terms and conditions apply.
Hiring? Indeed is all you need. So, you've got a new book. It's called The Worlds I See, and you say that the worlds you see are in different dimensions.
So can you talk to us about why you titled the book this way? >> Yeah, this title came about after I finished writing the book, and I realized the journey of writing the book was really peeling into different experiences. There is the world of AI that I experience as a scientist.
The book is a coming of age of a young scientist. So I experience the world of science in different stages. But there is also the world as an immigrant, right? I go through life in different parts of the world, and how do I handle or go through that?
And then there is a more subtle but profound world: learning to be a human. I know this sounds silly, but especially in the context of an AI scientist, a really important part of the book is exploring my journey of living with and taking care of my ailing parents, and how that experience built my own character, how we helped each other and supported each other, and, towards the end of the book, how that experience made me see my science in a different light compared to maybe other scientists who haven't had this very profound human experience. So it really is different worlds that I experience, and it's all blended into the book.
>> I love that. And I love how you call it a science memoir. And so, you say that you're involved in the science of AI, but you're also involved in the social aspect of AI.
So, what do you mean by the social aspect exactly? >> I started in AI as a very personal journey. It's just a young science nerd who loves an obscure, niche field that nobody knows, but I was fascinated in a private way: how do we make machines think, how do we make machines see? And I was happy, and I would have been content with that for the rest of my life, honestly. Even if nobody in the world had heard of AI, I would be happily in my lab being a scientist.
But what really changed is around 2017, 2018, I felt like I as a scientist, and the tech world, woke up and realized, oh wow, this technology has come to a maturation point where it's impacting society. And because it's AI, it's inspired by human thinking.
It's inspired by human behavior. It has so many human implications at the individual level as well as the societal level. So as a scientist, I feel I was thrust into a messier reality that I had never really realized.
Now I have a choice. A lot of my fellow scientists would just continue to stay in the lab, which I think is very admirable and respected, and stay focused on the science. But my other choice is to recognize that as a scientist, as an educator, as a citizen, I have social responsibility. First, I need to educate young people, and while I can teach them equations and coding and all that, I also want to share with them what the social implications of this science are, because it's my responsibility. Second, I also have a responsibility to communicate with the world, because even starting quite a few years ago, and now it's even worse because of large language models, there's just so much public discourse about AI, and much of it is ill informed, and that's dangerous, right? That's unfair. That's dangerous. It tends to harm people who are not in positions of power, and I have a responsibility to communicate. And then third, I also feel Stanford, especially as one of America's higher-education institutions, has a responsibility to help make the world better, to help our policy makers, to help civil society, to help companies, to help entrepreneurs, to educate, to inform, and to give insights. And all of this is the messiness of meeting the real world.
And I feel I shouldn't shy away from that. I should take on that responsibility. >> Yeah, for sure.
You're one of the most knowledgeable people about AI. We need you to tell us what the roadblocks are that we need to look out for, and how we can make sure that we use AI for good and not for bad, and take the steps to do that. So let's talk about computer vision next.
So you are a computer vision AI scientist. So what first got you interested in this and what is computer vision AI? >> Yeah.
[gasps] Well, in one sentence, computer vision AI is the specific part of AI that makes computers see and understand what they see. And this is very profound. When humans open our eyes, we see the world not only in colors and shades, we see it in meaning, right?
Like I'm looking at my messy desk right now. It has cell phones. It has, you know, a cup.
It has a monitor. It has my allergy medicine, and it has a lot of meaning. And more than that, we can also construct. Even if we're not the best artists, humans since the dawn of civilization have been drawing the world, sculpting the world, building bridges and monuments, creating the visual world. So the ability to see, visually create, and understand is so innate in humans, and wouldn't it be great if computers had that ability? That is what computer vision is.
>> So interesting. And you know, when I think about consciousness, everything that has consciousness has eyes. And this always freaked me out.
Like bugs have eyes, fish have eyes, and the eyes look like our eyes. Like fish eyes look like our eyes. And that's so scary and weird.
The fact that all these living things have eyes. If AI starts to have eyes, wouldn't it just be that they're living and sentient at that point? >> So first of all, Hala, you touched on something really, really profound, because visual sensing is one of the oldest senses, evolutionarily speaking.
About 540 million years ago, animals started developing eyes. It was a pinhole, you know, that collects light, but it evolved into the kind of eyes the fish, the octopus, and the elephant have, the eyes we have. So you actually touch on something really profound.
This is extremely innate, embedded into the development of our intelligence. And of course you also ask a philosophically really profound question: does everything that has eyes have consciousness?
Actually, you should invite a neuroscientist or a neurophilosopher to debate this with you. For example, a tiny shrimp uses its eyes to do things. Does it have consciousness, or does it just have perception?
I don't have an answer. Honestly, how do you measure consciousness? Right?
Just because the shrimp can see the rock and climb around, does that mean it's just a sensory reflex, or does it have a deeper consciousness? I don't know. So just because machines have eyes, do they develop consciousness?
It's a topic we can talk about, but I just want to make sure that we are at least on the same page that seeing itself doesn't mean something has consciousness. But the kind of visual intelligence we have, like I just described, to understand, to create, to build, to represent a world with such visual complexity, at least in humans, does take consciousness. >> Mhm. Yeah.
Everything that you're saying is just so interesting. Even that shrimp example, it's true. Even though it's navigating, swimming around rocks and whatever, that doesn't mean it's actually conscious. It could be, to your point, just reflexes, and that makes it a little less scary if machines end up having eyes. So, how are you replicating biological processes like vision in computers now? >> Yeah.
So again, I think a lot of computer vision is biologically inspired, and it's inspired in at least two areas. One is the algorithm itself. The whole neural network algorithm, in fact, goes back to the 1950s and '60s, when computer scientists were inspired by vision neuroscientists. When those neuroscientists were studying the cat's visual system, they discovered hierarchical neurons, and it's because of that that computer scientists were inspired to build neural network algorithms.
So the animal visual structure in the brain is very much the foundational inspiration for today's AI technology. That's one area. The second inspiration comes from functionality, right?
The ability to see: what do we see? Humans are not that good at seeing color, for example. We see color richly enough, but the truth is, there are infinite wavelengths that define infinite colors, yet we perceive probably only dozens of colors. So clearly we're not just registering wavelengths the way a machine would. On the other hand, we see meaning, we see emotion, we see all these things, and it's just incredibly inspiring that we can build this functionality into machines. That is another part of the biological inspiration.
It's the functional inspiration, and with that, I think there is a lot to imagine. For example, first of all, visually impaired patients: if we help them with an artificial visual system to understand the rich world we see, it will be tremendously helpful. And machines, right? I don't know, do you have a Roomba in your house? >> Yeah. >> Right. So, it almost is kind of seeing.
It's not seeing the same way we are, but it's kind of seeing and mapping. But one day, I hope I not only have a Roomba, but also a cleaning robot, right? Then it needs to see my house in a much more complex way.
And then, most important, right? For example, rescue robots. There are so many situations that put humans in danger, or where humans are already in danger and you want to rescue them, but you don't want to put more humans in danger.
Think about the Fukushima nuclear leak incident. People had to really sacrifice to go in there to stop the leak and all that. It would be amazing if robots could do that, and that needs seeing. It needs visual intelligence in much deeper ways.
>> Mm, that's so interesting, and it's helpful for you to say that, because my first reaction is, why are we giving robots this much power? Like we're losing our power as humans. But to your point, it can help humans. And I know that's what you talk about with human-centered AI, right? So can you define what human-centered AI is in your own words? >> Yeah, human-centered AI is a framework for developing and using AI, and that framework puts humans, human values, and human dignity in the center, so that we're not developing technology that's harmful to humans. So it's really a way to see technology and use technology in a benevolent way. Now, I'm not naive.
I know technology is a double-edged sword. I know that double-edged sword can be used, intentionally or unintentionally, in bad ways. So human-centered AI is really trying to underscore that we have a collective responsibility to focus on the good development and good use of AI.
And it was really inspired by my time in industry when I was on sabbatical as a professor, seeing the incredible business opportunities that were already opening the floodgates of AI back in 2018, and knowing that when businesses start to use AI, it impacts the lives of every individual. Right? So I went back to Stanford, and together with my colleagues, we realized that as a thought-leadership institution, as one of America's higher-education places educating the next generation of students, we should really have a point of view and stay at the forefront of the development of this technology.
This is how we formulated the human-centered AI framework. >> Yeah. Bam.
One of the best parts of my job is that it takes me everywhere. I've traveled for interviews, speaking gigs, and podcasting events. And this fall, I'll be heading to Nashville and then LA for some exciting podcast interviews.
I love that traveling gives me a chance to try new things, meet incredible people, and experience different cultures firsthand. Along the way, I've booked some unforgettable homes on Airbnb that felt warm and welcoming the moment that I arrived. These experiences really inspired me to start thinking about hosting my own place while I'm away for travel.
But here's the truth. I could really use some support to manage some of the details of hosting. That's why I'm excited about Airbnb's co-host network.
A co-host is a vetted local expert who partners with you to make sure guests are always having an amazing stay. They handle guest communication, booking, supplies, and even add thoughtful touches to elevate your space. You don't need to be on-site or hands-on to host like a pro.
Just team up with a co-host and make it happen. Find yourself a co-host at airbnb.com/host.
Yap gang. The origins of this podcast were once just a dream. That dream became a podcast you're listening to today, which has since grown into a thriving media business.
Taking your business to the next level is a dream lots of us share, but too often it just remains a dream. We hold ourselves back thinking, "What if I don't have the skills? What if I can't do it alone?
" I want you to turn those whatifs into why nots and help your business soar with Shopify. Shopify is the commerce platform behind over 10% of US e-commerce, helping millions of businesses from startups to household names like Gym Shark and Mattel. They handle everything from website design to inventory management and global shipping so you can focus on your vision.
Need to find new customers? Their built-in marketing tools have you covered. Want to sell globally?
They help you sell in over 150 countries. Plus, their checkout is the best converting on the planet, so you'll never miss a sale. Turn those what-ifs into why-nots and keep giving those big dreams their best shot with Shopify.
Sign up for your $1 per month trial and start selling today at shopify.com/profiting. Go to shopify.com/profiting. Again, that's shopify.com/profiting.
And one of the biggest fears that people have with AI is that AI is going to replace all of our jobs. Now, AI is probably going to create a lot of jobs, and I've talked a lot about that with other guests on the podcast, but how do you suggest we create jobs and take into consideration making sure that AI doesn't take all the jobs? >> Yeah.
So, uh several things, Hala. First of all, um why do we have jobs? It's really important to think about it.
I think jobs are part of human prosperity, because we need them to translate into financial reward, so that we have the prosperity that we and our families need. They are also part of human dignity. It's beyond just money; it's the meaning for many people.
It's the meaning of life and self-respect. So from that point of view, I think we have to recognize that jobs shift throughout human history. Technology, along with other factors, creates, destroys, morphs, and transforms jobs, but what doesn't change is the need for human prosperity and human dignity. So I think when we think about AI and its impact on jobs, it's important to go to the very core of what jobs are and mean, and what technology can do.
So when it comes to, say, human dignity, for example, I do a lot of healthcare research with AI, and it's so clear to me that many of the jobs that our clinicians and healthcare workers do are part of humans caring for humans, and that emotional bond, that dignity, that respect can never be replaced. What is also clear to me is that American healthcare workers, especially nurses, are overfatigued and overworked, and if technology can be a positive force to help them take care of patients better, to reduce their workload, especially some of the repetitive, thankless work like constant charting, or walking miles and miles a day to fetch medicines from the pharmacy and all that, if those parts of the job, those tasks, can be augmented by machines, it is really truly intended to protect human prosperity and dignity while augmenting human capabilities.
So from that point of view, I think there is a lot of opportunity for AI to play a positive role. But again, it depends on how we design AI. In my lab we did a very interesting piece of research.
We were trying to create a big robotics project to do a thousand everyday human tasks. But at the beginning of this project, it was very important to us that we were creating robots to do the tasks that humans want help with. For example, buying a wedding ring.
Even if you have the best robot in the world, who wants a robot to choose a wedding ring? Or opening Christmas gifts: it's not that hard to open a box, but the human emotion, the joy, the family bond, the moment, it's not about opening a silly box. So we actually asked people to rank thousands and thousands of tasks for us and tell us which tasks they want robots' help with. For example, cleaning the toilet.
Everybody wants robots' help with that. So we focus on the tasks that humans prefer robotic help for, rather than the tasks that humans care about and want to do themselves. And that is a way of thinking about human-centered AI.
How do we create technology that is beneficial and welcomed by humans, rather than just going in and telling you, I'm using a robot to replace everything you care about? >> Mhm. >> Another layer, just to finish this topic, is the policy layer, right?
Economic and social well-being is so important, and technologists don't know it all, and we shouldn't feel we know it all. We should be collaborating with civil society, the legal world, the policy world, and economists to try to understand the nuance and the profoundness of jobs and tasks and AI's impact. And this is also why our Human-Centered AI Institute at Stanford has a digital economy lab.
We work with policymakers and think about these issues. We try to inform them and provide information to help move these topics forward in a positive way. >> I feel like you're touching on a lot of this. You have three aspects to your human-centered AI framework, right?
So AI is interdisciplinary. AI needs to, you know, make sure that we preserve human dignity and use it for human good. And then there's also one about intelligence.
Can you break down the three pillars of your human-centered AI framework? >> Yeah, the three pillars of the human-centered AI framework are really about thought leadership in AI and focusing on what higher-education institutions like Stanford can do. One we talked about is interdisciplinarity: recognizing the interdisciplinary nature of AI and welcoming multistakeholder studies, research, education, and policy outreach to make sure that AI is embedded in the fabric of our society, today and tomorrow, in a benevolent way.
The second one is what you said: focusing on augmenting humans, creating technology that enhances human capability, human well-being, and human dignity rather than taking them away. The third one is about continuing to be inspired by human intelligence and developing AI technology that is compatible with humans, because human intelligence is very complex. It's very rich.
We talk a lot about emotion, intention, compassion, and today's AI lacks most of that. It's pretty far from that. Being inspired by human intelligence can help us create better AI.
And also, by the way, there's another way today's AI is far worse than humans: it draws a lot of energy. The human brain runs on around 20 watts.
That is dimmer than the dimmest light bulb in your house. >> Yet we can do so many things. We can build the pyramids.
We can, you know, come up with E = mc². We can write beautiful music and all that. AI today is very, very energy-consuming.
It's bulky. It's huge. So there's a lot in human intelligence that can inspire the next generation of AI to do better.
>> Every time I have an AI episode, I feel like I learn so much that I didn't really realize before. And, you know, we've had conversations with other people on the show about how a lot of people are scared of AI reaching an apex intelligence, that it's going to be so much smarter than humans, that it's going to take over the world.
It's going to control humans. Do you have any fears around that? >> I do have fears.
I think, you know, who lives in 2024 and doesn't have fears? [laughter] As a citizen of the world, I think our civilization, our species, is always defined by the struggle of dark and light, by the struggle of good and bad. We have incredible benevolence in our DNA, but we also have incredible badness in our DNA, and AI as a technology can be used by the badness. So from that point of view, I do have fear.
The way I cope with fear is to try to be constructively helpful, to advocate for the benevolent use of this technology, and to use this technology to combat the badness. At the end of the day, any hope I have for AI is not about AI; it's about humans. To paraphrase Dr. King, the arc of history is long, but it does bend towards justice and benevolence in general. But to come down from that abstract thinking, I think we have work to do, honestly, because if AI is in the hands of bad actors, if AI is concentrated in only a few powerful people's hands, it can go very wrong, right? We don't need to wait for sentient AI.
Even with today's cars: imagine there is a bad person who is in charge of building 50% of America's cars, and that person just wants to make all the cars' brakes malfunction, or add a sensor that says, if you see a pedestrian, run them over. Today's technology can do that. You don't need sentient AI.
But the fact that we don't have that dystopian scenario is, first of all, because human nature is by and large good. Our car factory workers, our business leaders building cars, nobody thinks about doing that, right? >> We also have laws, right? If someone is trying to do harm, we have societal constraints. We also try to educate the population towards good things, right? So all of this is hard work, and we need that hard work in AI to ensure it doesn't do bad. >> Yeah.
So I just want to give an example. When I was talking to Stephen Wolfram, because the interview is fresh in my head, he said something that made me feel a little more at ease with AI and the fact that it could get really smart. He compared AI to nature. We live in nature.
Nature is so complex. We can't control it. It has simple processes that produce things that are really, really complex.
We can try to predict it all we want, but we'll never really know what nature is going to do. And already we live in a world where we're interacting with nature every day, and we have to just deal with the fact that we don't control it and it's smarter than us to a degree. And he's like, that's what maybe AI will be like in the future.
It will be there. It will be its own system. What are your thoughts on that?
>> That's a very interesting way to put it. Okay. It's the first time I've heard that.
I like his way of saying that humans, in the face of complexity and powerful things, still have a way to cohabitate with them. >> I don't agree that nature is AI, in the sense that nature is not programmable, and I don't think nature has a collective intention. It's not like the Earth wants to be a bigger Earth or a bluer Earth. So from that point of view, it's very, very different, but I appreciate the way he puts it. And using his analogy, I also think: we also live with other humans, and there are humans who are stronger than us, smarter than us, better than us at whatever, and yet by and large our world is not everyone killing each other, right? Now, this is where we do see the darkness. >> And this has nothing to do with AI. Human nature has darkness, and we harm each other. The hope, and it's not just the hope, it's the work, is that when we create machines that resemble our intelligence, we should prevent them from doing similar harms to us and to each other, and try to, you know, bring out the better part of ourselves.
>> As we wrap up this interview, because I need to get you out on time, I wanted to ask you a couple of questions. So, first off, you're talking to a lot of young entrepreneurs right now and people who want to be entrepreneurs. What's your advice to them about how to embrace this AI world? >> So, first of all, I hope you read my book, The Worlds I See, because the book is written to young people, for young people.
It's the coming of age of a scientist. But the true theme of the book is finding your North Star, finding your passion, believing in it against all odds, and chasing after that North Star. And that is the core of what entrepreneurship is about: you believe in bringing something to the world, and against all odds you want to make it happen.
And that should be your North Star. In terms of AI, it's an incredibly powerful tool. So it depends on what business and products you're making.
It can either empower you, or be an essential part of your core product, or, you know, keep you competitive. It's so horizontal that for most entrepreneurs out there, if you don't know anything about AI, it is important to educate yourself, because it's possible that AI will play either in your favor or in your competitor's favor. So knowing that is important.
>> Yeah. Okay. And since we're just about out of time here, I'm going to ask you one last question, and this is really about visioning.
Okay. Let's envision a world 10 years from now, 2034, where there's human-centered AI. And let's also try to visualize a world 10 years from now where maybe it's not human-centered AI.
Maybe it got into the bad hands of some folks. >> The world with human-centered AI, I think, is not too far from at least the North American world we live in, even though I know we're not perfect. We still have a strong democracy. We still believe in individual dignity and, by and large, free-market capitalism, where we as individuals are allowed to pursue our happiness and prosperity and respect each other. And AI helps us do better scientific discovery; to have self-driving cars that help people who can't drive, or that reduce traffic; to make life easier; to make education more personalized; to empower our teachers and health care workers; to discover cures for diseases; to alleviate our aging population problems; to make agriculture more effective; to find climate solutions.
There is so much AI can do in a world where we still have that good foundation. The dystopian world is one where AI is used as a bad tool to topple democracy, right? Disinformation is an incredibly harmful way of damaging democracy and the civil life we have right now.
If AI is completely concentrated in power, whether it's state power or individual power, it makes the rest of society much more subject to the will, and possibly the wrath, of that power, whether it's AI or not. We have seen in human history that concentrated power is always bad, and concentrated power using powerful technology is not a recipe for good. >> Yeah.
Well, Dr. Li, I'm so happy we have somebody like you who's helping us navigate the AI world, and who's also helping to shape the AI world in a way that hopefully is going to be good for humans. Please let us know where we can learn more about you and everything that you do. >> Thank you, Hala.
Thank you for promoting my book, and please keep checking in with the Stanford Human-Centered AI Institute newsletter and website. >> Amazing. We'll stick all those links in the show notes.
Dr. Li, thank you for joining us on Young and Profiting Podcast. >> Thank you, Hala.