What is the origin story of the way AI has changed all our lives in the last few years? The conversation you're about to hear is with someone who has seen it all. Someone who has played a really transformative role herself in what is happening around us now in the development of this transformational technology.
She has been called the godmother of AI, but I think that doesn't fully cover what she has contributed herself. And frankly, I think her work ought to be better known. She is a scientist, a professor, a tech CEO, and someone who has seen hardship in her life.
She moved from China to the United States when she was 15 with little English, and later ran her parents' dry-cleaning business to help her family stay afloat. So here's my conversation with Dr Fei-Fei Li on The Mishal Husain Show. Dr Fei-Fei Li, thank you for coming on the show.
I'd love to start with this remarkable period for your industry. It's three years since ChatGPT was released to the public, and since then there have been new vehicles, new apps, huge amounts of investment flowing towards the industry. How does this moment feel to you?
Well, first of all, Mishal, thank you for inviting me to this show. Very excited. How does this moment feel to me?
It's a good question, because AI is not new to me. I've been in this field for 25 years. I've lived and breathed it every day since the beginning of my career.
Yet this moment is still as daunting and almost surreal to me in terms of its massive, profound impact. That is genuinely true in my mind: this is a civilizational technology. And it's surreal personally to me because I'm part of the group of scientists that made this happen.
And clearly, I did not expect it would be this massive. When was the moment it changed? Because I know you've talked about the years when it was like an AI winter, and what you are describing now, how the moment feels extraordinary even to you.
Is it because of the pace of developments, or is it because the world has woken up to it and therefore turned the spotlight on people like you? I think it's intertwined, right? But for me to define this as a civilizational technology is not about the spotlight.
It's not even about how powerful it is; it's about how many people it impacts. At the end of the day, the purpose of technology is its impact on people. I call it a civilizational moment because everyone's life, work, wellbeing and future will somehow be touched or impacted by AI.
In bad ways as well as good? Well, technology is a double-edged sword, right? Yes, I think both ways, because since the dawn of human civilization we have created tools we call technologies.
And these tools are in general meant for doing good things. But along the way we might use them in the wrong way: we might intentionally misuse them, or there might be unintended consequences. Yeah.
And strands of this I know will emerge in different ways over the course of this conversation. But you said the word power, and I'm struck by the fact that the power of this technology is in the hands of a very small number of companies, most of them American. How does that sit with you?
You are right. The major tech companies hold so much of the technology itself, and through their massively reaching products they are impacting our society globally the most.
I would personally like to see this technology much more democratized. I would like to see whoever builds or holds the profound power of this technology do so in a responsible way. And I also believe every individual in this era should feel they have the agency to impact this technology.
We'll talk a bit more in a moment, I think, about democratization and how that might be achieved. But of course, in terms of companies in this field, you're part of that, because you are a tech CEO as well as an academic. In fact, I think your very young company, little more than a year old, is reportedly already worth a billion.
Yes. I am co-founder and CEO of World Labs, and we are a little more than a year old. We are building the next frontier of AI, which is spatial intelligence, which people don't hear too much about today because we're all about large language models. Yet I believe spatial intelligence is as critical as, and complementary to, language intelligence.
This idea of virtual worlds, which again we'll dig into. But before we do that, I want you to take us back to the fact that you've seen the whole trajectory of this industry. You've been in it for 25 years.
I know that your first academic love was physics. What was it in the life or work of the physicists you most admire that made you think beyond that particular field? Yeah, that's a great question.
I grew up in, it's not a small town, but a less well-known city in China. And I come from a small family. So you could say life was small, in a sense.
You know, a small family. Not too big a city. It was in the 80s, my childhood, which was fairly simple and isolated.
You're an only child, as was often the way in China under the one child policy. My family in general is just small. And physics is almost the opposite.
It's vast. It's audacious. The imagination is unbounded.
You look up in the sky, you can ponder the beginning of the universe. You look at a snowflake, you can zoom into the molecular structure of matter.
You think about time, you think about magnetic fields, you think about the nuclear world. It takes my imagination to places you can never be in this world. And what really, fundamentally fascinates me to this day about physics is asking audacious questions.
Let's not be afraid of asking the boldest, most audacious questions about our physical world, our universe, where we come from. But your audacious question, I think, was about what is intelligence. Yes.
And I think it's rooted in my love of physics and my training in physics. I looked at each physicist I admire, at their audacious questions, from Newton to Maxwell to Schrödinger to Einstein, my favorite physicist. And I wanted to find my own audacious question. And somewhere in the middle of college, my audacious question shifted from physical matter to intelligence.
What is it? How does it come about? And, most fascinatingly, how do we build intelligent machines? And that became my quest, my North Star.
And that's a quantum leap, because from machines that were doing calculations and computations, you're really talking about machines that learn and that are constantly learning. I like that you used the physics pun, the quantum leap. Yeah.
Yeah. Crossing over into ordinary parlance. But didn't you have this light-bulb moment, as you were thinking about intelligence, when you realized that a critical part of it is our ability to recognize objects in the world around us?
The fact that around us right now, there are multiple objects. We know what they are. Yes.
The ability to recognize the vast number of objects, the variability, the diversity, that humans handle in recognizing objects in the world is foundational. And there is a lot of neuroscience that backs this up. So I decided that was my first North Star: my PhD dissertation was to build machine algorithms,
at that point we called them machine learning algorithms, to recognize as many objects as possible. I think the key question is how, right? How do you teach the machines?
And what I found really interesting about your background is that you were reading very widely, and there were two key breakthroughs that made your ultimate breakthrough possible. One is that you started thinking and learning about what psychologists and linguists were saying that related to your field. Tell me about both of those.
Well, that's the beauty of doing science at the forefront. Because it's new, no one knows how to do it. As an AI scientist, you look for possible answers, and it's pretty natural to look at the human brain and human mind and try to understand, or be inspired by, what humans can do.
And one of the inspirations I got in my early days of trying to unlock this visual intelligence problem was to look at how our visual semantic space is structured. There are so many, tens of thousands, millions, of objects in the world. How are they organized?
Are they organized alphabetically, or are they organized by size or color? And you were asking that because you have to understand how our brains organize things in order to have something to teach the computers? That's one way to think about it.
And I came upon this linguistic work called WordNet. WordNet is a way to organize semantic concepts, not visual, just semantics, or words, in a particular taxonomy. Give me an example.
An example is that in a dictionary, an apple and an appliance are very close together. But in real life an apple and a pear are much closer to each other. The apple and pear both belong to fruits, whereas an appliance belongs to a whole different family of objects.
And I came upon this taxonomy called WordNet that describes this organization of objects, and I made a connection in my head; two things became very clear. One is that this could be the way visual concepts are organized, because apples and pears are much more connected than apples and washing machines, for example.
But even more important is the scale. If you look at the number of objects described by language, you realize how vast it is, right? How big it is.
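As an aside, the WordNet idea described here, concepts arranged in a taxonomy where closeness means semantic relatedness rather than alphabetical adjacency, can be sketched in a few lines of code. This is a hypothetical miniature taxonomy for illustration, not the real WordNet or ImageNet data:

```python
# A toy sketch of the WordNet idea: concepts form a tree, and semantic
# closeness is tree distance, not dictionary order. Hypothetical taxonomy.
PARENT = {
    "apple": "fruit",
    "pear": "fruit",
    "fruit": "food",
    "washing machine": "appliance",
    "appliance": "artifact",
    "food": "entity",
    "artifact": "entity",
}

def ancestors(concept):
    """Return the path from a concept up to the root of the taxonomy."""
    path = [concept]
    while concept in PARENT:
        concept = PARENT[concept]
        path.append(concept)
    return path

def taxonomy_distance(a, b):
    """Count edges between two concepts via their lowest common ancestor."""
    path_a, path_b = ancestors(a), ancestors(b)
    for i, node in enumerate(path_a):
        if node in path_b:
            return i + path_b.index(node)
    return None  # no shared ancestor

print(taxonomy_distance("apple", "pear"))             # 2: siblings under "fruit"
print(taxonomy_distance("apple", "washing machine"))  # 6: only share "entity"
```

In the real WordNet, nouns form a similar hypernym hierarchy with tens of thousands of synsets, which is what gave ImageNet both its organization and its scale.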
And that was a particular realization for me: to recognize the world, we as intelligent animals, as humans, experience it with a massive amount of data. And somehow we process it all. Yeah.
Somehow we process it all, and also learn it all. And we need to endow machines with that.
Hence the big data sets that you needed. Yes. And actually, it's worth noting that at that time, I think this was the early part of the century, the idea of big data didn't really exist.
No. Big data, this phrase did not exist. And nobody talked about data-driven machine learning; the kind of scientific data sets we were playing with were tiny.
How tiny? For example, in images, most of the graduate students in my era were playing with data sets of four or six or at most 20 object classes. And in each class there were a couple of hundred examples at most. That's how tiny it was.
Whereas, fast-forward three years, after we created ImageNet, it had 22,000 classes of objects and 15 million annotated images. Yeah. And ImageNet was a huge breakthrough.
It's the reason that you've been called the 'Godmother of AI'. I'd love to understand more about what it is about you that enabled you to make these connections and see things that other scientists didn't. One thing that immediately comes to mind is that you acquired English as a second language after you moved to the United States. Is there something in that that led to what you're describing?
Mishal, that's a great question. I've been asked this question, and my true answer is, I don't know. Human creativity is still such a mystery.
You know, people talk about AI doing everything, and I disagree. There is so much about the mystery of the human mind that we don't know. So I can only conjecture that a combination of my interests and my experiences led to this, including the curiosity and the hunger for audacious problem definition.
I find I'm not afraid of asking crazy questions in science. Yeah. Not afraid of seeking solutions
that are out of the box. And maybe my appreciation for the linguistic link with vision was accentuated by my own journey of learning a different language, but I don't know the answer. What was it like coming to the United States as a teenager?
Because I think that's a particularly difficult stage in life to move and to have to make new friends. Even if you weren't battling a language barrier. That was hard.
I think, being an immigrant, I came to this country, to America, when I was 15, to Parsippany, New Jersey. None of us, neither my parents nor I, spoke much English at all. I was young enough to learn more quickly.
It was very hard for my parents, and we were not financially well off at all. My parents were doing cashier jobs, and eventually, by the time I entered college, I was doing restaurant jobs. Also, you know, my mom's health was not so good.
So my family and I decided we had to run this little dry-cleaner shop in New Jersey to make some money to survive. And you got involved yourself, working in the dry cleaner? I joke today that I was the CEO. I ran the dry-cleaner shop for seven years.
From what age? I was 18, 19, until the middle of my graduate school. So all the way through college and graduate school.
Yes, seven years. Even at a distance, you were running your parents' dry-cleaning business. Yes.
Because I was the one who spoke English. So I took all the customer phone calls, I dealt with the billing, I dealt with all the inspections, all the business. What did it teach you?
Resilience. As a scientist, you have to be resilient. Because science is a non-linear journey.
Nobody has one conjecture and then all the solutions in front of them all of a sudden. You have to go through such challenges to arrive at and find an answer. And as an immigrant, you learn to be resilient.
Were your parents pushing you? Because they clearly wanted a better life for you; that's why they left China for the United States. So how much of this was coming from them?
And how much from your own sense of responsibility to them? Yeah, actually, that's a great question. To their credit, they did not push me much.
They were not tiger parents, in today's language. I think part of it is they were just trying to survive, to be honest. And especially my mom, she is an intellectual at heart; she loves reading.
But the combination of a tough, survival-driven immigrant life, plus her own health issues, meant she was not pushing me at all. I think, as a teenager, I kind of had no choice: I either made it or I didn't, and the stakes were pretty high.
So I was pretty self-motivated. And also one of the lucky things I had is that I was just curious. I was always a curious kid, and my curiosity had an outlet, which was science, and that really grounded me.
I wasn't curious about nightclubs or other things. I was an avid lover of science. But also an avid reader, and from your early childhood you were putting away the children's books and turning towards the classics, Jane Eyre.
The grownup books. My mom liked reading, and, you're talking about when I was nine, ten. At that time she probably did have an influence, more influence.
She thought I was precocious enough to just read some grownup books. You also had a teacher who was really important in your life. Tell me about him.
I excelled in math and I liked math, and I befriended Mr Bob Sabella, the math teacher. We became friends through a mutual love of reading science fiction, and - Which you'd read in Chinese. Which, at that time, I was reading in Chinese.
Eventually I started reading in English. But he was a remarkable person, because he probably saw in me this desire to learn. So he went out of his way to create opportunities for me to continue to excel in math.
I remember I placed out of the highest math curriculum, and so there were no more courses for me. He would use his lunch hours to create a one-on-one class for me.
And now that I'm a grownup, I know that he was not paid extra. He was not waiting to win any prize for doing that. It was really out of a teacher's love and sense of responsibility.
And he really became such an important person in my life. He and his family. Is he still alive, Mr Sabella?
Mr Sabella passed away when I was grown up, when I was an assistant professor at Stanford. But his family, his two sons, his wife and I, I think they're my New Jersey family at this point. You used the word love about what he did for you.
And I wonder if he and his family also introduced you to American society. They were your first friends and an entrance to the whole world of America beyond your school. Absolutely, they were. They introduced me to the quintessential middle-class American family.
They lived in a suburban house; they had two lovely kids. Of course, they're all married and have the grandchildren generation now.
But it was a great window for me to know the society, to be grounded, to have friends, and to have a teacher who cared. Do you think you could have had the career that you have had in China? Because now there are significant advances happening in AI in China.
I don't think I'm able to answer this question, because I think life is so serendipitous, right? The journey would have been very different. You know, in a way, we could have simulated all possibilities, but what is timeless, what is invariant, for anyone is the sense of curiosity, the pursuit of North Stars.
So if I were me, I would still be doing AI somehow, I believe. Do you still feel connected to China? It's part of my heritage.
I also, you know, feel very lucky that my career has been in American higher education and in Silicon Valley, and also in and out of industry and in tech. The combination of all these ingredients is very global. It feels very global.
The environment my family is in right now, which is Stanford, San Francisco, Silicon Valley, is very international, you know, so I feel very connected. And the discipline I'm in is so horizontal, it touches people everywhere, that I do feel much more like a global citizen at this point. Yeah.
And of course, it is a global industry, but there are some really striking advances in China, not least the number of patents and the number of published AI papers, the DeepSeek moment earlier this year. As you look ahead in this century, do you think China will catch up with the US in the way that it has in other fields, like manufacturing? I do think China is a powerhouse in AI.
I think in this moment most people would recognize that the two leading countries in AI are China and the US. I think the UK is also a very powerful player in AI. I do travel around the globe.
I think the excitement, the energy and, frankly, the ambition of many regions and countries in the world, wanting to have a role in AI, wanting to catch up or even pull ahead in certain areas of AI, is pretty universal. Yeah. And your own next frontier, spatial intelligence: tell me what you mean when you use the words spatial intelligence.
What are you working on right now? Yeah. So spatial intelligence is the ability of AI, or frankly any intelligence, to understand, perceive, reason about, interact with and also create spaces, worlds.
It comes as a continuation of visual intelligence. As I said, my career started in visual intelligence, and the first half of my career, around the ImageNet time, was about trying to solve the fundamental problem of just understanding what we are seeing. And that was a very important problem.
But it's not enough, because that's a very passive act. It's just receiving information and being able to understand: this is a cup, this is a beautiful lady, this is a microphone. But if you look at evolution, at human intelligence, perception is profoundly linked to action.
We see because we move; we move, and therefore we need to see better. And that connection.
How you create that connection has a lot to do with space, because you need to see the 3D space. You need to understand how things move in the world. You need to understand, when I touch this cup, how do I mathematically organize my fingers so that it creates a space that allows me to grab this cup.
All this intricacy is centered around this capability of spatial intelligence. And I've looked on your website and seen the preview that you've released, Marble: essentially a virtual world which one goes into, and from room to room, a door opens and you go from one place to another. But I'm not sure how you use it.
Is it essentially, to you, a tool for training AI, a different way to train AI? Rather than, for example, Meta saying this is the metaverse, this is a world you can go into and spend time in as a human being. Right.
So let's just be clear with the definition. Marble is a frontier model. It's a frontier world model.
What it does is not just let you see and go into a world; what's really remarkable is that it generates a 3D world from a simple prompt. A prompt could be: give me a modern-looking kitchen. Or the prompt could be: here's a picture of a modern-looking kitchen, make it a 3D world.
It could even be: here are a couple of pictures of a modern-looking kitchen, create a coherent modern-looking kitchen for me. And you might ask why? There are actually many, many use cases.
First of all, the ability to create a 3D world is a fundamental ability. It's fundamental to humans, and I hope one day it's fundamental to AI. Second, if you are a designer or architect, you can use this 3D world to ideate, to design.
If you are a game developer, you can use it, because it's 3D, to create a lot of games. Right now it's very painful to obtain these 3D worlds so that you can design games.
If you are a VFX producer or artist, you can use it to create movies. If you want to do robotic simulation, these worlds become very useful as training data or evaluation data for robots. If you want to create immersive educational experiences in AR and VR, this model will help you do that.
So it actually has incredibly horizontal uses. Interesting. I'm imagining girls in Afghanistan.
Maybe you could do virtual classrooms in a very challenged place. Yes, or I'm imagining, for example, how do you explain to an eight-year-old what a cell is? One day we'll create a world that's inside a cell. A human body, a cell of a human body?
Or any cell, right. And then the student can walk into that cell and understand the nucleus, the enzymes, the membranes. So you can use this for so many possibilities.
Okay. So that's your next frontier. And I'm conscious yours is a very big, complex industry.
But there are some immediate pressing issues, and I wonder if I could put a selection of those to you for an instinctive, or even nutshell, response on how you see them. For example, and you have heard this many times before, number one: is AI going to destroy jobs, large numbers of jobs?
Technologies do change the landscape of labor. A technology as impactful and profound as AI will have a profound impact on jobs. Which is happening already.
Which is happening already. Salesforce said 50% of their customer support roles are going to go because of AI. Right, and software engineering, contact centers, you know, analyst jobs.
So it will; there's no question about it. It's not going to create as many jobs in its place, is it? And I wonder if that worries you.
I think the jury's still out. Every time humanity has created a more advanced technology, for example steam engines, electricity, PCs, cars, we have gone through difficult times, but we have also gone through a re-landscaping of jobs. So talking only about whether the number is bigger or smaller doesn't do it justice.
We need to look at this in a much more nuanced way and really think about how to respond to this change. You know, there's the individual level of responsibility. You've got to learn, you've got to upskill yourself.
But there is also company level responsibility. There's also societal level responsibility. So this is a big, big question.
Number two is an even bigger question. If you put on those headphones, I want to play you a voice and a view on the existential question of whether human beings are going to be replaced by AI. And this is a voice you will know: Professor Geoffrey Hinton, whose work has overlapped with yours and who is a Nobel laureate as well.
When AI gets superintelligent, it might just replace us. How do we prevent it taking over? Even if all the countries collaborate, what do you do?
And I think at present all the big companies and governments have the wrong model. Their basic model is: I'm the CEO and this superintelligent AI is the extremely smart executive assistant; I'm the boss. It's not going to be like that when it's smarter than us and more powerful than us.
What do you think of that? Because Professor Hinton thinks there's a 10 to 20% chance that AI leads to human extinction. So, first of all, Professor Hinton, or Geoff as I call him because I've known him for 25 years, since I was a first-year graduate student, is someone I admire, and I have studied his technical papers.
But this thing about replacing the human race, I do respectfully disagree with. Not in the sense that it will never happen, but in the sense that if the human race really got into trouble, in my opinion that would be a result of humans doing wrong things, not machines doing wrong things.
But the very practical point that he put in that clip is where he says, how do we prevent the superintelligent creation taking over at the point that it becomes more intelligent than us? We have no model for that. Now, if that creation that is more intelligent than us says, turn off human beings' life support, or do something else that is existential, how would we stop it?
So I think this question makes an assumption: that from today, when we don't have such a superintelligent machine, up to the day that thing is created, humanity has done nothing to prevent that day from arriving; it just assumes that day will come. I don't even necessarily think it will. That's a probability, but it's a fair question.
Or at least scientifically it's interesting to conjecture about. But let's just assume that's the right conjecture, that it's going to come. We still have a distance.
We still have a journey to take from today to that day. And my question is: why would humanity as a whole allow this to happen? Where is our collective responsibility?
Where is our governance? Or regulation? Yes.
Which is why, I wonder, then, do you think there is a way to make sure that there is an upper limit to superintelligence? I think there is a way to make sure there is a responsible development and usage of technology. Internationally agreed, at the government level, is it a treaty?
Not yet. Is it just companies agreeing to behave in a certain way? You're right.
The field is so nascent that we don't yet have that level of international treaties; we don't yet have that level of global consensus. I think we have global awareness. And I do want to say that we shouldn't fixate on only one possible consequence, the negative consequences of AI.
This technology is powerful. It might have other negative consequences. It also has a ton of benevolent applications for humanity.
We need to look at this holistically. Yeah. Do you get frustrated by some of the questions, then?
Because I know you talk to politicians, people with political power a lot, you've done that in the US, you've done that in the UK and in France and elsewhere. What's the most common question they ask you that you find frustrating? I wouldn't use the word frustrating.
I would use the word concerned, because I think our public discourse on AI needs to move beyond the very simple question of what we do when the machine overlord is here. So I don't get frustrated; I get concerned if the only way to ask this question is this very simple binary: do you want it or not? Another question I get asked a lot, possibly more than that one, comes from parents. Worldwide, parents ask me: AI is coming, how do I advise my kids?
What's the future of my kids? What should they do? Should they study computer science?
Are they going to have jobs? So answer it, then, because people listening to this are probably thinking exactly the same. What do you say?
I say that AI is a very powerful technology, and I'm a mother. The most important thing is to empower our kids as humans, with agency, with dignity, with the desire to learn.
And there are timeless values of humanity: you know, be an honest person, be a hard-working person, be creative, think critically. So don't worry about what they'll study?
Worry is not the right word. Be totally informed, and understand that your children's future is going to be lived in a world of AI technology. And depending on their interests, their passion, their personality, their circumstances, prepare them for that future.
Worry doesn't solve the problem. I've got another industry question, which is about the huge sums of money flowing into, again, not that many companies, like yours. And whether this might be a bubble, like the dotcom bubble, where it turns out that some of these companies are overvalued.
First of all, my company is still a startup. When we're talking about huge amounts of money, we're really looking at the big tech companies. AI is still a nascent technology.
And from a development point of view there is still a lot to be developed. The science is very hard. It takes a lot to make scientific and technological breakthroughs.
This is why resourcing these efforts is still important. The other side of this is the market. Are we going to see the payoff from the market?
By and large, I do believe that the applications of AI are so massive, whether we're talking about software engineering, creativity, health care, education or financial services, that we're going to continue to see an expansion of the market for AI. I look at it as: there are so many human needs, both in terms of wellbeing and in terms of productivity, that can be helped by AI as an assistant, as a collaborator.
And that part, I do believe strongly, is an expanding market. But what does it cost in terms of power, and therefore energy, and therefore climate? There's a prominent AI entrepreneur you probably know, Jerry Kaplan, who said that we could be heading for a new ecological disaster because of the amount of energy consumed by the vast data centers we're going to need in growing numbers.
This is an interesting question. I do think that in order to train large models we're seeing more and more need for power, for energy. But nobody says these data centers must be powered by fossil fuels, for example.
So our innovation on the energy side will be part of this innovation cycle, right. I think it's just that the amount of power they need is so enormous, it's hard to see it coming from renewable energy alone. I think right now this is true.
But I also know that, for example, when I visit the Middle East, there's a lot of effort in building renewable energy for big data centers. I do think that countries that need to build these big data centers also need to examine their energy policy and industry.
This is an opportunity for us to invest and develop more renewable energy. What worries you about your industry? Because you're painting a very positive picture and you've been at the forefront of this, and you see much more potential.
So I understand where you're coming from, but in your quieter moments? Well I'm not a, I'm not a tech utopian, nor am I a tech dystopian. I think I'm actually the boring middle.
The boring middle wants to apply a much more pragmatic and scientific lens to this. So what worries me? Of course, any tool in the hands of the wrong mindset or wrong intention would worry me.
This, you know, like we said, since the dawn of human civilization, fire was such a critical invention for our species. Yet using fire to harm people is massively bad. So any wrong use of AI worries me.
The wrong way to communicate with the public worries me, because I do feel there's a lot of anxiety in different parts of the world. The one worry I have is our teachers, especially our K-12 teachers. These people, and my own personal experience tells me this, are the backbone of our society. They are so critical for educating our future generations.
Are we having the right communications with them? Are we bringing them along? Are our teachers using AI tools to supercharge their profession?
Are they passing that on, helping our children to use AI? This is the Bloomberg Weekend Interview. And so we're always interested in people's lives as well as their work.
And I realize that your life, the life that you're living today, is so different from the way that you grew up, working in your parents' dry cleaner, keeping it running. I still do a lot of laundry at home. Are you conscious of the power that you have as a leader in this industry, now and in the future?
That's an interesting question, Mishal. I'm conscious of my responsibility. I understand I'm one of the people who brought this technology to our world.
I understand that I'm so privileged to be working at, you know, one of the best universities in the world, educating tomorrow's leaders and doing cutting-edge research. I'm conscious of myself being an entrepreneur and the CEO of one of the most exciting startups in the Gen AI world. So everything I do has a consequence.
And that's a responsibility I shoulder. And I take that very seriously, because this is what I keep telling people: in the age of AI, the agency should be within humans. The agency is not the machines', it's ours.
My agency is to create exciting technology and to use it responsibly. And the humans in your life, your children, what do you not let them do with AI or with their devices or on the internet? Not only my own children, anyone.
I'm a teacher, an educator. It's the timeless advice of: don't do stupid things with your tools. You have to think about why you're using the tool and how you're using it.
It could be as simple as, don't be lazy just because there is AI, right? If you want to understand math, maybe large language models can give you the answer, but that is not the way to learn to ask the right questions. You could use the AI tool to prompt a good question, so that you learn from it.
So don't be lazy is one of them. The other side is don't use it to do bad things. For example, integrity of information, right?
Fake images, fake voices, fake texts. These are issues of AI as well as of our social-media-driven communication in our society. I think you're sort of calling for old-fashioned values amid this world full of new developments and challenges we couldn't have imagined even three years ago.
Yeah, you could call it old-fashioned, you could call it timeless. As an educator, as a mom, I think there are some human values that are timeless, and we need to recognize that. Okay, finally, your reading, because, you know, from the child who was already reading the classics at an early age: nowadays, what do you read when you're kicking back at the weekend, if that ever happens?
Yeah, honestly, I read a lot of technical papers these days. It's kind of sad. I do read, and I also read to my kids. I have to say my favorite book these days is Harry Potter, because it's such a great book.
And I read that at bedtime to my kids. Well, that series is going to keep you going for quite a long time, depending on which book you're at. Yes, I'm almost done.
So you're not taking away the children's books in their lives then? No, because they're living in a totally different time. They actually are exposed to information in a completely different way from my time; there was barely TV, right, in the 1980s and 90s in China. Now they have the entire internet.
So, I give them a Kindle. Yeah. And Chinese.
Do you speak to them in Chinese? I speak to them in Chinese, not great, but I do. And their father speaks to them in Italian.
Okay. Citizens of the world, as you've become as well. Yes.
Dr Fei-Fei Li, thank you so much. Thank you, Mishal.