Hey, everybody. Trevor Noah here. And today, I'm really excited because we're going to be joined by Juan Lavista Ferres, a driving force behind Microsoft's AI for Good Lab.
Now, all over the globe, forward-thinking organizations are looking at ways to use AI to confront some of humanity's greatest challenges. And this includes everything from reshaping disaster response, all the way through to expanding healthcare accessibility. And today, I have one pressing question: Why is it that, in some cases, AI isn’t just a solution, it is the only solution that we might have?
You can’t find answers without asking the right questions. This is The Prompt, with me, Trevor Noah. Juan, ¿cómo estás?
Bien, todo bien. It's so good to be here with you because I've had the extreme privilege of traveling around the world with Microsoft, looking at AI for Good, and why it's so necessary. Maybe you can just give me a really short introduction as to how you became really the face within the organization in looking at using AI for Good, particularly.
So, this is an effort that started in Microsoft. We wanted to see how we could use AI to help with some of the world's biggest problems. Right.
And early on, what we realized is that by partnering with organizations, we could work together to solve some of these problems. There are some issues that seem really pressing because they are, and there are some issues that need to be solved before they become pressing. How do you and your team decide how to deploy the resources?
We start with multiple projects, where either we reach out to the organizations or the organizations reach out to us. We work with them to try to understand the problem. From those projects, we try to see, okay, out of all these projects, for how many is there data that we can use?
In order for us to do our job, we need data. So, let's talk about the satellites. Yes.
This is an actual satellite from Planet Labs. Okay. They have over 200 of these satellites, and they are all in a line, and while the earth rotates, all of these satellites are taking a picture of every single small piece of the earth, all at noon.
Every pixel in the pictures they take covers roughly three meters. They're taking a picture of the entire earth. Every single day at noon.
So you want data that's as clear as possible for the AI to work from. For the first time, we have pictures around the world every day at very high resolution. And the purpose of that is to get what information, to use for what purpose?
It depends on the problem. For example, in the case of the Amazon, we're using it to monitor deforestation and to monitor illegal mining. We're using this data to help the UN and the UN agencies have better information.
So, for early disaster response, but also disaster preparedness. In the case of the work we are doing with IHME, the goal is to map where all the structures are. If we have the housing, we can understand population, we can understand services, we can understand roads.
So, now it stands to reason, if you're getting thousands of pictures from one of these, and you have 200 of these taking multiples of those, you don't have enough humans to look at each piece of information. It would be impossible. Exactly.
Now using AI, we've basically been able to understand where people are living, how things are changing because of disasters, where weather might be impacting the planet. This is fantastic. Yeah.
Is there a future where you will get one of these for me, for my house, so I know who keeps eating my yogurt? We could likely work with Planet on that. Let me show you the other project that we have that is related to retinopathy of prematurity.
Retinopathy of prematurity is now one of the leading causes of blindness in children. Okay. This disease affects very small, premature babies that, before, wouldn't have survived.
Now, thanks to improvements in healthcare, thanks to improvements in medicine in general, more of these babies are surviving, which is great news. But a lot of those babies are not ready to live on the planet yet. Their retinas are not completely developed.
Okay. Once you have a severe case of retinopathy of prematurity, the doctors have a very small window, usually 24 hours, where they need to diagnose the disease, and they need to do a surgery. 24 hours.
If you have a case like that, blindness through ROP is completely preventable. So we've been working with organizations in Mexico, in Colombia, and in Argentina, to develop an AI algorithm that we run on a phone. And using this, this is an ocular lens, and just using a phone, you will be able to use this lens to take a picture of the retina of the child.
Then you can have an algorithm that runs on that and detects, as well as a doctor, the chances that that child suffers from ROP. Eighty percent of the world has smartphones. But before we had this technology, doctors would have to buy very expensive equipment, like $40,000, $100,000 equipment, just to take the picture.
I mean, that's just absolutely fascinating. I've heard rumors, and this is one of the reasons I'm here today, that it's not just images. I hear that AI is helping you understand sounds?
Let me show you. This is the project that we're doing. We call it Project Guacamaya.
This is a collaboration between Microsoft, my team, the Universidad de los Andes in Colombia, and the Humboldt Institute. What the Humboldt Institute has been doing, since 1992, is recording sounds in different parts of the Amazon. Once you have these recordings, you would have an expert who would just listen to the recordings and try to understand the health of the forest.
Wait, wait, wait, Juan, you can't just move that quickly. Sorry. You said, you said someone would listen to the sound of the Amazon forest?
Yes. And they would, through listening to that sound, have to try... To detect it. To detect the health of the forest.
Yes, for example, understanding the different types of animals that are there requires a lot of expertise. I mean, I can only imagine. This is, for example, a frog.
What you see here is the representation of the acoustic data. So, once you listen to this, this is just a frog. But of course, the frog is not by itself.
All of these animals, all of these sounds, are on different frequencies and they have a different fingerprint. The sound fingerprint is different. Okay.
First, you have an expert who says: this, what you see here in that frequency, that's a frog, and a specific frog. What the algorithm can do is learn from that and identify that this is a frog, that this is a macaw. The great thing about AI, which before wasn't possible, is the fact that now they can make recordings, just upload the recordings to these algorithms, and the algorithms will do the classification and understand how the ecosystem has been changing in different parts of the Amazon.
A majority of these projects are in this book we are launching. This is our AI for Good book. One of the things that is important is to show the type of problems that you can solve using AI, so other scientists can say, "Hey, my project looks a lot like that project." Right. "Maybe I can use this technology." So, the whole objective of the book is to showcase some of these projects. It's wonderful to sit with you like this and bring it all together, looking at all of the work that the AI for Good Lab is really doing, and has been doing. So, thank you very much.
Trevor, thank you very much. Muchas gracias.