It's a very intense time in the field. We obviously want all of the brilliant things these AI systems can do: come up with new cures for diseases, new energy sources, incredible things for humanity.
That's the promise of AI. But also, there are worries. If the first AI systems are built with the wrong value systems, or they're built unsafely, that could also be very bad.
We sat down with Demis Hassabis, the CEO of Google DeepMind, which is the engine of the company's artificial intelligence. He's a Nobel Prize winner and also a knight. We discussed AGI, the future of work, and how Google plans to compete in the age of AI.
This is The Big Interview. [Music] Well, welcome to The Big Interview, Demis. Thank you.
Thanks for having me. So, let's start talking about AGI a little here. Now, you founded DeepMind with the idea that you would solve intelligence and then use intelligence to solve everything else.
And I think it was like a 20-year mission. We're like 15 years into it, and you're on track? I feel like, yeah, we're pretty much dead on track, actually. That would be our estimate.
That means 5 years away from, you know, what I guess people will call AGI. Yeah. I think in the next 5 to 10 years. That would be, you know, maybe a 50% chance that we'll have what we've defined as AGI.
Yes. Well, some of your peers are saying 2 years, 3 years, and others say a little more. But that's really close.
That's really soon. How do we know that we're that close? There's a bit of a debate going on at the moment in the field about definitions of AGI, and then, of course, depending on that, there are different predictions for when it will happen.
We've been pretty consistent from the very beginning. Actually, Shane Legg, one of my co-founders and our chief scientist, helped define the term AGI back in, I think, the early 2001 type of time frame. And we've always thought about it as a system that has the ability to exhibit all the cognitive capabilities we have as humans. And the reason the reference to the human mind is important is that the human mind is the only existence proof we have, maybe in the universe, that general intelligence is possible.
So if you want to claim general intelligence, AGI, then you need to show that it generalizes to all these domains. When everything's filled in, all the check marks are filled in, then we have it. So I think there are missing capabilities right now, which all of us who have used the latest LLMs and chatbots will know very well, like reasoning, planning, memory. I don't think today's systems can do true invention, true creativity, hypothesize new scientific theories. They're extremely useful. They're impressive.
But they have holes. And actually, one of the main reasons I don't think we're at AGI yet is because of the inconsistency of responses. You know, in some domains we have systems that can do International Math Olympiad problems to a gold-medal standard, with our AlphaProof system, but on the other hand, these systems sometimes still trip up on high school maths, or even counting the number of letters in a word.
So that, to me, is not what you would expect. That level of difference in performance across the board is not consistent enough, and it therefore shows that these systems are not fully generalizing yet. But when we get it, is it then like a phase shift, where all of a sudden things are different, all the check marks are checked?
Yeah. You know, and we have a thing that can do everything. Are we then, pow, in a new world?
I think, you know, that again is debated, and it's not clear to me whether it's going to be more of a kind of incremental transition versus a step function. My guess is it's going to be more of an incremental shift. Even if you had a system like that, the physical world still operates with the physical laws, you know, factories, robots, these other things.
So it'll take a while for the effects of this sort of digital intelligence, if you like, to really be felt. There are theories on that too, where it could come faster. Yeah. Eric Schmidt, who I think used to work at Google, has said that it's almost like a binary thing.
He says if China, for instance, gets AGI, then we're cooked, because if someone gets it like 10 minutes before the next guy, you can never catch up, because it'll maintain bigger and bigger leads from there. You don't buy that? I guess I think it's an unknown.
It's one of the many unknowns. That's sometimes called the hard takeoff scenario, where the idea is that these AGI systems are able to self-improve, maybe code future versions of themselves, and maybe be extremely fast at doing that. So what would be a slight lead, let's say a few days, could suddenly become a chasm if that were true. But there are many other ways it could go too, where it's more incremental.
Some of these self-improvement things may not be able to accelerate in that way, and then being around the same time would not make much difference. But these issues and the geopolitical issues are important. I think the systems that are being built will have some imprint of the values and the norms of the designers and the culture that they were embedded in.
So, you know, I think these kinds of international questions are important. So when you build AI at Google, do you have that in mind? Do you feel a competitive imperative, in case that's true, of, oh my god, we'd better be first?
It's a very intense time at the moment in the field, as everyone knows. There are so many resources going into it, lots of pressures, lots of things that need to be researched, and lots of different types of pressures going on. We obviously want all of the brilliant things that these AI systems can do.
You know, I think eventually we'll be able to advance medicine and science with it, like we've done with AlphaFold: come up with new cures for diseases, new energy sources, incredible things for humanity. That's the promise of AI. But there are also worries. If the first AI systems are built with the wrong value systems, or they're built unsafely, that could also be very bad. And there are at least two risks that I worry a lot about.
One is bad actors, whether individuals or rogue nations, repurposing general-purpose AI technology for harmful ends. And then the second one is obviously the technical risk of AI itself: as it gets more and more powerful, more and more agentic, can we make sure the guardrails around it are safe and can't be circumvented? And that interacts with this idea of what the first systems built by humanity are going to be like. There's a commercial imperative, there's a national imperative, and there's a safety aspect to worry about, you know, who's in the lead and where those projects are. A few years ago, the companies were saying, "Please regulate us. We need regulation."
" And now in the US, at least, the current administration seems less interested in putting regulations on AI than accelerating it so we can beat the Chinese. Are you still asking for regulation? Do you think that that's a miss on our part?
I think, and I've been consistent in this, that there are these other geopolitical overlays that have to be taken into account, and the world's a very different place to how it was 5 years ago in many dimensions. But I also think the idea of smart regulation that makes sense around these increasingly powerful systems is going to be important. I continue to believe that. I think, though, and I've been saying this as well, that it sort of needs to be international, which looks hard at the moment in the way the world is working, because these systems are going to affect everyone and they're digital systems. So if you restrict it in one area, that doesn't really help in terms of the overall safety of these systems getting built for the world and as a society. So that's the bigger problem, I think: some kind of international cooperation or collaboration is what's required, and then smart regulation, nimble regulation that moves as the knowledge about the research becomes better and better.
Would it ever reach a point for you where you would feel, man, we're not putting the guardrails in, we're competing, and we really have to stop, or you can't get involved in that? I think a lot of the leaders of the main labs, at least the western labs, you know, there's a small number of them and we do all know each other and talk to each other regularly, and a lot of the lead researchers do too. The problem is that it's not clear we have the right definitions to agree on when that point is. Today's systems, although they're impressive, as we discussed earlier, are also very flawed, and I don't think today's systems are posing any sort of existential risk. So it's still theoretical, but the problem is there are a lot of unknowns. We don't know how fast those will come, and we don't know how risky they will be.
But in my view, even when there are so many unknowns, I'm optimistic we'll overcome them, at least technically, given enough time and enough care and thoughtfulness, sort of using the scientific method as we approach this AGI point. I think the geopolitical questions could actually end up being trickier. That makes perfect sense.
But on the other hand, if that time frame is right, we just don't have much time, you know. No, we don't have much time. I mean, we're increasingly putting resources into security, things like cyber, and also research into controllability and understanding of these systems, sometimes called mechanistic interpretability.
You know, there's a lot of different subbranches of AI safety research, like interpretability, that are being invested in, and I think even more needs to happen. And then at the same time, we need to also have more societal debates about institution building.
How do we want governance to work? How are we going to get international agreement, at least on some basic principles, around how these systems are used and deployed, and also built? What about the effect on work, on the marketplace?
How much do you feel that AI is going to change people's jobs, you know, the way jobs are distributed in the workforce? My view is, if you talk to economists, they feel like not much has changed yet. People are finding these tools useful, certainly in certain domains. With things like AlphaFold, many scientists are using it to accelerate their work, so it seems to be additive at the moment. We'll see what happens over the next 5, 10 years. I think there's going to be a lot of change in the jobs world, but I think, as in the past, what generally tends to happen is that new jobs are created that are actually better, that utilize these tools or new technologies. That's what happened with the internet, what happened with mobile.
We'll see if it's different this time. Obviously, everyone always thinks this new one will be different, and maybe it will be. But I think for the next few years, it's most likely that we'll have these incredible tools that supercharge our productivity, that are really useful as creative tools, and that actually almost make us a little bit superhuman in some ways in what we're able to produce individually.
So I think the next period is going to be a kind of golden era of what we're able to do. Well, if AGI can do everything humans can do, then it would seem that it could do the new jobs, too. That's the next question, about what AGI brings.
But, you know, even if you have those capabilities, there are a lot of things I think we won't want to do with a machine. I sometimes give this example of doctors and nurses. What the doctor does, the diagnosis, one could imagine that being helped by an AI tool, or even having an AI kind of doctor. On the other hand, nursing, I don't think you'd want a robot to do that.
I think there's something about the human empathy aspect of that, and the care and so on, that's particularly humanistic. I think there are lots of examples like that, but it's going to be a different world for sure. If you were to talk to a graduate now, what advice would you give them to keep working through the course of a lifetime, you know, in the age of AGI?
My view currently, and of course this is changing all the time as the technology develops, but right now, if you think of the next 5 to 10 years, the most productive people might be 10x more productive if they are native with these tools. So for kids today, students today, my encouragement would be: immerse yourself in these new systems, understand them. I still think it's important to study STEM and programming and other things so that you understand how they're built.
Maybe you can modify them yourself on top of the models that are available; there are lots of great open-source models and so on. And then become incredible at things like fine-tuning, system prompting, system instructions, all of these additional things that anyone can do. Really know how to get the most out of those tools, use them for your research work, programming, things that you're doing on your course, and then come out of that being incredible at utilizing those new tools for whatever it is you're going to do.
Let's look a little beyond the 5- and 10-year range. Tell me what you envision when you look at our future in 20 years and in 30 years, if this comes about. What's the world like when AGI is everywhere?
Well, if everything goes well, then we should be in an era of what I like to call radical abundance. So, you know, AGI solves some of these key problems facing society, what I sometimes call root-node problems. Good examples would be curing diseases, much healthier and longer lifespans, finding new energy sources, whether that's optimal batteries, better room-temperature superconductors, fusion. And if that all happens, then it should be a kind of era of maximum human flourishing, where we travel to the stars and colonize the galaxy.
I think the beginning of that will happen in the next 20 to 30 years, if the next period goes well. I'm a little skeptical of that. I think we have an unbelievable abundance now, but we don't distribute it fairly.
I think that we kind of know how to fix climate change, right? We don't need an AGI to tell us how to do it, yet we're not doing it. I agree with that. I think we've been, as a species, a society, not good at collaborating.
And I think climate is a good example. I think humans are still operating with a zero-sum-game mentality, because actually the earth is quite finite relative to the number of people there are now and our cities. And I mean, this is why our natural habitats are being destroyed, and it's affecting, you know, wildlife and the climate and everything.
And it's also partly because we do know what to do now to figure out the climate, but it would require people to make sacrifices, and people don't want to. But this radical abundance would be different. We would finally be in what would feel like a non-zero-sum game.
How would we get from here to that? Like, you talk about disease. I'll give you an example.
We have vaccines, and now some people think we should... Let me give you a very simple example: water access. This is going to be a huge issue in the next 10 to 20 years.
It's already an issue in different countries, poorer parts of the world, drier parts of the world, and it's obviously compounded by climate change. We have a solution to water access.
It's desalination. It's easy. There's plenty of seawater.
Almost all countries have a coastline. But the problem is it's salty water, and desalination costs a lot of energy, so only some very rich countries use it as a solution to their freshwater problem. But if energy was essentially free, if there was renewable, clean energy, right?
Like fusion, suddenly you solve the water access problem. Who controls a river, or what you do with that water, becomes much less important than it is today. If you roll forward 20 years and there isn't a solution, things like water access could lead to all sorts of conflicts; that's probably the way it's trending, especially if you include further climate change. And there are many, many examples like that.
You could create rocket fuel easily, because you just separate hydrogen and oxygen from seawater. It's just energy again. So you feel that these problems get solved by AGI, by AI.
And then our outlook will change and we will be... That's what I hope, yes, that's what I hope, but that's still a secondary part. The AGI will give us the radical abundance capability technically, like with water access. I then hope, and this is where I think we need some great philosophers or social scientists to be involved, that it should shift our mindset as a society to non-zero-sum.
You know, there's still the issue of whether you divide even the radical abundance fairly. Of course that's what should happen, but I think it's much more likely once people start feeling and understanding that there is this almost limitless supply of raw materials and energy and things like that. Do you think that driving this innovation through profit-making companies is the right way to go? That we're most likely to reach that optimistic high point through that?
I think capitalism, or the current western sort of democratic systems, have so far been proven to be the best drivers of progress. So I think that's true. My view is that once you get to that stage of radical abundance and post-AGI, I think economics starts changing, even the notion of value and money.
And so again, I'm not sure why economists are not working harder on this. Maybe they don't believe it's that close, right? But if they really did, like the AGI scientists do, then I think there's a lot of new economic theory that's required. You know, one final thing.
I actually agree with you that this is so significant and it's going to have a huge impact, but when I write about it, I always get a lot of responses from people who are really angry already about artificial intelligence and what's happening. Have you tasted that? Have you gotten that pushback and anger from a lot of people?
It's almost like the Industrial Revolution, people... Yeah. I haven't personally seen a lot of that, but obviously I've read and heard a lot about it, and it's very understandable. That's happened many times, as you say with the Industrial Revolution, when there's big change, a big revolution, and I think this will be at least as big as the Industrial Revolution, probably a lot bigger. That's surprising, there are unknowns, it's scary, things will change. But on the other hand, when I talk to people about the passion of why I'm building AI, which is to advance science and medicine and understanding of the world around us, and then I explain to people, and I've demonstrated it's not just talk here: here's AlphaFold, you know, a Nobel Prize-winning breakthrough that can help with medicine and drug discovery.
Obviously, we're doing this with Isomorphic now to extend it into drug discovery, and we can cure diseases, terrible diseases that might be afflicting your family. Suddenly, people are like, well, of course we need that. It would be immoral not to have that if it's within our grasp.
And the same with climate and energy, you know, many of the big societal problems. We've talked about how there are many big challenges facing society today. And I often say I would be very worried about our future if I didn't know something as revolutionary as AI was coming down the line to help with those other challenges.
Of course, it's also a challenge itself, right? But at least it's one of these challenges that can actually help with the others if we get it right. Well, I hope your optimism holds out and is justified.
Thank you so much. I'll do my best. Thank you.