What keeps you up at night? For me, it's this question of international standards and cooperation, not just between countries but also between companies and researchers, as we get towards the final steps of AGI. And I think we're on the cusp of that. Maybe we're 5 to 10 years out.
Some people say shorter. I wouldn't be surprised. It's sort of a probability distribution, but it's coming.
So either way, it's coming very soon. Demis Hassabis co-founded DeepMind in 2010 and is now the CEO of the company, which was sold to Google in 2014. In 2024, Hassabis shared the Nobel Prize in Chemistry for the development of AlphaFold, an AI system that predicts the 3D structure of proteins.
In March, Time reporter Billy Perrigo interviewed Hassabis in London for the 2025 Time 100 list. For the purposes of the video, could you explain what AGI stands for and what it means, in a sentence? AGI stands for artificial general intelligence, and we define that as a system that is capable of exhibiting any cognitive capability humans have.
Could you talk about AGI, and when you first realized that it might be the key to unlocking not just individual scientific discoveries, but a whole swath of them? So we've always been interested in building general AI, or AGI, from the beginning of DeepMind. That was always the aim.
In fact, that was the original aim of AI as a field back in the 1950s. So in some senses, we're realizing that grand dream. Along the way, you can use those general techniques for specialized solutions to problems. AlphaFold is a good example of that, where the problem is well specified.
It's of enormous value to society, in this case to biology and medical research, in its own right for what it can do. So it doesn't really matter what methods you use, but you start with the general methods and then you add some specializations on top. It still uses neural networks and all the techniques we built for our games, and then it adds some new things for proteins specifically.
Of course, simultaneously we've been advancing our general AI techniques, and now that's obviously in the world of language, but also more recently multimodal foundation models that can understand not just language, or play a game, but the entire spatial world context that you're in, and start understanding things like the physics of the world, and be able to process things like images and video and sound. So obviously this technology, if it's created, will be very impactful. Could you paint the best-case scenario for me?
What does this world look like if we create AGI? So the reason I've worked on AI and AGI my entire life and career is because I believe that, if it's done properly and responsibly, it will be the most beneficial technology ever invented. The kinds of things I think we could use it for, winding forward 10-plus years from now, are potentially curing maybe all diseases with AI, and helping to find or develop new energy sources, whether that's fusion, or optimal batteries, or new materials like new superconductors.
I think some of the biggest problems that face us today as a society, whether that's climate or disease, will be helped by AI solutions. So if we wind forward 10 years, I think the optimistic view is that we'll be in this world of maximum human flourishing, traveling the stars, with all the technologies that AI will help bring about. You've also been quite vocal about the need to do this responsibly, to avoid the risks.
Could you paint the worst-case scenario for me? Sure. Well, look, the worst case I think has been covered a lot in science fiction.
I think the two issues I worry about most are these. AI is going to be a fantastic technology if used in the right way, but it's a dual-purpose technology, and it's going to be unbelievably powerful. What that means, though, is that bad actors, or would-be bad actors, can repurpose that technology for potentially harmful ends.
So one big challenge we have as a field and as a society is: how do we enable access to these technologies for the good actors, to do amazing things like cure terrible diseases, while at the same time restricting access to those same technologies for would-be bad actors, whether that's individuals all the way up to rogue nations? That's a really hard conundrum to solve. The second thing is AGI risk itself, risk from the technology as it becomes more autonomous and more agent-like, which is what's going to happen over the next few years, because that will make these systems more useful for all the good users.
But how do we ensure that we can stay in charge of those systems, control them, interpret what they're doing, understand them, and put the right guardrails in place that are not movable by very highly capable systems that are self-improving? That is also an extremely difficult challenge. So those are the two main buckets of risk.
If we can get them right, then I think we'll end up in this amazing future. That's not a worst-case scenario, though. What does the worst-case scenario look like?
Well, I think if you get that wrong, then you've got all these harmful use cases being carried out with these systems. That can range to doing the opposite of what we're trying to do: instead of finding cures, you could end up finding toxins with those same systems. In a lot of cases, if you invert the goals of the system, the good use cases become the harmful ones.
And as a society, this is why I've been in favor of international cooperation around this: because wherever or however these systems are built, they can be distributed all around the world, and they're going to affect everyone, pretty much every corner of the world.
So we need international standards, I think, around how these systems get built, what sort of designs and goals we give them, and how they're deployed and used. There's a lot of talk in the AI safety world about the degree to which these systems are likely to do things like power-seeking, to be deceptive, to seek to disempower humans and escape their control. Do you have a strong view on whether that's the default path, or is that a tail risk?
What's your perception? My feeling on that is that the risks are unknown currently. There are a lot of people, my colleagues and famous Turing Award winners, on both sides of that argument. Some, like Yann LeCun, would say there's no risk here, it's all hype, and then there are other people who think we're doomed by default: Geoffrey Hinton and Yoshua Bengio, people like that. I know all these people very well, and I think the right answer is somewhere in the middle. If you look at that debate, there are very smart people on both sides of it, and what that tells me is that we don't know enough about it yet to actually quantify the risk.
It might turn out that as we develop these systems further, it's way easier to align these systems, or keep control of them, than we thought or expected hypothetically from here. Quite a lot of things have turned out like that so far; they've been easier than people thought, including making them useful to the world: with just some fairly simple RLHF fine-tuning on top of these models, suddenly they become useful chatbots.
So that's interesting. There's some evidence that things may be a little bit easier than some of the most pessimistic people were thinking, but in my view there's still significant risk, and we've got to do research carefully to quantify what that risk is, and then deal with it ahead of time, with as much foresight as possible, rather than after the fact, which, with technologies this powerful and this transformative, could be extremely risky. It seems like whatever the answer to that question, the impact on society is going to be transformative to a level that we haven't seen in our lives.
Yeah. You're a dad. Yeah.
How are you thinking as a parent about how to bring a child up in a world where so much is likely going to radically change? Well, I think we've seen a lot of change even in our lifetimes. If I think back to my childhood, it was the dawn of the computer age, and I was working on my first ZX Spectrum, which I got when I was a small kid, and started programming. Then there was my early games-industry work, when I was doing AI for games like Theme Park. And today we've got systems like Veo that create entire realistic videos. It would have been hard to dream about that 20 or 30 years ago, and yet we cope with it; we seem to adapt. I think human beings are infinitely adaptable.
I think that's a good thing about us. We normalize to whatever is going on with our technology today, smartphones and computers and the internet all around us, and kids these days treat it as second nature. And I suspect that's what's going to happen with this.
What I'd recommend, though, just like we did in the computer age, is that you've got to embrace the coming change, learn about the tools, and learn how to work effectively with them and make the best use of them. I think you'll end up being superpowered in some way, both creatively and productivity-wise, if you use them in the right way. And I think that's probably the next stage we're going to go through.
The kids these days who are growing up with these tools will probably learn all sorts of new workflows that will be a lot more efficient than we can imagine today. Is there anything that you do differently as a parent that you might not have done if AGI weren't on the horizon? No. I get asked this question a lot.
For example, is it worth learning programming and mathematics, and even things like chess, to train your own mind? I think it is. Take programming: the nature of programming is changing, and it may well change very radically in the next few years, and in some ways that will democratize it, because we'll start programming with natural language instead of with programming languages. So then the valuable part of that starts shifting more towards the creatives and the designers.
So it's going to be a pretty interesting time. But I think the people who will get the most out of that will still be the ones with a deep technical understanding of what these tools are doing, how they were made, and therefore what their limitations are and what their strengths are that you can use. What keeps you up at night?
For me, it's this question of international standards and cooperation, not just between countries but also between companies and researchers, as we get towards the final steps of AGI. And I think we're on the cusp of that. Maybe we're 5 to 10 years out. Some people say shorter.
I wouldn't be surprised. It's sort of a probability distribution, but it's coming. So either way, it's coming very soon.
And I'm not sure society's quite ready for that yet. We need to think that through, and also think about these issues I talked about earlier, to do with the controllability of these systems and the access to these systems, and ensuring that all goes well. So there are a lot of challenges ahead, and a lot of research and a lot of discussions that need to be had.
What TV, movies, or books do you think get AI right, and why? So, for showing how useful AI could be, I really like the robots from Interstellar.
They're extremely helpful, extremely knowledgeable, and in the end self-sacrificing, and they also have a lot of humor. I think that's a good example of how robot assistants or helpers could be very useful in the world. Then, maybe on the darker side, but it also inspired me when I was young, there's Blade Runner and things like that, with interesting questions about autonomous systems: are they conscious? That was the whole dilemma. Blade Runner was a philosophical piece in some sense, and it's very interesting on the nature of being human.
I mean, that's a question that's coming up more and more now, right? Are these systems on the verge of consciousness? Yeah, my feeling is they're not at all, currently.
My recommendation would be that if we have the choice, and if we understood what consciousness was, we should first build systems that are definitely not conscious, a kind of tools, and then we can use those tools to better understand our own minds, and maybe this phenomenon of consciousness that we all feel. Then once we understood that, and one of the things I want to use AI for in the sciences is to advance neuroscience, then maybe we could think about taking that next step. Nice. Final question.
You have an opportunity to host a dream dinner party. You can invite anybody, alive or dead. Say, four guests.
Who would you choose? Oh, wow. That's really hard to narrow down.
You can have six if you want. Yeah. I think I would probably invite many of my scientific heroes.
So, for sure Alan Turing, Richard Feynman, and maybe Newton and Aristotle. What do you reckon the conversation would be like there? Well, I'm pretty sure with that set of people it would be very philosophical, around maybe these questions about what the limitations of these AI systems are, and what that tells us about the nature of reality. I think it does tell us a lot, and will tell us a lot, about what's going on in the universe around us.
I think AI is going to be the ultimate tool for science, and certainly that's what has always been my passion, and what I plan to use it for. Actually, one more question while we have you on video. It's quite clear you see yourself as a scientist first and foremost.
Do you see yourself more as a scientist, or a technologist? You're far away from Silicon Valley, in London. How do you identify?
Yeah, I identify as a scientist first and foremost. The whole reason I'm doing everything I've done in my life is the pursuit of knowledge, and trying to understand the world around us. I've been kind of obsessed with that, with all the big questions, since I was a kid. For me, building AI is my expression of how to address those questions: first build a tool that in itself is pretty fascinating, and is a statement about intelligence and consciousness and these things that are already some of the biggest mysteries; and then it's dual-purpose, because it can also be used as a tool to investigate the natural world around you, like chemistry and physics and biology.
So what more exciting adventure and pursuit could you have? I see myself as a scientist first, and then maybe an entrepreneur second, mostly because that's the fastest way to get things done. And then finally maybe a technologist or engineer, because in the end you don't want to just theorize and think about things in a lab.
You actually want to make a practical difference in the world. I think that's where the engineering part of me comes in.