So I think one of my favorite parts of the film is the part where the team has just told you, well, we could just find the structures for all the proteins and just release those. And then you release them to the world, and you see the map of the globe light up as people, in real time, are getting all of those structures. What was that like?
Tell me, what was the feeling that you had? Well, look, there were so many amazing moments, and the team will remember this, but that was one of the big highlights. It was very satisfying to see this idea come true: that if we cracked this really important problem, potentially millions of researchers around the world would make use of it. To see that lighting up all across the globe is a really humbling and amazing experience.
I came here for the AI for Science Forum, which you held, and the thing that shocked me is that over 50 years, the work of tens of thousands of scientists revealed the structures of 150,000 proteins. That was the grand sum of human effort. And then in a few years, your team, a small group of 15 to 20 people, was able to find the structures of 200 million.
Yeah. Well, look, the first thing to say is we couldn't have done it without the first 150,000, right? So we need to thank the structural biology community: thousands of researchers painstakingly putting together those structures using very exotic, pretty expensive, and complicated equipment over 50 years, like you say, and the sum total is 150,000. But it was enough to kickstart us, to let us create a system like AlphaFold that could learn from those 150,000, then learn further from the best of its own predictions, feeding those back into the system, and eventually become good enough to understand something fundamental about protein structure.
So then eventually we could do all 200 million. And as John says in the film, it usually takes a PhD student their whole PhD to find the structure of one protein; that's the rule of thumb. So 200 million times five years is a billion years of PhD time, which is quite something to have done in a year.
See, I feel like I didn't get it before I came here, and then I heard those numbers and thought, oh, things have fundamentally changed, and I don't think the world gets it yet. So I think that's one of the exciting things about this film. And another thing that's really important to keep in mind: you figured out all 200 million and now they're out there, but the discoveries and breakthroughs that will come from that are going to take decades. We are going to be reaping the rewards for decades, centuries, I think.
So it's sort of opened things up, and this is why we put it out there into the world. We knew we could only think of a tiny fraction of what the entire scientific community might do with it. And it's really gratifying to see the whole range of things people are already doing.
Over two and a half million researchers from pretty much every country in the world are working on really important biology and medical problems and making great progress. Right now it's super well known in the scientific community, but as you say, I don't think it's appreciated yet by the general public what this is going to do. I think that will come in the next five to ten years, as we start getting AI-designed drugs that were helped by things like AlphaFold, and many other amazing things for society that will come as a downstream consequence of us knowing what these structures are.
Now, can you think of any examples that have happened since the film? Well, there are many; a few of them were mentioned in those headlines. There are these ideas of designing enzymes, which are types of proteins that catalyze certain reactions. Maybe we could modify some of these enzymes to help deal with some of the environmental issues we have, like the amount of plastic in the oceans, or perhaps even do carbon capture, things like this. An incredible opportunity. And obviously the main reason I was interested in protein folding was to accelerate drug discovery. We spun out a sister company called Isomorphic Labs that is developing other technologies around AlphaFold, and newer versions of AlphaFold, so that not only do you understand the structure of a protein, but you can then design a drug compound to bind to the right part of the protein surface once you understand what its structure and function are. That's the beginning of understanding disease and maybe trying to cure some of these terrible diseases.
And we're working on cancers and cardiovascular diseases, all sorts of things, more than a dozen drug programs. One day, I hope, we'll be able to reduce drug discovery from taking ten years on average to go from understanding a target to having a drug in the clinic, down to maybe a matter of months, perhaps even weeks, just like we did with the protein structures. Yeah, that's extraordinary.
I wanted to ask you about your origin story. Here's my thinking: AI, in a way, is not new; it dates back to the 1940s and 50s, and it went through a series of booms and then busts, or AI winters, as people refer to them. In the film you said there's no point in being born 50 years ahead of your time. So my question is: when you were graduating from Cambridge, that was kind of an AI winter. Did you see something that other people didn't see, which led you to know the time for AI was coming? Or were you just obsessed with this idea of intelligence and ridiculously lucky to be born in this moment? Well, look, it's a bit of both, I would say.
Actually, there are many people in the audience, many of my colleagues and friends, who've been with me for almost that entire journey. You saw some of them: David Silver, Ben Coppin, and Shane Legg, who will remember this very well, and Tim Stevens. Look, I have to be honest: I would have done it no matter what, because when I was growing up, and you saw that with the chess and other things, I just felt that intelligence, and therefore artificial intelligence, was the most fascinating thing one could work on. My passion was always to try to understand the universe around us, what's sometimes called the nature of reality, all the big questions.
So physics was my favorite subject at school, and I admired all the great physicists: Richard Feynman, Steven Weinberg, Carl Sagan. But I thought we needed another helping hand, a tool that could help us as human scientists understand the world around us better. And it was obvious to me from the beginning, when I was a teenager, that it would be AI: not only maybe the most powerful tool to help us do science, but the most interesting thing to develop in itself, interrogating what intelligence is and trying to understand it while you're trying to build something that is intelligent.
So I think I was always going to do that. But also, when you look at those AI winters and the state of the technologies, you have to have a good reason why you think you might be able to try it in a new way. Those winters are, in a way, opportunities to learn why those methods didn't work. The Deep Blue methods that we saw beat Garry Kasparov were amazing; they could win at chess, but they were really a bit of a dead end, because they were hardcoded to do only that one thing, play chess. In some sense they were missing the essence of intelligence: this generality and this learning capability. And we knew we had techniques, very nascent at the time, that could address that: neural networks, which became deep learning, and then reinforcement learning, as you heard. We knew those techniques could potentially scale. Why did we know that? Because the human brain is a form of them. We're a neural network; that's what inspired artificial neural networks in the first place, neurons in the brain. And reinforcement learning is one of the main ways that animals, including humans, learn; the dopamine system in the brain implements a form of reinforcement learning.
So in the limit, this must be possible using these types of learning techniques. But of course you don't know at that point whether you're 50 years ahead of your time or not. But I just want to be clear on what you're saying.
In essence, you're saying that the AI models you're currently working with are in some sense analogous to the human brain, or the human brain is analogous to them? Very loosely speaking, they're inspired by the same types of techniques and approaches that biological learning systems use. That's the key: the learning and the generality. Do you think, then, that at some point AI will be conscious? Well, that's a huge question, and obviously there are no universally agreed-upon definitions of consciousness, though there are aspects of it, like self-awareness, that are agreed upon. I always felt that answering that question would be one of the things that comes from being on this journey with AI: trying to build artificial minds, comparing them to what we know about the human brain, and seeing what the differences are, if any. Those differences might help us understand our own minds better: things like dreaming, emotions, creativity, and consciousness, all the mysteries of the mind. It could help us understand them, and then maybe understand how special they are to the substrate that we're in.
You know, we're carbon-based, versus the silicon-based systems that we're building. You started DeepMind here in London, and you had certain forces, investors maybe, trying to pull you to Silicon Valley, but you resisted. Tell me what it was about this place or the culture that made you want to stay here.
Well, look, I was born in London and I've lived in London my whole life, and I think there are a lot of amazing things about the cultures I was immersed in. You saw me going to Cambridge, and there's the golden triangle of Oxford, Cambridge, and Imperial, and nearby UCL, all these august institutions.
I think the UK has always been very strong in science and innovation; we punch well above our weight. There's also a rich history in computing, with Charles Babbage and Alan Turing.
So I feel we're trying to carry on in that tradition. But there were some practical reasons too. One is that at the time we started, in 2010, there was a lot of talent trained by these top places who, unless they wanted to go and work for a hedge fund or something in the City, in finance, wanted to do something really intellectually challenging.
There weren't that many companies doing that kind of work in the UK, or really in Europe. So I felt we could quickly gather a lot of talent that was probably being underutilized in Europe, and that's how it transpired. But the second reason is that I think AI is so important.
It's going to affect the whole world. You've heard me say in the film that I think it's going to be one of the most important things ever invented. I felt it needs an international approach and cooperation around what we want to do with this technology,
how we want it to be deployed, how we want it to affect our society. It's going to affect everyone, in all countries. So I think it needs to be built with more voices and stakeholders than just a hundred square miles of California, in Silicon Valley, and with more than just the technologists and scientists building it.
I think it needs social scientists, economists, psychologists, governments, academia, all involved in defining how this enormously transformative technology should go. Yeah. Well, it's clearly going to be very powerful, and one of the issues the film addresses is the morality and ethics around that, and I think particularly the safety of it.
What keeps you up at night when you think about AI? Well, many things; I don't get much sleep these days, for many reasons. But Shane and I will remember this: we started out in 2010, only 15 years ago. It's kind of amazing to see how the world's changed.
In 2010, no one was talking about AI. Nobody in industry was doing it. But we knew this had the kernel of something incredibly important.
And we planned for success. We thought it was going to be a 20-year journey, and often when you say that in technology, in startups and the hard sciences, it somehow always stays 20 years away, right? But for us it really has been 20 years, and we're about 15 years in now.
We planned for success, but we knew that success meant all these amazing things: curing diseases, solving the energy crisis, climate, using AI to help with all of these things. But it also came with risks: risks of harm, enormous risks of misuse. So from the beginning we've been very cognizant of that responsibility.
And we've also tried to push that debate and be role models for how to develop this technology in a responsible way. Is this potentially unstable, in that you could have a hundred companies with the utmost ethics and morality, who think about safety to an extreme level, and one actor who doesn't? Yeah.
And then it ruins it for everyone. Yeah. Well, that's one of the huge risks that I worry about today: so-called race dynamics, right?
A race to the bottom. There are many examples of this in history, and even in an environment where all the actors are good, let alone one with some bad actors, it can drag everyone into rushing too quickly and cutting corners. It's a sort of tragedy-of-the-commons situation: for any individual actor it makes sense, but in aggregate it doesn't. I've been saying that for a long time, and Shane and many others, Helen and the people who work on responsibility at DeepMind, have been talking a lot about this. That's why I was so pleased to see some of these international summits being set up:
the first one in the UK, at Bletchley Park, and then just recently in Paris, hosted by President Macron. I think we need those kinds of international debates about where this is going. And one of the big problems is how we give access to these technologies. You've seen with AlphaFold: open to the world, open science. Obviously that's better for progress, because all the good researchers and good people around the world can build on top of that work and do amazing things with it. But at the same time, you want to restrict access to that same technology for would-be bad actors, whether individuals or even rogue nations. It's a very hard balance to get right, and no one yet has a good answer for how to do both of those things. I think initially I was encouraged by the amount of effort required to develop AI. There are many references in the film to the Manhattan Project, and one of the few benefits of nuclear weapons is that in order to develop them you basically need state sponsorship, a huge undertaking. Initially, AI looked the same way: it was going to take huge tech companies or states to develop it. But lately there are these new developments like DeepSeek, and there's an Alibaba model, and they look much more thrifty.
Yeah. Which I think raises a fear that this really democratizes access to the technology, increasing the probability of a bad actor. Yeah.
So look, you're exactly right, and it's very good on the one hand: more people accessing these technologies, hobbyists, kids like I was back when I was tinkering around with Theme Park, can now work on some really interesting AI systems and probably come up with amazing new applications. But yes, it's available to everyone, and that is worrying. I feel like maybe we need some new approaches, where the market environment or something else is set up to incentivize the right behavior. I was talking to some economist friends of mine, and maybe they need to get involved now to set up the right incentive structures, so that the players and actors with the right intentions, backed by government and society, are the ones that succeed, and their AI systems are the more powerful and more productive ones. Maybe we have to start thinking about those kinds of approaches to deal with the practical situation we're in. I'd much rather there were a calm, CERN-like effort toward AGI for these final few steps, but given the geopolitical framework we're in, maybe that's not possible.
So we have to be more pragmatic about it. For sure. In the film, you talk about how the future will be radically different.
So I want to ask, for myself and for everyone in this audience: given that you're one of the leaders at the forefront of this, what do you think the world will be like in five to ten years? Do you have an outlook on that?
And further to that, I have four kids. What do I do? Do I send them to school? Is that even worthwhile anymore?
So you are the guy I want to ask this question, more than anyone in the world. Sure. Well, let's start with that question.
For sure, send them to school. I say that to my kids too.
Look, I think the next five to ten years are going to be amazing. What I would say to kids these days is: embrace the new technologies, and as parents, let your kids play with them. They're coming, and they're going to increase productivity and creativity. It's a bit like my generation with the advent of computers; there were a lot of fears about that too, and even about gaming.
And then people work it out; if you're growing up with something, it feels natural, second nature. And those kids are often the ones who can extend it in new ways we couldn't even dream of today. So I think a lot of that's going to happen.
So I still think it's important to do maths and computer science, because you'll be best placed to take advantage of these frontier technologies and use them in new ways. So the recommendation, I think, is the same as it's always been. Maybe just be prepared that things are going to move even faster, and learn about adapting; learning to learn, actually, learning to adapt quickly to the new technology that seems to come out almost every week. In terms of society, five to ten years is a long time in AI, hard to predict that far ahead, but what I certainly imagine in the sciences is a new renaissance, almost a new golden age, which I hope AlphaFold is just the beginning of: us understanding and making breakthroughs in many areas of science, helping us with all the biggest questions, from curing diseases to new energy sources and climate.
And I think we're going to start seeing all of that in the next ten years. That's extraordinary. Well, I look forward to it.
I hope you do as well. We're going to leave it there. But congratulations on all your great work and on winning the Nobel Prize; it's just tremendous.