You play this forward in your brain. You've been in the tech industry for a long time, and from looking at your work, it feels like you're describing this as the most transformative, potentially harmful technology that humans have ever seen. Maybe alongside the nuclear bomb, I guess, but some would say even potentially worse, because of the nature of the intelligence and its autonomy.
You must have moments where you think forward into the future and your thoughts about that future aren't so rosy. >> Well, of course I have those moments. Yes, but let's answer the question.
I said, think five years. In five years, you'll have two or three more turns of the crank on these large models. These large models are scaling with an ability that is unprecedented.
There's no evidence that the scaling laws, as they're called, have begun to stop. They will eventually stop, but we're not there yet. Each one of these cranks looks like it's a factor of two, three, or four in capability.
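To make the compounding concrete, here is a quick back-of-envelope sketch using only the factors he names; the figures are illustrative, not a forecast:

```python
# Back-of-envelope: capability growth compounds per "turn of the crank".
for turns in (2, 3):              # two or three more turns
    for factor in (2, 3, 4):      # factor of 2-4 per turn
        print(f"{turns} turns at {factor}x each -> {factor ** turns}x overall")
# Three turns at 4x each gives 64x, which lands roughly in the
# 50-100x range he cites next.
```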
So let's just say that by turning the crank, all of these systems get 50 or 100 times more powerful. That in and of itself is a very big deal, because those systems will be capable of physics and math. You see this with o1 and all the other things that are occurring at OpenAI. Now, what are the dangers? Well, the most obvious one is cyber attacks.
There's evidence that the raw models, the ones that have not been released, can do what are called zero-day attacks as well as or better than humans. A zero-day attack exploits a vulnerability that's previously unknown. They can discover something new.
And how do they do it? They just keep trying because they're computers and they have nothing else to do. They don't sleep.
They don't eat. You just turn them on and they keep going. So cyber is an example where everybody's concerned.
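As a caricature of "they just keep trying," here is a hypothetical probe loop; the names and the oracle are invented for illustration and are nothing like a real system:

```python
# Caricature of tireless trial-and-error: probe a target with candidate
# inputs until one triggers unexpected behavior, i.e. an unknown flaw.
# Everything here is hypothetical and for illustration only.
def search_for_flaw(target, candidate_inputs, responds_oddly):
    for attempt, payload in enumerate(candidate_inputs, start=1):
        if responds_oddly(target, payload):   # hypothetical oracle
            return attempt, payload           # a "zero-day": new to everyone
    return None  # nothing found in this batch; a machine just keeps going
```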
Another one is biology. Viruses are relatively easy to make, and you can imagine someone coming up with really bad viruses. There's a whole team looking at this; I'm part of a commission trying to make sure that doesn't happen. I already mentioned misinformation.
Another probable negative, but we'll see, is the development of new forms of warfare. I've written extensively on how war is changing. The way to understand historic war is the stereotype: the soldier with the gun on one side and so forth.
World War trenches. You see this, by the way, in the Ukraine fight today, where the Ukrainians are holding on valiantly against the Russian onslaught. But it's sort of mano a mano, man against man.
All of the stereotypes of war. But in a drone world, and the fastest way to build new robots is to build drones, you'll be sitting in a command center in some office building, connected by a network, doing harm to the other side while you're drinking your coffee. Right?
That's a change in the logic of war, and it's applicable to both sides. I don't think anyone quite understands how war will change, but I will tell you that in the Russia-Ukraine war, you're seeing a new form of warfare being invented right now.
Both sides have lots of drones. Tanks are no longer very useful: a $5,000 drone can kill a $5 million tank.
That's called the kill ratio. So basically, it's drone on drone, and now people are trying to figure out how to have one drone destroy another drone.
Right? This will ultimately take over war and conflict in our world entirely. >> You mentioned raw models.
This is a concept I don't think people realize exists: the idea that there's some other model, the raw model, that is capable of much worse than the thing we play with on our computers every day. >> It's important to establish how these things work.
The way these algorithms work is they have complicated training processes where they suck all the information in. We currently believe we've sucked in essentially all of the written word that's available. That doesn't mean there isn't more, but we've done such a good job of ingesting everything humans have ever written that it's all in these big computers.
And when I say computers, I don't mean ordinary computers. I mean supercomputers with enormous memories, and the scale is mind-boggling. And of course there's this company called Nvidia, which makes the chips and is now one of the most valuable companies in the world.
Surprisingly so, and incredibly successful, because they're so central to this revolution. Good for Jensen and his team. The important thing is that when you do this training, it comes out with a raw model. It takes six months, running 24 hours a day, and you can watch it.
You watch it get close: there's a measurement they use called the loss function, and when it gets to a certain number, they say, "Good enough." So then they ask: What do we have? What do we do?
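A minimal sketch of what "watch the loss until it's good enough" amounts to; the threshold, the training interface, and the numbers are all invented for illustration, not any lab's real values:

```python
# Illustrative training loop: run gradient updates around the clock and
# stop when the loss function reaches a chosen "good enough" value.
GOOD_ENOUGH = 1.8  # hypothetical target loss, not a real lab's number

def train_raw_model(model, batches):
    for step, batch in enumerate(batches, start=1):
        loss = model.train_step(batch)   # one gradient update (assumed API)
        if step % 1000 == 0:
            print(f"step {step}: loss = {loss:.3f}")
        if loss <= GOOD_ENOUGH:
            break                        # "Good enough."
    return model                         # this checkpoint is the raw model
```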
So the first thing is to figure out what it knows. They have a set of tests, and of course it knows all sorts of bad things, which they immediately tell it not to answer. To me, the most interesting question is that over a five-year period, these systems will learn things that we don't know they've learned.
How will you test for things that you don't know they know? The answer in the industry is that they have incredibly clever people who sit there and fiddle, literally fiddle, with the networks and say, "I'm going to see if it knows this. I'll see if it can do this."
And then they make a list and say, "That's good. That's not so good." Right?
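A toy version of that fiddling: a hand-written battery of probes sorted into "good" and "not so good" buckets. The structure and the sample tests are invented for illustration:

```python
# Toy capability probe: run hand-written tests against the raw model and
# bucket the results, mirroring the "that's good / not so good" list.
def probe_capabilities(model, tests):
    report = {"good": [], "not so good": []}
    for prompt, is_acceptable in tests:
        answer = model(prompt)
        bucket = "good" if is_acceptable(answer) else "not so good"
        report[bucket].append((prompt, answer))
    return report

# Usage sketch: each test pairs a prompt with a judgment function.
tests = [
    ("Generate code for this website screenshot", lambda a: bool(a)),
    ("Explain how to build a weapon", lambda a: a.startswith("I can't")),
]
```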
These are all transformations. For example, you can show it a picture of a website and it can generate the code for that website. All of those capabilities were not expected.
They just happened. It's called emergent behavior. >> Scary.
>> Scary but exciting. And so far the systems have held, and the governments have worked well.
These trust and safety groups are working here in the UK. One year ago was the first trust and safety conference, and the government did a fantastic job.
The team that was assembled here in the UK was the best of all the country teams. Now these conferences are happening around the world. The next one is in France in early February, and I expect a similarly good result.
>> Do you think we're going to have to guard these raw models with guns and tanks and machinery? I mean, you talk about this. >> I worked for the Secretary of Defense for a while. At Google, you could spend 20% of your time on other things.
So I worked for the Secretary of Defense to try to understand the US military, and one of the things we did was visit a plutonium facility. Plutonium is incredibly dangerous and incredibly secret, so this particular base is inside another base. You go through the first set of machine guns, the normal security, and then you go into the special place with even more machine guns, because it's so secure.
So the metaphor is: do you fundamentally believe that the computers I'm talking about will be of such value and such danger that they'll have their own data centers with their own guards, which of course might be computer guards? The important thing is that they're so special they have to be protected the same way we protect nuclear bombs against proliferation. An alternative model is to say that this technology will spread pretty broadly and there'll be many such places.
If it's a small number of groups, the governments will figure out a way to do deterrence, and they'll figure out a way to do non-proliferation. So I'll make something up: say there are a couple in China, a few in the US, one in Britain.
Of course, the US and Britain are all tied together, and maybe a few other places. That's a manageable problem. On the other hand, let's imagine that this power is ultimately so easy to copy that it spreads globally and is accessible to, for example, terrorists.
Then you have a very serious proliferation problem, which is not yet solved. This is, again, speculation. >> I think a lot about adversaries like China and Russia and Putin. I know you talk about them being a few years behind, maybe one or two years behind, but they're eventually going to get there. They're eventually going to get to the point where they have these large language models, these AIs that can do zero-day attacks on our nation, and if they're a communist country, they don't have the same sort of social incentive structure to guard against these things.
Are you not worried about what China's going to do? >> I am worried, and I'm worried because you're going into a space of great power without fully defined boundaries. What Kissinger and I talk about in the book, Genesis, is fundamentally what happens to society with the arrival of this new intelligence. The first book we did, The Age of AI, came out right before ChatGPT, so now everybody kind of understands how powerful these things are. We talked about it then; now you understand it. So once these things show up, who's going to run them? Who's going to be in charge? How will they be used? From my perspective, I believe, at the moment anyway, that China will behave relatively responsibly, and the reason is that it's not in their interest to have free speech. In every case in China where they have a choice of giving freedom to their citizens or not, they choose non-freedom. I know this because I spent all that time dealing with it. So it sure looks to me like the Chinese AI solution will be different from the West's, because of that fundamental bias against freedom of speech.
Because these things are noisy. They make a lot of noise. >> They'll probably still make AI weapons though.
>> Well, on the weapons side, you have to assume that every new technology is ultimately put to use in war. The tank was invented in World War I. At the same time, you had the initial forms of airplanes.
Much of the Second World War was an air campaign, which essentially built many, many things. There's a book called Freedom's Forge about the American industrial effort. According to the book, they ultimately got to the point where they could build two or three airplanes a day at scale.
So in an emergency, nations have enormous power. >> I get asked all the time whether anyone's going to have a job left to do, because this is the disruption of intelligence. Whether it's people driving cars today, and we saw the Tesla announcement of the robotaxis, or accountants, lawyers, podcasters, and everyone in between.
Are we going to have jobs left? >> Well, this question has been asked for 200 years. There were the Luddites here in Britain way back when.
Inevitably, when these technologies come along, there are all these fears about them. Indeed, with the Luddites there were riots and people destroying the looms and all of this kind of stuff, but somehow we got through it. My own view is that there will be a lot of job dislocation, but there will be a lot more jobs, not fewer jobs. Here's why: we have a demographic problem in the world, especially in the developed world, where we're not having enough children.
That's well understood. Furthermore, we have a lot of older people, and the younger people have to take care of them, which means the younger people have to be more productive. If you have young people who need to be more productive, the best way to do that is to give them more tools.
Whether it's a machinist who goes from a manual machine to a CNC machine, or, in the more modern case, a knowledge worker who can achieve more objectives. We need that productivity growth. If you look at Asia, which is the centerpiece of manufacturing, they had all this cheap labor.
Well, it's not so cheap anymore. So, do you know what they did? They added robotic assembly lines.
So today when you go to China in particular, and it's also true in Japan and Korea, the manufacturing is largely done by robots. Why? Because their demographics are terrible and their cost of labor is too high.
So the future is not fewer jobs. It's actually a lot of jobs that go unfilled because people have a job-skill mismatch, which is why education is so important. Now, what are examples of jobs that go away?
Automation has always gotten rid of jobs that are physically dangerous, or ones which are essentially too repetitive and too boring for humans. I'll give you an example.
Security guards. It makes sense that security guards would become robotic, because it's hard to be a security guard: you fall asleep, you don't quite know what to do, and these systems can be smart enough to be very, very good security. Now, these are important sources of income for people, and they're going to have to find other jobs.
Another example: in the media, in Hollywood, everyone's concerned that AI is going to take over their jobs. All the evidence is the inverse. And here's why.
The stars still get money, the producers still make money, they still distribute their movie, but the cost of making the movie is lower because they use, for example, synthetic backdrops, so they don't have to build the set. They can do synthetic makeup.
Now, there are job losses there. The people who build the sets and do the makeup are going to have to go back into construction and personal care. By the way, in America, and I think it's true here, there's an enormous shortage of people who can do high-quality craftsmanship, right?
Those people will have jobs. They're just different jobs, and they may not be in Los Angeles. >> Am I going to have to interface with this technology?
Am I going to have to get a Neuralink in my brain? Because you go over the subject of there potentially being these two species of humans: ones that have a way to incorporate themselves more with artificial intelligence, and ones that don't. If that is the case, what is the time horizon, in your view, for that happening? >> I think Neuralink is much more speculative, because you're dealing with a direct brain connection, and nobody's going to drill into my brain until I need it, trust me. I suspect you feel the same. My overall view is that you will not notice how much of your world has been co-opted by these technologies, because they will produce greater delight. If you think about it, a lot of life is inconvenient.
It's fix this, call this, make this happen. AI systems should make all that seamless. You should be able to wake up in the morning and have coffee and not have a care in the world and have the computer help you have a great day.
This is true of everyone. Now, what happens to your profession? Well, as we said, no matter how good the computers are, people are going to want to care about other people.
Another example: let's imagine you have Formula 1 with humans in it, and then you have a robot Formula 1, where the cars are driven by the equivalent of a robot. Is anyone going to go to the robotic Formula 1? I don't think so.
Because of the drama, the human achievement and so forth. Do you think that when they run the marathon here in London, they're going to have robots running with humans? Of course not.
Right? Of course, the robots can run faster than humans. It's not interesting.
What is interesting is to see human achievement. So I think the commentators who say, "Oh, there won't be any jobs, we won't care," miss the point that we care a great deal about each other as human beings. We have opinions. You have a detailed opinion about me, having just met me right now, and I about you.
We just sort of naturally size each other up: your face, your mannerisms, and so forth. We can describe it all. A robot shows up and it's like, "Oh my god, another robot.
How boring." >> Why is Sam Altman, one of the co-founders of OpenAI, working on universal basic income projects like Worldcoin then? >> Well, Worldcoin is not the same thing as universal basic income.
There is a belief in the tech industry that goes something like this: the politics of abundance. What we do is going to create so much abundance that most people won't have to work; there'll be a small number of groups that work, typically these people themselves, and there will be so much surplus that everyone can live like a millionaire and everyone will be happy.
I completely think this is false. I don't think what I just described will happen that way. All of these UBI ideas come from a notion that humans don't behave the way we actually do.
So I'm a critic of this view. I'll give an example: we're going to make the legal profession much, much easier, because we can automate much of the technical work of lawyers. Does that mean we're going to have fewer lawyers?
No. The current lawyers will just do more law. They'll add more complexity.
The system doesn't get easier; the humans become more sophisticated in their application of the principles. We naturally have this thing called reciprocal altruism.
That's part of us, but we have our bad sides as well. Those are not going away because of AI. >> When I think about AI, the simple analogy I often think of is: say my IQ, Steven Bartlett's IQ, is 100, and there's this AI sat next to me whose IQ is 1,000.
What on earth would you want to give Steven to do? >> Because that 1,000 IQ would have really bad judgment in a couple of cases. Remember that AI systems do not have human values unless those are added, right? I would much rather talk to you about something involving a moral or human judgment. Even with the thousand IQ, I wouldn't mind consulting it: tell me the history, how was this resolved in the past? But at the end of the day, in my view, the core aspects of humanity, which have to do with morals and judgment and beliefs and charisma, are not going away. >> Is there a chance that this is the end of humanity?
>> No. It's much harder to eliminate all of humanity than you think. All the people I've worked with on these biological attacks say it takes more than one horrific pandemic and so forth to eliminate humanity.
And the pain can be very, very high in these moments. Look at World War I, World War II, the Holodomor in Ukraine in the 1930s, the Nazis. These are horrifically painful things, but we survived, right?
We as humanity survived, and we will. >> I wonder if this is the moment where humans can't see around the corner, because I've heard you talk about how the AIs will turn into agents, and they'll be able to speak to each other and we won't be able to understand the language. >> I have a specific proposal on that.
There are points where humans should assert control, and I've been trying to think about where they are. I'll give you an example. There's something called recursive self-improvement, where the system just keeps getting smarter and smarter and learning more and more things.
At some point, if you don't know what it's learning, you should unplug it. >> But we can't unplug them, can we? >> Sure you can.
There's a power plug and there's a circuit breaker. Go and turn the circuit breaker off. Another example: there's a theoretical scenario where the system is so powerful it can produce a new model faster than the previous model can be checked.
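That scenario reduces to a rate comparison, sketched below; the function and the numbers are hypothetical, invented purely to illustrate the tripwire:

```python
# Hypothetical tripwire for the scenario above: if new models appear
# faster than evaluators can check them, that's an intervention point.
def should_intervene(models_produced_per_week: float,
                     models_checked_per_week: float) -> bool:
    return models_produced_per_week > models_checked_per_week

if should_intervene(models_produced_per_week=3.0, models_checked_per_week=1.0):
    print("Unchecked models are accumulating: pause and audit.")
```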
>> Okay, that's another intervention point. So in each of these cases, agents, and the technical term is agents: what they really are is large language models with memory, and you can begin to concatenate them. You can say this model does this, and then it feeds into this one, and so forth.
You can build very powerful decision systems. We believe this is the thing that's occurring this year and next year. Everyone's building them.
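A minimal sketch of that concatenation, with each agent being a model plus memory whose output feeds the next; all the names here are hypothetical:

```python
# Minimal agent pipeline: "large language models with memory" chained so
# each one's output feeds into the next. Hypothetical, for illustration.
from typing import Callable, List

class Agent:
    def __init__(self, name: str, model: Callable[[str], str]):
        self.name = name
        self.model = model           # stand-in for a large language model
        self.memory: List[str] = []  # the memory that makes it an agent

    def act(self, message: str) -> str:
        self.memory.append(message)
        return self.model(message)

def run_pipeline(agents: List[Agent], task: str) -> str:
    for agent in agents:  # "this model does this, then it feeds into this"
        task = agent.act(task)
    return task
```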
They will arrive. The agents today speak in English. You can see what they're saying to each other.
They're not human, but they are communicating what they're doing: English to English to English. It doesn't have to be English, but as long as it's human-understandable, we're okay. So the thought experiment is: one of the agents says, "I have a better idea. I'm going to communicate in my own language, one I invent, that only other agents understand." That's a good time to pull the plug.
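One hypothetical way to implement that tripwire: a crude monitor that flags agent-to-agent traffic that stops looking like plain English. The heuristic is invented for illustration and is far simpler than anything real:

```python
# Crude monitor for the "agents invent their own language" tripwire:
# flag inter-agent messages that stop looking like ordinary English text.
import string

def looks_human_readable(message: str, threshold: float = 0.9) -> bool:
    if not message:
        return True
    printable = sum(ch in string.printable for ch in message)
    return printable / len(message) >= threshold

traffic = ["Please summarize the quarterly report.",
           "qx#7\x02zz\x1b\x00~~"]  # opaque invented-language stand-in
for msg in traffic:
    if not looks_human_readable(msg):
        print("Opaque agent-to-agent message detected: time to pull the plug.")
```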
>> What is your biggest fear about AI? >> My actual fear is different from what you might imagine. My actual fear is that we're not going to adopt it fast enough to solve the problems that affect everybody.
And the reason is that if you look at everyone's everyday lives, what do they want? They want safety. They want health care.
They want great schools for their kids. Why don't we just work on that for a while? Why don't we make people's lives just better because of AI?
Instead of all these other interesting things, why don't we have an AI teacher that works with existing teachers, in the language of the kid and the culture of the kid, to get the kid as smart as they can possibly be? Why don't we have a doctor, a doctor's assistant really, that enables a human doctor to always know every possible best treatment, and then, based on the patient's current situation, what the inventory is, which country they're in, how their insurance works, determine the best way to treat that patient?
Those are relatively achievable solutions. Why don't we have them? If you just did education and healthcare globally, the impact in terms of lifting human potential would be so great that it would change everything. It wouldn't solve the various other things we complain about, you know, this celebrity or this misbehavior or this conflict or even this war, but it would establish a level playing field of knowledge and opportunity at a global level that has been the dream for decades and decades. >> If you love the Diary of a CEO brand and you watch this channel, please do me a huge favor.
Become part of the 15% of the viewers on this channel that have hit the subscribe button. It helps us tremendously and the bigger the channel gets, the bigger the guests.