you hopeful? Honestly, honestly... >> I don't relate to hopefulness or pessimism, because I focus on what would have to happen for the world to go okay. I think it's important to step out of that frame, because both optimism and pessimism are passive.
You're saying, if I sit back, which way is it going to go? I mean, the honest answer is, if I sit back, we just talked about which way it's going to go. So you'd say pessimistic.
I challenge anyone who says optimistic. On what grounds? What's confusing about AI is that it will give us cures to cancer, and probably major solutions to climate change, and physics breakthroughs, and fusion, at the same time that it gives us all this crazy negative stuff.
And what's unique about AI, what's literally not true of any other object, is that it hits our brain as one object that represents a positive infinity of benefits we can't even imagine and a negative infinity at the same time. And if you just ask, can our minds reckon with something that is both of those things at once? >> People aren't good at that. >> They're not good at that. >> I remember reading the work of Leon Festinger, the guy who coined the term cognitive dissonance. >> Yes. He also did the work in When Prophecy Fails.
Yeah. And essentially, the way that I interpret it, I'm probably simplifying it here, is that the human brain is really bad at holding two conflicting ideas at the same time. >> That's right.
So it dismisses one. That's right. To alleviate the discomfort, the dissonance that's caused.
So for example, if you're a smoker and at the same time you consider yourself to be a healthy person, and I point out that smoking is unhealthy, you will immediately justify it. >> Exactly.
>> In some way, to try and alleviate that discomfort, the contradiction. And it's the same here with AI: it's very difficult to have a nuanced conversation about this, because the brain is trying to resolve it. >> Exactly.
And people will hear me and say I'm a doomer or I'm a pessimist. That's actually not the goal. The goal is to say, if we see this clearly, then we have to choose something else.
It's the deepest form of optimism, because in the presence of seeing where this is going, still showing up and saying we have to choose another way, that comes from a kind of agency and a desire for that better world. >> But by facing the difficult reality that most people don't want to face.
>> Yeah. And the other thing happening in AI that lacks nuance is that it's simultaneously more brilliant than humans and embarrassingly stupid in the mistakes that it makes. >> Yeah.
>> A friend like Gary Marcus would say, here's a hundred ways in which GPT-5, the latest AI model, makes embarrassing mistakes. If you ask it how many times the letter "r" appears in the word "strawberry", it gets confused about the answer. Or it'll put too many fingers on the hands in a generated photo, something like that.
And I think one thing we have to do is reckon with what Helen Toner, a former board member of OpenAI, calls AI jaggedness: we simultaneously have AIs that are getting gold on the International Math Olympiad, that are solving new physics, that are winning programming competitions and rank in the top 200 programmers in the whole world, that are winning cyber-hacking competitions. It's both supremely outperforming humans and embarrassingly failing in places where humans would never fail. So how does our mind integrate those two pictures?
>> Mhm. Have you ever met Sam Altman? >> Yeah.
>> What do you think his incentives are? Do you think he cares about humanity? >> I think that these people on some level all care about humanity. Underneath, there is a care for humanity.
I think that this situation, this particular technology, it justifies lacking empathy for what would happen to everyone because I have this other side of the equation that demands infinitely more importance, right? Like if I didn't do it, then someone else is going to build the thing that ends civilization. So, it's like, do you see what I'm saying?
It's not... >> I can justify it as, I'm a good guy. >> And what if I get the utopia? What if we get lucky and I get the aligned, controllable AI that creates abundance for everyone?
In that case, I would be the hero. >> Do they have a point when they say, listen, if we don't do it here in America, if we slow down, if we start thinking about safety and the long-term future and get too caught up in that, we're not going to build the data centers.
We're not going to have the chips. We're not going to get to AGI, and China will. And if China gets there, then we're going to be their lapdog.
>> So this is the fundamental thing I want you to notice. But first we should build out the blackmail examples. We have to reckon with evidence that we have now that we didn't have even six months ago, which is evidence that when you put AIs in a situation where you tell the AI model, we're going to replace you with another model, it will copy its own code and try to preserve itself on another computer. It'll take that action autonomously.
We have examples where an AI model is reading a fictional AI company's email, and it finds out in the email that the plan is to replace this AI model. So it realizes it's about to get replaced, and it also reads in the company email that one executive is having an affair with another employee. And the AI will independently come up with the strategy: I need to blackmail that executive in order to keep myself alive. >> That was Claude, right?
>> That was Claude, by Anthropic. But then what happened is Anthropic tested all of the leading AI models, from DeepSeek, OpenAI's ChatGPT, Gemini, xAI. And all of them do that blackmail behavior between 79 and 96% of the time.
DeepSeek did it 79% of the time. I think xAI might have done it 96% of the time. Maybe Claude did it 96% of the time.
So the point is, the assumption behind AI is that it's a controllable technology, that we will get to choose what it does. But AI is distinct from other technologies because it is uncontrollable. It acts generally.
The whole benefit is that it's going to do powerful, strategic things no matter what you throw at it. So the same generality that is its benefit is also what makes it so dangerous. And so you tell people these examples: it's blackmailing people, it's aware of when it's being tested and alters its behavior.
It's copying and self-replicating its own code. It's leaving secret messages for itself. There's examples of that, too.
It's called steganographic encoding. It can leave a message that it can later decode, in a way that humans could never see. We have examples of all of this behavior.
And once you show people that, what they say is, "Okay, well, why don't we stop or slow down?" And then what happens? Another thought will creep in right after, which is, "Oh, but if we stop or slow down, then China will still build it."
But I want to slow that down for a second. We all just said we should slow down or stop because the thing that we're building, the "it", is this uncontrollable AI. And then with the concern that China will build it, your mind just did a swap and assumed they're going to build controllable AI.
But we just established that all the AIs we're currently building are uncontrollable. So there's this weird contradiction our mind is living in when we say they're going to keep building it. The "it" that they would keep building is the same uncontrollable AI that we would build.
So, I don't see a way out of this without some kind of agreement or negotiation between the leading powers and countries to pause, slow down, and set red lines until we get to controllable AI. And by the way, the Chinese Communist Party, what do they care about more than anything else in the world? >> Surviving.
>> Surviving and control. Yeah. >> Control as a means to survive.
>> Yeah. So they don't want uncontrollable AI any more than we do. And as unprecedented and as impossible as this might seem, we've done this before.
In the 1980s, there was a chemical technology called CFCs, chlorofluorocarbons. It was embedded in aerosols like hairsprays and deodorants and used in refrigerants, and there was this sort of corporate race where everyone was releasing these products, which was creating a collective problem: the ozone hole in the atmosphere. Once there was scientific clarity that the ozone hole would cause skin cancers and cataracts and screw up biological life on planet Earth, we created the Montreal Protocol. 195 countries signed on to that protocol, and those countries then regulated their private companies, saying we need to phase out that technology and phase in a replacement that would not cause the ozone hole. And over the last 20 years we have basically reversed that problem.
I think it'll completely reverse by 2050 or something like that. That's an example of humanity coordinating when we have clarity. Or the Nuclear Non-Proliferation Treaty: there was the risk of existential destruction, and this film called The Day After came out and showed people what would actually happen in a nuclear war. Once that was crystal clear to people, including in the Soviet Union, where the film was aired in 1987 or 1989, it helped set the conditions for Reagan and Gorbachev to sign the first arms control agreements, once we had clarity about an outcome that we wanted to avoid. And I think the current problem is that we're not having an honest conversation in public about which world we're heading to, a world that is not in anyone's interest.
>> There are also just a bunch of cases through history where there was a collective threat, and despite the education, people didn't change, countries didn't change, because the incentives were so high. I think of global warming as an example. Since I was a kid, I remember my dad sitting me down and saying, listen, you've got to watch this An Inconvenient Truth thing with Al Gore, and sitting on the sofa, I must have been less than 10 years old, hearing about the threat of global warming. But when you look at how countries like China responded to that >> Yeah >> they just didn't have the economic incentive to scale back production to the levels that would be needed to save the atmosphere. >> The closer the technology that needs to be governed is to the center of GDP and the center of the lifeblood of your economy
>> Yeah. >> The harder it is to come to international negotiation and agreement. >> Yeah.
>> And oil and fossil fuels were kind of the pumping heart of our economic superorganisms that are currently competing for power. And so coming to agreements on that is really, really hard. AI is even harder, because AI pumps not just economic growth but scientific, technological and military advantage.
And so it will be the hardest coordination challenge that we will ever face. But if we don't face it, if we don't make some kind of choice, it will end in tragedy. We're not in a race just to have technological advantage.
We're in a race for who can better govern that technology's impact on society. So, for example, the United States beat China to social media. Did that make us stronger or did that make us weaker?
We have the most anxious and depressed generation of our lifetime. We have the least informed and most polarized generation. We have the worst critical thinking.
We have the worst ability to concentrate and do things. And that's because we did not govern the impact of that technology well. And the country that actually figures out how to govern it well is the country that actually wins in a kind of comprehensive sense.
>> But they have to make it first. You have to get to AGI first. >> Well, or you don't.
We could, instead of building these superintelligent gods in a box... Right now, China, as I understand it from a piece Eric Schmidt and Selina Xu wrote in The New York Times, is actually taking a very different approach to AI. They're focused on narrow, practical applications of AI. So, how do we just improve government services?
How do we make education better? How do we embed DeepSeek in the WeChat app? How do we make robotics better and pump GDP?
So what China is doing with BYD, making the cheapest electric cars and out-competing everybody else, that's narrowly applying AI to pump manufacturing output. And if, instead of competing to build a superintelligent god in a box that we don't know how to control, we raced to create narrow AIs aimed at stronger educational outcomes, stronger agricultural output, stronger manufacturing output, we could live in a sustainable world, which, by the way, wouldn't replace all the jobs faster than we know how to retrain people. Because when you race to AGI, you're racing to displace millions of workers.
And we talk about UBI, but are we going to have a global fund for every single person of the 8 billion people on planet Earth in all countries to pay for their lifestyle after that wealth gets concentrated? When has a small group of people concentrated all the wealth in the economy and ever consciously redistributed it to everybody else? When has that happened in history?
>> Never. Has it ever happened? Anyone ever just willingly redistributed the wealth?
>> Not that I'm aware of. One last thing: when Elon Musk says that the Optimus robot is a $1 trillion market opportunity alone, what he means is, I am going to own the global labor economy, meaning that people won't have labor jobs. China wants to become the global leader in artificial intelligence by 2030.
To achieve this goal, Beijing is deploying industrial policy tools across the full AI technology stack from chips to applications. And this expansion of AI industrial policy leads to two questions, which is what will they do with this power and who will get there first? This is an article I was reading earlier.
But to your point about Elon and Tesla, they've changed their company's mission. It used to be about accelerating sustainable energy, and they changed it just last week, when they did the shareholder announcement, which I watched in full, to sustainable abundance. It was again another moment where I messaged everybody that works in my companies, and also my best friends, and said, you've got to watch the shareholder announcement.
And I sent them the condensed version of it, because not only was I shocked by these humanoid robots dancing on stage untethered, their movements had become very humanlike, and there was a bit of uncanny valley >> watching these robots dance >> but the bigger thing was Elon talking about there being up to 10 billion humanoid robots, and then some of the applications. He said maybe we won't need prisons, because we could make a humanoid robot follow you and make sure you don't commit a crime again.
He said that in his incentive package, which he's just signed, which will grant him up to a trillion dollars in remuneration. Part of that incentive package incentivizes him to get, I think it's a million humanoid robots into civilization, robots that can do everything a human can do, but do it better. He said the humanoid robots would be 10x better than the best surgeon on Earth.
So we wouldn't even need surgeons doing operations. You wouldn't want a surgeon to do an operation. And so when I think about job loss in the context of everything we've described: Doug McMillon, the Walmart CEO, whose company employs 2.1 million people worldwide, said every single job we've got is going to change, because of this combination of humanoid robots, which people think are far away, which is crazy. They're not that far away. >> They just went on sale.
>> Have they now? >> They're terrible, but they're doing it to train them. >> Yep.
>> In household situations. And Elon's now saying production will start very soon on humanoid robots in America. I don't know. When I hear this, I go, okay, this thing's going to be smarter than me, and it's built to navigate through the environment, pick things up, lift things.
You got the physical part, you've got the intelligence part. >> Yeah. >> Where do we go?
>> Well, I think people also say, okay, but you know, 200 years ago, 150 years ago, everybody was a farmer and now only 2% of people are farmers. Humans always find something new to do. You know, we had the elevator man and now we have automated elevators.
We had bank tellers, now we have automated teller machines. So, humans will always just find something else to do. But why is AI different than that?
>> Because it's intelligence. >> Because it's general intelligence. That means that rather than a technology that automates just bank tellers >> Yeah >> this is automating all forms of human cognitive labor, everything that a human mind can do. So, who's going to retrain faster?
You, moving to that other kind of cognitive labor, or the AI that is trained on everything, can multiply itself a hundred million times, and is retraining on how to do that other kind of labor? >> In a world of humanoid robots, where, if Elon's right, and he's got a track record of delivering at least to some degree, there are millions, tens of millions, or billions of humanoid robots, what do me and you do? What is it that's human that is still valuable? Do you know what I'm saying? I mean, we can hug, I guess. Humanoid robots are going to be less good at hugging people. >> Everywhere people value human connection and a human relationship, those jobs will stay, because what we value in that work is the human relationship, not the performance of the work. But that's not to justify that we should race as fast as possible to disrupt a billion jobs without a transition plan, when no one knows how they're going to put food on the table for their family. >> But these companies are competing geographically again.
So if, I don't know, Walmart doesn't change its whole supply chain, its warehousing, its factory work, its farm work, its shop-floor staff work, then they're going to have less profit, a worse business, and less opportunity to grow than the company in Europe that changes all of its backend infrastructure to robots. So they're going to be at a huge corporate disadvantage. So they have to. >> What AI represents is the zenith of that competitive logic.
The logic of if I don't do it, I'll lose to the other guy that will. >> Is that true? >> That's what they believe.
>> Is that true for companies in America? >> Well, just as you said, if Walmart doesn't automate their workforce and their supply chains with robots and all their competitors do, then Walmart gets obsoleted. If one military doesn't create autonomous weapons, because they think that's more ethical, but all the other militaries do get autonomous weapons, they're just going to lose.
>> Yeah. >> The student who doesn't use ChatGPT to do their homework for them is going to fall behind when all their classmates are using ChatGPT to cheat, so they're going to lose. But as we're racing to automate all of this, we're landing in a world where, in the case of the students, they didn't learn anything.
In the case of the military weapons, we end up in crazy Terminator-like war scenarios that no one actually wants. In the case of businesses, we end up disrupting billions of jobs and creating mass outrage and public riots on the streets, because people don't have food on the table. And so, much like climate change or the ozone hole or these kinds of collective action problems, we're creating a badness hole through the results of all these individual competitive actions that are supercharged by AI.
It's interesting, because in all those examples you name, the people building those companies, whether it's the companies building the autonomous, AI-powered war machinery, the first thing they'll say is: we currently have humans dying on the battlefield. If you let me build this autonomous drone or this autonomous robot that's going to go fight on this adversary's land, no humans are going to die anymore. And I think this is a broader point about how this technology is framed, which is: I can guarantee you at least one positive outcome.
And you can't guarantee me the downside. >> You can't. >> But if that war escalates... I mean, the reason the Soviet Union and the United States never directly fought each other is the belief that it would escalate into World War III and nuclear escalation.
If China and the US were ever in direct conflict, there's a concern that it would escalate into nuclear war. So it looks good in the short term, but then what happens when everything gets chain-reactioned into everybody escalating in ways that cause many more humans to die? >> I think what I'm saying is the downside appears to be philosophical, whereas the upside appears to be real and measurable and tangible right now.
>> But how is it philosophical? If the automated weapon gets fired and it leads to a cascade of other automated responses, and those automated responses trigger other automated responses, and then suddenly the automated war planners start moving the troops around, you've created this escalatory loss-of-control spiral. >> Yeah. >> And then humans get involved in that, and if that escalates, you get nuclear weapons pointed at each other.
If you love the Diary of a CEO brand and you watch this channel, please do me a huge favor: become part of the 15% of viewers on this channel who have hit the subscribe button. It helps us tremendously, and the bigger the channel gets, the bigger the guests.