Sam Altman is out as CEO of OpenAI. A superstar CEO on one side, a disgruntled board on the other. 747 of 770 employees sent a scathing open letter to the board five days after he was unexpectedly fired. Sam Altman is back. Does this even count as a firing? This was brutal. I guess I'm not really supposed to talk about this right now. But this is What Now? with Trevor Noah. Hey, what's going on? Nice to meet you. How are you doing? Good. How are you? Absolute pleasure, man. Me too. Thanks for taking the time.
Thank you, at what I feel like is a crazy time, right? Feels like the craziest time I have lived through. Yeah. I mean, you're at the center of it all, so I wonder what that feels like, because I'm just an avid watcher of everything in this space and in this world, you know? And I feel like you're somebody who's been affected by it all. I mean, just right now we got the news: Sam Altman was on the shortlist for Time magazine Person of the Year. Not to get it. Thrilled not to get it, of course. Wait, why? I have had more attention this year than I would have liked to have in my entire life, and that is a big one. Oh, I'm happy for Taylor Swift. Okay. Okay. Oh, so you don't like the attention? You don't want the attention? No. It's been brutal. I mean, it's, like, fun in some ways, and it's, like, useful in some ways. But from, like, a personal-life, quality-of-life trade-off? Yeah. Definitely not. Yeah. But, you know, this is it.
Now, like, this is what I signed up for. Right. It's the infamy now. Yeah. Do people recognize you in the street? That's the kind of trade-off that's really bad. Yeah. You just feel like you never... I'm sure it happens to you, too, but I never get to be anonymous anymore. Yeah, but people don't ask me about the future. They don't ask if you're gonna destroy the world. Exactly. Exactly. There's a slight difference. People might want a selfie from me. That's the same. A lot of selfies. Well, congratulations. You are Time magazine's CEO of the Year. Yeah. That's probably one of the strangest moments, right? Because I guess Time magazine was making this decision a few weeks ago. You might not have been CEO of the Year. I don't know if they would have still been able to give you the award. I guess it was for your work before. I don't know how it works. How does it feel to be back as CEO? I'm still, like, recompiling reality, to be honest. Yeah. I mean, it feels great in some sense, because one
of the things I learned through this whole thing is how much I love the company and the mission and the people. Right. And, you know, I had a couple of experiences in that whole thing where I went through, like, the full range of human emotion, it felt like, in short periods of time. But a very clarifying moment for me was... so it all happened on a Friday at noon, and then the next morning, on Saturday morning, a couple of the board members called me and said, you know, would you like to talk about coming back? And I had really complicated feelings about that. But it was very clarifying at the end of it to be like, yes, I do. Like, I really love this place and what we're doing. And I think this is, like, important to the world, and, like, the thing I care most about. It feels like, in the world of tech, hiring and firing is something that everybody has to get used to. I mean, I know back in the day, you know, you
were at Y Combinator, right? And you were fired from that position. And everyone has a story. And what... I don't want it to be... Oh no, no, no. Don't tell me... these are the things, you know, you get in the research, and then you go from there. Oh, I mean, I had, like, decided, like, a year earlier that I wanted to just come to OpenAI, right? It was, like, a complicated transition to get here, but, like, I had been working on both OpenAI and YC, and, like, very much decided that I wanted to go do OpenAI. Okay. And I've never regretted that one. All right. So then you've never been fired. And this is a tough place to be in as a person. Does it even count as a firing? Like, if you... And you know what I was going to say? It's not like... this was brutal. I guess I'm not really supposed to talk about this right now. It was... this was a very painful thing. Well, I think it felt to me personally, just
as a human, like, super unfair, the way it was handled. Yeah. Yeah. I can imagine. You know, a lot of people will talk about getting fired from their jobs. It became a trend, I guess, during Covid especially; people would talk about getting an email or a mass video that would go out, and then, you know, thousands of employees would be let go. You seldom think it would be possible for that to happen to the CEO of a company. And then, I think, even more so, you don't think of it happening to a CEO who many people have termed, like, the Steve Jobs of this generation and the future. You don't say that about yourself. Certainly not. No, I think a lot of people say that about you. You know, because, I mean, I was thinking about this, and I was going, I think calling you the Steve Jobs of this generation is unfair, in my opinion. I think you're the Prometheus of this generation. No, you really are. You really are. It seems to me like you have stolen fire
from the gods. And you are at the forefront of this movement and this time that we are now living through, where once it was only the stuff of sci-fi and legend. You know, you are now the face of the forefront of what could change, yeah, like, civilization. Do you think it'll change everything? I do. I mean, I could totally be wrong about what I'm about to say, but my sense is we will build something that almost everybody agrees is AGI. The definition is hard, but we will build something that people will look at and say, all right, you all did that. That's artificial general intelligence. Like, you know, a human-level or beyond-human-level system. Before you go into the details on that, what would you say is the biggest difference between what people think AI is and what artificial general intelligence is? We're getting close enough that the way people define it is important, and there are differences in it. So for some people, they mean a system that can, like, do some significant fraction of
current human work. Yeah, of course we'll find new jobs. We'll find new things to do. But for other people, they mean something more like a system that can help discover new scientific knowledge. Okay. And those are obviously very different milestones that have very different impacts on the world. But the reason I don't like the term anymore, even though I'm so stuck with it, I can't stop myself from... You don't like which term? AGI. Okay. All that I think it really means to most people now is, like, really smart AI. But it's become super fuzzy in what it is, and I think largely just because we're getting closer. But the point I was going to try to make was: we're going to make AGI, whatever you want to call that. And then, at least in the short and medium term, it's going to change the world much less than people think. Much less than people think, in the short term. I think society has a lot of inertia. The economy has a lot of inertia. The way people live their lives has a lot of inertia. Yeah. And this
is probably healthy. This is probably good for us, to manage this transition. But we all kind of do things in certain ways and we're used to it, and society, as a superorganism, does things in a certain way and is kind of used to it. So watching what happened with GPT-4, as an example, I think was instructive. People had this, like, real freak-out moment when we first launched it. Yeah. They said, wow, I didn't think this was going to happen. Here it is. And then they went on with their lives, and it definitely changed things. People definitely use it. It's a better technology to have in the world than not. And of course, you know, GPT-4 is not very good, and five, six, seven, whatever, are going to be way, way better. But four was the moment, in the ChatGPT interface, I think, where a lot of people went from not taking it seriously to taking it very seriously. And yet life goes on. Is that something you think is good for us as humanity and society, or is life supposed to just go on? I think... or,
as one of the fathers of this product, one of the parents of this idea, do you wish that we all stopped and took a moment to, I guess, take stock of where we are? I think the resilience of humans individually, and humanity as a whole, is fantastic. Okay. And I'm very happy that we have this ability to absorb and adapt to new technology, to changes, and have it become just, like, part of the world. It really is wonderful. I think Covid was a recent example where we watched this. Yeah. You know, like, the world kind of just adapted pretty quickly, and then it felt pretty normal pretty quickly. Another example, in a sort of non-serious way, but instructive, was when all of that UFO stuff came out. It was a couple of years ago now. Yeah. A lot of my friends would say things like, maybe those are real UFOs or real aliens or whatever. Yeah. And yet they just kind of, like, went to work and played with their kids the next day. Yeah. Because, I mean, what are you going to do? What are
you going to do? What are you going to do? If they're flying by, they're flying by. What are you going to do? So do I wish that we had taken more time to take stock? We are doing that as a world, and I think that's great. I'm a huge believer that iterative deployment of these technologies is really important. We don't want to go build AGI off in secret in a lab, where we know what's coming, and then drop it on the world all at once and have people have to say, like, here we are. I think we have to get used to it gradually and sort of grow with it. And so this conversation that society, that our leaders, our institutions are having now, where people actually use the technology, have a feel for it, what it does, what it can't do, where the risks are, where the benefits are... I think that's awesome. And I think, like, maybe in some sense the best thing we ever did for our mission, so far, was to adopt the strategy of iterative deployment. Like, we could have
built this in secret and then built it up for years longer, and then just dropped it, deployed it. And that would have been that. You know, it's interesting. Today we walked into the OpenAI building. It's just, like, a little bit of a fortress. And it feels like the home of the future. And I saw a post of yours... did you come in as a guest today? Not anymore. I'm back now. I did, okay, one day during the middle. All right, I saw you had a post where you came in as a guest, and I was like, damn, that's a weird one. It was like coming home, but then it's... Yeah, it is home. It should have been somehow a very strange moment to, like, put on a guest badge here. Yeah. But everyone was, like, so tired, so exhausted, on so much adrenaline. Yeah. It really did not feel momentous in the way that, I guess I could say, I had hoped it would. It should have been, like, a
funny, you know, moment to, like, reflect on and tell stories about. There were moments that day that were like that. Like, one of my proudest moments that day: I was very tired, very distracted, and, you know, we thought the board was going to put me back as CEO that day, but in case they didn't, I got interviewed for L3, which is, like, our lowest-level software engineering job, by one of our, like, best people. And, you know, he gave me a yes. That was, like, a very proud moment. Okay, okay. That's cool. You still got the skill. But the badge was not as poignant as I would have liked. Right? I'd love to know what you think you've done right as CEO to have the level of support that we've publicly seen from the people who work at OpenAI. You know, when the story broke... and I won't ask you for the details, because I know... No, no, because I know you can't comment about the internal investigation stuff, so I won't. Yeah. I mean, that stuff. But what I mean is, you know, I
know you can sort of speak about just the feelings, and what's been happening in the company as a whole. It's rare that we'll see a situation unfold the way it did with OpenAI. You know, you have this company and this idea that, one minute, doesn't exist for most people on the globe. The next minute, you release ChatGPT, this simple prompt, just a little chat box, that changes the world. I think you got to 1 million users in the fastest time of any tech product. Five days? Yeah, five days. And then it shoots to 100 million people. And very quickly... I know, on an anecdotal level, for me, it went from nobody in the world knew what this thing was, I was explaining it to people, trying to get them to understand it, I had to show them, like, poetry and simple things they would get. And then people are telling me about it, and now it just becomes this ubiquitous idea where people are trying to come to grips with what it is and what it means. But on the other side, you have this company that's trying to,
in some way, shape, or form, harness and shape the future. And the people are behind you. You know, we see the story: Sam Altman is out, no longer CEO. And then the swirling everything. I mean, I'm like you, I don't know if you saw some of the rumors. They were crazy. One of the craziest things I saw, and it was wild and funny: someone said, I have it from good sources that Sam was fired for trying to have sex with the AI. That's what someone said. I mean, I don't even know how I'm supposed to react to that. I saw that and I was like, I guess, given the moment, I should officially deny that, which did not happen. Yeah. And I don't think it could happen, because I don't think people understand the combination of the two things. But what got me was how the salaciousness of the event seemed to bring OpenAI into a different spotlight, a different moment. And one of the big things was the support you had from
your team, like, people coming out and saying, we're with Sam, no matter what happens. And that doesn't normally happen in companies. CEOs and their employees are generally, in some way, shape, or form, disconnected. Yeah. But it feels like this is more than just a team. What I'm about to say is not false modesty at all. There's plenty of places I'd willingly take a lot of credit. I think this one, though, is not about me, other than me as sort of a figurehead representation. But, like, I think one thing that we have done well here is a mission that people really believe in the importance of. And I think what happened there was, like, people realized that the mission and the organization and the team that we have all worked so hard on and made such progress toward, but have so much more to do... like, that was under real threat. And I think that was what got the reaction. It was really not about me personally, although, you know, hopefully people like me and think I do a good job. It was about the shared loyalty we all feel, and the sense of duty to completing the mission, and wanting to maximize our chances at the ability to do that. At the top level, what do you think the mission is? To get the benefits of AGI distributed as broadly as possible and successfully confront all of the safety challenges along the way. Okay. That's an interesting, you know, second line. I would love to chat to you about that later, you know, getting into the safety of it all. When you look at OpenAI as
an organization... the very genesis of OpenAI was really strange, you know, and you'll correct me at any point when I'm wrong. But, you know, it seems like it was started very much with safety in mind, where you brought this team of people together and you said, we want to start an organization, a company, a collective, that is trying to create the most ethical AI possible that'll benefit society. And you see that even in, I guess, the profits, the way the company defines how its investors could receive the profit, etc. But even that changed at some point in OpenAI. Do you think you can withstand the forces of capitalism? I mean, there's so much money in this. Do you think that you can truly maintain a world where money doesn't define what you're trying to do and why you're trying to do it? It has to be some factor. Like, just if you think about the costs of training these systems alone, we have to find some ways to play on the field of capitalism, for lack of a better phrase.
Okay. But I don't think it will ever be our primary motivator. And, by the way, I like capitalism. I think it has huge flaws, but relative to any other system the world has tried, I think it is still the best thing we've come up with. But that doesn't mean we shouldn't strive to do better. And I think we will find ways to spend the enormous, like, record-setting amounts of capital that we will need to be able to continue to advance the forefront of this technology. That was, like, one of our learnings early on. It's just... the stuff is way more expensive than we ever thought, right? Like, we kind of knew... we had this idea of scaling systems, but we just didn't know how far it was going to go. You've always been a big fan of scaling. That's, like, something I've read about you. Yeah. You know, even one of your mentors, and I think one of the people you invest with now, in fusion... Yeah. Power. They said, whenever you bring an issue to Sam, the first thing he thinks about is, how can we
fix this? How can we solve it? And the second thing he says immediately is, how do we scale this? I don't remember... I'm terrible with names. Interesting. But I know it was somebody you work with. Interestingly, you know... no, it is right. But I haven't heard someone say that about me before. Yeah, yeah. But it is... I think it's been sort of, like, one of my life observations, across many different facets of companies and also just fields, that scale often yields surprising results. So, like, scaling up these AI models led to very surprising results. Scaling up the fusion generator makes it much better, in all of these obvious but also some non-obvious ways. Scaling up companies has non-obvious benefits, right? Scaling up groups of companies, like Y Combinator, has non-obvious benefits. Right? And I think there's just something about this that is not taken seriously enough. And in our own case, you know, in the early days, we knew scale was going to be important. If we had been smarter or more courageous thinkers or whatever, we would have, like, swung bigger out of the gate.
But it's, like, really hard to say, I want to go build a $10 billion or bigger computer. So we didn't, right? And we learned it more slowly than we should have. But we did, and now we see how much scale we're going to need. And again, I think capitalism is... it's cool. I have nothing against it as a system. Well, no, that's not true. I have a lot of things against it as a system, but I have no pushback that it's better than anything else we have yet discovered. Have you asked ChatGPT if it could design a system? I have... maybe not to design a new system, but, you know, I've asked a lot of questions about, like, how AI and capitalism are going to intersect. Interesting. One of the things... so, we were right about the most important of our initial assumptions: that AI was going to happen, that it was possible. That, yes, deep learning... which a lot of people laughed at, by the way. Oh man, we got ruthlessly laughed at. Yeah. But even some
of our thoughts about how to get there... we were right about it, but we were wrong about a lot of the details, which of course happens with science. And that's fine. You know, we had a very different approach for how we thought we were going to build this before the language model stuff started to work. We also, I think, had a very different conception of what it was going to be like to build an AGI, and we didn't understand this idea that it was going to be, like, iterative tools that got better and better, that you kind of just talk to like a human. And so our thinking was very confused about, well, when you build an AGI, what happens? And we sort of thought about it as: there was this moment before AGI and then this moment of AGI, and, you know, then you need to, like, give that over to some other system and some other governance. I now think... and I'm really happy about this, because I think it's much easier to navigate... I think it can be, I want to say, like, just another
tool... because it's different in all these ways. But in some other sense, we have made a new tool for humanity. We've added something to the toolbox, right? People are going to use that to do all sorts of incredible things. But people remain the architects of the future, not one AGI in the sky. You can do things you couldn't do before. I can do things I couldn't do before. We'll be able to do a lot more. And in that sense, I can imagine a world where part of the way we fulfill our mission is we just make really great tools that massively impact human ability and everything else. And I'm pretty excited about that. Like, I love that we offer free ChatGPT with no ads, because I personally really think ads have been a problem for the internet. But we just, like, put this tool out there. That's the downside of capitalism, right? Yeah, yeah. One of them. I think there's much bigger ones, personally. But we put this tool out there, and people get to use it for free, and we're not, like, trying to turn them into the product. We're
not trying to make them use it more. All right. And I think that shows, like, an interesting path that we can do more on. So let's do this. You know, in our time together, in this conversation, there are so many things I would like us to get to. We won't be able to answer all the questions, obviously, but there are a few ideas, you know, a few headings and a few spaces I wanted us to live in. And I guess the first and most timely is: what happens now for the future of the company? Where do you see it going? You know, one of the things I found particularly interesting was how the new board was composed, you know, for OpenAI. Where previously you had women on the board, now you don't. Yeah. Where previously you had people who had no financial incentive on the board, now you do. And I wonder if you worry that that guardrail that you were part of implementing is now gone. You know, what do you do? You have a board that's now not focused on protecting people or, you know, defining a safer future, as opposed to making money and getting this thing to be as good or as big as it can be. Well, I think our previous governance structure and board didn't work in some important sense. So I'm all for figuring out how to improve that, and I'll support the board in that work to do that. Obviously the board needs to grow and diversify, and that'll be something that I think happens quickly. And voices of people who
are going to advocate for people who are traditionally not advocated for, and be really thoughtful about not only AI safety, but just the lessons we can take from the past about how to make these very complex systems that interact with society in all of these ways, right, as good as possible, which is both mitigating the bad and sharing the upside... that needs to be represented. So I'm excited to have a second chance at getting all these things right. We clearly got them wrong before. But yeah, like, diversifying the board, making sure that we represent all of the major classes of stakeholders that need to be represented here, figuring out how we make this a more democratic thing, continuing to push for governments to make some decisions governing this technology, which I know is imperfect, but I think better than any other method of doing this that we can think of so far, engaging with our user base more to, like, let them help set the limits on how this works... that's all super important. That'll be one major thing going forward: the board, expanding the board, and governance. And yeah, I know our current board is small, but I think they're so committed to all the things you were just talking about. Then there's another big class of problems. If you'd asked me a week ago, I would have said stabilizing the company was my top thing. But internally, at least, I feel pretty good. We did not lose a single customer. We did not lose a single employee. We continue to grow, and that's pretty amazing. We continue to ship new products. Our key partnerships feel strengthened, not hampered, by this. And things are
on pace there. And the research and product plan for the first half of next year, I think, feels better and more focused than ever. But clearly there's a lot of external stabilization we still have to do. And then, beyond that, we're really confronting the possibility that we just have not been planning ambitiously enough for success. You know, if you want to subscribe to ChatGPT Plus right now, you are not able to. We just ran out... too many people. Yeah, yeah. And so, given how good we think the future systems we create are going to be, and how much people seem to want to use these, we have been, like, behind our plan all year long, and we'd like to finally get caught up. I mean, I found myself constantly thinking about you as a person, you know, when the whole board saga was taking place. And whenever there's a storm, I'm always interested in what's happening in the eye of the storm. Yeah. You know, and I wondered,
like, where were you when this all broke? Like, what were you doing? What was going on in your world, on a personal level? A thing people say about me is that I'm, like, good at sitting in the eye of the hurricane while it turns around me, and staying super calm. And this time... turns out, like, not. But this was the experience of being in the eye of the storm and having it not be calm. I was in Las Vegas
at F1. Oh, okay. Yeah. You're an F1 fan? I am, yeah. All right. Who's your team? Do you have one? Honestly, I mean, Verstappen is so good, it's hard to say. Okay. But I feel like that's the answer everyone would say. No, no, no, actually, it depends. It depends on when they joined the sport. Like, I was a Schumacher fan, because that's what I watched. Well, I mean, Nigel Mansell, then, like, Ayrton Senna... you know what I mean? But yeah, okay. Well, now Verstappen... he's precise with it. I see why. And it almost gets boring, like, why watch it, when he wins so often? But it's incredible. So I was, like, so excited for that. I got in late on a Thursday. That first night, they forgot to weld down a manhole cover, so someone drove over it on the first lap, and it blew up one of Ferrari's engines. Oh, wow. Stopped the practice. So I didn't get to watch it. I never got to watch any race
the whole weekend. I was in my hotel room. I took this call, which I had no idea what it was going to be, and got fired by the board. And it just, like, felt like a dream. I was confused, it was chaotic, it did not feel real. I was, like, obviously upset, and it was painful. But confusion was just, like, the dominant emotion at that point. I was just in a fog, in a haze. I didn't understand what was happening. It happened in this, in my opinion, unprecedentedly crazy way. And then, in the next, like, half hour, I got so many messages that my phone, like, broke. Wow. And who was this from? Employees? Everyone. Every... like, my phone was just unusable, because it was just notifications nonstop. And it, like, hit this thing where it stopped working for a while. Then messages got delivered late, then it marked everything as read, so I couldn't even, like, tell... Yeah, you know. So who had you spoken to? You hadn't... I was, like,
talking to the team here, trying to figure out what was going on. Like, Microsoft is calling, everybody else is calling, and it was just, like... it really was unsettling and didn't feel real. And then I kind of, like, got a little bit collected and I was like, you know what? I can go on. I really want to go work on AGI somehow. If I can't do it here, I'm still going to do it. And I was just thinking about the best way to do that. Greg quit. Some other people quit. I started just getting, like, tons of messages from people saying, like, we want to come work with you, however it's going to be. And at that point, going back to OpenAI was, like, not on my mind at all. Yeah, I can imagine. I was just thinking about whatever the future was going to be. But I kind of didn't have a sense of what an industry event this was, because I, like, wasn't really reading the news. All I could tell was I was getting, like, crazy numbers of messages. Right. Because you're actually in the
storm. Yeah, yeah. And I was just trying to, like, you know, be supportive of OpenAI, figure out what I wanted, try to understand what was happening. And then I flew back to California, met with some people, and kind of was just, like, very focused on going forward at that point. But, you know, also, like, wishing the best for OpenAI. And then I stayed up, like, most of that first night. Couldn't really sleep. There were just, like, tons of conversations happening. And then it was sort of, like, a crazy weekend from there. But I'm sure... I still have not... like, I'm still a little bit in shock, and a little bit just trying to, like, pick up the pieces. You know, I'm sure as I have time to, like, sit and process this, I'll have a lot more feelings about it. Right. Do you feel like you just had to jump straight back into everything? Because, you know, to your point, you're on this mission. You can see in your eyes you're very driven, you know. And the world has now tipped over a precipice
that it can never return from, you know. So you're moving towards something, and all of a sudden it doesn't seem like you'll be able to achieve it in the sphere that you're in. But, as you say, Microsoft steps in, and Satya Nadella says, hey, come work with us, we'll rebuild this team. If there's one thing people say about Sam Altman, if they've worked with him, it's that he is tenacious. He's unrelenting. He does not believe in letting life stop you if you have a goal and if you believe in something. And it seems like you were moving towards that. You said nothing publicly about OpenAI. You weren't disparaging in any way. But it feels like it took a toll on you. For sure. I mean, I don't think it's anything I won't, like, bounce back from, but I think it'd be impossible to go through this and not have it take a toll on you. That'd be really strange. Did it feel like you were losing a piece of yourself? Yeah. I mean, like, this... We started OpenAI at, like, the very end of
2015. Like, the first day of work was really in 2016. And I'd been, like, working on this on the side for a while, but I've been full time on this since early 2019. And it has, like... AGI and my family are, like, the two main things I care about. So losing one of those is, like... And again, maybe in some sense I should say, like, oh, you know, I was going to work on AGI anyway, and I really care more about the mission. But of course,
I also care about, like, this org, these people, our users, our shareholders, everything we built up here. So, yeah, I mean, it was just, like, unbelievably painful. The only comparable life experience I had, and that one was, of course, much worse, was when my dad died. And that was, like, a very sudden thing. And the sense of, like, confusion and loss... In that case, I felt like I had, like, a little bit of time to just really, like, feel it all. But then there
was so much to do. It was so unexpected that I had to, like, pick up the pieces of his life for a little while. And it wasn't until, like, a week after that that I really got a moment to just, like, catch my breath and be like, holy shit, I can't believe this happened. So yeah, that was much worse. But there's, like, echoes of that same thing here. I can only imagine. When you look towards the future of the company and your
role in it, how do you now find a balance between moving OpenAI forward, continuously propelling yourselves in the direction you believe in, but then also, you know... Do you still have an emergency brake? Is there some system within the company where you say, if we feel like we're creating something that's going to adversely affect society, we will step in, we will stop this? Do you have that ability, and is it baked in? Yeah, of course. Like, we've had it in the past. Like,
we've created systems that we've chosen not to deploy. Oh, interesting. And I'm sure we will again in the future. Or we've created a system and just said, hey, we need much longer to make this safe before we can deploy it. Like with GPT-4: it took us almost eight months after we finished training before we were ready to release it, to do all of the alignment and safety testing. Right, right. I remember talking to some of the team. And yeah, that's not a board decision. That's just the people in here doing their jobs and being
committed to the mission. So that will continue on. And one of the things I'm really proud of about this team is the ability to operate well in chaos, crisis, uncertainty, stress. I give them, like, an A-plus on that. They did such a good job. And as we get closer to more powerful, very powerful systems, I think that ability of the culture and the team we have built is maybe the most important element: you know, to, like, keep a cool head in a crisis and make good, thoughtful decisions. I think the team here really proved
that they can do that. And that's super important. I saw this thing where someone was like, you know, the thing we learned about OpenAI is that Sam can run the company without any job there. And I think that's totally wrong. That's not at all what happened. I think the right learning is that the company can totally run without me. The team is ready, the culture is ready. Like, I'm just super proud of that. Really happy to
be back and doing it. But, like, I sleep better at night having watched the team manage through this, given the challenges ahead. There will be bigger challenges than this that will come up, but I think, in some subjective sense, I hope and believe this is the hardest one, because we were so unprepared. And now we kind of, like, realize the stakes, and that in some important sense, we're just not a regular company. Oh yeah, far from it. Far from it. Let's talk a little bit about that.
ChatGPT, OpenAI, you know, whatever you may end up calling it. Because, I mean, you've got Dall-E, you've got Whisper, you've got all these amazing name ideas, brand ideas, architecture ideas. But I feel like ChatGPT has just done it. You know, I feel like it is now... It is? Yeah, it's a horrible name, but it may be too ubiquitous to change. You think you can't change it at this point? I mean, could we drop it down to just, like, GPT, or just Chat? I don't
know, I don't know. Maybe, like, sometimes I feel like a product or a name or an idea grows beyond the marketer's dream space, and people just have it. Yeah. No marketer ever would have picked ChatGPT as the name for this, but we may be stuck with it, and it might be all right. Yeah. And now, I mean, just the multimodal aspects of it, like, fascinate me. You know, I remember when I first saw Dall-E come out, and it was just an idea, and seeing how it worked and
seeing this program that could create a picture from nothing but noise, and trying to explain it to people. And they were going, but where did it get the picture from? It was like, there was no picture, there was no source image. And they're like, but that's not possible, right? It saw something. And I was like, it's so hard to explain some of this; sometimes it's even hard to understand for myself. But when we look at this world that we're currently living in, you know, we talk about them as numbers: GPT-3.5,
GPT-4, GPT-3, you know, five, six, seven, whatever it may be. I like to remove the technical term in that way and talk more about, like, the actual use cases of the products. One thing we saw between products, between ChatGPT 3.5 and 4, was what we would call reasoning on a much higher level, a little bit of, like, creativity, some of the first sparks of it. Yes, yes, exactly. And when I look at this product in
this world that you're creating now, you know, with general large language models and now the specialized large language models, I wonder, do you think that the use case is going to change dramatically? Do you think that what might right now just be, like, a little chat bot will be the way the product remains? Or do you think it will become a world where everything becomes a specialized GPT? You know, a world where Trevor has his GPT that's trying to do things for
him, or this company has that GPT that's doing things for them? Like, where do you see it? Obviously it's hard to predict the future, but where do you see it going from where we are right now? I think it will be a mix of those. It is hard to predict the future. Probably I'll be wrong here, but I'll try anyway. I think it'll be a mix of the two things that you just said. One, the base model is going to get so good that I have a hard time, with conviction,
saying, here's what it won't be able to do. So that's going to take a long time, but I think that's where we're heading. What's a long time on your horizon? Like, how do you measure it? Like, not in the next few years. Okay. It will get much better every year in the next few years. But, like, I'm not sure... I was gonna say I'm certain. I think it's, like, highly likely there will still be plenty of things that the model of 2025, 2026 can't do. But doesn't the model always surprise you?
You know, when I talk to engineers who work in the space, when I talk to anyone who's involved in AI or adjacent to AI, the number one thing people say, the number one word, is surprised. People keep saying, they go, we were surprised. We thought ChatGPT was learning about this field, and all of a sudden it started speaking a language. Or we thought we were teaching it about this, and all of a sudden it knew how to build bridges, or something like that. So, for what it's worth,
that was the subjective experience of most people here, maybe between, like, 2019 and 2022 or something like that. Okay. But now I think we have learned not to be surprised. Now we trust the exponential, most of us. So GPT-5, or whatever we call it, will be great in a bunch of ways. We will be surprised about specific things it can do and can't do, but no one will be surprised that it's awesome. Like, at this point, I think we've really internalized that in a deep way. The second thing you touched
on, though, is these custom GPTs, and more importantly, that you also touched on, like, the personal GPT, like the Trevor GPT. Yeah. And that, I think, is going to be a giant thing of the next couple of years, where, if you want, these models will get to know you, access your personal data, answer things in the way you want, work really effectively in your context. And I think a lot of people are going to want that. Yeah. I mean, I can see a lot of people wanting that. It almost makes me wonder if, you know,
the new workforce becomes one where your GPT is almost your resume, your GPT is almost more valuable than you are, in a strange way. Do you know what I mean? So yeah, it's like a combination of everything you think and everything you thought, and the way you synthesize ideas, combined with your own personal GPT. And I mean, this is me just, like, thinking of a crazy future where you literally get to a job and they go, what's your GPT? And you say, well, here's mine. You know, we always think
of these, like, agents, these personalized agents: I'm going to, like, have this thing go do things for me. But what'd be interesting with what you're saying is, what if instead this is, like, how other people interact with you, right? Like, this is your impression, your avatar, your echo, whatever. I can see getting to that. Because, I mean, what are we if not, like, a culmination, a combination of all of our... It's a strange thought, but I could believe it. I'm constantly fascinated by where it could go and what it
could do. You know why? When ChatGPT first blew up, right, in those first few weeks, I will never forget how quickly people realized that the robot revolution (I know it's not robots, but, you know, for people, they're like, oh, the robot revolution) wasn't replacing the jobs they thought it would. People thought it would replace, yeah, truck drivers, etc. And yet we've come to find that, no, those jobs are actually harder to replace. And it's in fact all the jobs that have been, quote
unquote, like, 'thinky' jobs. You know, it's, like, your white collar. Yeah. Oh, you're a lawyer? Oh, they might not need as many lawyers when you have, you know, ChatGPT five, six, seven, whatever you want. You know, you're an engineer... Where do you... The human body is really an amazing thing. It really is, right? Yeah, it really is. Do you see any advancements where you think it could replace the human body, or are we still in, like, mind only? Like, I think we will get robots to work eventually. Like humanoid robots, yeah, to
work eventually. But, you know, we worked on that in the early days of OpenAI. We had a robotics program. Oh, I didn't know. We made this thing, a robotic hand that could do a Rubik's Cube with one hand, which takes a lot of dexterity. I think there's, like, a bunch of different insights rolled into that. But one is that it's just much easier to make progress in the world of bits than the world of atoms. Like, the robot was hard for all the wrong reasons. It wasn't hard because
it was, like, helping us advance hard research problems. It was hard because the robot kept breaking, and it wasn't that accurate, and the simulator was bad. Whereas with, like, a language model, you can do all that virtually, you can make way faster progress. So, like, focusing on the cognitive stuff helped us push on more productive problems faster. But also, in a very important way, I think solving the cognitive tasks is the more important problem. Like, if you make a robot, it can't necessarily figure out how to, like, go help you
make a system to do the cognitive tasks. Yeah. But if you make a system that does the cognitive tasks, it can help you figure out how to make a better robot. Oh yeah, that makes sense. And so I think, like, cognition was the core of the thing that we wanted to go after, and I think that was the right decision. But I hope we'll get back to robots. Do you have an idea of when you will consider artificial general intelligence achieved? Like, how do we know? Personally, like, when I'll feel like
the mission is achieved? Yeah. Like, what? Because everyone talks about artificial general intelligence, but then I go, how do we know what that is? So this comes back to that point earlier where everyone's got a different definition. Yeah. I'll tell you personally: I'll be thrilled when we have a system that can help discover novel physics. Okay. I'll be very thrilled. But that feels like it's way beyond general intelligence. Do you know what I mean? It's beyond, I think, what most people would call it. Like, maybe because this is what I think of
sometimes, as I go, how do we define that general intelligence? Are we defining it as brilliance in a certain field? Or are we defining it as, like... A child is artificially generally intelligent, for sure, but you have to keep programming it. You know, they come out, they don't speak, they don't know how to walk, and you're constantly programming this, you know, AGI to get to where it needs to go. So how will you... If you get to the point where you have a four-year-old
child version of a system that can just, like, figure it out, you know, can just go autonomously, with some help from its parents, yeah, figure out the world in the way that a four-year-old kid does... Oh yeah, we can call that an AGI if we can really get that truly generalized ability to be confronted with a new problem and just figure it out. Not perfectly; a four-year-old doesn't always figure it out perfectly either. But, you know, then we've clearly got it. Are we able to get there
if we don't fundamentally understand thinking and the mind? Do you think we can get it, or can we get to a place where... So I'm sure you know about this. One of my favorite stories in the world of AI is, I think it was actually a project that Microsoft was working on, but they had this AI that was trying to learn how to discern between male and female faces, right? And it was pretty accurate. At some point it was at, like, 99.9% accuracy. However, it kept failing with black
people, and black women in particular. It kept on mischaracterizing them as men. And the researchers kept working, and they were like, what is it, what is it, what is it? And at some point, and this is, I mean, I tell the story this way, it could be a little bit wrong, but I found it funny, at some point they, quote unquote, sent the AI to, I think it was Kenya, right? So they sent the AI to Africa. And then they told the research
team in Kenya, can you work with this for a while and try and figure it out? And while the AI was running on that side of the world with their data sets and African faces, it became more and more accurate with, specifically, black women. But at the end of it, they found that the AI never knew the difference between a male face and a female face. All it had been drawing was a correlation with makeup. And so the AI was going, people who have red lips and who have, like, rosy cheeks and
maybe blue on their eyelids, those are women, and the other ones are men. And because the researchers said, yes, you're correct, yes, you're correct, it just found, like, a quote unquote cheat code, you know. And you know how this works, yeah, way beyond what I understand. But it just figured out a cheat code. It's like, oh, I understand what you think a man is and what a woman is, and it gave it to them. And then they realized, because black women are generally underserved when it comes to makeup, and they often don't wear makeup, you
know, the system just didn't know. But we didn't know that it didn't know. And so I wonder, how will we know that the AGI doesn't know something? Or will we know that it's just cheating to get there? Like, how do we know? And what is the cost of us not knowing, when it's intertwined with so many aspects of our lives? You know, one of the things that I believe we will make progress on is the ability to understand what these systems are doing. So right now, interpretability has
like, made some progress. That's the field of, like, looking inside one of these models. And there's different ways, even different levels, you can do this at. You can try to understand what every artificial neuron in the system is doing. Or you can, like, look, as the system is thinking step by step, at, you know, which of these steps do I not agree with. Okay. And there will be even more we'll discover. But the ability to understand what these systems are doing, hopefully have them explain to us why they're coming to certain conclusions, and do it
accurately and robustly: I think we're going to make progress there before we truly understand how these systems are capable of doing what they do, and also how our own brains are capable of doing what they do. So I think we will eventually get to understand that. I'm so curious. I'm sure you are too. I am. But it seems to me that we'll have more progress in doing what we know works to make these systems better and better, and having them help us with the interpretability challenges. And also, I think as these
systems get smarter, they will just be fooled less often. So a more sophisticated system might not have made that makeup distinction; it would have learned at a deeper level. And I think we see evidence of stuff like that happening. You know, there's two things you actually make me think of when you say not get fooled that easily. One is the safety side, one is the accuracy side. One of the first things, and I mean, the press ate this up, you remember, they were like, oh, the AI hallucinates, and it thinks that it is
going to kill me, and it thinks... And people love using the word think, by the way, with large language models, which I find particularly funny. Yeah. Because I always think, like, journalists, you know, they should be trying to understand what it's doing before they report on it. But they've done, I think, the general public a disservice in using the word think. Yeah, quite a lot. I mean, I have empathy for it. Like, we need to use familiar terms and we need to anthropomorphize. But I agree with you that it is a disservice. Yeah.
Because if you're saying it thinks, then people go, well, will it think about killing me? And it's like, no, it's not thinking, you know. It's really just using this magical transformer to figure out where words most likely fit in relation to each other. What do you think you're doing? What am I doing? Yeah, that's an interesting one. That's... And now, so this is where I was going with the ideas that we put together. We talk about hallucinating. Let me start with the first part. Do you think we can
get to a place where AI doesn't hallucinate? Well, I think the better version of that question is, can we get to an AI that doesn't hallucinate when we don't want it to, at a similar rate to humans not doing it? And on that one, I would say yes. Okay. But actually, like, a big part of why people like these systems is that they do novel things. And if it only ever... Yeah. Like, hallucination is this sort of feature and bug... Well, that's what I was about to ask. I was like,
isn't hallucinating part of being an intelligent being? Totally. Like, if we think about the way an AI researcher does work, okay: they look at a bunch of data, they read a bunch of stuff, and then they start thinking, well, maybe this, or this, maybe I should try this experiment. And now I got this data back, so that, like, didn't quite work. Now I'll come up with this new idea. But this human ability to come up with new hypotheses, new explanations, yeah, that
have never existed before, most of which are wrong, but then have a process and a feedback loop to go figure out which ones might make sense, and that do make sense: that's, like, a key element of human progress. And how do we prevent the AI from, you know, like, that garbage in, garbage out scenario? Right now, the AI is working off of information that humans have created in some way, shape, or form. It is learning from what we've considered learnable material. With everything that's
popped up now, you know, the OpenAIs, the Anthropics, the LaMDAs, you know, you name them, it feels like we could get to a world where AI is pumping out more information than humans are pumping out, and it may not be vetted as much as it should be. So is the AI going to get better when it is learning from itself, in a way that might not be vetted? Like, do you get what I'm saying? Totally. How do we figure that out? So it comes back to
this issue of knowing how to behave in different contexts. Like, you want hallucinations in the creative process, but you don't want hallucinations when you're trying to, like, report accurate facts on a situation. And right now you have these systems that can, like, generate these beautiful new images that are hallucinations in some important sense, but good ones. But then, when you want a system to be only factual: again, it's gotten much better, but it's still got a long way to go there. And it's fine, I think it's good, if these systems are
being trained on their own generated data, as long as there is a process where the systems are learning what data is good and what is bad. Which, again, it's not enough to say who said it or not, because if it's coming up with new scientific ideas, those may start off as hallucinations, which are valuable. But, you know, what is good, what is bad. And then also that there is enough human oversight of that process that we are all still collectively in control of where these things are going. But with those constraints, I think
it's great that future systems are going to be trained on generated data. And then you reminded me of something else, which is, I've been wondering, I don't know quite how to calculate this, but I would like to know when there's more words generated by, say, GPT-5 or 6 or whatever, than by all of humanity at a given time. That feels like an important milestone. Actually, now that I'm saying that out loud, maybe it doesn't. Generated in what way? Oh, like, where the model is producing more words than all of
humanity in a given year. So there's, you know, 8 billion humans or whatever, who speak however many words per year on average; you can figure out what that is. And that does seem interesting. Yeah. What does it give us, though, is the question on the other side. Yeah, that's why I was taking it back after I said it. For some reason it feels like an important milestone to me, but it feels like an important milestone in, like, a monkey-typewriter kind of way, because maybe humans are, you
know, we're all monkey-typewriting the whole time. And that's where things... I think it's worth, like... yeah, the amount of, I won't use the word thinking, because I think you're right not to use it, but the amount of, like, words generated by all humanity. Yeah. I'm going to lose you soon, so I want to jump into a few questions that I think a lot of people will kill me if I don't ask you. Okay, so one of the main ones,
this is from my side, personally. We always talk about AI learning from the data, right? They're fed data sets, and we talk about this; that's why you need these mega computers that cost billions and billions of dollars, so that the computers can learn. How do we teach an AI to think better than the humans that have given it data that is clearly flawed? So, for instance, how does an AI learn beyond the limits of the data that we've put out there? You know, when it comes to race, when it comes
to economics, when it comes to the ideas. Because we are limited, how do we teach it to not be as limited as we are, if we're feeding it data that's limited? We don't know yet, but that's, like, one of our biggest research thrusts in front of us: how do we surpass human data? And I hope that if we do this again a year from now, I'll be able to tell you. But I don't know yet. We don't know yet. Okay. It's really important. However, a thing that I
do believe is, this is going to be a force to combat injustice in the world in a super important way. I think these systems won't have the same deep flaws that all humans do. They will be able to be made far less racist, far less sexist, far less biased. They will be a force for economic justice in the world. I think, you know, if you make a great AI tutor or a great AI medical advisor available, that helps the poorest half of the world more than the richest
half, even though it helps lift everybody up. So I don't have, like, an answer to the scientific question you asked, but I do at this point feel confident that these systems can be, and of course, we have to do some hard societal work to make them in fact be, but they can be great for sort of increasing justice in the world. Okay, maybe that leads in perfectly to a second question, which is, what are you doing, what is OpenAI doing, are you even considering doing anything, to try and mitigate how much this new
technology once again creates, you know, the haves and the have-nots? Every new technology that's come out has been amazing for society as a whole, if we call it that. But you can't deny it creates a moment in time where, if you have it, you've got it all, and if you don't, you're out of the game. I think we'll learn a lot more as we go. But currently, I think one really important thing we do is offer a truly free service, and I mean not ad-supported, just a free service, to more than 100
million people who are using it every week. And, well, it's not fair to say anyone, because in some countries we have it blocked or we are still blocked, but we're trying to get closer and closer to a place where anyone can access really high-quality, easy-to-use, free AI. That is important to all of us personally. And I think there's other things we'd like to do with the technology, like, if we can help cure diseases with AI and make those cures available to the world, that's clearly
beneficial. But putting this tool in the hands of as many people as we can, and letting them use it to architect the future, that is super important. And I think we can push this much, much further. Okay, two more questions. Can I add one more thing to that? Yeah, yeah, it's your time. Go ahead. The other thing that I think is important to that is, who gets to make the decisions about what the systems say and don't say, or do and don't do? Like, who gets to set the limits? Yeah. And,
like, right now it is basically the people who work at OpenAI deciding. And no one would say that's, like, a fair representation of the world. So figuring out not just how we spread access to this technology, but how we democratize the governance of it, that's, like, a huge challenge for us in the coming year. Well, that sort of goes to what I was about to ask you: the safety side of it all. You know, we spoke about this right at the beginning of the conversation. When designing something that can
change the world, there always has to be an acknowledgment of the fact that it can change the world in the worst way, or for the worse. You know, with each leap of technology, there's been an outsize ability for one person to do more damage. Is it possible, the first part, to make AI completely safe? And then the second part of it is, what is your nightmare scenario? What is the thing that you think of that would make you press a red button that shuts OpenAI, and/or AI, down? Yeah. Where you go, you know,
if this can happen, we have to shut down. What are you afraid of? So the first one is, can you make it safe? And the second part is, what is your nightmare scenario? The way I think about it... So, first of all, I think the insight that you started with, which is that the number of people that can cause catastrophic harm goes down every decade, or roughly every decade: that seems to me to be, like, a deeply true thing that we as a society have to confront. Second, about making
a system safe: I don't think of it as, like, quite a binary thing. Like, we say airplanes are safe, but airplanes do still crash. Very infrequently, like, amazingly infrequently. We say that drugs are safe, but the FDA will still certify a drug that can cause some people to die sometimes. Right. And so safety is, like, society deciding something is acceptably safe given the risk-reward trade-offs. Right. And that, I think, we can get to. But it doesn't mean things aren't going to go
really wrong. I think things will go really wrong with AI. And I think society actually has a fairly good, messy but good, process for collectively determining what safety thresholds should be. That is a complex negotiation with a lot of stakeholders that we as a society have gotten better and better at over time. But what we have to prevent, and I think what you were touching on there, is the kind of catastrophic risks. Yeah. So nuclear is the example everyone gives. You know, nuclear war had this very global
impact, and so the world treated it differently and has done what I think is a remarkable job for the last almost 80 years. And I think there will be things with AI that are like that. Certainly one example people talk about a lot is AI being used to design and manufacture synthetic pathogens. Yeah, that can cause a huge problem. Another one I think people talk a lot about is computer security issues: an AI that can just, like, go hack beyond what any human could do, at any skill level. And then there's another category of things
that I think are just new, which is, if the model gets capable enough that it can help design its own way to, like, exfiltrate its weights off of the server, make a lot of copies, and modify its own behavior. More of, like, the sci-fi scenario. Yeah. But I think we as a world do need to stare that in the face. Maybe not that specific case, but this idea that there is catastrophic, or potentially even existential, risk, and just because we can't precisely define it doesn't mean we get to
ignore it either. And so we're doing a lot of work here to try to forecast and measure what those issues might be, when they might come, how we would detect them early. And I think all the people who say you shouldn't talk about this at all, you should just talk about, you know, the issues of misinformation and bias, the issues of today: they are wrong. We have to talk about both. We have to be safe at every step of the way. Okay. That's as terrifying as I thought it would be. So then
I go, oh, by the way, are you actually thinking about running for governor? Was that a... No, no, no. I thought about it very briefly in, like, 2017. Okay. Or 2016 even, something like that. It was, you know, like a couple of weeks of kind of vague entertainment of an idea. Okay. Okay. I guess my final question for you then is, you know, the "what now." Overall, what is your dream? If Sam Altman could wave a magic wand and have AI be exactly what you hope it will
be, what will it do for the future? What are all the good sides, all the upsides, for everybody out there? And I mean this without any negative caveat attached. Thank you for asking this. I think you should always end on the positive. Yeah. Look, I think we are heading into the greatest period of abundance that humanity has ever seen. And I think the two main drivers of that are AI and energy, but there are going to be others too. Those two things: the ability to, like, come up
with any idea, the ability to make it happen, and to do this at mass scale, where the limits to what people can have are going to be sort of, like, what they can imagine and what we can collectively negotiate as a society. I think this is going to be amazing. We were talking earlier: what does it mean if every student gets a better educational experience than the richest student with the best access can get today? Yeah. What does it mean if we all have better health care than the richest person with
the best access can get today? What does it mean if people are, generally speaking, freed up to work on whatever they find most personally fulfilling, even if it means there have to be new kinds of job categories? What does it mean if everybody can... You know, presumably you and I both, like, really love our jobs. Yeah. But I don't think that's true for everybody. Yeah, I agree. What does it mean if everybody gets to have a job that they love, and that they have, like, the resources of a large company or a large team at
their disposal? So, you know, maybe instead of the 800 people at OpenAI, everybody gets 800 even smarter AI systems that can do all these things, and people just get to create and make all the... Like, I think this is remarkable, and I think this is a world that we are heading to. And it will require a lot of work in addition to the technology to make it happen; society is going to have to make some changes. But the fact that we are heading into this age of abundance, I'm very happy about it. I'll. I'll
leave you with this from my side. I'm a huge fan, huge, huge fan, of the potential upsides of AI. You know, like, I work in education in South Africa. My dream has always been to have every kid have access to the best education possible. Yeah. You know what I mean. Literally, you know, no child left behind, because they can learn at their pace. By the way, what's happening with children who are using ChatGPT to learn things, the stories I get... Yeah, it's phenomenal. Phenomenal. It really is. You know, this, like, 14-year-old... Yeah. Yeah. No, it's phenomenal. "I learned, like, all of calculus on my own." It really is. It really is phenomenal. And especially as it becomes even more multimodal, when you have, like, video and all of that, it's going to be amazing. I dream about that. To your point, health care I dream about. I dream about all of it. The one existential question I don't think we're asking enough, and I hope you will, and maybe you have been asking it, though, is how do we redefine the purpose of humankind once AI has effectively supplanted
all of these things? Because, you know, whether you like it or not, throughout history, you realize our purpose has often defined our progress. You know, there's a time when our purpose was just religion. And so, you know, for good and for bad, if you think about it, religion was really great at getting people to think and move in a certain direction beyond themselves. And they went, like, this is my purpose. I wake up to serve God, whichever God you were thinking of. I wake up to serve God. I wake up to please God.
I wake up... And it makes humans, I think, one, feel like they're moving towards something, and two, it gives them a sense of community and belonging. And I feel like, as we move into a world where AI removes this, the one thing I hope we don't forget is how many people have tied their complete identities to what they do versus who they are. And once we take that away... When you don't have a clerk, when you don't have a secretary, when you don't have a switchboard operator, when
you don't have an assistant, when you don't have a factory worker, when you don't have all of these things... You know, we've seen what happens in history. Oftentimes it's like radicalism pops up, there's a mass backlash. Like, have you thought about that? Is there a way you can... Before that, how would you describe our purpose right now? I think right now our purpose is survival tied to the generation of income in some way, shape or form, because that's how we've been told survival works, right? You have to make
money in order to survive. But we've seen there've been pockets in time where that has been redefined. France is a great example, where they had, and I think they still have a version of it, the artists' fund, where they went, "We'll pay you as an artist. You just make things, just make France look beautiful." Yeah. And that was beautiful. I know you're a fan of UBI, for instance. Yeah. We shouldn't go before you talk about that. Well, I just don't think people's survival should be tied to their, like, willingness
or ability to work. I think that's, like, a waste of human potential. But, yeah, I agree with you completely. I think... Wait, let me ask you this before you go. Why do you think universal basic income is so important? Because you don't waste your time or your money on things you don't believe in, and you spend a lot of time and money on universal basic income. I mean, the last I saw was, like, there's a $40 million project that you're at, out of $60 million. I don't think universal
basic income is a complete solution, of course, to the challenges in front of us. But I do think that, like, eliminating poverty is just inarguably a good thing to do. I think better redistribution of resources will lead to a better society for everyone. But I don't think giving away money is the key part of this. Giving away tools and giving away governance, I think, is more important; like, people want to be architects of the future. I think, as much as I could say there's been a consistent thread of meaning, or of,
like, a mission for humanity, I think it is, like, you know, survive and thrive, for sure. Yeah. On an individual basis. But collectively, we do have an emergent collective desire to make the future better. Yes. Now, we get off track lots of times, but the human story is, like, let's make the future better. And that is technology, that is governance, that's the way we treat each other, that's, like, going off to explore the stars, that's understanding the way the universe works, whatever it is. And I have so much confidence that that is so deep
in us. No matter what tools we get, that base desire, that mission of humanity to thrive as a species and as individuals, that's not going to go anywhere. So I'm super optimistic about what the world looks like two generations from now. But what you got at is really important, which is people who are, you know, already in their careers and actually pretty happy and don't want change. Yeah. And change is coming. One thing we've seen with previous technological revolutions is in about two generations, it seems like society and people can adapt to any amount of
job turnover, right? But not in ten years, certainly not in five years. Right. And we're going to have to face that. I think, to some degree, as we said earlier, it'll be slower than people think, but still faster than society has had to deal with in the past. And what that's going to mean, and how we have to adapt through that, I'm definitely a little afraid of. We're going to have to confront it, and I assume we'll figure it out. I'm confident we'll figure it out. And I'm also confident that, like, if you give our
children and grandchildren better tools than we had, they're just going to do things that absolutely astonish us. And I hope they feel, like, horrible about how bad we all had it. Like, I hope the future is just so amazing, and this human spirit and desire to, like, go off and figure it out and express ourselves and design a better and better world, and way beyond this world. I think that's wonderful. I'm really happy about that. And I think in some sense we shouldn't make too much of this, like, little thing. And, you know,
that scene in Star Wars where one of the bad guys, I think it's Vader, is like, don't be too impressed with this technological terror you've created, it's, like, nothing compared to the power of the Force. Yes, I do feel that way about AI in some important sense, which is, like, we shouldn't be too impressed with this; the human spirit will see us through and is much bigger than any one technological revolution. I mean, it's a beautiful message of hope. I hope you're right, because I love the technology. The
one thing I would leave with you, Sam Altman, as Time's CEO of the Year and one of the People of the Year, and I think you'll continue to be that, especially in this role, because of how much impact OpenAI and AI itself are going to have on us: I would implore you to continue to remember that feeling you had when you were fired, as you're creating a technology that's going to put many people in a similar position. Because I see
you have that humanity in you. And I hope, as you create, you'd constantly be thinking about that. You know what I did Saturday morning? Early Saturday morning, when I couldn't sleep, I wrote down: what can I learn from this that will help me be better when other people go through a similar thing and blame me, like I'm blaming the board right now? And have you figured it out? A lot of it. I mean, there's a lot of useful, like, single lessons, but the empathy I gained out of this whole experience, and my, like,
recalibration of values, for sure, was a blessing in disguise. Like, it came at a painful cost, but I'm happy to have had the experience in that sense. Well, Sam, thank you for the time. Thank you. Really, really enjoyed it. I hope we do chat in a year about, you know, all the new advancements. You should definitely come do that. That'll be awesome. I definitely will, man. All right. Cool. Thank you. What Now with Trevor Noah is produced by Spotify Studios in partnership with Day Zero Productions, Fulwell 73 and Audacy's
Pineapple Street Studios. The show is executive produced by Trevor Noah, Ben Winston, Jenna Weiss-Berman, and Barry Finkel, produced by Emmanuel Happiness and Marina Hankie. Music, mixing and mastering by Hannah Brown. Thank you so much for taking the time and tuning in. Thank you for listening. I hope you enjoyed the conversation, and I hope we left you with something. Hopefully we'll see you again next week. Same time, which is whenever you listen; same place, which is wherever you listen. Next Thursday, an all-new episode of What Now?