Sandhini from OpenAI, if you can join us. [Applause] Sandhini is from the policy team of OpenAI. Thanks, Sam. I hope you have the bandwidth to take more questions. For sure.

So I just want to start. I want to refer to this one interview we did with you when you were in YC, where you specifically called out big tech as being hyper-capitalist; that came up during the interview. A lot of people think OpenAI is as powerful as, or more powerful than, big tech today.

I don't think anyone really... that's nice of you to say. I don't know, I just got the question.

All right. And you propose to regulate the industry. Is that also because you want to be part of writing that regulation? Which makes it look like the smaller startups, and the companies which are still burgeoning, may not have a chance to ever become as big.

To be very clear, we've explicitly said there should be no regulation on smaller companies or on the current open-source models; it's important to let that flourish. The only regulation we've called for is on people ourselves or bigger, which includes Google.

That'd be just you and Google right now, I think.

Okay, but really just us, I think; we're clearly in the lead. If it were only focused on us right now, that'd be fine. I totally get why people are skeptical hearing someone who runs one of the companies in the industry call for regulation, but the governments haven't been doing it, and we think this is important, so we have a moral duty to do it. It's totally reasonable to ask that question, but we feel strongly about doing it.

Has there been any progress at all?

Yeah, a lot, actually. It's been quite a good response. We've met with heads of state in many, many countries on this trip, and I have been really pleasantly surprised, every single time, by the nuance: balancing not slowing down innovation, all the positive economic benefits and everything else, while realizing that if this keeps going, it can get somewhere that does require global action.

Do you want to add to that?

I think the only thing I'll add in terms of regulation, and something we've called for, is really setting standards, in a way guiding companies as to what we should be building, because we know these technologies will impact the world, and it needs to be a two-way conversation. So it really has been about kick-starting that conversation, having it be a two-way street, so we're not deciding for the entire world how these technologies impact everyone.

And Sam, usually, and you've seen it for the past 15 years, regulation always follows technology. This is the first instance where you are proposing to regulate before it's big; the percentage of people who've used ChatGPT or any of the LLM models is very small right now. Does that dissuade smaller companies? I think we'll open it up to the audience after you answer that.

Yeah, again, we are explicitly saying there should not be regulation for smaller companies; I think that's important. There's got to be enough nuance in the conversation; it can't just be: either you're a CEO who says regulation is bad, and then you're responsible for your industry being bad, or you say regulation is good, and then you're trying to do regulatory capture. If that's it, then what do we want? I think what we need is companies saying what they actually believe; the decision is certainly up to the governments, but companies should be able to give input
. In our case, we think that for very powerful models, models much more powerful than what we have today, we do need some global regulatory something. If the world decides that's not what it wants, that's fine, but there's nothing we've said, and in fact we've explicitly said the opposite, about anything regarding smaller companies.

Do you want to... there are no questions? Great. Just in case... let's do it. Yeah, these are some of the people who had sent in questions, so if we can have the mic: Gaurav from Unacademy. And maybe you can introduce yourselves as well.

Hi, this is Gaurav from Unacademy. Five years ago you wrote a blog post which said that the next opportunity you pick should make everything else you have done look like a footnote. Now you talked about fusion today, et cetera, and there are some clear big opportunities. But say, for entrepreneurs starting out, or starting their next company, apart from the market cap or the potential or the USP, what are some traits that tell you on day one that this opportunity can be super big, and can make everything else you have done look like a footnote?

I think this is the most exciting time to start a company since the dawn of the internet. I think this is going to be bigger than mobile; it might turn out to be bigger than the internet. I hope it does, but it's at least that big, and that means anything you do can be huge. I think it's been hard to figure out what to work on these last 10 years, because there has not been a big new technology trend that's going to shake the ground, and now we have one. So I would definitely do something in AI. As for what to do: I'd pick what you like, what you believe in, and make sure the business idea has some basic defensibility to it. But it's open season, and this is a tremendously exciting time.

I want to add to that. Just put on your investor hat: is there too much of a frenzy around AI?

Yeah, there is too much of a frenzy around AI in the short term; it's wildly overhyped in the short term, with people saying there's crazy stuff happening in Silicon Valley right now. But I think it's still probably underhyped in the long term. We might be wrong, we might hit a wall at any time, but if we really do make the progress we think we're going to make, and we have this magical system that can just do anything you ask, no one knows how to think about that, no one knows how to value that, but whatever they're thinking is too low. So yeah: short-term overhyped, long-term underhyped.

What do you think?

I definitely agree with that. One thing I'd also add: something that's really special about AI is the way it can reach millions of people across the world. Something we've seen in India is that there's so much demand for things like education, and there are just not enough people who can offer that: teachers, doctors, lawyers. With things like AI, that will slowly become more and more possible, so that's a very exciting future, and the potential is just very large there.

Yeah, to add to that: we were talking earlier about what's going to happen to the jobs, but maybe the problem is that we don't have nearly enough people to do all the jobs we want. We're in this massive crunch, and if you can make way more job-doing ability available, the world would consume a hundred times, a thousand times more. I think we may really
see that.

More questions? Sam, Rajan here from Peak XV Partners; until yesterday it was Sequoia Capital, now Peak XV. I've got some questions for you.

Yeah, I know, I know. You're on the hot seat today, not me.

Sam, coming to startups: as you know, we've got a very vibrant startup ecosystem in India, specifically focused on AI. Are there spaces where you see, let's say, a startup from India building something? You can build on the models, be it GPT and many others, but if you want to build foundational models, how should we think about that? Where is it that a team from India, three super smart engineers with, you know, not 100 million but let's say 10 million, could actually build something truly substantial?

Look, the way this works is: we're going to tell you it's totally hopeless to compete with us on training foundation models, and you shouldn't try. And it's your job to try anyway. And I believe both of those things. I do think it is pretty hopeless.

Ajai Chowdhry and I founded HCL. The question I want to ask is: Ray Kurzweil and others have been talking about achieving the singularity in, let's say, the year 2045. With the kind of exponential growth your products are driving, is it going to be much earlier than 2045, and what's your estimate? You want to go first?

I can take that question, maybe, for Sam. I'd say timelines are something we discuss a lot at OpenAI; different people tend to have different timelines for when they think that'll happen. The thing you're mentioning around the singularity, though, and some of these more existential risks, and existential opportunities even, perhaps: where we are right now, and what we really need to think about, is how do we really measure it, how do we really evaluate it? We're in a place where we don't even have that right now, so I think that's the starting point, where we can maybe start to get some more structure around these abstract ideals. In terms of timelines, people can have their own estimates, and those can really vary a lot.

I think we're getting close enough that the definition of the term really matters, and people do have very different definitions. What I would say is: we need to plan for a world in which, 10 years from now, we have something that is a very meaningful contribution to the total cognitive ability of human civilization. Maybe as much, maybe more, maybe less, but an important fraction. And, you know, singularity or not, that is a very different world.

Hi. Okay,
go ahead.

Yeah, I'm a creative professional. I was just thinking while you were talking, and a lot of new thoughts are coming to my head. One of the questions I have is: there is something called creative satisfaction for the individual, with the creation, the tactile sense of it. If the distance and degree of separation start increasing, and the tool becomes so overpowering that it takes away the sense of belonging to the product, and this can happen to any job, actually, then the one who is doing the job sees himself or herself as completely distant from it, and the creation is almost creating itself. So what happens to this individual satisfaction, which is related to jobs, or creation, or any of these things? Would it become self-sufficient in its own way, and the tool overpowers you?

I think what happens when you give people better tools is they do better things, they do more impressive things; the floor lifts up, and the expectations lift up. I feel very lucky, and very grateful to all of humanity that's come before me. Everyone's built the things that I use, and I would not be able to do what I do, and neither would anybody else in this room, without a gigantic tech tree of technology and better and better tools. Without the transistor, without the operating systems people figured out how to build, without all the work that enabled the next step, without airplanes that let me come here and talk to you, none of this would happen. So we build better and better tools, and they abstract more and more, but we still find quite a lot of fulfillment, and we operate at higher and higher levels, and I think that's just going to keep going. Human creativity, the desire for status, fulfillment from work, wanting to contribute useful things back to the world so other people someday get to build on our stuff: that's not going anywhere. The expectations are just going to go up, I think.

Sam, a two-part question; you can choose to answer one or both. After doing AI for so long, what have you
learned about humans, what is your understanding of humans after doing AI? And if you could be in the trenches and build four more companies, what would those companies look like?

What have I learned about humans... I'll think about that one for longer. Sorry, yeah, no, I can go first.

So, one is: I grew up implicitly thinking that intelligence was this really special human thing, somewhat magical, and I now think it's sort of a fundamental property of matter. That's definitely a change to my world view. The history of scientific discovery is that humans are less and less at the center. We used to think the sun rotated around us, and then, if not that, at least we were going to be the center of the galaxy, and there wasn't this big universe, and then multiverse... it really is kind of weird and depressing, and if intelligence isn't special either, then we're just further and further away from main-character energy. But that's all right; that's sort of a nice thing to realize, actually. I feel like I have learned something deep, but I'm having a hard time putting it into words. It's something like: even if humans aren't special in terms of intelligence, we are incredibly important. I won't have the consciousness debate here, but I think there's something strange and very important going on with humans, and I really deeply hope we preserve all of that.

The second part was four other companies I would start. I think I would just pick the verticals I felt I knew best and think about how AI could revolutionize them. So maybe the meta answer is: I should be thinking about how you use AI to make a better, faster AI company.

That's it, right? Yeah, can we get more people? Yeah.

I've got many questions. I didn't want to take over from the incredibly handsome guy in the room, but many questions. One is: I really liked what you said in Abu Dhabi. I've been reading about
your various speeches around the world, and what seemed most important was AI being regulated like atomic energy. Could you explain that more? Because that seems extremely important. Harari has been making negative remarks, or cautions, about AI; mainly, what I've understood is that he says: think of AI in the hands of the worst politician you have in mind; he obviously meant, without naming them, the typical warring dictators. The next question is: are you able to share the conversation you had with the prime minister? And coming to human aspects: I had the great opportunity of meeting Kurzweil at Google, and the most incredible thing he said has stayed with me, so please explain it. I met him eight years ago, and even then he said: we are able, through AI, to make classical music equal to or better than the greatest masters; the frontier will be when we make AI, or robots, capable of love. He was very serious about this, and I haven't read much about it, as if it were his own personal passion. Has this aspect of humanity, which I hope competes with intelligence as an aspect of man, progressed? That AI will make something like a robot capable of greater love than man and dog, man and wife, father and son: is that progressing?

So, on the IAEA: yeah, basically we think, and we're not sure this is the best answer, we're contributing one idea among many good ones to the global conversation, that in the same way we say nuclear materials provide some real danger and some real benefits, but they affect all of us, they affect the globe, let's have a system in place so that we can audit the people doing this, we can license it, and we can have safety tests you have to pass as you train these systems, before you deploy them, with visibility for regulators. I think that's an important idea. We have been very pleasantly surprised by how much enthusiasm there is for it from around the world, and maybe someone has a better idea, which would be great.

I had dinner with Harari a couple of nights ago; we talked about this in Tel Aviv. I do think there are a lot of very sci-fi concerns that are pretty far out there and will probably turn out to be quite wrong, but this idea of misuse, a dictator using it to oppress people, is a very scary thing, and that's not super far away. We spend a lot of time at OpenAI thinking about this; I think we need to build these systems in a way that addresses that risk, and I think it's going to be a very complicated global geopolitical challenge.

The Prime Minister we're going to see tomorrow; the schedule got shifted around, but we're really looking forward to it.

And love: I hope we don't all fall in love with robots; that would be deeply depressing. What I hope happens is that we are all the best versions of ourselves, that we can all figure out how to be better, that these systems can help us as coaches, maybe as therapists in the future, as guides and assistants, and make us more present for each other, so we treat each other better. I really believe that can happen. But the everyone-falls-in-love-with-a-robot thing... I'm okay
with the good classical music; I don't really want that future, personally. What do you think?

On that, I'd also say it fundamentally gets to some questions about what you believe AI can and can't do. I think it's still a debate whether AI models can feel emotions, either now or ever, so that's definitely not settled, and it will again play into how we even measure things like that. So it's definitely an ongoing conversation right now.

Sam, Alessandro from the School of Management; I work in education. So you talked about education, and indeed it's already helping a lot, but I'm looking at it from the other side. You said it will generate a lot of new jobs, we will all be more productive and gather different jobs under ourselves. So what do you think are the skills we should be teaching future managers to manage this generative AI well? Right now we are teaching about leadership, about strategy; would it be something like psychology, design thinking, or a mix of everything?

She probably has a better answer on this one than me.

I'll just add to that: there has been a lot of concern raised about ChatGPT becoming like a therapist for a lot of people, and while it is fine at a very basic level, there have been questions and concerns about that.

Yeah, one thing I'll add, and then I'll take that question as well: in the Democratic Inputs to AI call, we actually asked that as a question: should AI models offer emotional support, or psychological help, to humans? I think that's an open space, and society needs to decide what role AI will play. But to the question: many of the skills you mentioned are unique because they play into this fundamental aspect of people skills, EQ, and long-term planning and long-term strategy. For those three things, it's unclear right now where AI models' growth is, and unclear how and when they're going to get there, so those are perhaps the key things to focus on right now: the aspects that feel fundamentally human, because they rely on EQ, human connection, understanding people. Those might be some good areas to focus on.

Yeah, just one question for Sam. I think in April sometime you had said that OpenAI would start training GPT-5, but you stalled it because of concerns raised by Elon Musk and others.

No. And it's been a few months... No, I didn't say we were going to start training GPT-5, and I didn't say anything in response to Elon. It was
just that there was a letter some people signed. People asked us, are you currently training GPT-5? And we said no, we're not. Again, per my earlier statement, we just say what we're doing; we're not training it, so we said we weren't. That's all. It's not like we had started and then that letter came out and we stopped.

But is it happening anytime soon, the training of...

We have a lot of work to do before we're ready to go start that model. These things take a long time; you can see how long it took us between GPT-3 and GPT-4. It's not like you just push a button and say, okay, today we launched GPT-5. It takes hundreds of people and a lot of research that happens on somewhat unpredictable timelines. So we're working on the new ideas we think we need for it, but we're certainly not close to ready to start.

And where do you stand on Elon and a bunch of others writing letters wanting to stop any sort of progress being made on AI?

I think a better framework is external audits, red-teaming, safety tests. When we finished GPT-4, it took us more than six months until we were ready to release it. Your team did a lot of work on it, and so did many other teams, to be confident we could put something out that was safe. But six months would not have been a magic number; we weren't going to put it out until we were ready, and that was a lot of internal and external work. Future systems may take longer, or they may take less long. What I think matters is a set of safety standards and a process to ensure compliance with them, because otherwise, you stop for six months, and then how do you know whether you made enough safety progress?

But you're not committing to a date for the next...

Yeah, I wish I could tell you, but research doesn't work on a calendar. [Applause]

Questions? Okay.

I'm Raju Kanoria, I'm a businessman. My question to you is: how do you make
ethical choices when it comes to using AI? One example, obviously, is if you are involved in an accident in a self-driving car, or in a situation which can cause an accident. Or, for that matter, what you mentioned about medical science: you can come up with different ideas on how to deal with a disease, but when it comes to ethical choices, how do you think AI will play out in the future?

You want to go first? You want me to go first? All right.

The main thing I would say, and we could talk about any of those specific examples, is that those aren't OpenAI's decisions to make. We really want to figure out a way to engage with society and democratize the decision-making on these trade-offs. The projects we've launched recently, the funding we've provided, are about figuring out a way to get the global value system, the moral preferences of people, to decide what these systems do. We could go off and make those decisions, and it's very interesting to think about them, and we have our own opinions, but it shouldn't be up to OpenAI to decide. One of the things I think is so cool about this technology, different from anything that's come before, is that it can actually learn the collective preferences of the world for decisions like that.

Should we take a few more questions then? Yeah. I have the mic here, should I go? Yeah.

Hi, I'm Swati, I'm the co-founder
of CashKaro; we're in the performance marketing space. Firstly, thank you so much, Sam, for sharing everything so generously and being so transparent. I would personally love to know a little bit more about the company culture you've built as an entrepreneur at OpenAI, because it's not just you who's inspired; it's your entire team that is building and creating something that doesn't exist today. So how did you inspire the others on your team as well?

You should take that one; that's hard for me.

Yeah, I can answer something about the company culture. What you're seeing right now here, the transparency, the setting of this vision and being very open: that's something we actually have at OpenAI. Something that's really good about the company is how all teams can come together and really work together on big ideas. That requires collaboration, sharing ideas, and not being territorial about work. That's what the company is really good at, and it's something that Sam and others have set from the get-go. Something else that's really good about the company is that no idea is a bad idea: no matter how crazy your idea might be, or how far out, or how ridiculous it may sound in the beginning, people always hear you out, people always engage. So the appetite for always having a discussion is also something that's really good about the company.

Two other things I was thinking about. We really care about talent density; I think a lot of companies have talented people, but if you have even a few mediocre people mixed in, they act like neutron absorbers and stuff just goes wrong. So we really try to have extreme talent density, and the trade-off is that we don't have that many people relative to what we do, so we really try to be focused. GPT-4 was a whole-company effort; we could not have gone and done three other things at the same time.

Yeah, somewhere over there. Sure.

Okay, I work for the Government of India; I'm a Secretary to the Government of India, and we like assistance. So I asked ChatGPT at 5 p.m., "I'm meeting Sam, what should I ask him?", and the responses I got were so easy that I think the hypothesis that it is biased is totally confirmed. But jokes apart: there are concerns about energy consumption in large language models, just like the crypto thing. Do you think, for quick adoption in public spaces like the Indian government, lightweight AI is the way forward? Of course,
there are issues around accuracy, so what are your views on lightweight AI? And since you also talked about nuclear fusion, I think energy consumption is at the back of your mind for LLMs.

Look, I think this energy conversation about LLMs has become a real sideshow. The current models are just not consuming a material amount of energy compared to anything else; I don't understand why it's become such a topic of debate. I do think it's important in the long term: if we really keep scaling these models up, they will start to consume lots of energy, but we'll need to be on fusion or renewables or something like that anyway by that point, just to get enough energy. So I don't know how this has become such a meme, but I don't think it is a material factor. You can certainly use small models for some tasks, but on the whole, I think you want to use the best intelligence you can, unless there's a good reason not to. And, you know, we always want to make AI way cheaper, way more available, and way smarter, and fusion will certainly help us do that; that's why we're excited about it.

We have another 10 minutes. From Accel? Okay.

Sorry to come in again; it's a conversation, so I took the liberty. You said AI doesn't want to be human, or isn't competing with humans; perhaps the hint I got is that it won't take over. But AI will take over jobs, because AI will do them better than, say, the truck driver.
So similarly, when you say human: the first thing we know about humans is that it's human to make mistakes; to err is human. So all we have to do is make AI human, with all the qualities of a conversation, except that it does not err. We have always told our beloved, you know, mother, wife, daughter: I love you, but just this one aspect of you really riles me. It's difficult to believe that anybody in this room hasn't said that to a beloved. So this AI robot will displace your most beloved person by having a much better conversation with you, without error, and whatever you find irksome about the lover, you can program it not to make that mistake, and therefore you will get the perfect lover. Do you want that? In technology I'm a neophyte; I haven't been a first mover, I'm a late follower, but that's how it'll go.

Look, first of all, I think this question of whether AI is a tool or a creature is something that really confuses people, and it confused me for a while too. But I now think we are very much building a tool, and not a creature, and I'm very happy about that, and I think we should and will continue in that direction.

On the question of mistakes and errors: I believe that creativity, and certainly the creation of new knowledge, is very difficult, maybe impossible, without the ability to make errors and come up with bad ideas. If you made a system that was certain never to say something it wasn't absolutely sure was a fact, I think you would lose some creativity in that process. One of the reasons people don't like ChatGPT is that it hallucinates and makes stuff up, but one of the reasons they do like it is that it can be creative. What we want is a system that can be creative when you want, which means sometimes being wrong, or saying something it's not sure about, or experimenting with a new idea, and then when you want accuracy, you get accuracy. So I think there's something there.

And, you know, if people want, and some people clearly do, to chat with the perfect companionship bot that never upsets you and doesn't do that one thing that irks you, you can have that, and I think it'll be deeply unfulfilling, and a sort of hard thing to feel loved by. I think there's something about watching someone screw up and grow, and express their imperfections, that is a very deep part of love as I understand it. And I think humans care about
other humans in a very deep way, so that perfect-lover chatbot doesn't sound so compelling to me. Want to add anything? All right.

Yeah, Sam... okay, I just wanted someone at the back, because they tend to get ignored. All right.

Sam, hi, this is Manish. I'm the CTO... A bit louder. Oh, this is Manish, I'm the CTO of TCW; we're an asset manager in LA. My question is: we're fully expecting ChatGPT to have a monumental impact on the composition of organizations. So as we start planning for the future, as we start working towards the target-state organization, and you can use technology as an example, how best do we create a target-state organization so that we have a soft landing?

So that you have a soft landing... I think it's just rapid adoption of the tools and a tight feedback loop for what to do with them. It's too hard to predict the future; we say this at OpenAI all the time: it's too hard to predict the future, and a tight feedback loop is how we manage through it. We just try something, we observe, we correct, and we do that again and again and again. And I think the world is going to reward adaptability and speed and resilience more than ever before, because the rate of change will be so fast.

So, great to hear your views about AI; I'm curious to hear your views about energy as well. Per what the models suggest, energy consumption and reliance on fossil fuels, as of 20 years ago and today, is largely the same,
and we have this great revolution in AI, which will, you know, hopefully revolutionize a lot of things, but I'm curious to hear your views on Helion, and your view on what the next quote-unquote "ChatGPT for energy" would be. I don't know quite what "ChatGPT for energy" means, but I think if we get fusion to work, and number one, we can have the cost of it be less than one-tenth of current energy, and number two, we can manufacture enough generators for the whole planet in 10 years, then that's great. Now, of course, if you drop the price that much, the demand will go up, I don't know how much, but a lot, so we'll have to make even more. But we'll figure that out as we go, and if we can just start with replacing all base load, that's pretty good. Yeah. You've yourself said it could either go really big or go wrong. What is your biggest fear about AI? I would like to hear his answer. Um, so honestly, my biggest fear right now, and I'll share in the five-year time
span. It is the economic displacement question, and I know my view here might differ from that of many people at the company. I think the economic displacement question... For example, with the Industrial Revolution, when that happened, long term it was great for the world, great for society, a lot of progress, but the 50 years after, the aftermath, were really painful. So I think that's what we need to figure out: how do we manage this transition, how do we make it least painful for society? I think in order to do that, there's a lot of work that governments and people have cut out for themselves. So I think that's what I'd say is one of my biggest fears and worries right now. I have a lot. Um, I guess the thing that I lose the most sleep over is that we have already done something really bad. I don't think we have, but the hypothetical that we, by launching ChatGPT into the world, shot the industry out of a railgun, and we now don't get to have much impact anymore, and there's going to be an acceleration towards making these systems,
which, again, I think will be used for tremendous good, and I think we're going to address all the problems, but maybe there's something in there that was really hard and complicated in a way we didn't understand, and, you know, we've now already kicked this off. Yeah. Srivatsa Krishna from the Indian Administrative Service, great to see you again, Sam. The question I have... I think the smartest move you've made is to go to the regulators and say "regulate us." That's like asking the sun to stop shining, because you're way ahead, and the audit you talk about, of every node, of every server, of every network, will require so much energy it's impossible for any regulator in the world to do. Point number one. Point number two, you often cite the IAEA. Most of the UN bodies are past their expiry date. If the UN had worked, Russia would never have invaded Ukraine. The IAEA, many regard, is not a success but actually a failure, because collective action among nations is very, very hard to bring about. So how did you think of this very smart move to get ahead of government,
because government almost always regulates to the lowest common denominator; it is not designed to do nuance. Thank you. That's a very cynical take, and I really hope you're wrong. And surely the smartest thing we've done is create magical intelligence in a computer, rather than go to Congress and ask for regulation. I think they're, I hope, incomparable in terms of the impact or the impressiveness of them. But I totally disagree. I think the world can come together on important things. I think the UN is in bad shape, for sure. I think the IAEA is deeply imperfect; let's go do something better. But those are the best analogies we have. And to say, like, oh, the governments are just hopeless, so calling for regulation is some sort of 4D chess move, that's just not how we think. This is an existential risk. There are many ways to solve it. If the governments don't get their act together, we will try our hardest to get the companies to cooperate, but we can't control what every company does, and we'd at least like to ask for, like, the dream world, and if we can't have that, we'll get the companies that want to play ball together and do our best. Hi, we can take maybe a couple more. Hi Sam, you know, I heard Elon Musk talk about the origin of OpenAI and his role in it, and the argument he was making, when he was talking about Microsoft's investment in OpenAI, seemed to give the impression that he had an economic claim upon OpenAI of some sort, or that he was heading towards that. Is that completely off the charts? Yeah, I don't really want to
get into an Elon food fight. I like the dude. I think he's totally wrong about this stuff. He can sort of say whatever he wants, but I'm proud of what we're doing, and I think we're going to make a positive contribution to the world, and I'll try to stay above all of that. Hi, I'm from an AI firm called Fractal. I read the paper called "Sparks of AGI," which was written by the Microsoft researchers in March, which is very interesting, and it already shows that GPT-4 shows several clues that it's close to AGI. So my question to you is, what are some of the tests that you have internally to know that you're getting really close? Are there any tests, and can you share how you would test for that? And you already talked about the definitions, and how AGI has several definitions, but what is the definition that you're working with, and what are some of the tests that you have to test that it is close? Yeah, great question. I think this is one of the most important, rarely asked questions,
about what the right evals for an AGI would be. First of all, that was not our paper, and I don't think we're particularly close; I don't think GPT-4 is particularly close. For me, the fundamental thing that GPT-4 can't do at all that an AGI could do is go figure something out: go discover new knowledge, go learn how to solve a new problem it's never seen before, figure out that it needs to go do complicated planning, study some things in a particular order, build something, write some test code. And the tests I'm interested in, the evals, are all around that ability. There are plenty of other things, and you probably have different answers, but that's the one that, for me, would be like, all right, this is an AGI. And the only thing I'll add to that is, we've actually published pretty much every eval we ran on GPT-4, so they're there in the GPT-4 technical report and the system card. Some of the tests that Sam mentioned are currently more qualitative, so they're done by individual researchers running experiments on GPT-4. There's one under the ARC section that we've published, so it should be possible to read about it as well. Hey Sam, Nitin Sharma from Antler. I have a question about AI with respect to Web3. You're also involved with Worldcoin. A while back, people were saying that AI is a force for centralization and Web3 will take it the other way. As someone involved on both sides, I'd love to get your take. I think AI will be a force for decentralization in a very powerful way. I think whenever you can give people pretty democratic access to very powerful tools, it is a force
for decentralization, and we're seeing this already with the API and with ChatGPT. I also had a sort of fear that AI was going to be a big force for centralization. I think, without even being explicit about this, my model was that there was going to be one superintelligence in the sky that, you know, we'd better hope was good and liked us. And now what I think is, it's much more like we all have a bunch of systems that help us be more productive, and, you know, you use AI for one thing, I use it for another; you and I are both way more capable than we were in the old world, but sort of still doing our thing and amplifying our own will. Just one last question here. Yeah. Hey, I'm Kaivalya from Zepto; we're actually a YC and YC Continuity company, so thank you for all of the work you've done before OpenAI as well. You had a conversation recently with Patrick Collison, and within that, the two of you discussed the problem of not enough founders working on high-risk, high-reward,
very capital-intensive, long-horizon problems, you know, those types of companies and those types of problems. You talked briefly about giving a grant to 100 of the smartest people. Someday, I just need to get around to it. I think this is a really good idea. Any other ideas in terms of how to solve for that? Um, I mean, I think there are a lot of things one could do, and I think maybe the most important is just... I want to keep trying to deliver the message, and I'm grateful other people are trying to deliver the message, just talking about the importance of this and the feasibility of it, and that you actually can raise money and you can get people to work on it. And sure, it's a little bit harder to get started, and it takes a long time and people get impatient, but it's the most fun thing to work on. And I think if we as a society can keep reinforcing that message, that it is hard but possible, and maybe less hard than an easy startup, I think that's the way to do it. So we're going to have to wrap this up, but Sam, are you going to be around? Yeah. So Sam's here; if you guys have more questions you can come up to him, he's around for a bit. Yeah, thank you all. Thank you. Thank you very much.